Storytelling with Data – Enertia User Conference 2022

Thank you to everyone who attended my session at the Enertia User Conference in Las Vegas earlier this month! It was a blast meeting everyone, and I can’t wait to see your BI solutions come to life. Big shout out to my dad, Steve Hughes, for introducing me to this amazingly talented group of people, and to Fernando Salazar, for welcoming me into the Enertia community. Below is all the research used in my presentation on Storytelling with Data. Please use this link for the GitHub folder containing the presentation, sample data used, and a sample report that we worked on together. Happy coding!

In 2015, the average company spent $7.4 million (USD) a year on data-driven initiatives (IDG Big Data & Analytics Survey). In 2021, the average company expected to spend $12.3 million (USD) on data initiatives in 2022, and 55% expected their budgets to increase (Data & Analytics Study). Data and analytics budgets have grown by more than 60% over those seven years, yet Foundry found in their 2022 study that “lack of appropriate skill sets is a top challenge” (2022 Study). It’s clear that data is a priority for many organizations, but without proper training on how to use data to deliver insights, that data sits like a box of nails without a hammer.

To turn our millions of dollars’ worth of data into actionable insights, we need to tell a story with it. Why a story? Storytelling has always been a vital way for humans to understand and share the world around them. If numbers were as interesting and memorable as stories, our classic works of art would be the Pythagorean theorem instead of the Iliad and the Odyssey. Listening to a story engages multiple parts of the brain, which is why listening to your great aunt’s stories for hours is exhausting, but it is also why stories are more memorable. Here are the parts of the brain engaged when you hear a story:

  • Wernicke’s area (language)
  • Amygdala (emotions)
  • Mirror neurons (empathy)

The hippocampus (short-term memory storage) is more likely to push stories into long-term memory than raw numbers because stories engage multiple areas of your brain. This is why so many math classes include word problems. Thankfully, visualizing your data doesn’t need to feel like a math test.

There are a few main elements to be successful at data storytelling:

  1. Data
  2. Story
  3. Visualizations

Data powers the visualizations that can communicate your story clearly and memorably. Similar to a picture book, a good report will quickly communicate a story and a great report will help you change that story through actionable insights. So how do we go from numbers to a story? From math class to lit? Through connecting visuals and numbers to elements of a story. Every story contains a few common elements: characters, setting, conflict, and resolution.

For example, imagine you have seen a downward trend in smart car sales in Texas. Your characters would be your previous and current customers in Texas, the setting is the timeline of the downward trend and the customers across the country, the conflict may be that news stations in Texas ran a story about hackers getting into smart cars, and the resolution may be to advertise a security patch through those same local news stations in Texas. There may not be a conflict in your story (maybe you’re exceeding your goals), so that section can be skipped in favor of focusing on what is going well.

Always remember that applying analytical techniques to managerial problems requires both art and science.

Jan Hammond, Harvard Business School Professor

Creating Data-Driven Decisions

Storytelling with data can leave an audience asking questions such as “how do we make our sales go up?” or “what is xyz branch doing to increase profit percentages higher than everyone else?” This is where the resolution portion of the story comes into play. A lot of good reports will “tell the news”; that is, they display the current state of affairs retrospectively. To generate a true return on investment (ROI) from our data, we need to use it to proactively drive decisions.

For example, in 2008 Starbucks closed hundreds of locations. Howard Schultz returned as CEO and declared they would use data to place their stores strategically going forward. Starbucks now consults with an analytics company called Esri to analyze retail locations for various variables that are proven to drive coffee shop traffic and overall success (reference).

The key to building a data-driven culture is to avoid deceiving your audience. Visualizations are a powerful tool, but they can be used to trick the viewer into a distorted understanding of the data. For example, The New York Times ran a graphic in 1983 that showed mandated fuel economy standards rising from 18 to 27.5 miles per gallon, an increase of 53%. The graphic in the article, however, showed an increase of 783%! A good way to build trust in your data is to avoid the Lie Factor.

The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the quantities represented. The Lie Factor = the ratio of the size of an effect shown in the graphic to the size of the effect in the data.

Tufte
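
To put numbers on the example above: the fuel economy standard itself rose from 18 to 27.5 miles per gallon, an increase of about 53%, while the lines in the graphic grew by 783%. That works out to a Lie Factor of roughly 783 / 53 ≈ 14.8, when an honest graphic would land close to 1.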

This all sounds well and good, but how much money can using data instead of your gut actually save you? According to a study by Harvard Business Review, Fortune 1000 executives have seen the most value by aiming to decrease expenses. The second highest value comes from finding new innovation avenues. Decreasing expenses is great: it allows your company to lower its overhead costs of operation and increase its profit margin. However, there is only so much overhead you can eliminate, which is why finding new innovation avenues will deliver the long-term ROI from using big data.

Building a Data-Driven Culture

One of the biggest blockers to using big data effectively is adoption by the larger business. According to HBR, more than half of Americans depend on their gut to make decisions even when there is evidence that disproves their theory; meanwhile, data-driven organizations are three times more likely to report significant improvements in decision-making. There are many ways to become more data-driven, even in your daily life. HBR summarizes a few easy steps to take to begin building a culture of cultivating insights from big data:

  1. Look for patterns everywhere
  2. Tie every decision back to data
  3. Visualize the meaning behind data
  4. Consider furthering your education to learn more data analyst strategies

Thankfully, this approach does not need to be all or nothing. It can be as simple as looking for patterns in your personal life, like spending more money on ice cream deliveries after watching sad movies. A simple cost saving would be to stock up on ice cream in advance and avoid the delivery fees. Finding a new innovation source may look like watching romantic comedies instead of sad movies, or switching to popcorn, which is much more cost-effective.

Additional Resources

SQL Saturday Baton Rouge Recap

Thank you to everyone who came out and saw me present at the SQL Saturday in Baton Rouge! It was a blast to see smiling faces and not be the only one laughing at my jokes, haha. HUGE thank you to the team of volunteers who made this event possible and to the sponsors for enabling such an amazing event.

My presentation, Power BI Meets Programmability, had 58 people in attendance, the largest crowd of the whole weekend! Out of that came many great conversations about a potential shift in IT teams: instead of keeping reporting teams separate from application development, the two can play to each other’s strengths to deliver sustainable and scalable reporting to the end users of your application. As promised, below is all the code used in the session. Thank you again to everyone who attended, and happy coding!

using System;
using Microsoft.AnalysisServices.Tabular;

namespace TOM_Testing_Live
{
        internal class Program
        {
        static void Main(string[] args)
        {
//-------------- Connect to PBI Premium Workspace ---------------- //

            // create the connect string - powerbi://api.powerbi.com/v1.0/myorg/WORKSPACE_NAME
            string workspaceConnection = "powerbi://api.powerbi.com/v1.0/myorg/Happy%20Coding";
            string connectString = $"DataSource={workspaceConnection};";

            // connect to the Power BI workspace referenced in connect string
            Server server = new Server();
            server.Connect(connectString);


//---------------- List out current state before we make changes -------------------//

            // enumerate through datasets in workspace to display their names
            foreach (Database database in server.Databases)
            {
                Console.WriteLine($"ID : {database.ID}, Name : {database.Name}, CompatibilityLevel: {database.CompatibilityLevel}, Last Updated : {database.LastSchemaUpdate}");
            }

            // enumerate through tables in one database (use the database ID from previous step)
            Model model = server.Databases["5d161e6b-697a-49b6-a3e6-7d19b940a8fd"].Model;

            //if you don't specify a database, it will only grab models from the first database in the list
            foreach (Table table in model.Tables)
            {
                Console.WriteLine($"Table : {table.Name} IsHidden? : {table.IsHidden}");
            }

            // Specify a single table in the dataset
            Table table_product = model.Tables["Product"];

            // List out the columns in the product table
            foreach (Column column in table_product.Columns)
            {
                Console.WriteLine($"Columns: {column.Name}");
            }

            // List out the measures in the product table
            foreach (Measure productmeasures in table_product.Measures)
            {
                Console.WriteLine($"Measures: {productmeasures.Name}");
            }

            // List of translations on the model
            foreach (Culture culture in model.Cultures)
            {
                Console.WriteLine($"Existing Culture: {culture.Name}");
            }

            // List out the hierarchies in the product table
            foreach (Hierarchy hierarchy in table_product.Hierarchies)
            {
                Console.WriteLine($"Hierarchies: {hierarchy.Name}, Lineage Tag = {hierarchy.LineageTag}");
            }

            // List out the levels in our category hierarchy
            if (table_product.Hierarchies.ContainsName("Category Hierarchy"))
                {
                Hierarchy hierarchy_category_ = table_product.Hierarchies["Category Hierarchy"];
                foreach (Level level_categoryhierarchy in hierarchy_category_.Levels)
                {
                    Console.WriteLine($"Category Hierarchy Level {level_categoryhierarchy.Ordinal}: {level_categoryhierarchy.Name} Lineage Tag: {level_categoryhierarchy.LineageTag} from {level_categoryhierarchy.Column.Name}");
                }
            }
            else
                {
                Console.WriteLine($"No Category Hierarchy");
                }


        
 //----------------Deleting columns and measures ----------------//
   /*         
           // Deleting a column if it exists
             if (table_product.Columns.ContainsName("Testing"))
             //this looks to see if there is a column already named "Testing"
             {
                 Console.WriteLine($"Column Exists");
                 table_product.Columns.Remove("Testing"); //if the column exists, this will remove it
                 Console.WriteLine($"Column Deleted");
             }
             else
             {
                 Console.WriteLine($"Column Does Not Exist");
             } 

             // Deleting a measure if it exists
             if (table_product.Measures.ContainsName("Test Measure"))
             //this looks to see if there is a measure already named "Test Measure"
             {
                  Console.WriteLine($"Measure Exists");
                  table_product.Measures.Remove("Test Measure"); //if the measure exists, this will remove it
                  Console.WriteLine($"Measure Deleted");
              }
              else
              {
                  Console.WriteLine($"Measure Does Not Exist");
              }

*/
//--------------- Adding columns and measures --------------------//            

              // Adding our column if it doesn't already exist
                if (table_product.Columns.ContainsName("Testing")) //this looks to see if there is a column already named "Testing"
                {
                    Console.WriteLine($"Column Exists");
                    //table_product.Columns.Remove("Testing"); //if the column exists, this will remove it
                    //Console.WriteLine($"Column Deleted");
                   // Column column_testing = new DataColumn() //this will add back the deleted column
                    //{
                  //      Name = "Testing",
                  //      DataType = DataType.String,
                  //      SourceColumn = "Product"
                  //  };
                  //  table_product.Columns.Add(column_testing);
                  //  Console.WriteLine($"Column Re-created!");
                }
                else
                {
                    Column column_testing = new DataColumn() //this will add the column
                    {
                        Name = "Testing",  //name your column for Power BI
                        DataType = DataType.String, //set the data type
                        SourceColumn = "Product" //this must match the name of the column your source 
                    };
                    table_product.Columns.Add(column_testing);
                    Console.WriteLine($"Column Created!");
                }

                //Get the partition sources for each table
                foreach (DataSource datasource in model.DataSources)
                {
                    Console.WriteLine($"Datasource : {datasource.Name}");

                };

                // Add a measure if it doesn't already exist in a specified table called product
                if (table_product.Measures.ContainsName("Test Measure"))
                {
                    Measure measure = table_product.Measures["Test Measure"];
                    measure.Expression = "\"Hello there\""; //you can update an existing measure using this script
                    Console.WriteLine($"Measure Exists");
                }
                else
                {
                    Measure measure = new Measure()
                    {
                        Name = "Test Measure",
                        Expression = "\"Hello World\"" //you can also use DAX here
                    };
                    table_product.Measures.Add(measure);
                    Console.WriteLine($"Measure Added");
                }


//------------------- Manipulating Hierarchies ---------------------//

            //Editing an existing hierarchy originally called Category Hierarchy
            //(look it up by the original name on a first run, or by the new name if the script has already renamed it)
            Hierarchy hierarchy_category = table_product.Hierarchies.ContainsName("Category Hierarchy")
                ? table_product.Hierarchies["Category Hierarchy"]
                : table_product.Hierarchies["Category Hierarchy Rename"];
            hierarchy_category.Name = "Category Hierarchy Rename"; //this renames the hierarchy, note the lineage tag will remain unchanged
            Console.WriteLine($"Category Hierarchy Renamed");
            
            //Editing an existing hierarchy level 
              Level level_Category = hierarchy_category.Levels.FindByLineageTag("ca792793-d3c3-4b5d-9bee-2b46c01833bb");
              Level level_Subcategory = hierarchy_category.Levels.FindByLineageTag("4304e645-bb8f-4d7e-b25c-f629be2110d8");
              {
                  level_Category.Name = "Category";
                  level_Category.Ordinal = 0;
                  level_Subcategory.Name = "Subcategory";
                  level_Subcategory.Ordinal = 1;
                  Console.WriteLine($"Category Hierarchy Levels Renamed & Reordered");
              }

            //Adding a new level to the hierarchy if it doesn't already exist
              if (hierarchy_category.Levels.ContainsName("Model"))
              {
                  Console.WriteLine($"Hierarchy Level Exists");
              }
              else
              {
                  Level level_Model = new Level()
                  {
                      Name = "Model",
                      Ordinal = 2,
                      Column = table_product.Columns.Find("Model")
                  };
                  hierarchy_category.Levels.Add(level_Model);
                  Console.WriteLine($"Hierarchy Level Added");
              }
          
 
             //Add a new hierarchy if it doesn't already exist
               if (table_product.Hierarchies.ContainsName("New Hierarchy"))
               {
                   Console.WriteLine($"New Hierarchy Exists");
               }
               else
               {
                   Hierarchy hiearchy_new = new Hierarchy()
                   {
                       Name = "New Hierarchy",
                   };
                   table_product.Hierarchies.Add(hiearchy_new);
                   Console.WriteLine($"Hierarchy Added");
             //Creating levels to add to the new hierarchy
                   Level level_one = new Level()
                   {
                       Name = "Model",
                       Ordinal = 0,
                       Column = table_product.Columns.Find("Model")
                   };
                   Level level_two = new Level()
                   {
                       Name = "Product",
                       Ordinal = 1,
                       Column = table_product.Columns.Find("Product")
                   };
                   hiearchy_new.Levels.Add(level_one);
                   hiearchy_new.Levels.Add(level_two);
                   Console.WriteLine($"Levels added to new hiearchy");
               };

         
//-------------------------- Translations ------------------------------//

//Translations can be used to rename existing columns without rebuilding the model. This also updates any visuals that use that column.          

            // List of translations on the model
            foreach (Culture culture in model.Cultures)
            {
                Console.WriteLine($"Existing Culture: {culture.Name}");
            }
            // Let's get a list of the existing translations within the en_US culture
            Culture enUsCulture = model.Cultures.Find("en-US");

            foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
            {
                Console.WriteLine($"Translated Object: {objectTranslation.Value}");
            }
            // Narrow down what column within this culture/language you would like to add the translation to
            MetadataObject dataColumn = table_product.Columns.Find("SKU"); //this needs to always be the original column name within the data model.
            ObjectTranslation proposedTranslation = enUsCulture.ObjectTranslations[dataColumn, TranslatedProperty.Caption];

            // Only one translation per entity per culture.
            if (proposedTranslation != null)
            {
               Console.WriteLine($"Translation Exists for this Culture & Column combo");
               enUsCulture.ObjectTranslations.Remove(proposedTranslation); //need to remove the existing translation to overwrite it
               ObjectTranslation overwriteTranslation = new ObjectTranslation()
               {
                   Object = dataColumn,
                   Property = TranslatedProperty.Caption,
                   Value = "Cool Stuff"
               };
               enUsCulture.ObjectTranslations.Add(overwriteTranslation);
            }
            else
            {
               ObjectTranslation newTranslation = new ObjectTranslation()
               {
                   Object = dataColumn,
                   Property = TranslatedProperty.Caption,
                   Value = "Total Rad"
               };
               enUsCulture.ObjectTranslations.Add(newTranslation);
            }             

           // List out the translations to see what they are now that we have run the script    
           foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
           {
               Console.WriteLine($"Final Translated Object: {objectTranslation.Value}");
           }

//------------------- List out end state --------------------------------//

            // List out the columns in the product table one more time to make sure our column is added
            foreach (Column column in table_product.Columns)
            {
                Console.WriteLine($"Columns: {column.Name}");
            }         

            // List out the measures in the product table one more time to make sure our measure is added
            foreach (Measure productmeasures in table_product.Measures)
            {
                Console.WriteLine($"Measures: {productmeasures.Name}");
            }

            // List out the hierarchies in the product table
            foreach (Hierarchy hierarchy in table_product.Hierarchies)
            {
                Console.WriteLine($"Hierarchies: {hierarchy.Name}, Lineage Tag: {hierarchy.LineageTag}");
                foreach (Level level_hierarchy in hierarchy.Levels)
                {
                    Console.WriteLine($"Level {level_hierarchy.Ordinal}: {level_hierarchy.Name}, Lineage Tag: {level_hierarchy.LineageTag} from {level_hierarchy.Column.Name}");
                }
            }

//-------------- Refresh our version of the data model then push/save changes back to the model in the PBI service -------------//
            table_product.RequestRefresh(RefreshType.Full);
            model.RequestRefresh(RefreshType.Full);
            model.SaveChanges(); 
            //make sure this is the last line! Note, this line will not work unless your dataset has proper data source credentials connected in the service

            Console.WriteLine($"Script Complete!");


        }
    }
}

Azure SQL Database Elasticity – Presentation Notes

This blog covers the content and points to the code used to create the demos in my Azure SQL Database Elasticity presentations. As of today, I have presented this at the Minnesota SQL Server User Group (PASSMN) in September 2020 and as a webinar for 3Cloud in October 2020.

Elastic Queries

Elastic queries allow developers to interact with data from multiple databases on the Azure SQL Database platform, including Azure Synapse. Elastic queries are often compared to PolyBase, which is currently implemented in SQL Server 2019 and Azure Synapse. The key difference is that elastic queries only allow you to interact with other Azure SQL Databases, not Hadoop or other database implementations (e.g., Teradata or Oracle). Part of the confusion comes from the fact that the implementations look very similar: both toolsets use external tables in SQL Server to interact with the connected data sources. However, PolyBase requires additional components to run, whereas elastic queries are ready to go without additional setup.

Be aware elastic queries are still in preview. Also, elastic queries are included in the cost of Azure SQL Database in standard and premium tiers.
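
Here is a minimal sketch of what that external-table setup can look like when run from C#. The server name, the database scoped credential (ElasticCred), and the abbreviated column list are placeholders; it assumes the credential and the Fact schema already exist in the head database (WideWorldDW_3 in the demo described below), and the demo code linked later in this post remains the authoritative version.

using System;
using System.Data.SqlClient;

namespace ElasticQuerySketch
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Head database that will hold the external table (WideWorldDW_3 in the demo)
            string headDbConnection = "Server=tcp:YOUR_SERVER.database.windows.net;Database=WideWorldDW_3;User ID=<user>;Password=<password>;";

            // One-time setup, assuming a database scoped credential named ElasticCred already exists
            string createDataSource = @"
                CREATE EXTERNAL DATA SOURCE WideWorldDW_2_Src WITH
                (
                    TYPE = RDBMS,
                    LOCATION = 'YOUR_SERVER.database.windows.net',
                    DATABASE_NAME = 'WideWorldDW_2',
                    CREDENTIAL = ElasticCred
                );";

            // The external table must mirror the remote table's schema (column list abbreviated here);
            // assumes the Fact schema already exists in the head database
            string createExternalTable = @"
                CREATE EXTERNAL TABLE Fact.Purchase
                (
                    [Purchase Key] BIGINT,
                    [Date Key] DATE,
                    [Supplier Key] INT,
                    [Stock Item Key] INT,
                    [Ordered Quantity] INT
                )
                WITH (DATA_SOURCE = WideWorldDW_2_Src);";

            using (SqlConnection conn = new SqlConnection(headDbConnection))
            {
                conn.Open();
                new SqlCommand(createDataSource, conn).ExecuteNonQuery();
                new SqlCommand(createExternalTable, conn).ExecuteNonQuery();

                // Once the external table exists, it can be queried like any local table
                SqlCommand query = new SqlCommand(
                    "SELECT TOP 10 [Stock Item Key], SUM([Ordered Quantity]) AS TotalOrdered FROM Fact.Purchase GROUP BY [Stock Item Key];",
                    conn);
                using (SqlDataReader reader = query.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine($"Stock Item {reader.GetInt32(0)} : {reader.GetInt32(1)}");
                    }
                }
            }
        }
    }
}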

Elastic Query Strategies

Elastic queries support three key concepts that will influence how you implement the feature:

  1. Vertical partitioning. This concept keeps complete tables in separate databases. It could be a shared date table or the dimensions in a data warehouse solution. Vertical partitioning is one method to scale out data solutions on Azure SQL Database.
  2. Horizontal partitioning or sharding. Whereas vertical partitioning keeps tables whole, horizontal partitioning shards, or spreads, the data from a single table across multiple Azure SQL Databases. This is the most complex type of partitioning as it requires a shard map, and it is typically implemented with .NET or Java applications (a minimal .NET routing sketch follows this list).
  3. Data virtualization. This concept is a mix of the partitioning solutions used to achieve the goal of virtualizing the data. The idea with data virtualization is that we can use a single Azure SQL Database to interact with data from multiple databases. While the concept is currently limited to Azure SQL Databases, it is an area to watch for improvements as the product matures.
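
Since horizontal partitioning leans on application code, here is a minimal sketch of data-dependent routing using the Elastic Database client library (the Microsoft.Azure.SqlDatabase.ElasticScale.Client NuGet package). The shard map name, key value, table, and connection strings are illustrative only, and the shard map itself is assumed to have been created and populated ahead of time.

using System;
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

namespace ShardRoutingSketch
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Database that stores the shard map metadata
            string shardMapManagerDb = "Server=tcp:YOUR_SERVER.database.windows.net;Database=ShardMapManagerDb;User ID=<user>;Password=<password>;";

            // Credentials-only connection string used when opening connections to individual shards
            string shardUserCredentials = "User ID=<user>;Password=<password>;";

            // Connect to the shard map manager and load the shard map for our sharded table
            ShardMapManager manager = ShardMapManagerFactory.GetSqlShardMapManager(
                shardMapManagerDb, ShardMapManagerLoadPolicy.Lazy);
            ListShardMap<int> customerShardMap = manager.GetListShardMap<int>("CustomerShardMap");

            int customerId = 42;

            // Data-dependent routing: open a connection to whichever shard holds this key
            using (SqlConnection conn = customerShardMap.OpenConnectionForKey(
                customerId, shardUserCredentials, ConnectionOptions.Validate))
            {
                SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = @id;", conn);
                cmd.Parameters.AddWithValue("@id", customerId);
                Console.WriteLine($"Orders for customer {customerId}: {cmd.ExecuteScalar()}");
            }
        }
    }
}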

Elastic Query Demo

The demo used in the presentations is configured as shown here:

Three S1 Azure SQL Databases on the same Azure SQL Server. I used ADF (Azure Data Factory) to move Fact.Purchase to WideWorldDW_2 and the three related dimensions (dimDate, dimStockItem, dimSupplier) to WideWorldDW_3. I then used WideWorldDW_3 to implement the external tables to work with the data. The WideWorldImportersDW-Standard was used as the original restore of the sample database. It is the source of the data but is not used in the demos.

One note on the demo. I did not include the ADF jobs. Use the Copy activity to move the tables to the target databases. You can find more information here.

The demo code to set up the environment can be found here.

Elastic Jobs

Elastic jobs are the alternative to SQL Server Agent jobs in Azure SQL Database. While the Agent is included in Azure SQL Managed Instance, the rest of the platform needed an option to create jobs, and elastic jobs solve that issue. Currently this feature is also in preview and is included with Azure SQL Database. The only additional cost is the dedicated job database that is required to support elastic jobs.

The best comparison is still with SQL Server Agent. Elastic jobs are structured as jobs, which have job steps. The only limitation at the moment is that job steps must be T-SQL. Jobs can be created in the Azure portal, with PowerShell, with REST, or with T-SQL; a rough T-SQL sketch, run from C#, follows.
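
As a rough sketch of the T-SQL route, the job database exposes stored procedures such as jobs.sp_add_job and jobs.sp_add_jobstep. The job name, credential, target group, and the dbo.IndexMaintenance procedure below are illustrative only, and the target group and database scoped credential are assumed to already exist; here the commands are simply run from C# against the job database.

using System;
using System.Data.SqlClient;

namespace ElasticJobSketch
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Connect to the dedicated job database
            string jobDbConnection = "Server=tcp:YOUR_SERVER.database.windows.net;Database=ElasticJobDb;User ID=<user>;Password=<password>;";

            string createJobSql = @"
                EXEC jobs.sp_add_job
                    @job_name = 'NightlyIndexMaintenance',
                    @description = 'Rebuild fragmented indexes on every target database';

                -- Job steps must be T-SQL; this step runs against every member of the target group
                EXEC jobs.sp_add_jobstep
                    @job_name = 'NightlyIndexMaintenance',
                    @command = N'EXEC dbo.IndexMaintenance;', -- hypothetical maintenance procedure on the targets
                    @credential_name = 'JobRunCredential',
                    @target_group_name = 'AllWideWorldDatabases';";

            using (SqlConnection conn = new SqlConnection(jobDbConnection))
            {
                conn.Open();
                new SqlCommand(createJobSql, conn).ExecuteNonQuery();
                Console.WriteLine("Elastic job created.");
            }
        }
    }
}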

Elastic Transactions

One of the key pieces originally missing from the Azure SQL Database rollout was the cross-database transaction support that SQL Server provides with MSDTC. Elastic transactions add this functionality to Azure SQL Database and are built into the platform. This functionality is application driven and currently supported in the latest .NET libraries. While there is no hard limit, Microsoft currently recommends keeping a distributed transaction to 100 databases or fewer due to potential performance issues.
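
Here is a minimal sketch of what that looks like from application code, using TransactionScope from System.Transactions. The connection strings and the dbo.AuditLog table are placeholders; the point is that both connections enlist in one ambient transaction that commits or rolls back across both databases together.

using System;
using System.Data.SqlClient;
using System.Transactions;

namespace ElasticTransactionSketch
{
    internal class Program
    {
        static void Main(string[] args)
        {
            string db1 = "Server=tcp:YOUR_SERVER.database.windows.net;Database=WideWorldDW_2;User ID=<user>;Password=<password>;";
            string db2 = "Server=tcp:YOUR_SERVER.database.windows.net;Database=WideWorldDW_3;User ID=<user>;Password=<password>;";

            using (TransactionScope scope = new TransactionScope())
            {
                using (SqlConnection conn1 = new SqlConnection(db1))
                {
                    conn1.Open(); // enlists in the ambient transaction
                    new SqlCommand("INSERT INTO dbo.AuditLog (Message) VALUES ('step one');", conn1).ExecuteNonQuery();
                }

                using (SqlConnection conn2 = new SqlConnection(db2))
                {
                    conn2.Open(); // the second database joins the same transaction, which the platform promotes to a distributed transaction
                    new SqlCommand("INSERT INTO dbo.AuditLog (Message) VALUES ('step two');", conn2).ExecuteNonQuery();
                }

                // If Complete() is not reached, the work on both databases rolls back together
                scope.Complete();
            }

            Console.WriteLine("Distributed transaction committed.");
        }
    }
}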

There are a few limitations to be aware of:

  • Only supports Azure SQL Databases
  • Only supports .NET transactions
  • Does not support T-SQL Distributed transactions
  • Does not support WCF transactions

Wrap Up

Microsoft continues to improve the functionality in Azure SQL Database. These elastic features are part of that process. While I typically do not have many uses for distributed transactions, we have actively implemented elastic queries and elastic jobs for customers and look to use them more in the future.

Azure SQL Elasticity References

Hopefully you too will be able to use the elastic functionality as you continue to embrace the Azure data platform.

SQL Saturday #796 – Minnesota, 2018


First, many thanks to the SQL Saturday and MNPass team for putting on another great event and letting me participate.

I co-presented with Joshuha Owen (@JoshSQL) in a precon on Modern Enterprise Data Warehousing on Azure. Thanks to those who attended and participated in the conversations around changing the way we implement data warehouse capabilities in Azure. Josh and I will be talking more about this in the future.

Now, those of you who attended my Saturday presentation on Consumption Based Architecture, I wanted to get you the slide deck and reference materials here. Thanks again for attending.

The slide deck from the session is here.


I also have blog post around this topic: Consumption Based Architecture for Modern Data Analytics. Feel free to join the conversation there around this.

ERPs and the Consumption Based Architecture Conversation

One of the key topics that came up during the session related to handling ERPs with minimal change. The key issue surrounding ERP solutions is the data structure in those systems. Whether you work with SAP, Jack Henry, or Dynamics, you have a situation where the data model is very complex and definitely not user friendly. In Consumption Based Architecture, we try to minimize data transformation and reshaping, but ERP solutions are by nature cryptic and complex; by definition, they are not consumable. So in the consumable space, we typically recommend using vendor-supplied solutions such as JHKnow, SAP BW, and so on. These solutions provide a vendor-managed interpretation of the ERP data for reporting and other solutions.

Security in this Architecture

The question was raised during the session of how to secure this architecture, and there is not a simple answer. Each source solution may have implemented security differently. For instance, an Oracle database may use its own user names and not have AD integration. This means that you need to determine how to secure your consumable space. For instance, if you pick Azure Active Directory, you would move data to AAD-compliant structures in Azure such as Azure SQL Database, Azure SQL Data Warehouse, and Azure Databricks. This might mean using a tool like GoldenGate with CDC to update a SQL database to which you can apply security. This will allow you to centralize security for your consumable data. You will need to plan for security in whatever you do and create what you need to support it.

Thanks again everyone for joining us at SQL Saturday.

 

 

Minnesota BI User Group – Powering Up HDInsight with Power BI (December 2015)

On Wednesday, December 16, I presented on this topic at the Minnesota BI User Group.  This session is based on five blog posts that I created in August 2015.

You can find the presentation here: Powering Up HDInsight with Power BI (pdf).

The details can be found in the blog posts noted below:


Setting Up an HDInsight Cluster (No Scripts Required)

Exploring the Microsoft Azure HDInsight Query Console (No Scripting Required)

Uploading Files to an HDInsight Cluster (No Scripting Required)

Using Power BI with HDInsight Part 1: Power Query and Files

Using Power BI with HDInsight Part 2: Power BI Desktop and Hive

My goals for this series

1. Document using Power BI with HDInsight

2. Prove that you can set up an HDInsight Cluster with no scripts

Other References from the Session

Azure: http://azure.microsoft.com/en-us/

Cloud Berry: http://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx

 

Thanks for attending my session.