All posts by Steve

T-SQL Tuesday #87 – Fixing Old Problems with Shiny New Toys: STRING_SPLIT

Thanks to Matt Gordon (@atsqlspeed) for hosting this T-SQL Tuesday.

Splitting Strings in SQL

A problem that has plagued SQL developers through the years is splitting strings. Many techniques have been used as more capabilities were added to SQL Server, including XML data types, recursive CTEs, and even CLR. I have used XML data type methods to solve the problem most often. So, without further ado…

T-SQL Function: STRING_SPLIT

I have previously highlighted this function in a webinar with Pragmatic Works as a Hidden Gem in SQL Server 2016. It was not announced with great fanfare, but once discovered, it solves a very common problem.

Syntax

STRING_SPLIT(string, delimiter)

The STRING_SPLIT function returns a single-column result set. The column name is “value”. The data type is NVARCHAR for strings that are NCHAR or NVARCHAR, and VARCHAR for strings that are CHAR or VARCHAR.

Example

DECLARE @csvString AS VARCHAR(100)
SET @csvString = 'Monday, Tuesday, Wednesday, Thursday, Friday'
SELECT value AS WorkDayOfTheWeek 
FROM STRING_SPLIT (@csvString, ',');

The initial example returns the following results:

value
Monday
 Tuesday
 Wednesday
 Thursday
 Friday

As you can see in the example, the results retain the leading space that was in the original string. The following example trims leading and trailing spaces.

DECLARE @csvString AS VARCHAR(100)
SET @csvString = 'Monday, Tuesday, Wednesday, Thursday, Friday'
SELECT LTRIM(RTRIM(value)) AS WorkDayOfTheWeek 
FROM STRING_SPLIT (@csvString, ',');

The cleaned example returns the following results:

value
Monday
Tuesday
Wednesday
Thursday
Friday

Thanks again, Matt, for this opportunity to share an underrated but really useful shiny new tool in SQL Server 2016.

Consumption Based Architecture for Modern Data Analytics

Throughout many years of working with BI solutions and data warehouse solutions, we have strived to put all the data in one location so it could be easily consumed by reporting and analysis tools from SQL Server Reporting Services to Microsoft Excel. We have followed the dimensional modeling processes promoted by Ralph Kimball and others. These techniques were developed to turn relational data platforms into viable and well-performing reporting platforms. They worked.

Throughout the years, I have built many star schemas, enterprise data warehouses, and reporting databases based on these techniques. However, they were not without their flaws. In the early days, it would take years to create the enterprise data warehouse. When it was done, the business had already moved on. So, we started creating data marts which were departmentally focused. This allowed us to shorten the development lifecycle to more quickly meet the needs of the business. But the speed of business continued to outpace IT’s ability to deliver effective BI solutions when they were needed.

During this same time, the world’s most ubiquitous BI tool, Microsoft Excel, ran more and more businesses, both large and small. Why was Excel so popular? It allowed the user, not IT, to do the analysis and produce results when the business needed them. Self-service BI is not new; we in the industry just refuse to accept Excel as a “real” BI tool. Inevitably, no matter how good your BI or reporting tool is, users want to know, “How do I export this to Excel?” I found it humorous that Microsoft suffered the same problem with Power BI. The ability to extract the data is hugely important to users. Why? So they can do the analysis on their own. They feel empowered with Excel.

In today’s world, we are also witnessing a shift to a more mobile, tech-savvy group of users. As my teenage and young adult children begin to enter the workforce, I still see a disconnect with enterprise BI solutions in most cases. They want the data at their fingertips, easily consumable, so they can answer the question they have now, not in three weeks. That brings us to the architecture I have been promoting for a few years. It’s not new, but I needed a way to talk about modern data and BI solutions that focuses on one of the most significant needs in the business – consumable data.

The Consumption Based Architecture is based on the following key concepts:

  1. Keep the data close to the source
  2. Data interfaces should be easy to use
  3. Modern, in-memory tools make this possible

Keep the data close to the source

In the normal enterprise data warehouse solution, we process data to clean it up, reshape it, and generally make it “better”. However, the reality is that users don’t see it as better if it does not match the source. The only “source of truth” that matters in practice is the one as close to the point of data entry as possible. If that data is wrong, it needs to be corrected there. Why? Because users will always verify that the data warehouse or any BI solution is correct by checking the system of record.

Besides the source of truth issue, any time data is transformed or shaped differently than the source, documentation and maintenance are required. In most cases, documentation is lacking in data warehouse solutions. It’s hard and changes often. Furthermore, the amount of maintenance required to support a traditional data warehouse solution is a problem. When the business needs to change a field or the source changes, how do you measure how long it will take to get that in the data warehouse? Days? Weeks? Months? Years? By the time we have the change in place, tested and ready for use, the business has moved on.

In Consumption Based Architecture, the goal is to move the data as needed and transform it as little as possible. Operational Data Stores, which are replicas or copies of the data from the source systems, are the best mechanism for moving data to a reporting area. Only transform if absolutely necessary. In most cases, transforms will be limited to data type cleanup or, where a field’s use has changed over time, adding a column to clarify the data. Each time you move or transform data, it should be to make the data more consumable.

The primary exception to this rule is dimensional data. Dimensional data has a special place in the architecture. While it is not necessary to transform this data, the overall solution is improved by combining similar dimensions and using natural keys for relationships. For example, if you have a customer record in CRM and shipping data, you can create a “golden record” or conformed dimension which contains both natural keys. This will allow you to cross reference data easily in both solutions. Master data solutions help with this but are not required to be successful in this architecture.
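As a rough illustration, a conformed customer dimension might simply carry the natural keys from each system so either source can be tied back to the golden record. The table and column names below are made up for the sketch, not from any particular system.

-- Hypothetical conformed customer dimension ("golden record") that keeps the
-- natural keys from both source systems so data can be cross-referenced
CREATE TABLE dbo.DimCustomer (
    CustomerKey     INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key for the golden record
    CrmCustomerID   VARCHAR(20)   NOT NULL,         -- natural key from the CRM system
    ShippingAcctNbr VARCHAR(20)   NOT NULL,         -- natural key from the shipping system
    CustomerName    NVARCHAR(200) NOT NULL
);

-- Either source's data can then be joined back to the golden record using the
-- natural key it already carries (dbo.ShippingFact is also hypothetical)
SELECT d.CustomerName, s.ShipDate
FROM dbo.ShippingFact AS s
JOIN dbo.DimCustomer AS d
    ON d.ShippingAcctNbr = s.AccountNumber;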

Data Interfaces Should Be Easy

Having done a lot of work in the application development field where interfaces are used to simplify programming, I think we should have a similar concept in working with data. We have dabbled in the area for some time. I know that I have used views with schemas (or users in Oracle) that were specifically designed to support SQL Server Analysis Services. We used views to support a consistent data set to the cubes for processing and allow changes in the back end data as needed. The views operated as an interface between the data warehouse and the cube.

In a similar fashion, these interfaces need to be identified and used throughout the Consumption Based Architecture. If you are referencing a relational data structure, views continue to make sense. They allow consumers to interact with the data in a known fashion. They are also logical constructs which can be deprecated with a timeline for users to move off of them to the newer versions.
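A minimal sketch of what that might look like, using made-up names: a reporting schema exposes a stable view over the operational copy of the data, and consumers only ever reference the view.

-- The rpt schema acts as the interface layer; the underlying tables can change
-- as long as the view keeps returning the same shape
CREATE SCHEMA rpt;
GO
CREATE VIEW rpt.CustomerOrders
AS
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName,
       o.OrderTotal
FROM dbo.Orders AS o          -- operational data store copy of the source
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
GO

When the structure needs to change, a rpt.CustomerOrders_v2 view can be published alongside the original, giving consumers a deprecation window to move over before the old version is dropped.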

Analytic models such as those found in Power BI, Qlik, and SQL Server Analysis Services create a similar tool for consumers. Usually those models can be used in the tool itself or even in other tools to produce reports and dashboards. Depending on how the model is created, they are often a table-based view of data. For instance, Power BI can turn a folder of files into a table structure for easy consumption in Power BI reports. Power Pivot models created in Excel can be shared in SharePoint and in Power BI. These are just a few examples of using analytic models as interfaces.

The third option is data virtualization. This tends to be fairly expensive. The two that I am aware of are Cisco Data Virtualization and RedHat OpenShift virtualization. The concept of data virtualization is perfect for Consumption Based Architecture; however, I have not seen it used much due to cost implications. Microsoft may be changing the landscape of data virtualization. During PASS Summit 2016, they announced expanding the use of PolyBase to reference other data sources such as Oracle and Teradata. Currently, PolyBase supports Hadoop data, but this change could allow it to become a virtualization tool that is cost effective (included with SQL Server) and simple to use. I will be reviewing these tools in a later post as I get more information on them.
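For a sense of what PolyBase looks like today, here is a rough sketch of the SQL Server 2016 Hadoop scenario. The cluster address, file location, and table definition are all hypothetical; the point is that the external table is queried like any other table while the data stays at the source.

-- External data source pointing at a Hadoop cluster (address is made up)
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.10:8020');

-- File format describing the delimited files stored in Hadoop
CREATE EXTERNAL FILE FORMAT PipeDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- External table: no data is copied, SQL Server reads from Hadoop at query time
CREATE EXTERNAL TABLE dbo.WebClicks (
    ClickDate DATETIME2,
    UserID    INT,
    Url       VARCHAR(500)
)
WITH (LOCATION = '/data/webclicks/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = PipeDelimited);

-- Consumers query it like any other table
SELECT TOP (100) * FROM dbo.WebClicks;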

Modern, In-Memory Tools

The most significant technological improvement in the past few years to make this architecture really possible is in-memory data tools. While this revolution has occurred in more technologies than Microsoft’s, Microsoft tools are what I am most familiar with. In the Microsoft arena, this started with Power Pivot in Excel. The ability to mash up various data sources using in-memory models is awesome.

When Power Pivot came on the scene, I was doing a lot of work with SQL Server Analysis Services cubes. While these cubes provided a great analytic layer, they were very fragile in my experience. Power Pivot, however, allowed us to create better-performing models more quickly. At that point, I knew a shift was coming. Two of the biggest pain points with cube design were tackled – speed of development and ease of use. We could solve problems quicker and easier with Power Pivot.


Since the release of Power Pivot, Microsoft has also improved its overall in-memory solution set by adding SQL Server Analysis Services Tabular Models as well as in-memory OLTP and columnstore functionality in SQL Server. These improvements continue to make it easier than ever to build out consumable models in memory. You can now use columnstore in SQL Server without an analytics model, simplifying your architecture without performance penalties. With the release of SQL Server 2016, Microsoft has created a data engine capable of in-memory OLTP to improve transactional loads while also supporting columnstore indexes for reporting loads in the same database. While your mileage may vary by implementation, it is easy to see that Consumption Based Architectures are best able to take advantage of these advances, making data more easily accessible and consumable for our business users.
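As a simple illustration of that last point, a reporting-friendly columnstore index can be added to an existing transactional table in SQL Server 2016. The table and columns below are made up for the sketch.

-- Updatable nonclustered columnstore index on a hypothetical OLTP table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesOrderDetail
ON dbo.SalesOrderDetail (OrderDate, ProductID, Quantity, UnitPrice);

-- Reporting queries like this can scan the columnstore while inserts and
-- updates continue against the rowstore in the same database
SELECT ProductID, SUM(Quantity * UnitPrice) AS Revenue
FROM dbo.SalesOrderDetail
GROUP BY ProductID;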

What’s Next

Over the next few weeks, I will be digging in on some of the topics that support Consumption Based Architecture. The goal is to help you begin to take advantage of this architecture in your business and build out a flexible, easily consumed data and analytics platform.


Upcoming Topics:

  • Interface Layers
  • Data Dictionaries
  • Dimensional or Master Data
  • Moving and Transforming Data
  • Modeling in MSBI
  • Reporting with Consumption Based Architecture

North Texas SQL Server User Group – September 2016


Thanks for attending my presentation on window functions in T-SQL at the September 15, 2016 meeting. I hope you learned something you can take back and use in your projects or at work. You will find a link to the session and the code I used below. If you have any questions about the session, post them in the comments and I will try to get you the answers.

Slides, Code, and Follow Up Posts

The presentation can be found here: A Window into Your Data

The code was put into a Word document that you can get here: TSQL Window Function Code

This session is also backed by an existing blog series I have written.

T-SQL Window Functions – Part 1- The OVER() Clause

T-SQL Window Functions – Part 2- Ranking Functions

T-SQL Window Functions – Part 3: Aggregate Functions

T-SQL Window Functions – Part 4- Analytic Functions

Microsoft Resources:

Come on out to SQL Saturday 563 on September 24, 2016 for more great content.

 

Power BI Is Finally in the Azure Trust Center

With the most recent announcement of Power BI’s inclusion in the Azure Trust Center, it is a good time to review where we are today with Power BI security and compliance as it relates to various customer needs. I do a lot of work with financial, energy, and medical customers, groups that carry significant compliance and regulatory requirements. I wanted to understand where we are today, and this announcement is significant.

What’s in the Announcement?

One of the primary roadblocks to adopting the Power BI service has been the lack of compliance certifications and concerns around security. Microsoft has been making a number of enterprise-level improvements to the Power BI service and desktop. Power BI now has the following compliance certifications:

(Image: Power BI compliance certifications as of 2016)

This announcement shows Microsoft’s continued commitment to security and compliance in its cloud based products. While Power BI is not yet to the level of Office 365, some key compliance areas are now covered.

I think the most significant compliance certification is HIPAA/HITECH, which removes barriers for the medical industry. As hospitals, insurance companies, and providers scramble to meet reporting demands from their customers and the government, Power BI gives them a flexible reporting and visualization platform to meet those needs. It will empower self-service within those organizations as well as departmental or enterprise collaboration with data. The HIPAA/HITECH certification will allow them to use the platform with more confidence and security.

Beyond medical, more institutions will be able to rely on Power BI in a manner that is compliant and safe. As Microsoft continues this journey with Power BI and its other Azure based offerings, customers will be able to react more quickly to the changing business and regulatory environments with confidence in the security and management of their data.

The Reality – You Are as Secure as You Choose to Be

Even with this significant move by Microsoft, you are still responsible for implementing a secure, compliant solution. Microsoft is merely providing tools that are secure and will comply with regulations if implemented correctly. The key to a secure environment will always be you. The data you use and analyze with Power BI is ultimately your responsibility.

I encourage you to review the following resources in addition to the ones above as you determine your security and compliance within the Power BI product:

 

JSON in SQL Server 2016 – The Good, The Bad, and The Ugly


Microsoft introduced support for JSON data in SQL Server 2016 and in Azure SQL Database. I was excited to see this functionality coming. As an early user of XML when it was introduced, I had high expectations. Microsoft has done some really good work with JSON support, but I find that it is not as comprehensive as I had hoped.

The Good: Functions to Work with JSON Data

SQL Server now has some built-in functions that support working with JSON. These are on par with the XML data type methods. Here is the rundown on what they are and what they do.

OPENJSON

OPENJSON allows us to convert JSON to a tabular format. This function supports two output formats: default and explicit. Default returns key-value pairs, whereas explicit lets us define the output columns and their types. Here is a sample of the syntax that can be used.

-- OPENJSON with an explicit schema: the WITH clause maps JSON properties to typed columns
SELECT Restaurant.RestaurantName, Restaurant.City, Restaurant.SeatsJSON, Seats.*
FROM dbo.Restaurant
CROSS APPLY OPENJSON(Restaurant.SeatsJSON, '$.Seats')
    WITH (SeatNumber  INT         '$."Seat Number"',
          SeatCode    VARCHAR(50) '$.SeatCode',
          TableNumber INT         '$.TableNumber') AS Seats;
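For comparison, the default format (no WITH clause) returns generic key, value, and type columns; here is a quick sketch using an inline JSON string rather than the table above.

DECLARE @json NVARCHAR(MAX) =
    N'{"SeatCode":"SSMSP-1-1","TableType":"Bar","Seat Number":1}';

-- Default mode: one row per property with key/value/type columns
SELECT [key], [value], [type]
FROM OPENJSON(@json);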

FOR JSON

FOR JSON works in reverse, allowing us to convert tabular data into JSON. AUTO automatically formats the results as arrays and nested arrays when joins are used, and you are able to use aliases as object names. PATH allows you to specify the JSON path you want to use for the results.
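A quick sketch of both modes, assuming the dbo.Restaurant table and columns from the OPENJSON example above:

-- AUTO mode shapes the output from the FROM clause automatically
SELECT RestaurantName, City
FROM dbo.Restaurant
FOR JSON AUTO;

-- PATH mode uses dotted aliases to control the nesting explicitly
SELECT RestaurantName AS [Restaurant.Name], City AS [Restaurant.City]
FROM dbo.Restaurant
FOR JSON PATH;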

ISJSON

ISJSON can be used against a string value to determine if it is properly formatted. This can be handy when working with JSON functions or even in a CHECK constraint to make sure the column has properly formatted data.
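For example, the CHECK constraint pattern might look like this, assuming the SeatsJSON column used in the earlier examples:

-- Reject any value that is not well-formed JSON
ALTER TABLE dbo.Restaurant
ADD CONSTRAINT CK_Restaurant_SeatsJSON_IsJson CHECK (ISJSON(SeatsJSON) = 1);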

JSON_VALUE

JSON_VALUE returns a scalar value from a JSON string using a JSON path. The key here is that the value returned is scalar, so working with arrays can be an issue if you cannot identify the position in the array.

JSON_VALUE (SeatsJSON, '$.Seats[0].SeatCode')
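Note that in the default lax mode, pointing JSON_VALUE at an array or object returns NULL; JSON_QUERY, covered next, is the function for non-scalar values. A quick sketch against the same column:

SELECT JSON_VALUE(SeatsJSON, '$.Seats') AS ReturnsNull,          -- NULL: not a scalar
       JSON_VALUE(SeatsJSON, '$.Seats[0].SeatCode') AS FirstSeat -- scalar at a known position
FROM dbo.Restaurant;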

JSON_QUERY

JSON_QUERY returns a JSON formatted array or object using JSON path.

JSON_QUERY (SeatsJSON, '$.Seats')

JSON_MODIFY

JSON_MODIFY allows us to change the value based on the JSON path specified.

JSON_MODIFY(@RestJSON, '$.Restaurant.ZIP', '55337')
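Note that JSON_MODIFY returns the modified string rather than changing anything in place, so the result has to be assigned back, whether to a variable as in the snippet above or to a column in an UPDATE statement.

-- Persist the change by assigning the function's result back
SET @RestJSON = JSON_MODIFY(@RestJSON, '$.Restaurant.ZIP', '55337');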

I know that I highlighted the basics around these functions. I plan to follow up on these later. These functions represent the good parts of the JSON implementation in SQL Server.

The Bad: Not a Datatype

My key disappointment with the implementation is the fact that JSON is not a native data type like XML. At first I did not think this would be an issue, but after working with the new functionality, it feels incomplete. We can add a constraint with the ISJSON function to make sure our data is properly formatted JSON, but with XML that guarantee comes from the data type itself. The biggest miss around the lack of a data type is likely indexing, as I discuss next.

The Ugly: Indexes

This is the worst part of the JSON solution. Because JSON is not a data type, no native indexing is supported. The current recommendation is to create a computed column using the JSON_VALUE function. However, this does not work with arrays, making the indexes of limited value. In a simple set of data, such as the restaurant seats shown below, you cannot index the seats, only the restaurants.

{
    "Restaurants": [
        {
            "Restaurant": {
                "Restaurant ID": 1,
                "RestaurantName": "Sensational Servings MSP",
                "Seats": [
                    {
                        "SeatCode": "SSMSP-1-1",
                        "TableType": "Bar",
                        "Seat Number": 1
                    },
                    {
                        "SeatCode": "SSMSP-1-2",
                        "TableType": "Bar",
                        "Seat Number": 2
                    }
                ]
            }
        },
        {
            "Restaurant": {
                "Restaurant ID": 2,
                "RestaurantName": "Sensational Servings LAS",
                "Seats": [
                    {
                        "SeatCode": "SSLAS-1-101",
                        "TableType": "Bar",
                        "Seat Number": 101
                    },
                    {
                        "SeatCode": "SSLAS-1-102",
                        "TableType": "Bar",
                        "Seat Number": 102
                    }
                ]
            }
        }
    ]
}

So if we are looking for seats with a TableType of “Bar”, we would not be able to use an index without storing the JSON differently, because each restaurant contains an array of seats. With the attribute buried in the array, we are unable to surface every instance of a seat in the index. This will result in a table scan in most cases.

We can add indexes to simple JSON snippets, but complex JSON will result in table scans due to the nature of the functions. You should test your solutions to determine if the index is sufficient to support your query pattern.
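For a scalar property, the computed-column pattern looks something like this. The RestaurantJSON column and path are hypothetical, and it only works because the path points at a single value rather than an array.

-- Computed column exposing one scalar property from the JSON
ALTER TABLE dbo.Restaurant
ADD RestaurantNameJson AS JSON_VALUE(RestaurantJSON, '$.Restaurant.RestaurantName');

-- Standard index on the computed column so equality searches can seek
CREATE INDEX IX_Restaurant_RestaurantNameJson
ON dbo.Restaurant (RestaurantNameJson);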

The Moral of the Story

The JSON functionality is similar to XML data type methods. The lack of real index support will likely cause issues with the functionality at scale. Use the functions to help make JSON more usable in your environment, but be aware of its limitations as well.