Connecting to Azure Blobs in Power BI

The step-by-step process below walks through connecting to data housed in Azure Blob Storage from Power BI using a SAS token. There are many ways to pull your data from Blob Storage, but this was the most efficient, scalable, and secure approach I found (given some security restrictions imposed by watchful DBAs).

Resources Needed:

  • Base URL for container
  • SAS Token (must have read AND list permissions)
    • Check out the link in resources for a tutorial on generating SAS Tokens.
  • File Path (should end with .csv)
  • Power BI Desktop

Notes:

  • You can skip ahead to the sample M script if you have all your elements. Simply swap out the BaseURL, SASToken, and FilePath and you’re good to go. Otherwise, feel free to walk through the steps below to gain a deeper understanding of the process.
  • Make sure your Base URL ends with a “/”, your SAS Token starts with “?”, and your file path ends with “.csv”.
  • Keep the double quotes around each parameter value; this forces Power BI to recognize it as text.

Process:

  1. In Power BI Desktop, go to Get Data and select the Web option.
  2. Switch to the advanced view and put the base URL in the first box.
  3. Put the SAS token in the second box.
  4. In a third box (click Add part to create it), put “&restype=container&comp=list” (this allows you to list all the blobs in that container).
  5. Expand the Blob column down, then filter the Name column on your file path.
  6. Create a custom column to create the entire URL for the file (M code samples are below).
    • FileURL = BaseURL & [Name] & SASToken
  7. Create another custom column to access the web contents of your FileURL column.
    • BinaryURLContents = Web.Contents([FileURL])
  8. Remove all columns except the BinaryURLContents.
  9. Click on the “Binary” value and watch Power BI expand out your CSV file.
  10. Manipulate data from there as needed.

Final M Code:

let
    BaseURL = "BASE_URL_HERE"
    ,SASToken = "SAS_TOKEN_HERE"
    ,FilePath = "FILE_NAME_HERE_(Note do not include section of the URL from Base URL)"
    ,Source = Xml.Tables(Web.Contents(Text.From(BaseURL) &Text.From(SASToken) & "&restype=container&comp=list")),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:ServiceEndpoint", type text}, {"Attribute:ContainerName", type text}}),
    #"Removed Other Columns" = Table.SelectColumns(#"Changed Type",{"Blobs"}),
    #"Expanded Blobs" = Table.ExpandTableColumn(#"Removed Other Columns", "Blobs", {"Blob"}, {"Blob"}),
    #"Expanded Blob" = Table.ExpandTableColumn(#"Expanded Blobs", "Blob", {"Name", "Properties", "OrMetadata"}, {"Name", "Properties", "OrMetadata"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Blob", each ([Name] = Text.From(FilePath))),
    #"Added Custom" = Table.AddColumn(#"Filtered Rows", "FileURL", each BaseURL &  [Name] &  SASToken),
    #"Added Custom1" = Table.AddColumn(#"Added Custom", "BinaryURLContents", each Web.Contents([FileURL])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Added Custom1",{"BinaryURLContents"}),
    BinaryURLContents = #"Removed Other Columns1"{0}[BinaryURLContents],
    #"Imported CSV" = Csv.Document(BinaryURLContents,[Delimiter=",", Columns=24, Encoding=1252, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(#"Imported CSV", [PromoteAllScalars=true])
in
    #"Promoted Headers"
//Use this query to validate your file path
let
    Source = Xml.Tables(Web.Contents("BASE URL" & "SAS TOKEN" & "&restype=container&comp=list")),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"NextMarker", type text}, {"Attribute:ServiceEndpoint", type text}, {"Attribute:ContainerName", type text}}),
    #"Removed Other Columns" = Table.SelectColumns(#"Changed Type",{"Blobs"}),
    #"Expanded Blobs" = Table.ExpandTableColumn(#"Removed Other Columns", "Blobs", {"Blob"}, {"Blob"}),
    #"Expanded Blob" = Table.ExpandTableColumn(#"Expanded Blobs", "Blob", {"Name", "Properties", "OrMetadata"}, {"Name", "Properties", "OrMetadata"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Blob", each [Name] = "FILE PATH")
in
    #"Filtered Rows"

Additional Resources:

Moving Synapse Databases Between Subscriptions – Practical Guidance

One of the tasks we often do with migration projects is moving large volumes of data. Depending on how you are configured, you may need to do the migration project in a development or UAT environment rather than in production. This is particularly true if you have policies in place on your production subscription that don’t allow the individuals doing the migration and validation tasks to work in that subscription.

Just Copy It… Nope

You can copy an Azure SQL Database using the Azure Portal, PowerShell, the Azure CLI, or T-SQL. However, this functionality is limited to Azure SQL Database and does not work for Azure Synapse databases (a.k.a. SQL pools). As of early 2021, the copy functionality also supports copying databases between subscriptions, but it requires security work to make sure the permissions on the database servers and the networking allow that to happen.

You can find out more about copying Azure SQL Database in this Microsoft Doc.

Just Restore It… Nope

You can restore to the current server or to another server in the same subscription. However, you are unable to restore across subscription boundaries at this time. If you need to move to another server in the current subscription, the process is straightforward: you can use the restore process in Synapse to restore to the current server using a different name. You can also restore to a different server in either the current or a different resource group in the same subscription. The restore technique is used in our move process, so details on how to restore a Synapse database are in the next section.

Let’s Move a Synapse Database

The process to move a Synapse database to another subscription requires some planning and pre-work. The first thing you need to do is create a new SQL server in the same subscription as your current Synapse environment. Because you can’t simply create a server on its own, I recommend adding an Azure SQL Database to the server as a placeholder. An S0 should be sufficient to keep this server in place for what we are doing. DO NOT ADD anything to this server that will not be migrated; this is a temporary holding place for migrating databases. (This approach also works for other SQL databases, though other options exist that are not the focus of this post.)
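
If you prefer to script that placeholder instead of using the portal, a minimal T-SQL sketch is shown below. It assumes you are connected to the master database on the new migration server; the database name is a placeholder I made up, and the S0 service objective matches the sizing suggested above.

-- Run against the master database on the migration server
-- Creates a small placeholder database so the server has content until the migration is done
CREATE DATABASE MigrationPlaceholder
(
    EDITION = 'Standard',
    SERVICE_OBJECTIVE = 'S0'
);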

Now that you have the migration server created, the next step is to create a restore point. While this is not strictly required, because you can use the automatic restore points, creating a user-defined restore point is recommended. A user-defined restore point lets you choose the state of the database you want to migrate, rather than relying on the automatic points and trying to make sure you pick the right time (in UTC, of course).

Once you have set the restore point, select Restore on the database you want to migrate to open the restore panel.

On the restore page, you have a number of options to complete.

  • Restore point type: Choose User-defined restore points
  • SQL pool name: This is not a big deal. The name is the database name used during the migration process and is not the final name used in the target server. Make sure it is something you know.
  • Restore point: Select the restore point you created for this purpose.
  • Server: Choose the migration server you created as the target.
  • Performance level: This one is more interesting. I typically choose a smaller performance level for this restore. Keep in mind that Azure needs to allocate resources to support the restore. Because this is not a final deployment, smaller may go faster. However, NO SLAs exist for this process. That means your mileage will vary. We have seen restores happen in 30 minutes one day and over 5 hours the next. It will be very dependent on the data center and how busy it is. This time variation must be accounted for in your planning.

The next step is to move the server using the Move operation on the server page. You have the option to move to another resource group or another subscription. In our case, we will choose another subscription. IMPORTANT: You will need Contributor permissions in the target subscription in order to move the server to that subscription.

After you have moved the server to the target subscription, you need to set a restore point for that database on the migration server. Then you can restore that database to the target server. It is very important that you use the naming convention and performance levels that you need for this restore as it is the final step in the process. Once again the restore process has no Microsoft SLA and as a result may take longer than planned. You need to have contingencies in place if you are working in a deployment window or have time restrictions.

Finally, you need to clean up the migration server. I would recommend either scaling down or pausing the Synapse database there to give you a backup for a while if needed. Once the database is validated on the target server, you can remove the Synapse database, which eliminates its storage costs. I would recommend keeping this server as your migration server to use in the future. You can also use this process to create copies of databases from production instances for development, UAT, or similar needs.
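
If you want to script the scale-down of the copy left on the migration server, one option is the ALTER DATABASE statement below. This is a minimal sketch, not part of the original process: it assumes the restored dedicated SQL pool sits on the logical migration server, that you are connected to that server's master database, and that the pool name shown is a placeholder you will swap for your own. Pausing, by contrast, is done through the portal, PowerShell, or the REST API rather than T-SQL.

-- Run against the master database on the migration server
-- Scales the restored SQL pool down to a small service objective to reduce compute costs
ALTER DATABASE MigratedSynapsePool
MODIFY (SERVICE_OBJECTIVE = 'DW100c');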

Other Thoughts and Considerations

Here are my final thoughts on this process. First, the fact that Microsoft provides no SLA on the restore process has created issues for us in some cases. We have had to extend deployment windows during production deployments on occasion. My recommendation is to plan for the worst case and finish early if everything comes together on time.

This process works! You can use it with other SQL assets, and you can use it in multiple directions. Keep the migration server around so you can support other processes. If you clean most of it out, the only ongoing cost of maintaining it is the S0 SQL Database.

One final thought: this is Azure, so this guidance could change tomorrow. We had been using this process for about 12 months when this was written. I hope this helps some of you move these databases to support your business and development needs.

Azure SQL Database Elasticity – Presentation Notes

This blog covers the content and points to the code used to create the demos in my Azure SQL Database Elasticity presentations. As of today, I have presented this at the Minnesota SQL Server User Group (PASSMN) in September 2020 and as a webinar for 3Cloud in October 2020.

Elastic Queries

Elastic queries allow developers to interact with data from multiple databases on the Azure SQL Database platform, including Synapse. Elastic queries are often confused with PolyBase, which is currently implemented in SQL Server 2019 and Azure Synapse. The key difference is that elastic queries only allow you to interact with other Azure SQL Databases, not Hadoop or other database implementations (e.g. Teradata or Oracle). Part of the confusion comes from the fact that the implementations look very similar: both toolsets use external tables in SQL Server to interact with the connected data sources. However, PolyBase requires additional components to run, whereas elastic queries are ready to go without additional setup.

Be aware elastic queries are still in preview. Also, elastic queries are included in the cost of Azure SQL Database in standard and premium tiers.

Elastic Query Strategies

Elastic queries support three key concepts that will influence how you implement the feature.

  1. Vertical partitioning. This concept uses complete tables in separate databases, such as a shared date table or the dimensions in a data warehouse solution. Vertical partitioning is a method to scale out data solutions and is one way to use Azure SQL Database for larger data solutions.
  2. Horizontal partitioning or sharding. Whereas vertical partitioning keeps tables together, horizontal partitioning shards, or spreads, the data from a single table across multiple Azure SQL Databases. This is the most complex type of partitioning because it requires a shard map, and it is typically implemented with .NET or Java applications.
  3. Data virtualization. This concept is a mix of the partitioning solutions used to achieve the goal of virtualizing the data. The idea with data virtualization is that we can use a single Azure SQL Database to interact with data from multiple databases. While this concept is limited by the restriction to Azure SQL Databases, it is an area to watch for improvements as the product matures.

Elastic Query Demo

The demo used in the presentations is configured as shown here:

Three S1 Azure SQL Databases on the same Azure SQL Server. I used ADF (Azure Data Factory) to move Fact.Purchase to WideWorldDW_2 and the three related dimensions (dimDate, dimStockItem, dimSupplier) to WideWorldDW_3. I then used WideWorldDW_3 to implement the external tables to work with the data. The WideWorldImportersDW-Standard was used as the original restore of the sample database. It is the source of the data but is not used in the demos.
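
To give a sense of what that external table work in WideWorldDW_3 looks like, here is a hedged T-SQL sketch. It is not the presentation's demo code (that is linked below); the server name, credential values, and the abbreviated column list are placeholders, and the column definitions should be aligned with your copy of Fact.Purchase before you run anything.

-- Run in WideWorldDW_3, the database that queries the remote fact table
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticQueryCred
WITH IDENTITY = '<sql login>', SECRET = '<password>';

-- External data source pointing at the database that holds Fact.Purchase
CREATE EXTERNAL DATA SOURCE WideWorldDW_2_Src
WITH
(
    TYPE = RDBMS,
    LOCATION = '<yourserver>.database.windows.net',
    DATABASE_NAME = 'WideWorldDW_2',
    CREDENTIAL = ElasticQueryCred
);

-- External table over the remote fact; column list abbreviated for the sketch
CREATE EXTERNAL TABLE dbo.FactPurchase
(
    [Purchase Key] BIGINT NOT NULL,
    [Date Key] DATE NOT NULL,
    [Supplier Key] INT NOT NULL,
    [Stock Item Key] INT NOT NULL,
    [Ordered Quantity] INT NOT NULL
)
WITH
(
    DATA_SOURCE = WideWorldDW_2_Src,
    SCHEMA_NAME = 'Fact',
    OBJECT_NAME = 'Purchase'
);

-- The remote table can now be queried as if it were local
SELECT COUNT(*) AS PurchaseRows FROM dbo.FactPurchase;

From there, the local dimension tables (dimDate, dimStockItem, dimSupplier) can join to dbo.FactPurchase in ordinary T-SQL.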

One note on the demo. I did not include the ADF jobs. Use the Copy activity to move the tables to the target databases. You can find more information here.

The demo code to set up the environment can be found here.

Elastic Jobs

Elastic jobs are the alternative to SQL Server Agent jobs in Azure SQL Database. While Agent is included in Azure SQL Managed Instance, the rest of the platform needed an option to create jobs; elastic jobs solve that issue. This feature is currently also in preview and is included with Azure SQL Database. The only additional cost is a dedicated job database, which is required to support elastic jobs.

The best comparison is still with SQL Server Agent. Elastic jobs are structured as jobs, which have job steps. The only limitation at the moment is that job steps must be T-SQL. Jobs can be created in the Azure portal, with PowerShell, with the REST API, or with T-SQL.
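
As a rough illustration of the T-SQL route, the sketch below creates a target group, a job, and a single job step from the job database. Everything here is a placeholder I made up for the example (names, server, credentials), and the stored procedure parameters should be checked against the current Elastic Jobs documentation before you rely on them.

-- Run in the elastic job database
-- 1. Define where the job will run: every database on one logical server
EXEC jobs.sp_add_target_group @target_group_name = 'DemoTargets';

EXEC jobs.sp_add_target_group_member
    @target_group_name = 'DemoTargets',
    @target_type = 'SqlServer',
    @server_name = 'yourserver.database.windows.net',
    @refresh_credential_name = 'RefreshCred';  -- database scoped credential used to enumerate databases

-- 2. Create the job and one T-SQL step
EXEC jobs.sp_add_job
    @job_name = 'NightlyStatsUpdate',
    @description = 'Update statistics across all target databases';

EXEC jobs.sp_add_jobstep
    @job_name = 'NightlyStatsUpdate',
    @command = N'EXEC sp_updatestats;',
    @credential_name = 'JobRunCred',           -- database scoped credential used to run the step
    @target_group_name = 'DemoTargets';

-- 3. Start it on demand; schedules can also be attached to the job
EXEC jobs.sp_start_job @job_name = 'NightlyStatsUpdate';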

Elastic Transactions

One of the key pieces originally missing from the Azure SQL Database rollout was cross-database transactions, which were supported in SQL Server with MSDTC. Elastic transactions add this functionality to Azure SQL Database and are built into the platform. This functionality is application driven and currently supported in the latest .NET libraries. While there is no hard limit, Microsoft currently recommends using this only for distributed transactions that span 100 or fewer databases, due to potential performance issues.

There are a few limitations to be aware of:

  • Only supports Azure SQL Databases
  • Only supports .NET transactions
  • Does not support T-SQL Distributed transactions
  • Does not support WCF transactions

Wrap Up

Microsoft continues to improve the functionality in Azure SQL Database. These elastic features are part of that process. While I typically do not have many uses for distributed transactions, we have actively implemented elastic queries and elastic jobs for customers and look to use them more in the future.

Azure SQL Elasticity References

Hopefully you too will be able to use the elastic functionality as you continue to embrace the Azure data platform.

Your First Databricks Notebook

On August 4, 2020, I presented this on the weekly Pragmatic Works webinar series. You can view that presentation here.

As part of that presentation, I committed to giving you access to the Databricks notebooks so you can run through this as well. You can find the notebook on my GitHub. It is stored as a DBC (Databricks archive) file; you will need Databricks to open it.

Two questions were asked during the session that I wanted to handle here. The first was related to connecting to relational databases. The short answer is yes. You can use the JDBC driver to work with SQL Server or Snowflake for instance. Details can be found in the data sources article on the Databricks site.

The next question was related to Databricks’ ability to work with SFTP. While I cannot speak to the specifics, I was able to find the following Spark library that may be able to provide the support you need. To be clear, I have not implemented this myself but wanted to provide a potential resource to help with this implementation. I found this in a Databricks forum and it may work for you: https://github.com/springml/spark-sftp. If one of you finds this useful, feel free to post a comment here for others to refer to.

Thanks again to everyone who was able to attend. We look forward to continuing to work with Databricks more.

Walmart Chose Azure, How About You?

Another major corporation has announced their decision to move to Azure. Perhaps you’ve heard of them? Walmart. Yes, Walmart and Microsoft have announced an ongoing cloud partnership this year. AWS might have been a contender here, but Amazon has put itself in the position of being a direct competitor to Walmart. That is not the real reason Walmart chose Azure, though. In this post, I’ll go over the key reasons why they went with Microsoft Azure. Clay Johnson, Walmart CIO and Executive VP of Global Business Services, stated that Walmart is excited to accelerate their digital transformation using Microsoft and the Azure cloud.

In a joint press release, Microsoft and Walmart CEOs called out 3 key reasons why Walmart chose to use Azure and Microsoft Office 365:

Digital Transformation

We hear about the great concept of digital transformation all the time, and we’ll see it in action with Walmart and Microsoft. They’ll work together to move hundreds of applications to cloud-based solutions. Not only will they lift and shift, but in many cases they’ll be looking for optimization opportunities to transform their business.

Looking at Azure, or want to do more with it? Join us next week for Azure Data Week!

Innovation

Walmart plans to build a global IoT solution on Azure, using advanced technologies to improve their performance in energy conservation and machine learning to drive efficiencies in the supply chain, all to become a better company.

Collaboration and Agility

Using Microsoft 365 and a variety of Azure tools, Walmart is investing in their people to build a more collaborative and productive workforce.

So, as you can see, Walmart put a lot of thought and consideration into their choice to go with Azure. Look at these 3 key areas and at your own business: could moving to the Azure cloud improve your business in an equally dramatic way?

Making the move is not a technology decision; it’s a business decision, made so you can do better business.