Last Non-NULL Date in SQL Server

The simplest of requests are often the most difficult to execute. For example, a finance team needs to know every time a customer went 90 days without invoicing over the past 2 years. The simplicity of the ask is deceiving. Tracking differences across multiple dimensions (customer and invoice date in this case) and accounting for NULL values in the changing dimension (i.e., the days a customer did not invoice) appears hopeless without the support of a CRM code change. But have no fear, complicated SQL is here!

Testing Scenario: the business would like you to create a customer attrition report. To do this, you need to find gaps in invoice dates per customer and determine when, and how many, customers go “inactive” and are “reactivated” over the past two years. A customer is deemed “inactive” whenever more than 90 days have passed since their last invoice. This can occur multiple times in one year, so a customer can be “reactivated” multiple times in one year. For example, a customer who invoices on January 2 and not again until June 1 goes inactive in early April (once 90 days have passed without an invoice) and is reactivated on June 1.

Resources Needed:

  1. SQL Server Access to the needed data elements
    • In this scenario, this consists of invoice date by customer. You can swap this out for any other date range or any other unique ID.
  2. Business logic
    • In this scenario, an activation in a year = any time a customer invoices for the first time in a 90-day period. You can swap the customer field for any dimension such as sales rep, carrier, or business segment, and you can swap invoice date for any date field such as creation date, pickup date, paid date, or delivery date.
  3. Start and End dates
  4. Ability to use CTEs/temp tables
    • This really comes into play if you are trying to create a DirectQuery-based report in Power BI or are using any other reporting tool that does not allow temp tables. If you hit this limitation, you will need to leverage a database/code solution instead of the method below.

Notes:

  • If your SQL Server instance is 2022 or later, you will not need the binary MAX workaround and can instead use IGNORE NULLS with LAST_VALUE in the windowed statement (see the alternative line in the final SQL code below).
  • The process below lays out each portion of the final query, but feel free to skip ahead to the end for the final SQL statement if you don’t need each section explained.

Process:

  1. Set up parameters
    • DECLARE @StartDate DATE = '2019-01-01'
      DECLARE @EndDate DATE = GETDATE()
      DECLARE @ActivationRange INT = 90 --denotes how many days can pass between invoice dates before a customer is deemed "inactive".
  2. Create a date/calendar table. Check with your DBAs first to make sure they haven’t already created something similar that you can use; all you need is a list of sequential calendar dates with no gaps.
    • ;WITH cte AS (
      SELECT @StartDate AS myDate
      UNION ALL
      SELECT DATEADD(day,1,myDate) as myDate
      FROM cte
      WHERE DATEADD(day,1,myDate) <= @EndDate
      )
      SELECT myDate 'CalendarDate'
      INTO #Calendar
      FROM cte
      OPTION (MAXRECURSION 0) --this works around the default limit of 100 recursions
  3. If you need to partition by a dimension other than date, such as customer in this scenario, you will need to create a table of that dimension’s values. After this, you’ll create a bridge table that has a row for every combination of date in your range and customer (or other dimension) value.
    • --Customer Table
      SELECT DISTINCT
      DA.AccountsKey
      ,DA.CompanyID
      ,DA.CompanyName
      ,MIN(FSF.InvoiceDateKey) 'FirstInvoiceDate'
      INTO #Companies
      FROM DimAccount DA
      JOIN ShipmentFacts FSF ON FSF.AccountKey = DA.AccountsKey
      WHERE FSF.InvoiceDateKey IS NOT NULL
      GROUP BY
      DA.AccountsKey
      ,DA.CompanyID
      ,DA.CompanyName
    • --Bridge Table that combines both Customer and Date values
      SELECT DISTINCT
      C.CalendarDate
      ,Comp.CompanyID
      ,Comp.CompanyName
      ,MIN(Comp.FirstInvoiceDate) 'FirstInvoiceDate'
      ,CONCAT(C.CalendarDate,Comp.CompanyID) 'ID'
      INTO #Bridge
      FROM #Calendar C, #Companies Comp
      GROUP BY
      C.CalendarDate
      ,Comp.CompanyID
      ,Comp.CompanyName
      ,CONCAT(C.CalendarDate,Comp.CompanyID)
  4. Next, we need to create the unique IDs that combine all the dimensions we want to account for in our “IGNORE NULLS” scenario. In this test case, we create one ID for the actual dates a customer invoiced on and another for all the dates in our range that a customer could have invoiced on. Then we join the two together to grab the last time a customer invoiced and ignore those pesky NULL values. This is the section where having SQL Server 2022 or later will do you a lot of favors (see the code below, and the toy example after this list).
    • --Actual Invoiced Dates by Customer
      SELECT DISTINCT
      FSF.InvoiceDateKey
      ,DA.CompanyName
      ,DA.CompanyID
      ,CONCAT(FSF.InvoiceDateKey,DA.CompanyId) 'ID'
      INTO #ShipmentData
      FROM ShipmentFacts FSF
      JOIN #Companies DA ON DA.AccountsKey = FSF.AccountKey
      WHERE FSF.InvoiceDateKey BETWEEN @StartDate AND @EndDate
    • --Joining together and filling in the NULLs with the previous invoiced date by customer
      SELECT DISTINCT
      C.ID
      ,S.ID 'ShipData'
      ,CAST( SUBSTRING( MAX( CAST (C.ID AS BINARY(4)) + CAST(S.ID AS BINARY(20))) OVER (PARTITION BY C.CompanyID ORDER BY C.ID ROWS UNBOUNDED PRECEDING),5,20) AS varchar) 'PreviousInvoiceDateKey'
      --ALTERNATIVE FOR SQL SERVER 2022 AND LATER--
      --,LAST_VALUE(S.ID) IGNORE NULLS OVER (PARTITION BY C.CompanyID ORDER BY C.ID ROWS UNBOUNDED PRECEDING) 'PreviousInvoiceDateKey'

      INTO #RunningDates
      FROM #Bridge C
      LEFT JOIN #ShipmentData S ON S.ID = C.ID
  5. The rest of the code is based on business logic, so please use it at your discretion and edit it for your own needs.
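
To see the pattern from step 4 in isolation, here is a minimal, self-contained toy example using a hypothetical @Demo table variable. The LAST_VALUE ... IGNORE NULLS form requires SQL Server 2022 or later; on earlier versions, the binary MAX trick shown in step 4 produces the same result.

--Toy illustration of carrying the last non-NULL invoice date forward (SQL Server 2022+)
DECLARE @Demo TABLE (CompanyID INT, CalendarDate DATE, InvoiceDate DATE NULL)
INSERT INTO @Demo VALUES
 (1, '2019-01-01', '2019-01-01')
,(1, '2019-01-02', NULL)
,(1, '2019-01-03', NULL)
,(1, '2019-01-04', '2019-01-04')

SELECT
CompanyID
,CalendarDate
,LAST_VALUE(InvoiceDate) IGNORE NULLS OVER (PARTITION BY CompanyID ORDER BY CalendarDate ROWS UNBOUNDED PRECEDING) 'PreviousInvoiceDate'
FROM @Demo
--returns 2019-01-01 for the first three rows and 2019-01-04 for the last row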

Full SQL Code:

DECLARE @StartDate DATE = '2019-01-01'
DECLARE @EndDate DATE = GETDATE()
DECLARE @ActivationRange INT = 90 --denotes how many days can pass between invoice dates before a customer is deemed "inactive"
--Calendar table: sequential list of dates from @StartDate through @EndDate
;WITH cte AS (
SELECT @StartDate AS myDate
UNION ALL
SELECT DATEADD(day,1,myDate) as myDate
FROM cte
WHERE DATEADD(day,1,myDate) <= @EndDate
)
SELECT myDate 'CalendarDate'
INTO #Calendar
FROM cte
OPTION (MAXRECURSION 0) --works around the default limit of 100 recursions


--Customer table: first invoice date per customer
SELECT DISTINCT
DA.AccountsKey
,DA.CompanyID
,DA.CompanyName
,MIN(FSF.InvoiceDateKey) 'FirstInvoiceDate'
INTO #Companies
FROM DimAccount DA
JOIN ShipmentFacts FSF ON FSF.AccountKey = DA.AccountsKey
WHERE FSF.InvoiceDateKey >= '2000-01-01'
GROUP BY
DA.AccountsKey
,DA.CompanyID
,DA.CompanyName


--Bridge table that combines Customer and Date values
SELECT DISTINCT
C.CalendarDate
,Comp.CompanyID
,Comp.CompanyName
,MIN(Comp.FirstInvoiceDate) 'FirstInvoiceDate'
,CONCAT(C.CalendarDate,Comp.CompanyID) 'ID'
INTO #Bridge
FROM #Calendar C, #Companies Comp
GROUP BY
C.CalendarDate
,Comp.CompanyID
,Comp.CompanyName
,CONCAT(C.CalendarDate,Comp.CompanyID)

--Actual invoiced dates by customer
SELECT DISTINCT
FSF.InvoiceDateKey
,DA.CompanyName
,DA.CompanyID
,CONCAT(FSF.InvoiceDateKey,DA.CompanyId) 'ID'
INTO #ShipmentData
FROM ShipmentFacts FSF
JOIN #Companies DA ON DA.AccountsKey = FSF.AccountKey
WHERE FSF.InvoiceDateKey BETWEEN @StartDate AND @EndDate

--Join the bridge to the actual invoices and fill in the NULLs with the previous invoiced date per customer
SELECT DISTINCT
C.ID
,S.ID 'ShipData'
,CAST( SUBSTRING( MAX( CAST (C.ID AS BINARY(4)) + CAST(S.ID AS BINARY(20))) OVER (PARTITION BY C.CompanyID ORDER BY C.ID ROWS UNBOUNDED PRECEDING),5,20) AS varchar) 'PreviousInvoiceDateKey'
--ALTERNATIVE FOR SQL SERVER 2022 AND LATER--
--,LAST_VALUE(S.ID) IGNORE NULLS OVER (PARTITION BY C.CompanyID ORDER BY C.ID ROWS UNBOUNDED PRECEDING) 'PreviousInvoiceDateKey'
INTO #RunningDates
FROM #Bridge C
LEFT JOIN #ShipmentData S ON S.ID = C.ID


--Split the combined ID back into date and company fields
SELECT DISTINCT
R.ID
,R.ShipData
,R.PreviousInvoiceDateKey
,LEFT(R.PreviousInvoiceDateKey,10) 'PreviousInvoiceDate'
,LEFT(R.ID,10) 'DateKey'
,RIGHT(R.ID,5) 'CompanyId'
,B.FirstInvoiceDate
INTO #ActivationData
FROM #RunningDates R
LEFT JOIN #Bridge B ON B.ID = R.ID

--Flag inactive periods: a customer goes inactive once @ActivationRange days pass since the last invoice
SELECT DISTINCT
A.ID
,A.DateKey
,A.CompanyId
,A.PreviousInvoiceDate
,YEAR(A.DateKey) 'Year'
,YEAR(A.FirstInvoiceDate) 'InitialActivationYear'
,CASE WHEN YEAR(A.DateKey) = YEAR(A.FirstInvoiceDate) THEN 1 ELSE 0 END 'IsActivationYear'
,DATEDIFF(Day,A.PreviousInvoiceDate,A.DateKey) 'DaysSinceInvoice'
,CASE WHEN DATEDIFF(Day,A.PreviousInvoiceDate,A.DateKey) = @ActivationRange THEN 1 ELSE 0 END 'IsInactive'
,CASE WHEN DATEDIFF(Day,A.PreviousInvoiceDate,A.DateKey) = @ActivationRange THEN A.DateKey ELSE NULL END 'InactiveDate'
INTO #ActivationDetails
FROM #ActivationData A

--Summarize activity per customer per year
SELECT DISTINCT
D.Year
,D.CompanyId
,SUM(D.IsInactive) 'InactivatedPeriods'
,MAX(D.IsActivationYear) 'IsFirstActivationYear'
,MAX(D.DaysSinceInvoice) 'BiggestGapInInvoicing (Days)'
,MAX(D.InactiveDate) 'LastInactiveDate'
,MAX(D.PreviousInvoiceDate) 'LastInvoiceDate'
,CASE WHEN MAX(D.InactiveDate) > MAX(D.PreviousInvoiceDate) THEN -1 ELSE 0 END 'NotActiveAtEndOfYear'

--to grab the activations per customer per year follow equation below
-- Activations = [InactivatedPeriods] + [NotActiveAtEndOfYear] + [IsFirstActivationYear] --this part will be done in Power BI
FROM #ActivationDetails D
GROUP BY
D.Year
,D.CompanyId


--Clean up temp tables
DROP TABLE #Calendar
DROP TABLE #Companies
DROP TABLE #Bridge
DROP TABLE #ShipmentData
DROP TABLE #RunningDates
DROP TABLE #ActivationData
DROP TABLE #ActivationDetails
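
If you would rather calculate the activation count in T-SQL instead of Power BI, here is a minimal sketch of the same formula from the comments above. It reads #ActivationDetails, so run it before the DROP TABLE statements; the CTE name Summary is just for illustration.

;WITH Summary AS (
SELECT
D.Year
,D.CompanyId
,SUM(D.IsInactive) 'InactivatedPeriods'
,MAX(D.IsActivationYear) 'IsFirstActivationYear'
,MAX(D.InactiveDate) 'LastInactiveDate'
,MAX(D.PreviousInvoiceDate) 'LastInvoiceDate'
,CASE WHEN MAX(D.InactiveDate) > MAX(D.PreviousInvoiceDate) THEN -1 ELSE 0 END 'NotActiveAtEndOfYear'
FROM #ActivationDetails D
GROUP BY
D.Year
,D.CompanyId
)
SELECT
S.Year
,S.CompanyId
,S.InactivatedPeriods + S.NotActiveAtEndOfYear + S.IsFirstActivationYear 'Activations'
FROM Summary S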

Connecting to Azure Blobs in Power BI

The step-by-step process below walks through connecting to data housed in Azure Blob Storage from Power BI using a SAS token. There are many ways to grab your data from Blob Storage, but this is the most efficient, scalable, and secure way I have found (given some security restrictions from watchful DBAs).

Resources Needed:

  • Base URL for container
  • SAS Token (must have read AND list permissions)
    • Check out the link in resources for a tutorial on generating SAS Tokens.
  • File Path (should end with .csv)
  • Power BI Desktop

Notes:

  • You can skip ahead to the sample M script if you have all your elements. Simply swap out the BaseURL, SASToken, and FilePath and you’re good to go. Otherwise, feel free to walk through the steps below to gain a deeper understanding of the process.
  • Make sure your Base URL ends with a “/”, your SAS Token starts with “?”, and your file path ends with “.csv” (see the example shapes after these notes).
  • Keep the double quotes around each parameter value; this forces Power BI to recognize it as text.
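
For reference, the three parameters typically take the shapes shown below. These are hypothetical placeholders only; substitute your own storage account, container, token, and file path.

// BaseURL  = "https://mystorageaccount.blob.core.windows.net/mycontainer/"
// SASToken = "?sv=2021-06-08&ss=b&srt=co&sp=rl&sig=..."
// FilePath = "Exports/Invoices.csv"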

Process:

  1. In Power BI Desktop, go to Get Data and select the Web option.
  2. Switch to the advanced view and put the base URL in the first box.
  3. Put the SAS token in the second box.
  4. In the third box (click add part to get it), put “&restype=container&comp=list” (this will allow you to list all the blobs in that container).
  5. Expand the Blob column, then filter the Name column on your file path.
  6. Create a custom column to create the entire URL for the file (M code samples are below).
    • FileURL = BaseURL & [Name] & SASToken
  7. Create another custom column to access the web contents of your FileURL column.
    • BinaryURLContents = Web.Contents([FileURL])
  8. Remove all columns except the BinaryURLContents.
  9. Click on the “Binary” value and watch Power BI expand out your CSV file.
  10. Manipulate data from there as needed.

Final M Code:

let
    BaseURL = "BASE_URL_HERE",
    SASToken = "SAS_TOKEN_HERE",
    FilePath = "FILE_NAME_HERE (do not include the portion of the URL already covered by the Base URL)",
    Source = Xml.Tables(Web.Contents(Text.From(BaseURL) & Text.From(SASToken) & "&restype=container&comp=list")),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:ServiceEndpoint", type text}, {"Attribute:ContainerName", type text}}),
    #"Removed Other Columns" = Table.SelectColumns(#"Changed Type",{"Blobs"}),
    #"Expanded Blobs" = Table.ExpandTableColumn(#"Removed Other Columns", "Blobs", {"Blob"}, {"Blob"}),
    #"Expanded Blob" = Table.ExpandTableColumn(#"Expanded Blobs", "Blob", {"Name", "Properties", "OrMetadata"}, {"Name", "Properties", "OrMetadata"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Blob", each ([Name] = Text.From(FilePath))),
    #"Added Custom" = Table.AddColumn(#"Filtered Rows", "FileURL", each BaseURL &  [Name] &  SASToken),
    #"Added Custom1" = Table.AddColumn(#"Added Custom", "BinaryURLContents", each Web.Contents([FileURL])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Added Custom1",{"BinaryURLContents"}),
    BinaryURLContents = #"Removed Other Columns1"{0}[BinaryURLContents],
    #"Imported CSV" = Csv.Document(BinaryURLContents,[Delimiter=",", Columns=24, Encoding=1252, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(#"Imported CSV", [PromoteAllScalars=true])
  in
   #"Promoted Headers"
//Use this query to validate your file path
let
    Source = Xml.Tables(Web.Contents("BASE URL" & "SAS TOKEN" & "&restype=container&comp=list")),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"NextMarker", type text}, {"Attribute:ServiceEndpoint", type text}, {"Attribute:ContainerName", type text}}),
    #"Removed Other Columns" = Table.SelectColumns(#"Changed Type",{"Blobs"}),
    #"Expanded Blobs" = Table.ExpandTableColumn(#"Removed Other Columns", "Blobs", {"Blob"}, {"Blob"}),
    #"Expanded Blob" = Table.ExpandTableColumn(#"Expanded Blobs", "Blob", {"Name", "Properties", "OrMetadata"}, {"Name", "Properties", "OrMetadata"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Blob", each [Name] = "FILE PATH")
in
    #"Filtered Rows"

Additional Resources:

Moving Synapse Databases Between Subscriptions – Practical Guidance

One of the tasks we often do with migration projects is moving large volumes of data. Depending on how you are configured, you may need to do the migration in a development or UAT environment as opposed to a production environment. This is particularly true if you have policies in place on your production subscription that don’t allow the individuals doing the migration and validation tasks to work in that subscription.

Just Copy It… Nope

You can copy an Azure SQL Database using the Azure Portal, PowerShell, the Azure CLI, or T-SQL. However, this functionality is limited to Azure SQL Database and does not work for Azure Synapse databases (a.k.a. SQL pools). As of early 2021, the copy functionality also supports copying databases between subscriptions, but it requires security work to make sure the permissions on the database servers and the networking allow that to happen.

You can find out more about copying Azure SQL Database in this Microsoft Doc.

Just Restore It… Nope

You can restore to the current server or to another server in the same subscription. However, you are unable to restore across subscription boundaries at this time. If you need to move to another server in the current subscription, the process is straightforward: use the restore process in Synapse to restore to the current server under a different name, or restore to a different server in either the current or a different resource group in the same subscription. The restore technique is used in our move process, so details on how to restore a Synapse database are in the next section.

Let’s Move a Synapse Database

The process to move a Synapse database to another subscription requires some planning and pre-work. The first thing you need to do is create a new SQL server in the same subscription as your current Synapse environment. Because you can’t simply create servers, I would recommend adding an Azure SQL Database to the server as a placeholder. An S0 should be sufficient to keep this server in place for what we are doing. DO NOT ADD anything to this server that will not be migrated; this is a temporary holding place for migrating databases. (This also works for other SQL databases, though other options exist that are not the focus of this post.)

Now that you have the migration server created, the next step is to create a restore point. While this is not required (you can use the automatic restore points), creating a user-defined restore point is recommended. A user-defined restore point allows you to choose the state of the database you want to migrate, rather than relying on the automatic points and trying to make sure you pick the right time (in UTC, of course).

Once you have set the restore point, select Restore in the database you want to migrate to open the panel to restore your database.

On the restore page, you have a number of options to complete.

  • Restore point type: Choose User-defined restore points
  • SQL pool name: This is not a big deal. The name is the database name used during the migration process and is not the final name used in the target server. Make sure it is something you know.
  • Restore point: Select the restore point you created for this purpose.
  • Server: Choose the migration server you created as the target.
  • Performance level: This one is more interesting. I typically choose a smaller performance level for this restore. Keep in mind that Azure needs to allocate resources to support the restore. Because this is not a final deployment, smaller may go faster. However, NO SLAs exist for this process. That means your mileage will vary. We have seen restores happen in 30 minutes one day and over 5 hours the next. It will be very dependent on the data center and how busy it is. This time variation must be accounted for in your planning.

The next step is to move the server using the Move operation on the server page. You have the option to move to another resource group or another subscription. In our case, we will choose another subscription. IMPORTANT: You will need Contributor permissions in the target subscription in order to move the server to that subscription.

After you have moved the server to the target subscription, you need to set a restore point for that database on the migration server. Then you can restore that database to the target server. It is very important that you use the naming convention and performance level that you need for this restore, as it is the final step in the process. Once again, the restore process has no Microsoft SLA and, as a result, may take longer than planned. You need to have contingencies in place if you are working in a deployment window or have time restrictions.

Finally, you need to clean up the migration server. I would recommend either scaling down or pausing the Synapse database to keep it as a backup for a while if needed. Once the database is validated on the target server, you can remove the Synapse database (which removes the storage costs). I would recommend keeping this server as your migration server to use in the future. You can use this process to create copies of databases for development, UAT, or similar needs from production instances.

Other Thoughts and Considerations

Here are my final thoughts on this process. First, the fact that Microsoft provides no SLA on the restore process has created issues for us in some cases. We have had to extend deployment windows during production deployments on occasion. My recommendation is to plan for the worst case and finish early if it all comes together on time.

This process works! You can use it with other SQL assets, and you can use it in multiple directions. Keep the migration server around so you can support other processes. If you clean most of it out, the cost of maintaining it is just the S0 SQL Database.

One final thought: this is Azure, so this guidance could change tomorrow. We had been using this process for about 12 months when this was written. I hope this helps some of you move these databases to support your business and development needs.

Power BI: Making Date & Time Keys

Saving the Day from Delay Part 2

Creating DateKey and TimeKey columns can be done with built-in functions in the Power Query editor. Quick call out: if you need the time along with dates, I highly recommend splitting each datetime column into two columns, one date only and one time only. From there, you can use the same process to convert your time to a decimal number and use a time table for your time functions (GitHub link below). Below are some screenshots to walk you through the process.

Let’s say you have a datetime column like my Date column below. To start, I recommend going to the Add Column tab in the query editor and selecting Date Only and then Time Only. This way the two new steps will sit right next to each other in the Applied Steps pane, which will make troubleshooting down the road a lot easier. Don’t forget, you can right click on steps and rename them to help yourself walk through and/or adjust steps in the future.

Time to make our keys! There are a couple of ways to do this, but the easiest is to click on the calendar icon (or the clock icon for time) and select whole number (select decimal for the time-only column). If you’ve worked in Excel, this will look familiar. These whole numbers for dates (and decimals for times) are the same across the two platforms and are what DAX uses in the background to process datetime calculations. A scripted version of these steps is sketched below.
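
If you prefer to script these steps rather than click through the UI, here is a minimal Power Query (M) sketch of the same idea. The one-row demo table, the step names, and the DateTime column name are illustrative assumptions; swap in your own query and column. Int64.Type corresponds to the whole-number type for the DateKey, and type number to the decimal type for the TimeKey.

let
    Source = #table(type table [DateTime = datetime], {{#datetime(2021, 1, 15, 13, 30, 0)}}), //one-row demo table; replace with your own query
    AddDate = Table.AddColumn(Source, "Date", each DateTime.Date([DateTime]), type date),
    AddTime = Table.AddColumn(AddDate, "Time", each DateTime.Time([DateTime]), type time),
    AddDateKey = Table.AddColumn(AddTime, "DateKey", each Number.From([Date]), Int64.Type),
    AddTimeKey = Table.AddColumn(AddDateKey, "TimeKey", each Number.From([Time]), type number)
in
    AddTimeKey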

Screenshot: final Date and DateKey columns
Screenshot: final Time and TimeKey columns

And that’s it! Next post we will look at how to join the date and time tables to your keys in the data model.

External Resources:
https://data-mozart.com/inside-vertipaq-compress-for-success/
https://github.com/AnytsirkGTZ/TimeTable_MCode/blob/main/MQuery%20Time

Power BI: Data Model Optimization

Saving the Day from Delay Part 1

Optimizing your data model can be a daunting task. If you read the intro to this series, you know one of the most efficient and sustainable solutions to a bogged-down data model is to remove native date queries and use a date table. This post will dig into how and why this speeds up performance, both in refreshes and in the Power BI service, and how date and time keys fit in.

The key to optimization is compression. An efficiently compressed data model is a lean, mean, query-running machine. There are two types of compression: horizontal and vertical. Horizontal compression occurs on a row-by-row basis, while vertical compression occurs column by column. Power BI uses the VertiPaq engine, a vertical compression model, to compress data inside the data model. While vertical compression is more CPU intensive, it is also more efficient, as it finds the best option for compression based on the data type in the column (values/whole numbers are the most efficient). Data Mozart does an in-depth look at this process that I highly recommend reviewing if you have more questions (link at the bottom of this post).

Vertical compression is significantly slower on datetime columns than on value columns. DateKeys are your best friend in compressing your data model because they allow you to capture vital date information but store it in a value format (the most optimal format: a whole number). Converting all your primary date fields to a DateKey will allow all calculations using that primary date field to run much faster, as VertiPaq can process the requests more efficiently.

Next post we’ll cover making time and date keys in the Power Query Editor.

External Resources:
https://data-mozart.com/inside-vertipaq-compress-for-success/
https://github.com/AnytsirkGTZ/TimeTable_MCode/blob/main/MQuery%20Time