T-SQL Window Functions – Part 4: Analytic Functions

16 07 2014

This is a reprint with some revisions of a series I originally published on LessThanDot. You can find the links to the original blogs on my Series page.

In the final installment of my series on SQL window functions, we will explore using analytic functions. Analytic functions were introduced in SQL Server 2012 with the expansion of the OVER clause capabilities. Analytic functions fall into two primary categories: values at a position and percentiles. Four of the functions, LAG, LEAD, FIRST_VALUE and LAST_VALUE, find a row in the partition and return the desired value from that row. CUME_DIST and PERCENT_RANK break the partition into percentiles and return a rank value for each row. PERCENTILE_CONT and PERCENTILE_DISC return the value at the requested percentile for each row. All of the functions and examples in this blog require SQL Server 2012 or later.
Once again, the following CTE will be used as the query in all examples throughout the post:

with CTEOrders as
(select cast(1 as int) as OrderID, cast('3/1/2012' as date) as OrderDate, cast(10.00 as money) as OrderAmt, 'Joe' as CustomerName
union select 2, '3/1/2012', 11.00, 'Sam'
union select 3, '3/2/2012', 10.00, 'Beth'
union select 4, '3/2/2012', 15.00, 'Joe'
union select 5, '3/2/2012', 17.00, 'Sam'
union select 6, '3/3/2012', 12.00, 'Joe'
union select 7, '3/4/2012', 10.00, 'Beth'
union select 8, '3/4/2012', 18.00, 'Sam'
union select 9, '3/4/2012', 12.00, 'Joe'
union select 10, '3/4/2012', 11.00, 'Beth'
union select 11, '3/5/2012', 14.00, 'Sam'
union select 12, '3/6/2012', 17.00, 'Beth'
union select 13, '3/6/2012', 19.00, 'Joe'
union select 14, '3/7/2012', 13.00, 'Beth'
union select 15, '3/7/2012', 16.00, 'Sam'
)
select OrderID
,OrderDate
,OrderAmt
,CustomerName
from CTEOrders;

Position Value Functions: LAG, LEAD, FIRST_VALUE, LAST_VALUE

Who has not needed to reference values from other rows while working with the current row in a report or other query? A prime example is needing the previous order amount to calculate growth or simply show the difference in the results. This has never been easy in SQL Server until now. All of these functions require the use of the OVER clause with an ORDER BY clause. They all use the current row within the partition to find the row at the desired position.

The LAG and LEAD functions allow you to specify the offset, or how many rows to look backward or forward, and they support a default value for cases where the value returned would otherwise be NULL. These functions do not support the use of ROWS or RANGE in the OVER clause. The FIRST_VALUE and LAST_VALUE functions allow you to further define the window frame using ROWS or RANGE if desired.

The following example illustrates all four functions with variations on the parameters and settings.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,LAG(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderID) as PrevOrdAmt
,LEAD(OrderAmt, 2) OVER (PARTITION BY CustomerName ORDER BY OrderID) as NextTwoOrdAmt
,LEAD(OrderDate, 2, '9999-12-31') OVER (PARTITION BY CustomerName ORDER BY OrderID) as NextTwoOrdDtNoNull
,FIRST_VALUE(OrderDate) OVER (ORDER BY OrderID) as FirstOrdDt
,LAST_VALUE(CustomerName) OVER (PARTITION BY OrderDate ORDER BY OrderID) as LastCustToOrdByDay

from CTEOrders

Results (with shortened column names):
ID OrderDate Amt Cust PrevOrdAmt NextTwoAmt NextTwoDt FirstOrd LastCust
1 3/1/2012 10 Joe NULL 12 3/3/2012 3/1/2012 Joe
2 3/1/2012 11 Sam NULL 18 3/4/2012 3/1/2012 Sam
3 3/2/2012 10 Beth NULL 11 3/4/2012 3/1/2012 Beth
4 3/2/2012 15 Joe 10 12 3/4/2012 3/1/2012 Joe
5 3/2/2012 17 Sam 11 14 3/5/2012 3/1/2012 Sam
6 3/3/2012 12 Joe 15 19 3/6/2012 3/1/2012 Joe
7 3/4/2012 10 Beth 10 17 3/6/2012 3/1/2012 Beth
8 3/4/2012 18 Sam 17 16 3/7/2012 3/1/2012 Sam
9 3/4/2012 12 Joe 12 NULL 12/31/9999 3/1/2012 Joe
10 3/4/2012 11 Beth 10 13 3/7/2012 3/1/2012 Beth
11 3/5/2012 14 Sam 18 NULL 12/31/9999 3/1/2012 Sam
12 3/6/2012 17 Beth 11 NULL 12/31/9999 3/1/2012 Beth
13 3/6/2012 19 Joe 12 NULL 12/31/9999 3/1/2012 Joe
14 3/7/2012 13 Beth 17 NULL 12/31/9999 3/1/2012 Beth
15 3/7/2012 16 Sam 14 NULL 12/31/9999 3/1/2012 Sam
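
One gotcha is visible in the LastCustToOrdByDay column above. Because only an ORDER BY was specified, the window frame defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so LAST_VALUE returns the current row's customer rather than the true last customer for the day. A minimal sketch of the fix, extending the frame to the end of the partition:

select OrderID
,OrderDate
,CustomerName
,LAST_VALUE(CustomerName) OVER (PARTITION BY OrderDate ORDER BY OrderID ROWS BETWEEN UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING) as LastCustToOrdByDay
from CTEOrders

With this frame, every order on 3/4/2012 returns Beth, who placed the last order (OrderID 10) that day.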

If you really like subselects, you can mix one in and produce a very creative SQL statement. The following statement uses LAG and a subselect to find the first value in a partition. I am showing this to illustrate more of the capabilities of the function in case you have a solution that requires this level of complexity.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,LAG(OrderAmt, (
select count(*)-1
from CTEOrders c
where c.CustomerName = CTEOrders.CustomerName
and c.OrderID <= CTEOrders.OrderID), 0)
OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID) as FirstOrderAmt
FROM CTEOrders
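
For the record, the same result can be had much more directly with FIRST_VALUE. A minimal sketch against the same CTE, using the same ordering:

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,FIRST_VALUE(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID) as FirstOrderAmt
FROM CTEOrders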

Percentile Functions: CUME_DIST, PERCENT_RANK, PERCENTILE_CONT, PERCENTILE_DISC

As I wrap up my discussion on window functions, the percentile-based functions were the ones I knew the least about. While I had already used the position value functions above, I had not yet needed the percentiles. That meant I had to work with them for a while so I could share their usage and provide some samples for you to use.

The key difference among the four functions has to do with ranks versus values. CUME_DIST and PERCENT_RANK return a ranking value, while PERCENTILE_CONT and PERCENTILE_DISC return data values.

CUME_DIST returns a value that is greater than zero and less than or equal to one (> 0 and <= 1) and represents the percentile group that the value falls into based on the order specified. PERCENT_RANK also returns a value between zero and one (>= 0 and <= 1). However, PERCENT_RANK always represents the first group as 0, whereas CUME_DIST returns that group's percentage of the set. Both functions return 1 for the last percentile group. In both cases, as the ranking percentages move from lowest to highest, each group's percent value includes all of the earlier values in the calculation as well.
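
Under the covers the math is straightforward: CUME_DIST is the count of rows with a value less than or equal to the current row's value divided by the total row count, and PERCENT_RANK is (RANK() - 1) / (total row count - 1). A minimal sketch that reproduces both by hand against the CTE (the 1.0 multiplier avoids integer division):

select OrderID
,OrderAmt
,1.0 * COUNT(*) OVER (ORDER BY OrderAmt) / COUNT(*) OVER () as CumeDistByHand
,1.0 * (RANK() OVER (ORDER BY OrderAmt) - 1) / (COUNT(*) OVER () - 1) as PctRankByHand
FROM CTEOrders

The COUNT(*) OVER (ORDER BY OrderAmt) expression works here because the default RANGE frame includes all peer rows that share the current OrderAmt.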

The following statement shows both of the functions using the default partition to determine the rankings of order amounts within our dataset.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,CUME_DIST() OVER (ORDER BY OrderAmt) CumDist
,PERCENT_RANK() OVER (ORDER BY OrderAmt) PctRank
FROM CTEOrders

Results:
OrderID OrderDate OrderAmt CustomerName CumDist PctRank
1 3/1/2012 10 Joe 0.2 0
3 3/2/2012 10 Beth 0.2 0
7 3/4/2012 10 Beth 0.2 0
2 3/1/2012 11 Sam 0.33333333 0.214285714
10 3/4/2012 11 Beth 0.33333333 0.214285714
6 3/3/2012 12 Joe 0.46666667 0.357142857
9 3/4/2012 12 Joe 0.46666667 0.357142857
14 3/7/2012 13 Beth 0.53333333 0.5
11 3/5/2012 14 Sam 0.6 0.571428571
4 3/2/2012 15 Joe 0.66666667 0.642857143
15 3/7/2012 16 Sam 0.73333333 0.714285714
5 3/2/2012 17 Sam 0.86666667 0.785714286
12 3/6/2012 17 Beth 0.86666667 0.785714286
8 3/4/2012 18 Sam 0.93333333 0.928571429
13 3/6/2012 19 Joe 1 1

The last two functions, PERCENTILE_CONT and PERCENTILE_DISC, return the value at the requested percentile. PERCENTILE_CONT will return the true percentile value whether it exists in the data or not, interpolating between values when necessary. For instance, if the percentile group has the values 10 and 20, it will return 15 for the 0.5 percentile. If PERCENTILE_DISC is applied to the same group, it will return 10: the smallest value in the data whose cumulative distribution reaches the requested percentile. Both functions ignore NULL values, and neither uses the ORDER BY, ROWS, or RANGE clauses alongside the PARTITION BY clause. Instead, the WITHIN GROUP clause is introduced, which must contain an ORDER BY on a single column of a numeric data type. Both functions take a percentile value between 0.0 and 1.0.
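
Before moving to the full dataset, here is a minimal sketch of that 10-and-20 example (the two-row set is hypothetical, not part of our CTE):

with TwoVals as (select cast(10.00 as money) as v union all select 20.00)
select distinct
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY v) OVER () as Cont05 -- 15.0, interpolated
,PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY v) OVER () as Disc05 -- 10.0, an actual value from the set
from TwoVals;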

The following script illustrates several variations. The first two functions return the median of the default partition. The next two return the median value for each day. Finally, the last function uses a percentile of 0 to return the lowest value in the partition. The values segmented by the date partition highlight the key difference between the two functions.

select OrderID as ID
,OrderDate as ODt
,OrderAmt as OAmt
,CustomerName as CName
,PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY OrderAmt) OVER() PerCont05
,PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY OrderAmt) OVER() PerDisc05
,PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY OrderAmt) OVER(PARTITION BY OrderDate) PerContDt
,PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY OrderAmt) OVER(PARTITION BY OrderDate) PerDiscDt
,PERCENTILE_CONT(0) WITHIN GROUP (ORDER BY OrderAmt) OVER() PerCont0
FROM CTEOrders

Results
ID ODt OAmt CName PerCont05 PerDisc05 PerContDt PerDiscDt PerCont0
1 3/1/2012 10 Joe 13 13.00 10.5 10.00 10
2 3/1/2012 11 Sam 13 13.00 10.5 10.00 10
3 3/2/2012 10 Beth 13 13.00 15.0 15.00 10
4 3/2/2012 15 Joe 13 13.00 15.0 15.00 10
5 3/2/2012 17 Sam 13 13.00 15.0 15.00 10
6 3/3/2012 12 Joe 13 13.00 12.0 12.00 10
7 3/4/2012 10 Beth 13 13.00 11.5 11.00 10
10 3/4/2012 11 Beth 13 13.00 11.5 11.00 10
9 3/4/2012 12 Joe 13 13.00 11.5 11.00 10
8 3/4/2012 18 Sam 13 13.00 11.5 11.00 10
11 3/5/2012 14 Sam 13 13.00 14.0 14.00 10
12 3/6/2012 17 Beth 13 13.00 18.0 17.00 10
13 3/6/2012 19 Joe 13 13.00 18.0 17.00 10
14 3/7/2012 13 Beth 13 13.00 14.5 13.00 10
15 3/7/2012 16 Sam 13 13.00 14.5 13.00 10

As I wrap up this post, I have to give a shout out to my daughter, Kristy, who is an honors math student. She helped me get my head around this last group of functions. Her honors math work and some statistical work she had done in science helped provide additional insight into the math behind the functions. (Kristy – you rock!)

Series Wrap Up

I hope this series helps everyone understand the power and flexibility of the window functions available in SQL Server 2012. If you happen to use Oracle, many of these functions or their equivalents are also available in 11g, and they appear to be in 10g as well. I have to admit my first real production usage was with Oracle 11g, but I have since used them with SQL Server 2012. The expanded functionality in SQL Server 2012 is just one more reason to upgrade to the latest version.





T-SQL Window Functions – Part 3: Aggregate Functions

7 07 2014

This is a reprint with some revisions of a series I originally published on LessThanDot. You can find the links to the original blogs on my Series page.

This is part 3 in my series on SQL window functions. In this post, we will explore using aggregation functions with T-SQL windows. SQL Server supports most of the aggregation functions, such as SUM and AVG, in this context, with the exceptions of GROUPING and GROUPING_ID. However, prior to SQL Server 2012 only the PARTITION BY clause was supported, which greatly limited the usability of aggregate window functions. When support for the ORDER BY clause was introduced in SQL Server 2012, more complex business problems such as running totals could be solved without the extensive use of cursors or nested select statements. In my experience, I used to try various ways to get around this limitation, including pushing the data to .NET, which could solve the problem more efficiently. However, that was not always possible when working with reporting. Now that we are able to solve the problem in SQL, complex, low-performing solutions can be replaced with these window functions.

Once again, the following CTE will be used as the query in all examples throughout the post:

with CTEOrders as
(select cast(1 as int) as OrderID, cast('3/1/2012' as date) as OrderDate, cast(10.00 as money) as OrderAmt, 'Joe' as CustomerName
union select 2, '3/1/2012', 11.00, 'Sam'
union select 3, '3/2/2012', 10.00, 'Beth'
union select 4, '3/2/2012', 15.00, 'Joe'
union select 5, '3/2/2012', 17.00, 'Sam'
union select 6, '3/3/2012', 12.00, 'Joe'
union select 7, '3/4/2012', 10.00, 'Beth'
union select 8, '3/4/2012', 18.00, 'Sam'
union select 9, '3/4/2012', 12.00, 'Joe'
union select 10, '3/4/2012', 11.00, 'Beth'
union select 11, '3/5/2012', 14.00, 'Sam'
union select 12, '3/6/2012', 17.00, 'Beth'
union select 13, '3/6/2012', 19.00, 'Joe'
union select 14, '3/7/2012', 13.00, 'Beth'
union select 15, '3/7/2012', 16.00, 'Sam'
)
select OrderID
,OrderDate
,OrderAmt
,CustomerName
from CTEOrders;

Using PARTITION BY with Aggregate Functions

SQL Server 2005 and newer versions support the use of the PARTITION BY clause by itself, which allows for some simple aggregate windows. The following example shows SUM and AVG over different partitions of the data. The third expression calculates each day's percentage of total orders using two COUNT functions.

select CustomerName
,OrderDate
,OrderAmt
,SUM(OrderAmt) OVER (PARTITION BY CustomerName) CustomerTotal
,AVG(OrderAmt) OVER (PARTITION BY OrderDate) AvgDailyAmt
,CAST(COUNT(OrderID) OVER (PARTITION BY OrderDate) as decimal(8,3)) / CAST(COUNT(OrderID) OVER() as decimal(8,3)) PctOfTotalPerDay
from CTEOrders
order by OrderDate;

NOTE: The COUNT aggregate returns an integer value. In order to return a decimal result, the values need to be explicitly converted to decimal types. Otherwise, the integer division would truncate to zero for every row in this sample.

Results

CustomerName OrderDate OrderAmt CustomerTotal AvgDailyAmt PctOfTotalPerDay
Joe 3/1/2012 10 68 10.5 0.133333333
Sam 3/1/2012 11 76 10.5 0.133333333
Sam 3/2/2012 17 76 14 0.2
Joe 3/2/2012 15 68 14 0.2
Beth 3/2/2012 10 61 14 0.2
Joe 3/3/2012 12 68 12 0.066666667
Joe 3/4/2012 12 68 12.75 0.266666667
Beth 3/4/2012 10 61 12.75 0.266666667
Beth 3/4/2012 11 61 12.75 0.266666667
Sam 3/4/2012 18 76 12.75 0.266666667
Sam 3/5/2012 14 76 14 0.066666667
Beth 3/6/2012 17 61 18 0.133333333
Joe 3/6/2012 19 68 18 0.133333333
Beth 3/7/2012 13 61 14.5 0.133333333
Sam 3/7/2012 16 76 14.5 0.133333333
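
As a lighter-weight alternative to the explicit CASTs called out in the NOTE above, multiplying one of the counts by 1.0 promotes the math to decimal. A minimal sketch:

select OrderDate
,1.0 * COUNT(OrderID) OVER (PARTITION BY OrderDate) / COUNT(OrderID) OVER () as PctOfTotalPerDay
from CTEOrders;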

Using Subselects

Subselect statements are supported in SQL Server, but they are harder to optimize in SQL Server than in Oracle. Until window functions were introduced, all of the situations above could be solved with subselects, but performance would degrade as the queries worked with larger sets of data. With the improved functionality in SQL Server 2012, you should not need to use subselects to return row-based aggregations. Besides the performance implications, maintenance will also be much simpler as the SQL becomes more transparent. For reference, here is the subselect syntax to return the same results as above:

select cte.CustomerName
, cte.OrderDate
, cte.OrderAmt
, (select SUM(OrderAmt) from CTEOrders where CustomerName = cte.CustomerName) CustomerTotal
, (select AVG(OrderAmt) from CTEOrders where OrderDate = cte.OrderDate) AvgDailyAmt
, (select cast(COUNT(OrderID) as decimal(8,3)) from CTEOrders where OrderDate = cte.OrderDate) / (select cast(COUNT(OrderID) as decimal(8,3)) from CTEOrders) PctOfTotalPerDay
from CTEOrders cte
order by cte.OrderDate;

While it is possible to produce the same results using subselects, the code is already getting messier, and with data sets larger than what we have here, you would definitely see performance degradation.

Some Thoughts on GROUP BY

While I am digressing, I wanted to highlight some details concerning GROUP BY. One of the biggest difficulties when working with the GROUP BY clause and aggregates is that every column must either be part of the GROUP BY or have an aggregation applied to it. The window functions help solve this problem as well.
In the following examples, the first query returns the sum of the amount by day. This is pretty standard logic when working with aggregated queries in SQL.

select OrderDate
,sum(OrderAmt) as DailyOrderAmt
from CTEOrders
group by OrderDate;

However, if you want to see more details without including them in the aggregation, the following will not produce the desired result.

select OrderDate
,OrderID
,OrderAmt
,sum(OrderAmt) as DailyOrderAmt
from CTEOrders
group by OrderDate
,OrderID
,OrderAmt;

This SQL statement returns each row individually, with the sum at the detail level rather than by day. You can solve this with the subselect pattern above, which is not recommended, or you can use a window function.

select OrderDate
,OrderID
,OrderAmt
,sum(OrderAmt) OVER (PARTITION BY OrderDate) as DailyOrderAmt
from CTEOrders

As you can see here and in previous examples, the OVER clause allows you to manage the grouping based on the context specified in relation to the current row.

One other twist on the GROUP BY clause deserves a mention. First, I need to give credit to Itzik Ben-Gan for calling this to my attention at one of our Minnesota SQL Server User Group meetings. In his usual fashion of showing off T-SQL coolness, he demonstrated an interesting error when using the OVER clause with the GROUP BY clause.

The following will return an error: in a grouped query, the window function is evaluated after the GROUP BY, so the OrderAmt column referenced inside the OVER expression is neither contained in the GROUP BY nor wrapped in an aggregate. Also note that in this example the OVER clause is being evaluated for the entire set of data.

select sum(OrderAmt)
, sum(OrderAmt) over() as TotalOrderAmt
from CTEOrders
group by CustomerName

The expression above returns the following error:
Column 'CTEOrders.OrderAmt' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

The goal of the statement above was to show each customer's total order amount alongside the overall order amount. The following statement resolves the issue because it aggregates the aggregates: the window now sums the amounts that were already summed within each customer group.

select sum(OrderAmt)
, sum(sum(OrderAmt)) over() as TotalOrderAmt
from CTEOrders
group by CustomerName
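
Against our CTE, this grouped query returns one row per customer. A sketch of the expected output:

(no column name) TotalOrderAmt
61.00 205.00
68.00 205.00
76.00 205.00

Beth, Joe, and Sam total 61, 68, and 76 respectively, and the window sum of those grouped sums yields the overall 205 on every row.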

Thanks again to Itzik for bringing this problem and resolution to my attention.

Aggregates with ORDER BY

With the expansion of the OVER clause to include ORDER BY support with aggregates, window functions increased their value substantially. One of the key business problems this allowed us to solve was a running aggregate.

The first example shows how to get a running total by CustomerName based on OrderDate and OrderID.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID) as RunningByCustomer
from CTEOrders
ORDER BY CustomerName, OrderDate;

Results

OrderID OrderDate OrderAmt CustomerName RunningByCustomer
3 3/2/2012 10 Beth 10
7 3/4/2012 10 Beth 20
10 3/4/2012 11 Beth 31
12 3/6/2012 17 Beth 48
14 3/7/2012 13 Beth 61
1 3/1/2012 10 Joe 10
4 3/2/2012 15 Joe 25
6 3/3/2012 12 Joe 37
9 3/4/2012 12 Joe 49
13 3/6/2012 19 Joe 68
2 3/1/2012 11 Sam 11
5 3/2/2012 17 Sam 28
8 3/4/2012 18 Sam 46
11 3/5/2012 14 Sam 60
15 3/7/2012 16 Sam 76

This next example is more creative. It begins to show how powerful the window functions are. In this statement, we are going to return the annual running total aggregated by day. The differentiator here is that we use a DATEPART function in the OVER clause to achieve the desired results.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,SUM(OrderAmt) OVER (PARTITION BY datepart(yyyy, OrderDate) ORDER BY OrderDate) as AnnualRunning
from CTEOrders
ORDER BY OrderDate;

Results

OrderID OrderDate OrderAmt CustomerName AnnualRunning
1 3/1/2012 10 Joe 21
2 3/1/2012 11 Sam 21
3 3/2/2012 10 Beth 63
4 3/2/2012 15 Joe 63
5 3/2/2012 17 Sam 63
6 3/3/2012 12 Joe 75
7 3/4/2012 10 Beth 126
8 3/4/2012 18 Sam 126
9 3/4/2012 12 Joe 126
10 3/4/2012 11 Beth 126
11 3/5/2012 14 Sam 140
12 3/6/2012 17 Beth 176
13 3/6/2012 19 Joe 176
14 3/7/2012 13 Beth 205
15 3/7/2012 16 Sam 205

The ORDER BY clause creates an expanding group within the partition. In the examples above, the partition was the customer. Within each partition, ordered groups based on OrderDate and OrderID are “created”. At each row, the OrderDate and OrderID groups are aggregated up to the current row’s group thus producing the running total. If more than one row has the same order grouping, all of the rows in the group are aggregated into the total as shown in the second example above with the days and years.
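
One detail worth calling out: when an ORDER BY is present without an explicit frame, SQL Server defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. A minimal sketch showing that the implicit and explicit forms return identical running totals:

select OrderID
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID) as ImplicitFrame
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID RANGE BETWEEN UNBOUNDED PRECEDING and CURRENT ROW) as ExplicitFrame
from CTEOrders;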

Aggregates with ROWS

The ROWS clause further restricts the window within the partition by specifying which physical rows to include based on their proximity to the current row. As noted in the first post in the series, ROWS requires the ORDER BY clause, as this determines the orientation of the window.

The following example uses the FOLLOWING keywords to find the next two purchases that the customer made.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID ROWS BETWEEN 1 FOLLOWING and 2 FOLLOWING) as NextTwoAmts
from CTEOrders
order by CustomerName, OrderDate, OrderID;

Results

OrderID OrderDate OrderAmt CustomerName NextTwoAmts
3 3/2/2012 10 Beth 21
7 3/4/2012 10 Beth 28
10 3/4/2012 11 Beth 30
12 3/6/2012 17 Beth 13
14 3/7/2012 13 Beth NULL
1 3/1/2012 10 Joe 27
4 3/2/2012 15 Joe 24
6 3/3/2012 12 Joe 31
9 3/4/2012 12 Joe 19
13 3/6/2012 19 Joe NULL
2 3/1/2012 11 Sam 35
5 3/2/2012 17 Sam 32
8 3/4/2012 18 Sam 30
11 3/5/2012 14 Sam 16
15 3/7/2012 16 Sam NULL

As we noted in the first blog, the last rows in each partition contain only partial values. For example, order 12 contains the sum of only one order (order 14), and order 14 has no rows following it in the partition, so it returns NULL. This must be taken into account when working with the ROWS clause.
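
If a zero is preferable to NULL for those trailing rows, the window aggregate can be wrapped in ISNULL. A minimal sketch:

select OrderID
,CustomerName
,ISNULL(SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID ROWS BETWEEN 1 FOLLOWING and 2 FOLLOWING), 0) as NextTwoAmts
from CTEOrders;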

Aggregates with RANGE

Lastly, adding RANGE to the OVER clause allows you to create aggregates that reach to the beginning or end of the partition. RANGE is commonly used with UNBOUNDED PRECEDING, which goes to the beginning of the partition, and UNBOUNDED FOLLOWING, which goes to the end of the partition. One of the most common uses is to specify the rows from the beginning of the partition to the current row, which allows for aggregations such as year to date.

In the example below, we calculate the average order size over time up to the current row. This could be very effective in a trending report.

select OrderID
,OrderDate
,OrderAmt
,CustomerName
,AVG(OrderAmt) OVER (ORDER BY OrderID RANGE BETWEEN UNBOUNDED PRECEDING and CURRENT ROW) as AvgOrderAmt
from CTEOrders
order by OrderDate;

Results

OrderID OrderDate OrderAmt CustomerName AvgOrderAmt
1 3/1/2012 10 Joe 10
2 3/1/2012 11 Sam 10.5
3 3/2/2012 10 Beth 10.333333
4 3/2/2012 15 Joe 11.5
5 3/2/2012 17 Sam 12.6
6 3/3/2012 12 Joe 12.5
7 3/4/2012 10 Beth 12.142857
8 3/4/2012 18 Sam 12.875
9 3/4/2012 12 Joe 12.777777
10 3/4/2012 11 Beth 12.6
11 3/5/2012 14 Sam 12.727272
12 3/6/2012 17 Beth 13.083333
13 3/6/2012 19 Joe 13.538461
14 3/7/2012 13 Beth 13.5
15 3/7/2012 16 Sam 13.666666

As you can see, the latest version of the OVER clause supports powerful yet simple aggregations which can help in a multitude of reporting and business solutions. Up next, the last blog in the series – Analytic Functions, which are all new in SQL Server 2012.





T-SQL Window Functions – Part 2: Ranking Functions

26 06 2014

This is a reprint with some revisions of a series I originally published on LessThanDot. You can find the links to the original blogs on my Series page.

This is part 2 in my series on SQL window functions. In this post, we will explore using ranking functions. SQL Server supports four ranking functions, available in SQL Server 2005 and later versions. All of these functions require the use of the OVER clause. The following functions are classified as ranking functions: ROW_NUMBER, RANK, DENSE_RANK, and NTILE.

Once again, the following CTE will be used as the query in all examples throughout the post:

with CTEOrders as
(select cast(1 as int) as OrderID, cast('3/1/2012' as date) as OrderDate, cast(10.00 as money) as OrderAmt, 'Joe' as CustomerName
union select 2, '3/1/2012', 11.00, 'Sam'
union select 3, '3/2/2012', 10.00, 'Beth'
union select 4, '3/2/2012', 15.00, 'Joe'
union select 5, '3/2/2012', 17.00, 'Sam'
union select 6, '3/3/2012', 12.00, 'Joe'
union select 7, '3/4/2012', 10.00, 'Beth'
union select 8, '3/4/2012', 18.00, 'Sam'
union select 9, '3/4/2012', 12.00, 'Joe'
union select 10, '3/4/2012', 11.00, 'Beth'
union select 11, '3/5/2012', 14.00, 'Sam'
union select 12, '3/6/2012', 17.00, 'Beth'
union select 13, '3/6/2012', 19.00, 'Joe'
union select 14, '3/7/2012', 13.00, 'Beth'
union select 15, '3/7/2012', 16.00, 'Sam'
)
select OrderID
,OrderDate
,OrderAmt
,CustomerName
from CTEOrders;

ROW_NUMBER

The ROW_NUMBER function returns a row number for each row based on the partition and order specified. This function requires the use of the ORDER BY clause. However, it is often used without the PARTITION BY clause, in which case it numbers the entire result set. If PARTITION BY is used, the row numbering starts over within each partition. The following code shows how both of these work.

select CustomerName
, OrderDate
, OrderAmt
,ROW_NUMBER() OVER(ORDER BY CustomerName) RowNumByCust
,ROW_NUMBER() OVER(PARTITION BY OrderDate ORDER BY CustomerName) RowNumPart
from CTEOrders
order by CustomerName;

RANK and DENSE_RANK

While these are distinct functions with different rules, the difference is easier to understand when they are put side by side. RANK and DENSE_RANK order the rows based on the specified partition and ordering and apply a rank to them. Both functions assign the same rank to "ties"; for example, if rows 3 and 4 have the same value in the partition, they receive the same rank. The difference is how each handles the next rank number in the series. RANK ranks based on row position and skips the numbers consumed by ties. So, if the third and fourth rows tie while the first two rows and the final row are unique, the ranking is as follows: 1, 2, 3, 3, 5. As you can see, 4 is missing. DENSE_RANK keeps the tie as well, but it does not skip any numbers in the sequence. Here is the same example set using DENSE_RANK: 1, 2, 3, 3, 4. As with the ROW_NUMBER function, the ORDER BY clause is required for these functions.

select CustomerName
, OrderDate
, OrderAmt
,RANK() OVER (ORDER BY CustomerName) RankByCust
,DENSE_RANK() OVER (ORDER BY CustomerName) DenseByCust
,RANK() OVER (PARTITION BY CustomerName ORDER BY OrderDate) RankByCustDt
,DENSE_RANK() OVER (PARTITION BY CustomerName ORDER BY OrderDate) DenseByCustDt

from CTEOrders
order by CustomerName;

NTILE

The last of the ranking functions is NTILE. NTILE distributes the data into ordered, ranked groups based on the ORDER BY clause. The number of groups used in the ranking is specified in the function itself. So if you specify four groups to produce a quartile ranking, four ranked values, one through four, will be assigned to the groups based on the order. In many cases, the total number of rows is not evenly divisible by the number of groups. For instance, if the number of groups is 4 but the total number of rows is 39, the first three groups will return 10 rows each and the final group will only return 9 rows. The function always front-loads the results, so the earliest groups get the "extra" rows. If a PARTITION BY clause is used, the NTILE ranking is applied within each partition. As with the other ranking functions, the ORDER BY clause is required to use this function.

select CustomerName
,OrderDate
,OrderAmt
,NTILE(4) OVER (ORDER BY OrderDate) NtileByDt
,NTILE(2) OVER (PARTITION BY OrderDate ORDER BY OrderAmt) NtileByDtAmt
from CTEOrders
order by CustomerName;
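
With our 15 rows, NTILE(4) front-loads the remainder as described above, yielding groups of 4, 4, 4, and 3 rows. A quick sketch to verify the distribution:

select NtileGroup
,count(*) as RowsInGroup
from (select NTILE(4) OVER (ORDER BY OrderDate) as NtileGroup from CTEOrders) t
group by NtileGroup
order by NtileGroup;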

Up next, using aggregate functions with window functions.





T-SQL Window Functions – Part 1: The OVER() Clause

26 06 2014

This is a reprint with some revisions of a series I originally published on LessThanDot. You can find the links to the original blogs on my Series page.

SQL window functions were introduced in SQL Server 2005. At the time, only a small set of functionality was available. Window functions fill a need in the aggregation story for SQL Server: they allow the developer to use row-level aggregations without the penalty of using cursors to accomplish a similar task. Window functions allow you to segment a set of rows and then apply a function to that set of rows. In many cases, you may choose an aggregation function; however, ranking and analytic functions are also available. In this four-part series, I will start by breaking apart the OVER clause, which is the key to understanding window functions in SQL Server. The following posts will expand on each group of window functions which use the OVER clause – ranking, aggregate, and analytic.

Up through SQL Server 2008 R2, only a subset of window functions was supported, focused on ranking and some aggregation functions. With SQL Server 2012, Microsoft greatly expanded its support for window functions in T-SQL, making fairly complex operations such as running totals much simpler to accomplish.

The following CTE will be used as the query in all examples throughout the post:

with CTEOrders as
(select cast(1 as int) as OrderID, cast('3/1/2012' as date) as OrderDate, cast(10.00 as money) as OrderAmt, 'Joe' as CustomerName
union select 2, '3/1/2012', 11.00, 'Sam'
union select 3, '3/2/2012', 10.00, 'Beth'
union select 4, '3/2/2012', 15.00, 'Joe'
union select 5, '3/2/2012', 17.00, 'Sam'
union select 6, '3/3/2012', 12.00, 'Joe'
union select 7, '3/4/2012', 10.00, 'Beth'
union select 8, '3/4/2012', 18.00, 'Sam'
union select 9, '3/4/2012', 12.00, 'Joe'
union select 10, '3/4/2012', 11.00, 'Beth'
union select 11, '3/5/2012', 14.00, 'Sam'
union select 12, '3/6/2012', 17.00, 'Beth'
union select 13, '3/6/2012', 19.00, 'Joe'
union select 14, '3/7/2012', 13.00, 'Beth'
union select 15, '3/7/2012', 16.00, 'Sam'
)
select OrderID
,OrderDate
,OrderAmt
,CustomerName
from CTEOrders;

Defining the Window with the OVER() Clause

The purpose of the OVER clause is to define the window over which the function will be applied. By default, the window is the entire result set. As shown in the example below, the result is the same as using an aggregation function without a GROUP BY clause. Part of what makes OVER unique is that different windows can be specified in a single SELECT statement, as will be shown in later sections.


select CustomerName
, SUM(OrderAmt) Over () as NoParm
from CTEOrders;

[screenshot: query results]

The OVER clause takes three additional arguments which change the scope of the clause — PARTITION BY, ORDER BY, and ROWS or RANGE.

PARTITION BY

The PARTITION BY clause is used to reduce the scope of the window to which the aggregation applies. This clause partitions the default or entire result set into partitions based on the criteria specified.  The function will be applied to each partition independently. At this point, we begin to see the real power of window functions.

While similar to the GROUP BY clause, the PARTITION BY clause allows you to specify different partitions within the same select statement. The following example builds on the illustration above with no partition specified and a partition on the CustomerName field.

select CustomerName
, SUM(OrderAmt) OVER() as NoParm
, SUM(OrderAmt) OVER (PARTITION BY CustomerName) as PartByName
from CTEOrders;

[screenshot: query results]

ORDER BY

The ORDER BY clause determines the order in which the window function is applied within the partition. For instance, when an ORDER BY is present in our current illustration, it will sum all of Beth's order amounts, but will then add Joe's to the total as he is the next in order. In the following illustration, no partition is specified, so the partition is still the whole table. You can now see how partitions and ordering affect the window function.

select CustomerName
,SUM(OrderAmt) OVER () as NoParm
,SUM(OrderAmt) OVER (PARTITION BY CustomerName) as PartByName
,SUM(OrderAmt) OVER (ORDER BY CustomerName) as OrderByName
from CTEOrders;

[screenshot: query results]

As you can see, if no partition is specified the order will apply to the entire result set or the default partition. This is particularly important to understand when using ranking functions with the OVER clause.

Before SQL Server 2012, the ORDER BY clause could only be used with ranking functions. With the release of SQL Server 2012, the ORDER BY clause can now be used with ranking, aggregate, and analytic functions.

The following image visually illustrates how the PARTITION BY and ORDER BY clauses aggregate and rank data.

[illustration: PARTITION BY and ORDER BY behavior]

ROWS or RANGE

The final clauses that can be used to define a window are ROWS and RANGE. This functionality was introduced in SQL Server 2012. These clauses specify a set of rows relative to the current row, and as a result both require the use of the ORDER BY clause. Because each window frame is anchored to the current row, the rows or range specifications are based on their proximity to it. A number of keywords can be used with these clauses to define the window frame used by the window functions.

• CURRENT ROW: Just what it says, it identifies the current row as part of the window frame. This keyword is supported by both ROWS and RANGE.
• UNBOUNDED PRECEDING: Go to the beginning of the partition. This is supported by both ROWS and RANGE.
• UNBOUNDED FOLLOWING: Go to the end of the partition. This is supported by both ROWS and RANGE.
• n PRECEDING: This is used to specify the number of rows before the current row. This is only supported by the ROWS clause.
• n FOLLOWING: This is used to specify the number of rows after the current row. This is only supported by the ROWS clause.

select CustomerName
,SUM(OrderAmt) OVER () as NoParm
,SUM(OrderAmt) OVER (PARTITION BY CustomerName) as PartByName
,SUM(OrderAmt) OVER (ORDER BY CustomerName) as OrderByName
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID RANGE BETWEEN UNBOUNDED PRECEDING and CURRENT ROW) as RangeByDate
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate, OrderID ROWS BETWEEN 1 FOLLOWING and 2 FOLLOWING) as NextTwoAmts
from CTEOrders;

[screenshot: query results]

A couple of other notes regarding ROWS and RANGE. In cases where the window frame contains no rows, the value returned is NULL by default, as seen in rows 5 and 10 above. Also, if the frame contains fewer rows than specified, the function simply applies to the rows that are within it. In the example above, the NextTwoAmts column for row 4 will only return the amount from row 5, as that is the only following row in the partition.

Finally, ROWS and RANGE react differently when only CURRENT ROW is specified. When CURRENT ROW is used by itself with the RANGE clause, the function is applied across all peer rows based on the partitioning and ordering. However, when used with the ROWS clause, it only returns the current row regardless of the partition specification. For example, Beth placed two orders on 3/4/2012 (10.00 and 11.00): RANGE CURRENT ROW treats them as peers and returns 21.00 for both rows, while ROWS CURRENT ROW returns each row's own amount.

select CustomerName
,OrderDate
,OrderAmt
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate RANGE CURRENT ROW) as RangeRow
,SUM(OrderAmt) OVER (PARTITION BY CustomerName ORDER BY OrderDate ROWS CURRENT ROW) as RowRow
from CTEOrders;

[screenshot: query results]

In the next three related posts, I will discuss ranking functions, aggregate functions, and analytic functions.





Exploring Excel 2013 for BI Tip #16: Exposing “Values” from a Tabular Model

19 06 2014

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

From Power Pivot to SSAS Tabular

As companies move through the cycle of building Excel based solutions for business intelligence and analytics, they eventually end up with a SQL Server Analysis Services Tabular Model. The tabular model comes into play when you need more data in your model or want to support more granular security.

Up to this point, users have been happily using Power Pivot models in Excel to build their analysis solutions. However, once the model is deployed to tabular some functionality or interaction with the model changes in significant ways.

To summarize this point: power users or data modelers create Power Pivot models in Excel. These models may or may not be deployed to SharePoint, but eventually they need to be taken to the next level. You can migrate a Power Pivot model to tabular with ease by using the import option in SQL Server Data Tools.

[screenshot: import option in SQL Server Data Tools]

Interacting with Power Pivot

I started by creating a simple Power Pivot model using Adventure Works DW data based on the Internet Sales fact table. I am using seven tables in my model as shown here.

[screenshot: Power Pivot model tables]

I am not going to add any calculated measures to the model because Power Pivot allows me to use the data as it sits. Next, we create a pivot table based on this model. I dropped Fiscal Year onto rows and added OrderQuantity and ExtendedAmount to the values region. When OrderQuantity and ExtendedAmount are added to the pivot table, Excel defaults to a sum calculation when working with the data. Basically, Excel creates the calculation for you based on what it knows about the data.

The point here is that I have data that can be used as values without doing any additional work with the model. I saved the workbook, closed Excel and moved on to the next step.

Interacting with Tabular

First, we need to convert the Power Pivot model to a tabular model, which is done by importing the model we just saved into SQL Server Data Tools. Once we have the project open, we need to deploy the model to a SSAS tabular instance so we can connect to it with Excel.

[screenshot: deploying the tabular model]

Now that it has been deployed to SSAS we can reopen our workbook and add a connection to the tabular model. In the field list we notice three differences now that the model is tabular.

1. The SUM symbol (sigma) is used to highlight values or measures that can be calculated.

2. The values we created in the Power Pivot model show up here.

3. In the Values section, “_No measures defined” is shown.

[screenshot: Excel field list against the tabular model]

When working with multidimensional models, the Values section is represented the same way. That makes sense, as the connection Excel is using is based on MDX, not DAX. This significantly changes the user experience.

Let's add a new measure to our Power Pivot model and try to do the same in the tabular model. We can still drop DiscountAmount into the values section of our pivot table based on Power Pivot. However, when we try to do the same against tabular, we get an error saying that we cannot add it to that area of the report.

[screenshot: error adding DiscountAmount to Values]

In order to use DiscountAmount as a measure, we will need to create an OLAP measure (see Excel Tip #8 for details) to use it in this Excel workbook, or we will need to add it as a calculated measure in tabular and redeploy for it to be available.

What’s Happening

Because Excel treats a tabular model the same as a multidimensional model in SSAS, you will need to add calculated measures for all measures you want to use as values in pivot tables in Excel. Multidimensional models are highly structured, using dimension and measure group techniques. While tabular "feels" like Power Pivot, to be used by Excel it needs to appear structured like a multidimensional cube.

Making this more interesting is that Excel uses MDX to communicate with tabular models, not DAX. As a result, we are able to use the OLAP tools in the PivotTable Tools ribbon.

[screenshot: OLAP tools in the PivotTable Tools ribbon]

This option is not available when working with Power Pivot models in Excel.

Impact to Users

Overall the impact to users, in particular power users and report builders, is that they have less “freedom” to design when using a tabular model. If they want to add more calculations, they need to be familiar with MDX. Furthermore, if they want the calculations to be generally available they need to work with IT to deploy updated models.

Hopefully we will see DAX-based interaction with SSAS from Excel in the future, but for the moment you need to understand how tabular and Power Pivot differ when using pivot tables in Excel.





Techfuse, a New Laptop, and How Microsoft Azure Helped Save the Day

24 04 2014

On Tuesday, April 22, I had the opportunity to speak at the Techfuse conference in Minneapolis. I was presenting a session on the differences between tabular and multidimensional models with a focus on the developer's experience. My deck has ten slides, including title, references, and bio. The rest of the time is spent in Visual Studio building out equivalent models using SSAS Tabular and SSAS Multidimensional.

The previous week, I was issued a new laptop from my company, a Dell 7440. This is a very nice machine and I had it all set for the presentation. About 11 AM (I was scheduled to speak at 1:15 PM), it occurred to me that I did not recall seeing a VGA port on it, only HDMI. Next question: did the projectors at the convention center support HDMI? Answer: no. Now I had about an hour and a half to resolve this issue. Simple, I decided: head downtown and get the converter from Office Depot, which was about 8 blocks away. I could do that and get some exercise in.

I took off at about 11:30. First, I stopped at Target since it was closer. No luck. So, on to Office Depot. Keep in mind that Office Depot sells laptops like mine with only HDMI support, so it stands to reason that they would have the converter. No such luck. I was able to get HDMI converted to DVI, but that would not help, as I later found out. They directed me to Radio Shack, where I promptly picked up a DVI-to-VGA converter. Now I had three pieces that, when strung together, should support my needs. I headed back to the convention center and arrived with 30 minutes to spare. Working with the AV guy, we got it all plugged in, only to still have it not work. It turns out you need a converter that changes the digital signal to analog for use with the older projectors. Now what?

The moderator for my room offered me her laptop to use for the presentation, which was AWESOME! So now I had a way to give the presentation, all ten slides. However, she did not have Visual Studio with SSDT for BI and SQL Server installed, which was fine, because I didn't expect her to.

Here is where Azure comes in. I had created a VM with SQL Server Tabular installed along with Visual Studio 2012 and the SQL Server Data Tools for BI. So, I fired up the VM right before I gave the presentation. I warned the crowd about what had happened and decided to push the demos to the end of the presentation so everyone could leave if nothing worked and all the material could still be covered.

I was able to get into the VM and fire up Visual Studio. Since the demo was a live build of a tabular model and a multidimensional model, I used a database I had created in SQL Azure as the data source and we built the models live. Granted, we were not able to do a complete multidimensional model because the database was not formatted as a star schema, but it helped highlight the differences in what needs to be done prior to development. Overall it went very well (I think, surveys are forthcoming…). At the end of the day, without the work I had been doing in Azure, I would not have been able to demo and it would have been a very short presentation.

Some lessons learned -

  • Be sure to have what you need to support presenting in a variety of scenarios. I should have made sure to have a converter prior to the conference, as most convention centers and other facilities haven't upgraded their projectors yet.
  • I will likely set up Azure VMs to support more demos, just in case. It is always good to have a backup plan, though doing demos over a wireless connection would have been painful.
  • Roll with it. Don't give up; try to make the best of a bad situation. People understand that things don't always go perfectly. At the end of the day, I came to talk about multidimensional and tabular model development. I could have opened the floor up for discussion and done Q&A. Make the most of every situation.




Oracle Tips for MSBI Devs #6: Supporting SSAS Tabular Development

14 04 2014

As SQL Server Analysis Services Tabular Models become more popular, models will use Oracle databases as sources. One of the key issues whenever you work with Oracle is understanding how to properly configure the necessary components to enable development.

Getting Started

If you have worked with Oracle before, you are very aware of a few things you need to be successful. First, you need to install the Oracle client, and here is where the details get messy. When you are working with MSBI tools, you will be using SQL Server Data Tools in Visual Studio, which is still only 32 bit. Of the BI tools in SSDT, only SSIS has run modes to support both 32 bit and 64 bit configurations. As a result, you need to install the 32 bit Oracle client in order to develop your tabular model.

Once that has been installed you will need to update the TNSNAMES.ORA file with the servers you will be targeting during development. Ideally, your Oracle DBAs have a file for you to use so you don’t need to create one. One nice thing is that the Oracle 12c client updates the PATH environment variable with the location of the bin folder. (Yes, Oracle still uses environment variables.) I would also recommend adding or using the TNS_ADMIN variable to specify the location of the TNSNAMES.ORA file. (See http://www.orafaq.com/wiki/TNS_ADMIN for details.)

NOTE: It took me many hours to work through a variety of configuration issues related to the Oracle client install. A couple of reinstalls, reboots, TNSNAMES.ORA tweaks, and lots of fruitless searching were all required to get this working. Be warned: working with Oracle clients is neither fun nor simple.

The Issue

Now that you have the 32 bit client installed you can connect to the Oracle database through the tabular model designer. As shown below, you can connect to Oracle through the Table Import Wizard.

[screenshot: Table Import Wizard]

You will be able to successfully test the connection as noted here.

[screenshot: successful connection test]

And you will be able to execute a query and get results. You can also use the option to select tables and views.

[screenshot: selecting tables and views]

However, once you decide to import the data you will encounter the following error:

[screenshot: import error]

The issue is that while you can do most of your work within Visual Studio using the 32 bit client, the import process runs on the SQL Server tabular instance you specified when you created the project. While the 32 bit version of SQL Server is still available, most of us would not install it, even in our development environments. If you do not encounter this error, you are either using the 32 bit version of SQL Server or you have the 64 bit Oracle client installed (more on that next). As long as Visual Studio is only 32 bit compliant and you choose to use the 64 bit version of SQL Server, you will see this issue.

The Resolution

The resolution is fairly simple: you need to download and install the 64 bit Oracle client. I would recommend that you get it installed and then reboot your development PC. While this may not be required, it seemed to help me with a number of connectivity issues. Be prepared for some "interesting" issues, as you will now have more than one Oracle home installed and the potential for many types of ORA-XXXXX errors. Once you are up and running, you should be able to develop tabular models built on Oracle databases.

Some Parting Thoughts

First, I want to be clear that I think Oracle is a solid database platform. However, I have never been at a client site or on a project where the connectivity or client installs were totally correct or functional without some work between the Oracle team and the BI development team. I understand the .NET driver is supposed to be better, and I may try that out for a later post (when I have the hours to spare).

I did the testing for this completely on Azure (and my Surface). I set up an Oracle VM and a SQL Server VM on Azure. The Microsoft team put together a great reference on setting up your Oracle VM; check it out. I also did a previous post on setting up Oracle in an Azure VM. Both VM types can be pricey, but in a testing environment the cost was not too bad. I encourage you to use Azure for these types of scenarios, but be sure to turn the VMs off when you are done.





Setting Up Tabular Models on Windows Azure

12 03 2014

In my last post, I discussed how to set up Oracle in Windows Azure. During a customer call, there were questions about using SQL Server Analysis Services (SSAS) tabular models with Azure. This post will walk through setting up an Azure VM and deploy a tabular model to that VM.

If you do not have a Windows Azure account, you can use a trial account with your Microsoft or Live account. Check out http://www.windowsazure.com for details on how to "try it free."

Setting Up the VM in Azure

From the Management Portal on your Azure account, select Virtual Machines on the left, then Add at the bottom. On the next screen, choose to create your VM from the gallery. You should see the Choose an Image option as seen below. As you can see, I have the SQL Server 2012 SP1 Enterprise image highlighted. You will need to use the Enterprise edition, as tabular does not run on Standard. Note that the Data Warehousing image is NOT the Business Intelligence Edition of SQL Server.

[screenshot: Choose an Image dialog]

You can also choose to create a “blank” VM and load up SQL Server on your own. I chose to use the image to speed up the process – which it did substantially.

After selecting the image, the next few steps guide you through setting up the VM. For the most part, the defaults will work fine. Be aware that once this is turned on, you will be charged for it running. It is up to you to make sure you understand the costs, even if you are using the free trial.

During the setup steps, you will create the VM and its related cloud service. Remember that the account you create is your admin account for the VM, and you will need those credentials to Remote Desktop into the VM. The last setup page covers the Endpoints. Leave the defaults; we will add an endpoint for our tabular model later.

At this point, it will take a few minutes to set up your new VM. Once it is ready, open a Remote Desktop session into it. If you look at the services or the SQL Server Configuration Manager, you will notice that everything except a tabular instance has been set up for you. As a result, I would not recommend using this gallery image for a production deployment. You should look at creating your own template if you want a more locked-down and refined setup.

Setting Up the Tabular Instance in Azure

As noted before, the tabular instance is not set up. The installation media is on the server, so you can run that to install your tabular instance. I won't walk through the install process, but this was great to find because it meant I did not have to copy media to my VM.

Making the Tabular Instance Available

This section covers the tedious tasks required to make your tabular instance available for querying outside of the VM. There are three basic steps to getting your instance "online": setting the port number in SSAS, updating the firewall, and adding endpoints. I will walk you through the steps I used, followed by some references that helped me get there.

Setting the Port Number in SSAS

By default, SSAS instances, both multidimensional and tabular, use dynamic ports. In order to allow connections through the firewall and endpoints, the port number needs to be fixed. I used guidance from TechNet and followed these steps to set the port:

    1. Opened the Task Manager to get the PID for MSOLAP$<<instance name>>.
    2. Ran netstat -ao -p TCP to get a list of ports used by current processes. Matching the PID from step 1 against this list identified my port number. I also noted the server IP address, which is required in the next step.
    3. I chose to confirm that I had the correct port by connecting to the instance using the IP address and port number.
    4. Next, we have to go old school and modify the msmdsrv.ini file. The typical install path for this file is C:\Program Files\Microsoft SQL Server\<<instance name>>\OLAP\Config\msmdsrv.ini.
    5. Open the file in notepad and find the <Port>0</Port> tag.
    6. Change the port number to the port number that was identified above. (Technically we could have used any open port number. I chose to do this because I was sure the port number was available.)
    7. Save the changes and restart the service.
    8. Once again confirm you can connect to the server with SSMS using the IP address and port number.

Now you have set up the SSAS instance to use a fixed port number.

Updating the Firewall

Now that we have a port number, we can create a firewall rule. We access the firewall rules from the Server Manager. In the Windows Firewall console, we will be creating a new Inbound Rule.

[screenshot: New Inbound Rule wizard]

  1. The rule type is Port.
  2. We apply the rule to TCP and specify the port we defined above.
  3. On the Action tab, we select Allow the Connection. (If you are planning to use this in a production environment, you will need to verify your connection requirements.)
  4. Next, we clear any profiles we don't want the rule to apply to.
  5. Finally, we name the rule with a descriptive name.

Now we have the firewall rule in place to allow external connections for the tabular instance.

Adding Endpoints

The final step to making the instance available is to add the endpoint in Azure. In the Windows Azure portal, we go to the VMs page again, select our VM, and open the ENDPOINTS tab. From here we create a new endpoint.

  1. We are creating a standalone endpoint.
  2. Next, we select the TCP protocol and add the port number to both the private and public port textboxes.
  3. Finally, we apply the changes.

We are now ready to test connectivity.

References

Setting up VM Endpoints

Configuring Windows Firewall

Configuring Windows Firewall with SSAS

Connecting to the Tabular Instance in Azure

So, to verify this works, I wanted to connect to the model with SSMS on my desktop. However, SSAS uses Windows authentication and I am not on the same domain. (My IT staff is likely still laughing about me joining my VM to our domain.)

Thankfully, Greg Galloway (blog) reminded me of how to use runas with different credentials to connect to SSAS. Here is the syntax I used to connect to the tabular instance on Azure from a command window:

runas /netonly /user:<<VM name>>\<<username>> "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"

This allowed me to open SSMS using those credentials. Next, I used the VM address plus the port number to connect. You will see that the Windows credentials in the connection dialog are not what you entered on the command line. This is expected; the credentials you entered on the command line are passed through to the VM.
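
To make the syntax concrete, here is what the command looks like with a hypothetical VM named AzureTabVM and an admin account named tabadmin (both names are made up for illustration):

runas /netonly /user:AzureTabVM\tabadmin "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"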

Deploying a Tabular Model to Azure

I attempted three different ways to deploy my model to the Azure VM. Two worked, one did not.

Deploying from Visual Studio on My Desktop

My first attempt was to deploy from Visual Studio on my desktop. I set the deployment properties to point to the Azure instance using the same credentials I had used with SSMS. I also set up a command line execution to use runas, as with SSMS.

It appeared I could connect, but I continually got the permissions error shown below. After much frustration, I gave up on it and moved on to the next options.

[screenshot: deployment permissions error]

Using XMLA to Deploy

This is the most straightforward way to deploy an SSAS database to a different server. I used SSMS to generate the Create Database XMLA statement. Because I had not deployed the model locally, I needed to modify the XMLA statement to remove the user name and GUID from the database name and database ID (AdvWorksTab1_SteveH_<<Some GUID>>).

In a bit of irony, I could use the SSMS connection from my desktop, via runas, to deploy the database to the VM.

The reality is that this is an easy and acceptable way to deploy SSAS databases to production environments, whether in Azure or in your own datacenter.

Deploying from Visual Studio on the VM

The final method I used was installing Visual Studio on the VM and deploying from there. I used VisualStudio.com (TFS online) to transfer the source code from my desktop to the VM. I had to install the TFS client on the VM, but SSDT with BI tools was already there.

  1. Installed the VS 2010 TFS Client: http://www.microsoft.com/en-us/download/details.aspx?id=329
  2. Then installed Visual Studio SP1  http://social.msdn.microsoft.com/Forums/vstudio/en-US/4e4851dc-eb29-4081-9484-d38a6efa07ee/unable-to-connect-to-tfs-online?forum=TFService
  3. Finally installed VS2010 Team Foundation Server Compatibility GDR (KB2662296) http://www.microsoft.com/en-us/download/details.aspx?id=29082

Now it would connect to TFS Online. I got the latest code for my project and pointed the project at my tabular instance.

Be sure to check the impersonation settings.

Next, I deployed the project to the local tabular instance on the VM and it worked great. This might make sense for development, but I would not use this method in a production environment.

Some Closing Thoughts

I was amazed at how simple it was to create the VM and set up tabular in Azure. Knowing what I know now, I would be able to set up a usable instance fairly quickly and deploy a database using XMLA without much effort. That was very nice.

Doesn’t work with Office 365

I started this project to determine the connectivity capability with Office 365. Well, that does not work in my current configuration. I was able to create a workbook on my desktop using my Azure tabular model and Excel. It works just as you would expect. However, when I deployed the workbook to Office 365, data refresh always failed.

[image: data refresh failure in Office 365]

My next thought was to use a data gateway, but those only work with the SQL Server database engine and Oracle, not SSAS. This is still a significant gap in making Power BI able to take full advantage of the existing BI assets at a company.

Using Azure Active Directory

My next step would be to use Azure Active Directory services to try to get Windows Authentication to work throughout. But that is for a later project and post.





Traveling and Talking to Wrap Up 2013

16 08 2013

I am writing this blog post en route to New York City for SQL Saturday #235. This begins a fairly busy travel and speaking schedule for me this year. Having spent much of the summer at home with my family, which was great, I now embark on some trips, primarily out east.

For starters, I am kicking off my busy end of summer and beginning of fall in New York City. I am looking forward to being there for the second time, and this should be a great event. As Regional Mentor for the Northeast, this trip allows me to be in that region for an event. I will be speaking on Building BI Solutions with Microsoft Excel.

Next up is some quality time in Boston. I have a contract that will put me in downtown Boston for about four weeks. After that trip, I skip a week and then return to speak at CodeMastery, a Magenic event, in late September. If you are in the Boston area, I will be speaking on the difference between SQL Server Analysis Services models – the Multidimensional and Tabular Model Smackdown. More details of this event will be coming soon, but pencil it in for Thursday 9/26.

While not traveling far, I am speaking at SQL Saturday #238 in Minneapolis right before PASS Summit in Charlotte. This is right in my backyard, so to speak, and I look forward to another great event in the Upper Midwest. I do not yet know which sessions were picked, but I look forward to seeing you there.

October will be spent at PASS Summit. While I was not selected to speak this year, I look forward to meeting a number of people at the Community Zone and at the other events at the conference. I will follow that up by going to SQL Saturday # BI Edition in Charlotte. While I have not received word about speaking, I plan to attend the event in either case. It should be a nice way to wrap up my time there.

In November, I am headed to Atlanta for another CodeMastery event. This should be a nice time to visit and spend some time with the community there.


Upon returning from Atlanta, I am back in Minneapolis for a CodeMastery event here at home. Having had a major role in running this event over the past couple of years, I am truly looking forward to speaking at it. Check out http://codemastery.com for details on all of the CodeMastery events as they are announced.


Finally, at least at this time, I will be presenting five times right before Thanksgiving. This will be my first time presenting at SQL Server Live! and my second time presenting at Modern Apps Live!. Join us for this awesome Live! 360 event in Orlando.

I look forward to meeting new people and visiting with old friends throughout all of these events and travels. If you see me there, say Hi and ask for one of my “famous” superhero cards! See you around.





Exploring Excel 2013 for BI Tip #8: Adding Calculated Measures

20 06 2013

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Adding Calculated Measures to the Excel 2013 Workbook

If you have worked with SQL Server Analysis Services in the past you already know what calculated measures are.  More importantly, you know how to update the MDXScript without requiring a cube refresh.  (If you are unaware of this, check out the BIDS Helper project on CodePlex.)

A calculated measure uses existing measures and MDX to provide additional, shared calculations in a cube.  However, there are many times that the ability to create a calculated measure in Excel would be great.  In Excel 2013, this is now possible.

Once you have connected to a cube using a pivot table, you can add calculated measures using the OLAP Tools menu on the ANALYZE tab.


When you select the MDX Calculated Measure item, it will open an MDX dialog designer in which you can create a measure.  (MDX calculated members will be covered in the next tip.)


Before we create our measure, let’s talk about the ancillary parts such as the name, folder, and measure group.  You will want to give your measure a name.  It needs to be unique within the workbook and distinct from the other measures in the cube, or you will get an error.


The folder and measure group are optional.  They determine how the new measures are displayed in the Excel Fields window.  I recommend using folders when you have a large number of measures; they are a great way to organize measures into consumable, related groups for your users.


When you designate a measure group, the measure and its folder are displayed under that measure group.  This is appropriate when the measure relates exclusively to that measure group, conceptually if not technically.  I usually only do this if all of the underlying measures come from the same measure group (technically related) or if the user understands that the measure “should” be part of the measure group even though it depends on measures outside it (conceptually related).

Next, you create the measure.  The Fields and Items tab contains the available measures and dimensions, while the Functions tab has the MDX functions you can use.  Use the Test MDX button to verify the syntax before saving the measure.
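For example, a simple calculated measure expression might look like the following.  This is only a sketch; the measure names are hypothetical, so substitute measures from your own cube:

// Average order amount, guarded against division by zero; measure names are hypothetical
IIF([Measures].[Order Count] = 0,
    NULL,
    [Measures].[Order Amt] / [Measures].[Order Count])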

The really nice part is that this measure is contained entirely within the workbook.  It does not get published back to the server.  However, if the measure becomes popular, you can use its MDX to create a new measure on the server, where it can be business-verified before being published.  By using Excel to create calculated measures, you also prevent a glut of single-use measures from being created on the server.
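If a measure does get promoted, the server-side version is added to the cube’s MDX script.  As a sketch, again with hypothetical names, it might look like this:

// Hypothetical server-side version of the workbook measure
CREATE MEMBER CURRENTCUBE.[Measures].[Avg Order Amt]
    AS IIF([Measures].[Order Count] = 0,
           NULL,
           [Measures].[Order Amt] / [Measures].[Order Count]),
    FORMAT_STRING = "Currency",
    VISIBLE = 1;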

Finally, to manage created measures, use the Manage Calculations option on the OLAP Tools menu.  It opens a dialog listing all of the calculated measures and calculated members created with this data connection in the workbook.  In my scenario, I used the MyVote Cube connection to create the measure.  Basically, the pivot table is associated with a connection, and that connection is the de facto filter for this list.


Excel gives you a simple way to test MDX: create measures, verify the data, and then deploy working code.  It is a great addition to the product.

Next up… Calculated Members.







