Oracle Tips for MSBI Devs #6: Supporting SSAS Tabular Development

14 04 2014

As SQL Server Analysis Services Tabular Models become more popular, more models will use Oracle databases as data sources. One of the key issues whenever you work with Oracle is understanding how to properly configure the necessary components to enable development.

Getting Started

If you have worked with Oracle before, you are aware of a few things you need to be successful. First, you need to install the Oracle client. Here is where the details get messy. When you are working with MSBI tools, you will be using SQL Server Data Tools in Visual Studio, which is still only 32 bit. Of the BI tools in SSDT, only SSIS has run modes to support both 32 bit and 64 bit configurations. As a result, you need to install the 32 bit Oracle client in order to develop your tabular model.

Once that has been installed you will need to update the TNSNAMES.ORA file with the servers you will be targeting during development. Ideally, your Oracle DBAs have a file for you to use so you don’t need to create one. One nice thing is that the Oracle 12c client updates the PATH environment variable with the location of the bin folder. (Yes, Oracle still uses environment variables.) I would also recommend adding or using the TNS_ADMIN variable to specify the location of the TNSNAMES.ORA file. (See http://www.orafaq.com/wiki/TNS_ADMIN for details.)
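For reference, a minimal TNSNAMES.ORA entry has the following shape (the alias, host, and service name here are hypothetical; use the values your Oracle DBAs provide):

```
# Hypothetical entry - replace the host and service name with your environment's values
ORADEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oradev01.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = oradev.example.com)
    )
  )
```

To use TNS_ADMIN, point it at the folder containing the file, for example: setx TNS_ADMIN "C:\oracle\network\admin" (the path is an example).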

NOTE: It took me many hours to work through a variety of configuration issues related to the Oracle client install. A couple of reinstalls, reboots, TNSNames.ORA tweaks, and lots of fruitless searching were all required to get this working. Be warned, working with Oracle clients is neither fun nor simple.

The Issue

Now that you have the 32 bit client installed you can connect to the Oracle database through the tabular model designer. As shown below, you can connect to Oracle through the Table Import Wizard.

image

You will be able to successfully test the connection as noted here.

image

And you will be able to execute a query and get results. You can also use the option to select tables and views.

image

However, once you decide to import the data you will encounter the following error:

image

The issue is that while you can do most of your work within Visual Studio using the 32 bit client, the import process targets the SQL Server tabular instance you specified when you created the project. While the 32 bit version of SQL Server is still available, most of us would not install that, even in our development environments. If you do not encounter this error, you are either using the 32 bit version of SQL Server or you have the 64 bit Oracle client installed (more on that next). As long as Visual Studio is only 32 bit compliant and you choose to use the 64 bit version of SQL Server, you will see this issue.

The Resolution

The resolution is fairly simple. You need to download and install the 64 bit Oracle client. I would recommend that you get it installed, then reboot your development PC. While this may not be required, it seems to have helped me with a number of connectivity issues. You will need to be prepared for some “interesting” issues, as you will have more than one Oracle home installed and you have the potential for many types of ORA-XXXXX errors. Once you are up and running you should be able to develop tabular models built on Oracle databases.

Some Parting Thoughts

First, I want to be clear that I think Oracle is a solid database platform. However, I have never been at a client site or on a project where the connectivity or client installs were totally correct or functional without some work between the Oracle team and the BI development team. I think that the .NET driver is supposed to be better, and I may try that out for a later post (when I have the hours to spare).

I did the testing for this completely on Azure (and my Surface). I set up an Oracle VM and a SQL Server VM on Azure. The Microsoft team put together a great reference on setting up your Oracle VM. Check it out. I also did a previous post on setting up Oracle in an Azure VM. Both VM types can be pricey, but in a testing environment the cost was not too bad. I encourage you to use Azure for these types of scenarios. But be sure to turn the VMs off when you are done.





Setting Up Tabular Models on Windows Azure

12 03 2014

In my last post, I discussed how to set up Oracle in Windows Azure. During a customer call, there were questions about using SQL Server Analysis Services (SSAS) tabular models with Azure. This post will walk through setting up an Azure VM and deploy a tabular model to that VM.

If you do not have a Windows Azure account, you can use a trial account with your Microsoft or Live account. Check out http://www.windowsazure.com for details on how to “try it free.”

Setting Up the VM in Azure

From the Management Portal on your Azure account, select Virtual Machines from the left then Add at the bottom. On the next screen, choose to create your VM from the gallery. You should see the Choose an Image option as seen below. As you can see, I have the SQL Server 2012 SP1 Enterprise image highlighted. You will need to use the Enterprise license as Tabular does not run on Standard. In this case, the Data Warehousing image is NOT the Business Intelligence Edition of SQL Server.

image

You can also choose to create a “blank” VM and load up SQL Server on your own. I chose to use the image to speed up the process – which it did substantially.

After selecting the image, the next few steps guide you through setting up the VM. For the most part, the defaults will work fine. Be aware that once this is turned on, you will be charged for it running. It is up to you to make sure you understand the costs, even if you are using the free trial.

During the setup steps, you will create the VM and its related cloud service. Remember that the account you create is your admin account for the VM, and you will need those credentials to Remote Desktop into the VM. The last setup page covers the Endpoints. Leave the defaults; we will add an endpoint for our tabular model later.

At this point, it will take a few minutes to set up your new VM. Once it is set up, open a Remote Desktop session into it. If you look at the services or at the SQL Configuration console, you will notice that everything except a tabular instance has been set up for you. As a result, I would not recommend using this gallery image for a production deployment. You should look at creating your own template if you want a more locked down and refined setup.

Setting Up the Tabular Instance in Azure

As noted before, the tabular instance is not set up. The installation media is on the server, so you can run that to install your tabular instance. I won’t walk through the install process, but this was great to find because that meant I did not have to copy media to my VM.

Making the Tabular Instance Available

This section covers the tedious tasks required to make your tabular instance available for querying outside of the VM. There are three basic steps to getting your instance “online”: setting the port number in SSAS, updating the firewall, and adding endpoints. I will walk you through the steps I used to get this done followed by some references that helped me get here.

Setting the Port Number in SSAS

By default, SSAS instances, both multidimensional and tabular, use dynamic ports. In order to allow connections through the firewall and endpoints, the port number needs to be fixed. I used guidance from TechNet and did the following steps to set the port.

    1. Opened the Task Manager to get the PID for MSOLAP$<<instance name>>.
    2. Ran netstat -ao -p TCP to get a list of ports used by current processes. Once I had identified my port number, I also noted the server IP address, which is required in the next step.
    3. I chose to confirm that I had the correct port by connecting to the instance using the IP address and port number.
    4. Next, we have to go old school and modify the msmdsrv.ini file. The typical install path for this file is C:\Program Files\Microsoft SQL Server\<<instance name>>\OLAP\Config\msmdsrv.ini.
    5. Open the file in notepad and find the <Port>0</Port> tag.
    6. Change the port number to the port number that was identified above. (Technically we could have used any open port number. I chose to do this because I was sure the port number was available.)
    7. Save the changes and restart the service.
    8. Once again confirm you can connect to the server with SSMS using the IP address and port number.

Now you have set up the SSAS instance to use a fixed port number.
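The command-line portion of the steps above can be sketched as follows (the instance name and port number are examples from my setup, not prescriptions):

```
REM Step 2: list ports by owning process, then look for the PID of MSOLAP$<<instance name>>
netstat -ao -p TCP

REM Steps 4-7: in msmdsrv.ini, change <Port>0</Port> to a fixed port, e.g. <Port>2383</Port>,
REM save the file, and restart the SSAS service (example instance name shown)
net stop  MSOLAP$TABULAR
net start MSOLAP$TABULAR
```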

Updating the Firewall

Now that we have a port number, we can create a firewall rule. We access the firewall rules from the Server Manager. In the Windows Firewall console, we will be creating a new Inbound Rule.

image

  1. The rule type is Port
  2. We will apply the rule to TCP and specify the port we defined above.
  3. On the action tab, we selected Allow the Connection. (If you are planning to use this in a production environment, you will need to verify your connection requirements.)
  4. Next, we clear any profiles we don’t want the rule to apply to.
  5. Finally, I named the rule with a descriptive name.

Now we have the firewall rule in place to allow external connections for the tabular instance.
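If you prefer the command line, the same inbound rule can be created with netsh from an elevated prompt (the rule name and port here are examples):

```
netsh advfirewall firewall add rule name="SSAS Tabular" dir=in action=allow protocol=TCP localport=2383
```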

Adding Endpoints

The final step to making the instance available is to add the endpoints in Azure. In the Windows Azure portal, we need to go to the VMs page again, select our VM, and open the ENDPOINTS tab. From here we create a new endpoint.

  1. We are creating a standalone endpoint.
  2. Next, we select the TCP protocol and add the port number to both the private and public port textboxes.
  3. Finally, we apply the changes.

We are now ready to test connectivity.
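As an alternative to the portal, the endpoint could be scripted with the Azure PowerShell cmdlets of that generation (the cloud service name, VM name, and port below are placeholders):

```
Get-AzureVM -ServiceName "MyCloudService" -Name "MyTabularVM" |
    Add-AzureEndpoint -Name "SSASTabular" -Protocol tcp -LocalPort 2383 -PublicPort 2383 |
    Update-AzureVM
```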

References

Setting up VM Endpoints

Configuring Windows Firewall

Configuring Windows Firewall with SSAS

Connecting to the Tabular Instance in Azure

So to verify this works, I wanted to connect to the model with SSMS on my desktop. However, it uses Windows authentication and I am not on the same domain. (My IT staff is likely still laughing about me joining my VM to our domain.)

Thankfully, Greg Galloway (blog) reminded me of how to set up runas to use different credentials to connect to SSAS. Here is the syntax I used to connect to the tabular instance on Azure using a command window:

runas /netonly /user:<<VM name>>\<<username>> "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"

This allowed me to open SSMS using those credentials. Next, I used the VM name + port number to connect. You will see that the Windows credentials in the dialog are not what you entered on the command line. This is expected; the credentials you entered on the command line will be passed through to the VM.

Deploying a Tabular Model to Azure

I attempted three different ways to deploy my model to the Azure VM. Two worked, one did not.

Deploying from Visual Studio on My Desktop

My first attempt was to deploy from Visual Studio on my desktop. I set the deployment properties to point to the Azure instance using the same credentials I had used with SSMS. I also set up a command line execution to use runas like with SSMS.

It appeared I could connect, but I continually got a permissions error which is shown below. After much frustration, I gave up on it and moved to the next options.

image

Using XMLA to Deploy

This is the most straightforward way to deploy an SSAS DB on a different server. I used SSMS to generate the Create Database XMLA statement. Because I had not deployed it locally, I needed to modify the XMLA statement to remove the user name and guid from the database name and database ID. (AdvWorksTab1_SteveH_<<Some GUID>>)

In a bit of irony, I can use the SSMS connection from my desktop using the runas to deploy the database to the VM.

The reality is that this is an easy and acceptable way to deploy SSAS databases to production environments, whether in Azure or in your datacenter.
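To illustrate, a trimmed Create Database statement ends up with this general shape once the user name and GUID are removed from the name and ID (the database name is an example; the full definition generated by SSMS is omitted here):

```
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ObjectDefinition>
    <Database>
      <ID>AdvWorksTab1</ID>
      <Name>AdvWorksTab1</Name>
      <!-- remainder of the database definition as generated by SSMS -->
    </Database>
  </ObjectDefinition>
</Create>
```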

Deploying from Visual Studio on the VM

The final method I used was deploying Visual Studio onto the VM and deploying from there. I used VisualStudio.com (TFS online) to transfer the source code from my desktop to the VM. I had to install the TFS client on the VM, but SSDT with BI tools is already there.

  1. Installed the VS 2010 TFS Client: http://www.microsoft.com/en-us/download/details.aspx?id=329
  2. Then installed Visual Studio SP1  http://social.msdn.microsoft.com/Forums/vstudio/en-US/4e4851dc-eb29-4081-9484-d38a6efa07ee/unable-to-connect-to-tfs-online?forum=TFService
  3. Finally installed VS2010 Team Foundation Server Compatibility GDR (KB2662296) http://www.microsoft.com/en-us/download/details.aspx?id=29082

Now it will connect to TFS Online. I got the latest from my project and pointed the project to my tabular instance.

Be sure to check the impersonation settings.

Next, I deployed the project to the local tabular instance on the VM and it worked great. This might make sense for development, but I would not use this method in a production environment.

Some Closing Thoughts

I was amazed at how simple it was to create the VM and set up tabular in Azure. Knowing what I know now, I would be able to set up a usable instance fairly quickly and deploy a database using XMLA without much effort. That was very nice.

Doesn’t work with Office 365

I started this project to determine the connectivity capability with Office 365. Well, that does not work in my current configuration. I was able to create a workbook on my desktop using my Azure tabular model and Excel. It works just as you would expect. However, when I deployed the workbook to Office 365, data refresh always failed.

image

My next thought was to use a data gateway, but those only work with the SQL Server Database Engine and Oracle, not SSAS. This is still a significant issue with making Power BI able to fully take advantage of existing BI assets at a company.

Using Azure Active Directory

My next step would be to use Azure Active Directory services to try to get Windows Authentication to work throughout. But that is for a later project and post.





Exploring Excel 2013 for BI Tip #9: Adding Calculated Members

25 06 2013

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Adding Calculated Members to the Excel 2013 Workbook

In my last tip (#8), I discussed using calculated measures.  In this tip, I will talk about creating a calculated member.  The primary difference is that a member becomes a part of a dimension and can be used as a filter, column header, row header, or even a slicer.

In my simple example, I want to create an aggregated set of categories used in my polls called “Cool Cats,” which contains the Fun, Entertainment, and Sports categories, and put my new member in the Poll Category attribute hierarchy.  I will use this to see how many submissions there were in these categories.

As before, you can create and manage calculated members from the ANALYZE tab in the PIVOTTABLE TOOLS ribbon.  Use the OLAP Tools menu and select the MDX Calculated Member option.  If you have already created the member, use the Manage Calculations option to edit existing members and measures.

image

The New Calculated Member dialog is different from the dialog used to edit the member.  Let’s start with the create dialog as noted below.

image

(1) Assign your member a name.  It will have to be unique in the context of the Parent Hierarchy.

(2) Assign the Parent Hierarchy and Parent Member.  This establishes where you plan to locate the new member.  As you can see you can choose any hierarchy you use in your cube.  The Parent Member property lets you choose that first level within the hierarchy.  In my case, I am choosing the Poll Category attribute hierarchy and the All member as the parent.

(3) Create your member with MDX.  Because you are creating a member, it is important that your MDX resolves to a member.  As a side note, I initially used a set, which passed the Test MDX operation but displayed as #VALUE in Excel.

(4) Test MDX will allow you to verify you have no syntax errors in your member creation.  However, as I noted in step 3, it is not flawless, so you may still have issues even though the syntax is valid.

A couple of important concepts: in my situation, the Cool Cats member stops at the top level.  If I put this in the Category hierarchy, which has multiple levels, Cool Cats would have no children as it is a standalone member.  However, when applied at a filter level it will filter results properly. The image below shows the filter in use.
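For reference, the Cool Cats member described above can be built with an Aggregate over a set along these lines (the dimension name and member keys are examples from my model; yours will differ):

```
Aggregate(
    {
        [Poll].[Poll Category].&[Fun],
        [Poll].[Poll Category].&[Entertainment],
        [Poll].[Poll Category].&[Sports]
    }
)
```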

image

Now when I pull the Poll Category hierarchy into the pivot table, you will see that Cool Cats is a peer member and has a valid value.  By default, Excel will not count those members twice when calculating the Grand Total.

image

You will need to be prepared to properly call this out for your users so they understand how calculated members operate in this scenario.

A common use case for creating calculated members is to create date members that aren’t easily created or tested.  This will allow you to work out the member and how it affects the user experience.





Exploring Excel 2013 for BI Tip #8: Adding Calculated Measures

20 06 2013

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Adding Calculated Measures to the Excel 2013 Workbook

If you have worked with SQL Server Analysis Services in the past you already know what calculated measures are.  More importantly, you know how to update the MDXScript without requiring a cube refresh.  (If you are unaware of this, check out the BIDS Helper project on CodePlex.)

A calculated measure uses existing measures and MDX to provide additional, shared calculations in a cube.  However, there are many times that the ability to create a calculated measure in Excel would be great.  In Excel 2013, this is now possible.

Once you have connected to a cube using a pivot table, you can add calculated measures using the OLAP Tools menu on the ANALYZE tab.

image

When you select the MDX Calculated Measure item, it will open an MDX dialog designer in which you can create a measure.  (MDX Calculated Members will be covered in the next tip.)

image

Before we create our measure, let’s talk about the ancillary parts such as the name, folder and measure group.  You will want to give your measure a name.  It needs to be unique within the work you are doing and unique from other measures in the cube or you will get an error.

image

The folder and measure group are optional.  It really depends on how you want to display the new measures in the Excel Fields window.  I would recommend using folders when large volumes of measures are in play.  It is a great way to organize the measures into consumable, related groups for your users.

image

When you designate a measure group, the measure and folder will be placed within that measure group.  This is appropriate when the measure is related exclusively to the measure group, conceptually if not technically. I usually will only do this if all of the measures come from the same measure group (technically related) or if the user understands that the measure “should” be a part of the measure group even if it is dependent on measures outside of the current measure group (conceptually related).

Next, you create the measure.  The Fields and Items tab contains the measures and dimensions available while the Functions tab has the MDX functions you can use.  Use the Test MDX button to verify syntax prior to saving the measure.
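As a simple illustration, a calculated measure expression might divide one existing measure by another (both measure names here are hypothetical):

```
// Hypothetical: average responses per poll
[Measures].[Response Count] / [Measures].[Poll Count]
```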

The really nice part is that this measure is now contained within the workbook.  It does not get published back to the server.  However, if the measure becomes popular, you can use the MDX from this measure to create a new measure on the server, where it can be verified by the business before being published.  By using Excel to create calculated measures, you also prevent a glut of single-use measures from being created on the server.

Finally, to manage created measures, use the Manage Calculations option on the OLAP Tools menu.  It will open a dialog with all of the calculated measures and calculated members created with this data connection in the workbook.  In my scenario, I used the MyVote Cube connection to create the measure.  Basically, the pivot table is associated with a connection, and that is the de facto filter for this list.

image

Excel gives you a simple way to test MDX.  This will allow you to create measures, verify the data, and then deploy working code.  It is a great addition to the product.

Next up… Calculated Members.





Exploring Excel 2013 for BI Tip #2: Show Details

13 02 2013

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Show Details

So, you have connected Excel to your SSAS cube.  You really wish you could easily drill to details using Excel.   You can do this in Excel 2013 by double clicking the cell or right clicking the cell to get the shortcut menu which includes the option as well.

image

It is really cool that you can just double-click the cell and get there directly.  By default, it returns the first thousand rows of detail from the underlying cube data.

image

You can change this in the connection options for your SSAS connection.

image

One “gotcha” on this is that it doesn’t work on a calculated cell, but it works fine on standard measures.  If you use a lot of calculated members, you will get the error there as well.  So if this option is important for the users of a specific calculation, you may need to get it into the cube as a standard, not calculated, measure.





Exploring Excel 2013 for BI Tip #1: Quick Explore

8 02 2013

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Quick Explore

So, you have connected Excel to your SSAS cube.  You really wish you could cross drill easily in the product.  If you have used PerformancePoint Services you know the process.  Right click on the bar or cell and then choose the dimension to drill to using the information you clicked on as a starting point.  You can now do this in Excel 2013 using Quick Explore.  Here’s how to do it.

1. Click on a cell and hover over it.

image

2. Click the Quick Explore button (a.k.a. the magnifying glass with a lightning bolt).  That will pop up the following dialog box.

image

In this box, you can see that the cell has focus on “Boston” based on context and is drilling into the Geography hierarchy of the Client dimension based on the selected filters and slice of data we are focused on.

3. Click the Drill To target.  Excel will create a new view of the Pivot Table with the Drill to on the row as shown here with the filter still in place.

image

4. Going back…  One of the frustrating things with this is figuring out how to go back to the original view.  Ironically, you use Undo.  This will reset the view back to the previous one.  So, if you choose to drill into this data again and again, you have to Undo that many operations to get back to the starting point.  Of course, you can click the arrow by the Undo button to see your “bread crumb” trail and pick where you want to go back to as well.

image

Until the next tip…





X-XMLA: vii. Partition Management with XMLA

10 10 2012

In the 7th article of this series, I will discuss how to manage your partitions with XMLA.   We will be focused on the partition structures, as opposed to partition processing, which was covered in the previous post.

Creating or altering a partition

I have already discussed object creation with XMLA in part 3 of this series.  Remember that when altering the partition, you need to include all of the partition definition you wish to keep, as well as what you are changing, in the XMLA.  If you supply only a subset of the definition, SSAS will assume you want to remove the missing parts.

Adding a partition to an existing measure group

One area where a partition is different from other objects is that you can add a partition to an existing measure group.  This is often done to improve processing performance or to match the underlying data architecture (especially if table partitioning is already in place for the source tables).  Often the partitions are based on date and correspond to the date the data is loaded.  For instance, you may have separate partitions for each year.  So at the beginning of the next year you need to add a partition.  In this case, you create a new partition with a specific data source definition for that partition.

If the measure group only has one partition that is based on a single table, you may need to change the source definition so you can properly add a new partition.  There are two common ways to manage the data sources to support partitions.  The first is to create views that map to the partition structure in the cube and then the data source for each partition can point directly to a particular table or view.  The second way is to specify a query definition that usually points at one table and filters the data based on one or more keys.

In either case, the XMLA is constructed as a Create command with the ParentObject specifying the measure group that the partition will be added to.  The Object definition contains the ID, Name, Source and other properties that make up the partition.  The example below shows the definition for adding a partition that covers the month of January, using a filter in the QueryDefinition for the Source.

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ParentObject>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    </ParentObject>
    <ObjectDefinition>
        <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
            <ID>Internet_Sales_Jan2011</ID>
            <Name>Internet_Sales_Jan2011</Name>
            <Source xsi:type="QueryBinding">
                <DataSourceID>Adventure Works DW</DataSourceID>
                <QueryDefinition>SELECT * 
                        FROM dbo.FactInternetSales 
                        WHERE OrderDateKey BETWEEN 20110101 AND 20110131</QueryDefinition>
            </Source>
            <StorageMode>Molap</StorageMode>
            <ProcessingMode>Regular</ProcessingMode>
            <ProactiveCaching>
                <SilenceInterval>-PT1S</SilenceInterval>
                <Latency>-PT1S</Latency>
                <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
                <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
                <AggregationStorage>MolapOnly</AggregationStorage>
                <Source xsi:type="ProactiveCachingInheritedBinding">
                    <NotificationTechnique>Server</NotificationTechnique>
                </Source>
            </ProactiveCaching>
            <EstimatedRows>60398</EstimatedRows>
            <AggregationDesignID>Internet Sales</AggregationDesignID>
        </Partition>
    </ObjectDefinition>
</Create>

Merging partitions

In some cases, you will find the need to merge partitions.  In our example above we added a partition for the month of January.  However, as your partition count grows, there is benefit to merging partitions into historical or archive partitions.  Or you may decide that merging will help with maintenance or processing.  In either case, the logic is fairly simple: you have one or more source partitions being merged into a target partition.  The XMLA for this is pretty straightforward.

There are a couple of nuances to be aware of, including the fact that the structure and the aggregation designs need to be the same to support merging.  If you plan to merge, you should take this into account when building partitions and applying aggregation designs.  Furthermore, if you use partition slices, you will likely need to modify the target’s slice after the merge to account for the new data.

Here is the code required to merge two source partitions into a target partition.  The result will be one partition.

<MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Sources>
    <Source>
      <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales_Jan2011</PartitionID>
    </Source>
    <Source>
        <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
        <PartitionID>Internet_Sales_Feb2011</PartitionID>
    </Source>
  </Sources>
  <Target>
    <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
    <CubeID>Adventure Works DW</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales</PartitionID>
  </Target>
</MergePartitions>

As you can see in the example above, all that you need to specify is the full address for each partition and identify which are the sources and which one is the target.

Setting the Slice Property

Before digging into the XMLA, a couple of notes on the data slice in cubes.  First, slices are not required for MOLAP partitions; however, they can help with query performance.  While in current versions of SSAS (2005 and later) the system will determine the slice based on the data, it is not always accurate.  Keep in mind that a slice does not affect processing or the source data for a partition.  That is determined by the Source attribute and related definition.

Refer to the following blog posts by Jesse Orosz and Mosha Pasumansky for more information on the value of assigning slices, which is beyond the scope of this post:

http://jesseorosz.wordpress.com/?s=slices

http://sqlblog.com/blogs/mosha/archive/2008/10/14/get-most-out-of-partition-slices.aspx

Assuming you need to set the slice, or need to change it due to a merge, here is the XMLA to alter a partition’s slice property.

<Alter AllowCreate="true" ObjectExpansion="ObjectProperties" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
    <CubeID>Adventure Works DW</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_Feb2011</PartitionID>
  </Object>
  <ObjectDefinition>
    <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <ID>Internet_Sales_Feb2011</ID>
      <Name>Internet_Sales_Feb2011</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
      <QueryDefinition>SELECT * 
                        FROM dbo.FactInternetSales 
                        WHERE OrderDateKey BETWEEN 20110201 AND 20110228</QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
      <ProcessingMode>Regular</ProcessingMode>
      <Slice>{[Date].[Calendar].[Month].&amp;[2011]&amp;[2]}</Slice>
      <ProactiveCaching>
        <SilenceInterval>-PT1S</SilenceInterval>
        <Latency>-PT1S</Latency>
        <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
        <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
        <AggregationStorage>MolapOnly</AggregationStorage>
        <Source xsi:type="ProactiveCachingInheritedBinding">
          <NotificationTechnique>Server</NotificationTechnique>
        </Source>
      </ProactiveCaching>
      <EstimatedRows>60398</EstimatedRows>
      <AggregationDesignID>Internet Sales</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Alter>

If we were to merge the January and February partitions we created previously, we could change the slice definition to include both months.

<Slice>{[Date].[Calendar].[Month].&amp;[2011]&amp;[1],[Date].[Calendar].[Month].&amp;[2011]&amp;[2]}</Slice>

Setting the Storage Location

Partitions also support changing the storage location on a per-partition basis, letting you further capitalize on hardware investments such as SSDs and low-cost disks.  This is set in the XMLA using the StorageLocation element with a valid path.

<Alter AllowCreate="true" ObjectExpansion="ObjectProperties" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
    <CubeID>Adventure Works DW</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_Jan2011</PartitionID>
  </Object>
  <ObjectDefinition>
    <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <ID>Internet_Sales_Jan2011</ID>
      <Name>Internet_Sales_Jan2011</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>SELECT * 
                        FROM dbo.FactInternetSales 
                        WHERE OrderDateKey BETWEEN 20110101 AND 20110131</QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
      <ProcessingMode>Regular</ProcessingMode>
      <StorageLocation>C:\Data\SSASOptimized</StorageLocation>
      <ProactiveCaching>
        <SilenceInterval>-PT1S</SilenceInterval>
        <Latency>-PT1S</Latency>
        <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
        <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
        <AggregationStorage>MolapOnly</AggregationStorage>
        <Source xsi:type="ProactiveCachingInheritedBinding">
          <NotificationTechnique>Server</NotificationTechnique>
        </Source>
      </ProactiveCaching>
      <EstimatedRows>60398</EstimatedRows>
      <AggregationDesignID>Internet Sales</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Alter>

As you can see, many partition management operations can be performed with XMLA.  As you work out a management strategy, keep in mind that these operations can be scripted for automated execution when managing your Analysis Services solution.
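For instance, a monthly maintenance job might merge the latest month into the consolidated partition and then rebuild its indexes.  Here is a hedged sketch of such a batch (IDs reuse the examples above; whether the post-merge ProcessIndex step is needed depends on your aggregation design):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Step 1: merge the monthly partition into the consolidated partition -->
  <MergePartitions>
    <Sources>
      <Source>
        <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
        <PartitionID>Internet_Sales_Jan2011</PartitionID>
      </Source>
    </Sources>
    <Target>
      <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales</PartitionID>
    </Target>
  </MergePartitions>
  <!-- Step 2: rebuild indexes and aggregations on the merged partition -->
  <Process>
    <Object>
      <DatabaseID>Adventure Works DW 2008R2 SE2</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales</PartitionID>
    </Object>
    <Type>ProcessIndex</Type>
  </Process>
</Batch>
```

A script like this can be scheduled through SQL Server Agent or any job scheduler that can submit XMLA to the server.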





X-XMLA: vi. Processing and Out-of-Line Bindings in XMLA

19 09 2012

In this post, I will progressively go through processing a partition from full process, to incremental process, and finally to incremental process with out-of-line bindings.  This will provide you with additional samples for examining the results of partition processing with XMLA.

Note: these examples use the Adventure Works DW 2008R2 SE sample database available on CodePlex (http://msftdbprodsamples.codeplex.com/).

Processing with XMLA

Processing is a command that can be executed in a batch and in parallel with similar tasks.  This is particularly useful when processing multiple objects that are not dependent on each other, such as dimensions.  The basic structure is as follows:

<Batch>
  <Parallel>
    <Process>
      <Object>
        ~object definition~
      </Object>
      <Type>~processing type~</Type>
      <ErrorConfiguration>~optional error handling~</ErrorConfiguration>
      <WriteBackTableCreation>~writeback option~</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

Processing Type Options

This is a list of the most commonly used types when processing SSAS objects.

  • ProcessDefault:  This will determine the best option based on the current state of the object.  Be aware that if SSAS determines a full process is required, it will perform one. I would not recommend this process type in most production situations.
  • ProcessFull: This option completely rebuilds the dimension, partition, cube, or even database by removing existing data and reloading the data with aggregations and indexes.  In the case of cubes, measure groups, and partitions, this is the equivalent of ProcessData + ProcessIndex.
  • ProcessClear: This option drops the data and related structures, such as indexes and aggregations, from the object.  Once this runs, no data is available in the object.
  • ProcessData:  This option drops the data and reloads it, but does not rebuild the aggregations or indexes. It works with cubes, measure groups, partitions, and dimensions.
  • ProcessAdd: This option adds new data to a partition or dimension.  It is not available as an option when processing dimensions from SQL Server Management Studio, but is available in XMLA scripts.
  • ProcessUpdate:  This option rereads the data and updates the dimension attributes.  It is only available for dimensions and will drop flexible aggregations and indexes on related partitions.
  • ProcessIndex: This option rebuilds indexes and aggregations for partitions.  It requires that the partition has already been loaded with data (ProcessData).

Of these options, the ones I have used most when working with XMLA are ProcessFull, ProcessData, ProcessAdd, and ProcessIndex.  I will use each in the examples below and describe cases where they have applied in my projects over the years.

The Process Command

The Process command in XMLA lets you define the object to be processed, the type of processing to perform, and how to handle any writeback partitions.  (NOTE: the ErrorConfiguration element is optional.) Overall, the structure is fairly simple.  Here is a simple example of XMLA that will fully process the Fact Internet Sales measure group.

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

The syntax is identical for processing dimensions.  The <Object> element identifies the object to be processed, starting with the database.  Next, the <Type> element specifies the processing type using the options from the list above.  Finally, the <WriteBackTableCreation> element instructs Analysis Services how to handle a writeback partition that has not been fully processed.  The default is UseExisting, which uses the existing table or creates a new table if needed.  This is only relevant if your cube supports writeback.

Processing Affected Objects

ProcessAffectedObjects is a setting on the processing command.  By default it is set to True, which means that any objects affected by the processing will be processed as well.  The best example: if you process a dimension and it ends up being fully processed, all related measure groups will be fully processed too, keeping the data in a consistent, usable state.  However, if you are tightly managing your processing jobs, you may need to set this to False to avoid unexpected and blocking processing jobs.
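As a sketch of how this can look when scripted (hedged: tooling varies on where this setting is emitted; SSMS typically writes it as an attribute of the outer Batch element, and the Dim Date dimension ID here is a hypothetical name):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
       ProcessAffectedObjects="false">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <DimensionID>Dim Date</DimensionID>
      </Object>
      <!-- ProcessUpdate on a dimension; with ProcessAffectedObjects="false",
           related measure groups are NOT reprocessed automatically -->
      <Type>ProcessUpdate</Type>
    </Process>
  </Parallel>
</Batch>
```

If you disable it, plan your job so the affected partitions are reprocessed explicitly in a later step.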

Using Out-of-Line Binding with Incremental Processing

A great feature of XMLA is the ability to dynamically bind data and even connections when processing SSAS objects.  This is called “Out-of-Line” binding (http://msdn.microsoft.com/en-us/library/ms128523).   One of the core concepts behind out-of-line bindings is that you can specify the connection and the data source query or table at run time.  This is particularly useful when you want to do incremental processing.

This first example illustrates how to use a query to change the data being added to the partition.

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
        <PartitionID>Internet_Sales</PartitionID>
      </Object>
      <Type>ProcessAdd</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
  <Bindings>
    <Binding>
      <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales</PartitionID>
      <Source xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300" xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>Select 1 from SomeTable</QueryDefinition>
      </Source>
    </Binding>
  </Bindings>
</Batch>

The second example shows the syntax for changing a table or view in the bindings.

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
      <Object>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
        <PartitionID>Internet_Sales</PartitionID>
      </Object>
      <Type>ProcessAdd</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
  <Bindings>
    <Binding>
      <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
      <CubeID>Adventure Works DW</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales</PartitionID>
      <Source xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300" xsi:type="TableBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <DbTableName>Fact_InternetSales</DbTableName>
      </Source>
    </Binding>
  </Bindings>
</Batch>

In the end, by using the Bindings definitions you can choose what data is processed into the SSAS object.  Here are some ways I have used this technique to support my projects.

  • Changed the data source used with the object.  I have used this to point to a different data source to test whether it works with the cube without overwriting the existing data source in the cube definition.
  • Changed the table to a view for processing.  Because I use XMLA when scheduling most of my SSAS processing jobs, I can dynamically set the table or view as needed.
  • Changed the query definition to limit the data to new rows only.  I have used this the most.  I have changed the query definition to reflect the latest data for both dimensions and partitions.  More importantly, I was able to leverage this in environments where I did not have control over the data source itself, specifically Oracle data warehouses.
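The first scenario in the list above, swapping the data source at processing time, can be sketched with an out-of-line DataSource definition inside the Process command.  This is a hedged sketch rather than SSMS-generated script; TestServer and the _Test catalog are hypothetical names:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
        <CubeID>Adventure Works DW</CubeID>
        <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
        <PartitionID>Internet_Sales</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <!-- Out-of-line data source: overrides the connection for this
           processing run only; the stored cube definition is unchanged -->
      <DataSource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:type="RelationalDataSource">
        <ID>Adventure Works DW</ID>
        <Name>Adventure Works DW</Name>
        <ConnectionString>Provider=SQLNCLI10.1;Data Source=TestServer;Integrated Security=SSPI;Initial Catalog=AdventureWorksDW2008R2_Test</ConnectionString>
      </DataSource>
    </Process>
  </Parallel>
</Batch>
```

Because the override lives only in the executed command, the design-time connection in the cube definition is never touched.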

Out-of-line bindings are very powerful when controlling your SSAS process options and allow you to maximize the efficiency of your jobs.





X-XMLA: v. Creating XMLA Script from Visual Studio and Deploy the Database

26 06 2012


This will be the easiest entry in this series.  This is a reblog of an article I wrote last year.

Simple batch script to generate XMLA and deploy SSAS DB

As I was working through some issues with my cube design, I realized that I wanted to deploy my SSAS databases using XMLA in SSMS. This can be easily scripted from SSMS, but I had not deployed the database yet. As a result, I created a batch file to build the XMLA used to deploy a database. This script can be used for automated builds or even automated deployments if that fits into your model. I don’t write a lot of batch files so you may want to update the syntax as needed.  Read more…





X-XMLA: iv. Deploying Databases with XMLA

19 06 2012


The fourth segment in this series focuses on using XMLA to deploy databases.  When you create SQL Server Analysis Services databases in Visual Studio, you often use a target development server to validate the design and data.  This post focuses on moving that development database to other environments and the steps required to make this work.

Generating the Create Database Script

The first step is the easiest.  At this point, we are assuming that you have created a development database and have deployed it.  Generating a create script at this point is fairly straightforward.  Using the database shortcut menu, select Script Database as … CREATE To … New Query Editor Window as shown below.

image

This will result in a CREATE script with all the metadata from the development database.
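Abbreviated, the generated script has roughly the following shape.  This is a sketch from memory with the namespace attributes omitted, so verify the element order against your own generated script:

```xml
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ObjectDefinition>
    <Database>
      <ID>Adventure Works DW 2008R2 SE</ID>
      <Name>Adventure Works DW 2008R2 SE</Name>
      <!-- full metadata follows, including the two areas discussed below -->
      <DataSources> <!-- ... --> </DataSources>
      <DataSourceViews> <!-- ... --> </DataSourceViews>
      <Dimensions> <!-- ... --> </Dimensions>
      <Cubes> <!-- ... --> </Cubes>
      <Roles> <!-- ... --> </Roles>
    </Database>
  </ObjectDefinition>
</Create>
```

The DataSources and Roles sections are the ones that carry development-specific values and need attention during deployment.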

Changes Required during Initial Deployment Process

Before we move much further, a key point from the first section is that all the metadata from the development database is in this script.  There are two areas that need to be updated as this script is deployed to different environments.

Data Sources.  The data sources that are in the script currently point to development data.  In most cases, these will need to be updated to reflect the data sources used in the target environment.  For instance, you normally do not use the production data servers when developing cubes and unit testing the solution due to the performance impact on those servers.  Furthermore, many database administrators hold the connection information that is required to connect to those sources and will not give that information to developers.

Role Membership.  While we will create the roles in the development environment, we usually use development users and groups to test the security.  Once deployed, this membership will need to be updated to the correct users and groups for the target environment.

The next two sections describe the two methods for handling these changes.  You may find a blended approach works as well, depending on the target environment and your policies.

Applying Required Changes Post-deployment

First, you can apply these changes after the database has been deployed.  To do this, simply run the XMLA script while connected to the server in the environment you are targeting.

After running the script, you can use SQL Server Management Studio to update the data sources and role members.  As shown below, to update the data sources you will need to change each data source's connection string property to match the credentials and database used in that environment.

image

You will go through a similar process with the role membership.  In this case, open the Properties window for each Role and add or remove users and groups to match the current environment.

image

Once this is complete, you will be able to process the cube with the current data and confirm the correct users are set up.  One note: I would not process the database until the users have been set up; otherwise you will have a database with data you did not intend to make available to developers.

Applying Required Changes Pre-deployment

The other method is to update the script prior to deployment.  In this case you will modify the generated script, which will not require additional changes once applied.

To update the data sources you need to search for the <DataSources> element and manually update the connection string for each data source.  Here is a sample of the XMLA you will be updating.

<DataSources>
  <DataSource xsi:type="RelationalDataSource">
    <ID>Adventure Works DW</ID>
    <Name>Adventure Works DW</Name>
    <ConnectionString>Provider=SQLNCLI10.1;Data Source=ServerName;Persist Security Info=False;Integrated Security=SSPI;Initial Catalog=AdventureWorksDW2008R2</ConnectionString>
    <ImpersonationInfo>
      <ImpersonationMode>ImpersonateServiceAccount</ImpersonationMode>
    </ImpersonationInfo>
    <Timeout>PT0S</Timeout>
    <DataSourcePermissions>
      <DataSourcePermission>
        <ID>DataSourcePermission</ID>
        <Name>DataSourcePermission</Name>
        <RoleID>Role</RoleID>
      </DataSourcePermission>
    </DataSourcePermissions>
  </DataSource>
</DataSources>

Next, you need to search for the <Roles> element.  You will be updating the <Member> elements for each role you need to update.  The primary issue with this edit is that you may need to know the Active Directory SID for each user or group you need to add as a member.  The XMLA sample below shows what you will be updating.

<Roles>
  <Role>
    <ID>Role</ID>
    <Name>Role</Name>
    <Members>
      <Member>
        <Name>SampleUser</Name>
        <Sid>S-0-0-0-0000000000-0000000000-0000000000-0000</Sid>
      </Member>
    </Members>
  </Role>
</Roles>

Check out the following blog entry from Derek Newton ( T | B ) for using PowerShell to retrieve an SID from AD:  http://dereknewton.com/2010/11/finding-an-active-directory-users-sid-using-powershell/

Altering the Database

Usually you will not update an entire database except during the early development cycle, although if you make enough changes you may find it beneficial to update the entire database as well.  The data source and role member changes have been described above.  However, be aware that when altering a database, only the metadata in the script will be applied.  The biggest area of concern is partitions and aggregations added dynamically in production that have not been applied to the development environment.  If you execute the Alter script without taking this into account, you will lose all of that functionality.

As your database matures, you will likely find that targeted changes are easier to manage and carry much less risk.  If you do need to update the entire database, be sure to add the partitions, aggregations, and any other changes made in production to the development database so your script contains the correct items.  I will discuss partition management in an upcoming post in this series, with some other ideas there that can help.
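As an illustration of a targeted change, here is a hedged sketch of altering only a data source's connection string rather than the entire database (ProdServer is a placeholder for your target environment's server):

```xml
<Alter ObjectExpansion="ObjectProperties" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW 2008R2 SE</DatabaseID>
    <DataSourceID>Adventure Works DW</DataSourceID>
  </Object>
  <ObjectDefinition>
    <DataSource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="RelationalDataSource">
      <ID>Adventure Works DW</ID>
      <Name>Adventure Works DW</Name>
      <!-- only the connection string changes for the target environment -->
      <ConnectionString>Provider=SQLNCLI10.1;Data Source=ProdServer;Integrated Security=SSPI;Initial Catalog=AdventureWorksDW2008R2</ConnectionString>
      <ImpersonationInfo>
        <ImpersonationMode>ImpersonateServiceAccount</ImpersonationMode>
      </ImpersonationInfo>
    </DataSource>
  </ObjectDefinition>
</Alter>
```

A scoped Alter like this leaves the rest of the database metadata, including any production-only partitions and aggregations, untouched.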







