Setting Up Tabular Models on Windows Azure

12 03 2014

In my last post, I discussed how to set up Oracle in Windows Azure. During a customer call, there were questions about using SQL Server Analysis Services (SSAS) tabular models with Azure. This post walks through setting up an Azure VM and deploying a tabular model to that VM.

If you do not have a Windows Azure account, you can set up a trial with your Microsoft or Live account. Check out http://www.windowsazure.com for details on how to “try it free.”

Setting Up the VM in Azure

From the Management Portal on your Azure account, select Virtual Machines on the left, then Add at the bottom. On the next screen, choose to create your VM from the gallery. You should see the Choose an Image option as seen below. As you can see, I have the SQL Server 2012 SP1 Enterprise image highlighted. You will need to use the Enterprise edition, as Tabular does not run on Standard. In this case, the Data Warehousing image is NOT the Business Intelligence Edition of SQL Server.

[Screenshot: Choose an Image gallery with the SQL Server 2012 SP1 Enterprise image highlighted]

You can also choose to create a “blank” VM and load up SQL Server on your own. I chose to use the image to speed up the process – which it did substantially.

After selecting the image, the next few steps guide you through setting up the VM. For the most part, the defaults will work fine. Be aware that once the VM is turned on, you will be charged while it is running. It is up to you to make sure you understand the costs, even if you are using the free trial.

During the setup steps, you will create the VM and its related cloud service. Remember that the account you create is the admin account for the VM, and you will need those credentials to Remote Desktop into it. The last setup page covers Endpoints. Leave the defaults; we will add an endpoint for our tabular model later.

At this point, it will take a few minutes to set up your new VM. Once it is ready, open a Remote Desktop session into it. If you look at the services or at the SQL Server Configuration Manager, you will notice that everything except a tabular instance has been set up for you. As a result, I would not recommend using this gallery image for a production deployment. You should look at creating your own template if you want a more locked-down and refined setup.

Setting Up the Tabular Instance in Azure

As noted before, the tabular instance is not set up. The installation media is already on the server, so you can run that to install your tabular instance. I won’t walk through the install process, but this was a great find because it meant I did not have to copy media to my VM.

Making the Tabular Instance Available

This section covers the tedious tasks required to make your tabular instance available for querying outside of the VM. There are three basic steps to getting your instance “online”: setting the port number in SSAS, updating the firewall, and adding endpoints. I will walk you through the steps I used to get this done followed by some references that helped me get here.

Setting the Port Number in SSAS

By default, SSAS instances, both multidimensional and tabular, use dynamic ports. In order to allow connections through the firewall and endpoints, the port number needs to be fixed. I used guidance from TechNet and did the following steps to set the port.

    1. Opened Task Manager to get the PID for MSOLAP$<<instance name>>.
    2. Ran netstat -ao -p TCP to get a list of ports used by current processes. Once I had identified my port number, I also noted the server IP address, which is required in a later step.
    3. I chose to confirm that I had the correct port by connecting to the instance using the IP address and port number.
    4. Next, we have to go old school and modify the msmdsrv.ini file. The typical install path for this file is C:\Program Files\Microsoft SQL Server\<<instance name>>\OLAP\Config\msmdsrv.ini.
    5. Open the file in Notepad and find the <Port>0</Port> tag.
    6. Change the value to the port number identified above. (Technically we could have used any open port number. I chose this one because I was sure it was available.)
    7. Save the changes and restart the service.
    8. Once again, confirm you can connect to the server with SSMS using the IP address and port number.

Now you have set up the SSAS instance to use a fixed port number.
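For reference, after the edit the tag in msmdsrv.ini should look something like the line below. The value 2383 is only an example (it happens to be the conventional fixed port for a default SSAS instance); use whichever open port you identified above.

<Port>2383</Port>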

Updating the Firewall

Now that we have a port number, we can create a firewall rule. We access the firewall rules from Server Manager. In the Windows Firewall with Advanced Security console, we will create a new Inbound Rule.

[Screenshot: New Inbound Rule wizard in the Windows Firewall console]

  1. The rule type is Port.
  2. We apply the rule to TCP and specify the port we defined above.
  3. On the Action tab, we select Allow the Connection. (If you are planning to use this in a production environment, you will need to verify your connection requirements.)
  4. Next, we clear any profile we don’t want the rule to apply to.
  5. Finally, we give the rule a descriptive name.

Now we have the firewall rule in place to allow external connections for the tabular instance.
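If you would rather script this than click through the wizard, a PowerShell equivalent looks roughly like the following. This is a sketch that assumes a Windows Server 2012 based image (the NetSecurity cmdlets are not available on older versions, where netsh would be used instead), and 2383 is again just the example port:

New-NetFirewallRule -DisplayName "SSAS Tabular (TCP-In)" -Direction Inbound -Protocol TCP -LocalPort 2383 -Action Allow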

Adding Endpoints

The final step to making the instance available is to add an endpoint in Azure. In the Windows Azure portal, go to the Virtual Machines page again, select the VM, and open the ENDPOINTS tab. From here we create a new endpoint.

  1. We are creating a standalone endpoint.
  2. Next, we select the TCP protocol and add the port number to both the private and public port textboxes.
  3. Finally, we apply the changes.
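For those who prefer scripting, the classic Azure PowerShell module from this era can do the same thing. This is a hedged sketch; the cloud service name, VM name, and port are placeholders for your own values:

Get-AzureVM -ServiceName "MyCloudService" -Name "MyTabularVM" |
    Add-AzureEndpoint -Name "SSAS-Tabular" -Protocol tcp -LocalPort 2383 -PublicPort 2383 |
    Update-AzureVM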

We are now ready to test connectivity.

References

Setting up VM Endpoints

Configuring Windows Firewall

Configuring Windows Firewall with SSAS

Connecting to the Tabular Instance in Azure

To verify this works, I wanted to connect to the model with SSMS from my desktop. However, SSAS uses Windows authentication and I am not on the same domain. (My IT staff is likely still laughing about the idea of me joining my VM to our domain.)

Thankfully, Greg Galloway (blog) reminded me of how to set up runas to use different credentials to connect to SSAS. Here is the syntax I used to connect to the tabular instance on Azure using a command window:

runas /netonly /user:<<VM name>>\<<username>> "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"

This allowed me to open SSMS using those credentials. Next, I used the VM address and port to connect. You will see that the Windows credentials in the connection dialog are not the ones you entered on the command line. This is expected; the credentials from the command line are passed through to the VM.

Deploying a Tabular Model to Azure

I attempted three different ways to deploy my model to the Azure VM. Two worked, one did not.

Deploying from Visual Studio on My Desktop

My first attempt was to deploy from Visual Studio on my desktop. I set the deployment properties to point to the Azure instance using the same credentials I had used with SSMS. I also launched Visual Studio from the command line with runas, as I had done with SSMS.

It appeared I could connect, but I continually got a permissions error, which is shown below. After much frustration, I gave up and moved on to the next options.

[Screenshot: permissions error returned when deploying from Visual Studio]

Using XMLA to Deploy

This is the most straightforward way to deploy an SSAS database to a different server. I used SSMS to generate the Create Database XMLA statement. Because I had not deployed it locally, I needed to modify the XMLA statement to remove the user name and GUID from the database name and database ID (AdvWorksTab1_SteveH_<<Some GUID>>).
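Heavily trimmed, the edited statement looks something like the sketch below; the real scripted version contains the full model definition, and the database name here simply drops the workspace suffix:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ObjectDefinition>
    <Database>
      <ID>AdvWorksTab1</ID>
      <Name>AdvWorksTab1</Name>
      <!-- data sources, tables, roles, and the rest of the scripted definition -->
    </Database>
  </ObjectDefinition>
</Create>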

In a bit of irony, I could use the SSMS connection from my desktop (via runas) to deploy the database to the VM.

The reality is that this is an easy and acceptable way to deploy SSAS databases to production environments, whether in Azure or in your own datacenter.

Deploying from Visual Studio on the VM

The final method was deploying from Visual Studio on the VM itself. I used VisualStudio.com (TFS online) to transfer the source code from my desktop to the VM. I had to install the TFS client on the VM, but SSDT with the BI tools is already there.

  1. Installed the VS 2010 TFS Client: http://www.microsoft.com/en-us/download/details.aspx?id=329
  2. Then installed Visual Studio SP1  http://social.msdn.microsoft.com/Forums/vstudio/en-US/4e4851dc-eb29-4081-9484-d38a6efa07ee/unable-to-connect-to-tfs-online?forum=TFService
  3. Finally installed VS2010 Team Foundation Server Compatibility GDR (KB2662296) http://www.microsoft.com/en-us/download/details.aspx?id=29082

With those installed, Visual Studio could connect to TFS Online. I got the latest from my project and pointed the project at my tabular instance.

Be sure to check the impersonation settings.

Next, I deployed the project to the local tabular instance on the VM and it worked great. This might make sense for development, but I would not use this method in a production environment.

Some Closing Thoughts

I was amazed at how simple it was to create the VM and set up tabular in Azure. Knowing what I know now, I would be able to set up a usable instance fairly quickly and deploy a database using XMLA without much effort. That was very nice.

Doesn’t work with Office 365

I started this project to determine the connectivity capabilities with Office 365. Well, that does not work in my current configuration. I was able to create a workbook on my desktop using Excel and my Azure tabular model, and it works just as you would expect. However, when I deployed the workbook to Office 365, data refresh always failed.

[Screenshot: data refresh failure in Office 365]

My next thought was to use a data gateway, but those only work with the SQL Server database engine and Oracle, not SSAS. This is still a significant gap in making Power BI able to fully take advantage of a company’s existing BI assets.

Using Azure Active Directory

My next step would be to use Azure Active Directory services to try to get Windows Authentication to work throughout. But that is for a later project and post.





2013 – A Year In Review

2 01 2014

It is in our nature as humans to look back in order to understand where we have been.

Warning – some of this post contains stuff about my family… skip ahead if you only want the technical stuff.

Family Fun

This past year has been very interesting for me personally and professionally. My youngest, Mikayla, entered junior high, officially taking our family out of elementary schools. Mikayla joined me at the SQL Saturday event in Omaha. At the same time, my oldest, Kristyna, is now a senior at Burnsville Senior High School. My boys, Alex (a junior) and Andrew (a freshman), are both taller than me and staying active. Alex joined us at the Minnesota SQL Saturday and did a lot of volunteering. Andrew probably had the best event of all, as he joined me at SQL Saturday in Fargo, where he got to see Bill Gates in person. I am proud of all of them; they are great kids. This was also the year I celebrated 20 years with the woman I love, Sheila. Without her support, I would not have been able to get this far in my career. Yep, it has been a busy year personally. Soon there will be lots of college, marriage, and maybe even grandkids. Wow, I must be getting old.

Magenic and the Server Development Practice

2013 was my first full year as a Practice Lead at Magenic. I started out as the Practice Lead for our Business Intelligence and Data Practice. In August, my role expanded to include SharePoint, BizTalk, and TFS. This allows us to focus on server technologies at Magenic. Along the way, I have had to learn a lot about VMs (still a work in progress). I really enjoy working with the pros we have across the company. We have some very talented BI, SharePoint, and BizTalk consultants, including a few virtual TSPs in SQL Server, Business Intelligence, and BizTalk.

During this past year, I traveled around the country to consult, to speak, and to meet customers. I have had the privilege of speaking at multiple SQL Saturdays, Modern Apps Live, SQL Live, and Code Mastery events. It has been fun. I almost made it to all of our offices, including the locations we opened this year. I made it to Minneapolis, Chicago, Atlanta, Charlotte, Boston, New York City, and San Francisco. I still need to get out to Los Angeles and Manila.


While it has been hard at times, the travel experience has been good overall. I try to keep my speaking engagements up to date; maybe I will see some of you next year.

This year I also authored outside of the blog. Chuck Whittemore (The Insight Analyst) and I coauthored a white paper, Leading with Excel: The Changing World of Business Intelligence. This was a fun project in which we bring together Microsoft Excel and Microsoft BI in a real-world way. We continue to work this strategy successfully with our customers, and it was the impetus for my Excel BI Tips blog post series. I also had the privilege of coauthoring a book that is just being released: SQL Server Analysis Services 2012 Cube Development Cookbook from Packt Publishing. This is the third book I have worked on, and it has been a while since I was last published, so it was a good experience for me. I still don’t know if I would take an entire book project on myself, but maybe someday.

This year wraps up with me becoming a virtual TSP with Microsoft to further support their efforts with SQL Server and Business Intelligence in the marketplace.

One other thing that has been interesting for me is that with the release of Power Pivot and SQL Server Analysis Services Tabular Model, I am seeing a huge shift in how I work with and sell BI. I have always worked with cubes, but now I see the in-memory space as a more compelling and leading edge solution that will continue to change what my career will look like. While I had a lot of fun being a cube and MDX wizard, the ability to deliver results to business users in a timely fashion with great visualizations is actually more fun. The more things change …

Happy New Year!

I hope you and your family had much to look back and celebrate this year. I thank God for the blessings of a great company to work for and an awesome family to be with.





Are You Signed Up for 24 Hours of PASS–Business Analytics?

29 01 2013

If you have not signed up for the 24 Hours of PASS – Business Analytics, you should be. This is a great chance to hear 12 speakers (the sessions will be repeated in the following 12 hours). Topics vary from Big Data to strategy to collaboration. Most importantly, you can’t beat the price to hear speakers like Denny Lee, Peter Meyers, and Stacia Misner, to name a few.

I get the privilege of moderating two of the sessions: Session 8:  What Is Big Data? by Mark Whitehorn and Session 10: Visualizing Data with Power View by Sean Boon.

Finally, I heard Marc Reguera talk about how Microsoft Finance uses Power View at a different event.  If you want to see Power View put into practical use by a business user, I highly recommend you check out his session.  I think it is the final piece of the puzzle to join the technology with the business.

I hope you all take the opportunity to join us for this compelling and free preview of the PASS Business Analytics Conference in Chicago on April 10-12, 2013.





O, There’s the Data: Using OData in SSIS

23 07 2012


The Open Data Protocol (OData) is an open specification created by Microsoft to enable exposing data in a standard way from a variety of sources.  OData is natively supported in many of Microsoft’s products, including PowerPivot, Excel 2013, SQL Server 2012 Analysis Services Tabular Model, Windows Communication Foundation (WCF), and Entity Framework, to name a few.  Furthermore, Microsoft uses OData to expose data feeds from the Windows Azure Data Marketplace as well.

I pursued adding an OData source to SSIS as a result of Mark Souza’s presentation at the Minnesota SQL Server User Group in April 2012.  I posed a question about easier interaction with Oracle.  He mentioned that OData would be a good way to solve that issue.  This led me to put together a presentation which I delivered for PASSMN in July 2012 entitled O, There’s My Data: The Open Data Protocol.  At that presentation, I reviewed the “pain and agony” of a data pro putting together a data feed using Entity Framework in C# and WCF to expose it.  For the most part, with the help of .NET pros at Magenic including Dave Stienessen ( B ) and Sergey Barskiy ( B ), I was able to create my first entity model and expose it using WCF.  After that I worked on how to consume the feed without purchasing a 3rd party tool.  Here is the rest of the story.

Using ATOM as Shown in a Channel 9 Exercise

While looking for solutions that allowed me to implement an OData feed in an SSIS package, I came across a Hands-on Lab on Channel 9.  While the focus was on Reporting Services, I was able to use the steps to create a package that would read a feed and make the data available to the ETL process.  In a nutshell, this exercise involves three tasks – creating an ATOM file, processing the ATOM file, and loading the data using an HTTP connection manager pointed to the OData feed.  While building this package, run each step as you create it so the files it produces are available to the following steps.

[Screenshot: control flow with the Script Task, XML Task, and Data Flow Task]

Task 1 – Create ATOM File (Script Task)

In the Main method,  I used the following code which was copied and adapted from the Channel 9 exercise. (NOTE: The code for this script has an error.  The object declaration should be condensed to one line to work properly.)

public void Main()
{
    // Get the unmanaged connection
    object nativeObject = Dts.Connections["TestSvc"].AcquireConnection(null);

    // Create a new HTTP client connection
    HttpClientConnection connection = new HttpClientConnection(nativeObject);

    // Save the file from the connection manager to the local path specified
    string filename = "C:\\Source\\SSIS 2012 Projects\\ODataIntegration\\Departments.atom";
    connection.DownloadFile(filename, true);

    Dts.TaskResult = (int)ScriptResults.Success;
}

This task will create an ATOM file that will be used in the next step to retrieve the data.

Task 2 – Process ATOM File (XML Task)

This task uses the new ATOM file to create an XML file with the data.  It uses the XSLT operation type, pointing to the File Connection Manager created in the previous step as the source.  This results in another File Connection Manager to support the destination XML file with the data.  Finally, in the exercise, a second operand set of XSLT is used to clear unsupported headers.  Admittedly, I copied this straight from the example; it appears to copy each element and attribute using only its local name, which strips out the namespaces the downstream XML Source cannot handle.

Here is a look at the XML Task Editor so you can see the settings I used.

[Screenshot: XML Task Editor settings for the XSLT operation]

Here is the code from the Channel 9 exercise used in the SecondOperand property:

<?xml version="1.0" encoding="utf-8" ?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="no" />
  <xsl:template match="/|comment()|processing-instruction()">
    <xsl:copy>
      <xsl:apply-templates />
    </xsl:copy>
  </xsl:template>
  <xsl:template match="*">
    <xsl:element name="{local-name()}">
      <xsl:apply-templates select="@*|node()" />
    </xsl:element>
  </xsl:template>
  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="." />
    </xsl:attribute>
  </xsl:template>
</xsl:stylesheet>

Task 3 – Load Data (Data Flow Task)

The final task is a straightforward data load using the XML Source Component pointed at the XML file I created.  I then created a matching table in a database and used it as the destination.

Wrap Up on the ATOM Feed Option

This approach works with both SSIS 2008 and SSIS 2012.  I tested most of the work in 2012, but the code in the illustration supports 2008.  This option does require the package to write at least two files to the server.  In some enterprise environments that will not be acceptable, either because the data now rests on the server for a period of time or because the admins do not want files created on the server.

Using a Custom SSIS Source to Get the Data

NOTE: This is the preferred solution, but is not available in SSIS 2008 which uses the .NET 2.0 Framework.  This solution requires the .NET 3.5 Framework.

This version uses a custom SSIS source to connect to the OData feed and populate the data flow pipeline.  I did not find this option illustrated anywhere and used help from Dave and Sergey to put it together.  I spent many hours trying to solve this issue, and at the end of the day, it is fairly simple.  So, hopefully, this will save you some time as well.

This package has only one task in the control flow – a Data Flow Task, which contains the rest of the code.  In the data flow, I have a Script Component implemented as a source and a Row Count, with a data viewer on the pipeline to check results.

[Screenshot: data flow with the Script Component source and Row Count]

This was my first experience creating a custom source.  I used a post from SSIS Talk – SSIS Using a Script Component as a Source as a reference.  If you need help creating your first script source check it out.

Be sure to set your outputs prior to creating the script or you will not have them available to map to in the code.  You also need to add the HTTP Connection Manager you are using to point to your OData feed.

Add References, Using Statements, and Declarations

Once you have the basics set up, you need to add some references including the targeted data service and System.Data.Services.Client.  These are the key references for the code we are implementing.

[Screenshot: project references including the service reference and System.Data.Services.Client]

Once you have these references, you will need to add the following using statements to the Namespaces region.

using System.Data.Services.Client;
using SC_68e99fec2dce4cd794450383662f6ac7.TestSvc;

The SC_ reference is the internal name for your script component and will be different from mine, although it will likely be in the same format.

Next, you need to add the following declarations in the ScriptMain class as shown here.

public class ScriptMain : UserComponent
{
    // The URI of the OData service and the data service context from the service reference
    private Uri svcUri = new Uri("http://localhost/ODataSQL/TestSvc1.svc");
    private AdventureWorksEntities context;

The AdventureWorksEntities is from the service reference I created. You will need to know the context name for the service reference you are using.

The Working Code: Using DataServiceQuery

In the CreateNewOutputRows method in the SSIS script, you will add code that runs a DataServiceQuery and adds the data to the data flow pipeline. In my case, the output was called Departments, which created the DepartmentsBuffer reference you see in the code.  It has the output fields I defined for my source.  Here is the code I used to implement the solution.

public override void CreateNewOutputRows()
{
    // Create the data service context pointed at the OData feed
    context = new AdventureWorksEntities(svcUri);

    // Query the Departments feed exposed by the service
    DataServiceQuery<Department> dept = context.Departments;

    // Add a row to the output buffer for each department returned
    foreach (Department d in dept)
    {
        DepartmentsBuffer.AddRow();
        DepartmentsBuffer.DeptID = d.DepartmentID;
        DepartmentsBuffer.DeptName = d.Name;
        DepartmentsBuffer.GroupName = d.GroupName;
    }
}

This will query the service and return the rows. Alas, that is all it really took to solve this problem.  While this solution does not work in SSIS 2008, if you are planning to use a lot of OData, I would recommend using this as another reason to upgrade to SQL Server 2012.

SSIS Needs an OData Source Component

What I found interesting is that Microsoft does not have a native way to load OData feeds into the Data Flow Task in SSIS.  I have since created a Connect item to see if we can get this added.  Vote here if you agree.

Resources Used throughout the Process

Connecting to Windows Azure SQL Database Through WCF

Loading Data from an ATOM Data Feed into SQL Server

SSIS – Using a Script Component as a Source

DataServiceContext Class

Chris Woodruff – 31 Days of OData Blog Series

PASSMN Presentation – July 17, 2012

Consuming SharePoint Lists via OData and SSIS – Uses Linq





Great User Group Meeting Tuesday, Now on to Madison and SQL Saturday #118

19 04 2012

Minnesota SQL Server User Group Meeting Review (4/17/12)

First of all, I have to thank the awesome speakers who presented at our user group meeting on Tuesday night, April 17.  Mark Souza ( T | B ) gave a great question and answer session.  He was open about SQL Server and where it is heading.  One item that interested me is that SQL Azure was running the SQL Server 2012 platform in production by December 2011.  Microsoft is committed to releasing changes through the SQL Azure platform at a faster clip.  This will allow them to continually improve the quality of releases for the boxed version as well.  (Speaking of which, the boxed version is not going away any time soon.)

Mark also spoke about leaving data at its source and using tools to bring the data together for reporting and analysis.  He touted the Hadoop integration efforts as a key part of this strategy.  If you have followed my blog for a bit, you know that I do a lot of work with Oracle and MSBI.  As a result, I had to ask when a good set of Oracle drivers would be available.  While he did not have a good answer on that at the time, he did mention that OData is a data access solution for me to look at.  Look for a future blog post as I explore this protocol to ease some of my Oracle pain.

After Mark finished answering the variety of questions asked by the crowd, we handed the microphone to Itzik Ben-Gan ( T | B ).  As usual, he took an entire hour to help us understand that we did not know all of the possible ways to use the APPLY operator.  I also learned about the DBCC OPTIMIZER_WHATIF function, which allows you to change the system settings the optimizer uses to create a plan.  For instance, DBCC OPTIMIZER_WHATIF(1,16) will tell the optimizer to use 16 processors in its plan.

SQL Saturday #118 – Madison, WI (4/21/12)

Finally, I will be out at SQL Saturday #118 in Madison this weekend.  I have two presentations there – A Window Into Your Data: Using SQL Window Functions and Performance Monitoring and Tuning in SSAS.  Maybe I will see you there.





Oracle Tips for MSBI Devs #2: Keyword compare between Oracle and SQL Server

17 04 2012

While attending a recent Windows Azure Quick Start, the presenter, Mike Benkovich (@mbenko), happened to show a table on his site with a keyword comparison between SQL Server and Oracle, so I wanted to pass it along.

The Solution:

A list of comparable functions for SQL Server and Oracle developers.  Check out Mike’s site, http://www.benkotips.com/pages/TSQLvsOracleKeywords.aspx, for this helpful list.  While not exhaustive, it is a one-stop shop for many of the most commonly used functions in SQL Server and their equivalents in Oracle.

The Situation:

You are an experienced SQL Server developer who now needs to pull data from Oracle or work with Oracle SQL when loading Analysis Services or Integration Services.  Some things are the same, but you want to find the equivalent of PATINDEX in Oracle.





X on XMLA: iii. Basic DDL Functions in XMLA (Create, Alter, Delete)

10 04 2012


XMLA can be used to manage the structure of your multidimensional databases.  While many developers use Visual Studio (BIDS) to deploy changes, XMLA comes into play as systems move to production or need to be managed more carefully.

Some of the most common DDL-type uses for XMLA include partition management, deploying changes, and promoting between environments.  In all of these cases, objects within Analysis Services need to be created, altered, or deleted.

Before we dig into the details, I want to call out that the Execute method is used for these commands and that the full syntax is not required when using SQL Server Management Studio (SSMS).  (See X on XMLA: ii. Basic Structure of XMLA for more details.)  Furthermore, in SSMS you can generate Create, Alter, and Delete XMLA by right-clicking a deployed object and choosing the Script To option.  If you have questions about syntax, definitely use this feature to discover more about the syntax and the object you are working with.

The following sections use the Sales Channel dimension from the Adventure Works sample database to illustrate the command syntax.  (This sample database is available on CodePlex.)

Create

The Create command is used to create new objects in the database.  The two required child elements are ParentObject and ObjectDefinition.  The ParentObject specifies the list of objects that make up the parent.  In the illustration below, the parent object is the database, because dimensions in Analysis Services belong to the database.  However, if we were creating a partition, the parent would be structured as database, then cube, then measure group, because a partition belongs to a specific measure group.  The order of the parent objects matters, as they are read top to bottom in the XMLA.

[XMLA example: Create command for the Sales Channel dimension]
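A heavily trimmed sketch of that Create command is shown below. The database and dimension IDs are illustrative (your sample database IDs may differ), and a real scripted version would contain the full dimension definition inside ObjectDefinition.

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>Adventure Works DW</DatabaseID>
  </ParentObject>
  <ObjectDefinition>
    <Dimension>
      <ID>Sales Channel</ID>
      <Name>Sales Channel</Name>
      <!-- attributes, hierarchies, and the rest of the dimension definition -->
    </Dimension>
  </ObjectDefinition>
</Create>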

Alter

The Alter command is used to modify existing objects in the database.  The Alter command uses two child elements, Object and ObjectDefinition.  The Object element defines the object that is targeted.  Like the ParentObject, it is structured from top to bottom, with the last object in the list being the object targeted for alteration.

The ObjectDefinition specifies the changes to make to the object.  The changes MUST include all of the parts of the structure you want to keep the same.  I cannot emphasize enough how important it is to keep this in mind.  You cannot send a simple incremental change via XMLA; you must send the new version of the object in its entirety.  This is true at all levels, including the database.  Where this commonly creates issues is with the user objects at the database level and the partitions in the measure groups.  If you have specific users at the database level that differ between environments, which you should have, you need to update the Alter script for each environment.  If you are modifying a measure group that you have added partitions to, you must make sure those new partitions are in your script as well.

[XMLA example: Alter command for the Sales Channel dimension]
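Again as a trimmed sketch (IDs illustrative, and the full dimension definition omitted), an Alter for the same dimension looks roughly like this:

<Alter xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
    <DimensionID>Sales Channel</DimensionID>
  </Object>
  <ObjectDefinition>
    <Dimension>
      <ID>Sales Channel</ID>
      <Name>Sales Channel</Name>
      <!-- the complete dimension definition, including everything you want to keep unchanged -->
    </Dimension>
  </ObjectDefinition>
</Alter>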

Delete

The Delete command is used to remove existing objects from the database.  It has only one child element – Object.  This is the simplest of the commands covered here and only needs the proper object definition.  As with the ParentObject and Object elements above, you need the proper order to delete the correct object.  Use caution: the Object element contains everything from the database down to the targeted object, so if you stop at the database level you will delete your database.

[XMLA example: Delete command for the Sales Channel dimension]
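A sketch of the Delete command for the same dimension (IDs illustrative):

<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
    <DimensionID>Sales Channel</DimensionID>
  </Object>
</Delete>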

Results

The execution results are not always clear.  When the query is successful you will see the following:

[Screenshot: the empty XML result returned for a successful execution]
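A successful execution typically returns nothing more than an empty result element, something like:

<return xmlns="urn:schemas-microsoft-com:xml-analysis">
  <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty" />
</return>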

Not really conclusive.  Error messages are also returned in an XML format and often contain some relevant information as to why the script failed.  Be sure to read them closely, as sometimes multiple errors are returned and the root cause may not be plainly evident.





I’m Speaking at SQL Saturday #118 in Madison Wisconsin

4 04 2012

If you happen to be in the area on April 21, 2012, join us at Madison Area Technical College – Truax Campus, 3550 Anderson Str., Madison, WI, 53704.

For more event details check out the event site for more info: http://www.sqlsaturday.com/eventhome.aspx?eventid=199

Follow this event on Twitter with hashtag #sqlsat118, and get news on all our events with #sqlsaturday.





X on XMLA: ii. Basic Structure of XMLA

27 03 2012


The second post in the XMLA series focuses on the basic structure of XMLA.  At its heart, XMLA is XML, so it is usually not a comfortable working environment for most database developers or administrators, and Analysis Services developers and administrators are no different.  While a discussion of XML formatting is out of scope for this conversation, I will break down the XML structures used in XMLA so you can see how the basic construction of the language works.

XML Namespace

In XML, a namespace is required to help the tools understand how to interpret the content.  The namespace used is urn:schemas-microsoft-com:xml-analysis. Usage of the namespace is handled at the method level.

XMLA Methods

There are two methods supported in the XMLA standard – Discover and Execute.

Discover

The Discover method will return XML-formatted results.  The Discover statement is similar in function to a SELECT statement in SQL.  Discover can be used to query the data within the multidimensional database as well as server information.

Within the Discover method, the following properties are supported which are implemented as child elements:  RequestType, Properties, and Restrictions.  The RequestType property determines the schema rowset to be returned.  The Properties property is a collection of options that can be implemented which are determined by the RequestType.  Finally, the Restrictions property can be used to restrict the results returned by the Discover Statement.

Here is an example of getting the list of dimensions from the Adventure Works sample cube (available on CodePlex.com).

[XMLA example: Discover request for the dimension list]
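A sketch of that request is below. MDSCHEMA_DIMENSIONS is the schema rowset that lists dimensions; the catalog name is illustrative and should match your deployed sample database.

<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
  <RequestType>MDSCHEMA_DIMENSIONS</RequestType>
  <Restrictions>
    <RestrictionList>
      <CATALOG_NAME>Adventure Works DW</CATALOG_NAME>
    </RestrictionList>
  </Restrictions>
  <Properties>
    <PropertyList>
      <Catalog>Adventure Works DW</Catalog>
    </PropertyList>
  </Properties>
</Discover>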

Execute

The Execute method executes statements against the multidimensional database.  It can be used to process objects, deploy changes, and execute MDX statements.  It too returns its results in an XML format, including error messages.

The Execute method supports two properties which are implemented as child elements: Command and Properties.  The Command property contains the actual command that is being implemented.  Properties, like for Discover, enables Command specific properties which can be used to direct the command.

This method can be written two ways in Management Studio.  In the first example, the full syntax is called out explicitly.

[XMLA example: Execute method written out explicitly with the Execute and Command elements]
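A sketch of the explicit form, here wrapping a simple MDX statement (the cube and catalog names are illustrative):

<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Command>
    <Statement>
      SELECT [Measures].DefaultMember ON COLUMNS FROM [Adventure Works]
    </Statement>
  </Command>
  <Properties>
    <PropertyList>
      <Catalog>Adventure Works DW</Catalog>
    </PropertyList>
  </Properties>
</Execute>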

In the second example, only the command itself is written; Management Studio wraps it with the Execute and Command elements for you, making it simpler to work with the Execute method.

[XMLA example: the same command entered without the explicit Execute wrapper]
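For instance, a Process command can be run on its own in an XMLA query window, and SSMS supplies the Execute and Command wrapper behind the scenes. The database ID here is illustrative:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>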

SOAP

I do not discuss the SOAP implementation in this post, since most of us use Management Studio or similar tools that hide the SOAP envelope.  If you need more details on SOAP or on the XMLA standard, refer to http://xmla.org.





PASS for Today (and Tomorrow)

20 03 2012

I just wanted to plug two events that offer free SQL Server training.

First, Thomas LaRock (@sqlrockstar) and Jason Strate (@stratesql) are teaming up to bring you “Choose Your Own Adventure – Performance Tuning”. Join us at the Microsoft Technology Center in Edina, MN or online for this great adventure. This adventure starts at 3:00 PM CDT. More details can be found at http://minnesota.sqlpass.org.

Is that all? you ask. No. Starting tonight at 0:00 GMT (7:00 PM CDT) is 24 Hours of PASS. Check out the awesome roster of speakers, including the likes of Denny Lee from SQLCAT, Marco Russo, and Dejan Sarka. These sessions run through the night and will be closed captioned in 15 languages, making it a truly international event.  Oh, did I also mention I will be speaking as well?

Take advantage of these free training opportunities led by leaders within the SQL community.  We look forward to seeing you at either or both of these events.







