Azure Integration with Logic Apps

Today I’d like to talk about Azure Logic Apps and how they can help your organization with enterprise integration. Logic Apps is similar to Microsoft Flow, but it’s an Azure tool as opposed to an Office 365 tool. Logic Apps allows you to integrate a variety of applications, such as Salesforce, Office 365, SQL Server and Azure Event Hubs, and to create interactions that let those applications work with each other.

As an integration tool, a Logic App is typically triggered on a timer or by an action. For example, if an email from your boss comes in with “action required” in the subject line, you can have a task added automatically to your Planner. Under the hood, it moves data around using connectors that know how to connect to each app and what its APIs are; there’s no need for custom work on your end.
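To make the trigger idea concrete, here’s a minimal sketch in Python of kicking off a Logic App that starts with a “When a HTTP request is received” trigger. The callback URL and the payload fields are placeholders for illustration; you’d copy the real URL from the Logic Apps designer.

# Minimal sketch: fire a Logic App whose first step is an
# HTTP Request trigger. The callback URL is a placeholder;
# copy the real one from the Logic Apps designer.
import requests

CALLBACK_URL = "https://<region>.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<sas-params>"

# Hypothetical payload that later actions in the workflow could inspect.
payload = {"from": "boss@contoso.com", "subject": "action required: Q3 report"}

resp = requests.post(CALLBACK_URL, json=payload)
resp.raise_for_status()  # a 202 Accepted means the run was queued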

Because Logic Apps is so much like Flow, you could start in Flow and upgrade to Logic Apps when a scenario calls for it, and you get more benefits when you do. With Logic Apps you get the ability to do custom development as needed: you can build and integrate things yourself by going into the code view in Logic Apps, which is not possible in Flow.


You also gain the ability to do source control. A Logic App can be opened in Visual Studio, so you can use Team Foundation Server or another source control solution of your choice to manage the project and keep everything under source control.

Logic Apps also does a much better job of supporting business-to-business integrations, in scenarios where you want to trigger something based on what a business partner is doing. Plus, since it’s an Azure tool, it takes advantage of the Azure security model. If you’re actively using Flow and it becomes more complex, or you decide it should become an enterprise-managed resource, it makes sense to move over to Logic Apps to gain that control and leverage Azure security and auditing.


Azure Cognitive Services – Vision APIs

In this final post of my Azure Cognitive Services week of Azure Every Day, I’ll focus on the Vision APIs and services within this stack. The Vision Cognitive Services leverage image-processing algorithms to intelligently identify, caption and moderate your pictures. There are 3 APIs and 3 services available in the Vision stack:

1. Computer Vision API – Allows you to distill actionable information from images. You can break down a video or image and get insights into its content. This is being used in quality control scenarios in areas like manufacturing: in a process where people are putting eyes on an object to say whether it’s good or bad, using this API in a machine learning scoring model lets that process be automated (see the first sketch after this list).

2. Face API – With this we have the ability to detect, identify, analyze, organize and tag faces in photos. This can come into play in security; Uber, for example, uses it in its driver app to recognize drivers when they log in (see the second sketch after this list).

3. Emotion API – This one is pretty cool: you can personalize user experiences with emotion recognition. The API identifies a person’s mood or emotion; there are currently 9–10 emotions handled within it. This could be a great add-on for a customer service app, or maybe something to use with your kids!

4. Content Moderator Service – This uses machine learning to automate image, text and video moderation, so it can detect and remove offensive language in a video or in text, identify an offensive image, or even flag offensive actions in a video.

5. Video Indexer Service – This service lets you turn video into analytics. It breaks a video down so you can identify the people in it, gauge sentiment from how they’re speaking, and pick out key words. What’s cool is that it pinpoints where each of those occurs within the video stream, so if you want to revisit something discussed in a video, Video Indexer will tell you exactly where it came up.

6. Custom Vision Service – Easily customize your own state-of-the-art computer vision models for your unique use case.
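To give a feel for what calling these looks like, here’s a rough sketch of a Computer Vision API request using Python and the v2.0 analyze endpoint. The region, subscription key and image URL are placeholders, so treat this as illustrative rather than production code.

import requests

# Placeholders: substitute your own region, key and image URL.
ANALYZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
KEY = "<your-subscription-key>"

resp = requests.post(
    ANALYZE_URL,
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/part-on-conveyor.jpg"},
)
resp.raise_for_status()
analysis = resp.json()
print(analysis["description"]["captions"][0]["text"])  # auto-generated caption
print([tag["name"] for tag in analysis["tags"]])        # detected tags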
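And a similar sketch for the Face API’s detect call; again, the region, key and image URL are placeholders. (As an aside, emotion scores like the ones the Emotion API returns can also be requested here through returnFaceAttributes.)

import requests

FACE_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
KEY = "<your-subscription-key>"  # placeholder

resp = requests.post(
    FACE_URL,
    params={"returnFaceAttributes": "age,gender,emotion"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": "https://example.com/driver-photo.jpg"},
)
resp.raise_for_status()
for face in resp.json():  # one entry per detected face
    print(face["faceRectangle"], face["faceAttributes"]["emotion"])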

Azure Cognitive Services – Speech APIs

This week in my Azure Every Day posts, I’ve been focusing on Azure Cognitive Services and what these features can add to your applications and your organization. Today my focus is the Speech APIs, which you can use to convert spoken audio into text, use voice for verification, or add speaker recognition to your app.

There are 3 primary APIs available in the Speech stack:

1. Translator Speech API – With this you can add real-time speech translation to your app with a simple REST API call. If your app needs to operate around the world, it can translate from a native speaker’s language into the common language you’re using, or reverse that and translate your common language into the native speaker’s. Because translation is automatic and happens in real time, you can run it against a video or a live feed. It currently supports a wide variety of languages, and more are added regularly.


2. Bing Speech API – This gives you the ability to convert speech to text, and back again, to understand user intent (see the sketch after this list).

3. Speaker Recognition API – Still in preview, this can be trained to recognize voice patterns and use speech to identify and verify individual speakers. There’s a cool example of this online in which a group of presidents are speaking and the API recognizes which president it is. You can also train it on your own speech patterns so it can identify you, which you could use to add voice-based security to an application.
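To give a feel for the Bing Speech flow, here’s a rough sketch in Python: you first trade your subscription key for a short-lived token, then post WAV audio to the recognition endpoint. The key and audio file are placeholders, and the endpoint shown is the interactive-mode REST endpoint as I understand it, so double-check it against the current docs.

import requests

KEY = "<your-subscription-key>"  # placeholder

# Step 1: trade the subscription key for a short-lived access token.
token = requests.post(
    "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": KEY},
).text

# Step 2: post 16 kHz mono PCM WAV audio for transcription.
with open("sample.wav", "rb") as audio:
    resp = requests.post(
        "https://speech.platform.bing.com/speech/recognition/interactive/cognitiveservices/v1",
        params={"language": "en-US", "format": "simple"},
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "audio/wav; codec=audio/pcm; samplerate=16000",
        },
        data=audio,
    )
resp.raise_for_status()
print(resp.json()["DisplayText"])  # the recognized text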

There is also a Custom Speech Service (still in preview), which allows you to overcome speech recognition barriers like speaking style, background noise and vocabulary.

As more and more people interact by speaking into their mobile devices, these APIs that Microsoft has made available are a great way to make speech part of your advanced applications.

Azure Cognitive Services – Language APIs

In today’s post focusing on Azure Cognitive Services, I’ll look at the Language APIs that are available. These APIs allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want. Often you’ll work with the Speech and Language APIs together, but I’ll cover Language today and Speech in my next post.

Here’s what is available in the Microsoft Language stack:

1. Language Understanding Intelligent Service (LUIS) – The most commonly used API in this stack; with it you can teach your apps to understand commands from your users. UPS uses this to help customers track their packages. It’s also a great way to interact with Azure Bot Service: you can ask a bot questions and LUIS does the job of understanding what’s been asked from the language being used (see the sketch after this list).

2. Text Analytics API – This can easily evaluate sentiment and topics to understand what users want, using language and context to decipher, for instance, whether a person is happy or not. Say you run it against a customer survey: it will pick out words like “wonderful” or “terrible” and provide sentiment analysis for the organization. You can then take the interaction to another level and respond, whether it’s “thanks for the great review” or “sorry you had a bad experience, how can we help?” (also sketched below).

3. Bing Spell Check API – You can attach the Bing Spell Check API to your application and it will detect and correct spelling mistakes.


4. Translator Text API – This API translates typed text between languages, so you can work with third parties in other countries or provide customer service in a chat scenario when you’re interacting with someone in another language. A large number of languages are supported.

5. Web Language Model API – This helps you handle natural language queries by using the power of predictive language models trained on web-scale data, predicting the most likely next word or providing word completion.

6. Linguistic Analysis API – Currently in preview, but it brings in some very sophisticated linguistics technology to simplify complex language concepts and parse text.
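To make this concrete, here’s a rough sketch of querying a published LUIS app over REST with Python. The region, app ID, key and the “TrackPackage” intent are all placeholders for illustration.

import requests

# Placeholders: the region, app ID and key come from your published LUIS app.
LUIS_URL = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"

resp = requests.get(
    LUIS_URL,
    params={"subscription-key": "<your-key>", "q": "where is my package?"},
)
resp.raise_for_status()
result = resp.json()
print(result["topScoringIntent"]["intent"])  # e.g. a hypothetical "TrackPackage" intent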
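And a similar sketch for the Text Analytics sentiment call, which scores each document between 0 (negative) and 1 (positive); the region and key are placeholders.

import requests

SENTIMENT_URL = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"

docs = {"documents": [
    {"id": "1", "language": "en", "text": "The experience was wonderful!"},
    {"id": "2", "language": "en", "text": "Terrible service, never again."},
]}
resp = requests.post(
    SENTIMENT_URL,
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},  # placeholder key
    json=docs,
)
resp.raise_for_status()
for doc in resp.json()["documents"]:
    print(doc["id"], doc["score"])  # near 1.0 = positive, near 0.0 = negative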

Azure Cognitive Services – Knowledge APIs

This week’s Azure Every Day posts are focused on the different APIs available within Azure Cognitive Services. Today I’ll focus on the Knowledge APIs, which map complex information and data in order to solve tasks such as intelligent recommendations and semantic search.

First up is the QnA Maker API, which distills information into conversational, easy-to-navigate answers. You can use it to set up the kind of interaction where a user asks a specific question, the service runs it against known data, and it presents answers that make sense. You can add QnA Maker to any type of application. For example, it’s not a bad plan to create a bot that gives people what they need right away, offering an interactive chat up until the point it becomes necessary to speak to an actual person. A sketch of the call follows below.
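Here’s a rough sketch of what querying a published QnA Maker knowledge base looks like from Python. The host, knowledge base ID and endpoint key are placeholders you’d copy from the QnA Maker portal after publishing.

import requests

# Placeholders: copy host, knowledge base ID and endpoint key from the
# QnA Maker portal after you publish your knowledge base.
HOST = "https://<your-service>.azurewebsites.net/qnamaker"
KB_ID = "<knowledge-base-id>"
ENDPOINT_KEY = "<endpoint-key>"

resp = requests.post(
    HOST + "/knowledgebases/" + KB_ID + "/generateAnswer",
    headers={"Authorization": "EndpointKey " + ENDPOINT_KEY},
    json={"question": "What are your support hours?"},
)
resp.raise_for_status()
print(resp.json()["answers"][0]["answer"])  # top-ranked answer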


The other Knowledge API is the Custom Decision Service, a cloud-based, contextual decision-making API that sharpens with experience. It uses reinforcement learning in a new approach to personalizing content, responding to emerging trends in both your activity and the things you’re interested in.

This API actually makes decisions for you, using your patterns to present more personalized content, and it keeps learning from you as time goes on to do a better job of presenting personalized data.

