Is your company working on ways to become more environmentally friendly? Taking care of the environment is an important topic, within organizations and around the world. You may be asking, what does Azure have to do with the environment?
Well, one thing we can do is take a closer look at our data centers. With cloud computing becoming mainstream and being adopted by more and more businesses, moving to the cloud is a great way to do something for the environment. It also gives your company some good PR by ‘going green with blue’.
Learn more at Azure Data Week next week, see you there!
So, how does cloud computing help the environment? Let me give you 3 key reasons:
Shared economies of scale and shared resources
It’s incredible what can be done as a group within Azure. Sharing the resources required to run a data center can reduce your energy consumption, and Microsoft continually improves its data center engineering to drive that consumption down further.
Microsoft works to use and purchase renewable energy as often as possible to power their data centers and reduce their energy footprint. For example, they recently announced that their data center in Cheyenne, Wyoming is running entirely on wind power. You can use this as PR for your company by saying: we’re taking advantage of the investments Microsoft is making in environmentally friendly data center management and applying that to our own operations.
Microsoft engineering is also working internally to build out more efficient infrastructure. They have partnered with the Open Compute Project to build more efficient hardware, networks and buildings to house their data centers. You can be part of this by contributing to the Open Compute Project and helping to make it even better.
These 3 key ways – shared economies of scale, the ability to use renewable energy and the Open Compute Project – are great reasons to take your organization to the next level, while showing that you care about the environment, by adding Azure to your portfolio for data center work and management.
Today I’d like to talk about Azure Logic Apps and how they can help your organization with enterprise integration. Logic Apps is similar to Flow, but it’s an Azure tool as opposed to an Office 365 tool. Logic Apps lets you integrate a variety of applications, such as Salesforce, Office 365, SQL Server and Azure Event Hubs, and create interactions that allow these applications to work with each other.
As an integration tool, it’s typically triggered on a timer or by an action. For example, if an email from your boss arrives with “action required” in the subject line, you can have a task added automatically to your planner. Under the covers, Logic Apps moves data around using connectors that already know how to connect to each app and what its APIs are; no need for custom work on your end.
Because Logic Apps is similar to Flow, you could start in Flow and upgrade to Logic Apps when a scenario calls for it. You do get more benefits when you use Logic Apps, including the ability to do custom development as needed: you can get into the code page in Logic Apps and build the integration yourself, which is not possible in Flow.
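To give a feel for what that code page exposes, here is a minimal sketch of a Logic App workflow definition, written as a Python dictionary mirroring the JSON you would see in the code view. The trigger and action names, and the condition expression, are illustrative placeholders based on the email-to-planner example above, not a verbatim rendering of the workflow definition schema.

```python
import json

# A minimal, illustrative Logic App workflow definition. The structure
# loosely mirrors the JSON code view: a trigger that polls for new
# email, and an action gated on the subject line. Names are assumptions.
workflow = {
    "definition": {
        "triggers": {
            # Poll for new mail every 5 minutes (e.g. Office 365 connector)
            "When_a_new_email_arrives": {
                "type": "ApiConnection",
                "recurrence": {"frequency": "Minute", "interval": 5},
            }
        },
        "actions": {
            # Only create the planner task when the subject matches
            "Add_task_to_Planner": {
                "type": "ApiConnection",
                "condition": "@contains(triggerBody()?['Subject'], 'action required')",
            }
        },
    }
}

# The code view is just this JSON, which is what makes source control
# and hand-editing in Visual Studio possible.
print(json.dumps(workflow, indent=2))
```

Because the definition is plain JSON, it can be diffed, reviewed and versioned like any other source file, which is exactly the source control benefit discussed below.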
You also gain the ability to do source control. A Logic App can be opened in Visual Studio, so you can use Team Foundation Server or another source control solution of your choice to manage the project and its history.
Logic Apps also does a much better job of supporting business-to-business integrations, in scenarios where you want to trigger something based on what your business partner is doing. Plus, as an Azure tool, it takes advantage of the Azure security model. If you’re actively using Flow and it becomes more complex, or you decide to make it an enterprise-managed resource, it makes sense to move over to Logic Apps to gain that control and leverage Azure security and auditing.
In this final post of my Azure Cognitive Services week of Azure Every Day, I’ll focus on the Vision APIs and Services within this stack. Azure Vision Cognitive Services leverages image-processing algorithms to smartly identify, caption and moderate your pictures. There are 3 APIs and 3 Services available in the Vision stack:
1. Computer Vision API – Allows you to distill actionable information from images. You can break down a video or image and get insights into its content. This is being used in quality control scenarios in areas like manufacturing: in a process where people put eyes on an object to judge whether it’s good or bad, applying this API in a machine learning scoring method lets you automate that process.
2. Face API – With this we have the ability to detect, identify, analyze, organize and tag faces in photos. This can come into play in security, and Uber uses it in their business app to recognize their drivers when they log in.
3. Emotion API – This is pretty cool. You can personalize user experiences with emotion recognition. This API allows us to identify mood or emotion; there are currently 9-10 emotions handled within this API. This may be great to add on to a customer service app or maybe to use with your kids!
4. Content Moderator Service – This uses machine learning to automate image, text and video moderation, so it can detect and remove offensive language in a video, offensive text describing an image, or even offensive actions in a video.
5. Video Indexer Service – This service allows us to turn video into analytics. It breaks a video down, so you can identify people, work with speech sentiment based on how people are talking, and it will identify key words. What’s cool is it identifies where those occur within the stream of the video. Therefore, you can look back at something you’d like to see talked about in a video and Video Indexer will tell you in what places that occurred.
6. Custom Vision Service – Easily customize your own state-of-the-art computer vision models for your unique use case.
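As a sketch of what the Computer Vision API hands back, the snippet below parses a sample JSON response of the rough shape an image-analysis call returns when asked for a caption and tags. The exact field names and sample values here are assumptions for illustration, not a verbatim payload.

```python
import json

# A sample response of roughly the shape the Computer Vision API
# returns for a caption-and-tags request. Field names are illustrative.
sample_response = json.dumps({
    "description": {
        "captions": [{"text": "a person riding a bicycle", "confidence": 0.92}]
    },
    "tags": [
        {"name": "outdoor", "confidence": 0.99},
        {"name": "bicycle", "confidence": 0.95},
    ],
})

def summarize(vision_json: str) -> str:
    """Pull the best caption and the high-confidence tags out of a response."""
    data = json.loads(vision_json)
    caption = data["description"]["captions"][0]["text"]
    tags = [t["name"] for t in data["tags"] if t["confidence"] > 0.9]
    return f"{caption} (tags: {', '.join(tags)})"

print(summarize(sample_response))
# a person riding a bicycle (tags: outdoor, bicycle)
```

In a quality control scenario like the manufacturing one above, the same pattern applies: score each image, filter on confidence, and route low-confidence items to a human inspector.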
This week in my Azure Every Day posts, I’ve been focusing on Azure Cognitive Services and what these features can add to your applications and your organization. Today my focus is the Speech APIs, which you can use to convert spoken audio into text, use voice for verification, or add speaker recognition to your app.
There are 3 primary APIs available in the Speech stack:
1. Translator Speech API – With this you can add real-time speech translation with a simple REST API call. If you have an app that needs to operate around the world, you can translate from a native speaker’s language to the common language you’re using, or reverse that and translate your common language into the native speaker’s. Because the translation is automatic and happens in real time, you could run it against a video or live feed. It currently supports a variety of languages, and more are added regularly.
2. Bing Speech API – This gives you the ability to convert speech to text and back again to understand user intent.
3. Speaker Recognition API – Still in preview, this can be trained to recognize voice patterns and use speech to identify and verify individual speakers. There’s a cool example of this online featuring a group of presidents speaking, where the API recognizes which president is talking. You can also train it on your own speech patterns to identify you, which you could use to add voice-recognition security to an application.
There is also a Custom Speech Service (still in preview) which allows you to overcome speech recognition barriers like speaking style, background noise and vocabulary.
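As a rough sketch of how these speech services are called over REST, the snippet below assembles (but does not send) a speech-to-text request carrying a subscription key and a WAV payload. The endpoint URL, query parameters and header names are assumptions for illustration; consult the service documentation for the real values.

```python
import urllib.request

# Illustrative values only - substitute your own region, key and format.
SUBSCRIPTION_KEY = "<your-key>"
ENDPOINT = (
    "https://speech.platform.bing.com/speech/recognition/"
    "interactive/cognitiveservices/v1?language=en-US&format=simple"
)

def build_request(audio_bytes: bytes) -> urllib.request.Request:
    """Wrap raw WAV audio in a POST request with the expected headers."""
    return urllib.request.Request(
        ENDPOINT,
        data=audio_bytes,
        headers={
            # The key travels in a header on every call
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "audio/wav; codec=audio/pcm; samplerate=16000",
        },
        method="POST",
    )

# Placeholder audio payload; a real call would read a recorded WAV file
req = build_request(b"\x00" * 32)
print(req.get_method(), req.full_url)
```

The response would come back as JSON containing the recognized text, which your app can then hand to the Translator or Language APIs for further processing.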
As more people are interacting by speaking into their mobile devices and such, these APIs that Microsoft has made available are a great way to make speech part of your advanced applications.
In today’s post focusing on Azure Cognitive Services, I’ll look at the Language APIs that are available. These APIs allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn how to recognize what users want. You’ll often work with the Speech and Language APIs together, but I’ll cover Language today and Speech in my next post.
Here’s what is available in the Microsoft Language stack:
1. Language Understanding Intelligent Service (LUIS) – The most commonly used in this stack; with it you can teach your apps to understand commands from your users. UPS uses this to help customers track their packages. It’s also a great way to interact with Azure Bot Service: you can ask questions in a bot, and LUIS does the job of understanding what’s been asked from the language being used.
2. Text Analytics API – This can evaluate sentiment and topics to understand what users want, using language and context to decipher whether a person is happy or not, for instance. Say you run it against a customer survey: it will identify words like wonderful or terrible and provide sentiment analysis for the organization. You can then take the interaction a step further and respond, whether it’s ‘thanks for the great review’ or ‘sorry you had a bad experience, how can we help?’
3. Bing Spell Checker API – You can attach the Bing Spell Checker API to your application and it will detect and correct spelling mistakes.
4. Translator Text API – This API translates typed text between languages, so you can work with third parties in other countries or provide customer service in a chat scenario when you’re interacting with someone in another language. A lot of languages are supported.
5. Web Language Model API – Helps us handle natural language queries by using the power of predictive language models trained on web-scale data and understanding the next common word or providing word completion.
6. Linguistic API – Currently in preview, this brings in some very sophisticated linguistics technologies to simplify complex language concepts and parse text.
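To make the Text Analytics survey scenario above concrete, here is a small sketch of building the kind of batched document payload a sentiment-scoring call expects, and routing a response on the score. The `{documents: [{id, language, text}]}` shape and the 0-to-1 score convention are assumptions based on a typical sentiment API; the routing helper is purely illustrative.

```python
import json

def make_sentiment_payload(reviews):
    """Batch free-text reviews into the documents shape a sentiment
    endpoint typically expects. IDs must be unique strings."""
    return {
        "documents": [
            {"id": str(i), "language": "en", "text": text}
            for i, text in enumerate(reviews, start=1)
        ]
    }

def respond(score: float) -> str:
    """Route on sentiment: near 1.0 is positive, near 0.0 is negative."""
    return ("Thanks for the great review!" if score >= 0.5
            else "Sorry you had a bad experience, how can we help?")

payload = make_sentiment_payload([
    "The service was wonderful!",
    "This was a terrible experience.",
])
print(json.dumps(payload, indent=2))
print(respond(0.93))  # a happy reviewer
print(respond(0.08))  # an unhappy one
```

The same batch-then-route pattern works for the other Language APIs: collect user text, send it in one call, and branch the customer interaction on what comes back.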