Typing with Your Tongue Part 2: Voice Access

A few months ago, I wrote a post on how I use voice technology to continue working with my ALS condition. Since that post was written, Microsoft released a new voice technology called Voice Access as part of the Windows 11 22H2 update. I’m going to talk a little bit about my experience using it. It has changed how I interact with my PC and which tools I choose for voice to text.

Voice Access

You can turn Voice Access on if you’re running the latest version of Windows 11. (Voice Access is not available in Windows 10). You simply go to the Accessibility settings and choose the Voice Access option under Speech. Once you have turned on Voice Access, you will see a bar across the top of your primary screen as shown in the image below. This is how you know you have Voice Access ready to go.

Voice Access is much more than a voice-to-text tool. It includes many command capabilities, including a mouse grid option that lets you overlay a grid on your screen and select items using only your voice. Voice Access also supports commands such as Open and Close for application windows. You can find the full list of Voice Access commands here.

How do I use Voice Access?

I typically have Voice Access on all the time except when I’m in meetings. (It will try to type everything that’s said in the meeting, so it needs to be turned off at that time.) There are only two tools where I use Voice Access less often: Outlook and Word. More on that in a bit.

Because Voice Access alternates between dictation and commands, I am able to use it when working with most tools. For example, with Windows Mail I will use it to dictate an e-mail and then say “click Send” to send it. When I say “click Send”, Voice Access finds the Send button on the window. If there is more than one, it gives me the option to select which Send I meant. I find the overall experience pretty good, as it allows me to switch between dictation and commands without issue.

I use Voice Access a lot when working with Teams, WhatsApp, and other chat-based tools. Voice Access gives me a good voice-to-text tool in applications that typically do not have great accessibility support. At times, I have used Voice Access instead of Dictation in Office 365, especially when working with PowerPoint and Excel. Neither of these is particularly voice-to-text friendly, and Dictation in PowerPoint is significantly lacking.

Using Office 365 Dictation

I still primarily use Office 365 Dictation when working with Word and Outlook. Dictation in both tools responds more quickly than Voice Access. It also handles some of the issues that are still being worked on in Voice Access, such as punctuation. For example, the bulk of this blog post was authored in Word using Office 365 Dictation because it’s quick, simple, and works well within the context of voice to text.

Other Insights into How I Work with These Tools

Office 365 Dictation is fully online. This means that if you lose your internet connection, you lose the ability to dictate. Voice Access, on the other hand, does not have this dependency and will continue to work without a connection.

The commands between these two tools still vary quite a bit. For example, text formatting is very different between the two. In Word Dictation, you say “capitalize” to capitalize a word. (By the way, this has improved since my last blog post; this improved capability in Office 365 Dictation is huge.) In Voice Access, you have to say “capitalize that” to capitalize a word or selected words. If you use that same command in Office 365, it will capitalize the whole sentence. The same is true of other formatting commands between the two tools.

There are enough differences in those commands to cause frequent issues when working with the two tools interchangeably. I regularly find myself using the wrong command to create a new line or change the case of words. This is something you will have to work through if you go back and forth between the two tools.

I started working with Voice Access while it was still in preview, and I am actively working with the Voice Access team and providing feedback about the tool. While many updates have been made, there are issues remaining as it continues to mature. If you’ve been working with Windows Speech Recognition and Voice Typing, I would recommend switching to Voice Access, as the experience is much better. Just keep in mind that it is new and you may run into issues here and there as they continue to improve the product.

When Voice Access became available, I immediately switched to it as my primary voice command and dictation tool when not using Office 365 (Word and Outlook). I think this technology will continue to improve and make voice accessibility better for those of us with limited functionality in our hands and arms. As you know from previous conversations, I’m not a huge fan of Dragon because of how it changed my workflow and made things more difficult for me in general. Voice Access is a more natural fit in my workflow, and I like it considerably better. I do find myself going back and forth between my roller bar mouse and a complete voice solution when my hands get tired. That speaks volumes about the work done in Voice Access to help someone like me continue to function by having options in the tools I can use. I look forward to the improvements as they come along.

The Microsoft Ability Summit in Review

Ability Summit header

As many of you are aware, I have been dealing with a progressive version of ALS, which is affecting my hands and arms and thus my ability to type. Throughout this process I have been writing about the technology that has allowed me to keep working. I really started to dig in and embrace the work that Microsoft is doing in the accessibility space. As I was looking around at what they were doing, I stumbled across last year’s Ability Summit. It was interesting because at that Summit they announced the release of some of the Surface accessibility tools as well as Voice Access, both of which I will be using a lot. I have been using Voice Access since it was released in preview in the middle of last year. I plan to write about my experience with Voice Access after the next update of the product, which dropped in preview in late February; I hope to have the update soon. My hope is that a number of the issues I’ve been dealing with in the product will be resolved by then.

Microsoft Adaptive Accessories

Moving on to the Summit. When I came across the Summit on YouTube, I was really impressed with the overall approach as well as the content and conversations that were part of the event. This year’s Summit took place on March 8th. I took the time to watch much of the conference with my wife to see what new and great things are coming from Microsoft to further enable those of us with disabilities to work in a Microsoft and Windows environment. I was not disappointed! The conference had a number of sessions about how various individuals are impacting their workplaces, as well as a reflection on the life of one of the foremost disability advocates in the country, Judy Heumann.

Along the way, there were a few things announced or discussed that excited me. The first, of course, was that Voice Access was fully released in February. While I haven’t received the update yet, I’m looking forward to getting it and reviewing it in a different post. They also showed how Microsoft 365, including the Office tools and Teams, has continued to embrace accessibility for people with all types of disabilities. One of the neat things they demonstrated was the accessibility checker in Word, which lets you see whether the formatting you have chosen in your document is accessible. I look forward to seeing continued improvements here, as it brings awareness to those of us who don’t struggle with some of those same issues and shows us how to build documents that are fully accessible.

GitHub Next

One of the tools I’m most interested in is Hey GitHub! Beyond even GitHub Copilot, it gives you the ability to use voice commands to build code. While this is still in the early stages, it is an awesome concept. I, of course, will be looking at how I could potentially use it to build out some SQL code so I can continue to demo.

There was also a lot of conversation around the impact of AI on our ability to be more productive. This includes various AI capabilities such as support for writing code in Visual Studio and creating summaries of meetings with ChatGPT. I hope to put some of these new findings to good use.

I have been using the public version of ChatGPT to build LinkedIn summaries of blog posts so I don’t have to type them up myself. It has been a cool experience watching the AI build the summary.

If you or your company is looking for insights into how Microsoft and other companies are tackling the tough questions around accessibility, I encourage you to check out the various sessions from the Ability Summit. This is a great opportunity to embrace a group of individuals who have a lot to contribute; all they need is the opportunity! I want to say a huge thank you to the Microsoft team, who put together this conference and continue to make accessibility a key part of the tools they provide to us.

My experience working with notebooks in Azure Data Studio

I’ve seen notebooks used in Azure Data Studio on multiple occasions. I really like the concept of notebooks, having done some work within Azure Databricks notebooks, but not extensively. Before I get into the process I went through, it’s important to understand that I am not a data scientist and have not done extensive development or spent a lot of time in Python or Jupyter notebooks. Furthermore, my interest in notebooks was elevated when I realized I wanted to continue presenting while working through my current ALS diagnosis. I have limited use of my hands and arms, so highlighting and executing code, especially in front of a crowd, was going to be problematic. (If you want to learn more about my condition and the tools I’m using to maintain my ability to work, please check out this series of articles on our blog.)

Let’s start with the core problem I’m trying to solve today. I will be presenting a session on elastic queries in Azure SQL Database. Most of the code is ready to go since I have done this presentation a few times. As I was testing my demo, I found that executing code by highlighting it and pushing “run” in either Azure Data Studio or SQL Server Management Studio was difficult because I struggled to control highlighting the code. I was also looking for better ways to automate the process, but more about that later. I watched a couple of demos on using notebooks and found some of the notebooks that Microsoft has created. I realized I could put together my entire demo package to share with the attendees and build the demo so that I could execute it a step at a time without highlighting. Now that you have the background of what I was trying to accomplish, let’s look at the process I went through to get this done.

How in the world do you work with notebooks in Azure Data Studio?

One of the interesting things about working with notebooks is that the documentation seems to assume you have already used them and prefer to use them. This means the instructions for how to create, organize, and use notebooks within Azure Data Studio are a bit lacking. For example, it was not entirely clear to me that one part of the process is creating a folder to store your notebooks along with your markdown files and other content. So, let’s go through the process of creating your first notebook step by step, with explanations of what’s happening.

The organization of notebooks and files in Azure Data Studio

Part of my struggle in understanding what was happening is that each time I tried to create a notebook, it asked me for locations and files. I thought it should know where they should go. So, as a newbie to notebooks and their organization in Azure Data Studio, I created a notebook and a Jupyter book so I could see how the files are organized. Then I could go back and create the Jupyter book correctly from the beginning. While I may not get all of the terminology right, this is what I discovered as I worked through the process.

Once I started working with the notebook process in Azure Data Studio, I realized there were multiple components involved:

  • Jupyter book
  • Markdown file
  • Notebook
  • Section

While I am sure there are simpler ways to accomplish what we are about to do, I’m coming at this entirely from Azure Data Studio as a data developer, not a data scientist. When I first tried to create a Jupyter book, I didn’t understand what its purpose was. When you create a Jupyter book, it looks like you’re creating a folder. That folder will also contain several helper files to organize your notebooks, markdown files, and sections. Before we leave the structure and organization discussion, I want to clarify that the book is the parent folder, and a section is a subfolder within the book. Markdown files and notebooks are the files within them, organized for particular purposes. A markdown file is effectively a document that allows you to create a nicely formatted informational component for your notebook. The notebook files are actual Jupyter notebook files, which are split into cells for code and text.

Here is the high level organization of the Jupyter book we are going to create:

  • Jupyter book: Azure SQL database elasticity
    • Markdown file: README
    • Section: Setting up the demo
      • Markdown file: Set up instructions
      • Notebook: Prepping the demo
    • Section: Elastic query demo
      • Markdown file: Elastic query demo instructions
      • Notebook: Elastic query demo
    • Section: Elastic job demo
      • Markdown file: Elastic job demo instructions
      • Notebook: Elastic job demo

For the purposes of this blog post, we will walk through the process of creating the original Jupyter book and the elastic query demo section. That section has a good mix of code and text to illustrate the power and capabilities of notebooks.

Creating your first notebook in Azure Data Studio

Let’s begin creating our first notebook in Azure Data Studio. Before we dive into this process too deeply, I want to be clear that we are going to create a Jupyter book to add our notebooks to. This is not required, as you can create a new notebook from the File menu or with the shortcut noted on the screen in Azure Data Studio. What confused me about this initially is that you cannot create a simple notebook from the notebooks section in Azure Data Studio. When you create your notebook, you can save it as a file in the location of your choosing, but it will not show up in the notebooks section. Once you create a notebook, if you are not hosting it in a Jupyter book, you can reopen it just by choosing Open File from the menu. While this may make sense to others, it was not entirely intuitive to me in the beginning. I had to do some mucking around to figure out that process.

So, we will start our process by creating a Jupyter book to host all our notebooks and markdown files. This Jupyter book will also be readily displayed in the notebooks section of Azure Data Studio. Use the ellipsis (…) to open the More Actions menu and choose Create Jupyter Book.

Create new Jupyter book

In the dialog, give your new Jupyter book a name and specify the location you want to store it in. I have not used the optional content folder for this exercise and recommend that you do not either.

New Jupyter book dialogue

If you go to the location where you created your Jupyter book, you will see that it created a folder named after your Jupyter book containing three files:

  • _config.yml
  • _toc.yml
  • README.md

In the notebooks section of Azure Data Studio, you should see your Jupyter book with a README markdown file in it. For now, we will leave the README file as an introduction to what is in your notebook. (Be aware that you can remove the file by deleting it, but you will need to update the _toc.yml file to reflect the change. If you do not update the TOC file, you may see missing-file error messages in Azure Data Studio.)

New Jupyter book with README

I will not take time in this post to review everything that is possible in a markdown file. The key here is that you can update the README file that was created with headers and formatting to provide instructions on how to use the various contents of your Jupyter book. If you double-click within the README file, it will open the README.md file in a new tab in Azure Data Studio. This view has line numbers and will allow you to update and add content.

The following code gives you an example of some markdown syntax:

# Welcome to the Jupyter book on Azure SQL Database elasticity
This book contains 3 sections
* The first section contains instructions on how to set up the demo
* The second section contains the demo for elastic queries
* The third section contains a demo for elastic jobs

This will result in the following look and feel in your README file:

Formatted README markdown file

Adding a section

The next thing we will do is add a section where we will host the executable demo code. Right-click on your Jupyter book and choose Add Section. We will use Elastic query as the title.

Adding the notebook

Up to this point, we have been building the framework to support our first notebook. While not all of these steps are required, this is the most complete approach. Right-click on your section and choose New Notebook. This will create a Jupyter notebook in the subfolder of your section.

New section with a notebook

Once you create the notebook, it will open in a tab in Azure Data Studio. You will notice that it has something called a kernel. The kernel sets the default language used for the notebook. For the work we are doing, we will be using the SQL kernel, which allows us to execute SQL code against a database. In the Attach to dropdown, you will see the databases you can use to execute code. The Cell dropdown allows you to add cells, which can contain code or text.

Azure Data Studio supports other kernels that can be used for executing code against various workloads. These include Python, Spark, PySpark, and PowerShell.

Now let us get down to the business of creating a notebook with executable code. Before we add executable code, let us add a text cell as an introduction to it. You can do this by clicking the Cell dropdown and choosing Text Cell. Once you add the text cell, you will notice there is a formatting bar, which ironically is missing in the markdown file editor. This means it is easier to create formatted text in a cell in a notebook than in the markdown file itself. Keep this in mind as you create your notebooks and add content to your Jupyter book. These cells are at times easier to work with than the full file, particularly if you are not familiar with markdown formatting.

At this point, let us add a quick introduction to what we are about to do in the following code cells.

Formatted text cell

Next, we will add a code cell. From the Cell dropdown menu, choose Code Cell. This will add a code cell to your notebook that uses the language selected in your kernel. There is also a play button that allows you to execute the code.

Empty code cell

I am going to add the code that is required to clean up the tables for the demo. The resulting code cell will look like the following:

Code cell with DDL code
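For reference, here is a minimal sketch of the kind of cleanup code that cell contains. The table names below are hypothetical placeholders, not the actual objects from my presentation:

-- Drop the demo tables if they exist from a previous run
-- (table names are illustrative placeholders)
DROP TABLE IF EXISTS dbo.CustomerOrders;
DROP TABLE IF EXISTS dbo.Customers;

Dropping the child table first avoids foreign key conflicts if the tables are related.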

As a last step in understanding how notebooks and code work in this environment, we can execute the code by pushing the play button in the code cell. This will return the results of that execution as shown below:

Code cell with results

Congratulations, you have created your first notebook with executable code against a SQL Server database! You can continue to add more text cells and code cells as needed. One of the reasons I like this pattern is that it allows me to execute the code during demos without having to highlight it. Each cell can be run independently. You will also notice there is a Run All button if you want to run all the scripts in your notebook at the same time. This could be valuable if you have a set of maintenance operations or related items you have collected in a notebook for repeated use.

Another key thing to remember is that notebooks are shareable. Because the connection information lives outside the notebook, anyone you share it with will have to connect to an environment that allows them to execute the same code. You can add your notebooks to GitHub or similar source control to manage change and share common resources easily, rather than just distributing SQL files.

Before we wrap up

I feel I would be remiss if I did not also demonstrate what happens when you get data results in a notebook. In my case, I have a database I can connect to that has WideWorldImporters loaded into it. I am going to select the top 1000 rows from the DimSupplier table. Once I run the code cell, I get the rows affected, the execution time, and a table with the results as shown here:

Code cell with data results
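The query itself is as simple as the following sketch. I am using the table name as it appears in my database; the schema and table names in your copy of WideWorldImporters may differ:

-- Pull a sample of supplier rows to show how results render in a notebook
-- (adjust the schema/table name to match your database)
SELECT TOP (1000) *
FROM dbo.DimSupplier;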

As you can see in the results window above, you have several export options and a chart option that you can use to further visualize or work with the data you have retrieved. I encourage you to explore these options; whether they work well for you depends on the type of data you are working with. For example, supplier data does not chart very well, whereas fact data might have offered some interesting charting options. A notebook could be a straightforward way to demonstrate some simple reporting for a technically savvy audience.

Wrapping it up

There are many more functions around notebooks that I did not cover, and I assume that Microsoft will continue to improve the overall capabilities here. I look forward to using notebooks more as a terrific way to share code and run demos. I hope you find them as valuable as I do.

For those of you who are not sure about notebooks, they are an effective way to build your skills without trying to learn a new language if you are already familiar with SQL. My first exposure was using Python in a Databricks environment, which was a lot to learn while also trying to understand how notebooks functioned. As the data environment continues to expand and require new skill sets, knowing how to use and leverage notebooks is a good skill to have. Microsoft has done us a great favor by using standard Jupyter notebooks, which are used in data science, Databricks, and other areas of data practice.

If you are following my work enablement series, you know one of the things I am passionate about is simplifying how I work in order to keep working while continuing to lose functionality in my arms. Notebooks help with this by allowing me to execute code without highlighting it when doing demos. Because highlighting code and executing it in a tool like SQL Server Management Studio requires multiple touches on the keyboard and mouse, I struggle to do it efficiently. The ability to organize my demo around code cells and then have a self-documenting notebook to pass along to attendees is a huge win for me. I hope this helps others who struggle in the same way, as well as those who have not used or seen notebooks in their current work environment but may in the future.

I will be creating and sharing a completed notebook for the demos related to my presentation on elastic capabilities with Azure SQL. Look for that presentation follow-up from the Memphis SQL Saturday in October 2022. I will publish a follow-up blog post with a link to the completed notebook used in that demo.

Fast Fingers – Function Keypads

This is the third in the series of tools and technologies that I use to deal with the loss of functionality in my hands and arms. Check out this article for the lead up to this series.

Setting the stage

The issue I’m dealing with involves muscle atrophy in my hands and arms. As a result, I’ve lost a lot of strength in my hands and arms, including my fingers. Some of the unintended or unplanned impacts include the inability to type successfully at times, or a diminished amount of time I can spend typing. I had previously used a Logitech split keyboard, which I loved. I considered myself a good typist and used to be able to type and code very effectively. With the onset of the atrophy, I encountered situations where my hands would stop working; I would be typing and then I couldn’t type anymore. Some of it is related to physical exhaustion or fatigue from the effort required given my condition. I also experience a situation where my fingers curl, making it nearly impossible to type on a keyboard. The first time this happened was the first time I was concerned about my career. As I noted in a previous article, I am using voice to text for the bulk of my typing, including this blog post. However, voice to text does not work that great for coding, and frankly I have issues with any multi-key functions that require my hand to stretch across the keyboard.

Discovering a solution

I was watching a show with my wife and daughter when an ad came up showing the Quick Keys solution from Xencelabs. It was part of a video editing package that included a tablet and pens. I was intrigued because I had not seen a solution that allowed me to program keys with text. It also had a wheel that could be used for other tasks. I looked it up and found I was able to buy just the Quick Keys device.

As I did some more research into what this tool could do, I realized that the space I needed to look at more closely was video editing and streaming. That space has a series of tools supporting macro keys that are used to optimize applications, shortcut keys, and game actions. The variety of these tools is substantial. Shortly thereafter, while working with the tech team at church, I saw a Stream Deck. This was even cooler because each of its buttons has a programmable LCD screen behind it. Now I knew what to look for and started determining what I wanted to do as I moved forward.

Xencelabs Quick Keys

I purchased the Xencelabs Quick Keys device first. It has 40 programmable functions and a physical dial that I was able to program.

Xencelabs Quick Keys

I programmed some basic functionality that I really like to have available with the ease of pushing a button close to me, such as delete, backspace, a shortcut for speech to text, undo, redo, cut, @, and Ctrl. This generic set of functions, in addition to the copy, paste, double-click, and dictation shortcuts I had on my mouse, applies to most of the applications I work in. I next set up a screen, reachable by pushing a function button on the device, to support specific functionality within Microsoft Word. The big one I needed was a shortcut to change case, as Microsoft dictation does not have capitalization capability at this time. I also added Home and End along with a couple of other helpful functions.

The Quick Keys device has five customizable screens of eight buttons each. I used the first one for my generic set as noted above. My second one was for Word. I added a third screen containing the web addresses of common locations I need to go to, such as the Azure portal and my blog. This allows me to open a browser, push a button, and go to that site easily. What I quickly discovered was that I was going to need more functionality for this to be effective in the long term. Before I get to the next solution, a couple of other things I did on this device: I used the physical wheel, which also has five settings, for moving the cursor and for volume control.

My default screen setup with Quick Keys

Elgato Stream Deck

Because I already had a device, I wanted to research the Stream Deck before purchasing it. One thing I quickly noticed is that it is a favorite tool among streamers and has not had a significant upgrade because it just works. It has been a solid device, easily programmable and customizable in a multitude of environments. If you go to YouTube, you will find a number of streamers, gamers, and content creators walking through demos of how to set up and use the Stream Deck. It has a lot of built-in functionality for a variety of editing and streaming tools.

To start with, I was unsure if this would be a good solution for me, as most of what I needed was not what they were using it for. So, I dug in. What I discovered was that the deck is highly programmable, with effectively an unlimited number of options you can program. I thought I’d give it a try.

I purchased the 15 key Stream Deck pictured here:

Elgato Stream Deck

Once I got the Stream Deck and installed the software to program it, I quickly realized there are a number of add-ons for the Stream Deck that support programming and Windows functionality. These add-ons give you shortcuts to things like locking your computer, something Quick Keys could not do. I added these in, as well as some icon sets, because icons are cool. Once I had this in place, I programmed my initial set of functionality to enhance what I was doing with Quick Keys. Because I already had Quick Keys, those two screens provided the generic starting point for most functionality I would need outside specific programs like Word. This also allows me to keep Quick Keys on the generic set of common functions and use the Stream Deck to be more reactive to programs and needs.

I have programmed the Stream Deck for a couple of specific use cases I am really happy with. Let’s start with the first one, which is Word. The Stream Deck can detect the application you’re in and set the keys up with specific profiles that you create. In my case, whenever I am in Microsoft Word, the Stream Deck has the editing keys and other functionality that make working with documents easier. Because I still have the basic keys sitting on the Quick Keys solution, I’m able to have a combined set of 23 function keys readily available without switching screens.

Default screen on my Stream Deck
My Word profile

I have also set up a similar set of functionality for working with Microsoft Outlook. While I’m still working through which functions make the most sense in each scenario, and whether I need more than one screen, the amount of functionality available at my fingertips as a result of programming these two devices is substantial. It makes it easier for me to work through a variety of commands without struggling with the keyboard. I am using the Word functionality as I edit this blog post after creating the content via voice to text.

Optimizing functions for code

Now for the more interesting use case. As part of my job, I occasionally still need to do some coding. In this case, I was working with T-SQL code. I needed to create some tables, add some keys, and work with data. Coding is one of the most typing-intensive activities I do, and voice to text does not help me there. So, how can the Stream Deck help? It turns out you can send keystrokes through both devices. However, the capacity of the Stream Deck is substantially larger than what is available in Quick Keys (500 versus 24). More importantly, with the unlimited number of keys that can be programmed, combined with folders that let you group commands together, the Stream Deck was a natural choice. I created a folder on my screen for the work, called SQL. In there I created a folder for CREATE commands, and I will likely add more as I go along. At the top level of the SQL folder, I have commands such as SELECT, FROM, WHERE, INNER JOIN, and similar common commands used when working with data in SQL. While these may seem like short commands at first glance, I must call out that the goal for me is to reduce the amount of typing I do as much as possible. For the CREATE commands, I added the full syntax for creating a table where I just had to fill in the name and field list, as sketched below. I also added a folder with the data types I use most so I wouldn’t have to type those either. I also added the syntax for primary and foreign keys, and will add indexes in the future. My point here is that I was able to reduce the amount of typing required by 30 to 50% depending on what I was doing. This reduces strain on my hands and allows me to be more productive for a longer period of time.
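To give you an idea, here is a sketch of the kind of CREATE TABLE template one of those keys sends. The table name, column, and key names are placeholders that I replace once the keystrokes arrive:

-- Template text sent by a single Stream Deck key press;
-- I swap in the real table name and field list afterward
CREATE TABLE dbo.MyTable
(
    MyTableID INT IDENTITY(1,1) NOT NULL,
    MyColumn NVARCHAR(50) NULL,
    CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (MyTableID)
);

Even a short template like this saves a surprising number of keystrokes when you are creating several tables in a row.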

Here are some screenshots from the Stream Deck programming surface to show you how I set it up for SQL so far.

My SQL default menu
My SQL create menu

I really enjoy the Stream Deck because some of it is just fun. Part of the fun is finding icons to use, including GIFs, on the screen. I fully plan to continue extending what I’m doing with my Stream Deck.

Wrapping it up

Finding these tools has been extremely important to maintaining productivity in my work. What I’m learning so far is that the tools I’m discovering are beneficial not just for me, but also for others who might want to build out shortcuts for things they can’t remember or to make working generally easier. These tools are not without cost, but the increased productivity is seriously worth it. And frankly, it makes my setup at work look really cool. Hopefully you find this information helpful, or know someone who might. Feel free to pass it on.

Typing with Your Tongue – Voice to Text Technologies

This is the second in the series of tools and technologies that I use to deal with the loss of functionality in my hands and arms. Check out this article for the lead up to this series.

Setting the stage

The issue I’m dealing with involves muscle atrophy in my hands and arms. As a result, I’ve lost a lot of strength in my hands and arms, including my fingers. Some of the unintended or unplanned impacts include the inability to type successfully at times, or a diminished amount of time I can actually spend typing. I had previously used a Logitech split keyboard, which I loved. I consider myself a fairly good typist and used to be able to type and code very effectively. With the onset of the atrophy, I encountered situations where my hands would actually stop working; I would be typing and then I couldn’t type anymore. Some of it is definitely related to physical exhaustion from the effort required given my condition. The first time this happened was the first time I was concerned about my career.

As my condition has worsened, I have tried a variety of software solutions that support voice to text. In this blog I’m going to separate my voice-to-text solutions into two primary groups. The first group is the tools I can use for dictation, like creating this blog post or working with documents. The primary focus of this group is to support the ability to add text while working on a computer with a mic. The second group of tools is primarily focused on note taking and mobile use on my phone or similar devices, where I may not have access to the dictation tools I would use in my normal workday. The one area I am not going to cover in this blog post is voice automation tools, or those tools which provide voice command capability. What I have found is that they are not the same. Currently I have not found a voice command solution that I like. As I do more discovery in that area, I will share what I find.

Dictation tools

When my condition first surfaced, I immediately started thinking about how to do voice to text. The first software that came to mind was Dragon by Nuance. I started using Dragon as soon as we were able to get a professional account through work. The first thing I noticed about Dragon was that it felt like I had gone backwards in time, as it was not an updated or modernized piece of software. Dragon has been around a long time and serves a lot of different areas of business, including law and medicine. It is a highly valuable tool in those spaces and has specialty products with specific terminology support for some of them.

DragonBar

What I liked about Dragon is that it has extensive editing capability built into the software. This is particularly true if you use its special dialog box to create most of your content. That being said, you really need a good microphone to run Dragon efficiently. The other issue I had was that when we upgraded to Windows 11, Dragon was not supported. This will likely change, as Microsoft purchased Nuance in recent months and will likely incorporate a lot of the product into its own platform. I reverted to Windows 10 to determine how much I would use it. The biggest issue I had was the requirement for a high-quality microphone, which would likely need to be on a headset to operate well.

With the switch to Windows 11, I needed to find alternative options, so I turned to Microsoft to see what they had available. It turns out that Microsoft has two voice-to-text solutions that work in Windows 10 and 11. (These solutions may work in other versions of Windows, but I don’t use them.) The first tool I explored and worked with was Dictate, which is available in Microsoft 365.

Dictate In Word

In particular, Dictate inside of Word. I immediately liked this tool because it is built into the Office platform. It also seemed to learn more quickly through general use than Dragon did, which is likely due to the AI behind it. I also appreciated that I could use an open microphone effectively without making changes to my environment. I am writing this blog post in Word first because of the capabilities of Dictate. It is not without flaws; the biggest issue I have with Microsoft 365 Dictate is that it does not know how to capitalize mid-sentence or capitalize a chosen word. This seems like a significant oversight that many have complained about over the years of using this product. Hopefully Microsoft will resolve it soon. I did discover that there is a change case option in the text editing tools in Word that has allowed me to handle this situation easily.

Change case in Word

I’m still learning Dictate and its capabilities but overall, it has been the most fluid solution I’ve used to date.

Dictate is not available outside of the Office 365 suite. In that case, I use Microsoft voice typing, which you can bring up by hitting Windows+H.

Windows voice typing box

This allows you to dictate into any text box; well, most text boxes. I use it for dictating messages in Teams, filling in forms on websites, and similar tasks. It is not as capable as Dictate in Office; for example, delete does not work the same way in the two tools. However, it too seems to learn my speech and responds well to the open mic, which is why I have chosen to use it.

Before I move away from dictation tooling, I want to add that within the Office suite I’ve been able to use Dictate effectively in Outlook. This has been very helpful in creating emails. Depending on where you are in Outlook, you may or may not have Dictate available, in which case you can always use voice typing. Dictate also works effectively in OneNote. The functionality in PowerPoint is severely lacking, and I don’t know why. It does not seem to figure out what I’m trying to say most of the time when I’m working in PowerPoint. So, this is kind of frustrating when creating presentations, but overall the effectiveness in Outlook and Word has kept me quite productive.

In summary, if Dragon works for you and how you work, it is likely the best tool for the job. With Microsoft’s purchase of Nuance, my expectation is that we will see some of that functionality move into the Office suite or directly into Windows. If you are like me and prefer using an open mic, you will find that Microsoft 365 Dictate and Windows voice typing are likely a better fit, though they still have significant gaps to fill.

Notetaking and mobile

I have grouped these together because of how I function. One of the immediate impacts of my condition is that I am no longer able to take handwritten notes. This has been a huge hit, as I used pen and paper a lot for design work, notetaking, etc. Losing this capability was a significant blow to my productivity. As a result, I needed to find alternatives.

Otter on Android

The first tool I added to my toolbox on my phone was Otter. This product was introduced to me by a peer at 3Cloud. It allows you to record and transcribe conversations so that you have notes from the conversation as well as the recording. Frankly, it does a pretty good job of transcription. I’ve used it to take notes during meetings, to take notes while working with my doctors, and for self-transcribed notes. I use it exclusively on my phone and then transfer the notes to OneNote when I want to use them with other tools. This has been a lifesaver, particularly with regard to doctors’ appointments. It has helped me keep track of that information, and because of the transcription we can transfer it into other documents or even onto my CaringBridge site when we need exact details.

On my phone, I also use Google’s built-in voice-to-text technology and Samsung’s technology, as I have a Galaxy phone. I will say these are hit or miss, and they often add a little bit of fun to my texts with my family for sure. However, it is still easier to use voice to text than to type on the device itself. So, I’m thankful that it works, even if it stumbles a lot more than some of these other tools. Dragon has a mobile option as well, but I did not get it working, so I can’t really speak to its functionality at this point.

Summary of my new world

I still need to type to do my job. Part of my job entails building technical labs, which requires coding. Coding is not easily done with voice to text; or maybe we should say it should not be done with voice to text. However, as IntelliSense and similar functionality have become more prevalent in the tools, they have reduced the stress on my hands when creating code. There is new functionality from Microsoft in GitHub called Copilot, along with similar tools that use AI to suggest code. For the moment I haven’t had a chance to test these out, but I’m looking forward to seeing how they improve my work environment. I would always recommend that you let people know you’re using voice to text, particularly in Teams or other chat environments. That way you don’t have to go back and correct everything you do all the time. People are forgiving, and occasionally we get some really good fun, like calling “Dennis” “dentist”. He wasn’t one, or so he says.

Before I end, I would like to say that this is not just helpful for those of us who struggle with typing. You may find the dictation tools, for example in Word, to be a way to generate documents rather quickly. Just keep in mind:

  • plan to edit some
  • take your time
  • learn the tool
  • find success

I hope this helps someone out there. If you have found a tool that uses voice to text more efficiently or differently than what I’ve talked about, I’d love to hear about it. Just add it in the comments below. Thanks for reading!