Digital assistive technologies

Published: 14.4.2023
Categories: Accessibility
Reading time: 10 min

Assistive technologies are designed to support the functional capabilities of people with disabilities. Some are low-tech and familiar, such as reading glasses, crutches, and hearing aids. Digital assistive technologies rely on more recent scientific and technological advances to help disabled people access digital services.

Text readers

What are they?

Text readers (also known as text-to-speech software) read text aloud, on the assumption that the user can perceive the visual attributes of the text (such as size and colour) as well as non-text content (such as images). They are useful for people with some types of visual impairments, people with conditions such as dyslexia, people who cannot read, and people who are learning a language.

Text readers can be:

  • Used as standalone software for desktop and mobile devices.
  • Part of built-in accessibility features on desktop and mobile devices.
  • Part of digital assistants like Siri.
  • Included in applications such as Adobe Acrobat.
  • Used as browser plugins.
  • Embedded into webpages.
  • Part of web-based services where users can paste text or upload files and the text is then played or saved as audio files.

We ensure text reader support when we create services accessible to screen reader users. Learn more about how to do this on our accessibility testing page.
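As a sketch of how a text reader can be embedded into a webpage, the snippet below uses the browser's Web Speech API, splitting long text into sentence-sized chunks first. The chunking threshold and helper names are our own illustrative choices; the speech API itself is only available in browsers, so the reader degrades to a no-op elsewhere.

```typescript
// Split text into sentence-sized chunks, since some speech engines
// struggle with very long utterances.
function chunkText(text: string, maxLen: number = 200): string[] {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) ?? [text];
  const chunks: string[] = [];
  let current = "";
  for (const s of sentences) {
    if (current && (current + s).length > maxLen) {
      chunks.push(current.trim());
      current = s;
    } else {
      current += s;
    }
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Hand each chunk to the browser's speech synthesiser, when present.
function readAloud(text: string): void {
  const synth = (globalThis as any).speechSynthesis; // browser-only API
  if (!synth) return; // not running in a browser: do nothing
  const Utterance = (globalThis as any).SpeechSynthesisUtterance;
  for (const chunk of chunkText(text)) {
    synth.speak(new Utterance(chunk));
  }
}
```

A "read this page aloud" button could then simply call `readAloud(document.body.innerText)`.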



Screen readers

What are they?

Screen readers are software that reads text aloud (like text readers do) while also conveying information that is otherwise only available visually. For example, they can announce special kinds of text (such as headings and links, which sighted users identify by their appearance) and read out text alternatives for non-text content (such as images). This is useful for everyone who benefits from text readers, and essential for some people, such as those with severe or complete blindness.

Most operating systems ship with their own screen readers:

  • Windows and Windows Phone: Narrator.
  • Mac OS and iOS: VoiceOver.
  • Linux: Orca comes with many distros.
  • Android: TalkBack.
  • Chromebooks: ChromeVox (this can also be added to the Chrome browser on Windows and Mac, but in that case it is limited to web pages only).

Some other screen readers include:

  • JAWS (paid, for Windows): the most popular screen reader, although the license is very expensive. It can be used for free in 40-minute sessions, so it is possible to test with it without purchasing a license.
  • NVDA (free, for Windows): not as popular as JAWS, but it has more features and a wider user base than Narrator.

It is crucial to test with screen readers when assessing the accessibility of a digital product or service; learn more about that on our accessibility testing page.
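To illustrate the extra information a screen reader conveys, here is a hypothetical sketch of how an announcement for a focused element might be assembled from its accessible name, role, and state. The `UiNode` shape and the ordering are our own simplification; real screen readers read this information from the platform accessibility APIs and respect user settings.

```typescript
interface UiNode {
  role: string;     // e.g. "button", "link", "heading"
  name: string;     // accessible name: label, alt text, or text content
  state?: string[]; // e.g. ["pressed"], ["expanded"]
  level?: number;   // heading level, if any
}

// Build the spoken announcement: name first, then role, then extras.
function announce(node: UiNode): string {
  const parts = [node.name, node.role];
  if (node.level) parts.push(`level ${node.level}`);
  if (node.state?.length) parts.push(node.state.join(", "));
  return parts.join(", ");
}
```

For example, `announce({ role: "button", name: "Submit" })` yields "Submit, button", which is roughly what a sighted user infers from the element's visual styling alone.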



Screen magnification

What is it?

Screen magnification enlarges the content on the screen. It is useful for people with partial or limited vision, people who have difficulty reading small text or seeing small images, and everyone else in certain situations (for example, if a website has disabled the ability to zoom, or an app does not support zooming, you can still use screen magnification to enlarge the content).

Magnifiers can come bundled with operating systems (both on desktop and mobile), or they can be installed as standalone software or browser plugins. Most magnifying software can:

  • Zoom in on the entire screen, or a portion of it.
  • Be set to follow focus.
  • Modify and enhance the look of visible cursors and pointers.
  • Change colour, brightness, and contrast settings.

Typical users work within the range of 1x to 16x magnification, but a lot of the software can zoom in much further. Additionally, some people also use external magnifying devices, which can be handheld or installed in front of screens.

Evaluating the look and performance of a digital service at different magnification levels is an integral part of size testing.
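The "follow focus" behaviour above can be sketched as a small geometry calculation: at magnification m, only a 1/m-by-1/m slice of the page fits on screen, and the magnifier keeps the focused point centred in that slice while clamping to the page edges. This is a simplified model of our own, not any particular magnifier's implementation.

```typescript
interface Rect { x: number; y: number; w: number; h: number }

// Position a magnifier viewport of size (screenW/zoom, screenH/zoom) so
// that the focused point stays centred, clamped to the page bounds.
function magnifierViewport(
  focusX: number, focusY: number,
  screenW: number, screenH: number,
  pageW: number, pageH: number,
  zoom: number
): Rect {
  const w = screenW / zoom;
  const h = screenH / zoom;
  const x = Math.min(Math.max(focusX - w / 2, 0), pageW - w);
  const y = Math.min(Math.max(focusY - h / 2, 0), pageH - h);
  return { x, y, w, h };
}
```

At 2x zoom, for instance, only a quarter of the page area is visible at once, which is why layouts must stay usable when seen through such a small window.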



Keyboards

What are they?

Many people with disabilities use the keyboard instead of, or in addition to, the mouse when interacting with computers. They may also use external keyboards to interact with smartphones and tablets. Keyboard support is essential for those who use screen readers, as well as for many people with motor disabilities such as muscular dystrophy, or temporary disabilities such as broken bones or repetitive stress injuries.

Some people with disabilities use standard keyboards, while others use adapted versions. For example:

  • Single-handed keyboards.
  • Keyboards with Braille relief for visually-impaired users who know Braille.
  • Keyboards with larger keys, for people with visual impairments as well as people who have limited control over their hands.
  • Colour-coded keyboards, useful for many people with visual or intellectual disabilities.
  • On-screen digital keyboards, useful for people who can't or prefer not to use physical keyboards. On-screen keyboards can then be operated with the mouse or some kinds of assistive technologies, such as switches, physical pointers, and eye and motion trackers.

You're probably familiar with common keyboard shortcuts such as:

  • Command-Tab (Mac) or Alt-Tab (Windows) to switch between applications.
  • Command-Z (Mac) or Control-Z (Windows) to undo.
  • Command-C (Mac) or Control-C (Windows) to copy.
  • Tab, Enter, and Space to navigate and operate websites.

Depending on the user, their needs, and the assistive technologies they use, keyboard operation may be done with these very same standard keyboard shortcuts or may involve more complex keystrokes.

Ensuring keyboard support is one of the cornerstones of digital accessibility. You can learn more about how to do that on our accessibility testing page.
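For web developers, the shortcut list above translates into a concrete requirement: custom widgets must map keys to actions the way native controls do. Below is a minimal, illustrative key-to-action mapping (the function and action names are our own, not a standard API):

```typescript
// Native <button> elements handle Enter and Space automatically;
// a custom widget (e.g. a styled <div>) must do this itself.
function keyToAction(key: string): "activate" | "next" | "none" {
  switch (key) {
    case "Enter":
    case " ":        // the Space key reports " " as its key value
      return "activate";
    case "Tab":
      return "next"; // the browser moves focus; listed for illustration
    default:
      return "none";
  }
}

// In a browser this would be wired up roughly as:
// element.addEventListener("keydown", (e) => {
//   if (keyToAction(e.key) === "activate") element.click();
// });
```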


Switch access

What is it?

Switch access (also known as switch control), along with other alternative control devices and methods (such as physical pointers, eye and motion tracking, and speech input), is designed to substitute for the combination of keyboard and mouse. Switch access is commonly used by people with motor disabilities, and can also be useful for some people with cognitive disabilities.

Switch devices can have one or multiple switches, and they may also be used in combination with other kinds of alternative controls. Switches typically have two states (such as on/off or pressed/unpressed).

A switch can be, for example:

  • A button you can press.
  • A pedal you can step on.
  • A sip-and-puff switch, which detects whether you are inhaling or blowing air.
  • Different types of sensors that can detect biting, pushing, pulling, pressing, blinking, squeezing, etc.
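A single two-state switch can drive an entire interface through scanning: software steps a highlight through the on-screen items, and switch presses advance or select. A simplified sketch follows; the class and method names are illustrative, as real switch scanning is provided by the operating system.

```typescript
// Step scanning with a single two-state switch: one kind of press
// advances the highlight, another (e.g. a long press, or a second
// switch) selects the highlighted item.
class StepScanner<T> {
  private index = -1;

  constructor(private items: T[]) {}

  // Short press: move the highlight to the next item, wrapping around.
  advance(): T {
    this.index = (this.index + 1) % this.items.length;
    return this.items[this.index];
  }

  // Long press or second switch: activate the highlighted item.
  // (Call advance() at least once first.)
  select(): T {
    return this.items[this.index];
  }
}
```

Auto-scanning works the same way, except a timer advances the highlight and the switch only selects.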

Switch access is built into operating systems for macOS, iOS, Windows, and Android. Switch integration can happen in multiple ways:

  • Some switches can be directly connected via USB or Bluetooth.
  • Other switches require an intermediate interface to connect to the main device.
  • Certain mobile devices can also map switches to hardware buttons, screen regions, or head movements detected by the camera.

Switch users can receive feedback on their operations through different senses:

  • Sight: focus highlights, screen flashes.
  • Sound: beeps, sounds, voice.
  • Touch: vibration.

We ensure switch access support when we create services that are accessible to keyboard users. Learn more about how to do this on our accessibility testing page.


Physical pointers

What are they?

Physical pointers are used by people who have limited use of their arms and hands. Some of them are designed to be used with digital devices, while others can serve other purposes (like painting on canvas).

Physical pointers can be, for example:

  • A head pointer (also called "head wand"): this is attached to the head with a helmet or headband, and can be used to operate a keyboard, touch screen, or switch device.
  • A mouth stick: this is held in the mouth by the teeth, and can be used the same way as a head pointer.
  • An adapted stylus: for example, one with a wide handle and straps to secure it to the hand, which can be used with a touch screen or instead of a mouse.

Eye and motion trackers are sometimes also called "pointers" or "head pointers", even though they are not physical stick-like tools. This can lead to confusion, so always check what kind of device is being referred to.


Eye and motion tracking

What is it?

Eye and motion tracking both work by using some type of sensor or optical hardware (such as a camera) to track movements, and software to translate those movements into the desired actions.

Eye trackers (or gaze trackers) allow users to control the cursor and interact with the computer using their eyes instead of the mouse. They are commonly used by people with very limited mobility. Eye tracking is implemented with the right combination of hardware and software. Some eye trackers are worn on the head, others are positioned on top of the screen or device being used, and some track magnetic dots placed on contact lenses. In all cases, they allow the cursor to follow the gaze of the user. Dwelling (staring at a certain part of the screen for a longer time) or consciously blinking in a particular way can then be used to "click", for example on icons, buttons, or a virtual keyboard.
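The dwell-to-"click" behaviour described above can be sketched as a simple check over gaze samples: if the gaze stays within a small radius for long enough, a click is emitted at that point. The radius and dwell-time thresholds below are invented for illustration, not taken from any real tracker.

```typescript
interface Gaze { x: number; y: number; t: number } // t in milliseconds

// Returns the click point if the gaze dwelled in one spot, else null.
function dwellClick(
  samples: Gaze[],
  radius: number = 30,
  dwellMs: number = 800
): Gaze | null {
  if (samples.length === 0) return null;
  let anchor = samples[0];
  for (const s of samples) {
    const d = Math.hypot(s.x - anchor.x, s.y - anchor.y);
    if (d > radius) {
      anchor = s;      // gaze moved away: restart the dwell timer here
    } else if (s.t - anchor.t >= dwellMs) {
      return anchor;   // dwelled long enough: "click" at the anchor point
    }
  }
  return null;
}
```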

Motion trackers are commonly used to track head movements (in which case they're sometimes called "head pointers", even though they are not physical pointing devices). They may also track the movement of other body parts through wearable technology, such as gloves. They work similarly to eye trackers, but they require the user to be able to move and control their head or another body part.

Motion trackers were more commonly used in the past, but they have been losing popularity as eye trackers (which can be used by a wider array of people) have increased in accuracy and decreased in price.


Speech input

What is it?

Speech input (also known as speech-to-text or automatic speech recognition and dictation) allows users to interact with a digital device using voice commands, as well as to convert spoken words into text.

Voice-based user interfaces are used in many applications and industries including in-car systems, home automation, medical applications, call centres, aviation, language learning, and gaming. In recent years, they have become a popular way to interact with virtual assistants such as Apple's Siri, Amazon's Alexa and Echo, and Google Assistant.

Speech recognition has been used for quite some time to convert speech into text in telephone systems for Deaf and Hard-of-Hearing people. Voice-controlled interfaces are very useful for blind people and people with limited mobility. Speech input can also help people who have issues with reading and writing due to, for example, learning difficulties, cognitive disabilities, or brain injuries.

Speech input software is available as:

  • Part of most major operating systems.
  • Third-party dictation software.
  • Web services, websites, and browser plugins.
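A voice-control layer ultimately maps recognised transcripts to commands. The sketch below shows the simplest possible version of that mapping; the command phrases and action names are invented for illustration, and real systems use grammars and fuzzy matching rather than exact lookup.

```typescript
// Hypothetical commands a dictation layer might support.
const commands: Record<string, string> = {
  "new line": "insert-newline",
  "delete that": "undo-dictation",
  "stop listening": "pause-recognition",
};

// Normalise the transcript and look it up; anything unrecognised
// would be treated as dictated text rather than a command.
function matchCommand(transcript: string): string | null {
  const normalized = transcript.trim().toLowerCase();
  return commands[normalized] ?? null;
}
```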



If you are interested in hearing in detail how to enhance accessibility in a cost-effective manner, order a recording of our webinar (in Finnish) where the CEO of Annanpura and two experts from Wunder share their wisdom on this topic. By ordering the recording, you will also receive a link to a presentation packed with links to useful tools and websites on the topic.

Get your webinar and presentation links

Would you like us to audit or enhance the accessibility of your digital service?

Send Talvikki a message or fill in the form and we will contact you!
