Screenless Typing

A keyboard for the blind and visually impaired community.


Overview

This project is the outcome of research work carried out with Dr. Davide Bolchini at the U.S.E.R lab at IUPUI. We are exploring efficient ways of typing on a smartphone for the blind and visually impaired community. The project is still in the research phase, and we are following an iterative design process to find an efficient solution, consistently gathering input from expert users to improve the system.

My Role

I am leading the user experience research and design for Screenless Typing. I developed the first version of the prototype on Android and tested it with blind users, which highlighted how readily they adapted to the design. I also edited the project videos that demonstrate the concept to a larger audience.

Timeline

August 2016 - Present

Problem

According to the WHO, there are 283 million blind and visually impaired people worldwide. While using smartphones, BVI users have to spend unnecessary cognitive and mechanical effort browsing through menus and typing even the smallest of words. Voice access such as Google Assistant and Siri does help, but it introduces security and privacy concerns. How might we design a keyboard for the BVI community that is efficient, accurate, and secure to use?

How efficiently could you type with one hand and your eyes closed while walking?

Process

The idea for this project grew out of Screenless Browsing; research from that project helped me a lot and gave me a head start in understanding blind and visually impaired people better. My research focused on identifying the current solutions the BVI community uses, the different scenarios in which they type, and the major problems with those solutions.

Research insights and key problems

Blind people's listening comprehension is about 2.7x faster than sighted people's.
Blind people can understand spoken language at up to 22 syllables per second, compared to about 8 syllables per second for normally sighted listeners. [1]

Touchscreen keyboards are cumbersome to use
The accessibility keyboards provided by Android and iOS are slow and inefficient, which discourages typing.

It’s hard to type while walking
While walking, blind people hold a cane in one hand, which leaves only one hand for the smartphone, and typing with one hand is difficult.

Prototype

We created two initial prototypes so that we could get early feedback from users on the concept. Both combine hand gestures for input with an ephemeral auditory keyboard: characters are spoken aloud in a continuous "keyflow", and the user performs a gesture to select the character they just heard.

Below are videos of me typing the word MARSH using the two prototypes.

Alphabetical layout

Advantages:

  • Users already know the alphabetical order, so they don’t need to give the keyflow their full attention, which reduces cognitive load.
  • Users quickly learn the flow, which increases typing efficiency.

Disadvantages:

  • Typing takes longer.

Probabilistic layout

Advantages:

  • Efficient and quick for typing (the time to type MARSH is significantly shorter in the video above); a sketch of the reordering idea follows this list.

Disadvantages:

  • The keyflow is unfamiliar and reorders after every selected letter, so users have to give it their full attention, which increases cognitive load.
  • Hard to learn and get used to.
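
How the probabilistic layout reorders letters is not detailed here, but the core mechanic can be sketched: after each selection, the alphabet is reordered so the statistically most likely next letters are spoken first. A minimal sketch in Java (the prototype's platform), assuming a simple bigram frequency model; all class and method names are mine, not the prototype's:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: reorder the keyflow so letters likely to follow the last
// typed letter are spoken first. A real system would use a trained model.
public class ProbabilisticKeyflow {

    // bigram[a][b] = how often letter b follows letter a in a text corpus.
    private final int[][] bigram = new int[26][26];

    public void train(String corpus) {
        String s = corpus.toUpperCase();
        for (int i = 0; i + 1 < s.length(); i++) {
            char a = s.charAt(i), b = s.charAt(i + 1);
            if (a >= 'A' && a <= 'Z' && b >= 'A' && b <= 'Z') {
                bigram[a - 'A'][b - 'A']++;
            }
        }
    }

    // Order A-Z by likelihood of following 'previous'; most likely first.
    public List<Character> nextKeyflow(char previous) {
        final int row = Character.toUpperCase(previous) - 'A';
        List<Character> letters = new ArrayList<>();
        for (char c = 'A'; c <= 'Z'; c++) letters.add(c);
        letters.sort((x, y) -> bigram[row][y - 'A'] - bigram[row][x - 'A']);
        return letters;
    }
}
```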

Making the prototype

I created the first version of the prototype using Android and MYO's Android SDK. We programmed the prototype because we wanted to present something concrete to expert users, to spark conversation about the concept and see what different ideas would emerge.


Features

Gestures for input

(Figure: hand gestures used for input.)
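
The figure showed the gesture set; the exact mapping from gestures to keyboard actions is not spelled out in the text. As an illustration only, a pose listener built on the Myo Android SDK could route poses to actions roughly like this (the pose-to-action mapping and the KeyflowController interface are my assumptions):

```java
import com.thalmic.myo.AbstractDeviceListener;
import com.thalmic.myo.Myo;
import com.thalmic.myo.Pose;

// Hypothetical interface for the rest of the app; not part of the Myo SDK.
interface KeyflowController {
    void selectCurrent(long timestampMs);
    void deleteLast();
    void restart();
}

public class GestureListener extends AbstractDeviceListener {
    private final KeyflowController keyflow;

    public GestureListener(KeyflowController keyflow) {
        this.keyflow = keyflow;
    }

    @Override
    public void onPose(Myo myo, long timestamp, Pose pose) {
        switch (pose) {
            case FIST:       // assumed mapping: select the character just heard
                keyflow.selectCurrent(timestamp);
                break;
            case WAVE_IN:    // assumed mapping: delete the last character
                keyflow.deleteLast();
                break;
            case DOUBLE_TAP: // assumed mapping: restart the keyflow
                keyflow.restart();
                break;
            default:
                break;       // ignore REST and unrecognized poses
        }
    }
}
```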

Chunks of 5


Characters are grouped into chunks of five: after every five letters are spoken, the keyflow takes a pause of a few milliseconds so that the user can process the characters.
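
As a rough sketch of how this chunking might be implemented with Android's TextToSpeech API: letters are queued one at a time, and a short silent utterance is queued after every fifth letter as the processing pause. The pause length and class names below are assumptions, not the prototype's actual values.

```java
import android.speech.tts.TextToSpeech;

// Sketch of the chunked keyflow, assuming an already-initialized
// TextToSpeech instance (API 21+).
public class ChunkedKeyflow {
    private static final int CHUNK_SIZE = 5;
    private static final long PAUSE_MS = 300; // pause length is an assumption

    private final TextToSpeech tts;

    public ChunkedKeyflow(TextToSpeech tts) {
        this.tts = tts;
    }

    public void speak(char[] keyflowOrder) {
        for (int i = 0; i < keyflowOrder.length; i++) {
            // The index doubles as the utterance id, so progress callbacks
            // can track which character is currently being spoken.
            tts.speak(String.valueOf(keyflowOrder[i]),
                      TextToSpeech.QUEUE_ADD, null, String.valueOf(i));
            if ((i + 1) % CHUNK_SIZE == 0) {
                // Queue a silent gap after every chunk of five letters.
                tts.playSilentUtterance(PAUSE_MS, TextToSpeech.QUEUE_ADD,
                                        "pause-" + i);
            }
        }
    }
}
```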

Earcons

The select earcon gives the user feedback when they select a character from the keyflow.

The delete earcon gives feedback when the user deletes a selected character.

To signal that the flow has started repeating from the beginning, we provided a third, repeat earcon.
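
A minimal sketch of how such earcons could be wired up with Android's SoundPool, assuming three short clips bundled as raw resources (the resource names are placeholders, not the prototype's actual assets):

```java
import android.content.Context;
import android.media.SoundPool;

// Sketch only: one short, distinct audio clip per feedback event.
public class Earcons {
    private final SoundPool pool;
    private final int selectId, deleteId, repeatId;

    public Earcons(Context context) {
        pool = new SoundPool.Builder().setMaxStreams(2).build();
        // R.raw.* resource names below are assumed for illustration.
        selectId = pool.load(context, R.raw.earcon_select, 1);
        deleteId = pool.load(context, R.raw.earcon_delete, 1);
        repeatId = pool.load(context, R.raw.earcon_repeat, 1);
    }

    public void playSelect() { pool.play(selectId, 1f, 1f, 1, 0, 1f); }
    public void playDelete() { pool.play(deleteId, 1f, 1f, 1, 0, 1f); }
    public void playRepeat() { pool.play(repeatId, 1f, 1f, 1, 0, 1f); }
}
```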

Smart Rewind

The keyflow moves fast: by the time the user hears the character they want, performs the gesture, MYO recognizes it, and Android gives feedback, the flow has already passed that character. To overcome this difficulty, we implemented a smart rewind feature, which goes back and selects the character the user actually intended.
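
The text doesn't specify how smart rewind picks the intended character; one plausible reading is that it compensates for the reaction-plus-recognition latency by selecting the letter that was being spoken a fixed window before the gesture arrived. A sketch under that assumption (the latency constant is illustrative, not the study's measured value):

```java
// Sketch of smart rewind: instead of selecting the character being spoken
// when the gesture arrives, select the one playing LATENCY_MS earlier.
public class SmartRewind {
    private static final long LATENCY_MS = 600; // assumed reaction + recognition time

    private final char[] keyflowOrder; // order the letters are spoken in
    private final long[] spokenAtMs;   // timestamp each letter started playing

    public SmartRewind(char[] keyflowOrder, long[] spokenAtMs) {
        this.keyflowOrder = keyflowOrder;
        this.spokenAtMs = spokenAtMs;
    }

    // Given the gesture timestamp, rewind to the letter that was playing
    // LATENCY_MS earlier and return it as the intended selection.
    public char resolve(long gestureAtMs) {
        long target = gestureAtMs - LATENCY_MS;
        int best = 0;
        for (int i = 0; i < spokenAtMs.length; i++) {
            if (spokenAtMs[i] <= target) best = i; // last letter that started
        }                                          // before the target time
        return keyflowOrder[best];
    }
}
```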

Suggestions

After the user selects three characters, the system starts offering word suggestions to save time.
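
How the prototype generates and speaks suggestions is not described here; a minimal sketch of prefix-based suggestions, assuming a frequency-sorted word list, might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: suggest the most common words matching the typed prefix.
public class Suggestions {
    private final List<String> wordsByFrequency; // most common words first

    public Suggestions(List<String> wordsByFrequency) {
        this.wordsByFrequency = wordsByFrequency;
    }

    // Offer suggestions only once the typed prefix is at least 3 letters.
    public List<String> forPrefix(String prefix, int max) {
        List<String> out = new ArrayList<>();
        if (prefix.length() < 3) return out;
        for (String w : wordsByFrequency) {
            if (w.startsWith(prefix)) {
                out.add(w);
                if (out.size() == max) break;
            }
        }
        return out;
    }
}
```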

Plans Ahead


We discussed both prototypes with Imran (blind since birth), our expert user, and received feedback that we plan to integrate next. We are planning a new prototype that incorporates the best features of both.

We recently got the news that our project has been funded by Google, validating the design direction. Will post updates soon!
