Sonivivi is an interactive mobile application that helps people who are partially blind, or who have recently lost their eyesight, become more self-sufficient in daily life. Because Sonivivi is intended for the visually impaired, the interface does not rely on sight; instead it uses gestural input and audio feedback to navigate a three-dimensional audio space, improving the user's listening skills.
Sonivivi offers an enjoyable and less labor-intensive alternative to traditional methods of learning. It facilitates a self-directed and empathetic learning experience, and can be used as a support tool both by educational organizations and by parents of visually impaired children.
Touch-screen devices are becoming increasingly ubiquitous in our everyday lives due to the flexibility and engagement they provide. Even though the newest devices include assistive features such as voice-over, their usage among the partially blind is still very limited. This project explores the potential of using existing technologies and platforms to facilitate new ways of learning amongst new user groups.
Current education for the partially blind consists of a range of training exercises for navigation, direction and distance. However, these exercises rely heavily on a teacher or trainer to execute them, do not harness the power of technology, and leave little room for flexible or spontaneous learning. At the same time, there is a need for a more quantifiable and systematic gauge to help the partially blind develop and improve their listening skills.
Sonivivi uses 3D stereo sound as a tool to enhance the ability of hearing for two main purposes: (1) provide experiential training for the partially blind to learn the fundamentals of navigation, distance and direction, and (2) accelerate the user's learning curve by providing real-time audio feedback about their actions that is direct and easy to understand.
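The feedback described above depends on knowing where a virtual sound source sits relative to the listener. As a minimal sketch (my own illustration, not Sonivivi's actual implementation), the two quantities the app would need — distance and relative azimuth — can be computed from 2D positions and a facing direction:

```python
import math

def source_cues(listener, heading_deg, source):
    """Return (distance, azimuth_deg) of a sound source relative to a listener.

    listener, source: (x, y) positions in meters.
    heading_deg: direction the listener faces; 0 = +y axis, clockwise positive.
    Azimuth is relative to the heading, in [-180, 180):
    0 = straight ahead, +90 = directly to the right.
    """
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    distance = math.hypot(dx, dy)
    # Bearing of the source in world coordinates (0 = +y, clockwise).
    bearing = math.degrees(math.atan2(dx, dy))
    # Relative azimuth, wrapped into [-180, 180).
    azimuth = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return distance, azimuth
```

For example, a source two meters straight ahead yields `(2.0, 0.0)`, and a source three meters to the listener's right yields `(3.0, 90.0)`; either value can then drive spoken or tonal feedback.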
Sonivivi has been prototyped and tested with a partially blind student (who had never used a touch-screen device before) and her teacher. Thanks to Sonivivi's assisted interface, after a quick introduction the student was able to progress fluently through all the levels of the application by herself and greatly enjoyed the experience, thus challenging the misconception that the partially blind cannot use a touch-screen device because of its dependency on a GUI.
Essentially, Sonivivi provides an opportunity for self-training for the partially blind that can complement formal educational sessions. For teachers, Sonivivi is a valuable extension of their toolset, as it (1) establishes a common ground to compare students' results, (2) helps them better personalize their learning programmes, and (3) offers an affordable and more flexible alternative to existing single-purpose devices.
For students, Sonivivi not only helps improve their listening skills as they complete levels, but also gives them a sense of achievement and a boost in self-confidence. Furthermore, Sonivivi aims to motivate and engage students through its playful interface, which is especially important for those who have just lost their sight.
Sonivivi was developed through an iterative prototyping process, during which I involved a range of stakeholders, including partially blind people, completely blind people, caretakers and teachers. This people-centred approach gave me deep insight into the behaviours of my potential users, which meant the final Sonivivi concept and prototype were shaped around genuine needs and desires. Some of the key design decisions made during this process were:
A simple application for studying how people recognize the distance and direction of sound showed that a sophisticated method of generating 3D sound was needed, and that using binaural sound greatly improved people's perception of the space.
The graphical user interface evolved from simple graphics to a polished, high-contrast design that was easily readable by the partially blind.
The voice-over, initially used only for instructions and distance feedback, proved essential in the final prototypes for easing navigation within the application and encouraging users to explore all the available options.
At the beginning, gestures were used only for moving sounds around the space. In the final application, however, they were also used for navigation feedback inspired by real-world metaphors.
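The finding that binaural sound beats plain stereo panning can be illustrated with the two classic interaural cues. The sketch below is my own simplified model, not Sonivivi's audio engine: it estimates the interaural time difference with the Woodworth approximation (assumed head radius 8.75 cm) and combines inverse-distance attenuation with a crude constant-power level difference between the ears:

```python
import math

HEAD_RADIUS = 0.0875    # average human head radius in meters (assumption)
SPEED_OF_SOUND = 343.0  # m/s in air

def binaural_cues(azimuth_deg, distance):
    """Estimate binaural cues for a source at a given azimuth and distance.

    Returns (itd_seconds, left_gain, right_gain). Positive azimuth means
    the source is on the listener's right, so the right channel is louder
    and the sound arrives there earlier.
    """
    theta = math.radians(azimuth_deg)
    # Woodworth approximation of the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
    # Inverse-distance attenuation, clamped so nearby sources don't clip.
    gain = 1.0 / max(distance, 1.0)
    # Map azimuth to a lateral pan position in [-1, 1], then apply
    # constant-power panning so overall loudness stays roughly even.
    pan = math.sin(theta)
    phi = (pan + 1.0) * math.pi / 4.0
    return itd, gain * math.cos(phi), gain * math.sin(phi)
```

A source dead ahead gives a zero time difference and equal gains, while a source at 90° to the right gives an ITD of roughly 0.66 ms — about the maximum a human head produces, which is what makes binaural cues so much easier to localize than simple left/right volume shifts.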