Goal 

Develop “HearSay3”, a non-visual, context-driven web browser, to make dynamic content accessible and provide speech-enabled control for multimodal interactive browsing.

Usefulness 

10+ million people in the U.S. and 161+ million people worldwide are visually impaired. Speech-enabled browsing can also be used by sighted people, e.g., over a landline phone or with mobile devices.

Features

HearSay3 is a multimodal dialog system with mixed-initiative interaction: the browser does most of the speaking, but users retain control over the dialog. HearSay3 analyzes HTML DOM trees, segments web pages, and generates VoiceXML dialogs (sketched in the first example after this list) to provide features not available in any existing screen reader:

- HearSay3 is controlled with shortcut keys and text/speech commands, interpreted within VoiceXML.
- HearSay3 runs a separate dialog thread for each Firefox tab, allowing users to switch context along with tabs and continue browsing from the position where they left off (see the second sketch below).
- HearSay3 supports context-directed browsing, using the context of a followed link to start reading from the relevant content on the next page.
- HearSay3 enables dynamic updates by updating the VoiceXML dialog context (AJAX-like dialogs), notifying users of update events with earcons (short sound clips), and letting them navigate to the updated content (marked with earcons) and return to their original position (see the third sketch below).
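
The DOM analysis, segmentation, and VoiceXML generation mentioned above can be pictured with the following minimal Java sketch: it walks a parsed DOM tree, treats a few block-level tags as segment boundaries, and emits one VoiceXML form per segment so a page can be read and navigated segment by segment. This is an illustrative sketch, not HearSay3's actual code; the class and method names are hypothetical.

    import java.io.ByteArrayInputStream;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;

    public class VoiceXmlSketch {
        // Tags treated as segment boundaries in this toy segmenter.
        private static final Set<String> SEGMENT_TAGS =
                new HashSet<String>(Arrays.asList("p", "h1", "h2", "li"));

        public static String toVoiceXml(Document page) {
            StringBuilder vxml = new StringBuilder("<vxml version=\"2.0\">\n");
            walk(page.getDocumentElement(), vxml, new int[] {0});
            return vxml.append("</vxml>").toString();
        }

        private static void walk(Node node, StringBuilder vxml, int[] nextId) {
            if (node.getNodeType() == Node.ELEMENT_NODE
                    && SEGMENT_TAGS.contains(node.getNodeName().toLowerCase())) {
                String text = node.getTextContent().trim();
                if (text.length() > 0) {
                    vxml.append("  <form id=\"seg").append(nextId[0]++)
                        .append("\"><block>").append(escape(text))
                        .append("</block></form>\n");
                    return;  // segment emitted; do not descend further
                }
            }
            for (Node c = node.getFirstChild(); c != null; c = c.getNextSibling()) {
                walk(c, vxml, nextId);
            }
        }

        private static String escape(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;");
        }

        public static void main(String[] args) throws Exception {
            String xhtml = "<html><body><h1>News</h1><p>Hello, world.</p></body></html>";
            Document page = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xhtml.getBytes("UTF-8")));
            System.out.println(toVoiceXml(page));
        }
    }

The real system's segmentation and generated dialogs are of course far richer; the sketch only shows the direction of the pipeline, from DOM tree to speakable VoiceXML.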
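
The per-tab dialog threads can be sketched the same way: one thread per Firefox tab, each remembering its reading position, suspended when its tab loses focus and resumed when the tab is re-selected. Again, this is a hypothetical illustration, not the project's implementation.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class TabDialogs {
        /** One reading dialog per tab; remembers where reading stopped. */
        static class DialogThread extends Thread {
            private final String tabId;
            private final String[] segments;       // page content, segmented
            private int position = 0;              // resume point
            private volatile boolean active = false;

            DialogThread(String tabId, String[] segments) {
                this.tabId = tabId;
                this.segments = segments;
            }

            synchronized void activate() { active = true; notify(); }
            void deactivate() { active = false; }

            public void run() {
                while (position < segments.length) {
                    synchronized (this) {
                        while (!active) {
                            try { wait(); } catch (InterruptedException e) { return; }
                        }
                    }
                    speak(segments[position++]);   // TTS call, stubbed out here
                }
            }

            private void speak(String text) {
                System.out.println("[" + tabId + "] " + text);
            }
        }

        private final Map<String, DialogThread> dialogs =
                new ConcurrentHashMap<String, DialogThread>();
        private DialogThread current;

        /** Called from the UI when the user switches Firefox tabs. */
        public void switchTo(String tabId, String[] segmentsIfNew) {
            if (current != null) current.deactivate();  // pause old tab's dialog
            DialogThread d = dialogs.get(tabId);
            if (d == null) {
                d = new DialogThread(tabId, segmentsIfNew);
                dialogs.put(tabId, d);
                d.start();
            }
            current = d;
            d.activate();   // resume at the saved position
        }
    }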
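
Finally, the dynamic-update feature can be viewed as event handling over the dialog: each update is queued and announced with an earcon, and the user can jump to updated content and then return to the original reading position. A hypothetical sketch under the same assumptions:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class UpdateNavigator {
        private final Deque<Integer> updates = new ArrayDeque<Integer>();
        private int position;            // current reading position (segment index)
        private int savedPosition = -1;  // where to return after reviewing updates

        /** Called when an AJAX-style update changes the given segment. */
        public void onUpdate(int segment) {
            updates.addLast(segment);
            playEarcon("update");        // sound clip announcing the update event
        }

        /** Jump to the next updated segment, remembering where we were. */
        public void gotoNextUpdate() {
            if (updates.isEmpty()) return;
            if (savedPosition < 0) savedPosition = position;
            position = updates.removeFirst();
            playEarcon("marker");        // earcon marking the updated content
        }

        /** Return to the position where reading was interrupted. */
        public void returnToOrigin() {
            if (savedPosition >= 0) {
                position = savedPosition;
                savedPosition = -1;
            }
        }

        private void playEarcon(String name) {
            System.out.println("(earcon: " + name + ")");  // audio stubbed out
        }
    }
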
Requirements

Firefox 2, Java 5/6, and the Microsoft text-to-speech and speech-recognition engines (Windows Vista is recommended).

The HearSay project is funded by the National Science Foundation.
