HearSay News: HearSay3 beta to be released soon
Goal
Develop "HearSay3", a non-visual, context-driven web browser that makes dynamic content accessible and provides speech-enabled control for multimodal interactive browsing.
Usefulness
More than 10 million people in the U.S., and more than 161 million worldwide, are visually impaired. Speech-enabled browsing is also useful to sighted people, e.g. over a landline phone or on a mobile device.
Features
HearSay3 is a multimodal dialog system with mixed-initiative interaction: the browser does most of the speaking, but users retain control over the dialog. HearSay3 analyzes HTML DOM trees, segments web pages, and generates VoiceXML dialogs, providing features not available in any existing screen reader. It is controlled with shortcut keys and with text/speech commands interpreted within VoiceXML. HearSay3 runs a separate dialog thread for each Firefox tab, so users can switch context by switching tabs and resume browsing from the position where they left off. HearSay3 supports context-directed browsing and requires Firefox 2, Java 5/6, and Microsoft text-to-speech and speech recognition (Windows Vista is recommended). The HearSay project is funded by the National Science Foundation.
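To make the DOM-to-VoiceXML idea above concrete, here is a minimal sketch (not HearSay3's actual pipeline) in Python: it walks an HTML page with the standard-library parser, collects the links, and emits a VoiceXML `<menu>` in which each link becomes a spoken choice. All names (`LinkCollector`, `links_to_vxml_menu`, the sample page) are illustrative assumptions, not part of the HearSay codebase.

```python
# Sketch: turn the links on an HTML page into a VoiceXML menu,
# so a speech browser can read them out and accept a spoken choice.
from html.parser import HTMLParser
from xml.sax.saxutils import escape, quoteattr


class LinkCollector(HTMLParser):
    """Walks the HTML DOM and collects (anchor text, href) pairs."""

    def __init__(self):
        super().__init__()
        self.links = []      # list of (text, href) tuples
        self._href = None    # href of the anchor currently open, if any
        self._buf = []       # text fragments inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._buf = []

    def handle_data(self, data):
        if self._href is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = " ".join("".join(self._buf).split())
            if text:
                self.links.append((text, self._href))
            self._href = None


def links_to_vxml_menu(html):
    """Emit a VoiceXML <menu> offering each page link as a choice."""
    parser = LinkCollector()
    parser.feed(html)
    choices = "\n".join(
        f"    <choice next={quoteattr(href)}>{escape(text)}</choice>"
        for text, href in parser.links
    )
    return (
        '<vxml version="2.1">\n'
        "  <menu>\n"
        "    <prompt>Say one of the links.</prompt>\n"
        f"{choices}\n"
        "  </menu>\n"
        "</vxml>"
    )


page = '<p>See <a href="/pubs">Publications</a> or <a href="/contact">Contact Us</a>.</p>'
print(links_to_vxml_menu(page))
```

A real pipeline would first segment the page into coherent blocks and generate one dialog per block; this sketch collapses that step and handles only anchors, but the shape of the output (prompt plus `<choice>` elements) is the same kind of VoiceXML dialog the text describes.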