Interaction Design

SHOWCASE

"BUGZ"

Eddie Luo
"Othello"

Chih Ming Chen
"Soldier Puzzle"

Amy Jen &
Robert Chung
"Chinese Chess"

Mike Chia &
Karen Chau

 

"SimpleCircuit"

Corey Grimes
"Flash Tracker"

James Sullivan
"Duck Hunt"

Chen-Yo Pao
"Star Wars"

Long Tran

 

SmartStep: A Physical Computing Interface to Drill and Assess Basic Math Skills

Students:
Senior programmer: Sherranette Tinapunan
Clarence Verceles
Edward del Mundo

Interface design by Mike Chen

CIT conference abstract:
In this paper we describe a physical computing application that helps kindergarten and early elementary students learn and practice basic math skills. Like hopscotch or jump rope, the application uses physical activity to reinforce basic math skills such as skip counting, while honing motor skills, pattern recognition, rhythm, and coordination. The hardware component consists of nine differently colored 12" square floor tiles arranged in a 3 x 3 grid, connected to the computer through a MIDI interface. The student interface is a 3 x 3 grid of cells in a table that mirrors the arrangement of the floor tiles; each cell presents a sequence of numbers required to play the math game. Visual and auditory feedback accompanies each hop a student makes on the floor tiles. The teacher's interface lets teachers create math games and set their parameters through easy-to-use forms and pull-down menus.
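As a rough sketch of how the tile-to-computer link might work, the following Python fragment reads MIDI note-on messages and checks each hop against a skip-counting sequence. The mido library, the tile note numbers (60 through 68), and the one-number-per-tile game are illustrative assumptions, not the actual SmartStep implementation:

import mido

# Hypothetical wiring: each floor tile sends a fixed MIDI note number;
# map notes 60-68 to (row, col) cells of the 3 x 3 grid.
TILE_NOTES = {60 + i: (i // 3, i % 3) for i in range(9)}

def run_game(sequence, port_name):
    # Advance through `sequence` (e.g. [2, 4, 6, 8] for skip counting
    # by twos) as the student hops on the matching tiles.
    expected = iter(sequence)
    target = next(expected)
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type != 'note_on' or msg.velocity == 0:
                continue                   # ignore releases and other messages
            cell = TILE_NOTES.get(msg.note)
            if cell is None:
                continue                   # not one of the nine tiles
            row, col = cell
            value = row * 3 + col + 1      # tiles numbered 1-9, left to right
            if value == target:
                print(f"Correct! {value}") # visual feedback per hop
                try:
                    target = next(expected)
                except StopIteration:
                    print("Sequence complete!")
                    return
            else:
                print(f"Try again: expected {target}")

A teacher-facing form would then only need to populate `sequence` and the other game parameters.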


FingerSpell: A Physical Computing Interface to Translate ASL Gestures into Synthesized Speech

Students:
Dmitri Nestrenko
Michael Fu
Shanaz Morshed

CIT conference abstract:
In this paper we discuss an application that translates hand gestures of the American Sign Language (ASL) alphabet into text. FingerSpell addresses the communication barrier faced by the Deaf and hearing impaired by eliminating the need for a third party with knowledge of ASL: the user signs, and the application translates the gestures into letters and words. Using a data glove embedded with multiple input sensors, the application parses each hand and finger gesture signed by the user and detects the unique cases that trigger application responses. Responses come in both visual and auditory forms: the letter corresponding to the gesture appears on the screen and is spoken aloud using computer speech synthesis. The user may also sign an end-of-word gesture, which cues the application to read the preceding string of letters together as one word. This allows a Deaf person to communicate without requiring the other person to understand sign language. In addition, the application can serve as a teaching tool for those learning how to sign. FingerSpell is a stepping stone toward opening up lines of communication for the Deaf and hearing-impaired communities. Future research and advancements can extend the application to cover full sign language (word signs) and the alphabets of other sign languages.
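As an illustration of the glove-to-speech pipeline, the sketch below classifies a frame of glove sensor readings by nearest-neighbor matching against stored templates, buffers letters, and speaks the buffered word when the end-of-word gesture is detected. The template vectors, the five-sensor reading format, and the pyttsx3 speech backend are assumptions for the sketch, not the team's actual design:

import math
import pyttsx3

# One normalized flex reading per finger (thumb..pinky); values are invented.
TEMPLATES = {
    'A': [0.1, 0.9, 0.9, 0.9, 0.9],    # fist with thumb alongside
    'B': [0.8, 0.1, 0.1, 0.1, 0.1],    # flat hand, thumb tucked
    'END': [0.0, 0.0, 0.0, 0.0, 0.0],  # end-of-word gesture (open hand)
}

def classify(reading, threshold=0.5):
    # Return the template label closest to `reading`, or None if no
    # template lies within `threshold` (Euclidean distance).
    best, dist = None, threshold
    for label, template in TEMPLATES.items():
        d = math.dist(reading, template)
        if d < dist:
            best, dist = label, d
    return best

def process_stream(readings):
    engine = pyttsx3.init()
    word = []
    for reading in readings:
        label = classify(reading)
        if label == 'END' and word:
            text = ''.join(word)
            print(text)            # visual feedback: show the word
            engine.say(text)       # auditory feedback: read it as one word
            engine.runAndWait()
            word.clear()
        elif label and label != 'END':
            print(label)           # show the letter on screen
            engine.say(label)      # and speak it aloud
            engine.runAndWait()
            word.append(label)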



Tower of Hanoi: A Computer Vision Application to Teach Advanced Math Concepts

Students:
Brian Scammon
Adam De Nyse
Kurt Simbron

URECA abstract:
Traditional educational multimedia applications have used video, animation, audio, and non-linear interaction to enhance learning. But while multimedia content has been shown to expedite knowledge acquisition and increase retention, pedagogical experts and cognitive psychologists point to two deficits in the traditional keyboard-and-mouse interface when it comes to how children learn: traditional multimedia applications are typically single-user, and they only simulate problem solving. Children learn best collaboratively and by tangibly solving problems.

In this project we designed a computer-mediated learning task based on the traditional mathematics puzzle, the Tower of Hanoi. Targeted at middle and high school students, the application will help them learn advanced concepts, like recursion and graph coloring, as they physically try to solve the puzzle in real time, working in groups of two or three.
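To make the recursion concept concrete, the classic recursive solution the puzzle is built around can be stated in a few lines of Python (this is the textbook algorithm, not project code):

def hanoi(n, src, dst, spare, moves=None):
    # To move n disks from src to dst: park the top n-1 disks on the
    # spare peg, move the largest disk, then restack the n-1 disks.
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, spare, dst, moves)
        moves.append((src, dst))
        hanoi(n - 1, spare, dst, src, moves)
    return moves

print(len(hanoi(3, 'A', 'C', 'B')))  # 7, i.e. 2**3 - 1 moves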

Our approach uses computer vision to track the location of puzzle pieces on a physical game board. Based on the state of the board, the application computes the optimal solution and can provide audio/visual clues to the student while explaining the principle behind the solution.
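As a sketch of that computation, the function below derives the optimal next move from an arbitrary legal board state, which is what the vision component would supply; the disk-to-peg dictionary representation is an assumption for illustration:

def next_move(position, d, target):
    # Optimal next move to bring disks 1..d (1 = smallest) onto `target`.
    # `position` maps each disk number to the peg ('A', 'B', or 'C') that
    # the vision system last saw it on.
    if d == 0:
        return None                      # nothing left to move
    if position[d] == target:
        # Largest disk already in place; solve for the rest, same target.
        return next_move(position, d - 1, target)
    # Disk d must move to target; first park disks 1..d-1 on the spare peg.
    spare = ({'A', 'B', 'C'} - {position[d], target}).pop()
    sub = next_move(position, d - 1, spare)
    return sub if sub is not None else (position[d], target)

# Example: disks 1 and 2 already on peg B, disk 3 on peg A, goal peg C.
state = {1: 'B', 2: 'B', 3: 'A'}
print(next_move(state, 3, 'C'))  # ('A', 'C'): move the largest disk

The same recursion can also drive the explanation the application gives: the largest misplaced disk determines the subgoal for all the smaller disks above it.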

Future work on the project involves the addition of a speech command interface, so students can query the application without having to shift their focus to a GUI.