Image-Gesture-Voice: A Web Component for Eliciting Speech

Publication Type: Conference Paper
Year of Publication: 2018
Authors: Bettinson, M., & Bird, S.
Published in: Proceedings of the LREC 2018 Workshop
Keywords: crowdsourcing, language documentation, mobile apps, procedural discourse, web technologies
Abstract

We describe a reusable Web component for capturing talk about images. A speaker is prompted with a series of images and talks about each one while adding gestures. Others can watch the audio-visual slideshow and navigate forwards and backwards by swiping on the images. The component supports phrase-aligned respeaking, translation, and commentary. This work extends the method of Basic Oral Language Documentation by prompting speakers with images and capturing their gestures. We show how the component is deployed in a mobile app for collecting and sharing know-how, developed in consultation with indigenous groups in Taiwan and Australia. We focus on food preparation practices since this is an area where people are motivated to preserve and disseminate their cultural and linguistic heritage.
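The abstract describes the component's behaviour rather than its code. A minimal sketch of how such a component could be built with standard Web APIs is below; the element name igv-recorder, the images attribute, and all internal fields are illustrative assumptions, not the paper's actual API. The sketch records microphone audio with MediaRecorder while logging time-stamped pointer strokes over the current image, and interprets horizontal swipes as slideshow navigation, matching the behaviour described above.

```ts
// Hypothetical sketch only: names and payloads are assumptions, not the paper's API.
interface GesturePoint { x: number; y: number; t: number } // t = ms since recording start

class IgvRecorder extends HTMLElement {
  private images: string[] = [];
  private index = 0;
  private img!: HTMLImageElement;
  private recorder?: MediaRecorder;
  private chunks: Blob[] = [];
  private strokes: GesturePoint[][] = [];
  private startTime = 0;
  private swipeX = 0;

  connectedCallback() {
    // Image URLs supplied via an attribute, e.g. images="step1.jpg,step2.jpg" (assumed).
    this.images = (this.getAttribute('images') ?? '').split(',').filter(Boolean);
    const root = this.attachShadow({ mode: 'open' });
    this.img = document.createElement('img');
    this.img.style.touchAction = 'none'; // handle pointer gestures ourselves
    root.append(this.img);
    this.show(0);

    // Gesture strokes are stored as time-stamped points so that playback
    // can replay them in sync with the recorded audio.
    this.img.addEventListener('pointerdown', (e) => {
      this.swipeX = e.clientX;
      this.strokes.push([this.point(e)]);
    });
    this.img.addEventListener('pointermove', (e) => {
      if (e.buttons) this.strokes.at(-1)?.push(this.point(e));
    });
    // A horizontal swipe navigates forwards or backwards through the images.
    this.img.addEventListener('pointerup', (e) => {
      const dx = e.clientX - this.swipeX;
      if (dx < -50) this.show(this.index + 1);
      else if (dx > 50) this.show(this.index - 1);
    });
  }

  private point(e: PointerEvent): GesturePoint {
    const r = this.img.getBoundingClientRect();
    return { x: (e.clientX - r.left) / r.width,   // normalised image coordinates
             y: (e.clientY - r.top) / r.height,
             t: performance.now() - this.startTime };
  }

  private show(i: number) {
    this.index = Math.max(0, Math.min(i, this.images.length - 1));
    this.img.src = this.images[this.index];
  }

  async startRecording() {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    this.recorder = new MediaRecorder(stream);
    this.recorder.ondataavailable = (e) => this.chunks.push(e.data);
    this.startTime = performance.now();
    this.recorder.start();
  }

  stopRecording(): Promise<{ audio: Blob; strokes: GesturePoint[][] }> {
    return new Promise((resolve) => {
      this.recorder!.onstop = () =>
        resolve({ audio: new Blob(this.chunks, { type: 'audio/webm' }),
                  strokes: this.strokes });
      this.recorder!.stop();
    });
  }
}

customElements.define('igv-recorder', IgvRecorder);
```

On a page, <igv-recorder images="step1.jpg,step2.jpg"></igv-recorder> would then present the slideshow; because each stroke point carries a timestamp relative to the audio, later passes such as respeaking, translation, or commentary could in principle be aligned against the same timeline.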

Acknowledgment

This research was supported by NSF grant 1464553, Language Induction meets Language Documentation: Leveraging bilingual aligned audio for learning and preserving.

URL: http://lrec-conf.org/workshops/lrec2018/W26/pdf/book_of_proceedings.pdf#page=11
ICSI Research Group: AI