Blog – Center for Data Innovation

Training Virtual Assistants for People Who Are Blind


University of Texas professor Danna Gurari and her colleagues have published a dataset of approximately 31,000 images, plus questions and answers about the contents of each image. The dataset is intended to serve as training data for computer vision applications that could help people who are blind or visually impaired interpret images. The data comes from an app called VizWiz that allows users to take pictures with their smartphones and ask volunteer interpreters questions about the image, such as the cost of an item in a store. Each image in the dataset includes a transcription of the question a VizWiz user asked about it and 10 crowdsourced answers from Amazon Mechanical Turk workers.
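The structure described above, one image paired with a question transcription and 10 crowdsourced answers, can be sketched as a simple record type. This is an illustrative assumption of how such an entry might be modeled, not the dataset's actual schema; the field names and the `majority_answer` helper are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical sketch of one VizWiz-style entry. Field names are
# assumptions for illustration, not the published dataset's schema.
@dataclass
class VizWizRecord:
    image_file: str                       # photo taken by the app user
    question: str                         # transcription of the user's question
    answers: list = field(default_factory=list)  # 10 crowdsourced answers

    def majority_answer(self) -> str:
        """Return the most common answer among the crowdsourced responses."""
        return Counter(self.answers).most_common(1)[0][0]

record = VizWizRecord(
    image_file="example_image.jpg",
    question="How much does this item cost?",
    answers=["$4.99"] * 6 + ["unanswerable"] * 4,
)
print(record.majority_answer())  # -> $4.99
```

Aggregating the 10 answers by majority vote is one common way such crowdsourced labels are consolidated for training.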

Get the data.
