This content originally appeared on The Keyword and was authored by Tomer Levinboim
Images can be an integral part of many people’s online experiences. We rely on them to help bring news stories to life, see what our family and friends are up to, or decide which couch to buy. However, for the 338 million people who are blind or have moderate to severe vision impairment, knowing what's in a web image that isn’t properly labeled can be a challenge. Screen reader technology relies on content creators and developers manually labeling images to make them accessible through spoken feedback or braille. Yet billions of web images remain unlabeled, rendering them inaccessible to these users.
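For context on what “unlabeled” means in practice: an image is accessible to a screen reader only when the page supplies alternative text for it. The short Python sketch below (purely illustrative, unrelated to Chrome’s implementation) uses the standard-library HTML parser to count images in a snippet of markup that lack usable alt text.

```python
from html.parser import HTMLParser

# Minimal illustrative sketch (not part of Chrome or the feature described
# here): count <img> tags that carry no alt text, i.e. the "unlabeled"
# images that screen readers cannot describe on their own.
class UnlabeledImageCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.total = 0
        self.unlabeled = 0

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        self.total += 1
        alt = (dict(attrs).get("alt") or "").strip()
        if not alt:
            self.unlabeled += 1

page = """
<img src="couch.jpg" alt="A grey three-seat couch in a living room">
<img src="IMG_4521.jpg">
<img src="banner.png" alt="">
"""

counter = UnlabeledImageCounter()
counter.feed(page)
print(f"{counter.unlabeled} of {counter.total} images have no alt text")
```

Run on the three-image snippet above, it reports that two of the images carry no usable alt text; those are the images the feature described here would attempt to describe automatically.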
To help close this gap, the Chrome Accessibility and Google Research teams collaborated on a feature that automatically describes unlabeled images using AI. The feature first launched in 2019 with support for English only and was extended to five more languages in 2020 – French, German, Hindi, Italian and Spanish.
Today, we are expanding this feature to support ten additional languages: Croatian, Czech, Dutch, Finnish, Indonesian, Norwegian, Portuguese, Russian, Swedish and Turkish.
The major innovation behind this launch is a single machine learning model that generates descriptions in each of the supported languages. This makes the experience more equitable across languages: the descriptions generated for the same image in any two languages can often be regarded as translations of each other that respect the image details (Thapliyal and Soricut, 2020).
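The underlying model isn’t published in this post, so the snippet below is only a conceptual sketch: a hypothetical Python stand-in that shows the interface idea of one shared captioner taking an image plus a target-language code, so that descriptions of the same image in any two languages cover the same details. The class name, language codes and canned outputs are all illustrative assumptions, not the production system.

```python
# Conceptual sketch only -- a toy stand-in for the single multilingual
# captioning model described above. A real system decodes from learned
# image features conditioned on a target-language token; this stub just
# returns canned strings to illustrate the one-model, many-languages idea.

SUPPORTED_LANGUAGES = {
    "en", "fr", "de", "hi", "it", "es",   # earlier launches
    "hr", "cs", "nl", "fi", "id", "no",   # added in this launch
    "pt", "ru", "sv", "tr",
}

class ToyMultilingualCaptioner:
    """Hypothetical interface: one model, (image, language) -> description."""

    def describe(self, image_id: str, language: str) -> str:
        if language not in SUPPORTED_LANGUAGES:
            raise ValueError(f"unsupported language: {language}")
        canned = {
            "en": "A grey couch in a living room",
            "fr": "Un canapé gris dans un salon",
        }
        # Same "model", different language token: the outputs are intended
        # to be translations of each other that respect the image details.
        return canned.get(language, f"[{language} description of {image_id}]")

model = ToyMultilingualCaptioner()
print(model.describe("couch.jpg", "en"))
print(model.describe("couch.jpg", "fr"))
```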
Auto-generated image descriptions can be incredibly helpful and their quality has come a long way, but it’s important to note that they still can’t caption every image as well as a human. Our system was built to describe natural images and is unlikely to generate a description for other types of images, such as sketches, cartoons, memes or screenshots. We considered fairness, safety and quality when developing this feature, and we implemented a process to evaluate images and captions along these dimensions before a description is eligible to be shown to users.
We are excited to take this next step towards improving accessibility for more people around the world and look forward to expanding support to more languages in the future.
To activate this feature, first turn on your screen reader (here's how to do that in Chrome). From there, you can enable “Get image descriptions from Google” either from the context menu while browsing a web page or in your browser’s Accessibility settings. Chrome will then automatically generate descriptions for unlabeled web images in your preferred language.