February 4, 2023



Adding one line of code can make some interactive visualizations accessible to screen-reader users — ScienceDaily

Interactive visualizations have changed the way we understand our lives. For example, they can show the number of coronavirus infections in each state.

But these graphics often aren't accessible to people who use screen readers, software programs that scan the contents of a computer screen and make the contents available through a synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity.

University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that, with one additional line of code, allows people to interact with visualizations. VoxLens users can receive a high-level summary of the information described in a graph, listen to a graph translated into sound or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value.

The team presented this project May 3 at CHI 2022 in New Orleans.

"If I'm looking at a graph, I can pull out whatever information I am interested in; maybe it's the overall trend or maybe it's the maximum," said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want."

Screen readers can inform users about the text on a screen because it's what researchers call "one-dimensional information."

"There's a start and an end of a sentence and everything else comes in between," said co-senior author Jacob O. Wobbrock, UW professor in the Information School. "But as soon as you move things into two-dimensional spaces, such as visualizations, there's no clear start and finish. It's just not structured in the same way, which means there's no obvious entry point or sequencing for screen readers."

The team started the project by working with five screen-reader users with partial or complete blindness to figure out how a potential tool could work.

"In the field of accessibility, it's really important to follow the principle of 'nothing about us without us,'" Sharif said. "We're not going to build something and then see how it works. We're going to build it taking users' feedback into account. We want to build what they need."

To implement VoxLens, visualization designers only need to add a single line of code.

"We didn't want people to jump from one visualization to another and experience inconsistent information," Sharif said. "We made VoxLens a public library, which means that you're going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest."
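As a rough sketch of what that integration could look like, here is how a designer rendering a chart with D3 might attach VoxLens. The exact call signature, option names and data shape shown here are assumptions for illustration; the project's GitHub repository documents the actual API.

```javascript
import voxlens from 'voxlens';

// The data already being plotted by the visualization library (e.g., D3).
// Hypothetical example values, not real case counts.
const data = [
  { x: 'Washington', y: 14200 },
  { x: 'Oregon', y: 8300 },
  { x: 'Idaho', y: 2100 },
];

// The DOM element that contains the rendered chart.
const container = document.getElementById('chart');

// The single added line: VoxLens wires up the spoken summary,
// sonification and voice-command modes for this visualization.
voxlens('d3', container, data, {
  xLabel: 'State',
  yLabel: 'Confirmed cases',
  title: 'Coronavirus infections by state',
});
```

Because VoxLens is a shared library rather than per-chart custom code, every visualization instrumented this way exposes the same interaction modes and the same style of summary, which is the consistency Sharif describes.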

The researchers evaluated VoxLens by recruiting 22 screen-reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.

Compared to participants from a previous study who did not have access to this tool, VoxLens users completed the tasks with 122% increased accuracy and 36% decreased interaction time.

"We want people to interact with a graph as much as they want, but we also don't want them to spend an hour trying to find what the maximum is," Sharif said. "In our study, interaction time refers to how long it takes to extract information, and that's why reducing it is a good thing."

The team also interviewed six participants about their experiences.

"We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants were feeling about VoxLens," Sharif said. "We got really positive feedback. Someone told us they've been trying to access visualizations for the past 12 years and this was the first time they were able to do so easily."

Right now, VoxLens only works for visualizations that are created using JavaScript libraries, such as D3, chart.js or Google Charts. But the team is working on expanding to other popular visualization platforms. The researchers also acknowledged that the voice-recognition system can be frustrating to use.

"This work is part of a much larger agenda for us: removing bias in design," said co-senior author Katharina Reinecke, UW associate professor in the Allen School. "When we build technology, we tend to think of people who are like us and who have the same abilities as we do. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it and people are left out. It's really important that we start thinking more about how to make technology useful for everybody."

Additional co-authors on this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.

Code is available on GitHub: https://github.com/athersharif/voxlens