Wednesday, July 29, 2009

Computer science professor tasks his students with making phones, computers more accessible to blind or deaf people

From Xconomy.com in Seattle:

Think about the technological tools you use most often. For many of us, cell phones and computers rank high on that list. But these devices are designed with the hearing and sighted in mind, and are constantly evolving, so there are numerous hurdles to clear to make a phone or a computer usable by the blind or deaf.

The University of Washington’s Richard Ladner, along with his students in the computer science department, is using engineering and computational tools to work on several of these hurdles—and the commercial applications could have far-ranging impact.

“When you think about a person with a disability, such as a blind person, most people think that’s a medical problem,” he said in a recent interview. “Just restoring the human function may be a solvable problem, but probably not for a long time. But maybe there’s another way to get the same thing done, to allow a person to read a book or talk to their family. So thinking non-medically, as an engineer, there are other ways to solve these problems.”

Ladner, who was born to two deaf parents, also believes that technologies developed for the blind and deaf may eventually lead to broader technological advancements—not such a far-fetched idea, as it’s happened before. Mobile GPS was originally developed as an aid for the blind, Ladner said, as was optical character recognition, a technology developed in the 1960s to turn an image of text (such as a photo of a book page) into digital text, which would then be read aloud by speech synthesizers. Now, the same technology is ubiquitous in turning pictures of text into digital text; Google uses it to digitize books.

Ladner used to work on computational theory before shifting to accessible technology in 2002. He is now trying to take his oldest project on accessibility for the deaf, MobileASL, to the market. This project uses video compression technology to enable signing over video cell phones on low-bandwidth wireless networks (such as those in the U.S.). Currently, deaf people can’t reliably use video cell phones to communicate using sign language, because the videos are too choppy to be intelligible. Ladner and his colleagues are working with UW TechTransfer on commercializing MobileASL.
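The core idea behind making sign-language video work on a low-bandwidth link is spending the scarce bits where they matter most to a signer's conversation partner. As a toy illustration only (the region names, weights, and bitrate below are invented for this sketch, not MobileASL's actual encoder), here is how a fixed bitrate budget might be split unevenly across regions of a frame:

```python
# Toy illustration of region-of-interest bit allocation for sign-language
# video: spend more of a fixed bitrate budget on the regions a viewer
# watches most (face and hands), less on the background. The weights and
# the 30 kbps budget are made-up numbers for this sketch.

def allocate_bits(regions, total_kbps):
    """Split a bitrate budget across regions in proportion to their weight.

    regions: list of (name, weight) pairs; higher weight = more important.
    Returns a dict mapping region name -> kbps.
    """
    total_weight = sum(weight for _, weight in regions)
    return {name: total_kbps * weight / total_weight
            for name, weight in regions}

budget = allocate_bits(
    [("face", 5), ("hands", 3), ("background", 1)],
    total_kbps=30,  # roughly the scale of a low-bandwidth cellular link
)
```

The point of the sketch is only the trade-off: under a hard bandwidth cap, intelligibility comes from prioritizing the visually important regions rather than encoding the whole frame uniformly.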

“We’re trying to get it out and get it in actual use,” he said. “It’s in high demand. I get hundreds of e-mails about it.”

Although designed with the deaf in mind, MobileASL could be used by anyone who wants better-quality video phone calls, Ladner said. Bringing it to market is complicated somewhat by the fact that wireless carriers, cell phone manufacturers, and video relay service companies (which provide government-subsidized assistance to allow phone calls between a deaf person and a hearing person) all have to coordinate to some extent to make the technology work. Ladner’s group is in conversation with all three types of companies.

Even though there are only about 1 million American Sign Language (ASL) users in the U.S., Ladner still believes the technology has the potential to succeed commercially. Similar services are already on the market in Sweden and Japan.

“There’s always this issue. Do you want something to be an iPhone-level success, or go into a smaller market and have a bigger impact there?” Ladner said. “Venture capitalists and entrepreneurs always think about the next iPhone, but I think there are a lot of smaller things with good markets too.”

The project that changed Ladner’s research focus to accessible technologies is the Tactile Graphics Project, which employs various technologies to emboss images (such as textbook figures), creating tactile “pictures” for the blind. One aspect of this project converts text within figures to braille or speech, and this technology could also be used to automatically translate figures that include text in one language into other languages. Ladner is also very excited about a project that one of his graduate students, Jeff Bigham, has spearheaded. WebInSight improves accessibility of the Web to the blind and includes the program WebAnywhere, which is software that converts the text on any website into speech.
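The first step a web-to-speech tool like WebAnywhere must perform is pulling the readable text out of a page's HTML before handing it to a speech synthesizer. A minimal sketch of that step, using only Python's standard library (the `speak` function is a placeholder stub, not WebAnywhere's actual API):

```python
# Minimal sketch of extracting a page's readable text, the first stage of
# a web-to-speech pipeline. Uses only the standard library; `speak` is a
# stand-in for a real text-to-speech engine.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, skipping <script> and <style> blocks."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def page_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

def speak(text):
    # Placeholder: a real tool would send this to a speech synthesizer.
    print(text)

speak(page_text("<html><body><h1>Hello</h1><script>x=1</script>"
                "<p>World</p></body></html>"))
```

A real system also has to handle reading order, links, and form controls, which is where most of the engineering effort goes; the sketch covers only the text-extraction core.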

Ladner’s newest project is an educational and social networking site for deaf students of math, science, and engineering. One component of the site is technology to improve interpretation for deaf students in hearing classes: interpreters familiar with the subject material would transcribe lectures, and the captions would appear alongside the presentation slides on a student’s laptop. There’s also a new section of the site devoted to cataloging signs for scientific and technical terms that don’t yet exist in ASL. Currently, deaf students and their interpreters may invent signs for specialized terms, but there’s no way to communicate those new signs to the whole deaf community.

“It’s a way for the language to grow even though the people using the language are rather dispersed,” Ladner said.

Ladner said the big shift in his research focus after a relatively established career in computational theory is keeping him young. “Since I changed to accessible technology, I’m just in huge demand, I get phone calls every day,” he said. “It’s like there was something pent up there, a real need for this. Plenty of people are doing computational theory, but hardly anyone is doing this.”