In an era in which the world's population aged 65 and older numbers nearly 500 million people, the accessible design of information is increasingly important. Closed-captioning, originally designed to make televised information accessible to deaf and hearing-impaired individuals, now benefits millions of people with a wide range of abilities. The United States, which has legislated captioned television for over thirty years, can be considered a world leader in the practice, one from which many countries, including Japan, may learn. Yet even in the U.S. there is no effective style of captioning for elders and people with low vision. The captioning of short clips, such as TV commercials and short videos, is especially lacking.
In our research we are establishing a methodology for captions appropriate for people with low vision, based on the Universal Design process and focusing on TV commercials. In Experiment 1 we examined how participants' visual behavior changed when their visual acuity was artificially reduced while they viewed closed captions, and how reduced visual acuity affected their comprehension of the content. In Experiment 2 we examined participants' eye movements as they watched Japanese closed captions presented at various speeds with no sound.
We found that 1) participants tended not to look at the captions when their visual acuity was reduced, making the captions more difficult to read; 2) for participants with visual acuity lower than 0.25, the captions were unreadable; 3) caption speed had a stronger effect on participants with diminished visual acuity, who often could not read the captions; and 4) eye movement is a valid index for studying viewers' viewing behavior and comprehension of film clips. The results also indicate that current closed captions are unsuitable for viewers with reduced visual acuity, even when the reduction is slight.
These results show that we must develop closed captions that take low-vision viewers into account. They also show that total fixation time on captions, calculated from eye-movement data, is an especially valuable measure for studying viewers' visual behavior. This technology will not only aid the captioning process but will also make the growing areas of new and online media accessible to millions of people.
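As an illustration of the measure mentioned above, the following is a minimal sketch (not the authors' analysis code) of how total fixation time on a caption region might be computed from eye-tracking fixation records; the data layout and the caption bounding box are assumptions made for this example only.

```python
# Minimal sketch: total fixation time on a caption area of interest (AOI).
# Assumptions (not from the paper): fixations arrive as (x, y, duration_ms)
# tuples from an eye tracker, and the caption occupies a fixed on-screen rectangle.

from typing import Iterable, Tuple

Fixation = Tuple[float, float, float]  # (x, y, duration in milliseconds)

CAPTION_AOI = (0, 620, 1280, 720)  # (left, top, right, bottom) in pixels; assumed layout


def total_fixation_time(fixations: Iterable[Fixation],
                        aoi: Tuple[float, float, float, float] = CAPTION_AOI) -> float:
    """Sum the durations of fixations whose point falls inside the caption AOI."""
    left, top, right, bottom = aoi
    return sum(d for x, y, d in fixations
               if left <= x <= right and top <= y <= bottom)


# Example: three fixations, two of which land on the caption region.
print(total_fixation_time([(640, 650, 300), (640, 400, 250), (900, 700, 180)]))  # 480.0
```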