In the previous section, the roles of standard platform manufacturers and third-party special access manufacturers were described. The purpose of this section is to provide an overview of the access work of these two groups and how application software manufacturers can take advantage of this work to solve most of the access issues for their programs. A thorough understanding of this section is necessary in order for application software manufacturers to avoid duplicating effort or solving problems which are best solved at these other levels. It is also important for application software manufacturers to understand these strategies in order to be compatible with them and to understand the aspects of accessibility that are and are not covered by them.
For the purposes of this discussion, the solution strategies which are provided by the standard platform manufacturers and by third-party manufacturers are grouped together and presented by impairment area.
The access strategies used by people with visual impairments fall into two major categories: enlargement of the image on the screen, and presentation of visual information in some other form (e.g., speech or braille). People with low vision generally use both strategies, while people who are completely blind must rely on the second approach.
(Please note: The strategies described below and on the following pages in this section are already provided (or will be) by computer manufacturers, operating systems, or third-party assistive device manufacturers. They are not features that application software designers need to add to their software; only things that they need to be aware of and to facilitate rather than obstruct.)
For individuals with mild to moderate visual impairments, the ability to enlarge only the fonts used on the screen may be all that is necessary. Within text-only documents, using "large type" is very straightforward, since most graphics-based programs allow the individual to select the font size to be used on screen. Utilities also exist which allow one to use a slightly larger font in the system menus. This concept could be expanded to include larger cursors, scroll bars, etc.
Simply enlarging the font used on the screen, however, only works for individuals needing moderate character enlargement. For individuals with low vision, the image on the screen must often be magnified 4-16 times. Also, the entire image on the screen needs to be enlarged, not just the alphanumeric characters. To do this, some type of overall screen enlargement utility or program is required. These utilities or programs create a virtual image which is much bigger than the actual monitor screen. The monitor screen itself then becomes a "viewport" which can be moved about over the virtual screen. Using this technique, the individual can only see a small portion of the overall screen at a time. (As a result, the effect is similar to a normally sighted person trying to use a computer while looking down a cardboard tube such as that found in a roll of paper towels.) Such screen enlargement utilities allow the individual to enlarge the text as much as they like (up to one character filling the entire screen). They usually also have a mechanism built in to allow the "viewport" to automatically follow the movement of the mouse or the text cursor as the individual types.
Application developers should note that it is important for screen reading or enlargement access software to be able to identify events which occur in different areas of the screen. This is necessary so that the access software can automatically move the "viewport" to that point on the screen, so that the user does not miss important events occurring outside of the viewport. It is also important to maintain a consistent screen layout. The user will then know where to find things such as prompts, status indicators, menus, etc.
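The following fragment is a minimal sketch of the viewport-following idea described above. It is illustrative only; the function and parameter names (follow_focus, view_w, and so on) are hypothetical and are not taken from any actual screen enlargement product.

    def follow_focus(view_x, view_y, view_w, view_h,
                     focus_x, focus_y, screen_w, screen_h):
        """Move the viewport only if the point of activity (the text caret or
        the mouse pointer) has left it, then re-center it within the bounds
        of the virtual screen."""
        inside = (view_x <= focus_x < view_x + view_w and
                  view_y <= focus_y < view_y + view_h)
        if inside:
            return view_x, view_y                    # nothing to do
        new_x = max(0, min(focus_x - view_w // 2, screen_w - view_w))
        new_y = max(0, min(focus_y - view_h // 2, screen_h - view_h))
        return new_x, new_y

    # At 8x magnification of a 640 x 480 image, the viewport covers only an
    # 80 x 60 region, so it must move whenever activity occurs outside it.
    print(follow_focus(0, 0, 80, 60, 500, 400, 640, 480))   # -> (460, 370)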
For individuals who cannot read the image on the screen even when enlarged, some mechanism for presenting the information in nonvisual form is necessary. The two most common forms for doing this are speech and braille.
Screen reading programs allow the individual to move about on the screen and have any text read aloud to them. In graphical environments with multiple windows, screen readers must also allow the individual to navigate between windows and among the different elements of a window (scroll bars, zoom boxes, window sizing controls, etc.). They must also provide the individual with a means to deal with icons and other graphic information. For stereotypic images which always appear the same, such as scroll bars and icons, a name or label can be given to each object or icon. When the icons are encountered, their names or labels can be read aloud.
Application programs can facilitate or inhibit screen reading programs' ability to do this, however. For example, a tool bar which is drawn as a single graphic element cannot be easily deciphered by an access program. A tool bar where each tool is drawn using a separate draw command can be easily dissected, and the individual tool images extracted and named.
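As a rough illustration of why this matters to access software (a sketch only; the data structures and names here are hypothetical), compare a tool bar exposed as one opaque bitmap with one whose tools are drawn, and labeled, individually:

    # A tool bar blitted as a single graphic exposes nothing a screen reader
    # can name:
    flat_toolbar = {"bitmap": "toolbar.bmp"}

    # Drawing each tool as its own element lets access software attach and
    # speak a label for whichever tool the user lands on:
    structured_toolbar = [
        {"id": "open",  "label": "Open document", "icon": "open.bmp"},
        {"id": "save",  "label": "Save document", "icon": "save.bmp"},
        {"id": "print", "label": "Print",         "icon": "print.bmp"},
    ]

    def announce(tool):
        """What a screen reader might say when the user reaches a tool."""
        return tool["label"] + " button"

    print(announce(structured_toolbar[1]))   # -> Save document button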
In addition to speech output, braille can also be used. Since braille is essentially a tactile alphabet, it can be used instead of speech to present the information to the user. Special displays of 20 or 40 braille cells with electromechanical moving pins can provide refreshable or dynamic braille displays that can be continually changed by the computer. As a result, anything that is printed in alphanumeric characters or which can be described in speech can be presented on a dynamic braille display. This is an effective means of accessing text, and the preferred one for some people who are blind. For individuals who are deaf-blind and can neither read the text on the screen nor hear spoken output, braille is essential for access. It should be noted, however, that the majority of individuals who are legally blind do not know braille (especially those who became blind later in life). Thus, it is a powerful technique, but cannot be used as the only way to provide access for people who are blind.
In addition to problems in accessing the screen, individuals who are blind also have difficulty in using input devices which require vision. For example, some keyboards have electronically locking keys, such as the Num Lock, Scroll Lock, and Caps Lock keys on an IBM PC or compatible. Small lights are provided on the keyboard to allow people who can see to determine whether these keys are in their locked or unlocked mode. Individuals who are blind are unable to determine the status of these keys unless that status is also indicated on the screen, where their screen readers can access it. Some application programs provide this. In addition, some software utilities and most screen reading software provide auditory cues to allow the individual who is blind to know whether these particular keys are in locked or unlocked mode. It is important for application software to use the status flags in the system and to ensure that these flags and lights are set to agree with the program's use of these keys.
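A minimal sketch of this last point follows. The helper names are hypothetical; on a real system the reads and writes would go through the platform's own keyboard status flags rather than a dictionary.

    system_flags = {"num_lock": True, "caps_lock": False}  # stand-in for OS state

    def get_lock_state(key):
        """Always read the system's flag rather than a private copy."""
        return system_flags[key]

    def set_lock_state(key, enabled):
        """Write back through the system so the keyboard light, any on-screen
        indicator, and a screen reader querying the flag all agree."""
        system_flags[key] = enabled

    # If the application repurposes Num Lock, it should still toggle the
    # system flag so the flag and the light match the program's behavior:
    set_lock_state("num_lock", False)
    print(get_lock_state("num_lock"))   # -> False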
A more serious problem for individuals who are blind is applications which require use of the mouse. The mouse by its very nature requires some type of eye-hand coordination. For individuals who are blind, this type of eye-hand coordination is impossible. Some blind access software packages provide mechanisms which automatically move the mouse cursor about the screen as they read or move between window elements. Another strategy which can provide some access to mouse-like operations is the use of the tactile mouse discussed below. Also, strategies for using a touchscreen are being explored. For these access techniques to work within the application windows themselves, however, they may require some cooperation from the application program.
Screen reading programs (available for Macintosh, Microsoft Windows and OS/2) are capable of providing full access to the basic operating system constructs (windows, menu bars, dialog boxes, etc.) as well as providing access to text within application program documents (as long as the text drawing tools of the operating system are used to create the text image). In order to access information which is drawing or picture-based (line drawings, charts and diagrams, floor plans, etc.), several advanced strategies are being explored.
One approach involves the use of a virtual tactile tablet with a tactile puck/mouse. A vibrating tactile array of 100 pins is mounted on a special puck/mouse. As the mouse is moved about on the tablet, the tactile representation of the information on the screen is provided to the individual's fingertip. In this fashion, the individual can actually feel the information on the screen. Coupled with voice output screen reading features, this system allows the individual to feel the image on the screen and to have any words on the screen read aloud.
Other experimental techniques being examined are routines which would automatically recognize and verbally describe stereotypic information presentation formats (bar charts, pie charts, etc.) and routines which would provide special image enhancement (edge detection/enhancement, etc.) to make complex graphics simpler to explore tactually.
Individuals with hearing impairments currently have little difficulty in using computers. Some computers, such as the Macintosh computers and the IBM PS/1, have volume controls and headphone jacks which allow the connection of headphones or amplifiers/speakers to facilitate their use by individuals who have mild hearing impairments. For individuals who cannot hear, on-screen indication of beeps and other sounds can be provided. Currently, the Macintosh has a feature where the menu bar will flash whenever a sound is emitted if the volume control is turned to zero. Many of IBM's newer laptop computers have a small LCD display which flashes a symbol of a speaker whenever a tone is emitted from the computer, thus providing a visual indication of the auditory sound. The AccessDOS package distributed by IBM also includes a feature called "ShowSounds" which provides a screen flash whenever the speaker on the computer is used. There are also other third-party products, such as SeeBeep, which provide visual indications on the screen when a sound is emitted from a PC speaker.
In addition, a system-wide "ShowSounds" switch is currently being advocated for all operating systems. By implementing the "ShowSounds" switch at the system level, the switch could be used by all application programs to determine if the user would like visual indication of any sounds made by the application programs. If an individual was in a noisy environment (such as an airplane or a factory) or had difficulty hearing, they could set the ShowSounds switch. The operating system and all applications which emitted sounds could then check that switch. If it were turned on, they would accompany any auditory sounds with some type of visual indication. Some applications already provide some type of visual indication to accompany many (but not all) sounds. If the ShowSounds switch were set, however, it would be an indication that all sound output should be accompanied by some type of visual indication.
Implementation of the ShowSounds switch would also allow application programs to have closed captioning. That is, newer programs which include speech output could check for the ShowSounds switch and, if it were set, pop-up a small window with the same text that was being spoken. Because this caption would only appear when the ShowSounds switch was set, it would be called a "closed caption." Similarly, if other auditory information were presented which was necessary for the operation of the program, a small indicator or caption describing the sound could be presented (if the ShowSounds switch were set). This descriptor of the sound should preferably be text rather than an icon, in order to facilitate access by individuals who are deaf-blind and using a screen reading program (using braille) to present the information to them.
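The sketch below illustrates the intended pattern. The names show_sounds_enabled, emit_sound, and display_caption are hypothetical, standing in for whatever query and display mechanisms a given platform actually provides.

    def show_sounds_enabled():
        """Stand-in for reading a system-wide ShowSounds switch."""
        return True

    def display_caption(text):
        """Stand-in for a small pop-up window or status-line caption."""
        print("[caption]", text)

    def emit_sound(play_sound, caption_text):
        play_sound()                        # the normal auditory output
        if show_sounds_enabled():
            display_caption(caption_text)   # visual equivalent of the sound

    # A program announcing the end of a print job would both beep and caption:
    emit_sound(lambda: None, "Printing complete")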
As software packages move toward more multi-media presentations, the ability of application software to provide closed-captioning will increase in importance.
Problems faced by individuals with physical impairments vary widely. Some individuals are very weak, and have limited range of motion. Other individuals, such as those with cerebral palsy, have erratic motor control. Some individuals have missing or paralyzed limbs, while others, such as those with arthritis, have limited manipulative and grasping ability. People with physical impairments can have difficulty manipulating media, carrying out quick actions, operating input devices requiring fine motor control, and pressing multiple keys or buttons at the same time.
Access strategies can be broken down into roughly three categories:
Some individuals are unable to use the standard keyboard, but could use it if it behaved a little differently. A number of standard modifications are now available which allow the user to modify the way a standard keyboard works in order for it to function better for people with disabilities. Four examples of keyboard modifications are StickyKeys, SlowKeys, BounceKeys, and RepeatKeys. Many of these features (and others) are now distributed by the major computer companies as standard parts of, or extensions to, their standard operating systems.
Figure 3. Access features available in standard operating systems.

Access Features                            | Macintosh OS 7.+ | DOS 3.X-6.X | OS/2 2.1 | X11R6 X Windows
-------------------------------------------+------------------+-------------+----------+----------------
StickyKeys                                 |        X         |      X      |    X     |        X
RepeatKeys                                 |        X         |      X      |    X     |        X
SlowKeys                                   |        X         |      X      |    X     |        X
BounceKeys                                 |                  |      X      |    X     |
MouseKeys                                  |        X         |      X      |    X     |
ToggleKeys                                 |       n/a        |      X      |    X     |
SoundSentry                                |        X         |      X      |    X     |
Access Hooks:                              |                  |             |          |
  SerialKeys                               |        ud        |      X      |    ud    |
  ShowSounds                               |        ud        |             |          |
  Extended Library Info for Visual Access  |        X         |             |          |

n/a = not applicable
ud = under development
Figure 3 shows the availability of StickyKeys, RepeatKeys, SlowKeys, BounceKeys, MouseKeys, ToggleKeys, SerialKeys, and ShowSounds on Macintosh and IBM computers. The Macintosh has all but BounceKeys and SerialKeys built directly into the operating system. IBM and Microsoft distribute (free of charge) a package called AccessDOS which contains all of these features. The Access Utility for Windows 3.x also contains all of these features, and is distributed as a part of the third-party drivers package available from Microsoft, as well as being available on several bulletin boards, gophers, and on-line information systems such as CompuServe.
StickyKeys is a feature which eliminates the need to press several keys simultaneously. For individuals who type with only one hand, finger, or a head- or mouthstick, it is difficult or impossible to press a modifier key (such as Shift, Control, or Alt) and another key at the same time. When invoked, StickyKeys allows the individual to type modifier keys in sequence with other keys: for example, they can press the Control key and then the H key to get a Control-H.
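The logic can be pictured roughly as follows. This is an illustrative sketch, not the actual operating-system code, and the names used are hypothetical.

    MODIFIERS = {"shift", "control", "alt"}

    def sticky_filter(keys_pressed_one_at_a_time):
        """Latch modifier keys pressed in sequence and apply them to the next
        ordinary key, so no two keys ever need to be held down together."""
        latched, output = set(), []
        for key in keys_pressed_one_at_a_time:
            if key in MODIFIERS:
                latched.add(key)                          # latch the modifier
            else:
                output.append((frozenset(latched), key))  # deliver combination
                latched = set()                           # released after use
        return output

    # Pressing Control and then H, one key at a time, still yields a Control-H:
    print(sticky_filter(["control", "h"]))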
RepeatKeys is a feature which allows the repeat rate on the keyboard to be adjusted. Some individuals get unwanted multiple characters because the key repeat rate is faster than their reaction time. RepeatKeys allows them to change the speed of the repeat function and/or to turn it off.
SlowKeys is a feature which facilitates use of the keyboard by individuals who have poor motor control which causes them to accidentally bump keys as they move between desired keys on the keyboard. The SlowKeys feature allows the user to add a delay to the keyboard so that a key must be held down for a period of time before it is accepted. In this fashion, the keyboard would only accept keys which were pressed deliberately and held for this period, and would ignore keys which were bumped.
BounceKeys is a feature to facilitate keyboard use by individuals with tremor or other conditions which cause them to accidentally double- or triple-press a key when attempting to press or release it. BounceKeys does not slow down the operation of the keyboard, but does prevent the keyboard from accepting very quick consecutive presses of the same key. Thus, with BounceKeys on, individuals who "bounce" when either pressing or releasing a key would only get a single character. To type double characters, the user would simply have to pause a moment between typing the same key two successive times.
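The two timing filters described above can be sketched as follows. This is illustrative only; the timing values are examples, and the real features operate inside the keyboard driver rather than on lists of recorded events.

    def slow_keys(events, acceptance_delay_ms=500):
        """Accept a key only if it is held down for the full acceptance delay."""
        accepted = []
        for key, press_ms, release_ms in events:
            if release_ms - press_ms >= acceptance_delay_ms:
                accepted.append(key)         # a deliberate press
            # keys merely brushed in passing are ignored
        return accepted

    def bounce_keys(events, debounce_ms=300):
        """Ignore a repeat of the same key that arrives too soon after the last."""
        accepted, last_key, last_ms = [], None, None
        for key, press_ms, _ in events:
            if key == last_key and last_ms is not None and \
                    press_ms - last_ms < debounce_ms:
                continue                     # treat it as a bounce
            accepted.append(key)
            last_key, last_ms = key, press_ms
        return accepted

    held = [("a", 0, 600), ("b", 1000, 1050)]            # (key, press, release)
    print(slow_keys(held))                               # -> ['a']
    bounced = [("a", 0, 80), ("a", 150, 230), ("b", 2000, 2080)]
    print(bounce_keys(bounced))                          # -> ['a', 'b']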
In addition to these software modifications to the keyboard, the use of a keyguard is also common. A keyguard is a flat plate which fits over the top of a keyboard and has holes corresponding to each key. The individual can then rest their hand on the keyguard and poke a finger down through the hole to type. The keyguard both helps prevent the typing of unwanted characters and provides a stable platform which the individual can use to brace their hand for additional control in typing.
Many individuals with physical impairments are unable to control the standard pointing device. In some cases, mouse alternates such as trackballs can be used. In other cases, individuals are unable to operate any analog pointing device. One software approach which allows the mouse to be controlled from the keyboard is called MouseKeys. When MouseKeys is invoked, the number keypad on the computer switches into a mouse-control mode. The keys can then be used to move the mouse cursor around on the screen. Keys on the keypad also allow the mouse button to be "clicked" or to be locked and released to facilitate dragging. The MouseKeys feature works at the same time as a standard mouse or trackball; it is therefore possible to use these other pointing devices to move about on the screen, and then switch to the keypad for fine movement of the mouse. Single-pixel movement of the mouse is very easy using MouseKeys. In fact, it is often used by nondisabled graphic software users for precise pixel movements which are difficult or impossible with the standard mouse. For individuals who have good head control, there are also head-operated mice which allow the individual to essentially use their head to point and then puff on a straw in order to activate the mouse button.
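A simplified sketch of the keypad-to-pointer mapping follows. The names are hypothetical, and a real implementation would also handle acceleration, press-and-hold repetition, and locking the button down for dragging.

    KEYPAD_DELTAS = {
        "8": (0, -1), "2": (0, 1), "4": (-1, 0), "6": (1, 0),
        "7": (-1, -1), "9": (1, -1), "1": (-1, 1), "3": (1, 1),
    }

    def mousekeys(pointer, key, step=1):
        """Return the new pointer position, or simulate a click on '5'."""
        if key == "5":
            print("click at", pointer)       # stand-in for a button event
            return pointer
        dx, dy = KEYPAD_DELTAS.get(key, (0, 0))
        return (pointer[0] + dx * step, pointer[1] + dy * step)

    pointer = (100, 100)
    for key in ["6", "6", "8", "5"]:         # right, right, up, click
        pointer = mousekeys(pointer, key)
    print(pointer)                           # -> (102, 99)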
While modification to the standard keyboard allows input by some individuals, alternate "special" keyboards or input devices work better for others. These alternate keyboards take many different forms, including expanded keyboards, miniature keyboards, headpointing keyboards, eyegaze-operated keyboards, Morse code input, scanning keyboards which require operation of only a single switch (operated by hand, head, or eyeblink), and voice operated keyboards. Some of these keyboards connect to the computer in place of or along with the standard computer keyboard. Other alternate keyboards connect to the serial or parallel port on the computer, and use special software to cause their input to be injected into the operating system and treated as keystrokes from the standard keyboard. In still other cases, the "keyboard" may appear on-screen in a special window. The individual then selects keys on that video keyboard using a headpointer, a single switch scanning technique, Morse code, or other special input technique. The keys selected on the video keyboards are then fed through the operating system so that they appear to application programs as if they had come from the standard keyboard.
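As one illustration of how such a technique can work, the sketch below shows single-switch row/column scanning on a video keyboard. The layout and names are hypothetical, and a real system would inject the selected character into the operating system's keyboard input rather than print it.

    import itertools

    ROWS = ["abcdef", "ghijkl", "mnopqr", "stuvwx", "yz .,?"]

    def scan(choices, switch_hit_on_step):
        """Highlight the choices in turn and return the one highlighted when
        the user hits the switch (steps are counted from zero)."""
        cycle = itertools.cycle(choices)
        for _ in range(switch_hit_on_step):
            next(cycle)
        return next(cycle)

    row = scan(ROWS, 2)         # switch hit while the third row is highlighted
    key = scan(list(row), 3)    # then hit while the fourth key is highlighted
    print(key)                  # -> p  (would be injected as a keystroke)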
For programs which provide mouse support, these alternate input devices can also create simulated mouse activity in order to allow the user to access drawing, dragging, and other mouse-based functions of the application programs.