The following narrative was transcribed from a tape made during the Disability Action Committee for X (DACX) Meeting held during the 1995 CSUN Conference.
A "best effort" attempt was made to get all views correct.
Mark Novak, Chair
Mark: Good evening. We are going to get started. I see a few new faces in the crowd. I just want to inform everyone that this is the Disability Action Committee for X meeting.
We have a rather tight schedule tonight and therefore we are going to jump right into the agenda. I'll allow questions, but I am going to try to keep them limited so we can keep the agenda moving and get through the important items which everyone needs to hear about. If we get too many questions, I am going to ask you to try to save them for the end. We have time on the agenda for some questions and answers at the end.
So, first up on the agenda tonight is the status of the Remote Access Protocol, or RAP, presented by Will Walker.
Will: First I would like to talk about what RAP is. How many people are familiar with what RAP is? RAP is the Remote Access Protocol, which gives people the ability to learn about the interface of another application. How many people are familiar with Windows? I guess I should even take it down to that level: X Windows or Microsoft Windows. I just want to make sure I know where we are. RAP will give people the ability to figure out where push buttons are, what a push button is, what a File menu is, what the text is in that File menu, whether or not there is a scroll bar in the interface. So RAP provides the enabling technology for a screen reader; it provides the enabling technology for voice access control and things like that.
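The kind of information Will says RAP exposes, where the push buttons are, what the menus contain, whether there is a scroll bar, can be pictured as a walk over a widget tree. This Python sketch is purely illustrative: the tree format and all names in it are invented here and are not part of the actual RAP wire protocol.

```python
# Toy sketch of the question RAP lets an external agent ask:
# "what is in this application's interface?" The dict-based tree
# below is an invented stand-in for a real application's widgets.

def describe(widget, depth=0):
    """Flatten a widget tree into the facts a screen reader needs."""
    label = widget.get("label")
    text = widget["class"] + (f' "{label}"' if label else "")
    lines = ["  " * depth + text]
    for child in widget.get("children", []):
        lines.extend(describe(child, depth + 1))
    return lines

app = {
    "class": "TopLevelShell", "label": "editor",
    "children": [
        {"class": "MenuBar", "children": [
            {"class": "CascadeButton", "label": "File"},
        ]},
        {"class": "PushButton", "label": "OK"},
        {"class": "ScrollBar"},
    ],
}

for line in describe(app):
    print(line)
```

A screen reader driven by answers like these can tell the user "push button, OK" without ever seeing the pixels.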
RAP, where it is right now: there are basically two components to it. There is what we call the ICE rendezvous mechanism, and then there is the RAP protocol itself. The ICE rendezvous mechanism is how the two applications can start to talk to each other, and the reason it is called the ICE rendezvous mechanism is that ICE is the underlying transport that allows this communication to take place. I am talking about a very low-level transport. But anyway, the two major components are the ICE rendezvous mechanism and RAP. The ICE rendezvous mechanism is now going through the internal review process at the X Consortium. This may not sound like very good news, but it is really good news, because it means we are one step away from standardizing it, which means we are one step away from having it become a standard part of the X Window System for the next release and all the releases after that. So it is really good news.
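The rendezvous idea, two processes agreeing over an established byte transport to start speaking a higher-level protocol, can be illustrated with a toy handshake over a plain socket pair. In the real design the transport is ICE and the negotiation is the ICE protocol-setup exchange; the message strings below are invented for illustration.

```python
# Toy rendezvous: an agent proposes a higher-level protocol over an
# existing transport, and the application accepts or refuses. The
# real mechanism is the ICE library in C; this only mimics the shape.
import socket

agent_end, app_end = socket.socketpair()

# Agent proposes the protocol it wants to speak.
agent_end.sendall(b"SETUP RAP 1.0\n")

# Application checks the proposal and accepts or refuses.
request = app_end.recv(64)
if request.startswith(b"SETUP RAP"):
    app_end.sendall(b"ACCEPT\n")
else:
    app_end.sendall(b"REFUSE\n")

reply = agent_end.recv(64)
print(reply)          # the agent may now start speaking RAP
agent_end.close()
app_end.close()
```

The point of standardizing the rendezvous separately from RAP itself is that the same handshake can carry other agent protocols later.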
The Remote Access Protocol itself: various people are working on different Remote Access Protocols, that is, the stuff that gives people the information about a push button or a scroll bar. Georgia Tech is working on a version for their Mercator project, and some people in France are working on something along the same lines. After we get the ICE rendezvous mechanism standardized, the Remote Access Protocol is the next thing to get standardized, and we are hoping to do that as well for the next version of X. So we want the ICE rendezvous mechanism and RAP to become a standard part of the next version of X, and I think we can do it.
The RAP portion may be a little tight, but I still think we can pull it off. There is an active discussion of these going on, on a mailing list, and I will spell it out: it's called x-agent@x.org. To get on it, you send mail to x-agent-request@x.org. This is where we are carrying on the internal review for the ICE rendezvous mechanism, and it is where we will carry on the review for the Remote Access Protocol. The reason I am giving these out is that we really need your help. We want the X Consortium to know that there is an interest in getting this work done. Two weeks ago I sent out the material for internal review. There are over 100 people on the list, and the responses were few and far between. There weren't many people responding until I sent out a plea on Monday, and now the responses are pouring in saying that this is very important. But if we can get you people to come in and send mail to the x-agent list and say you think this is very important, it will very much help push this through the standardization phase.
Earl: How techy is the talk? And how about non-techies responding on this x-agent list?
Will: The talk is very techy. The talk is very low level, but non-techies are very welcome. What I mean by that is, even if you don't understand all the technical details, you can get on there and say, "I support this particular thing."
Question: What kind of pointers can you give us as we are going through it? People want to know why it is important to us.
Will: Okay. It is important to us because it makes these features a standard part of X. The next version of X would provide this enabling technology for many different types of applications, like screen readers, voice access control, and testing, things like that. So your participation would be very, very welcome. No question is stupid.
Question: regarding Mercator and RAP?
Will: Mercator currently is not built on RAP. They are currently porting to RAP. The reason we want to use RAP as opposed to what Mercator is doing, is to make it a generic solution across multiple platforms.
Will: There is also some background discussion on a Web page. For those of you who have Web access, HTTP://www.x.org/x-agent.
Question: So that will give out background information?
Will: As much as I can write.
Question: From there it will tell you how to sign up to get the mail?
Will: Exactly.
Beth: The Mercator system that we are demoing now is supposed to be a prototype of RAP and how we would like it to work in the future. What we did with it was to create a prototype to show to people like the X-Consortium and now we are going to do things based upon this protocol. What we have down in the exhibit hall is running with a prototype RAP, a version that we will be distributing to Will, X Consortium, x-agent, etc.
Question: In this particular instance, what we are saying is RAP needs to communicate between the Mercator program and the code which has implemented the controls? Was the toolkit modified to speak RAP, or was RAP attached to it in some way?
Will: Mostly the latter. Mostly RAP is attached to the toolkit. In some instances you may want to modify it. Right now... we use the hooks we've talked about in the past.
Beth: And the hooks are already part of the toolkit. So in a sense this is just building on the hooks like we did last year. This just utilizes the hooks sending information to the screen reader or external agent.
Question: The hooks extract information from the control which then makes use of ICE and RAP protocol which communicates to Mercator, which is running a separate process?
Will: And it's all done over sockets, allowing for very high bandwidth.
Will: XKB is the X Keyboard Extension, and the reason I want to bring it up is that AccessX, which Sun and Trace and Digital worked on and developed, is now becoming a part of X in the form of the XKB keyboard extension. I just want to make you aware of that, so if you are asking "where is AccessX in X11R6?", the answer is to ask "where is XKB in X11R6?". Consider it almost a name change, although XKB is a much bigger thing that encapsulates AccessX. I just want to make sure you are aware that XKB provides StickyKeys, MouseKeys, SlowKeys, BounceKeys, and RepeatKeys.
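As a rough illustration of the behavior StickyKeys standardizes, here is a toy Python model: a modifier key tapped on its own latches and applies to the next regular key, so chords can be typed one key at a time. Real XKB lives inside the X server; this sketch only mimics the user-visible behavior and every name in it is invented.

```python
# Toy model of StickyKeys: tapped modifiers latch and are applied
# to the next non-modifier key press, then the latch releases.
class StickyKeys:
    def __init__(self):
        self.latched = set()

    def press(self, key):
        """Return the full chord for a key press, or None for a modifier."""
        if key in ("Shift", "Ctrl", "Alt"):
            self.latched.add(key)      # latch instead of requiring a hold
            return None
        chord = tuple(sorted(self.latched)) + (key,)
        self.latched.clear()           # latch releases after one key
        return chord

kb = StickyKeys()
kb.press("Ctrl")                       # tap Ctrl...
kb.press("Alt")                        # ...then Alt...
print(kb.press("Delete"))              # -> ('Alt', 'Ctrl', 'Delete')
```

The other AccessX features (SlowKeys, BounceKeys, RepeatKeys) are similar small state machines over key timing rather than key combination.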
Question: Does it provide any additional functionality?
Will: For example, what?
Question: Other than what AccessX provides?
Will: No it doesn't.
Will: No. The rule is you don't want to duplicate functionality if it is already there.
Question: What if people want AccessX now? What can they do, where can they go?
Will: Digital and Sun are now shipping it on almost all their platforms.
Mark: Next on the agenda I would like to have Beth Mynatt give us a quick review of some of the work that Beth's group is doing at Georgia Tech. They are looking at some of the widget sets and what might need to be done to modify them. I believe that is the last component: what was done with the hooks, then RAP, and then modifications to the widgets.
Beth: So, in addition to the RAP work that we just discussed, we have been doing some work on trying to improve access to Motif. Many of you are probably aware that Motif is the leader among the variants of X Windows toolkits; most X applications out there are using the Motif widget set. When my group started working on access to X Windows, we worked with the Athena widget set, which is the one that all the poor researchers use because you can get the source for free and figure out how to do things. So, for example, the ugly application that I show access to downstairs is an Athena application, because that allowed us to simply figure out how to build this type of screen reader. What we have been doing this past year is working on access to Motif, because that is what people care about commercially. That is where we are going to solve the real access problem.
We have been doing this by breaking up the access issues into four major areas, and you can think of them in a sort of top-to-bottom chart in terms of where in the code you care about them. At the highest level, we have been looking at modifications to the Motif styleguide. This is the thing that application writers supposedly read or consult when they write Motif applications. It controls the look and feel of a Motif application: what a Motif application is supposed to look like, what type of labels to use, how to lay out the controls on the screen, and so on. There are obvious ways that people can create the look and feel of an application to enhance its accessibility, if nothing else, putting tab groups in the right order, labeling things appropriately, and so on. Actually, the work on the Motif styleguide was started a year earlier by, I hope I can get everybody down, Earl, Eric, Mark, and I think, Will, you worked on a version of that as well, looking at the Motif styleguide and adding things that as a community we already knew. So my group, if nothing else, is going over their additions, compiling things together, and seeing what else we can come up with.
The second layer is what we would call Motif functionality. Now we are talking about recommending changes to the Motif toolkit itself: things Motif could offer application writers that would make applications more accessible. One example is this: say you use an icon on the screen, a push button that is just a bitmap, so the button is just a picture. This is very common. Obviously this can be difficult for screen readers, because now we don't have any text that we can associate with the icon. So one change to the Motif toolkit could be to say that we would like this particular widget, this particular object, to also have a resource for a textual label associated with it, so that application writers would say, okay, this is the Open icon, instead of just having a picture of an opening folder, for example. So we have a list of modifications that we are suggesting to the people who will be creating newer versions of Motif.
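The suggested change can be sketched as a fallback rule for what a screen reader speaks when it meets a pixmap-only button. The resource names in this Python sketch are invented for illustration; they are not actual Motif resource names.

```python
# Sketch of the proposed Motif change: give pixmap-only widgets a
# textual label a screen reader can fall back on. "label_string",
# "accessible_label", and "label_pixmap" are invented names here.
def accessible_name(resources):
    """What a screen reader would speak for this widget."""
    if resources.get("label_string"):          # explicit text label
        return resources["label_string"]
    if resources.get("accessible_label"):      # proposed fallback text
        return resources["accessible_label"]
    return "<unlabeled image>"                 # today's situation

icon_button = {"label_pixmap": "open_folder.xpm",
               "accessible_label": "Open"}
print(accessible_name(icon_button))
```

With the extra resource, the Open button speaks as "Open" instead of an anonymous image.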
The third level is the Motif implementation level: this is how people have actually implemented the Motif toolkit, and all the little tricks they have done for efficiency that break things like screen readers. For example, the hooks through which we provide access require people to use the common call XtSetValues. That means if I am setting an icon to be highlighted, I would set that attribute and call XtSetValues to do it. In some cases the people implementing Motif cheat: they don't actually make this call, they just write to the memory address, because they know how to do that and they get a little better efficiency. That breaks things like screen reader access, because we can no longer see the information as it changes. So one of the things we have been doing is finding all the different places where the Motif implementation is broken from our perspective, because it breaks standard access software.
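The failure Beth describes can be shown with a toy model: a set-values call that notifies registered hooks, versus a direct write to the widget's state that silently bypasses them. The names here are invented; the real calls are XtSetValues and the Xt hooks.

```python
# Toy illustration of why writing directly to widget memory breaks
# the hooks: only the "polite" set-values path notifies listeners.
class Widget:
    def __init__(self):
        self.resources = {"highlighted": False}
        self.hooks = []                # e.g. a screen reader's listener

    def set_values(self, **changes):   # the polite path, like XtSetValues
        self.resources.update(changes)
        for hook in self.hooks:
            hook(changes)

changes_seen = []
w = Widget()
w.hooks.append(changes_seen.append)

w.set_values(highlighted=True)         # hook fires: screen reader knows
w.resources["highlighted"] = False     # direct write: hook never fires

print(changes_seen)
```

After both updates the screen reader has only seen the first one, so its picture of the interface is now stale, exactly the bug class Beth's group is cataloguing.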
And then the last layer is what we are calling the Motif architecture, and we already talked about this today: things like RAP and the hooks. This is what needs to be added so that something can essentially be dynamically loaded into a running Motif program and communicate with an external screen reader program. That is the work on RAP that we are doing. Will's role in this has been as sort of the primary architect, and he has been the main interface for DACX to the X Consortium, and we have been one of the people saying, okay, this is a way you can implement this, and actually testing that implementation with our screen reader back at Georgia Tech.
So these are the four things that we are doing. It is one of those efforts that will produce a lot of documents, and then the question will be how we can actually circulate these documents so that they have some value, so that when people are writing Motif applications, or when people are talking about what the next version of Motif will look like, they start to take some of these items into consideration. So this will be part of a long process of actually getting these changes into the practice of how people build Motif applications. Any questions?
Question: Would it be a good idea to try to join up with the people who are doing the same things for recommendations for windows access and other graphical user interface access questions because the issues are fundamentally the same?
Beth: Right. And part of what we have blatantly done is read anything that anyone else is writing, the IBM group, and I know that Greg Lowney had put some material out about Microsoft Windows, and look for any ideas that we need to include in our work as well. Then it will be important for us to say, here is what we came up with; does this change how you would think about access to Microsoft Windows, for example?
Beth: We are really focusing on screen reader access, magnification, and mobility access; those are all the issues that we are touching. We just produced our first report last month.
Earl: I'd like to expand on what Jim was saying. It is not only getting involved in the styleguide stuff, but also the key people that we have to target. All the work that we do here is for naught if we don't get the ISVs on board. So for all these people doing the work that Beth is doing, and IBM, and everyone else, really what we should be doing is targeting more general publications; the ACM publications are an example. We were able to get into a book that is going to be used by college students in the human factors area. So targeting articles, papers, and those types of publications, targeting CHI and things like that, is where we can really start making a difference. Again, once we have all the base work in place, we need to have applications being built based on a set of rules that are accessibility rules.
Beth: One group that I am particularly trying to target is the people who are building programs to help you write Motif applications. Motif applications are pretty ugly to write; it is a very time-consuming process. So there is a huge market in selling programs that help you write applications.
Beth: At Builder is an example of that. So if we can build these rules and these tenets into the programs that people use to create Motif applications, then, without the application writers even completely realizing it, they will be creating more accessible applications. So that is one way to build things in.
Beth: Our first draft. We are creating new drafts, and the alpha draft was essentially due last month.
Question: Would it be okay to send that out with the DACX review?
Beth: Yes.
Peter: Just in general, in terms of the accessibility guideline issue, I think there are sort of three tiers to accessibility from ISVs. One, here are the guidelines to follow. Two, if for some reason you absolutely, positively have to break a guideline, here is a routine you can call to tell us what you are doing. And then there is a third thing that I have to give Microsoft credit for: an architecture for figuring out what is going on by looking at the bit traffic or the pixel traffic, building up a high-level understanding of that, and inserting it back in through the routine applications would normally call when they are doing something weird. For example, if Frame decides to use other fonts, and for what they need to do they need to draw the fonts themselves, we are not going to get them to change that. But what we might get them to do is say: draw your fonts your way, and then call this routine and tell us what text you put where. So I am just suggesting those three tiers to the approach.
Beth: One of the things with RAP and the hooks is what we call the Xlib hook. That is the hook that all the protocol traffic goes by. You probably don't want to watch all of it, but if you know you need to watch for something, you can say, "I want to watch for these particular packets."
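The filtering Beth describes, watching only particular packets rather than the whole protocol stream, can be sketched like this. The packet names and the hook interface below are invented stand-ins for the real Xlib hook.

```python
# Sketch of selective watching on a protocol hook: the hook sees
# every packet, but only registered kinds reach the client.
class ProtocolHook:
    def __init__(self):
        self.watches = {}              # packet kind -> callback

    def watch(self, kind, callback):
        self.watches[kind] = callback

    def deliver(self, packet):
        cb = self.watches.get(packet["kind"])
        if cb:                         # everything else is ignored
            cb(packet)

seen = []
hook = ProtocolHook()
hook.watch("DrawText", seen.append)    # only care about text drawing

for p in [{"kind": "Noop"},
          {"kind": "DrawText", "text": "OK"},
          {"kind": "CopyArea"}]:
    hook.deliver(p)

print(seen)
```

Filtering at the hook keeps a screen reader from drowning in the full protocol stream while still catching the packets it cares about.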
Question: What about getting information out of users' programs built with things like XGen, Tcl, and stuff like that? I mean, that's almost a systems administrator just writing a quick script to get information out. Is it going to be a consideration to make sure those people write accessible stuff at the same time? It is a little difficult to try to convince them, but?
Will: Tcl is a big issue, and one of the primary contributors to Tcl is down in Australia somewhere. I think he is on the x-agent list, and he is involved in the accessibility issues for Tcl.
Beth: Part of what we have been trying to do is figure out how to do it right for one toolkit and broadcast that to other groups. The other toolkit to worry about is Fresco, which is another one down the line.
Earl: I just wanted to re-emphasize the importance of what Will and Beth have been talking about. These hooks, the Motif work, the server work, etc., are what the screen readers, the speech recognizers, and the screen magnifiers are going to be developed from, most likely the commercial ones. Providing these hooks is what is going to lead, ideally, to third-party companies coming on board and developing commercial products such as screen magnifiers. I just wanted to highlight the importance of it. That's all.
Mark: Next on the list: at our Closing The Gap meeting we had a lengthy group discussion to try to come up with some methods or ideas for collecting some user numbers, if you will, for people who are visually impaired in the UNIX or X Windows community. Jeffrey Pledger volunteered to put together a short survey and to collect the results. I'd like to ask Jeff to update us on the current status of that effort.
Jeff: I've got 30 print copies and there should be 16 braille copies.
Mark: While I pass these out, can you give us a quick synopsis of the results.
Jeff: Sure. We looked at it across all the different options: Microsoft Windows, Macintosh, X Windows, and another category called other. The survey had options for speech, large print, braille, and other. The important number, though, is the X Windows piece. For speech, looking at the categories for home use, I was surprised to see 15 people say that they wished to have that in the home, 117 in the office, and 68 in scholastic environments. Looking at large print, we had 58 in the office; again, the important number there is the office number if we are looking at commercial potential. We also had 70 for large print in school. The braille numbers were not very noteworthy.
The important thing is that we made a very concerted effort to see this information spread to all the different lists, as well as getting braille copies out to many other user groups. Through Christmas I received 62 responses by e-mail. Up through this past Friday I received two copies via fax. On Friday, I received 30 filled-out questionnaires from the Blind Association of Computer Users, based in England. To date I have yet to receive a single braille copy in response to the survey questions. To say the least, I am disappointed in the braille response. For as much time and effort as we put into this, I would have expected some response from that community. Where do we go from here? Well, that is a topic for discussion.
Greg: Jeff, this is Greg Lowney. I was under the impression that large print products on the market today outsold blind access products by quite a bit, and yet the survey shows many more people responding that they want blind access techniques, speech and braille, much more so than large print solutions. Isn't that contradictory?
Jeff: Contradictory, yes. It is something that I also thought about. I just compiled the numbers and brought them in for everyone's review.
....Ensuing discussion centered around what value the numbers collected had, and what if anything else should be done at this time has been removed....
Mark: Next on the agenda, Jim Thatcher from IBM would like to talk about the X server on OS/2 and the IBM screen reader.
Jim: I want to tell a story about something that happened recently; it was a big surprise to the screen reader group, and then I have some questions. When we first started, about 3 years ago, when screen reader was pretty stable, we tried the X server under OS/2, PMX, and for reasons I understand, when we looked at the window the X application was running in, there was nothing there. We got the title, because that was actually drawn by PM, but there was nothing inside the window. In December of 1994 a new PMX came out, with the current CSD and TCP/IP from IBM, and about 4 weeks ago somebody decided to look again at what an X application looked like running under PMX, the X server running under OS/2. We were somewhat shocked to find that everything looked good. There were no problems with what we call our off-screen model. The off-screen model, for me, and I think for Peter and for some of the community but not all developers of screen readers, is basically a representation of all the text on the screen, with all the information associated with that text: its position, its color, its font, and the window handle associated with it. With an OS/2 screen reader, since we have that window handle, we can ask the operating system what kind of widget this is. The thing Beth is talking about, knowing what kind of thing is a push button, what is a scroll bar, and so on, we find out from the operating system, because we have this window handle. In every X Windows application we ran in the 3 or 4 days since we realized this was true, there was no problem with the off-screen model.
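The off-screen model Jim describes, every piece of drawn text remembered with its position, color, font, and window handle, can be sketched as a simple data structure. Everything here, including the crude hit test, is an invented illustration rather than IBM's actual implementation.

```python
# Toy off-screen model: remember every text drawing operation with
# its attributes so a screen reader can later answer "what is here?"
class OffScreenModel:
    def __init__(self):
        self.entries = []

    def text_drawn(self, text, x, y, color, font, window):
        self.entries.append(
            {"text": text, "x": x, "y": y,
             "color": color, "font": font, "window": window})

    def text_at(self, x, y):
        """Crude hit test, assuming 8-pixel-wide glyphs on one baseline."""
        for e in self.entries:
            if e["x"] <= x < e["x"] + 8 * len(e["text"]) and e["y"] == y:
                return e["text"]
        return None

osm = OffScreenModel()
osm.text_drawn("OK", x=100, y=40, color="black", font="fixed",
               window=0x2A)
print(osm.text_at(105, 40))
```

The window handle stored alongside each entry is what lets an OS/2 screen reader turn "text at a point" into "text inside a push button" by asking the OS about that window.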
An example of what this means: when I run X Mosaic with the X server under OS/2, it is very similar to actually running Web Explorer under OS/2 and having access to the Internet that way. I can't tab between the hypertext links; I have to listen for color, and when the color changes I can click on it, and I can do all of that, just as with OS/2. The point of this story is that I do not consider X, running under the OS/2 X server, to be accessible, but it is an interesting case. My first question is: have other people thought about this? The next question is: what about a Windows X server? I know about Exceed under Windows. When I first heard that somebody was actually using Exceed with X, a guy in Minnesota, I said that is impossible. It turns out he can only partially do it. Has anybody thought about this? Is it conceivable that we simply didn't try enough applications to run into more problems? Is it conceivable that identifying graphical objects, either with a Windows X server or an OS/2 X server, might be done with signatures of drawing calls more quickly than the entire process being talked about for doing it natively on X?
Peter: This is precisely the approach we were taking with our in-house work on X Windows, and it is the approach that we think will get people usable results the fastest. I am surprised and delighted to hear that under OS/2, in the particular version you cited, they are actually using the font rendering mechanisms of the underlying OS, because once you do that you get all the text. There are some simple techniques we can probably use for the graphics, in which case you have an off-screen model which is 60%-70% of what you have for the full OS/2 OS. The window hierarchy information in X you presumably won't get, because presumably it is not there. A lot of the work that Beth and Will are pioneering for RAP can potentially get you that information through a back door, through a RAP protocol mechanism that you have to specifically look for when you are running an X Windows application. But yes, I think that is a perfectly valuable approach; it is an approach that works well for a prototype.
Jim: Are you saying that you were not writing a screen reader under X, but running it under Windows?
Peter: The point I am making is that a sufficient off-screen model is text, graphics, and windows. You are getting text, and I think you can probably get graphics fairly easily through this X server under OS/2. So the only remaining piece is window classes, and you can do a lot of fudging without window classes to give people something they can use. It is not ideal, but it is something very tangible and very usable.
Jim: It's a picture. We do a lot of recognizing of things that are drawn in weird ways for the OS/2 screen reader. We have to recognize all kinds of crazy cursors and insertion bars and things like that by the way they are drawn. We can talk about recognizing, and by "we" I don't mean just OS/2, I mean maybe a Windows screen reader running under Microsoft Windows, interpreting one graphics call to know that this is a button, another graphics call to know that this is a title bar, this is a scroll bar. It is a hard job; we know we can do it in certain places. I am hearing Peter say that they have proceeded in that direction.
Will: What I am saying is that you will get some functionality, but you won't get as much as you would if you did it with RAP. Compare the Virgo interface: they give you a hierarchy, where you are in the interface, and that would be impossible to do by just interpreting the raw Xlib traffic.
Jim: Absolutely.
Will: I believe that hierarchy provides a lot of information and is very useful and very helpful.
Peter: There are two ways you can recognize that a radio button is a radio button. You can find out that it is a radio button by calling the OS, or by going to RAP. Or you can find out it is a radio button because the radio button has its bitmap: when the radio button is on it has one bitmap, and when the radio button is off there is another bitmap.
Jim: Which we have to do for some Microsoft applications.
Peter: Absolutely. You are betting that nobody is going to draw a filled-in circle inside a circle that is not filled in, with these particular radii and this particular bounding rectangle in the blit, and if you do see that, maybe the application is being pathological, but it is probably drawing a radio button. If you take that approach, which we take successfully in our screen reader for Windows, you may have a surprisingly usable product. It is not ideal, and that is part of why I am fully in support of RAP and fully in support of getting it from the OS. But as a commercial vendor that sometimes has to cut corners to ship products, this is a corner that we have found we can cut, and it works.
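The bitmap-signature trick Peter describes can be sketched as a comparison of whatever the application draws against known on/off bitmaps. The tiny 3x3 patterns below are invented placeholders, not real Motif or Windows artwork.

```python
# Toy bitmap-signature recognition: keep one known bitmap for
# "radio button on" and one for "off", and match exact drawings.
RADIO_ON  = ((0, 1, 0),
             (1, 1, 1),    # filled-in center: selected
             (0, 1, 0))
RADIO_OFF = ((0, 1, 0),
             (1, 0, 1),    # hollow center: not selected
             (0, 1, 0))

def classify(bitmap):
    if bitmap == RADIO_ON:
        return "radio button, selected"
    if bitmap == RADIO_OFF:
        return "radio button, not selected"
    return None                        # unrecognized drawing

print(classify(RADIO_OFF))
```

The bet Peter names is exactly the weakness of this approach: any application that draws the same pattern for another purpose, or a toolkit that changes its artwork, silently breaks the classifier, which is why RAP-style semantic information is still the goal.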
Beth: I think there is one difficulty you are going to run into, and I say this because we did our first screen reader by analyzing protocols, which gives a little more information than what you are going to have: you do not have the standards in X Windows that you rely on in the Macintosh and Windows environments in terms of exactly what things look like and exactly how they are laid out. X Windows applications are weird. They look any way they choose; there is no standard menu bar at the top, there is no standard message bar at the bottom. They can look like anything, and that is really going to impact your ability to use these types of recognition algorithms to provide anything that resembles usable access.
The other thing I want to follow up with, and I respect this work and it is definitely something you needed to try out, but one concern I have is the question of when people decide what is sufficient access. The story that goes with this: when Jolie and I were trying to work together ages ago, we were talking about access to pictures or something along those lines, and we were trying to develop what would have been access to the graphical interface. The application developers came back and said no, but there is a command line interface that lets you do all the same things, sort of, and a blind person could just use that instead. Well, it really wasn't access to the same interface, and it really wasn't what the users needed, but it was a way that the application writers could get out by saying they needn't worry about this. My concern is that if this is announced as the way to have access to "X Windows," then there is no longer a need for this RAP stuff or any of this kind of work, because we have already checked off X Windows access, and you are really only going to get a very limited glimpse of what the application is doing using this approach.
Jim: I really almost take offense at that. I think there is nobody in this room more concerned about giving access to people who need to use computers than I am, and I really posed this as a question about who has thought about it. I said specifically that having an X Windows screen whose text can be read under OS/2 is not access. The problem is a very serious one; it happened with SlimWare Windows three years ago, when they brought it to one of these conferences and said, hey, we have access to Windows. It is wrong to stand up and say you have access when it is partial access.
Beth: I certainly don't mean to insult anyone here; I know what you are trying to do, and I certainly don't think anyone here would take that attitude. And I am not picking on the federal government. What I am worried about is someone who is satisfied with some sort of requirement, who doesn't really care and is just looking for a checkbox to mark off. I am not talking about folks here; I am talking about folks who really don't care and are looking for an easy answer. That is what I am concerned about.
Question: I think we also need to remember there will never be a point at which we can stop and say, that's sufficient access, because if there were such a point, people would not have developed monitors to connect to computers, and they would not have developed GUI interfaces. People who can see and operate the complete computer have decided that they still don't have sufficient access, and they keep designing more and more complex interfaces; therefore we can't say "that is good enough for you for now" to disabled people, to blind people, while we continue developing our interfaces. We have to make sure that no one ever says, okay, that's it, that is enough access for you.
Mark: Next on the agenda I would like to have Eric Bergman give us a quick update on what is going on in the ANSI standards area.
Eric: I am going to talk to you about something having to do with standards. This is glacial and of no urgency, but it does have importance eventually. How many folks here are familiar with something called ISO 9241? Okay.
Well, ISO is the International Organization for Standardization, and right now there is something called ISO 9241 that is in the process of being approved. To give you an idea of what kind of process that is: the work on ISO 9241 began in 1988. It is a human-computer interaction standard, and a standard for human-computer interaction is really more like guidelines. To give you an idea of what it covers, the standard has 17 parts: several sections on task and environment; physical ergonomics sections covering hardware, keyboards, and visual displays; requirements for display colors; non-keyboard input devices; and software ergonomics. This is the first time there has been an international standards effort on software ergonomics, covering things like guidance on usability, menu dialogues, and command dialogues.
Now, I should say I have been on a committee that has been a technical advisory group to ISO for a couple of years, basically doing reviews and giving them information about the standards they are creating. The way I look at it, the standard is in various stages of draft right now, but over the next 2 to 3 years it will become an international standard. Nobody knows what that means; it may do nothing at all, which is quite possible. However, it turns out that some European countries take standards a little more seriously than we do. In Germany, in particular, there is an organization that has started certifying software as passing certain ergonomic criteria from the draft portion of the standard. As a consequence, the bottom line is that some companies there are already making buying decisions based on level of compliance with some of these ergonomic standards, and this may spread to other European countries, and who knows, eventually here.
Now, it turns out there is nothing whatsoever about accessibility in this 9241 standard, not a mention. Recently there has been an effort by ANSI, the American National Standards Institute, to make an American national standard that will be an extension of ISO 9241. So it will be all of the ISO material plus what we can contribute to the American effort. Because I was on that Technical Advisory Group, which has now become the ANSI standards committee, there is going to be a standard called ANSI 200 which will be an extension of the ISO 9241 human-computer interaction standard, and there will be an accessibility portion of that standard, because when we decided to do it, I volunteered to edit that portion. One of the reasons I am discussing this with you folks is that I want your help and input. In the next 6 months to a year there will be some kind of draft document, and I would like as many people as possible to look at it, provide comments, and suggest materials and guidelines that should be in there. The guidelines in all of these kinds of standards are very performance-based. They are not specific requirements like, you shall have a cursor of such and such a size; they are requirements like, a user shall be able to adjust the cursor so that they can see it under such and such conditions.
Anyway, the bottom line is that anybody who is interested in providing input, whether research you have done or guidelines you know of, can feed into this effort: Gregg's big box of materials, of course all of the Trace Research stuff, the Nordic guidelines, all of those kinds of things. If people have suggestions, great, I would like to hear them. If people would like to review the material as it is being developed, great. Again, it is unclear to me what the actual impact will be; it is possible that it could be very minimal. On the other hand, especially in Europe where they take standards more seriously, companies may want to develop products that conform to this one standard, so they would tend to develop to the ANSI standard, which would be an extension of the ISO standard, and they would pay attention to some of these accessibility issues. So, at the very least, it is an important opportunity to make it visible at the international level that accessibility is important when people consider requirements.
Question: Eric, did you say that 9241 could just die away?
Eric: No, I don't think so. I did not say that.
Question: Then it will become a standard?
Eric: It should become a standard. Portions of it already are. The way it works is that it is broken into 17 parts, and there is a whole review cycle of comments, review, and revision. Portions of it are at the draft international standard stage now, and some portions are just coming out of committee for their first review. It looks like over the next 2 years portions of it will start becoming standard.
Question: Does ANSI 200 necessarily follow it or is it happening in parallel?
Eric: The answer is that it is happening in parallel, but the plan is that it will follow ISO 9241 and come out shortly thereafter.
Question: When you say the ANSI 200 standards, do you see this as applying to all computer products: monitors, pieces of software, mice, operating systems?
Eric: The scope of 9241 is defined, supposedly, as office computing equipment, whatever that means, and then there is sort of a paragraph about what that means. As with most of these kinds of standards it is a bit fuzzy, but it is clearly the desktop computing environment.
Question: Do you have sub-sections on each of those categories?
Eric: Yeah, I can certainly direct you to where to get that information. And I should say that ANSI 200 covers only the software portion for now. I am looking into this even now, but for now it looks like there will not be hardware accessibility components in the near future; I think that will be something that happens next time around.
Question: When does it look like there is going to be passage for the ISO 9241?
Eric: It passes in parts. So, within the next 2-3 years a large portion should come through. It is always hard to tell, because some large number of people can have objections. Countries have votes, and the whole thing goes through this process; let's say the representatives from Britain all have a problem with it, then it might have to go through some revision.
Question: Just a few quick questions. When do you think the work for review will come out, estimate?
Eric: I would guess about 6 months, maybe as long as a year. It sort of depends on the progress. I don't want to circulate something that is so rough that it is a waste of time, because it is going to go through a lot of review. On the other hand, I want to get it out there as early as possible. They say that standards work is glacial, and it is very frustrating. My attitude is that accessibility needs to be in there, so it is worth it on that point alone.
Question: They are glacial but they also have a tendency to plod forward and move things out of the way that don't accommodate them.
Question: In terms of standards, there are ISO standards, for example on monitor emissions, that work about the same way, and those ISO standards are very influential. I just want to make sure people understand that they are extremely influential.
Eric: First of all, I think those come from a different organization. But as far as the software standards go, the big question mark is that there haven't been any software ergonomic standards before. Nobody is quite sure how you evaluate compliance, and there is still a lot of work to do to figure out the compliance aspect.
Eric: I guess if nobody else has any questions, my final message would just be this: if you are interested at some point in reviewing some material, send me e-mail at eric.bergman@sun.com.
Mark: Last on our agenda, but certainly not least, I would like to have Earl give us a quick update on some of the work that Sun has been doing on screen magnification. This is work we saw as an early prototype at Closing The Gap. I will turn it over to Earl.
Earl: Downstairs we have the screen magnifier prototype that we brought to Closing The Gap. It is essentially a separate window that magnifies the pixels. We also have a screen keyboard that allows you to input text into any window: a mail tool window, a shell tool, etc.
These are prototypes, and the purpose of doing this is so that we can take the results, identify system-based and third-party (non-system-based) solutions, and fold them into the Motif work, the RAP work, and so on that Will and Beth are doing. That gives us a better idea of how to talk to vendors when we want to bring their solutions onto the platform, or, when we want to make one ourselves, how we would go about doing something like that. So go ahead and go downstairs and check it out. They are up; they are prototypes, but they do work. They will give you an idea.
Earl: What it is doing is pixel magnification: it is basically taking the region you have selected, magnifying the pixels, and sending them to your screen magnification window. It is similar to Puff or UnWindows, if any of you have seen those. Same sort of principle. We have added a couple of extra things. Again, the point was to find out what the difficulties were so that we can make improvements as we either develop a product or work with a third party to develop one.
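The pixel magnification Earl describes amounts to nearest-neighbor scaling: each pixel in the selected region is replicated into a block of factor x factor pixels, which the magnifier then draws in its own window. A minimal sketch in Python (the function name and list-of-lists representation are mine, not from the Sun prototype):

```python
def magnify_region(pixels, x, y, w, h, factor):
    """Nearest-neighbor magnification of a w x h region at (x, y).

    `pixels` is a 2-D list of pixel values (rows of columns).  Each
    source pixel is replicated into a factor x factor block, which is
    what a simple pixel magnifier would blit into its own window.
    """
    out = []
    for row in range(y, y + h):
        expanded = []
        for col in range(x, x + w):
            # Replicate the pixel horizontally.
            expanded.extend([pixels[row][col]] * factor)
        # Replicate the expanded row vertically.
        out.extend([expanded[:] for _ in range(factor)])
    return out

# A tiny 2x2 "screen": magnifying it 2x yields a 4x4 image.
screen = [[1, 2],
          [3, 4]]
zoomed = magnify_region(screen, 0, 0, 2, 2, 2)
```

Because the pixels are simply replicated, the result is blocky at high factors; smoothing or font-aware magnification would be one of the improvements a product version might explore.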
Mercator is on the system, as well as AccessX, so you can check those out too if you would like, in the Trace booth.
Mark: That completes everything on the agenda. We have a few minutes left if there are any questions or issues that people would like to hash out a little more.
Will: Microsoft added ShowSounds in Windows '95. I think we can do something similar for X, in terms of what we call a text property on the root window. It would be a global property accessible to all applications running on the same system. I was wondering what people's thoughts are on that, and whether there is any interest in writing the specification. Does anybody have any feelings on that?
Question: The hard thing about ShowSounds is really evangelizing it to the people who are going to be using it. Having it as a property is probably a relatively easy thing.
Will: While I am writing specs for RAP and everything else we might as well write this one too.
Question: I don't know the technical details of how to get this into the X books or what have you. But whatever you do, an important subset of the work should be getting that information out: not just having it, but actually making sure that everyone who could possibly be writing software for X finds out about it.
Question: Is there a clear distinction as to where the ShowSounds capability might be made available? Is it at the toolkit level?
Question: What exactly is ShowSounds?
Gregg: There are two parts. The ShowSounds flag is simply a flag that you can read. Applications that generate sound would check it, and if the ShowSounds flag is set, they would create some visual event to accompany the sound. So it is nothing more than a way for somebody who is hearing impaired or deaf, or in a noisy environment like a factory, or in a library, which is really quiet, where you just can't use sounds. You just flip this switch in one place and it applies to all of your applications; you don't have to go open each application and dig around trying to flip the switch. So that is the first part.
Question: Applications would then have to be ShowSounds compatible.
Gregg: The second part of it is that you can then create tools within the operating system. For example, if you have an application that is going to be talking and it wants to throw up captions, each application can figure out how to do captions itself, or there might be a system-level captioning facility, a tool just in the toolbox that one might use, and things like that. So there are different things you can do to support it, but the ShowSounds flag itself is rather trivial, as was pointed out. The real issue is providing some guidelines to application manufacturers who are interested in supporting it: what it means, how to go about supporting it, etc.
In Microsoft Windows '95, and in the earlier versions, there is something else we have called SoundSentry, and that is quite different: it is for non-cooperating applications. It just watches the standard sound functions and ports, and whenever anything happens there it provides a flashing menu bar or something like that. It doesn't help you if there is speech or anything; you would see a flash, but it doesn't tell you what the sound was. So ShowSounds is really the solution; SoundSentry is really for old programs that do nothing but beep.
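Gregg's description of the cooperating-application side can be sketched in a few lines: the application reads one global setting and, when it is set, pairs every audible cue with a visual one. This is an illustrative sketch only; the names are invented, and in practice the flag would live in a system-wide location such as the root-window property Will proposes, not a module variable:

```python
SHOW_SOUNDS = True  # stand-in for a system-wide setting, e.g. an X root-window property

def alert(message, events=None):
    """Emit an audible alert; if ShowSounds is set, add a visual cue.

    Returns the list of (kind, message) events the application would
    render, so the behavior is easy to inspect.
    """
    events = [] if events is None else events
    events.append(("beep", message))       # the sound the app makes anyway
    if SHOW_SOUNDS:
        events.append(("flash", message))  # cooperating apps add a visual event
    return events

log = alert("Mail has arrived")
```

The point of the sketch is that the flag itself is trivial, exactly as Gregg says: all the real work is in deciding, per application, what the accompanying visual event should be.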
Mark: Finally, I'd like to go around the room real quickly and have everyone identify themselves and where they are from, and that will conclude the meeting tonight, because we are overrunning our time.
I would like to thank everyone for coming.
---Attendees---
Mark Novak, Trace R&D Center
Mayor Max, NSA
Jill Horton, NSA
Bart Bauwens, University of Belgium
Beth Mynatt, Georgia Tech
Mike Alberts, Georgia Tech
Arnold Schneider, Swiss Federation of the Blind
Pam Olson, Independent Consulting in Seattle
Jim Thatcher, IBM
Jeff Pledger, Bell Atlantic
Charles Oppermann, Microsoft
Greg Lowney, Microsoft
Luanne LaLonde, Microsoft
Alireza Darvishi, Swiss Federation of Technology
Don Dillan, AT&T
Tad Lewis, Defense Intelligence Agency
Collette Lewis, Defense Intelligence Agency
Peter Korn, Berkeley Systems
Jutta Treviranus, Univ. of Toronto
Chris Serflek, Univ. of Toronto
Dave Minnigerode, Texas A&M
Eric Bergman, SunSoft
Will Walker, Digital Equipment Corporation
John Gardner, Oregon State University
Bill Bailey, Oregon State University
Randy Lundquist, Oregon State University
Carina Chang, Synopsys
Bill Fontaine, GSA
Jane McKinley, Dept. of Veterans Affairs
Gregg Vanderheiden, Trace Center
Glen Gorden, Henter Joyce
Maureen Kaine-Krolak, Trace Center
Jolie Mason,
Earl Johnson, Sun Microsystems Laboratories
Helen Petry, GUIB Consortium