1994 CSUN Conference


The following narrative was edited from a tape recording made during the Disability Action Committee for X (DACX) Meeting, held at the California State University Northridge (CSUN) Conference on March 17th, 1994, in Los Angeles, CA.

Meeting content and speakers are identified and reported herein as best as they could be determined from the recording.

Mark Novak (novakme@macc.wisc.edu)


Mark: I'd like to welcome you to the DACX meeting. DACX is the group that is working to make the X Window System more accessible. I see a lot of new faces in the crowd tonight, and I think we should take the time to go around and have everyone introduce themselves. I am also going to pass around a list of people who've been at previous meetings or who have contacted me through email, telephone, letters, etc. If your name is on this list, just check it off. If your name isn't on this list, there is a blank sheet at the end to add your name, and a postal or email address if you have one. (Attendees are listed at the end of this narrative.) Thank you, and I welcome you all here. I think this is the largest meeting attendance we've had, this being our fourth gathering. That is very encouraging. There are two things about the DACX meeting that those of you who are new should be aware of. First, we encourage this group to speak out. We want to hear what is going on with your research, etc. This is an open group and everybody gets a chance to contribute. The second thing is that this meeting is being recorded. The purpose of the recording is to be able to distribute the minutes for review at a later date. It also serves as a piece of history to work from. We do have an agenda and topics to cover tonight, but before we get into that, does anyone have any topics that aren't on the agenda that you would like to have added tonight?

Gregg: Just one. The announcement of an ftp and gopher site at the Trace Center at "trace.waisman.wisc.edu". So the DACX material as well as a lot of other disability related material will be archived there. So if you want a copy of different disability related kinds of things, we are going to start using that as a place where people can retrieve this information.

Mark: Two areas in which DACX has been concentrating its work on the X Window System are providing improved access for people with mobility impairments and for people with sensory impairments. To date, the sensory work has focused on the visual aspect. A lot of what we are going to cover tonight falls into those two categories. I am going to ask different members of the DACX group who are working sort of as subcommittee chairs to get up and present the results of some of their work, which hopefully will stimulate some discussion and some other ideas. To kick that off, I'd like to ask Will or Beth to give us a quick review of some of the work DACX was doing prior to the X Technical Conference in January, regarding the development of hooks and the RAP protocol, kind of as a short review since our last DACX meeting at Closing-the-Gap.

Beth: Will is going to take over on the second part. Mark just mentioned the hooks. The hooks we have been developing are in the X libraries. The actual communication protocol that gets the information from the hooks to the screen reader agent is the second needed part, and then there is the last part, which is the screen reader agent itself. The primary effort of the DACX GUI committee this past year has been working on designing, specifying and then putting in the hooks. The good news is that those hooks are in the release of X Windows which is going to come out next month (April) with X11R6. There are five types of hooks. There are hooks that tell you when things are created in the interface, when any resources or aspects of the interface change, when geometry is configured and when geometry is negotiated, and also a destroy hook. This provides us a great deal of information for computer access. One thing I want to tell the group that they may not know is that the documents that specify these are now up for public review, which means anyone can have access; you don't have to be an X Consortium member. They are available for anonymous ftp, and the ftp site is "ftp.x.org". I also have copies with me of what the specifications for the hooks look like. Also, I want to remind you that the five hooks I just mentioned are in what we call the Xt Intrinsics. There is also a hook that is in the X lib library interface layer. This is sort of our catch-all hook. This means that if anything in the GUI interface does bypass the Xt hooks, we have one last hook which catches every single packet that is sent to the display server. This catches everything that is happening in the interface. This is very low level information, so you don't want to rely on this completely. The specification for that is going to be in the coming release of X11R6, and the documentation for that is also up for public review. You need to look through the X lib document for that. Basically, for those people who do not work in this area, this hook replaces the need for any type of pseudo-server in your screen reader system. You can use this hook directly in the X lib layer to get all types of information. The Mercator system, which is a screen reader for X that is based on these types of hooks, is out for demo in the Trace booth during the course of the conference. You can come see that system. The second part of this is that once you have the hooks, you still have to communicate that information to the screen reader. The screen reader is a separate program. So you have the application running in your system. There are some hooks trapping information about it, but that information still has to be communicated to the screen reader. That is what we have been calling our remote access protocol, or RAP. Will is going to tell you about DACX's initial design on that, and then I will give you a final update on where that is now.
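
As a rough illustration of the Xt hook mechanism described above, here is a minimal C sketch of how an agent might listen on the X11R6 hooks. It assumes the R6 hook-object interface (XtHooksOfDisplay plus callback resources such as XtNcreateHook); a real screen reader agent would forward this information to a separate client over RAP rather than print it, and the hooks actually fire inside the application's own process.

    /* Minimal sketch: registering on the X11R6 Xt hooks, assuming the
     * R6 hook-object interface (XtHooksOfDisplay and XtNcreateHook).
     * A real screen reader agent would forward this over RAP to a
     * separate client rather than print it. */
    #include <stdio.h>
    #include <X11/Intrinsic.h>
    #include <X11/StringDefs.h>

    /* Fired for every widget the application creates after registration. */
    static void create_hook(Widget hook, XtPointer client_data, XtPointer call_data)
    {
        XtCreateHookData data = (XtCreateHookData) call_data;
        printf("widget created: %s\n", XtName(data->widget));
    }

    int main(int argc, char **argv)
    {
        XtAppContext app;
        Widget top = XtAppInitialize(&app, "HookDemo", NULL, 0,
                                     &argc, argv, NULL, NULL, 0);
        /* The hook object is per-display; the change, configure, geometry
         * and destroy hooks are registered the same way with XtAddCallback. */
        Widget hooks = XtHooksOfDisplay(XtDisplay(top));

        XtAddCallback(hooks, XtNcreateHook, create_hook, NULL);

        XtRealizeWidget(top);
        XtAppMainLoop(app);
        return 0;
    }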

Will: As Beth was saying, there is a need for a protocol to take advantage of the hooks. How many people are familiar with X, or the Xt Intrinsics, or anything like that? Okay, we are talking hooks and we are talking protocols and all this. In X you have something called a client and something called the server. The server is your physical screen that displays all your windows and where you type on your keyboard and move your mouse. A client is an application that runs. A screen reader is trying to get the information from the clients and represent that in a nonvisual format. The hooks are built into what we call the Intrinsics, or one of the toolkit libraries, for the client. When you link in the toolkit, you link in the hooks automatically. Even with that set up, it is still not good enough, since what you need is something to communicate to other clients what's happening with these hooks. That is where the protocol comes into play, just like Beth said. The protocol is a link for one client to communicate with another client. The reason the RAP protocol was developed, rather than developing some proprietary protocol, is that when a client links in something, it has to link in the standard library. For a client to be accessible using the hooks-and-protocol methodology, it has to use a standard protocol with the standard hooks. You cannot retrofit it. When you build and link a client, it has to have the protocol and the hooks linked in with it. So what DACX did was sit down and look at something along the lines of developing a standard protocol called RAP, or the "remote access protocol". There is also an underlying protocol called the ICE protocol, which is a very low level protocol for communicating between two clients. Essentially what RAP was intended to do is take each of the hooks that Beth described, package the information from the hooks, and send it out in a standard protocol across the wire to another client. And that is about it. At the X Technical Conference last January, some people from BULL Systems in France developed something called EditRes2, or the K-edit protocol, which was very similar to the RAP protocol, so at the X Technical Conference we sat down and discussed what the similarities were between RAP and K-edit, and what part of K-edit could be used in RAP or what part of RAP could be used in K-edit. Now I will pass it on to Beth, who has personnel at Georgia Tech working on the protocol.

Beth: Are there any questions? I know there are people who have been in the DACX group for a long time and have been following all this X "techy stuff" who should be making sense out of some of this, and the rest of you are going to be saying, "what in the world are they talking about"! Please go ahead and interrupt with any questions. Basically, what we are trying to do is get all of this to be a standard part of the X Windows distribution that is controlled by the X Consortium. Pretty much, this distribution feeds into application writers and toolkit developers, for example the Motif toolkit. I guess the idea is that what we are really aiming towards is applications that then become compliant with the X11R6 distribution. There are sort of two possibilities, actually three possibilities, with applications. One is that they are dynamically linked. If an application is dynamically linked to the library, then if you have the new libraries installed it will work even if the application has not been recompiled. For example, in the SUN distribution, SUN applications are typically dynamically linked. Also, applications that use the Athena widget set are typically dynamically linked. Another possibility, for any of you coming into this group working on X screen readers, as we have been working on ourselves, is that you can work with older versions of X applications that don't take advantage of these X libraries. However, the focus of the DACX work so far has been to look toward the future: look at setting the industry standard that we can take advantage of from this time forward.

Peter: Something that may not be clear to new people is that there are two levels of information that we (DACX) want to get. One is the text in the windows and the other is information on all of the widgets. The hardest problem is the information on all of the widgets, and that was the primary focus of DACX all along. The "catch-all" X lib hook was a very nice way of getting the text information and window hierarchy information, so that from X11R6 forward it will be very clean to get everything, and possible to get the widget information. For X11R5 and R4, there are other approaches to getting what we get with the X lib hook, but no approaches to getting widget information (for example, that this is a button and the button is on); we will still get the text. Does that make it any clearer?

Beth: The really hard work has been focused on trying to set some industry standards, and as I was talking with Mark this afternoon, I think that is a goal the DACX group has been very successful with. In a period of 18 months we've gone from having a problem to having something coming out in a major distribution that works for access. It will, however, take a while for those changes to propagate. Now, I am going to pick up where Will left off, unless there are questions.

Question (regarding what a screen reader is):

Beth: I always think of the screen reader as another application or client running in your environment. I do not think of it as part of the display server. It is essentially another client or another application. For example, other uses of the types of hooks that we've been working with are things like resource customization programs or testing programs that exercise the running application for different reasons. Such a program uses the same types of hooks and the same type of information that a screen reader uses. There are essentially two clients, or two applications, sharing information through these hooks and through this protocol, one of them being the screen reader.

Question: All of these hooks are supposed to get information from the application. Are there methods of getting information from your operating system?

Beth: There are two answers to that question. First, these are not really hooks into the operating system. You are thinking of hooks like the ones I typically think of in the PC world of screen readers. These are hooks that are in the underlying libraries of the X Window System, so they are really part of your application; you just don't think of it that way. The second part is something that will be in the protocol eventually, and that is a way for an application to send messages directly to a screen reader or any other type of client application that wants this information. If an application wants to be even more screen-reader friendly than what the general purpose protocol provides, there is a way for it to send messages to the screen reader giving it more information. That is part of the plan for the protocol that hasn't happened yet, but it's in the works.

Peter: Another thing that might be worth interjecting here is that some applications we've noted, Frame is an example, decided not to use the X libraries for rendering tasks. So they have three options: they can continue to be inaccessible; they can instead release a new version that uses the text APIs and generates text through the libraries; or potentially they can even say to a screen reader, if it's present, "hey, I am rendering this text here; never mind that I am not rendering the text through the library," and that might be another reason to have a back door into the screen reader.

Beth: Yes, actually I am going to address some of those issues. Frame Technologies Corporation has expressed willingness to deal with people that want to provide an extra module to a screen reader. Back to where Will left off, and I think I will address some of these questions. What we have is the hooks already in place, so we have a way to get the information. What we are left with is making the protocol. Basically, the DACX group worked pretty hard from Closing-The-Gap in October last year up to the X Consortium conference in January of this year to get a specification put together for what this protocol should look like. In some ways these efforts were stalled, and in some ways these efforts were helped, by a new proposal appearing directly in the X Consortium. This proposal is from the people in France, who wanted a completely different protocol called K-edit, and their goal with that was resource customization. K-edit had some of the same needs as screen reader access, but not identical ones. So most of the time at the X Consortium meeting was spent ironing out how these proposals were alike, how we had similar needs, and how we needed to make something more general. The good news is that a very nice general protocol is on the way, but it is going to take about a year to complete this general protocol. Even better short-term news is that the X Consortium has been willing to work with the DACX group, specifically with people from Georgia Tech who decided they had time to implement a subset of this protocol specific to screen reader access. The goal is for this subset of the protocol to be out in June, 1994. This will be part of what is called the "contrib" portion of the X distribution. I am not sure what all the politics of that means, but basically it is something that is out there for people to play with, to experiment with. It is not something that you can count on being compiled into every application that is compliant with R6. This will be a prototype, or evolving standard. The prototype for the protocol will change within the year as we work towards X11R7 and, what is even more important, Motif 2.1. So what we are up to now is working on the subset of the RAP protocol. The main thing we are looking to create is the combined protocol with the people in France and in the DACX group. The very good news is that the OSF group has gotten heavily involved with this, and they have made outward claims that they want to make sure this is in Motif 2.1. The bad news is, as everybody knows, they are still working on Motif 2.0. Motif 2.1 should happen within the next year. Here are things that are important to this group. There is now a public mailing list, because so many people are interested in this. The mailing list is called the x-agents list. This is for people looking at applications that want to use this protocol and want to use the hooks, at the screen reader application, and so on. I have the information listed here. Basically, you need to send a message to "x-agent-request@x.org". Send an email message to that address. The subject doesn't matter, but in the body of the message, have the single word "subscribe". That will get you on the mailing list. Again, this is a public mailing list for people who are interested in applications that use these protocols or use these hooks.

Will: Just two things. One is the reason the x-agent list was made: the problem is that not everybody is a member of the X Consortium, and the way the X Consortium works, you can only get the R6 code before it is released if you are a member. If you are not a member, you don't have access to the code. The problem we (DACX) ran into is that Georgia Tech and BULL were not members of the X Consortium, while Digital and OSF and some of the other people who met at the X Conference were X Consortium members. Georgia Tech and BULL could only be the primary providers of this protocol if they were members of the X Consortium and had access to the code. So we ran into this basic problem, and the way Georgia Tech solved it was to join the X Consortium. We still recognized the problem, and the people at the X Consortium established the x-agent list to allow anybody on the world wide web, another hot topic for this conference, to be involved. Therefore, you can get on the x-agent mailing list as discussed. The other thing is that the topic of the x-agent discussion is more about what the application should do, not what a screen reader should do. It is more about what this protocol should do to enable a screen reader or a resource-type application.

Beth: There is still some concluding discussion on the hooks themselves. The discussion on the list is pretty X-heavy, but it has a wide variety of people on it. There are already about 40-50 people on the list. Another thing to mention briefly, probably not a concern to the majority of the people in this room, is another protocol called ICE. There are multiple levels of protocol here. Basically, ICE is the underlying protocol that handles things like authentication, connection and data type information, etc. The ICE specifications are also up for public review, so in the same way in which you can find out about the X lib and Xt hooks, you can also find out about ICE. By the way, public review ends in early April, so if you want these documents, you had better get them now. One more thing, and that's the question of how to deal, in multiple ways, with the fact that the Xt hooks don't get all the information you need. Basically, it is being approached in two ways, in addition to the X lib hook. First off, there is an effort to put hidden parts of the widget out as fake resources, or whatever you want to call them. A lot of times there are attributes of the objects in the interface that are not real resources, so if they are not real resources, we don't find out about them through the hooks, but they sort of act like resources. So the K-edit people have already come up with a way of making these attributes of the objects act like resources so that they will trigger the hooks. This gets at a lot of the cases where the widgets would otherwise look deceiving, and makes it still work relatively well. Part of the discussion that is going on now on the mailing list is figuring out how to do that. Another thing, and I already alluded to this, is the fact that OSF has gotten involved and made the commitment that these hooks and this protocol will be in Motif 2.1. What they are saying is that there are parts of Motif that still escape these hooks, and to make sure Motif works, they (OSF) are going to go in and modify it (Motif). For those of you still keeping up with the X side of this, Motif is pretty much the toolkit of choice. 90% of the commercial applications out there are written with Motif. If we can solve Motif, we pretty much solve access to X for the time being. Even with the hooks that we have and the protocol that we have, 5% of the information we need is still missing or hidden in applications and Motif, and they (OSF) are going to fix that also. So the idea is that by Motif 2.1, which should be in less than a year, all the holes should be patched. That is only going to happen if we keep up on the list and keep up work on the protocol. The idea is that people want to make this happen. Any questions?

Question: Who are the groups working on a screen reader for X?

Beth: We at Georgia Tech have been working on one, as have Berkeley Systems and IBM. So there are at least three prototypes out there. I know there is work in Europe as well; in fact, there is supposed to be a representative here.

GUIB representative: The GUIB Consortium is also dealing with this, but they are still waiting on a resolution for this.

Beth: Basically, there is an EC project that has 9 to 11 countries represented in it, and their work is focusing on Microsoft Windows and X Windows. Somebody from GUIB contacted me about 1 1/2 years ago and was sort of trying the same technique to access X Windows, and basically the infrastructure they've been building will build on the hooks as well. Some aspects of the GUIB project are different from the other work we have here: they have a pretty impressive tactile display that they have been working on, and they have also been working on some other applications of sound. So it is a very large project in terms of trying different approaches for the screen reader interface and also looking at both Microsoft Windows and X Windows.

Question: What is the time lag you were talking about?

Beth: That is a heavy question. X11R6 is out in April. The "contrib" will be out in June. Motif 2.1 will be out in less than a year. Pretty much any screen reader is going to be dependent on how those schedules work. So if you want to count on a commercial screen reader, it is going to be after all those things happen. Depending on how people are working on it and how quickly they implement the changes, it could be fairly quickly after those things happen. In fact, the OSF staff have even talked about having a screen reader that is part of the Motif distribution. I wouldn't count on that coming out with Motif 2.1 either, but it should come out afterward. The important thing to remember in all of this, when we talk about the screen reader, is that even though a lot of work is done by the hooks and the protocol, the screen reader will still need a lot of smarts built into it as well. It essentially has to be able to interpret any type of widget that Motif renders, or any other widget set you are working with.

Will: To add just a little bit of clarification to that. When Beth says X11R6 is due out very soon, it is due out soon from the X Consortium. It will then be consumed by vendors like Digital, IBM, etc. So historically there is some lag time between the time the Consortium releases X and when vendors release it on their platforms. They (OSF) have just started thinking about Motif 2.1; they haven't begun selling it. Also, when OSF lets out Motif 2.1, there will be a lag time between that and when vendors ship it. So first you will have a lag after R6, and then you will have a lag between the time OSF gets R6 and gets Motif working with it. One other thing: Motif needs to be accessible, but so do other toolkits. For example, Fresco is a different toolkit, and we need to be sure the protocol works with other toolkits, and I think while you are reading about Fresco, the x-agent list would be a good list on which to talk about generic protocols. It is a good discussion, so just be warned that now is our chance to modify a toolkit that hasn't been released yet and make it accessible, rather than do a retrofit with access requirements.

Beth: A prototype version of Fresco will also come out with X11R6.

Will: Kind of like the kiosks Gregg was talking about yesterday. Prototypes are being built now, and now is the time we can do the right thing by working with them to make them accessible.

Beth: One thing I would like to say to everyone, and remind people of, is that we have three major X Window System vendors represented within this group (DACX). In terms of X toolkits and X distributions, even with something out with R6, and especially with something in the "contrib", it is important to work within your organization and say this is an important issue; even though it is just a "contrib", let's make sure we get it into our distributions so people can be working with it and playing with it within our organizations, and our customers as well. So again, the idea is that, even as a prototype, we are trying to get something out so we can get as many people playing with it as quickly as possible.

Will: Another way to help with that is to write a letter to Beth's boss or write a letter to Earl's boss saying it is important. That is the kind of ammunition that we need to prove to our supervisors that this work should be done. So if you are here as a member of DACX who is just interested in it, and you are not doing a lot of the work, maybe one of the things you can do to contribute and help the other people do their work and maintain their positions is to write a letter of support.

Jim: When we talked about this in previous DACX meetings, the impression I was left with was that these hooks would be embedded into the X lib and the Xt Intrinsics, that those would then go out with the X Consortium release of X, and that it would really take some extra work to take them out, so why would anyone? So what is it that you are asking?

Will: The thing is it doesn't require extra work to get them out. What it requires is that some of these things are still on paper, like the prototype protocol that Georgia Tech is working on. The people working actively with the x-agent list to write the code need to have commitment from their bosses to let them work on it.

Earl: Additionally, when something gets into the "contrib", which is where the prototype work that Beth is doing will initially appear, it is not necessarily going to be picked up by each individual company.

Jim: How could it not be?

Earl: For example, we don't include, say for instance, X-trap, which is a contrib.

Beth: Remember, what we are talking about is two different parts.

Earl: That's if it is a "contrib". Now if it is built into R6 and it becomes a base part of the system, like what's going on with XKB, then that is something that ships with the systems. The "contrib" is code that somebody else, from a research center, from another company, etc., not necessarily MIT, is just making available to anybody who wants to use it. Companies that sell workstations, etc., don't necessarily include all of that. Since there are already 2 1/2 million lines of code, or something like that, in the server, the more you add, the more overhead you run into on your system.

Beth: There are two parts. First are the hooks themselves. The hooks are part of the standard X distribution and they will come out with R6. They are part of the X that ships everywhere. What I was referring to as a "contrib" is our prototype protocol, which enables the screen reader to use the hooks. That is what is going to be coming out in a "contrib" library in June.

Question: What value are the hooks without the protocol?

Beth: That is a good question. The value is that the hooks are there, and if you wanted to write your own libraries and you were counting on dynamically linked applications, you would use them.

Question: The real answer is not very much use at all. This means that the "contrib" portion is critical.

Beth: The "contrib" portion is critical and it will be very small.

Jim: We need to get a list of whatever these files are, etc., however that is identified, so the people that don't know anything or don't hear enough about X can go to the people that manage the X Windows group and say to them that they have got to include these files. Just to say there is something out there that you need to put in isn't going to help.

Beth: For the thing that is due out in June, the x-agent list will definitely be telling people what is going on, and that will be forwarded to the DACX group to tell people what is going on. Those are the two major lists to watch, so we can tell you exactly what files are going to be in the "contrib" library that are needed for the protocols. Any more questions? Since Will already brought this up, I am going to follow up. Some of you have already received this message anyway, depending on when you left for CSUN. Basically, Georgia Tech is applying for an internal commercialization grant for the work that they are doing. This is internal Georgia Tech money, so they are the only ones who could apply for it anyway, so for once we are not competing with anyone. And we have a good shot at it. I've done a lot of the work already. But what it takes is people hammering down the doors saying this is a real problem. They are marketing people. They care about whether there is a market for this, whether there is a need for this in industry, whether people would buy something like this if it were available, and that this isn't just software that no one will ever use. So if you can, send out the message one way or another that access to X Windows is a really important problem within the industry. Write a letter to the person specified in my email note. It is a real chance that would give us a year's worth of funding to turn our prototype into something that sells or is licensed out to another company. This is the one shot that we are looking at right now in terms of something that can actually be released on the market. I will also post any information that I can get in terms of demand for X access in companies, sharing the need if you will. I have already gotten a lot of people responding saying, yes, people in my organization need this, my inquiries are looking for this, that sort of thing.

Gregg: Another comment on that might be to check with the IRS, because they have a large number of people with disabilities and a large number of people who are blind. Other examples include the Treasury Department, ...

Mark: Next on the agenda I'd like Jim Caldwell to update the group on some of the work that he did to obtain access to the code from RPI, concerning their X screen enlarger software.

Jim: I can fill you in on that. We finally got the Dynamag code from Ephraim Glinert, who was at RPI but is now at the University of Washington, Seattle, working there on a year's sabbatical. We got the code, you (DACX) got the code, we implemented it. Being blind, I am the world's worst judge of what screen enlargement code looks like. The people at IBM who did put it up and started fooling with it told me it was not what they had expected, and I don't think they are using it. Based on their assessment, I don't think it was the answer we (DACX) are looking for. Perhaps it is a start, but I think it was just making Xmag more of a dynamic tool. People complained about it jumping around the screen a lot. That is the bottom line.

Greg P.: It is interesting. Essentially it is Xmag in a loop with a few features added on to it. Xmag is on the "contrib" tape, and has been there now for how many releases of X Windows? Xmag is a slow program, and Dynamag, which is the main part of RPI's code, uses Xmag itself. It is adjustable in the speed at which it updates the screen, but as you make your enlarged area larger, Dynamag also slows down more and more. As you try to move the pointer around the screen and have Dynamag follow it, there are some problems with that too. I think it tries to enlarge the enlarged part, which becomes a problem as you move it around. We played with another magnifier that is very similar to this, but as you are moving around it gets a little confused about what it is supposed to be magnifying. It will focus on itself and blow itself up.

Jim: We are trying to get funding to work on this kind of a thing. I don't have any yet. Clearly, we don't have the answer. That is the bottom line.

Question: Is this proprietary to a specific platform?

Jim: No.

Greg P: It is available. The package itself is called UnWindows. It is Dynamag plus a few other programs.

Mark: I put the internet site address out on the DACX alias a while back. I can do it again. Okay I will repost that next week, but you can ftp to "ftp.cs.rpi.edu", and the files are in the UnWindows directory.

Jim: IBM has just come out with a screen enlarger or magnification program that runs with the screen reader for OS/2, and it has a nice feature: what you have just heard is what gets enlarged or blown up. To me that is the only way to go, because I was working with some low-vision people and it is very fatiguing for them to look at the screen. They can look at the screen, but it is really hard work. It seems to me that what they really like to do is use an audible screen reader, get all the information they can, and then, if that is not enough, they may look, and they need to know that the same spot is what is enlarged.

Peter: Speaking as a vendor of screen enlargement for PCs and the Macintosh, which is certainly closer to X Windows than DOS is, we feel we have in our Mac product sort of a good specification of what you need for X Windows. In going over this with Bob Scheifler and others, we see that the only way to get this functionality in X is with a pseudo-server. It is the only way we see. The issue is that our software, Zoom Text, and a few of the others will completely enlarge the screen and use the entire screen as your enlarging magnifying glass. You can't do that if you only have a locked window somewhere at the bottom, and it is just a window, just a simple application that happens to be getting the bits near the mouse and blowing them up in another window. You need to redirect all drawing to a different server and take complete control over the real display, and to do that you need a pseudo-server. By doing the pseudo-server, as the other screen reader prototype developers know, you will lose some of the nice efficiencies with the special graphics cards and accelerators and so forth. That is sort of the penalty we think you have to pay to get what we consider to be a good screen enlarger. Some of the other features include inverting the entire screen, but with sophisticated color mapping, black or white; and enlarging only a portion, but then having that portion scan across a line of text in a word processor window. If you are enlarging two or four times, you are not going to be able to get the full width. So you have a partial glass, and that starts traveling across, enlarging like a marquee. For these sets of features you need to redirect the screen. You need a pseudo-server or something that is effectively a pseudo-server. This is how our Mac product works; it is effectively a pseudo-server. I don't see that there is any other effective way to do it.

Question: You put the magnifier on the server itself?

Peter: Precisely. You have to do a memory server, because just getting the calls is very nice, but you need to render them into bits, because it is bits that you are showing on the real screen. So you need to implement a memory server, and from what I understand, 90% of the code for a memory server already comes on the X tape. So you need to add the remaining 10%, and then you need to do a whole bunch of bit-blit operations all the time. There are things you can do to make it intelligent and faster. Basically, that is the only architecture that a bunch of us talking together have come up with. For anyone that wants to do screen enlargement, I think that is probably what you will have to do.

Jim: You said that the code for a pseudo-screen server is already on the X tape?

Peter: 90% of a memory server is supposedly on the X tape. A pseudo-server is something that claims to be a server and isn't. A memory server is something that is a server and renders bits to memory, not to a screen. A number of X Window machines have very special memory architectures for the display, with additional hardware for polygons and whatnot. You may or may not get to use that in a memory server; certainly you can't expect it to be there. So the first approach would be to write one memory server that is fairly straightforward but doesn't use any of that accelerated hardware. Once you've imaged the bits, you are going to take one pixel and make it 4 or 9 or 16, and then take those 4, 9 or 16 pixels for every pixel you are magnifying and draw them on the main screen; that is going to be a lot of bit copying.

Jim: You just said 90% of the code for...?

Peter: A memory server should already be on the distribution tapes from X Windows. So if you wanted to do this yourself, you are going to have to allocate a person. If anyone is interested, we have a demo version of our Macintosh Enlarge product, which can give you a sense of what we think real screen magnification should be; come by our booth, get a copy, find a Mac, and play with it.

Question: ... and keeping track of the pointer, that is going to be the hard part?

Peter: On the Macintosh you have much less of a sense of that. The Macintosh has no real keyboard navigation. So you hit the tab key and you aren't generally tabbing anywhere. You don't use the arrow keys to move through a dialogue box from one control to another. Microsoft Windows uses that very, very heavily. Now, again, with some of the hooks that we are getting for screen readers, telling you what the active widgets are, etc., you will want to feed that back into a good, usable screen magnification client.

Question: Why not build it like a screen reader?

Peter: If all you are doing is taking one pixel and turning it into 4, that is really fast and not nearly as much of a problem. If you want to re-render some text so that, instead of the 12 pt. text coming up as very blocky 24 pt. text, it is re-rendered as smooth, clean 24 pt. text, then you need both a screen reader and screen enlargement, and you need to have the two talking to each other.
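
To make the pixel-replication arithmetic concrete, here is a rough client-side C sketch using plain Xlib (XGetImage and XFillRectangle). It shows only the blow-up step described above; a full enlarger of the kind Peter describes would live behind a pseudo-server and redirect all drawing, which this sketch does not attempt.

    /* Rough sketch of nearest-neighbor magnification with plain Xlib:
     * grab a patch of the root window and redraw it, with every source
     * pixel replicated into a zoom x zoom block, in another window.
     * A production enlarger would do this inside a pseudo-server. */
    #include <X11/Xlib.h>

    void magnify_patch(Display *dpy, Window dest_win, GC gc,
                       int sx, int sy, int src_w, int src_h, int zoom)
    {
        Window root = DefaultRootWindow(dpy);
        XImage *src;
        int x, y;

        src = XGetImage(dpy, root, sx, sy, src_w, src_h, AllPlanes, ZPixmap);
        if (src == NULL)
            return;

        for (y = 0; y < src_h; y++) {
            for (x = 0; x < src_w; x++) {
                /* one source pixel becomes a zoom x zoom block */
                XSetForeground(dpy, gc, XGetPixel(src, x, y));
                XFillRectangle(dpy, dest_win, gc,
                               x * zoom, y * zoom, zoom, zoom);
            }
        }
        XDestroyImage(src);
        XFlush(dpy);
    }

Calling something like this in a loop that follows the pointer is essentially the Xmag-in-a-loop behavior described earlier, and it also shows why large zoom areas slow down: the per-update work grows with the size of the source region.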

Will: I agree about the pseudo-server and some of the other alternatives, but I took on a little project of my own to see what I could do to make something more enlarged on the screen. The good thing about X is that you can specify the type of font that will be used for display. So what I have done is use VTWM, which is a virtual window manager; it will let you make a virtual screen much larger than the physical screen, and as you move the mouse towards the edge of the screen, the virtual screen pans. What I did was make the default font a very large 36 pt. font using the window manager, and if you come by the Trace booth, you can see it. It is missing the ability to track the active point; it won't follow the active point. It also has a problem with things like cursors: cursors are still small, they are not magnified. The goal was to take the same application and just modify resource files to get it to be more accessible, so come by the Trace exhibit, take a look, and see what you think.

Question: VTWM is still going to be a pseudo-server?

Will: VTWM is a window manager. What it does is move windows, so it can move a window to a position off the screen. By moving windows around, it acts like a virtual screen for the display. The reason I chose that approach is that the dynamic Xmag thing was slow; it was very CPU bound. It can bring the whole system to a halt. A pseudo-server is also very resource bound. So what I was trying to do was take an existing application, without any additional requirements for memory or CPU, and see what I could do.
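
As a small illustration of the window-moving idea, here is a hedged C sketch of panning: shift every top-level child of the root window by the same offset, so windows slide on or off the visible display and the root acts like a window onto a larger canvas. The function name is made up for illustration; VTWM itself keeps its own frame windows and state, which this ignores.

    /* Sketch of virtual-screen panning: move every top-level window by
     * the same offset so the visible display behaves like a viewport
     * onto a larger canvas.  Simplified; a real window manager tracks
     * its own frame windows. */
    #include <X11/Xlib.h>

    void pan_virtual_screen(Display *dpy, int dx, int dy)
    {
        Window root = DefaultRootWindow(dpy);
        Window root_ret, parent_ret, *children = NULL;
        unsigned int i, nchildren = 0;
        XWindowAttributes attr;

        if (!XQueryTree(dpy, root, &root_ret, &parent_ret,
                        &children, &nchildren))
            return;

        for (i = 0; i < nchildren; i++) {
            if (!XGetWindowAttributes(dpy, children[i], &attr))
                continue;
            /* moving a window partly or wholly off the screen is what
             * makes the display act like it is panning */
            XMoveWindow(dpy, children[i], attr.x + dx, attr.y + dy);
        }
        if (children != NULL)
            XFree(children);
        XFlush(dpy);
    }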

Peter: I have to say, though, however much the pseudo-server is a big "boogey man" to some, we have a 68020 processor effectively running the pseudo-server, and we've got quite a few protocols on it, and it works surprisingly well.

Will: The Macintosh is a completely different architecture and operating system.

Peter: Right, but you are still imaging the bits twice and all of that. If you write your code well, it can be done. You have got so much better processors on the SPARCs. Show us where your power is!

Jim: Mark, I think I have talked a lot and taken up my 10 minutes.

Paul: I would like to add to one of Neil's points on screen magnification. When we created DACX, what we did was invite everybody that we knew who was in the business of X Windows and everybody that we knew who was in the business of screen readers to a consortium, and at the first meeting we talked about how we were going to solve the AccessDOS kind of accessibility, the screen reader accessibility, and we talked about screen enlargement, and there was no expertise in the room, so it kind of got pushed aside. If we had invited all the people that were in the screen enlargement business into that room from the onset, maybe we would have had some expertise in the room for that portion of the problem. We could do that now: let's invite them all to the next meeting and ask them if they want to solve the screen enlargement problem, and if they don't, what have we lost?

Mark: I'll make a note of that. Next on the agenda. Leedy Day made a request to present ideas about some other access features or modalities to the DACX group.

Leedy: What I wanted to discuss a little bit, after all the technical "assist me" kinds of things, is a concern that I have over the user interface. I realize that some people are now at least developing prototypes and that kind of thing. I am very concerned about the lack of discussion over the user interface, and more so over the modalities of access. What type of information are we trying to access? I realize that text is uppermost in everybody's mind, as well as mine. But what about the graphics? The graphics obviously are a big piece of this, and not everything is going to be available in a text format. Also, I am not quite sure what is meant by screen reader in this context. I have some concerns regarding the emphasis on sound without equal emphasis on tactile and Braille access, and although I realize that various drivers and so on can be written for a toolkit, the difficulty comes in that if these things are not considered from the beginning, we end up with a situation such as Synthavoice has with their Window Bridge. Somebody decided this is a great product, it will work with an "Alva", but then a couple of things happen. The information doesn't come up on the display in exactly the same columns as it is showing up on the screen, so you have this difficulty of discussing it, say with a colleague or whatever, and that is a problem. Also, because Window Bridge was initially designed for speech, instead of saying "..." or "ellipsis" or whatever, the speech from what I understand will actually say "point, point, point", and that is exactly how it comes out written on a Braille display: "point", "point", "point"! These kinds of things need to be taken into consideration, in my opinion, from the beginning, instead of just assuming that sound and audio are the only access media for people who are totally blind. Obviously, when we talk about magnification, a person who needs magnification probably doesn't know Braille and isn't very comfortable with any kind of tactile access, so my take on that is that they are going to want to get the text in the fastest way possible, but the magnification isn't just going to be text, it is going to be graphics. And there is what I said before about the picture: I don't want to hear a description of the picture, I don't want to read a description of the picture, I want to see the picture. And certainly anything that can be grabbed as text, I want that, but if it can't be grabbed as text, I still want it. I am not sure that those issues are really being addressed. What I wanted to state is that I see definitely four modalities that need to be considered as requirements, not as something we will think about in version 2 or version 3 of whatever. They need to be considered, I believe, while we are coming up with protocols and we are doing hooks and that kind of thing, because they affect the kind of information we are grabbing and how we are grabbing it. I really consider all of these to be equal: magnification, Braille access, sound, which includes audio, and tactile. The only tactile hardware that we have at this point, although it is not what everybody would like to see, it is what we have, is an opticon, and I am very much concerned about the lack of emphasis on development of interfaces between the opticon and PCs for one thing, but also, in this environment, with workstations, because that to me is the only way that we have of seeing the graphics.
I have talked to TSI, and I'm tired of trying, about developing an option for the opticon PC package, which actually wouldn't help in this situation, but it may help a little bit: actually adding an option to the opticon PC package which would work similarly to Berkeley Systems' discontinued InTouch, so that the graphics could actually be seen. If somebody wanted the option of just turning off the graphics access and just grabbing the ASCII, they could. Is anyone familiar with the opticon PC software? No. So let me explain a little bit more. Who isn't familiar with what an opticon is? Great. Anyway, I guess it depends on what your job is. I am a software engineer and I need to see the drawings; I don't just need to see the drawings, I need to be able to make the drawings. But at this point I would settle for being able to see the drawing. I need to be able to see flow diagrams. And let me tell you about one of the neat things that happened to me for the first time with my three children. The youngest is turning seven now, and on Tuesday afternoon, for the first time, I was able to see one of his drawings. That goes beyond words. He has been describing drawings, and all of my kids have described drawings as they went along. But actually seeing one! The way that was done was that he drew a picture with a felt tip pen; actually he did several. He labeled them down at the bottom. He wrote what the picture was, and he scanned that in with the Oscar and printed it out on the VersaPoint. The first one he did was a heart-shaped balloon that said "I love you" through it. There is nothing that can replace a picture; whether it is tactile or visual, it's a picture. Anyway, what the opticon PC software does is let you pan along with a mouse and see the ASCII text come up and form on the array. Now, TSI decided when they developed this that they would just grab the ASCII text, that's it. The reason for that was, well, if you skew the mouse or whatever, it doesn't matter which way you turn the mouse, you are still going to see this beautiful, perfect ASCII text, much better than you ever saw any text in a book, believe me. But at the same time, the bad thing about that is that that is all you get, because any graphics that happen to be on the screen just don't come out at all. I would really like to see an option where, and Peter, you guys may want to explain a little more about how InTouch did that, I am not sure. For instance, with InTouch, I could scan along with the mouse and I could see the icons. I was just so psyched by that, I just couldn't believe it. Not only could I see the icons, which obviously I wouldn't recognize by shape the first time I saw them, but OutSpoken on the Mac would tell me what each icon was. And the two of them working together did the thing that is really normal for anyone who is viewing a picture: you had the picture, you had a symbol in your mind, and to that you had mapped the name. That is normally how people remember things and learn all those kinds of things. Things are mapped to an image, and that to me is no different for a person who can't see. It is just that there is a different way of having the image input into the brain. So I would like to see TSI do that for the PC. However, I have been told by Rob Savoy that they are not interested in doing that, but they would be willing to give the spec to anyone who is willing to write it, and I know I have discussed this with Will before and also discussed it with Beth.
I am really concerned, and just want some reassurance, that we are not just talking about screen readers, we are not just talking about sound. I want to know that it is not just something where, gosh, people later on can write their own drivers and then they can get Braille, because as I said, I don't want to read "point", "point", "point". I really didn't just want to stand up here and talk about it; I want it open for discussion.

Peter: To answer your question, it works very much like the Xmag stuff. You know where the pointer is, you look on the screen for the dots, you grab them, you send them out some interface, whatever the opticon interface uses, and excite the pixels. The one strange issue is that the opticon wants to invert. Whether it is white on black or black on white, it is trying to make a guess about the contrast of the colors, and so there are some strange inversion things. I am not sure if that has been addressed in the new opticon, where you can say this pixel is up, this pixel is down, versus just saying this is black, this is white, and deciding whether black is up or black is down. The PC thing that you are talking about sounds like it was really for DOS, since without a screen reader you aren't going to be getting the text. The opticon-style tactile output that we have done, and that you are describing, is completely separate from all the technical stuff we have been working on here. It requires only knowledge of the bits from the screen, which is very easy to get in most cases. I am sure there are some proprietary implementations of the X server, using special hardware, which require the use of our friend the pseudo-server. Again, getting bitmaps is fairly easy. In terms of directly supporting Braille from the beginning or sounds from the beginning, I can't really speak to that other than to say I agree with you.
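
As a rough sketch of the bitmap-grabbing step just described: find the pointer, grab a small patch of pixels around it, and reduce each pixel to an up/down pin. The array dimensions and the send_to_tactile_array routine below are hypothetical stand-ins for whatever the opticon interface actually uses, and the simple brightness test stands in for the contrast and inversion guessing Peter mentions.

    /* Sketch: grab the pixels around the pointer and threshold them into
     * an up/down pin pattern for a tactile display.  PIN_ROWS, PIN_COLS
     * and send_to_tactile_array() are hypothetical; a real driver would
     * also handle contrast/inversion and clamp at the screen edges. */
    #include <stdio.h>
    #include <X11/Xlib.h>

    #define PIN_COLS 24
    #define PIN_ROWS 6      /* made-up tactile array dimensions */

    /* Placeholder for the real tactile hardware interface: just prints. */
    static void send_to_tactile_array(unsigned char pins[PIN_ROWS][PIN_COLS])
    {
        int x, y;
        for (y = 0; y < PIN_ROWS; y++) {
            for (x = 0; x < PIN_COLS; x++)
                putchar(pins[y][x] ? '#' : '.');
            putchar('\n');
        }
    }

    void update_tactile_display(Display *dpy)
    {
        Window root = DefaultRootWindow(dpy);
        Window root_ret, child_ret;
        int rx, ry, wx, wy;
        unsigned int mask;
        unsigned char pins[PIN_ROWS][PIN_COLS];
        unsigned long bg;
        XImage *img;
        int x, y;

        /* 1. Find where the pointer is on the screen. */
        if (!XQueryPointer(dpy, root, &root_ret, &child_ret,
                           &rx, &ry, &wx, &wy, &mask))
            return;

        /* 2. Grab the patch of pixels under and around the pointer. */
        img = XGetImage(dpy, root, rx, ry, PIN_COLS, PIN_ROWS,
                        AllPlanes, ZPixmap);
        if (img == NULL)
            return;

        /* 3. Reduce each pixel to a pin: any non-background pixel raises
         *    the pin (a crude stand-in for real contrast handling). */
        bg = WhitePixel(dpy, DefaultScreen(dpy));
        for (y = 0; y < PIN_ROWS; y++)
            for (x = 0; x < PIN_COLS; x++)
                pins[y][x] = (XGetPixel(img, x, y) != bg) ? 1 : 0;

        XDestroyImage(img);

        /* 4. Ship the pattern to the tactile hardware. */
        send_to_tactile_array(pins);
    }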

Jim: At least from the standpoint of what we are doing at IBM, the general model for what we are doing with UNIX, or AIX, is the screen reader for OS/2, as far as the controls, the ILs, the language, the applications, that kind of thing. Frank D. has been writing and using the OS/2 screen reader to go to Braille displays, etc., but again it is only text. Would you agree that it is very controllable, Frank?

Frank: Yeah, there are sort of two points here. The first is that what Beth and Will have been describing is the capturing of information, some interpretation of graphical images; in other words, you know it is a push button, you know it is a radio button, whatever. The second part of that is what you do with that information once you have it. The IBM screen reader was rigidly designed as a voice output device; however, during the development of OS/2, functions were put into the profiling language that made it possible to communicate directly with items like Braille and other input/output devices. So since the product was released, basically, once again as a voice product designed to talk to voice synthesizers, various profiles have been written that take the same information that's already been captured and reformat it to go to a Braille display, or to voice and Braille together, whichever you choose to do. I kind of work on the stuff for Alva displays, and I can use Braille and voice together. Others have some stuff with the VersaBraille, using Braille only. That is all conceptual and that is all done. Fortunately or not, the IBM screen reader is completely profile controlled, so you can do anything you want with the information that is already captured. You can reformat it and do whatever you please with it. You can make it a Braille-only device. You can make it a voice-only device. You can make it a combination. So the short answer is that what Beth is working on up to now is just capturing information and making sure it has everything it needs, and then whoever is going to write the screen reader has to decide what the output device is.

Leedy: Okay. But what I want to ensure is that the information that is captured is all of the information that is needed for all of these modalities. That is my point.

Jolie: I think that again emphasizes the idea. More and more of what's going on is this idea of software with pictures, of having people draw designs and things like that at the highest level of development environments. One of the easiest things to do with an opticon or whatever is to feel a straight line across a page. If you can't get to that information, then you can't get to it. I absolutely agree with what you are saying. If you see lines and things like that on the screen, you should be able to feel lines and things like that.

Peter: What might be a very productive thing for you to do is to spearhead sort of a white paper on what access means to you, what it should consist of. We are all here because if I come out with a screen reader for X Windows, it is potentially going to be competing on the market with something that Beth does, and it is potentially going to be competing on the market with what somebody else does. We have a common need, and that common need is getting information, and that is what we are spending our time talking about. Berkeley Systems has a concept of what a good screen reader is. It is perhaps a different concept than what IBM has, and I think that is very important. Nobody has a monopoly on ideas, and it is important that we see lots of realizations of ideas, because only after I show mine, and they show theirs, do we really see what works and what doesn't. Different people prefer different things. Some people don't care for tactile output; some people don't care for this. Some people really like the way our navigation works and some people don't care for it, etc. So that is, I think, part of why you are not seeing a group of people discussing what it should look like. I think that really needs to come from the users.

Leedy: I just don't want to have the options limited. Right now, while we are still at the stage of determining how the information is going to be captured and what information is going to be captured, it seems to me that in the process of doing that we need to understand how the information is going to be used. For instance, if we make the assumption that the only thing that is needed is the text, so we don't include ways of capturing the graphics for output, it is harder to do that later on. That is sort of my point: it is harder to go back in and do that. I completely agree that everybody has different ways of using things, and that is exactly why I wanted to bring this up, because, for instance, I am a Braille user. I am what people want to refer to as a spatial/visual learner, which I say is a spatial learner. When it comes to reading, I can listen to novels on tape just fine. I can get through that just fine. But give me "C" code on a Braille display, please. I go batty real quick listening to "C" code coming off of any speech synthesizer. I have to picture it; I see things in dimensions, where they are images to me, and that is how I store them. Dimensionality is really important. And just listening to something, I don't care if it has a lot of sound and stereo sound and all of that, the image is still missing. Maybe I would have to listen to it and try to develop the image in my mind. I guess what I try to ask people sometimes is this: you develop the speech output systems and so on, and that is great; I think speech is wonderful for people who can use it by itself, that is excellent. For people who want to use it in combination with Braille and magnification, that is excellent. My question to the people who are not visually impaired, who do the development, is: are you comfortable sitting there using the thing you developed completely as your primary access tool to your programming, to whatever it is you are developing? Are you perfectly comfortable using that instead of looking at the screen?

Beth: I agree with the points that were made. The focus of the DACX group has been on information access and on standards that everybody can agree on, so I want to try to relieve some fears that that is all we've spoken about. One reason we talk so much about access to text is that, as opposed to the PC and Mac environments, in X Windows the text is the hard part. Through the X lib hook or through the other hooks, it is really easy for us to find out what pictures are being drawn or what graphic controls are out there. It is the text that keeps getting hidden from us, because there are all these shortcuts to save on efficiency. So when we focus on text, it is not that that is all we care about, but that is where a lot of our technical difficulty is centered. That was sort of taken for granted: given the type of hooks and controls that we put in, we have access to the graphics; it was the text that was eluding us. The second thing is that it is still important to keep bringing these concerns up on the DACX mailing list and so on. There are many people who are talented and can work in this area, and a lot of what helps, and we have done this, is "scenario"-based problems. I do this with the X people all the time, because they can always hit me with a technical thing that I don't know about, so I just pose a scenario and say, this is what I want to see happen; you tell me what's out there and how it is going to work. Use that same methodology and say, okay, here is the scenario of how I want my screen reader to do something; you tell me how your design is going to accommodate that. That is really how you keep pushing on these buttons. In some cases, the tactile problem is the same thing as the magnification problem: learning how to deal with those bitmaps, and that is something that we know we can solve, but we haven't solved it yet. We need to get that group working more, and that will help. Push out there and push on people, saying, all right, you told me these hooks could do this; tell me how, in greater detail. I think that will help everyone work together.

Leedy: I just wanted to mention one quick thing. Thanks for that, Beth; I will keep that in mind. What I also wanted to throw out is the idea that, at DEC, we are trying to accommodate a different audience than the PC or Windows area. The people who generally use X Windows are a more technical group, both the users and the developers. I won't say that the developers are necessarily more technical, but it comes up more in an engineering kind of environment, in which people don't just take the machine home and play games on it. It's not in the mainstream of laptops, and that is what I wanted to emphasize too: maybe the focus needs to be a little different in terms of what the outputs need to be.

Will: A quick comment on the Opticon: I've done a quick interface with X Windows, so come by the Trace booth and see it.

Mark: We have two topics left to cover on the agenda, and about 25 minutes to push through them. I think the first one is very important; if we don't get to the last one, I don't think people will worry too much. A couple of members of DACX have been very industrious and have been looking for things to do; that's the good guys over at SUN. We have talked here tonight mostly about technical issues and the actual programming that is being done. There is another side to the DACX effort, and that is the advocacy side of making systems accessible: reaching the right people, getting to the developers, getting to the people who are writing the systems, who use the guidelines and develop the applications which run on X. That is just as important an issue, if not a more important one, that needs to be solved: getting out there in the world where people have never heard of the need for accessibility, and continuing to do advocacy and education. I will turn this over to Earl; he can explain what he and Eric have been doing in terms of the guidelines, and in raising awareness within OSF/Motif and some other groups concerning these guidelines for accessibility.

Earl: After Closing-The-Gap we ran into difficulties with the keyboard invocation of StickyKeys and its conflicts with other applications; the ones that always came to mind were games. To activate StickyKeys, you press the shift key five times, so with keyboard access to StickyKeys you could sometimes get into a condition where the user didn't know what was going on. What we had to do, on the user interface (you can go to the Trace booth and see two different examples of AccessX), was add an enable button to the client, so that if the user didn't want to invoke StickyKeys when they press the shift key, disabling this button would allow them to turn that function off. When Eric and I were back at SUN after the conference, we discussed this because it had some implications for SUN, since SUN has been doing the interface aspect of AccessX. Eric and I were trying to work out how to deal with the wording for enabling or disabling the keyboard functions and so on, and what Eric came up with was to have the COSE, or the CDE alliance I guess we'll call it, look at it: why don't we just write a proposal for reserving these keystrokes in the COSE environment? So Eric and I worked together to write that proposal, and Eric submitted it to the Human Interface group, who decided that this was more important than just COSE or CDE, so it was then submitted to the OSF group who is doing Motif.
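To make the mechanism Earl describes concrete, here is an illustrative sketch (not the AccessX source) of the kind of state machine involved: five consecutive Shift presses toggle StickyKeys, gated by an "allow keyboard invocation" switch so that, for example, game players can turn the gesture off entirely. The names are hypothetical.

    #include <stdbool.h>

    #define SHIFT_PRESSES_TO_TOGGLE 5

    static bool gesture_enabled = true;   /* the "enable" button on the client */
    static bool stickykeys_on   = false;
    static int  shift_count     = 0;

    /* Call on every key press; is_shift is true for either Shift key. */
    void on_key_press(bool is_shift)
    {
        if (!gesture_enabled || !is_shift) {
            shift_count = 0;              /* any other key breaks the sequence */
            return;
        }
        if (++shift_count == SHIFT_PRESSES_TO_TOGGLE) {
            stickykeys_on = !stickykeys_on;
            shift_count = 0;
        }
    }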

Eric: There is one more piece to the puzzle, which is that the COSE effort is trying to work with the Motif 2.0 effort to make sure that, as this group of companies, which includes HP, IBM, and SUN, comes out with the Common Desktop Environment, it doesn't diverge so much that CDE and Motif 2.0 end up going in two separate directions.

Earl: They will probably converge. So what happened was OSF came back and said, well, this is news to us; this is great work, why don't you look at our style guide and tell us what we can do. I will now turn it over to Eric.

Eric: There really isn't that much to say after what Earl just said. I think most people here know what a style guide is, but if you are not familiar with them: whether it is Windows, Motif, Macintosh, or OS/2, all of these environments have style guides, and application developers look at the style guide as essentially a way of deciding how to make their applications have fairly consistent behavior and appearance. Otherwise, even though the toolkits supply you with a lot of built-in behavior and the look of the windows and so forth, you can put them together in so many different ways that you could, for example, take a platform that is famous for being easy to use and make 30 different Macintosh applications that are all very different, with the menus very different and the behavior very different, and so on. A lot of the behavior you see in applications, whether it is in Windows or the Macintosh or Motif, is behavior that is really only specified on paper. It is not even in the toolkit; if somebody didn't look in one of the style guides to do it, it wouldn't be there. That actually turns out to be more true for Motif than for most of the other toolkits. Briefly, I will tell you what our proposal was and what we did. We sent OSF a chapter with an accessibility overview, or design principles, for designing applications for accessibility. This was aimed at any software developer anywhere; it might be somebody doing in-house applications, someone who has never thought about access at all. We just wanted to make sure they would think about things like: don't hard code your font sizes; don't provide mouse-only access to some function that is not also accessible from the keyboard. So there is a chapter discussing these types of guidelines. We also proposed an icon identifying access-relevant material throughout the Motif 2.0 style guide, which is about 700 pages of material. That is a lot for somebody to wade through, and if someone wanted to make sure their application was accessible, prior to this effort it would have been very hard to look in there and see which guidelines they would have to follow and which ones they wouldn't. So we proposed "marking" the guidelines that were particularly important to accessibility and that we discussed in the chapter. Our proposal was accepted, and what we've done in just a few weeks is go through the entire Motif style guide. We found areas where there were guidelines that were important to making an application accessible but that weren't required by Motif 2.0. For example, you could make an application that would be Motif compliant, but it didn't have to follow a particular guideline for keyboard access, or have fonts that you could resize, or whatever. So we asked that some of those guidelines that were relevant to access, and previously only recommended, also become required. In other places where there weren't guidelines, we suggested new ones, and that's pretty much what we did. As for where we are right now: this isn't something that Earl and I did alone. We had very little time before we submitted the material to OSF, but we had a group of DACX members, including folks from IBM, Trace, Georgia Tech, and elsewhere, assist us in compiling the material.
Right now there is a chapter with guidelines on making accessible applications, plus hundreds of pages of Motif guidelines, some of which are stamped with a little icon showing that they are relevant to accessibility. All of this material is being reviewed by OSF as part of their review process, and they plan on including these additions in the Motif 2.0 style guide. We are happy about this because a lot of people who never even thought about accessibility, namely anyone who develops a Motif application, are going to pick up the style guide and see "access"-related issues mentioned in there. They will see, for example, a series of guidelines; a typical guideline might be something like, "provide access to all functions of an object using equivalent but not necessarily identical mouse and keyboard techniques." They will see a little access icon, the wheelchair symbol, next to a guideline, and we have asked OSF that any guideline we so marked remain required. With some others, we asked OSF to make them required if they were not required already. So this will be the first style guide application developers use that discusses access and advises them on just which guidelines they should follow to make their applications more accessible. If anybody has any questions, please feel free to ask.
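As a hypothetical illustration of two of the guidelines Eric mentions, the sketch below shows plain Motif/Xt code (it is not taken from the style guide itself): the font is left to the resource database instead of being hard coded, so a low-vision user can override it, and the single activate callback serves both the mouse and the keyboard, since a Motif push button can also be activated from the keyboard when it has traversal focus. The widget and function names are invented for the example.

    #include <Xm/Xm.h>
    #include <Xm/PushB.h>

    static void open_cb(Widget w, XtPointer client_data, XtPointer call_data)
    {
        /* ... perform the "Open" action ... */
    }

    /* Passed to XtVaAppInitialize() as last-resort defaults; the label
       and font can still be overridden from app-defaults or the user's
       resource files, e.g. "*openButton.fontList: <a larger font>".   */
    static String fallback_resources[] = {
        "*openButton.labelString: Open",
        NULL
    };

    Widget make_open_button(Widget parent)
    {
        Widget button = XtVaCreateManagedWidget(
            "openButton", xmPushButtonWidgetClass, parent,
            /* note: no hard-coded XmNfontList here */
            NULL);

        /* One callback reachable by mouse click or keyboard activation. */
        XtAddCallback(button, XmNactivateCallback, open_cb, NULL);
        return button;
    }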

Mark: Next, I'd like to ask Earl or Will to give us a quick update on the mobility access issues surrounding X Windows that DACX has been involved with.

Earl: In the AccessX client, we are dealing with small problems on the user interface. Nothing major. It is just about where it was when we last talked 6 months ago at Closing-the-Gap. Will can talk about the AccessX/XKB transition for X11R6.

Will: AccessX was jointly developed by Trace, Digital, and SUN on X11R5 as a sample implementation. For X11R6, we gave it to MIT to include in the X11R6 release as part of a bigger keyboard extension called XKB. The upshot of this is that X11R6 will have AccessX, and AccessX provides AccessDOS- and Easy Access-like functionality for X Windows.
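For orientation, here is a minimal sketch of what driving those features through the XKB client library might look like, assuming the XkbQueryExtension and XkbChangeEnabledControls calls and the XkbStickyKeysMask control bit in the R6 XKB interface; it is an illustration only, not the AccessX client, and the function name is hypothetical.

    #include <X11/Xlib.h>
    #include <X11/XKBlib.h>
    #include <stdio.h>

    int enable_stickykeys(Display *dpy)
    {
        int opcode, event_base, error_base;
        int major = XkbMajorVersion, minor = XkbMinorVersion;

        /* Make sure the server actually supports the XKB extension. */
        if (!XkbQueryExtension(dpy, &opcode, &event_base, &error_base,
                               &major, &minor)) {
            fprintf(stderr, "XKB not supported by this server\n");
            return 0;
        }

        /* Turn the StickyKeys control on for the core keyboard. */
        return XkbChangeEnabledControls(dpy, XkbUseCoreKbd,
                                        XkbStickyKeysMask, XkbStickyKeysMask);
    }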

Mark: Are there any other remaining issues that people would like to discuss tonight? That takes us through our agenda. The next DACX meeting is "tentatively" scheduled to coincide with the Closing-the-Gap Conference, in Minneapolis, in October, unless enough people suggest an alternative meeting site and time.

Listed below are the names of the DACX meeting attendees whose names were understandable on the tape, or who signed the DACX mailing list which was circulated throughout the meeting. My apologies to anyone whose name I missed or misspelled...Mark

Mark Novak, Trace Center
Gregg Vanderheiden, Trace Center
Beth Mynatt, Georgia Institute of Technology
Will Walker, DEC
Eric Bergman, SunSoft
Earl Johnson, SUN Microsystems
Mark Steer, Sears Roebuck
Nelson Hinman
Diego Castano, AT&T/NCR
Frank DePalermo, Ability Consulting
Jim Bounscheim(sp?)
Jim Caldwell, IBM in Austin
Brenda Saxon, IBM in Austin
Paul Fontaine, GSA
Neil Scott, Stanford University
John Steger, IBM in New York
Peter Korn, Berkeley Systems
Jolie Mason, Los Angeles Radio News Group
Greg Pike, IBM Research
Chris Grey, IBM
Bill Barry, Oregon State University
Jay Leavitt, Buffalo
Dave Andrews, Director of International Braille Technology Center of the Blind
Scott Hooker, Senior Information Planner at Federal Express
Greg Lowney, Microsoft Corp.
Jim Snee, Bell Atlantic
Jeff Pledger, Bell Atlantic
T. V. Raman, DEC
Mike Paciello, DEC
Ben Drees, Berkeley Systems
Marc Stiehr, Sears Merchandise Group
Leedy Day, DEC

