The Foggy Third Party Screen Reader Issue


Last week, I published a story here highly critical of NFBCS, NFB and Curtis Chong as leaders in technology related to blindness. The piece, “Accessibility And NFBCS,” described a number of incredibly important issues in technological accessibility for people with vision impairment in which the largest advocacy organization in the world of blindness remains absent and asked how they can be effective leaders if they ignore the most important events of the day.

The article also discussed the question of whether or not it would have been better if Microsoft had made its own “end to end” screen reader. I believe that, as Apple provides on iOS and Macintosh and Google does on Android, every OS should include a fully functional screen reader shipping out-of-the-box. Sighted people don’t need to pay extra for the graphical UI they use; blind people should not need to pay extra for the UI we use either.

In last week’s article, I discussed the NFB role in pressuring Microsoft into not doing its own screen reader, favoring instead the high-priced, third party solutions from Freedom Scientific, GW Micro, Dolphin and other companies. Last week’s article was specifically about NFBCS and Curtis Chong’s writings in Braille Monitor. It, therefore, described the NFB role in the third party screen reader story with little context. In the early drafts of that piece, I did include much more historical context but those early drafts contained more than 6000 words, and the final version that I actually published still had more than 3600 and was “too long” for some of my readers.

After publishing the story last week, I spent a few hours talking on the phone with NFB insiders who, like me and the other sources I used for that article, were actually present for some of the meetings with Microsoft and were observers to this history. While I feel that the story I told last week about NFBCS and its role is true, I also think it’s important that I tell the rest of the story.

I try to publish here every Tuesday. In some weeks, I have a lot of time to do a lot of research and write fairly formal pieces. Some weeks, like this one, I’ve less time to devote to the blog and, therefore, will be telling this story largely from memory. Unlike some articles here, it will not contain a lot of links to outside sources and such because I just haven’t the time to do so this week.

Ted Henter’s Speech

In the late nineties, Ted Henter, founder of Henter-Joyce and the inventor of JAWS, took the stage at an NFB convention general assembly and made a speech detailing exactly why he felt that screen readers would always be best if developed by small companies dedicated entirely to access technology. One may believe that Ted said these things out of a purely cynical desire to protect the profits of his own company but, while this may be partially so, having worked for, talked to, hung out with and been friends with him for more than a decade now, I’m confident that Ted was speaking honestly to what he felt was a greater good.

When Ted made that speech, there were no fully functional screen readers built into operating systems. IBM had made two screen readers, Screen Reader/DOS and Screen Reader/2, but neither had ever gathered much popular appeal. Vocal-Eyes from GW Micro had been the most popular DOS screen reader among American users, followed by the DOS version of JAWS. When Windows came along, JAWS for Windows (JFW) and Window-Eyes would together dominate the market. Thus, when Ted made his speech, there were no examples of a fully functional built-in screen reader having been accepted broadly by the user community and, therefore, no evidence that an OS vendor could make a tool that our community would actually enjoy using.

The Others In The Room

Starting around 1995, MS held meetings on its campus up in Redmond to which as many accessibility oriented stakeholders as possible were invited. This, of course, included the screen reader vendors, advocacy organizations including NFB, ACB and AFB, and other notables from the world of technology and disability. As I state in the introduction, I am writing this from memory and the two NFB insiders to whom I spoke last week were also telling the story from their memories so, please realize, the following may be a bit foggy as that’s how human memory works.

As far as I can tell, everyone in the room at those meetings, the AT companies trying to protect our profits and the advocacy organizations speaking on behalf of their constituents, agreed that the third party screen reader system would provide the greatest access for the vast majority of users. I will contend that, then and for a number of years into this century, this model was probably the right path to take.

Life Before Accessibility API

If you are one of the many blind people who enjoy using an iOS device from Apple (iPhone, iPad, iPod Touch), you are benefitting from Apple’s very well designed accessibility API and compliance with it in the apps you use. In the late nineties, though, there were no examples of a functional API driven screen reader anywhere.

The Big Hack

In the days before modern accessibility API had been invented, Windows screen readers used a technique called “driver hooking.” In brief, a part of JAWS, HAL, Window-Eyes and the others pretended it was a video card driver and gathered its data in what we called an “off screen model” (OSM). The OSM was a database of information sorted primarily on where things appear on the screen. Using some other techniques, the screen readers knew which window, and hence which application, they were in and, along with painstakingly developed heuristics for each program they hoped to support, they would then speak and/or braille the information for their users.

A little example of how this worked, one I can recall off the top of my head, is how JAWS tracked focus in Excel. To “know” which cell a user had in focus, the JAWS Excel support would “see” that the software had issued a series of “LineTo” graphical drawing commands to the video driver. JAWS had no actual idea of what was going on in Excel, so it had to jump through heuristic hoops to do something as simple as tracking focus.
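To make the OSM idea concrete, here is a toy sketch in Python. This is purely illustrative, not actual JAWS or Window-Eyes code; the class and method names are my own invention. The point it demonstrates is the core trick: a database of text fragments keyed by screen position, assembled from intercepted drawing calls, which the screen reader then sorts back into readable lines.

```python
class OffScreenModel:
    """A toy off-screen model: a database of screen text keyed by position."""

    def __init__(self):
        self.items = []  # (y, x, text) tuples captured from intercepted drawing calls

    def text_out(self, x, y, text):
        # Called whenever the pretend "video driver" sees text being drawn.
        # Store row first so sorting groups fragments by screen line.
        self.items.append((y, x, text))

    def line_at(self, y):
        # Rebuild one screen row by sorting captured fragments left to right.
        return " ".join(t for row, x, t in sorted(self.items) if row == y)


# The application draws three menu items on row 40, in arbitrary order:
osm = OffScreenModel()
osm.text_out(200, 40, "Save")
osm.text_out(10, 40, "File")
osm.text_out(100, 40, "Open")
print(osm.line_at(40))  # -> "File Open Save"
```

The fragility the article describes follows directly from this design: the model only knows what was drawn and where, never what it means, so every inference about focus or structure has to be reverse-engineered per application.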

Needless to say, a system built on having human programmers spend so much time figuring out the strange details of how a rectangle is drawn on the screen to track focus had severe limitations. Anytime an application made the slightest change, it would likely break the screen reader.

The OSM and screen scraping techniques also introduced major stability problems. Hooking the video driver was such a non-standard way of doing things that neither Windows nor the applications running on it were aware that the screen reader was pretending to be a device driver, and the applications, for their own purposes, often used non-standard techniques of their own to put information on the screen. An application would try to do its own optimizations and would, as a result, cause a screen reader to crash.

MSAA 1.0

Microsoft was the first OS vendor to attempt to build an accessibility API. It was called Microsoft Active Accessibility (MSAA) and, to be frank, it was little more than a demo of things to come. MSAA, even in its 2.0 version, did not provide a screen reader with enough information to deliver a usable solution. Thus, all screen readers using MSAA back then also had to include non-standard techniques to provide information properly to their users.

This fact alone is a large part of what made it virtually impossible for MS to make its own screen reader; until the advent of UIA (discussed later in this piece), it was impossible to use anything resembling standard techniques to gather the necessary information.

The VBA Hack

At Henter-Joyce, Glen Gordon, still the top developer on JAWS, realized that the automation API designed for Visual Basic programmers to extend Microsoft Office and other applications could also be used to get data into a screen reader. The JAWS team took Glen’s idea and ran with it. This was what the JAWS developers used to do all of the amazing things we did in MS Office back then.

The good thing about the VBA approach was that we could gather very specific information in the context of the application being used. We no longer had to follow graphics commands to determine focus; we could get the precise coordinates by simply asking Excel. This approach had two major downsides: it required that each application supported this way have hand coded scripts written for it and, hence, it required that the screen reader have a scripting facility, something that, back then, Window-Eyes didn’t have and proudly boasted that it didn’t even want.
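The difference between the two approaches can be sketched in a few lines of Python. This is a hypothetical illustration, not actual JAWS script code: the FakeExcel stub below stands in for Excel's real automation object model (which does expose the active cell through properties along these lines), so that the sketch is self-contained and runnable without Excel installed.

```python
class FakeExcel:
    """A stand-in for Excel's automation object model, for illustration only."""

    def __init__(self):
        # Pretend the user has cell B2 focused, containing a formatted number.
        self._col, self._row, self._text = "B", 2, "1,234.56"

    @property
    def ActiveCell(self):
        # Real automation would return a rich object; a dict suffices here.
        return {"Address": f"${self._col}${self._row}", "Text": self._text}


def speak_focus(app):
    # Instead of deducing focus from LineTo drawing calls, just ask the app.
    cell = app.ActiveCell
    return f"{cell['Address']}: {cell['Text']}"


excel = FakeExcel()
print(speak_focus(excel))  # -> "$B$2: 1,234.56"
```

The contrast with the off-screen model is the whole story: one line of direct questioning replaces a pile of per-application drawing-call heuristics, but only for applications that expose such an interface and only if someone writes a script for each one.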

The Window-Eyes Versus JAWS Approach

At this point in our story, I must acknowledge that GW Micro, in what felt to me like business suicide, chose to embrace the MSAA path while we, at Freedom Scientific, continued to reject it. History shows us that GW Micro was on the right side of the technological discussion then but, in my mind, they arrived at that conclusion too early.

While GW Micro would provide no-cost consulting advice to billion dollar corporations on how to get MSAA implemented in their software, we, at FS, continued down the path of non-standard solutions. In many ways, this is what put JAWS on top of the heap: it could do things no other screen reader could and, as a result, it provided the best experience for its users.

The Hole In The JAWS Approach

In order to provide the best possible collection of data to its users, JAWS employed very non-standard techniques, mixing its OSM with the VBA support and with MSAA where available. The outcomes were terrific for the end users but, inside FS, continuing to support each application using custom code (scripts and internal) was an expensive proposition. In the world of generic accessibility APIs, a screen reader like VoiceOver gets all of its information from said interface; hence, when an app changes, no one needs to go in and change the code in the screen reader. This was not true of JAWS then and, to a lesser extent, remains untrue now.

As I’ve written here many times before, by 2003, JAWS had reached a monopoly market share at around 85% of all new product sales. FS, therefore, had no incentive to continue investing in JAWS as it had won the race. Thus, with JAWS receiving less and less funding annually, these custom solutions started to deteriorate.

The First Real Accessibility API

While MSAA was a pretty poor bit of technology, Microsoft should be recognized for even giving it a try. As no accessibility APIs existed before MSAA, they had no point of reference, and MSAA stood as a solid prototype for things to come.

In my opinion, the first truly usable accessibility API was the one led by Peter Korn at Sun Microsystems. It not only provided specific information about a control but also had the facility to provide its context, hence enabling a screen reader to paint a more complete picture for its users. In effect, the Gnome Accessibility API was the first of its breed.
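To illustrate what "context" buys a screen reader, here is a minimal, hypothetical sketch. The node structure and names below are mine, not those of the actual Gnome Accessibility API; the point is only that when each control knows its place in a tree, the screen reader can announce "OK button, Save dialog" rather than a bare "OK button."

```python
class AccessibleNode:
    """A toy accessibility-tree node: a role, a name and a link to its parent."""

    def __init__(self, role, name, parent=None):
        self.role, self.name, self.parent = role, name, parent

    def announce(self):
        # Walk up the tree so the control is announced with its context.
        parts = [f"{self.name} {self.role}"]
        node = self.parent
        while node is not None:
            parts.append(f"{node.name} {node.role}")
            node = node.parent
        return ", ".join(parts)


dialog = AccessibleNode("dialog", "Save")
button = AccessibleNode("button", "OK", parent=dialog)
print(button.announce())  # -> "OK button, Save dialog"
```

An API that surfaces only the focused control forces the screen reader back into guesswork about where that control lives; exposing the tree is what made API-driven screen readers like VoiceOver and Orca practical.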

Apple would follow with its own API, and Microsoft would craft something called User Interface Automation (UIA) that serves as both an accessibility API and a test framework for software. Today, with comprehensive accessibility APIs on all major OS, it’s reasonable to expect a fully featured screen reader to be included with all of them.


On iOS, OS X and Gnome, there’s a single accessibility API on each. On Windows, the OS with the largest number of users, blind or otherwise, there is a second one called IAccessible2 (iA2). Some experts would argue that iA2 is the best and most important of the various accessibility APIs but, in all honesty, I’m not expert enough to describe either its benefits or any pitfalls within it. The Mozilla Foundation chose iA2 over UIA or MSAA in the Windows versions of its applications, and it’s iA2 that you’re enjoying when you use Firefox with NVDA.

When iA2 was developed, it was intended to be a cross platform accessibility API. The goal was to permit application developers who build software on multiple OS to write their accessibility code once and be able to reuse it on other systems. Sadly, while iA2 is “owned” by the Linux Foundation, it has never been implemented on any system other than Windows, something I think is a shame.

Another huge question is whether or not Microsoft will support iA2 in its “end to end” Narrator. They may elect to only support UIA/MSAA and, therefore, leave FireFox and the other iA2 enabled applications out of the sphere of its support. This could be another reason third party screen readers like NVDA may stick around into the future.


The only conclusion I can draw about the entire third party screen reader debate is that the story is much more foggy than one might think. NFB did insist on promoting the third party, mom-and-pop company solutions but they didn’t do so alone. HJ/FS, GW Micro, AFB, ACB and everyone else in those meetings except MS agreed that this was the best path forward. History and Apple have demonstrated that it is, indeed, possible to deliver a high quality, no cost screen reader along with the OS; holding that high-priced, proprietary third party screen readers are a favorable solution in the 21st century is purely anachronistic thinking.

I do not mean to suggest that third party screen readers will, or even should, disappear entirely; they will likely fulfill an important set of requirements for a lot of people. Third party screen readers might even become the luxury products of the Windows world, supporting applications too old to have included MSAA/UIA support or providing a user experience different from, and preferred by some to, the generic Narrator. Of course, I’ve been predicting the demise of the commercial third party screen reader since I wrote an article slamming JAWS 7 on my old blog, so I’m likely not the best prognosticator of things to come.


I would like to express my thanks to the loyal NFB members with whom I talked on the phone and exchanged emails last week. I appreciate the insight you gave me, a different perspective on the various meetings up in Redmond and the help you provided in writing this article. I appreciate honest and sincere dialogue and also thank the NFB faithful and everyone else who posted comments on last week’s article. In my opinion, the more discussion we can have about this community within this community, the better.

Fortunately for all involved, technology progresses. At the AFB Leadership conference in Phoenix last week, Rob Sinclair announced that Narrator would, in Windows 10, be an “end to end” screen reader. Only the future can tell us how it will work out.

Accessibility and NFBCS: More Questions Than Results


Last week, Curtis Chong, the seemingly permanent president of the NFB in Computer Science (NFBCS), published an article in Braille Monitor highly critical of accessibility at Microsoft, especially of the accessibility of its Windows platform. Chong presents a number of indisputable facts with which I agree entirely: there are many things regarding accessibility for people with vision impairment that Microsoft does very poorly. Chong is also correct that accessibility across the Microsoft catalogue is highly inconsistent, with some programs providing excellent coverage and others providing none at all. I applaud Curtis for shedding light on the problems he describes and hope Microsoft will take action to remedy them as quickly as possible.

I also felt that Chong’s article was misleading, that it contained statements that were either inaccurate or unverifiable and, worst of all, that it lacked detail in its historical context, an elephant-sized hole in the story.

Over the past month or so, I’ve been exploring the concept of leadership in the blindness and technology space. I’ve talked about the changing leadership paradigm in, “Anarchy, Leadership and NVDA,” I’ve discussed leadership in innovation through traditional paths in my CSUN 2015 report and, in my article about Be My Eyes, I discussed another path a team had taken to lead a project of significant value. This article will, in the context of Chong’s piece, explore leadership in technology from NFB and how it, in my opinion, has been a failure for decades.

This article was sourced through public records and through private conversations and communications with two former and two current Microsoft employees and a number of others who had witnessed some or all of the events described herein. In my role at Freedom Scientific, I was also party to and present at some of the discussions summarized below. Thus, my sources are not “anonymous” but, rather, “unnamed.” As some of these statements are controversial, I will not reveal my sources as they may face retribution. They can, if they choose, identify themselves in the comments section. It is very likely that at least one section in this piece will be broken out into a separate and more detailed account of that part of the history, as I think you will find it very interesting.

What Curtis Got Right

First, I’d like to recognize that Curtis gets a lot correct in his piece. I’ll confirm that most of the facts I did check are true. The statement he makes in his opening paragraph, “For those of us who are blind, access to Microsoft products is not just something that we would like to have. Rather, full non-visual access to Microsoft products is essential if we are to have any hope of being able to compete in today’s technology-driven labor market, let alone maintain parity with our sighted neighbors at home,” could not possibly be more true.

Chong’s article lists a number of MS products that are mostly inaccessible. Chong’s conclusion, that MS still has a lot of work ahead of it to ensure true and universal accessibility, is also true.

The Elephant Sized Hole In The Story

If you haven’t already, please stop reading here and read Chong’s piece immediately.

Now that you’re back, ask yourself: what piece of technology fundamentally important to users with vision impairment does Curtis not mention in his article? If Apple has VoiceOver, a fully featured screen reader, Google has TalkBack, a rough attempt at the same, and the Gnome Foundation has Orca, why does Microsoft have no fully featured screen reader of its own? Curtis may have simply been careless in his reporting; he may have been so focused on application accessibility that he simply forgot to include the missing screen reader in his analysis. Or, as I contend, Curtis left out this detail intentionally.

The elephant sized hole in the story is that NFB has been on the wrong side of the leadership argument in their interactions with Microsoft. NFB’s positions have prevented MS from making its own screen reader and, as we will see later, its continued support for third party commercial AT is part of the reason why Microsoft still has accessibility problems in its technology.

The pressure NFB put on Microsoft not to build its own screen reader, preferring instead to trust that third party screen readers would provide access to the Windows operating system, has been a failure. It is for this reason that, when one launches Narrator, the screen access utility from Microsoft, it tells the user that it is not a fully functional tool and is only useful as a temporary solution until the user installs a real one.

The economic realities of business in the 21st century meant that “mom and pop” companies like HJ and Blazie Engineering would find their way into a merger/acquisition deal that would put ruthless venture capitalists in charge of JAWS, still the most popular screen reader. The same economic realities have seen the Window-Eyes share, once equal to that of JAWS, drop to single digits.

Perhaps the most notable economic reality that the NFB approach ignored was that, because a screen reader is necessarily a niche product, the only way an independent company can make one and be profitable is by charging a great deal of money for each license. The NFB tack of working against an MS screen reader cost blind people, their employers and their educators millions of dollars that could have been spent otherwise had a no cost one existed. NVDA, led by volunteers, saw the inequity of blind people needing to spend hundreds to thousands of dollars, and its leaders took it on themselves to solve this problem while NFB ignored it entirely.

The Third Party Screen Reader Hypocrisy

As we’ve seen, NFB insisted that MS not do its own screen reader. NFB would later insist that Apple and Google build their own screen readers to provide out-of-the-box accessibility at no extra cost to consumers. I wonder if NFB learned from the chaos on the Windows platform and realized that it would be better if a screen reader were built into the operating system. Chong seems to praise the Apple experience which, regarding out-of-the-box accessibility, is the best available.

In his piece, Curtis suggests that application developers at Microsoft should be tasked with testing against screen readers. This actually makes sense for products from Apple and Google, as every developer at those companies has a screen reader they can launch easily and, if tasked with testing for accessibility in their project plans, they need only test against a single user agent. Chong seems to suggest that developers at Microsoft, instead of testing for compliance with the accessibility API, also perform functional tests against third party software. I think it’s absurd to expect developers at MS to test against software over which they have no control. If they were tasked with testing with Narrator, it would make sense, as MS controls both the application and the AT; suggesting that developers and quality assurance professionals at MS learn JAWS and NVDA (the only Windows screen readers with enough market presence to warrant testing) and test against them is simply unreasonable.

Is NFBCS An Effective Advocate?

Chong writes, “Year after year, the National Federation of the Blind and the Microsoft Accessibility Team engage in active and ongoing communication, and year after year, we have communicated our frustrations and concerns to this team.” To which I ask, “If Curtis and NFB have been working with the MS ATG for more than two decades and, as Chong expresses in his article, the accessibility job remains mostly incomplete, are Curtis Chong, NFBCS and NFB itself actually effective advocates for this community?”

The Notable NFB Absences

I contend that Curtis and NFB have done a poor job of understanding the technology and, as a result, are ineffective advocates in this space. I didn’t quite know how I could show that an advocacy organization was “ineffective,” as proving a negative is a logical impossibility, so, instead, I thought I might list a number of very important areas in accessibility for people with vision impairment in which, as far as Google can tell us, NFB has not participated. To this end, I used the search engine to look for terms important in technological accessibility along with “National Federation Of The Blind” and/or “NFB” in the search terms.

  • WCAG 2.0 is the single most important set of guidelines for Internet accessibility for all people with disabilities, including us blind people. I googled “+NFB WCAG 2.0” and found that Google gave us seven results, zero of which were on an NFB related site. I then googled using “National Federation Of The Blind” in place of “NFB” and did find one link to an NFB site in the top ten results; it was the consent decree in an NFB lawsuit requiring the defendant to follow WCAG 2.0. I tried a few more search terms and found identical results; it is obvious that no one from NFB, not even Curtis, the president of the computer science subgroup, participated in the development of the standard and that there isn’t a single article on the NFB web site explaining this somewhat complex and definitely esoteric set of guidelines. Searching on these terms without including NFB provides one with a panoply of tutorials and other useful information from the entire world of accessibility but, sadly, none of it comes from NFB.

  • The Web Accessibility Initiative (WAI) has another standard called ARIA. Web developers use ARIA to include the semantics that a screen reader can use to tell users about complex web applications. In brief, if you use a complicated but also accessible web application like Microsoft Office Online (quite accessible albeit a bit sluggish with NVDA and Firefox), you are enjoying the work the developers did using ARIA. So, I googled “+NFB WAI ARIA” and, guess what? I found zero entries on any NFB sites. When I spelled out the name of the organization, I found exactly one search result on an NFB page and, once again, it is about legal frameworks and not technology. Searching without the “+NFB” provides one with another large list of tutorials, analysis and explanatory information about ARIA from everywhere in the world of accessibility, but not from NFB.

  • Microsoft’s accessibility API is called User Interface Automation. If Curtis and NFB are so concerned about the accessibility of Windows applications, surely they must provide readers of the NFB web site with information on how to ensure their applications comply with the API, right? Wrong. If you google “+NFB User Interface Automation” you will probably, as I did, get zero results and two advertisements for contractors who do work using UIA. No matter how we search, we can’t find anything from NFB on this important piece of technology.

  • The 21st Century Video and Communication Accessibility Act of 2010 (CVAA) is the most important bit of new legislation regarding disability and technology to come along in quite some time. As with all such laws, the agency charged with enforcement must hold a public comment period to determine how it should proceed with the wishes of Congress. During the CVAA public comment period, Pratik Patel (then Curtis’ approximate equivalent at ACB), on behalf of his advocacy organization, filed hundreds of pages in public comments. The combination of NFB, NFBCS and Curtis Chong filed exactly zero. I’m neither a member nor a promoter of ACB but, on this very important task in ensuring that CVAA will be enforced, Pratik and ACB took on a leadership role. I’ll assume that the reason NFB, NFBCS and Curtis made no comments was that they were overwhelmingly impressed by Pratik’s genius in this matter and were happy to have ACB speak for all blind people.

  • The Section 508 Refresh was passed by Congress and opened for public comment. Again, Pratik Patel and the ACB wrote up a ton of documentation and filed it in this important matter. Heck, on behalf of the Free Software Foundation (not a group known for its stellar record on accessibility), I filed a few pages of comment on 508 Refresh. The beauty of public comment is that it’s public so anyone can search the records and discover that NFB filed nothing on this matter. A friend who had attended the public hearings told me that NFB people did attend those sessions but that their only contributions could be summarized as, “blind people need to be involved in the process,” which was already true when they said it and, “NFB speaks for blind people,” which I contend is false as they don’t speak for me. I said the NFB deferred to Pratik on CVAA so I’ll suggest that Curtis and the NFB must have found my comments so brilliant, so illustrative that they chose not to do any of their own and let me speak for the community.

I could go on but, at this stage, I think you get the point. NFB and NFBCS, under Curtis Chong’s leadership, have steadfastly refused to participate in the most important developments in access technology. If, indeed, NFBCS, NFB and Curtis Chong have not contributed to the development, promotion and explanation of these and other extraordinarily important areas in accessibility, they are irrelevant as leaders. It’s easy to write articles like Curtis’, it’s easy to complain, to bitch and moan, but it takes actual work to be part of the solution, work that Curtis and NFBCS have thus far refused to join in doing.

Gonz Gets Pedantic

I try to do my best to post articles that are as factual as possible. I may draw a controversial conclusion and use fairly aggressive prose when I express an opinion but, whenever a factual error is presented to me in an article I had published previously, I add a correction to the piece.

This blog is different from Curtis Chong’s articles in Braille Monitor (BM) for a lot of other reasons as well. First, I do not claim to be representing anyone other than myself and those who have given me explicit permission to speak for them. Curtis, in his role, claims to speak for “the blind” which, arguably, would include me. Second, Curtis, as is obvious by the introductory section in his article, claims to speak with authority, apparently derived from talking to other blind people; my own blog profile states that I am a loudmouth, crackpot stoner, I don’t claim any authority or expertise, I let my words speak for themselves and allow the readers to draw their own conclusions. Third, I allow you to post comments on this blog and no NFB publications, including Curtis’ articles in BM, permit any public discourse. If Curtis allowed for comments, we could have had this conversation as dialogue and the rest of you could have contributed as well but NFB sorts speak from “on high” and discourage interaction.

I, therefore, feel it is reasonable to hold Curtis Chong to a higher journalistic standard than I do even myself. He claims to be speaking for all of us and I, therefore, think he should be more careful with the way he states things. To wit:

  • Curtis writes, “Today only a small percentage of Microsoft products are regarded by the blind as comfortable and intuitive to use…” and, as far as I know, this may or may not be true; I can find nothing to verify it either way. I would like to know Curtis’ source for the things he states as fact, namely, the figure behind “a small percentage.” I would also like to know Curtis’ definition of “the blind” in this sentence, as I cannot find a supporting document in my googling.

  • Curtis, in the same section, continues by stating, “well over 80 percent of Microsoft products remain inaccessible to non-visual users.” I googled using as many terms as I could and could not find this 80% number published anywhere. I’m also curious as to the definition of “accessible” in this context. Did Curtis or others around NFB actually test every program from MS and, if so, where did they publish their results? I dislike magic numbers included in prose and, when they are used to discredit a corporation’s efforts, I believe such numbers should not be used without a verifiable source, as doing so is just ad hominem.

  • Chong writes, “There does not appear to be any user-experience research being conducted by Microsoft into improving efficiency for keyboard-only users, including the blind.” First, blind people also use crazy, wild, newfangled things like touch screens and track pads as well as keyboards these days. Second, many years ago, NFB itself published a valuable bit of UX research that became the command set still used on most braille keyboard based devices; as far as I can tell, with Curtis at the helm, NFBCS and NFB have not published any UX research in this area since. Researching user experience for keyboard-only users would provide an excellent resource not only to Microsoft but also to Apple, Google and every other company that hopes to include effective keyboard control of its products. Perhaps Curtis should be asking why it has been so many years since NFB published actionable user experience research.

  • Curtis includes an oddly rambling paragraph on MSAA and UIA, the accessibility APIs in Windows. He writes, “the screen-access software vendors (very small companies in relation to Microsoft) had to devote considerable resources to make this happen. It would be better if these relatively small companies could spend more time and effort coming up with innovations that improve the efficiency and productivity of blind users of their software.” Or, one might say, if NFB hadn’t pressured MS into not making its own screen reader, we might actually have a company that can afford to keep up with OS releases the way Apple can with its VoiceOver software. Why, decades later, Chong still insists that the broken system of high-priced third party screen readers should continue is baffling.

  • In the same section on the Windows accessibility APIs, Chong also neglects to state that FS actively opposed using standard APIs at all. I am part of the guilty party in this as, when I worked at FS and for a few years afterward, I argued that an API solution could never provide the kind of accessibility that we could deliver with JAWS using proprietary techniques. We argued vociferously that, rather than a generic API, applications should expose a VB-like programming interface so we, the third party screen reader developers, could craft custom solutions for each separate program we cared to support. An API solution was fine for simple applications but something fancy, Excel for instance, would always do better if we could write highly customized scripts for the UX. When HJ became FS, we stopped investing as heavily in JAWS development and it was our lack of investment in further support using these non-standard techniques that resulted in deteriorating application accessibility, not anything Microsoft did. It was FS that rejected MSAA approaches and chose its own non-standard route to accessibility. You can’t blame MS for deteriorating accessibility in the third party screen readers, which are entirely beyond its control. If you want to blame anyone, blame me; I fought hard against API back then, as did Glen Gordon and Eric Damery. MS was right, we were wrong.

  • Curtis writes, “For years Microsoft has left the blind with no access to Windows phones.” This is not true of phones based on Windows Phone 8. I haven’t tried a Windows phone myself and reports from the field say it is a bit sluggish but, if the word “accessible” can be applied to Android as Chong does, it should also be applied to Windows Phone 8, at least for the more than 90% of blind technology users who prefer a synthesized voice interface. Windows Phone does not yet support refreshable braille devices.

  • About the MS BitLocker software, Curtis writes, “A blind employee who is required to use a computer with Microsoft BitLocker installed will be unable to turn the computer on and get it running—not to mention use it.” This is strange coming from an advocacy organization that opposes accessible money, beeping traffic lights and other structural bits of accessibility. The fact is, BitLocker is not “accessible” under any known definition of the word but, as I know a whole lot of blind people whose jobs require using the software daily, suggesting that it is impossible for a blind person to use is misleading. I asked a friend how she used it and she told me, “I turn my laptop on, I wait a little while, I type in my PIN and hit ENTER, my computer starts.” Yes, Apple has made its similar technology accessible and MS should as well but, as many blind people can work around it, it isn’t the functional impossibility Chong suggests.

I believe that a “leader” in this space, someone whose statements claim he speaks for “the blind,” should be much more careful in his publications. I’m a crackpot blogger; Curtis is publishing in an official organ of an advocacy organization that claims to represent our community. I think he should be held to a much higher journalistic standard and, as I illustrate above, Chong’s article is filled with problems and outright factual errors.


The community of blind computer users needs effective advocacy but NFB, NFBCS and Curtis Chong have demonstrated poor judgement on technological, economic, political and structural issues of the gravest importance to this community. They have not participated in the most important discussions regarding standards, guidelines, APIs, user experience or anything else in this space. NFB seems to do nothing to promote use of accessibility development tools or standards compliance on any platform, including Windows, and provides none of the useful explanatory materials a developer hoping to make his work accessible might search for. As far as I can tell, regarding advocacy on technological matters, NFB, NFBCS and Curtis Chong have been present but irrelevant for a very long time now.

To clarify, this article is specifically about leadership and advocacy and discusses Curtis Chong, NFBCS and NFB as spokespeople for our community. NFB does many other things, including having funded the development of KNFB Reader, a really terrific iOS app that I enjoy using frequently. Unfortunately, KNFB Reader is a rare exception in a very large organization.

Be My Eyes And Asking For Help


Often, I receive queries from readers of this blog asking if I will write about a particular subject in blindness and technology. These are usually good ideas for stories but, given the schedule I keep, they would take far more of my time to research, write and edit than I have available in my life at this point. Sometimes, though, a number of people all suggest that I write about the same subject and this article is the result of a number of those requests added to my own fascination with the Be My Eyes phenomenon.

As with the successful NVDA Remote Access fundraising campaign I wrote about here a couple of weeks back, I’m left with few hard and fast conclusions about Be My Eyes, its unprecedented growth in popularity, its penetration into mainstream media or its long term impact. What I am certain of, though, is that Be My Eyes accomplished something we’ve never before witnessed in the world of blindness and technology and that I am tremendously enthusiastic about its success so far.

What Is “Be My Eyes?”

If you read this blog, you probably follow the world of technology and vision impairment pretty closely so you are probably already quite aware of Be My Eyes (BME). If, however, you’ve been in a Rip Van Winkle style coma for the past few months, you may have missed what has been the 2015 access technology story of the year so far. Be My Eyes is an iOS app that, using a video chat system, connects blind people with sighted volunteers who can in turn lend them their ability to see.

Users download the Be My Eyes software from the App Store for no cost. When they launch the app for the first time, they are asked to register as either a blind person who may need assistance or as a sighted volunteer willing to provide such help. As I’m a blind user, when I launch the app, I’m presented with a very simple interface with only two buttons, one for “Settings” and the important one, “Connect To First Available Helper.” When one taps on the “connect” button, a little tune plays until one of the sighted volunteers accepts the request for help. Once connected, the blind user can point the camera on the iOS device at the object with which they need sighted assistance and the two parties can talk until they are satisfied they’ve solved the problem the blind person was experiencing.

The BME Phenomenon

Be My Eyes is, quite obviously, a very useful tool as it provides near instant access to a volunteer willing to lend their vision to a situation in which a blind person needs some help. What BME is definitely not, though, is a tremendously innovative bit of engineering. For all intents and purposes, BME is a video chat program with the added feature of automatically connecting a person requesting assistance with a volunteer willing to help at that moment. The exciting aspects of BME aren’t wizardry in software engineering but, rather, its mastery of social engineering.
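For readers who like to think in code, the matching layer BME adds on top of ordinary video chat can be sketched in a few lines. To be clear, this is purely an illustrative guess at the logic, not BME's actual implementation; the class, method names and IDs are all my own invention.

```python
import queue


class HelperPool:
    """Hypothetical sketch of BME-style matching: pair the next
    help request with the first available sighted volunteer."""

    def __init__(self):
        # Volunteers who are online and idle, served first-come first-served.
        self.available = queue.Queue()

    def volunteer_ready(self, volunteer_id):
        # A sighted volunteer signals they are willing to take a call.
        self.available.put(volunteer_id)

    def request_help(self, blind_user_id):
        # Block until a volunteer is free, then return the pairing the
        # app would use to open a two-way video session.
        helper_id = self.available.get()
        return (blind_user_id, helper_id)


pool = HelperPool()
pool.volunteer_ready("helper-42")
session = pool.request_help("user-7")
print(session)  # ('user-7', 'helper-42')
```

The real service obviously layers video streaming, time zones, language matching and retries on top, but the core social mechanism really is this simple, which is exactly the point.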

If you launch Be My Eyes right now, you will hear that it has 192K sighted volunteers, 17.3K registered blind users and that 63.9K people have been helped so far. BME launched in January and, while actual market figures are impossible to get for other blindness related software products, I’m willing to wager that no technology product has ever reached as many blind people in as short a span of time. I’m also 100% confident that no technology designed for assisting people with disabilities has received as much mainstream media attention in as compact a period either.

What BME Did

When a new technology product designed for use by blind people comes out, the hardest problem a publicist has communicating with mainstream media is explaining what the thing actually does. The number of times I’ve had to explain that a screen reader is an output agent and not voice recognition grows with nearly every conversation I have with a sighted person about the primary means by which a blind person interacts with a computer. We all use screen readers but few people outside the biz even know that such things exist.

Enter Be My Eyes. It does exactly one thing: it connects blind users with sighted helpers. For a blind person, its value is obvious: I tap a button, point the camera and I get sighted assistance. For the sighted volunteers, the value is also obvious: someone who can’t see needs to borrow a pair of eyes, and I can do that.

The Be My Eyes story is so obvious, the mainstream media could comfortably talk about it and they did so in droves. I think that BME may be the first blindness oriented program that combined tremendous value to users while also having a story that can be told easily enough for all to understand.

The BME Controversy

I am of the belief that everyone, blind or otherwise, lives in a society and that each of us has ways to contribute and times we will need help. Some people around the blindness conversation are more radical than I am about independence. In my view, if asking a sighted person for help will solve a problem more quickly than insisting on being fiercely independent, I’m going to ask for help. I often ask other pedestrians to identify places while I’m walking in a city; BME lets me ask for help when I’m alone and need sighted assistance in a hurry.

A few weeks back, though, I heard a sighted person on NPR pondering Be My Eyes with the question, “Would I be enabling or somehow taking away the agency of a blind person by helping them this way?” I suppose in the marketplace of ideas, such questions should be asked and such issues should be discussed. I don’t have answers to the hard problems in critical disability theory, I’m neither a scholar nor a philosopher. As a blind person who has BME installed on his phone, though, I’m happy to have this tool available to me and, having used it three times, I can say that it has been useful when I had no alternatives for getting something accomplished.


Be My Eyes did something incredible in the blindness space. The BME team created a useful tool but, more interestingly, created a social phenomenon previously non-existent in our world. The sheer simplicity of the BME app allowed for a simple story to be told in a manner that the global media could comprehend. Will there be a next BME? Will another project repeat the publicity storm of Be My Eyes? I don’t know; I’m just happy that this event has happened and that we have BME as a tool on our mobile devices.

CSUN 2015 Report: Traditional Leadership


Last week, I published an article here called “Anarchy, Leadership and NVDA” in which I described how both the NVDA screen reader and the recent NVDA Remote Access projects were able to find funding through non-traditional sources. I discussed the challenges such efforts were causing for the traditional access technology vendors and how, through a democratic and anarchistic system, blind people took responsibility for financing the technology we need and desire.

A few weeks ago, I attended the CSUN Conference on Disability and Technology in San Diego. There, a handful of the presentations I attended demonstrated technology and expressed ideas that cracked my affected cynicism and, for different reasons, impressed me greatly. Unlike NVDA and the crowdsourced free software projects I’ve discussed recently, these projects demonstrated leadership funded and developed through more traditional channels. So, while leadership is emerging from the mob, it’s important to recognize that at CSUN we witnessed a number of important and interesting developments from academia, the standards community and the corporate world. Sadly, none of the interesting developments came from the traditional access technology companies.

Last year, I wrote tens of thousands of words largely about out-of-the-box accessibility. It began with “Testing Android Accessibility: I Give Up” (the single most popular article I published in 2014), followed by a series of articles on the deplorable accessibility on that platform, then a series of articles on Apple’s deteriorating accessibility concluding with “The Macintosh User Experience” (coincidentally one of the least popular articles we published in 2014), which described how the out-of-the-box accessibility delivered by Apple just ain’t as good as it used to be. As I spent most of a year writing about Apple and Google and, to a lesser extent, Microsoft, I skipped their CSUN presentations and, instead, focused on topics, presenters and companies doing things I found interesting and innovative.

One observation I did make about CSUN in general this year, my first since 2012, was that a whole lot of the demos I attended and a lot of the hallway buzz I heard was about NVDA. In the past, virtually all presentations that used a screen reader used JAWS. At CSUN 2015, more than half of the presentations I heard demonstrated with NVDA and/or VoiceOver. I think this is another indication of the trend toward free and no cost screen readers but, remember, this is an anecdote based purely on my observations and your experience may have been different.


It was 4:20, a time I like to celebrate, on Friday afternoon, the final slot for CSUN presentations. It was the fourth or fifth presentation of the week on math accessibility but the room was filled with tired yet enthusiastic attendees as this was the math presentation of the week. For me, as we will see, it had personal implications far in excess of the terrific technology to be presented.

Sina Bahram, PhD candidate at NC State, president of Prime Access Consulting (PAC) and a close personal friend of mine since he was 19 years old took the stage with David MacDonald of CanAdapt Solutions and CB Averitt of Deque Systems.

I was there to hear Sina and his demo of MathPlayer with NVDA. I had been following this project for far longer than all but two people in the room, Neil Soiffer of Design Science and myself, could have known.

I first saw a super secret demo of the technology when Sina brought it to my Cambridge, MA home last August. I was notably impressed as he showed me how different fields in mathematics were spoken with different rules appropriate to their specific vocabulary as he navigated through what, for me at least, were pretty complicated equations. I wanted to write an article about it then but my “friend-DA” with Sina kept old Gonz’s mouth shut. In my home that afternoon, Sina also told me a story about MathPlayer’s difficulty getting Freedom Scientific to support it in JAWS. In brief, Sina and Neil met with a pair of FS executives at CSUN 2014 and were told that FS saw no business case for supporting math, an obvious lie as they would instead include their own proprietary and entirely inferior solution when they released JAWS 16 in September. At the same CSUN, Sina met with Mick Curran and Jamie Teh, the guys behind NVDA and, on his plane ride home to Australia, Jamie wrote the code and MathPlayer was demonstrable with the free solution.

Sina’s demo, while impressive, only showed the tip of the iceberg of this powerful new way for blind users of NVDA and, soon, Window-Eyes to be able to study math. Sometime in the next month or so, I will be posting an article here specifically about progress in mathematics for people who use screen readers that will include an in-depth description of MathPlayer with NVDA as well as a discussion of MathMLCloud from our friends at Diagram Center. As that article will contain specific details about these and perhaps some other technologies, it’s one that will take a lot more effort than a standard article here so, while it’s in progress, it’s going to take a while to get right.

In addition to enjoying watching a friend I’ve had for more than a decade give an impressive presentation, my personal connection to this project made me feel a bit emotional. When the panel completed the formal portion of the event, my hand was the first to be raised. I didn’t have a question but, rather, a statement. I cleared my throat and said, “As I was the first ever Freedom Scientific executive to have been forced to tell Neil Soiffer that we wouldn’t support his work in JAWS, I just want to thank Sina, Neil, Mick, Jamie and everyone else involved in this effort for ending what’s been more than a decade of personal shame.” That brought a round of applause and I felt so happy that, more than a decade after we had commissioned a specification to build a MathML solution into JAWS, Design Science, Sina, NVDA and soon Window-Eyes will be delivering it to their users.

As this is the article on CSUN and not on math itself, I also want to recognize Sina for the complete classiness of his presentation. While it focused on his own work on MathPlayer and the demonstration was done with NVDA, the only screen reader that fully supports it today (the Window-Eyes solution is still a beta), he also showed how a JAWS user could do the same with the FS solution in JAWS 16. The really classy part of Sina’s presentation was that he showed only the good parts of the JAWS solution when he could have bashed it for any number of reasons, most notably the vast superiority of the NVDA/MathPlayer combination. Sina is a class act; I’m Gonz Blinko, so I can say such things.

If you’re interested in exploring math in NVDA, follow the links above to the Design Science site, grab the software and give it a ride.

Firefox OS

This piece is starting to sound like a list of Gonz’s personal friends, as the second presentation I’d like to feature was the one done by old buddy and fellow Freedom Scientific throwaway, Marco Zehe. If you don’t know Marco and you get the chance to meet him, you’re probably already friends, you just don’t know it yet, as he’s one of the sweetest, most charming, delightful and smartest people you’ll meet around this business. If you use a screen reader and enjoy the fabulous accessibility in the Firefox browser, Marco is the guy you have to thank for it.

At CSUN 2015, Marco showed the world the accessibility features of Firefox OS, a mobile operating system designed to run on low cost handsets. The beauty of this solution is that the entire operating system is based on an expanded-purpose version of the Firefox browser; hence, it inherits the accessibility features we already enjoy with the Firefox browser on Windows with NVDA and with TalkBack on Android.

What I gleaned from Marco’s presentation is that all of the controls a FFOS app will need (the kinds of standard controls available on every OS) have their accessibility components built in and turned on by default. As the entire OS is designed to run on low end hardware, it is less likely that application developers will spend a lot of time and effort creating custom, inaccessible controls, as those would also require additional memory and more horsepower from a low cost, low powered processor. I predict that, when it’s ready for general distribution, mobile devices running FFOS will jump into second place behind only iOS as the most accessible mobile devices on the market. And it will also be very inexpensive, matching the one definite plus that Android can boast over iOS.

As Marco was showing off the screen reader that will come with FFOS, I asked the wisecrack question, “Does it use circles, right angles and other weird gestures?” and, before Marco, who had started to laugh, could answer, a few others in the audience, in parody of the TalkBack interface, shouted out, “six finger complex polygon!” and “four finger irregular rhombus!” and the laughter spread. Marco, of course, said, “No, no weird gestures.”


While I avoided the big corporations who make AT products and the AT vendor presentations themselves, I did attend two from major American corporations, and one of them, Target, the retail giant, put on a truly impressive presentation on Thursday morning. Its presenter, Laurie Merryman of Target, is not an old friend of mine, we hadn’t met before the event, so this little report may show less of a personal bias than the first two.

What made the Target presentation so different and so interesting was that they weren’t discussing testing their technology against WCAG and other standards; they had already done that work. Target is doing actual human factors work, true usability testing with screen readers, so as to not only provide an according-to-Hoyle accessibility experience but to take the experience to the next level: their intent in this effort is to make Target a pleasant shopping experience for people with disabilities. Laurie’s presentation included a description of how they use a program called Loop 11 to monitor each keystroke or gesture a user employs to complete a task and how the software includes other features to gauge user experience. One amazing fact is that the Loop 11 testing tool is itself fully accessible and can be used with a screen reader.

Recently, I’ve been working on a fairly large proposal, mostly unrelated to accessibility, for one of my clients. This effort has forced me to read a lot of research about non-visual literacy. Of more than 600 papers published on this subject in the fifty year period between 1963 and 2013, only 22 had a sample size over 20 participants and only three of those studied more than 30 individuals in their samples. When Laurie said that they had just started this effort at Target and, thus far, had “only 60” participants, I about jumped for joy. I raised my hand and asked, “You’re obviously gathering data to improve the Target customer experience but you are also gathering a lot of information on generic screen reader use. Would you guys be willing to share that information with the rest of us?” As US corporations tend to be pathologically secretive and proprietary about data, even data that has little specific value to them, I expected she would say no. Instead, to my surprise, she said, “That’s a great idea,” and one of her colleagues shouted, “That’ll be our 2016 presentation.” To a data junkie like me, there was no better possible answer.

Data Visualization and SVG

I tend to avoid the social events at conferences. While I write boldly, I’m actually pretty uncomfortable in crowds, I do poorly with small talk, I often get too passionate about a topic to remain polite and I’m happiest in small groups. Thus, when I decided to attend the Diagram Center reception at CSUN, I was making an exception. I was comforted knowing that I knew a lot of the people there and that, given my long relationship with Benetech, the parent organization of Diagram Center, I would have some old friends around.

While I got to meet and talk to a lot of people doing interesting things at this reception and, of course, as I mentioned above, I’m enthusiastic about the Diagram Center’s MathMLCloud project (more to come on it in the upcoming math article), the highlight was getting to meet, and being tremendously impressed by, a W3C guy named Doug Schepers.

My friends Mia and Mallory led me to one of the bedrooms attached to the suite where the reception was held. A few others were already gathered there and my dog, thinking he was at home, jumped onto the bed and took a nap. At the desk sat Doug Schepers, who was going to show us a prototype of a talking system for SVG based charts and tables. Doug’s prototype used a self voicing interface, as he hasn’t found a screen reader to support it yet, but it was truly impressive.

Doug’s approach to this problem comes from his background in standards. His work proposes a set of additions to ARIA for describing data visualizations. His demo showed only a single bar chart but the potential for this, in a standards based manner, is terrific.

My personal attachment to Doug’s work is that, as VP/Software Engineering at Freedom Scientific, it was my idea and Joe Stephen’s work that got charts and graphs talking in Microsoft Excel. What Doug’s solution provides are the semantics that make reading such information far nicer. I sincerely hope we can find a way to get this experimental code into NVDA to test it while Doug works to get this extension accepted by the people who set the ARIA standard.

Hanging Out

A big part of going to CSUN is having the opportunity to meet and hang out with both old friends and friends we hadn’t made yet. First and foremost, I had a wonderful time spending time with and getting to know fellow 3MT member Mallory Van Achterberg, one of the smartest, kindest and absolutely most fun people you’ll ever meet in this business. It was a pleasure to meet Karl Groves, a guy whose work I’ve admired but never had the chance to meet in person and a person with whom I’d probably have been friends, as we spent a lot of time in the same places with a bunch of the same people, separated only by the time dimension. Donal Fitzpatrick did a terrific presentation on his ongoing research into a system that will, using haptic cues, provide blind musicians in an orchestra with the information that a conductor provides visually, and having lunch with Donal afterward was great. As ever, it’s always nice to see the lovely Laura Legendary, even if only for a few fleeting moments. I can’t list everyone whom I had the pleasure of meeting and talking to but, suffice it to say, I’m grateful for every moment of your time.

I want to thank Steve Sawczyn and Paul Adam for the work they did on our “Dueling Mobile” panel. It was originally my idea, I suggested the panel in a blog article I wrote here last summer, but Steve and Paul did all of the real work. I got to make a few wisecracks and MC the event but Paul and Steve did all of the heavy lifting. You can find our HTML Obstacle Course on Paul’s web site and you can use it to test your mobile accessibility as well.

Lastly, I would like to thank all of you who came up to me to tell me that you read and enjoy the blog. I don’t ask for donations but I do gain a lot of satisfaction when readers find me and tell me they enjoy my work. This blog would be a lot less interesting if it wasn’t for the readers who help keep our hit count up, write comments and tweet out the links. I appreciate all of your support.


While we may be experiencing an uprising of democratically run and user funded leadership, there is a lot of important work happening in the more traditional areas of accessibility as well. I see no leadership from the traditional AT players but academia, independent ventures, the standards community and the corporate world are doing some very interesting things. This article is by no means complete; lots of other interesting developments are happening all of the time and I’m glad to be an observer as we all get to watch the technology move forward.

Anarchy, Leadership and NVDA


When I first met Richard Stallman, he described his philosophy as “information anarchism” and explained his vision for a future of free software in which individuals and corporations voluntarily donate money to support the programmers bringing them free, libre and open source technologies. Stallman’s dream has been the NVDA reality for many years now. NVDA comes from an entirely unregulated system of voluntary donations and has allowed Mick Curran and Jamie Teh to deliver one of the best screen readers ever built to a community yearning for its independence, freedom from Freedom Scientific and its high priced competitors, if you will.

Last week, my good friends and business partners, Christopher “Q” Toth and Tyler Spivey, took the anarchy to another level: they showed that this community would donate its hard earned dollars to an entirely independent effort. The power centers for screen reading had been based in St. Petersburg, Fort Wayne, Orlando/Minnesota and the UK. Q and Tyler have acted in a manner that shows that true authority can be derived directly from end users. They stepped up, took on the leadership of a single task (building NVDA Remote Access) and the community took notice, donated the dollars the boys had set as a goal and, soon, all of us will have a really cool free addition to an awesome free screen reader.

The NVDA RA team had an amazing week during the fundraising push. What everyone involved agrees on is that we’re witnessing history; what we can’t entirely figure out is whether the NVDA RA campaign was a fluke, a one-off, or whether we are experiencing an actual paradigm shift and a reassignment of leadership from a small number of gatekeepers to a profoundly more democratic and anarchistic model for the future. On a personal level, I know that I had made some major misassumptions in my evaluation of the effort prior to the campaign’s launch as, quite frankly, I didn’t expect to see so many of the big dollar donations coming from blind individuals. I didn’t have the confidence that our community would be willing to invest as heavily and as quickly in its own future as it did last week. Hence, the rest of this article will contain internal contradictions, some likely incorrect assumptions and will likely meander into and out of notions without reaching any strict conclusions. What happened last week with NVDA Remote Access may be a fluke, a one-off that may never be replicated. I hope that this isn’t the case; I hope NVDA RA set a precedent and established a model that people in this community who find a leadership vacuum can use to do many more projects this way in the future. As NVDA RA represents exactly one data point, people who care about statistics (like me) have no actual data from which we can draw conclusions but I think we can make some inferences about the future from this single event.

Thus, what follows are my thoughts on the events we witnessed last week. You may have vastly different ideas on the matter and, please, post them in the comments section as I’m trying to learn as much as I can from this event and I suspect others will be interested in your notions as well. On this happening, I’m not an expert, I’m just a guy who watched the thing unfold and was exhilarated by its success.


In the original version of this article, I stated that JAWS Tandem, a feature similar to NVDA Remote Access, came at an extra cost to its users. A commenter pointed out that this was not true and, after a quick Google search, I saw on the Freedom Scientific web site that, indeed, JAWS Tandem comes at no extra cost to people who buy a JAWS license. I apologize for this mistake and have corrected it in the text that follows. Thanks for the diligence, Mr. Commenter!

I also had written that “less than half” of the NVDA RA contributions came from English speaking nations. I was working from memory of a conversation on TeamTalk and was just reminded that the English speaking world contributed closer to 65% of the contributions and only 40% had come from the US.

I had mentioned that Mick Curran and Jamie Teh, the guys who created NVDA, were both college drop-outs. Jamie sent me a tweet this morning and a commenter pointed out that this is true for Mick but not for Jamie. Sorry about that.

Freedom From Freedom?

“Freedom’s just another word for nothing left to lose, and nothing ain’t worth nothing if it ain’t free” (Kris Kristofferson).

The NVDA screen reader is free software. This means that it can be used, redistributed and enjoyed in any way possible for no cost. It also means that the source code is available to anyone interested in using it for any reason allowable under the GPL 2 license. JAWS, the screen reader from Freedom Scientific, comes with a price tag over $1000. I believe, based on comments posted on the article I wrote on this blog announcing NVDA RA and some of the chatter on Twitter surrounding the Indiegogo campaign, that some of the people who donated to the NVDA RA campaign were motivated to contribute in order to afford themselves and the rest of the community a level of freedom from Freedom and its high prices.

JAWS and NVDA are similar but not identical beasts. Some blind users who need access to a handful of specific technologies (Citrix for instance) have no choice but to continue using JAWS as NVDA, at this point in history, has no support for such. Conversely, there are a lot of screen reader users, especially those in technology related professions, who have no choice but to use NVDA as JAWS and its high priced competitors have largely ignored many of the tools they need to do their work.

WebAIM published marketshare statistics showing that, on Windows, JAWS held a share around 55% with NVDA coming in around 22%. This data was gathered in a self-selecting survey, so it is less than scientific. The WebAIM survey was also only done in English, so it’s likely that few people from non-English speaking locales participated. The other night, as Q and I went over the tracking information from the NVDA RA Indiegogo campaign, one surprise was that only about 65% of the money contributed came from English speaking countries. It’s possible, therefore, that NVDA may actually have a larger share when viewed on a global basis. As the market data comes from a self-selecting survey, it’s also possible that JAWS, because of its popularity in corporate and government installations, may be underrepresented, as users may not have gone to the WebAIM site to fill in the survey form while at work. If one looks at all five years that WebAIM has published this information, though, the trend lines show that NVDA is the only Windows screen reader that has grown in marketshare in each of the years described in the data.

Any regular reader of this blog will know that I’ve railed against the lack of competition in screen reading many times. With NVDA approaching a quarter of all Windows screen reader installations, Freedom Scientific is, for the first time since 1998 when JAWS and Window-Eyes were tied with an approximately 35% share, actually feeling some heat.

Will FS respond to this newfound competition with a price cut, given that NVDA costs nothing while FS gets more than a thousand bucks for JAWS? Probably not. I haven’t worked at FS for more than a decade but, back then, we discussed the possibility of a free or no cost screen reader coming onto the market and how we might respond. Our strategy then, and likely now, was that, if we felt competitive pressure from a low or no cost solution, we would raise the price of JAWS. As I mentioned a couple of paragraphs ago, there are technologies that one can only access using JAWS and the FS strategy was to make sure we kept our profits high by “eating the rich.” I don’t know if FS will respond this way ten and a half years later but, as NVDA RA adds a feature to NVDA that one previously needed to buy JAWS to get, they may need to find a way to replace the dollars on their bottom line and may, in fact, respond by increasing the price of JAWS.

The Leadership Vacuum

Roughly ten years ago, Mick Curran and Jamie Teh, two very young blind individuals, came up with the idea that they could build a competitive Windows screen reader on their own. Most old timers around the access technology game actually laughed out loud: a couple of kids in Australia might take a stab at the problem and make a toy screen reader that handles whatever the API delivers properly, but little else. Over time, though, Mick and Jamie proved to the world that, following Richard Stallman’s dream process of accepting only voluntary contributions, a couple of smart individuals could, in fact, build a screen reader that can compete with JAWS and crush Window-Eyes, SystemAccess and the Dolphin products in the marketshare wars. Mick and Jamie and the people and companies who contributed to the effort showed true leadership while most of the traditional gatekeepers, both formal and otherwise, ignored the question of whether or not it would be better if our community had a free solution as an alternative to the costly proprietary screen readers.

My questions are, “Why did two individuals need to lead the free screen reader project? Where were NFB, AFB, ACB and the other so-called advocacy organizations and why have they been so silent on this matter? Why don’t the traditional leaders understand that it is immoral, unethical and possibly illegal in some locales to ask blind people to pay a penny more than our sighted peers to use the same technology?”

I contend that the traditional leaders in the blindness and technology community dropped the ball many years ago and, for reasons of their own, chose to act like ostriches, stick their heads into the sand and pretend this issue didn’t exist. They are not real leaders, people like Mick and Jamie are the real leadership in our community.

Emerging Leadership

If we cannot depend on the traditional advocacy organizations to provide leadership and if old timers like me continue to stand on the sidelines, if FS continues to allow JAWS to decay and the other commercial screen readers continue to be poorly funded, who are the leaders in our community? In the NVDA case, it was two guys; with NVDA Remote Access, it was, once again, two guys, Christopher Toth and Tyler Spivey. The leadership vacuum regarding technology and blindness was so intense that these guys, when they saw a need for a free solution, got sucked into a leadership role.

NVDA is a huge and complex piece of software that, to date, has taken something like ten years to develop. NVDA Remote Access, however, is a relatively straightforward programming task. It’s not innovative in any way (it’s pretty similar to JAWS Tandem, the similar feature in Window-Eyes and RIM from Serotek), so it contains no problems that haven’t already been solved by someone else previously. NVDA RA is unique in that it will be available to end users at no cost and its source code will be available to anyone with Python programming skills to extend, improve and hack on forever. Almost anyone with some coding and fundraising skills could have elected to do this project at any time in the past few years; Q and Tyler wanted the feature, so they grabbed the reins and took the leadership role when everyone else refused to do so. It’s possible that you, the folks who read this blog, can also step up and become leaders in this field and make a big contribution.

The other change in the leadership we might be witnessing is that the community itself, as in the theoretical anarchism Stallman describes, is taking control of its own destiny by voting with its dollars. With NVDA and NVDA RA, hundreds of blind individuals chose to buy for themselves the leadership they want. In this example, every donor, whether they gave $5 or $250, took on part of the leadership role by deciding what technology we use by taking charge of a portion of the funding model. I wish that the traditional leaders (NFB and the like) had realized the importance of a free solution; alas, in a democratic uprising, the community, led by Mick and Jamie, Q and Tyler, did the leading in a distributed manner.

If you’re reading this article, you might be the next leader in this space. I encourage as many people as possible to step up and take the bull by the balls and run with a project. It’s obvious that we cannot wait for any of the leaders from history so, do something, lead!

My Role In NVDA RA

Since announcing the NVDA Remote Access campaign on this blog last Tuesday, I’ve received a number of inquiries asking me what my role has been in the project. Some people have privately suggested to me that they thought I am leading the project from behind the scenes, something that could not be further from the truth. I am not, in any way, a puppet master pulling the strings from behind a curtain. In fact, my role has been fairly minimal in this effort. Others have suggested that this is a 3 Mouse Technology effort, an easy mistake as Q, Tyler and I are all involved in 3MT and we’re all named on the Indiegogo page as team members. In fact, NVDA RA is a project unattached to any organization; it’s a private project being done by the two guys writing the code.

NVDA RA was born as an idea when Q and Tyler were chatting on TeamTalk and realized that they wanted to have this feature in NVDA. They banged out a prototype and Q then took over the project. I was struggling with a health problem when this all started, I was not present for the conversations nor did I do anything at all to help its development. Q spent the time and did the work to get the NVDA RA story to as many people as he could, he managed every step of the process and he’s the true leader on this effort.

I have helped in a few ways. I wrote a few drafts of the statement you might have read on the Indie Go-Go page but the final text was done by Joe Orozco, a friend of the project. I’ve provided some free advice which was probably worth less than the guys paid for it and it was my idea to ask the silken voiced Ricky Enger to record a demo. I helped push out the campaign here on the blog and I made a lot of noise for a few days on Twitter trying to drive my followers to the campaign page but I’m following Q’s direction on all of this.

The fact is, I’m an old guard access technology guy. I hadn’t the imagination to even believe that a crowdsourced effort would gain so much traction and actually meet its goal. I was surprised by the campaign’s success and how rapidly it met its numbers. I’d be a terrible leader on this project as I simply wasn’t creative enough to think this would be possible and, if one cannot even imagine the possibilities, they absolutely cannot be a leader.

Free Software And Security

As I wrote back in January, we’ve seen a couple of very public security breaches in the proprietary access technology world recently. NVDA and NVDA RA are free (as in freedom) software; a company or individual concerned with potential security defects can, with NVDA and its components alone, perform a security audit on the source code and be as confident as their expertise will allow that the software contains no security defects. That level of confidence is simply impossible with JAWS, Window-Eyes or the Serotek or Dolphin products as, without the source code, users must trust the publishers to sell them software that they cannot audit independently. For all intents and purposes, the more people who can look at the source code, the more likely it is that bugs of all kinds will be found and fixed, a feature of NVDA that simply doesn’t exist in any other screen reader.

It’s true that few individuals will have the skills to perform their own security audit. I certainly can’t perform this kind of work; I don’t know Python and security isn’t my specialty. I do, however, feel much more confident while using NVDA as others expert in security can do such a review and, in a corporate setting, a company with a high level of security requirements can afford to hire professional security auditors to review the source code.

Is Crowdsourcing A Model For The Future?

As I say at the top of this article, I don’t know. NVDA has been crowdsourced from day one and has been a tremendous success. NVDA RA hit its fundraising goals in less than two days. Freedom Scientific is feeling marketshare pressure for the first time in a decade and other proprietary screen readers are falling in popularity. Is this model a plan for the future? All I can say is that we’ll see.


Mick and Jamie, Q and Tyler are now the true leaders in access technology. They became so because they made personal decisions to take on important projects and did so out of pocket when they started their efforts. They saw holes in the system and they filled them. The people and companies who have contributed to these efforts are also leaders as they have decided where the dollars should go. NVDA is information anarchy at work and it’s winning the hearts and minds of the community in a way that none of us old time so-called experts could have predicted.

I think we can also conclude that there is a severe problem with the traditional leadership in this community. As a result, we need to, as individuals, step forward and take control. You may be the next big shot in this field; all you need is an idea and the time to do some hard work. You needn’t be a programmer to lead a technology project. Programming skill certainly helps but, if you’ve got a good idea and can raise enough money, you can hire any number of developers to make your dream into a reality.

It’s also time we start holding the identified leaders to a much higher standard. NFB, ACB and the others have been notably absent on these issues while complete nitwits claiming to be accessibility experts get tons of YouTube views while providing information that is worse than useless as it’s entirely without actual knowledge of the technology involved. We all need to become harsh critics and, while I’m sure we’ll be writing for years to come, we need more people than just Marco Zehe and me doing serious criticism. Stop worrying about whether you may hurt the feelings of programmers who deliver crappy accessibility, stop worrying about whether FS may not like you if you speak up. Do the right thing: speak critically, speak frequently and speak loudly as, otherwise, you are part of the problem, not part of the solution.

NVDA Remote Access


Back in December, I wrote an article called, “2014 In Review And Predictions For 2015” in which I somewhat disingenuously predict that this will be a big year for NVDA. I say “disingenuously” not because I don’t think this is true but, rather, because I had a lot of insider knowledge about some of the things that would happen with NVDA this year well in advance of the general public so I knew that some incredibly important developments in this popular free screen reader would be available to a broader audience in the first half of 2015.

One of these exciting developments motivated this article. It’s called NVDA Remote Access and brings functionality similar to JAWS Tandem to the world of people who use NVDA, currently the number two screen reader on the Windows operating system. For the reasons I’ll describe below, please click on this link, which will bring you to an Indiegogo fundraising page, and donate some money to the project so my good friends and business partners Christopher Toth and Tyler Spivey can gather the funding necessary to bring this truly important bit of software to the NVDA using public.

This article is considerably shorter than my normal two to three thousand words on a subject. There’s little I can say that isn’t already discussed on the NVDA Remote Access Indiegogo campaign page, so please visit it to learn much more about this important bit of software.

NVDA Remote Access Basics

NVDA RA allows a user with the software installed to control the PC of another user who is also running the code. The two users agree on a secret term, they both connect to the same NVDA Remote Access server, type in their secret word and are immediately connected. This permits a variety of tasks that were previously impossible, most importantly hands-on technical support and training.
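For readers curious about how that keyword-based pairing might work under the hood, here’s a toy sketch of the idea: a relay that pairs the two clients presenting the same secret word, then forwards messages between them. All of the names here are hypothetical and this is purely my illustration of the concept; the real NVDA RA protocol may differ substantially.

```python
# A toy, in-memory sketch of keyword-based pairing, as I understand the
# concept behind NVDA Remote Access. Hypothetical names throughout; the
# real project's design may differ.

class RelayServer:
    """Pairs clients that present the same secret keyword, then
    forwards messages between the members of each pair."""

    def __init__(self):
        self.waiting = {}  # keyword -> client waiting for a partner
        self.peers = {}    # client -> that client's connected partner

    def connect(self, client, keyword):
        """Register a client under a keyword; pair it if the other
        side is already waiting. Returns True once paired."""
        if keyword in self.waiting:
            partner = self.waiting.pop(keyword)
            self.peers[client] = partner
            self.peers[partner] = client
            return True   # pair established
        self.waiting[keyword] = client
        return False      # still waiting for the other side

    def send(self, client, message):
        """Deliver a message (a keystroke, speech string, etc.) to the
        sender's peer. Returns (recipient, message) for illustration."""
        partner = self.peers.get(client)
        if partner is None:
            raise RuntimeError("client is not paired yet")
        return (partner, message)
```

In use, the helper connects first with the agreed word, the user connects with the same word and the two are joined; everything either side sends afterward is relayed to the other. A real implementation would of course run over sockets rather than an in-memory dictionary.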

In brief, people can use NVDA RA to do nearly everything one can on a local computer while hearing what NVDA is saying on the remote system. Previously, one had to purchase JAWS Tandem which, including JAWS, costs more than $1200; if this project gets its funding, NVDA users will have this functionality for free.

Why NVDA Remote Access Is Important

For the past few years, I’ve heard from a lot of people around the business of bringing accessible solutions to large populations. These are the people who make purchasing decisions for entire states and federal agencies as well as individuals who use screen readers who have wanted to use a remote solution for any number of reasons. Plain and simply, they agree that NVDA is the best screen reading solution for Windows but they couldn’t use it because it had no functionality like that in JAWS Tandem, hence, it was difficult to provide hands-on support and training. With NVDA Remote Access installed, this problem disappears and, while I can’t say anything too specific about these developments due to NDA, some big time installations are rethinking JAWS and will likely switch to the profoundly more cost effective NVDA in the relatively short term future.

My Experience With NVDA RA

Recently, I made the decision to return to Windows as my full time platform and I’ll only use Macintosh for a handful of very specific tasks. The combination of NVDA and the Windows OS and software like Firefox, Chicken Nugget and QRead (coincidentally also written by Toth and Spivey), Microsoft Office and a variety of other applications simply work better for me than do their analogues on Macintosh. As I haven’t used Windows much since 2008 or so and had never used Windows 8 for more than a couple of hours at a time, I needed a bunch of help setting up my new laptop. I didn’t even know all of the apps and utilities I should install, so Tyler jumped in to help me.

Using NVDA Remote Access, Tyler was able to install a number of apps, utilities and the like, change my settings to something I would enjoy more and perform a variety of tasks to get me up and running. On his system, he heard NVDA speaking with his chosen synthesizer at his chosen speech rate while I enjoyed using my synthesizer and speech rate on my local system. All I had to do was sit back and hit Alt+y a few times when the UAC dialogues popped up. I’ve also had the opportunity to watch Tyler help his father, a 70-year-old sighted technology neophyte, do all sorts of things on his computer as well.

NVDA Remote Access is a powerful tool in its prototype state and will be a killer app when it’s fully implemented.

Other Solutions

Yesterday, Serotek announced that users could buy a “day pass” to use their RIM (Remote Incident Manager) software for $15 per 24 hours. Instead of paying Serotek $15 for a single day, please send those dollars to the NVDA Remote Access Indiegogo campaign and participate in building a much better program, built into a much better screen reader, that you and everyone else who cares to can use for free forever. If my typical readership for this blog each kicks in $20, the world will have NVDA RA for free, forever.


Please send some money to the NVDA Remote Access project. This is an important big step for free screen reading solutions and will be a force in accessibility for years to come.


Data Breaches Plague AT Industry


This story has been unfolding as I was writing this piece. I was in the process of doing my final edits when I received a phone call from one of my sources. For reasons that will become obvious as you read the rest of this piece, the FBI has been contacted regarding the data breach at GW Micro/AI Squared. Stay tuned for updates as I get them.

As this is the most popular independent blog in the blindness space and one of the rare regular publications in this field willing to write about issues of controversy in a critical and data driven manner, I get a fair amount of email from random individuals interested in blindness and technology containing what they believe to be an important idea for a story. These emails sometimes describe an issue in accessibility about which I hadn’t written previously but, more often than not, they contain some bit of what I consider to be gossip about one person or another inside the AT biz and wouldn’t be of much interest to a broad readership.

Then, this morning, I received an anonymized contact from an individual self-identified only as the hacker who cracked Window-Eyes and the Serotek systems. I’ve alerted individuals at both companies that I’ve been contacted by said hacker and shared with them the information I’ve received thus far.

In light of the Sony data breach, I thought you would enjoy a story about a pair of recent hacks in the access technology industry.

The Message I Got Today

This morning, I awoke to a message from an anonymous sender claiming to be the person who hacked both Serotek and, in the past 24 hours, GW Micro/AI Squared. The message said that the hackers would send me the complete Window-Eyes user database and included some sample records from it. The message also said that the hackers have in their possession the Serotek 2012 and 2013 financial reports and other information they had downloaded in November. In the GW Micro case, it’s clear that these people have user names, passwords and serial numbers, but not credit card information, as it isn’t in the file they shared with me. From the snippets of data they elected to quote from the Serotek financial reports, it was unclear to me whether the data is genuine as I’ve no way of checking it for accuracy.

I have sent the data that I received to AI Squared and have deleted it from my system and no longer have it anywhere as I’ve even emptied the trash on my Mac.

The Strange Thing

As it’s clear that these hackers possess some information that could be tremendously damaging to either of these companies, it’s unclear why they didn’t just post it to some public but anonymous site rather than merely telling a blogger like me that they had it. In fact, the hackers chose to make statements in support of radical Islam from within Window-Eyes and on the Serotek Twitter accounts, allowing the public to see their hacking work without causing any real damage. I’ll assume these people see themselves more as clever vandals than actual data thieves but anything I suggest about them is purely conjecture as we don’t have a personal relationship.

I do not for a second, however, believe these hackers are actually Islamic radicals but, rather, to me they seem to be bored individuals from within the community of people with vision impairment who’ve learned some advanced hacking skills and chose to apply them to companies in this business.

The Window-Eyes Hack

The hackers were able to get into the Window-Eyes database of registered users and download all of the account information but, most interestingly, they managed to change Window-Eyes itself so that, when its users awoke and turned on their computers this morning, it updated and, once per minute or so, would make an announcement in support of the Islamic State. From my own hacker perspective, I must tip my hat to these guys for creativity in the technological equivalent of graffiti; actually forcing a product to update just to pull a prank is pretty damned clever, albeit annoying to its victims.

What Motivated the Hackers?

Again, I’m stepping deep into conjecture here but, as the hackers have chosen not to just dump all of this data onto a public site releasing potentially private information on Window-Eyes users and insider financial information about Serotek, I’ll assume they think of themselves as “grey hat” hackers. They have some fun with some minor malice while electing not to do anything that could cause irreparable harm to either the individuals or companies they’ve targeted.

At some level, I think the AI Squared and Serotek people got off easy. If these hackers had chosen to, they could have inflicted some truly heinous fuckery onto these companies, their employees and their users, like what is assumed to be North Korea’s attack on Sony Entertainment. Instead, they made some silly statements intended to anger some people and wrote to me about their efforts knowing that I’d tell the world about their hack.

Is Your Information Safe?

If you are a registered Window-Eyes user and you use the same password for other services as well, I strongly recommend that you change not only your GW Micro password but, assuming you use the same email address for Window-Eyes as you do for Amazon or some other place where your credit card information might be exposed, change that password too. I do not know how to decrypt the passwords in the sample data I received but something tells me that these hackers may have such tools and, while their activities have been relatively benign so far, one cannot be too careful these days.

I assumed the same would be the case for users of Serotek products but, this morning, I spoke on the phone with Mike Calvo and he assured me that the Serotek system itself was not compromised. Rather, it was his own account that was hacked which, of course, had access to a lot of interesting business information but not to user information, databases, passwords and such.


Data breaches are the news of the day with Sony and the numerous reports of shopping and other web properties being hacked. Typically, the blindness business is way behind the mainstream curve but, regarding security failures, I suppose this time we’re running even with the state-of-the-art.

All kidding aside, there has been an historic schism between the security and the accessibility communities. As I’ve written here and on my BlindConfidential blog, it is essential that accessibility related tools be seen as fully secure as they are essential to people’s employment in positions where security is a very high priority. A lot of blind people work in government positions, many dealing with very sensitive data. A security breach at an AT company, while it says nothing about whether or not the AT itself is a security problem, will not leave those responsible for security in large installations with a warm and fuzzy feeling about our community.

Thus, while the hackers in the Serotek and AI Squared cases seem to have thought of this kind of activity as a lark, a game to play or a prank, I recommend, for the sake of the industry’s reputation, that such activities stop immediately. To quote the astronomer Phil Plait: please, don’t be a dick.

2014 In Review And Predictions For 2015


It’s been a while since I’ve done one of my random musings pieces. I think about all sorts of shit and, sometimes, I type up such notions and present them to you, my loyal readers. In this piece, I thought I might talk a bit about this blog itself and how it performed over the past year, the various topics discussed here and then let my stream of consciousness take this piece in whatever direction the wetware points.

I had gone into my web analytics page to take a look at a few things and, as I found certain trends interesting, I thought I’d share them with you as you may find something of interest in them. As this is really musings about these numbers, I didn’t do any math beyond calculating a few averages and lumping things together in a manner that makes sense to me. Thus, don’t draw any serious conclusions about anything from this piece as, in addition to being heavily biased by my own ideas on matters, the statistics are noisy and lack any independent data from which a conclusion could be drawn. So, as it says on the programs given out at Vegas psychic shows, this is “for entertainment purposes only.”

I’d also like to express my sincere gratitude to all of the people who help me make this blog possible. There are a number of you out there, I appreciate the help you give me by providing answers to technical questions, advice on organizing articles and the other things you do to permit me to publish articles that are reasonably accurate and informative. Of course, I’d also like to thank all of you who visit the blog, read the articles, occasionally leave comments and so on, it’s you readers that motivate me to keep writing.

Blog Statistics

When I wrote the BlindConfidential blog, I used some hit count utility available to those of us who wrote on blogger back in those days. I’d look at the statistics now and then and occasionally feel proud of a big week or feel disappointed in a bad one. Those statistics were very raw and extraordinarily noisy and I doubt they reflected true visits in that that particular utility didn’t even filter for uniqueness.

In late November 2013, a year ago or so, I decided to install Piwik to track statistics about this blog. Piwik is profoundly more interesting than the utility I used previously: its UI is mostly accessible (my best experience with it was with NVDA and Firefox but it is quite acceptable with VoiceOver and Safari on OS X), its developers seem committed to improving its accessibility and it has a panoply of features for analyzing a site’s web traffic. I turned my Piwik installation live on 12/1/13, a year ago, so I now have twelve full months of data about this blog.

Note: I installed Piwik and started looking at our statistics purely out of curiosity and not for any business purpose. This blog and its predecessors do not accept advertising nor ask for donations. Hence, gathering analytics is, for me, an intellectual endeavor that leads me to questions like, “Why do I get a lot of hits when I write about topic A but very few if I write about topic B?”

The Big Numbers

In the twelve months ending November 30, 2014, this blog, discounting for bots and such, received something more than 26,000 hits, an unremarkable number in an era of viral Internet media but not bad for a crackpot like me. A bit unfortunate, though, is that, to get a truly accurate representation of my 2014 statistics, I’ll exclude nearly 6000 of those hits from my analysis as they all happened when Daring Fireball linked to an article I had written here in 2013. Thus, I’ll be using the number 20,000 as the grand total for the year and will note otherwise when and if I use the top line number.

In total, we published 22 articles here this year, which gives us a mean hit count of about 900 per article. A mean isn’t a terribly interesting statistic, though, as the distribution of hits across the articles is far from even.
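For anyone who wants to check my arithmetic, the mean above is just the rounded yearly total divided by the article count:

```python
# Back-of-the-envelope check of the mean hits per article, using the
# rounded totals quoted in this post (20,000 hits across 22 articles).
total_hits = 20_000
articles = 22
mean_hits = total_hits / articles
print(round(mean_hits))  # prints 909, i.e. "about 900 per article"
```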

The Article Popularity Curve

This year, the subject matter on this blog can be broken down in a variety of ways. Most obviously to me, however, is that I wrote about accessibility on Android and Apple products, I wrote articles about the history of screen reading on Windows and I wrote what we’ll lump together as general interest pieces, a category we’ll call “other” giving us four major topic categories.

When, in 2014, we published an article here about Android accessibility, it attracted a mean readership (excluding the article that Daring Fireball linked to) of more than 2000 hits, with one, “Testing Android Accessibility: I Give Up”, receiving just over 4500 on its own. Also interesting is that, including the article that got the mainstream link, a number of the most popular articles this year were those that I had written in 2013 about Android accessibility as well.

In second place come the articles about screen reader and access technology history, specifically those I wrote about my personal experience during the days when the Windows screen reader wars were raging between FS and GW Micro. Of the top ten articles this year, three come from this category and one, “The Death Of Screen Reader Innovation”, was a 2013 article on the subject that was also (based on the really crappy analytics thing I used then) the most popular article of that year. I’ve been writing articles like these since the BC days and they’ve always been popular among readers.

Articles critical of Apple accessibility were the third major group of pieces that I wrote in 2014. These gathered a mean hit count of about 500 each and, to me, were a disappointment. I hadn’t written anything terribly critical of Apple in years, since the BC days in fact. Unlike the Android and historical pieces, these articles got few comments and little noise on Twitter as well.

The fourth and final category, the one I call “Other” as it ranges in subject matter from William Faulkner and Led Zeppelin to announcing a couple of things to preserving the history of access technology, received very few hits whatsoever. It appears that, if Gonz Blinko strays too far from his central themes, very few people take the time to click through.

I “advertise” each blog article the same way: when I first post it, I send out a tweet with the headline and the link and, when appropriate, the #a11y and #accessibility hash tags. About five days later, I’ll tweet the link again with an “in case you missed it” preface. When I last looked, I had just over 600 Twitter followers. If someone tweets something I especially like about an article, I’ll retweet and favorite it as well. Thus, I don’t spend much time marketing this blog and I’m happy to see that the numbers on some articles must come from word of mouth, as they exceed the number of people to whom I send links.

Analyzing These Numbers

Here’s where I find myself scratching my head. I’ve broken a number of the things we’ve published here this year into series. The series that, by far, received the most hits was “Testing Android Accessibility,” with I Give Up getting over 4500 and The Deaf-Blind Perspective and The Programmers’ Perspective receiving around 1200 each. Historically, my articles get about three quarters of their hits in the first three days after publication and the totals tend to stop rising after an article has been live for around ten days. This series, however, continues to get about 75 hits per article per month as a group, showing continued interest in our testing.

What distinguishes the three “Testing Android Accessibility” pieces from others I had written critical of Android or from the much less popular articles I did on Apple? What made these three pieces so much more attractive than the more outlandish Amish User Experience or Do We Get What We Pay For?

I think what made these three pieces so much more popular is that they were the most data driven articles I’ve ever written or, in the deaf-blind case, that a guest author had written. While those articles contained opinion, the conclusions were drawn from actual test results included in the articles themselves. Most of the content here is derived from my somewhat educated personal thoughts on a subject; in this series, we spent a lot of time doing a lot of preparatory work before the pieces were written and published and I think their success reflects that effort.

The Statistics Appear Upside Down

The next question I find when reading my Piwik report comes when I compare the relative success of articles on this blog with actual marketshare numbers, published and observed. As far as I can tell, Apple has an overwhelmingly large portion of the blind user market on mobile devices while Android, with published statistics at 12% but an observed share that’s even smaller, has had at best marginal uptake in this community. Yet, if I publish an article about Android accessibility, it’ll get 500 hits in the first few hours while an Apple piece is lucky to get 500 hits in its lifetime.

Android fanboy behavior cannot account for this large a discrepancy. Our friends on the Eyes Free list do feel a fierce sense of loyalty to their chosen platform and I’m certain to hear from them when I write a critical piece about it but, while vocal, there aren’t that many people actually active on Eyes Free and in other blindness and Android communities. Thus, the fanboys and my haters alone cannot account for the popularity. Digging a bit deeper into my Piwik statistics, I find that (including the article Daring Fireball linked to) only 6.8% of the 26,000 hits came from Android systems while more than 50% came from Windows and roughly 30% from Apple products which, allowing for a reasonably large margin of error, roughly reflects the actual user distribution among blind technology consumers, both published and observed.

Nothing in the Piwik reports about geography says much about who reads these articles either. One outlier is that all three of the Testing Android Accessibility articles got a spike in hits from a specific city in South Korea where Samsung has a plant, so maybe an actual engineering organization is paying attention. I notice a similar spike from Cupertino, California when I write about Apple and, if I mention GW Micro, I see a spike in hits from Fort Wayne, Indiana. But I can’t believe that the insiders, the actual engineering sorts responsible for this technology, cause such a large bump in hits either. In fact, when I drill down on these numbers, I see that we’re talking about a very small number of actual hits so, statistically, we can’t derive any conclusions from the geographical data.

By looking at my Piwik report, I learn that most of the hits here come via Twitter and, after that, search engines. Indeed, the top search term that led people to this blog in 2014 is “Android Accessibility.” This, in turn, becomes a self-fulfilling prophecy: people google that term, they find one of our articles, they click, our SEO gets better, and it becomes more likely that the next person who searches on that term will find us, and so on.

Over the past couple of days, I have talked with others about these statistics and this article, trying to find conclusions we can draw. Fundamentally, why do so many Windows, iOS and Macintosh users come to this blog to read articles about Android? The only solid answer we could come up with is, “we don’t know.” We all suspect that people read articles about what they don’t already own: if they use Apple devices, they already know about its defects, so they don’t come here to read about them. We came to a few other conclusions but none were supported by the data so I’ll leave them out.

What About The History Articles?

A whole lot of people, upon meeting me for the first time, tell me that they really enjoy these articles. I enjoy writing them as well, as I get to talk about the days when I was actually relevant, productive and on top of the world’s most popular screen reader. I don’t just write them to massage my ego, though. These articles have had, for years, a running theme: the lack of fundamental business principles, specifically the lack of real competition, in the blindness sector of access technology.

I think these articles remain popular for a couple of reasons. First, a lot of you readers have been looking at my stuff for a lot of years and these articles tend to be part of a continuum of commentary. Those days were an exciting time to be around the AT biz; on JAWS, we were doing significant things with each release and, as happens during periods of irrational exuberance, we thought that run would never end.

In addition to the rapid progress in screen reading during that time, we also saw the beginning of Section 508 which, while certainly not a success even a decade later, caused an explosion in new jobs in the federal government for blind people. Not only was the JAWS marketshare growing to complete dominance, the market itself was expanding faster than ever previously observed. Those days were a lot of fun and I hope another generation of accessibility specialists gets to experience something as much fun in the future.

Lastly, a whole lot of the things I predicted on the pages of BlindConfidential came true. Readers who were rightfully skeptical of the articles I published back then (I encourage my readers to be as skeptical as possible of everything I write, as well as everything else they read on a blog and in mainstream publications; believe nothing, not even this) have, years later, come back and agreed that my predictions about the Windows screen reading future had largely come true. The lack of competition in the space allowed JAWS to deteriorate; falling sales caused by its suicidal business plan prevented GW Micro from catching up; NVDA came along and grabbed ~22% of the Windows screen reader market but, lagging in its Office support, hasn’t caught on in institutional settings; and the Windows screen reading world is a quagmire of differently broken access technologies.

The Year’s Biggest Disappointment

As I’ve written in the introduction sections of various other articles, I’m never certain what will and what will not be a hit on this blog. Having analyzed the Piwik reports, I suppose I can predict that an article about Android or screen reader history will outperform something I might do about Apple and that few people will read anything I write about anything else but, for a specific article, I never really know. One such surprise was a piece published back in July, Preserving Our History.

In Preserving, I present the problem that the history of access technology for people who are blind seems to remain unwritten. Products I felt were of tremendous historical importance, the Blazie Braille ’N Speak for instance, and people like its inventor, Dean Blazie, have no Wikipedia entries and very little of merit written about them anywhere online. Wikipedia has long articles about completely random pieces of mainstream technology but even the most groundbreaking access technologies are disappearing from history.

I thought this subject felt both important and of interest to a broad group of potential readers. I wrote the piece and, as I describe above, tweeted out a link with the two accessibility related hash tags I use. The next day, I looked at Piwik and saw that maybe 25 people had clicked through. After waiting a few days, I did my usual second tweet and was greeted with the most deafening silence I could have imagined. To date, the article has received fewer than 100 hits. People in this community, excepting those who wrote comments or sent me a note through the contact form, just don’t seem to have an interest in preserving our history. This made me sad.

What About The Future?

In preparation for this piece, I reread the twenty top articles from my statistics. This blog was really dark this year. Almost every article I wrote in 2014 has a rather negative conclusion. I’m uncertain if it will be any more cheerful moving forward.

For the first year in many, 2014 saw no new fiction written under the Gonz Blinko nom de plume. I have three such pieces in various states of disrepair and incompletion but my inner Hunter S. Thompson has failed to inspire me to lampoon the industry and my own life lately. Some people around FS hated those pieces but, in general, they were pretty popular. I don’t like poking fun at people whom I don’t know well, as I’m not sure how they’ll take it, and, these days, I do try to be less of a dick than I was when I wrote BC, especially in its early days. Still, to those of you who’ve asked, you can probably expect new Gonz material in the coming year.

The Gonz Blinko Predictions for 2015

I’m going to take a look into my somewhat resin covered crystal ball. Well, actually, I cannot afford an actual crystal ball so I’m using the water chamber on a glass bong that I bought up in Haight-Ashbury a few years back to observe the murky future. I’m feeling exceptionally intuitive as I stare into this bit of glass and see nothing (I can’t see, I’m blind you morons) but the sound of the diffuser clinking against the sides, surrounded by the splashing of the water, makes me as confident of the following predictions as I am in the dead Sylvia Browne’s ability to find a kidnapped child:

  • Something important regarding accessibility for people with vision impairment will come out of Amazon. I don’t know whether this means that Peter Korn will lead an effort to fork Android accessibility and make a proprietary Amazon solution for Android, whether we’ll see tremendous improvements to the accessibility of Amazon’s web properties or what specifically will happen but, from the sounds in the bong, I predict significance.
  • Google’s accessibility will see some dramatic improvements but will probably mostly come in the second half of 2015. This prediction is based on one bit of actual data, Vic Tsaran, a blind guy who did a terrific job on accessibility at Yahoo now seems to be leading the charge at Google. Past performance does not guarantee future returns but I’ll wager a few bucks that Victor, if anyone, can start fixing the systemic problems with accessibility at Google.
  • NVDA and VoiceOver will continue to see marketshare growth while JAWS, Window-Eyes and others continue to fall. One serious wildcard in this equation is whether or not Narrator in Windows 10, which apparently has a scripting language built in, will succeed. I don’t know anyone who has tried to use Narrator in W10 and, obviously, I haven’t heard from anyone who has written or edited a script for it, so all I can say is that this is an interesting data point we should keep an eye on in the coming year.
  • Samsung, for reasons entirely unrelated to accessibility, will follow in Amazon’s footsteps and fork Android and give it its own brand name. Huge companies rarely like being beholden to each other and Samsung needs its own OS to optimize for features on its hardware. A major reason that iOS devices can outperform their Android cousins while having a lower powered CPU results from its software being optimized for very specific hardware components, something impossible in a generic platform like Android.
  • A number of new micro businesses will launch selling NVDA technical support, making my favorite Windows screen reader considerably more attractive to institutional installations.
  • I will publish at least one article that gets me slammed by the fanboy community surrounding some bit of access technology. It’s unlikely that this will be the Android peeps as, given that Google has brought on an individual like Tsaran, they seem to mean business; so, while I would still recommend avoiding Android if you’re blind, I’ll postpone any further analysis for a while to wait and see what Victor might accomplish there. And, on a personal note, I’m really bored with all things Android; progress in accessibility in the L release, based on scraping Eyes Free and not testing anything myself, seems to remain tragically slow, so there’s nothing left for me to write about regarding it until something gets profoundly better or actually gets much worse.
  • Windows tablets and very low cost micro-laptop things from Dell, HP and elsewhere will emerge as the first real competition to iOS in the blindness mobile accessibility space. I see a whole lot of hardware coming online in the under $300 price range; if a user tosses NVDA or their favorite screen reader onto one of these, they’ll have a low cost portable device with a UI they’ve been using for years.
  • Something important will happen regarding accessibility to mathematics for screen reader users. Over the past 18 months, we’ve seen MathML support added to VoiceOver and in JAWS 16, we saw the impressive demo that Sina Bahram and his friends at Design Science did at CSUN 2014 in this area as well. Meanwhile, I’m hearing poor reports about the VO and JAWS solutions in their current incarnation but the trend points toward improvement in providing math to blind people.
  • Sadly, I believe that accessibility on Apple devices will remain the best thing available for blind users in the mobile space while continuing to deteriorate. On any institutional sale where accessibility is a requirement, iOS can continue winning in the absence of any real competition on accessibility. Hence, there’s no market force pushing Apple to regain its 100% compliance policy, something I think is reflected in both iOS 8.x.x and OS X Yosemite.

Will these things come true? I don’t know, the bong likes to give hints of the future but is rarely specific. These predictions are Gonz’s first attempts at the paranormal so his intuition may not be focussed properly.


I want to thank all of you who’ve visited the blog, read the articles, posted the comments, sent me emails through the contact form, tweeted and retweeted links to the articles, connected with me on Twitter, told me that you read my work when we’ve met at a conference or participated in this blog in any way in 2014 and, indeed, over the entire 8 years I’ve been blogging. I honestly enjoy reading all comments posted here, even those that are quite antagonistic toward me personally.

Of all of the comments we got in 2014, my favorite came from a reader describing me as an “irresponsible journalist,” because it elevates me, a self-described stoner, crackpot and loudmouth, to the level of “journalism,” something I’d never claim for myself. If this guy is right, I’ve taken a step up from blogger, an author who is irresponsible almost by definition, all of the way up to journalist, albeit an irresponsible one. Those words make me smile as they suggest that some people actually think, however inaccurately, that these articles have real power to influence people and their purchasing decisions.

I’m not sure what to expect on this blog in the coming year. I’ll focus less on Android as there’s nothing left to say about it other than “I hope Victor is successful at Google.” I’ll likely explore the competition, or lack thereof, theme from different angles as we see events actually unfold. And, while they’re my least popular pieces, I’ll probably be writing more articles in the “other” category as those seem to be the ideas I’m thinking up lately.

Thanks again for your support!

Apple, The Company I Hate To Love, Part 3: The Macintosh User Experience


Recently, I have been writing a series of articles about accessibility and Apple, describing the cognitive dissonance I feel when I’m in a position in which I must praise the Cupertino technology giant. I wrote the first article, “Apple and the Accessible Internet,” before I realized this would become a series, so it reads like a stand-alone piece. Then, after the release of iOS 8 and the Yosemite version of Macintosh OS X, and a bit of encouragement from some readers, I launched a series investigating broader issues regarding Apple and accessibility. You can read the first article, “My Long History Fighting Apple,” about my activism on intellectual and information freedoms, and the second item, “Where’s The Competition?,” in which I revisit a common theme for this blog, namely the current and historic lack of competition in accessibility and how this phenomenon hurts blind users. These are not great examples of my writing skills; I stand by the opinions presented but please forgive the mediocre writing and repetitiveness of the material, as I’ve been highly distracted while working with my new guide dog.

I also work on another blog called Skeptability, a pan-disability site that discusses the intersection of disability with feminism, social justice, skepticism, humanism, atheism and related subjects. My general rule separates my articles between the two sites: pieces that are more technical, more laden with jargon and requiring more historical knowledge about the access technology field get published here while, when I write for Skeptability, I write about things of interest to a broader audience. My Skeptability articles tend to be less dark than this blog and, if you’re interested, you can read an article about my experience at guide dog school there, called “My Time At Guide Dog School.”

I have been a blind user of Macintosh for a pretty long time now. I first wrote about this experience on my old BlindConfidential blog in an article called, “Eating An Elephant, Part 2: Apple Rising,” where I prefaced the piece with a discussion of Apple’s deplorable history regarding intellectual property law but continued on to talk about how good Macintosh accessibility had become at that point. Back then, I did an experiment in which I didn’t reboot or restart my Macintosh with VoiceOver running until I absolutely had to. My record for testing the reliability of a Macintosh back then was more than 40 days without needing to restart the laptop or the screen reader. Today, a bunch of years later, I rarely go a single day without rebooting my Macbook Air or restarting VoiceOver. Plain and simple, I cannot be as productive with my Macintosh as I once was and I will soon be returning to Windows as my full time system, using the Macintosh only for my audio work.

This article explores the very sloppy, accessibility-wise, Yosemite operating system release and discusses problems with OS X accessibility that have been with us for years. As far as I can tell, Apple has been made aware of all of these issues, repeatedly and over a lot of years, but its engineers have ignored them. In fact, Apple seems to treat Macintosh accessibility as an orphan stepchild of the much more comprehensive iOS version of the same.

I’d like to thank my friend and fellow accessibility expert Bryan Smart for the conversations we’ve had in preparation for this piece. Readers should visit his blog where they can listen to his work investigating some of the issues described herein. Bryan is a really smart and very insightful individual on issues regarding accessibility and you, my loyal readers, should check out his stuff too.

The Sloppy Yosemite release

As I mentioned in the second article in this series, it appears as if Apple had hired an accessibility quality assurance specialist out of the notoriously sloppy Google testing department. Yosemite also contains some accessibility improvements, most notably in the browser, iWork and by adding support for MathML in a number of apps. These are all very solid steps forward but, very sadly, they are overshadowed by the newly introduced accessibility problems along with long standing issues that have yet to be remedied. I didn’t do a lot of testing to prepare for this piece and will be writing from personal experience rather than reporting results found from a formal testing procedure. The guys on AppleVis wrote a terrific and much more detailed article called, “Features and Bugs in OS X 10.10 Yosemite,” which you should read if you’re looking for a more detailed report.


I tend to keep my email app running at all times on all of the different OS I use. Email is, for me, an essential tool for business, recreation, personal and professional correspondence and nearly every other activity in which I participate. Years ago, when I wrote “Apple Rising,” AppleMail was both entirely compliant with the Apple accessibility API as well as being very usable for a VoiceOver user.

Over the years, AppleMail has seen its accessibility deteriorate. In the Yosemite version, using “Classic Mode” for the display, when a user opens an email that is part of a thread, they will hear “Embedded” followed by “Embedded Unknown, Embedded Unknown.” If one then interacts with the first thing labeled as “Embedded,” they will find themselves in that email but must first navigate through no fewer than a half dozen buttons that VoiceOver identifies only as “button” in speech. Thus, we find ourselves in a window in an app that’s very important to my daily life with a bunch of unlabeled items in its interface, even in “Classic Mode.” Regarding accessibility, AppleMail feels a lot like something released by Google; it isn’t even up to an alpha test level as it remains feature incomplete. The bugs in AppleMail are all really easy things to fix and are easy test cases that should have been caught by an automated test suite; hence, they are solidly “stupid” bugs.
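To make the “easy test cases” claim concrete, here is a minimal sketch, in plain Python rather than any real test framework, of the kind of automated check that would catch these bugs. The (role, label) pair format and the function name are my own illustration, not any actual Apple or third party testing API:

```python
def unlabeled_controls(controls):
    """Return the controls whose spoken label is missing or generic.

    'controls' is a list of (role, label) pairs, a hypothetical format
    a test harness might scrape from an accessibility API. A screen
    reader saying only "button" or "Embedded Unknown" means the label
    on that control is effectively absent.
    """
    generic = {"", "button", "unknown", "embedded unknown"}
    return [(role, label) for role, label in controls
            if label.strip().lower() in generic]

# The AppleMail symptoms described above would trip the check:
suspect = unlabeled_controls([
    ("button", ""),                 # speaks only "button"
    ("button", "Reply"),            # properly labeled
    ("group", "Embedded Unknown"),  # speaks "Embedded Unknown"
])
print(suspect)  # [('button', ''), ('group', 'Embedded Unknown')]
```

A check like this runs in milliseconds over an entire window’s control tree, which is why unlabeled buttons shipping in a release counts as a process failure, not a hard engineering problem.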


When Apple released the Mavericks version of OS X in 2013, it introduced some nasty accessibility bugs in Finder, one of the most essential bits of software for all Macintosh users. Specifically, when one tried to navigate through the sidebar to move to a certain folder, focus was lost and, instead of reading the items in the part of the interface the VoiceOver user thinks they’re interacting with, it read file names and, indeed, moved focus from the sidebar to the table of files. For a VoiceOver user, this is a usability nightmare and, while I think Apple fixed this in a later version of Mavericks, it appears to have been broken again in Yosemite.

This problem leads me to question Apple’s quality assurance and software engineering methods. If a bug existed and was fixed in an earlier version of the operating system, the fix should have long ago been integrated into the main trunk of the source tree but, apparently, Apple chose to ignore accessibility fixes present in Mavericks when building the Yosemite release. This also suggests that Apple either does not test its accessibility features and VoiceOver or chooses to ignore bugs reported by its internal testing teams and by the army of blind people out there willing to spend their personal time reporting accessibility problems to Apple. I know which bugs I personally reported during the Yosemite beta cycle and, much to my chagrin, very few of the many I reported were fixed in the final release.

Other Problems

While my notions about AppleMail and Finder are accurate and things you can test for yourself, they do not even approach a complete look at Yosemite accessibility. As I suggest above, please do read the AppleVis article for far more details. Suffice it to say that OS X has had problems for a number of releases and, with each new version, the accessibility deteriorates further.

Yosemite And The Internet

After publishing “Apple And The Accessible Internet,” I received an email from the people who work at the email address. The author of the email asked me to install the Yosemite beta, to test the improved Internet support and to report my findings. I typically, and politely, refuse to run pre-release software without being compensated for my time but, in this case, I made an exception and elected to work as a volunteer testing this OS release.

I was pleased when I visited my first web site using Safari, VoiceOver and Yosemite. The first thing I did, with QuickNav turned off, was to start navigating around using the cursor keys in a manner similar to how I interact with Firefox using NVDA or Internet Explorer with JAWS. I also enjoyed the relatively new feature that allows a VO user to navigate a web site with single key commands, similar to QuickKeys in JAWS and similar features in all Windows screen readers.

When, however, I tried to actually use the new Yosemite version of VoiceOver in Safari, I found a number of problems.

An Interface Out Of Sync With Itself

If you are running OS X Yosemite (10.10), you can try this on this very page. First, make sure QuickNav is turned on, then hit “h” a few times to get to a heading somewhere on the site; it doesn’t matter where. Next, turn QuickNav off (left arrow plus right arrow toggles it) and start navigating with the cursor keys in the new simulated virtual cursor mode. You will discover that the two navigation modes are out of sync with each other. A user would expect that hitting down arrow after navigating by heading would read the first line after the heading text; in Yosemite, you will find that cursor navigation, assuming you hadn’t used it earlier, starts from the top of the page no matter where QuickNav had left you. This turns the new cursor navigation feature into a demo of things to come, as it is not actually usable in its current state. A lot of VoiceOver for OS X has seemed more like a demo than production code for a long time.
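The behavior a user expects is structurally simple: every navigation mode should read and write one shared position on the page. A minimal sketch of that idea in Python; the `PageCursor` class, its line-based page model and the method names are illustrative assumptions, not VoiceOver internals:

```python
class PageCursor:
    """One shared position that every navigation mode reads and writes.

    The Yosemite bug described above behaves as if QuickNav and the
    cursor keys each keep a private position; sharing 'pos' between
    the two methods is what keeps the modes in sync.
    """

    def __init__(self, lines, heading_indices):
        self.lines = lines                    # page text, one visual line each
        self.heading_indices = heading_indices
        self.pos = 0                          # shared by all modes

    def next_heading(self):                   # QuickNav "h"
        for h in self.heading_indices:
            if h > self.pos:
                self.pos = h
                return self.lines[h]
        return None

    def next_line(self):                      # down arrow
        if self.pos + 1 < len(self.lines):
            self.pos += 1
        return self.lines[self.pos]

page = PageCursor(
    ["intro text", "Section heading", "line after the heading", "more text"],
    heading_indices=[1],
)
page.next_heading()        # jump to "Section heading"
print(page.next_line())    # "line after the heading", not "intro text"
```

Because both methods move the same `pos`, arrowing down after a heading jump continues from the heading instead of restarting at the top of the page, which is exactly the expectation the Yosemite implementation violates.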

Split Lines

Due to Apple’s philosophical obsession with ensuring that VoiceOver only represents information that appears on the screen (more on this later in this article), when using cursor navigation on an Internet site, it reads the information exactly as it appears visually in Safari. This means that when using cursor navigation or having cursor navigation turned on during a “read all” the user will hear words hyphenated by Safari read with the hyphens included. If the user has sounds turned on for misspelled words, the hyphenation will create misspelled words by its nature and the user will experience the latency problem caused by having sounds inserted sequentially into the audio stream. NVDA does not exhibit this problem and, if I remember correctly, neither does JAWS.
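The way NVDA avoids this can be sketched as a pre-speech pass that rejoins words the renderer split across line breaks. This is a simplification in Python; the heuristic (a hyphen at end of line followed by a lowercase continuation) and the function name are my own illustration, not either screen reader’s actual code:

```python
def join_soft_wraps(visual_lines):
    """Rejoin words that the renderer split with a hyphen at a line
    break, so the TTS engine hears whole words, not fragments."""
    text = ""
    for line in visual_lines:
        if text.endswith("-") and line[:1].islower():
            # A line-break hyphen followed by a lowercase continuation:
            # drop the hyphen and glue the word back together.
            text = text[:-1] + line
        else:
            text = (text + " " + line) if text else line
    return text

print(join_soft_wraps(["Safari displays hyphen-", "ated words across line breaks."]))
# prints: Safari displays hyphenated words across line breaks.
```

A real implementation would need to distinguish true hyphenated compounds from line-break hyphenation, but even this crude pass shows why the speech stream need not mirror the visual layout character for character.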

Faithful representation of on screen information is very nice in some cases but, in this one and a number of others, it inserts a layer of inefficiency into the user experience.

Copy And Paste

I spend a lot of my time writing and, like most authors these days, I use the Internet as source material for my work. It is therefore essential that I be able to copy information from web sites and paste it into my text editor for integration either into one of these blog articles or, far more important, into the documents I prepare for my clients. With VoiceOver and Safari, copy and paste is a never ending adventure.

On this site, one can select text using the cursor key navigation along with the SHIFT key, as one would expect but, on many sites I tried, the same selection, copy and paste do not work at all. VoiceOver does provide a keystroke for selecting text on web pages but it also works very inconsistently. When I’ve reported problems with selecting text on web sites to Apple, they responded with ambiguous answers like, “something about that site prevents us from selecting text.” I’d accept this as an answer based in web accessibility standards and guidelines if the people at Apple would tell me which piece of WCAG 2.0 or standard HTML was violated, but they never include that information in their responses. Meanwhile, NVDA handles the same pages perfectly in Firefox and, in my opinion, if one screen reader can do something properly, they all can.

In general, the Yosemite version of VoiceOver and Safari provide a nicer experience on the web than did Mavericks but, as it also contains a whole lot of the problems that were reported by users of earlier versions of OS X, it remains far behind JAWS and NVDA in its actual usability.

Latency and Sounds in VoiceOver

A really long time ago, TV Raman (now at Google accessibility) added the notion of an “earcon” to his emacspeak software. More than ten years ago, JAWS became the first screen reader to include this idea with the advent of its Speech and Sounds Manager. An earcon is a sound a user hears in lieu of speech, augmenting the audio stream so the user spends less time listening to speech and more time actually getting their work done. Going back to the early versions of VoiceOver on OS X, Apple included the concept of an earcon to deliver information but implemented it in the worst way possible.

While I worked at HJ, Ted Henter personally taught me to count syllables in any text that JAWS would speak to its users. Ted demonstrated that every syllable or pause spoken to a user takes up a unit of that user’s time. We invented the Speech and Sounds Manager to help users reduce the number of syllables they need to hear while getting the same amount of semantic information in less time. As a quick example, one can set JAWS to play a tone instead of saying “link” when it finds one. The important feature of the JAWS implementation, however, is that the sound plays simultaneously with the text being spoken.

As you can hear if you listen to Bryan Smart’s recordings on this matter, the VoiceOver developers made a rather bizarre decision when they implemented the sound feature on OS X. Specifically, instead of playing the sound simultaneously with the spoken text, VoiceOver adds its sounds sequentially to the audio stream. Thus, instead of saving time, each sound played by VO adds more time to that which the user needs to spend hearing the same amount of information. According to Bryan’s work, this delay is never less than 200 milliseconds and can go as long as a half second. One fifth of a second doesn’t sound like much but such interruptions cause a cognitive hiccup that could easily be avoided by playing the sounds at the same time as the text is spoken. Apple’s sound system adds time, thus reducing efficiency while also breaking up the text in a manner that disrupts one’s attention.
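Bryan’s timing numbers make the cost easy to quantify. Here is a back-of-the-envelope model in Python; the per-item durations are hypothetical illustration values (600 ms of speech per item, a 200 ms earcon matching the lower bound of the delay he measured):

```python
def listening_time_ms(events, simultaneous):
    """Total listening time for a stream of (speech_ms, earcon_ms) events.

    Sequential insertion (the OS X behavior described) pays for the
    speech and then the earcon; simultaneous playback (the JAWS
    approach) overlaps them and pays only for the longer of the two.
    """
    if simultaneous:
        return sum(max(speech, earcon) for speech, earcon in events)
    return sum(speech + earcon for speech, earcon in events)

# Ten links on a page, each with 600 ms of speech and a 200 ms earcon:
events = [(600, 200)] * 10
print(listening_time_ms(events, simultaneous=False))  # 8000 ms
print(listening_time_ms(events, simultaneous=True))   # 6000 ms
```

Under these assumed numbers, sequential insertion costs the listener two full seconds over just ten links, and the gap only widens on link-dense pages or with the half second delays at the top of Bryan’s measured range.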

This problem and its related efficiency issues have been reported to Apple many times over the years, and people have discussed it in blog articles and podcasts, but Apple continues to refuse to remedy this major problem with its interface.

The latency issues aren’t limited to the sounds being played. If one uses any text augmentations, including having VO change the pitch for links and misspelled words, each is accompanied by a delay of no less than 100 milliseconds, making these features interesting but not entirely usable.

Complex Apps And Efficiency

Apple must be commended for the excellent work it has done regarding accessibility in software like Xcode and Garageband. As far as I can tell, a VoiceOver user now has access to all of the features in both of these very complex user interfaces. For me, an occasional podcaster, having Garageband for recording and mixing available to me has been a lot of fun. I also enjoy using Garageband to create “virtual bands” to jam along with using loops and related features. At the beginning, the VoiceOver interface in Garageband worked well for me, but I was a novice then and, as I grew more proficient with the program, I found many tasks were tremendously cumbersome.

As I’m only passingly familiar with Xcode (I don’t write software for Apple devices), the examples I’ll use in this section will come from Garageband but apply to almost every Apple branded Macintosh application of any complexity, including iWork apps like Pages and Numbers.

Faithful Representation Of On Screen Information

When I worked on JAWS, a frequent complaint we received at FS from the field came from sighted people or from JAWS users who needed to work closely with sighted colleagues. The problem was that JAWS spoke information differently from how it appeared on the screen. Sighted trainers became frustrated when the speech didn’t match the visual display, and I can remember trying to ask my sighted wife for help at times, both of us frustrated by the difference between the speech and the screen. The people who designed VoiceOver chose instead to take a radically different approach and ensure that on-screen information is accurately represented in what the user hears.

The JAWS philosophy comes from Ted Henter’s insistence on not only providing an accessible solution but also making sure that the solution is as efficient to use as possible. I’m sad to say that, as far as I can tell, no screen readers other than JAWS and NVDA even attempt to maximize efficiency anymore. The problem with the JAWS approach, however, is that it comes with a steep learning curve: to use complex applications efficiently with JAWS, users must spend a fair amount of time learning keystrokes specific to the application they need, and in most cases must live with some aspects of the application remaining inaccessible. The Apple approach solves the discoverability problem: a novice can poke around the Garageband interface and find everything in a fairly intuitive manner. At the same time, it provides little in terms of efficiency for intermediate to advanced users.

Using Garageband, I often find myself spending more time navigating from control to control than I do actually working on my recordings.

A Lack Of Native Keystrokes

In general, Windows programs tend to have more accelerator keys for interacting with features than do those on Macintosh, and it would be useful for Macintosh apps to have the same. While I can perform every task and use every feature in Garageband, many require me to issue a pile of keystrokes both to navigate from place to place and to simulate mouse actions on screen. Indeed, my experience is nearly identical to what a sighted user enjoys but without the efficiencies provided by having vision. Where a sighted user can move quickly with a mouse or trackpad, a blind user needs to step through every item in between and often perform actions with a keyboard that could be made far easier if a single keystroke were available.

The Interaction Model

In an attempt to make navigation more efficient, the VoiceOver developers invented a user interface system that groups interface items together so that the user can either jump past a group’s contents or, if they so choose, interact with the group and access the information therein. Unfortunately, the grouping seems to be done algorithmically, and this facility doesn’t work terribly well.

Using the Macintosh version of iTunes as an example, a user can observe some areas made more efficient by the interaction model while also finding areas where they need to step through a bunch of controls that are not grouped together in a useful manner. This is true of many other applications as well: the interaction model demos well but is implemented in such a random manner throughout the Apple branded apps on OS X as to be of marginal use at best.

The interaction model also imposes a hierarchy on the interface. In a complex app like Garageband or Xcode, a VoiceOver user needs to climb up and down a tree of embedded groups with which they must interact separately. Moving from a place in the interface buried deeply in one set of nested groups to another place buried in a different group requires a ton of keystrokes just for the navigation, which could be obviated with either native accelerator keystrokes or keystrokes added specifically for VoiceOver users.
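To see why this hierarchy hurts, consider a toy keystroke count: moving between controls in different nested groups costs roughly one “stop interacting” per level climbed, one VO-arrow press per sibling crossed, and one “interact” per level descended. The numbers below are illustrative, not real Garageband measurements.

```python
# Toy model of the navigation cost imposed by nested interaction
# groups. All counts are illustrative; real VoiceOver sessions vary.

def nested_group_keystrokes(levels_up: int,
                            siblings_crossed: int,
                            levels_down: int) -> int:
    """One keystroke per level climbed out of, per sibling stepped
    over at the common ancestor, and per level descended into."""
    return levels_up + siblings_crossed + levels_down

# e.g. climb out of 3 nested groups, arrow past 6 siblings, descend 3:
nested_cost = nested_group_keystrokes(3, 6, 3)  # 12 keystrokes
accelerator_cost = 1                            # one app-specific hotkey
print(nested_cost, accelerator_cost)  # 12 1
```

Even in this generous model, a single accelerator key replaces a dozen navigation keystrokes, and the gap grows with the depth of the group tree.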

It appears as if these groups and the interaction model were presented as an idea, included in VoiceOver and then ignored as the software matured. I do not believe that this interface model is incompatible with efficiency; I just think that it has only been partially implemented and needs much more work moving forward.

A Lack Of A Real Scripting Language

AppleScript is available but has so many restrictions that it is nearly useless as a scripting system for VoiceOver. First and foremost, it is very difficult to share AppleScripts with other users, as doing so requires copying the files individually and adding keystrokes separately on each system. It is also impossible to assign a non-global keystroke to an AppleScript, so application-specific ones are impossible as well. AppleScript cannot fire on UI events so, continuing with the Garageband examples, one cannot have a sound play only when an on-screen audio meter hits a certain level or some other interesting UI event happens. After many years of criticizing JAWS for having a scripting language while falling further and further behind in the functionality wars, GW Micro finally added a real scripting facility to Window-Eyes; it’s now time that Apple do the same for VoiceOver.
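For readers who have never seen a JAWS script, the sketch below shows the shape of the event-driven scripting facility being asked for. Everything here, the class, the decorator and the event names, is hypothetical; no such VoiceOver API exists, which is precisely the complaint.

```python
# Hypothetical sketch of an event-driven screen reader scripting
# facility, in the spirit of JAWS scripts. The class, decorator and
# event names are all made up for illustration.

class ScreenReaderEvents:
    """Toy event bus standing in for a real scripting engine."""
    def __init__(self):
        self.handlers = {}

    def on(self, event_name):
        """Decorator: register a handler for a named UI event."""
        def register(fn):
            self.handlers.setdefault(event_name, []).append(fn)
            return fn
        return register

    def fire(self, event_name, **payload):
        """Called by the engine when the UI event occurs."""
        for fn in self.handlers.get(event_name, []):
            fn(**payload)

vo = ScreenReaderEvents()

@vo.on("meter_level_changed")
def warn_on_clipping(level_db):
    # The Garageband example: play a sound only when a meter peaks.
    if level_db > -3.0:
        print("play earcon: clipping warning")

vo.fire("meter_level_changed", level_db=-1.5)   # triggers the earcon
vo.fire("meter_level_changed", level_db=-12.0)  # silent
```

The point of the sketch is the hook: a script that runs when something happens on screen, rather than only when the user presses a global keystroke, is exactly what AppleScript cannot provide today.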

Bryan Smart works for Dancing Dots, a company that makes CakeTalking, an impressive set of JAWS scripts that, among other things, provide access to the popular Sonar audio editing software on Windows. Why would people pay a lot of money for JAWS, a lot of money for the Dancing Dots scripts and a lot of money for Sonar when they can get Garageband, VoiceOver and a laptop all for the price of a Macintosh? Because they need to use Sonar efficiently, and Garageband, while an excellent choice for a novice, cannot be used efficiently by a VoiceOver user. Complex applications seem to need a scripting language to accommodate users as they grow increasingly proficient.

Syllables, Syllables, Syllables

As I wrote above, Ted Henter taught JAWS developers to count syllables whenever we added text to be spoken by JAWS. After running Yosemite for a few days, I changed my verbosity setting from “High” (the default) to “Medium” but still find that VoiceOver takes too much time to express some very simple ideas.

In Apple Mail, for instance, VoiceOver reads “reply was sent” instead of simply “replied,” which could save two syllables and the time spent on the whitespace separating the words. When I use CMD+TAB to leave my text editor for another app and then again to return, VoiceOver says, “space with applications TextEdit, Mail, Safari…” and lists all of the apps I have running, even if I hit CONTROL to tell VoiceOver to stop speaking. In TextEdit, where I’m writing this piece, if I type a quotation mark, instead of saying “quote” or some other single-syllable term, VoiceOver says “left double quotation mark,” enough syllables to fill a mouthful or more.
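A verbosity table of the kind being argued for is easy to imagine. The sketch below maps two of VoiceOver’s long announcements to terse equivalents and estimates the syllables saved, using a crude vowel-group counter as a stand-in for real syllabification.

```python
# Sketch of a verbosity table: map long announcements to terse
# equivalents and count syllables saved. The counter is a crude
# vowel-group heuristic, not a real syllabifier.

import re

def syllables(text: str) -> int:
    """Approximate syllables as vowel groups, at least one per word."""
    return sum(max(len(re.findall(r"[aeiouy]+", w)), 1)
               for w in text.lower().split())

TERSE = {
    "reply was sent": "replied",
    "left double quotation mark": "quote",
}

for verbose, terse in TERSE.items():
    saved = syllables(verbose) - syllables(terse)
    print(f"{verbose!r} -> {terse!r}: {saved} syllables saved")
```

Even this toy counter reproduces the “replied” example above: two syllables saved on every reply announcement, every time.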

I could go on. It seems that VoiceOver speech is overly verbose in far too many places to list. Whether a key label or a text augmentation, it is essential that the user hear as few syllables as possible in order to maximize efficiency.

Forcing A Keyboard Into A Mouse’s Job

Most blind people access general purpose computers using a keyboard and this is how I use my Macintosh, only rarely using the TrackPad. As I mention above, the VoiceOver UI is designed to mimic as closely as possible the on screen information. Quite sadly, a keyboard is not an efficient mouse or trackpad replacement.

The notion of drag and drop makes sense visually: the user “grabs” an object with the mouse or trackpad, drags it across the screen to its destination and then drops it by releasing the button on the pointing device. Using a keyboard to navigate by object until one arrives at the target destination is a hunt and peck process at best and far too cumbersome to use at worst. But there are Apple branded apps, including Garageband, that allow the user to interact with some features only with drag and drop, inserting a profound level of inefficiency into a VoiceOver user’s experience. Why not also allow cut, copy and paste as an alternative to drag and drop? Doing so would provide a UI metaphor that makes sense to a person driving a Macintosh with only a keyboard.

In Garageband, there are custom controls for moving the play head, selecting blocks of audio information, inserting and deleting blocks and so on. For a VoiceOver user to do these things, they must jump through weird UI hoops to force a keyboard to act like a mouse. Plain and simple, this could be corrected either by adding native keystrokes to Garageband or by allowing VoiceOver to be customized as extensively as JAWS or NVDA. In its current state, a blind person can use these features (along with similar ones in other Apple apps) but only with a great deal of superfluous keyboarding.

In short, though, using a keyboard to faithfully mimic what sighted users would do with the mouse is a poor idea in practice.


  • Apple, largely due to its iOS offerings, remains the leader in out-of-the-box accessibility. It is also true that accessibility on OS X has deteriorated from release to release and has major problems delivering information and permitting interaction in an efficient manner.
  • Both iOS/8 and Yosemite contain a lot of “stupid” bugs, defects that should have been discovered by automated testing and remedied with about a minute of effort typing some text into a dialog box.
  • Making VoiceOver on Macintosh into an efficient system will require changing some of its deeply held philosophical positions and I doubt this will ever actually happen.


Apple: The Company I Hate To Love, Part 2: Where’s the Competition?


Most of this article appeared first on this blog yesterday (10/14/14) under the title, “Apple: The Company I Hate To Love.” A number of our most loyal readers asked that I split that story up and start, as I did for Android, a series on the problems I perceive surrounding Apple and accessibility these days. So, being a blogger who tries to be responsive to his readers and fully understanding why they felt this should be a series, here’s part 2, in which I discuss the problems I’ve experienced since installing iOS/8 and my continued issues with the lack of competition in this space.


For years, both here and on my BlindConfidential blog in the past, I have railed against the lack of competition in the screen reader business. Years before systems like iOS, Android and Fire existed, I ranted about how GW Micro chose to take what I described as a “non-compete” strategy in the market battles between JAWS and Window-Eyes. I’ve demonstrated in these articles how the community of screen reader users was screwed in the end: once JAWS was allowed to reach a position of market dominance, FS was left without incentive to continue making JAWS great because, in reality, if the competition “sucks worse,” you remain the winner.

I have always and probably will always blame the lack of competition not on the winners nor on the consumers but, rather, squarely in the lap of the businesses who chose not to compete. It isn’t the fault of the JAWS developers that they built the best screen reader on the market back in those days.

Actually, on rethinking, I suppose that it is indeed my fault and that of Eric Damery that we elected to spend the development dollars to make software like Excel and PowerPoint not just demo well but be usable in real professional settings. It’s my fault and that of Glen Gordon that we didn’t take the then-broken MSAA approach to web accessibility but, rather, decided to invent the virtual buffer, the invention most blind people enjoy on Windows and, to a lesser extent, other platforms today. It’s definitely our fault personally, as we are bad people who did the awful thing: we made the best product out there and, as a result, achieved a monopoly position, a position Apple holds today in the mobile accessibility space.

It isn’t the fault of people who told the world to buy Apple products for being accessible and to eschew products whose accessibility remains poor. It isn’t my fault that Google makes a poor accessibility solution, that’s Google’s fault. I report on what I observe and I encourage people to buy the best and, today, in spite of the disappointing iOS/8 release, Apple remains the best, even if they may not be as good as they were in their previous release.

Buying an Android device today, purely if accessibility is the standard on which one makes the decision, is a really bad idea. Buying Android today doesn’t create competition but, rather, discourages it, as it tells the manufacturers “it’s ok to suck.” It also tells the leader that it can stop working: if users accept that crap, why should the best even consider for a second getting better? Competition will start in this space when there are two or more players who can claim what iOS/7 did, namely, 100% compatibility with their own accessibility API. As no mobile device other than those running iOS comes even close to iOS/8, defects and all, going to Android only tells Apple that it’s ok to suck even more, as we’ll buy this stuff just to not buy product from them.

I heard this exact same argument while at FS. People would say things like, “Sure, Window-Eyes is a poor alternative but I’ll use it just to promote competition.” How well did that work out? In those days, I was told by people at AFB that they refused to write a fair review of JAWS or Window-Eyes comparing the two, as they feared killing the competition between the screen readers. I railed very publicly, as an FS VP, against AccessWorld for saying that MAGic (the FS low vision software) was nearly as good as ZoomText, because it was not so. I found that sort of article entirely misleading for readers, as some, if they actually believed AccessWorld, might choose MAGic over its far superior competitor, and I vowed to work to drive MAGic to catch up (another of my personal failures). Promoting substandard solutions does not drive the leader to improve; it does the exact opposite and, as we saw with JAWS and Window-Eyes, a leader who isn’t pushed by its competitors will allow its technology to atrophy.

Can someone find me another industry where any consumers say, “I’m going to buy the crappy one, I’m going to reward them with my dollars just to encourage them to do better in the future?” No, of course not.

Fans Versus Consumers

It’s playoff time of year so my attention turns to baseball and I’ll use a baseball metaphor to describe what I consider to be the difference between a “fan” or “fanboy” if you prefer and a consumer.

Let’s say that you live in New York where you have a choice between two baseball teams, the Yankees and the Mets. Let’s add that, in this particular season, the Mets are a really terrific team and the Yankees a poor one. If the Yankees and the Mets are playing at the same time but, of course, in their separate stadiums, and you want to go to a baseball game, you need to choose whether to travel to the Bronx or out to Queens, that is, whether to pay to see the Yankees or to see the Mets.

If, in this case where the Mets are a superior product, you choose to go see the Yankees, you do so because you are a Yankee “fan” or “fanboy” if you prefer; if you choose to go to the Mets game, you are making a consumer based choice and buying the better product. If you think that buying a ticket for the Yankees will help them build a better team in the future, you are like the fans of the Chicago Cubs who haven’t won a World Series in more than a century, you are buying hope without reality.

This is, fundamentally, why winning teams draw large crowds and, in cities other than Boston, San Francisco or New York where money is so abundant, poorly performing teams draw poor attendance.

The iOS/8 Fiasco

I did not join the beta program to test iOS/8; I’m entirely unwilling to pay Apple $100 per year for the privilege of running broken, pre-release software. That is an effort for which individuals should be paid as quality assurance professionals, not something billion dollar corporations should enjoy as free labor from volunteers. All I can say, however, is that it appears as if Apple accessibility must have hired a QA person out of Google, as a number of glaringly obvious accessibility bugs, defects that were reported by people paying Apple for the right to report bugs, remain in the released version of the software. We’re not talking about obscure problems that require a lot of steps to reproduce or that may result from a strange and unpredictable combination of features/apps/hardware; rather, these are the really stupid bugs, the ones any automated testing process should have caught, and they are present in many areas of iOS/8.

So, it remains that iOS/7 is the all time out-of-the-box accessibility champion. As iOS/7 can no longer be purchased from Apple, this also means that the most accessible solution for mobile computing is now a thing of the past. We’ve regressed in iOS/8 and Apple must be taken to task for such. That iOS/8 is crappy, though, does not mean, “go out and get an Android device” as Android remains far worse. Apple set the gold standard in iOS/7 and, with iOS/8, has taken a step backward but remains, by far, the best accessibility solution for people with profound to total vision impairment.

I’ve spent most of the past month in a car traveling from the Boston area, where we spend our summers, to Florida, unpacking and then getting back in the car for a much shorter drive south to Palmetto, Florida, where I spent 25 days in guide dog school. As I was learning to work with a wonderful new dog, I didn’t have the time to do any serious testing of iOS/8 myself. Please do read the very comprehensive article on iOS/8 accessibility bugs on AppleVis if you need more details, as the problems I mention in this article are a subset of those I’ve personally experienced and not the result of a comprehensive survey. I tend to ignore AppleVis in general, as I find their editorial gist too soft on Apple, with too little criticism. This article, however, is pretty good and reflects much more of an effort than the item you are currently reading.

The Stupid Bugs

I am using the word “stupid” here specifically to describe obvious bugs that should have been caught by automated testing. These are the sorts of bugs that drive me crazy about Android accessibility, prompting the question, “How can you miss something as simple as putting an item into the tab order or adding a label to a button?” Testing for such should take no more than a few seconds of an automated tool telling the developer, “Hey stupid, you forgot the damned tab stop.” These are not bugs that require thought to remedy; they can mostly be handled with a tiny bit of typing. If, on iOS/7, a blind user installed everything that came out-of-the-box plus all no-cost iOS/7 apps carrying the Apple brand name, they would find more than a thousand total accessibility API tests that could be performed, of which all but a tiny fraction (10 or so) passed, giving iOS/7 a result of 100% when rounded to an integer. While this number is far worse on Android than on iOS/8, the new iOS offering certainly does not hit the 100% mark; it is probably still above the 90% level. Compared to Google, Microsoft, Amazon and Samsung, this is still the best score on the market today by at least 30 points but, as the newly introduced bugs are mostly “stupid” ones, the trend toward regression at Apple is alarming.
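The kind of automated check being described is genuinely trivial to write. The sketch below walks a toy UI tree and flags any focusable control that lacks a label or a place in the focus order; the element structure is a made-up stand-in, not Apple’s real accessibility API.

```python
# Toy automated check for the "stupid" accessibility bugs: walk a UI
# tree and flag focusable controls with no label or no focus-order
# position. The element format is invented for illustration.

def audit(element, failures=None):
    """Recursively collect accessibility failures in a UI tree."""
    if failures is None:
        failures = []
    if element.get("focusable") and not element.get("label"):
        failures.append(f"unlabeled control: {element['id']}")
    if element.get("focusable") and "tab_index" not in element:
        failures.append(f"not in focus order: {element['id']}")
    for child in element.get("children", []):
        audit(child, failures)
    return failures

ui = {"id": "root", "children": [
    {"id": "ok_button", "focusable": True, "label": "OK", "tab_index": 0},
    {"id": "mystery",   "focusable": True},  # no label, no tab stop
]}

for failure in audit(ui):
    print(failure)
# flags "mystery" twice: unlabeled and missing from the focus order
```

A check like this takes seconds per screen to run, which is why bugs it would catch are fair to call “stupid” when they ship in a release.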

The Apple Monopoly Position

I had a lot of time alone while at guide dog school; it’s largely a “hurry up and wait” experience, and while other students were training, I had a lot of time to think. What came to mind as I went from iOS/8 to 8.0.1 to 8.0.2 was also partly prompted by other students in the class and at another guide dog school where, coincidentally, my dearest friend was getting a new dog at the same time.

Of the 24 students in the two classes, 22 used at least one Apple device. All of these people, quite obviously, were also blind, but in no other way were they a narrow sample: they spanned an age range from around 20 to over 80 and a wide array of educational backgrounds, personal histories and so on. As a sample of adult blind people, this, while not scientific in any manner, was a fairly diverse group. The things we had in common were that we used iOS devices and handled guide dogs. Of the two who didn’t use iOS, one was an older woman who still used an old Nokia N82 with Talks and the other was an Android user who admittedly didn’t use the device for much more than making and accepting phone calls. So, 22 of 24 users had already moved to iOS.

This complete market dominance led me to think of JAWS in late 2002. By then, Freedom Scientific held a market share among new product sales of over 80%. We had achieved a monopoly position and, while I may rant and rave about such decisions on philosophical grounds, it would have been a seriously poor business decision to continue investing in JAWS as we had been; quite simply, the market demands caused by serious competition had disappeared. I say serious competition because, in theory, GW Micro and Window-Eyes “competed” with JAWS, and Android, also in theory only, competes with Apple in the accessibility space; they just do not make any serious effort at doing so. Thus, in the absence of competition, why should Apple do anything but wait for the others to catch up?

Don’t blame the Apple fanboys for creating this environment; they saw the best thing this community has ever seen out-of-the-box and rightfully celebrated the top thing on the market. The blame here falls directly in Google’s lap: TV Raman and his team produced a horrible solution riddled with the stupidest of bugs, and Google corporate policy doesn’t even require accessibility testing on anything the company makes. Don’t blame the monopoly position on Apple; they only did what this community asked them to do: namely, deliver a device 100% accessible to people with profound to total vision impairment. In the same way that blaming FS for the failures at GW is absurd, so is blaming Apple for what are solidly problems at the businesses that claim to compete with them.

I’m not blaming the users for buying the best thing; that’s, indeed, how competition works: two or more companies release similar products, consumers evaluate them and buy the one they prefer. Apple built a highly preferable system, or so the marketshare numbers tell us, and it was so profoundly preferable that virtually all blind consumers, as a function of competition, chose the system that best met their needs. If a large number of blind people were to suddenly abandon iOS in the hope that buying an Android device would “promote competition in the future,” they would miss the definition of competition because, on the day you buy the device, you have, by rewarding the manufacturer with your money, actually announced that the inferior option has won; you’ve given them the only prize a large corporation actually cares about.

No traditional market forces are at play in this situation and all I can say is that I really do hope that Peter Korn can bring some actual competition to this space.

How Does This Happen?

Something, I don’t know what, is different inside Apple these days. Maybe it’s the new CEO, maybe it’s something else, maybe they really did put a person out of Google in charge of accessibility QA; I don’t know. All I know is that no one seems to be minding the store. If the stupid bugs are starting to slip through, what can we expect next? I’m glad that iOS/8 has support for MathML and has added some other interesting new features but, overall, the release is unnecessarily sloppy.

Some of the most annoying bugs I’ve encountered have nothing to do with accessibility. One in particular: I hang up a call, another finger happens to tap a number on the keypad accidentally, and the tone from that number starts to play and does not stop. If this thing is called a phone, the one app that should work flawlessly is the one for using the phone, isn’t it? This doesn’t just happen to VO users; it’s a stupid bug that a lot of people are experiencing, having to entirely reboot the phone to get it working again. Really? A phone button gets stuck down? You guys didn’t think of testing that?

Other bugs, some related to accessibility, some not, seem so stupid that I can only wonder whether anyone at Apple tested for them or listened to beta testers at all, as I’m highly confident that most, if not all, of the most obvious bugs would have been caught there. As I wrote above, I’m not an iOS beta tester, so I’m running on assumptions here, but if they had as few as two blind people testing and reporting iOS/8 bugs, they’d have heard reports of most if not all of these problems and, as I wrote above, most of these could have been remedied in less than a minute each by anyone who can type.

What is it that seems, regarding accessibility at least, to have allowed Apple to think it can do such a sloppy release? In my mind, it’s the fault of their competitors’ refusal to make a credible solution at all. If everyone else sucks, they give the leader carte blanche to suck too. When Window-Eyes fell behind JAWS, GW Micro could have worked really hard to catch up, especially when, around the release of JAWS 7, it became very obvious to the general public that FS was working far less hard on JAWS than we had previously. If Google released an Android with an accessibility score even close to iOS/8 with all of its bugs included, it would present a true choice and would incentivize a lot of users to give it a try; in its current condition, Android is not “competition” but, rather, capitulation to Apple’s dominance.


Apple is doing something different and dangerous with its accessibility strategy. By choosing to release iOS/8 with so many glaringly obvious bugs, it has allowed accessibility regressions to vastly overshadow the release’s improvements. My personal conclusion is that this is the result of a failure by Apple’s competitors, most notably Google and Microsoft, to actually compete in this space. Apple released iOS/7 with a 100% accessibility API compatibility rating, the only out-of-the-box solution that has even tried to achieve such. Apple is still the clear leader in accessibility in the mobile computing arena but has proven that it can disappoint as well as surprise this community with its accessibility efforts.

I’m feeling tremendously discouraged. I’d love to be able to say, “Apple is blowing it, support one of their competitors,” but, in good faith, as iOS/8 is still substantially better in all areas of accessibility than anything from Google, Amazon or Microsoft, I’d be recommending an even worse solution. Apple and iOS/8 may suck, but they suck far less than the competition. I refuse to look at trend lines in this space, as they are historically unpredictable, but, based on both insider and public information, I think that MS and Amazon might be making a solid move forward in accessibility. Google has demonstrated a few promising signs (Chrome is more accessible on Android and Windows, and Google Docs seems to be catching up to Microsoft Office Online in accessibility) but we’ve heard so many promises from Google for so long that, with them, I take a wait-and-see attitude, ignoring all statements about the future that aren’t accompanied by actual functioning bits.

I still conclude that the fault for this lies entirely in the hands of Apple’s competitors. If Apple had someone knocking on the accessibility marketshare door, they might not be so cavalier in choosing which bugs to fix and which to force upon us as paying customers. As long as Apple can say, “we suck less,” they will continue to be allowed to suck further until they drop all the way down to the standards of their competition. If we, as blind consumers, accept a lower standard for accessibility, we are part of the problem, not part of the solution.