Introduction
Since 2009, WebAIM, a consulting company focused on Internet accessibility, has published a roughly annual report detailing the results of a survey it conducts on screen reader marketshare and the preferences of screen reader users. Recently, WebAIM published its “Screen Reader Survey #6” results, and I found the information compelling.
As a matter of full disclosure, I want to state up front that I find the nearly annual release of the WebAIM data to be one of the highlights of my accessibility calendar. I love numbers and, while the WebAIM survey has some major flaws, it is by far the best data available to us regarding the questions it covers. I will highlight the problems with the survey in the next section. Because this information was gathered by the same people using the same methods from year to year, any specific number in the WebAIM report should be viewed with a high degree of skepticism, but the trend lines we can derive from reading all six of the reports in chronological order provide us professionals with the most useful data available. As long as Freedom Scientific, AI Squared and the other third party commercial vendors of these technologies insist on keeping their sales figures secret, the WebAIM data is the best we have, and our entire community should be grateful to our friends at WebAIM for doing the work to bring us this valuable information.
This article takes a look at two of the important questions asked in the WebAIM survey: which screen reader respondents use primarily and which screen readers they use commonly. I’m focusing exclusively on general purpose computers and will write a separate piece on the mobile computing results at some point in the future.
Problems With The WebAIM Survey
Before I leap into the actual results published in the WebAIM survey, I want to illustrate some of the problems with this data and why any specific number in the WebAIM survey is dubious at best. Readers should understand that, while the WebAIM numbers are the best information we have, the survey methods introduce biases into the data and, as far as I can tell, make it difficult or impossible to publish important statistics like margin of error and standard deviation, two numbers that would be incredibly useful to anyone analyzing the data.
The WebAIM Survey Is Self-Selecting
The first problem with the WebAIM survey is that its subjects are volunteers. Everyone who filled out the WebAIM survey made a personal choice to do so, which biases the sample toward individuals motivated enough to take the time to go to the web page containing the survey, fill in the information and hit the button to submit their entry to WebAIM.
Self-selection tells us nothing about the participants other than that they are inclined to fill out surveys. It could mean the participants are the sorts of individuals who goof off at work, reading web sites and filling in surveys instead of doing their jobs; it could mean that the people who took the survey were highly motivated to pump up the results of their favorite screen access tool; it could even mean that one or more of the screen reader vendors ran a quiet campaign to get their users to take the survey to make their numbers look better. In brief, we can accept that a self-selecting survey definitely introduces some kind of bias, but we cannot determine what that bias might be and, hence, cannot correct for it in our analysis.
The WebAIM Survey Is English Only
The WebAIM screen reader survey is published only in English. If one looks at the table in the sixth set of results published by WebAIM, one will see that this causes a disproportionate number of the respondents to come from English speaking or bi-lingual locales. If we add up the percentages of respondents from North America, Europe/UK and Australia/New Zealand, all areas where English is either the primary language or a very popular second one, we find that roughly 90% of the WebAIM information comes from participants who reside in these locales. If one googles the median income in these regions (approximately identical numbers are available from the IMF, World Bank, United Nations and elsewhere), one will find these areas to have among the highest levels of personal income in the world. Hence, the WebAIM data has a bias toward relatively affluent locales, which may skew results away from free and no cost screen readers.
A secondary effect of the bias toward Europe, North America and Australia/New Zealand is that the nations in these regions are also more likely to subsidize access technology or cover it under their national health care programs (these services seem to vary wildly as one crosses national boundaries). This, too, may bias the data against free or no cost solutions, as end users who are sheltered from the high prices have no concrete motivation to find lower cost alternatives.
WebAIM Is An Internet Only Survey
As the WebAIM survey is conducted online only, it favors individuals who are confident and comfortable using the Internet, and it is further biased toward the screen reader users who will ever hear, probably through social media, that the survey is being conducted. It’s my personal belief that this causes a bias toward younger users and may miss the growing number of elders using access technology, especially on mobile platforms. There’s a fair amount of anecdotal information showing a strong preference among seniors with low vision for the Amazon Kindle Fire HDX tablets and among seniors with profound to total vision impairment for iOS devices. Thus, I doubt that a properly proportional number of people over the age of 65 are represented in the WebAIM data. I think it would be good if WebAIM added an age category to its survey in the future, as this is an important demographic point absent from the current results.
The WebAIM Numbers “At A Glance”
Given the problems identified in the previous section, let’s take a big picture view of some of the WebAIM numbers and see what we may be able to conclude from such.
Screen Readers Used
WebAIM does something interesting by asking two separate questions regarding which screen readers one uses. Specifically, they ask which is one’s primary screen reader and which screen readers they use commonly. Anyone who follows the “#a11y” and “#accessibility” hash tags on Twitter in the days immediately following the publication of the sixth set of results from WebAIM will also have observed that these were the numbers that lit up the discussion like a Christmas tree during that week.
Primary Screen Reader Used
If I were asked this question, it would be impossible for me to select a single screen reader that I use most. It’s only 8 AM on a Sunday morning and I’ve already used three screen readers (VoiceOver on iOS, VoiceOver on OS X and NVDA on Windows). Because I switch back and forth between Macintosh and Windows a few times per day, it would be nearly impossible to tell you with any degree of certainty which is my “primary” one. But, as this question has been asked in all six of the nearly annual reports from WebAIM, it’s a good one to look to in order to observe trends.
The Numbers Over Time
The following table contains the results published for “primary screen reader” used in all six of the WebAIM surveys. I want to remind readers that, as there is no published margin of error for the WebAIM statistics, these numbers should be viewed as far from precise. A relatively small change, less than 5 percentage points or so, from one survey to the next is only interesting when viewed as part of a multi-year trend. As this survey has no known controls, a 5% fudge factor feels right to me but, if I’m terribly wrong, please do correct me in the comments section as serious statistical analysis isn’t my strong suit. Do assume that small changes in the numbers from one survey to the next are dubious at best: while we don’t have a precise margin of error, we should assume that one exists (the sketch after the table shows how such a margin would ordinarily be computed).
| Screen Reader | January 2009 | October 2009 | December 2010 | May 2012 | January 2014 | July 2015 |
|---|---|---|---|---|---|---|
| JAWS | 74% | 66.4% | 59.2% | 49.1% | 50.2% | 30.2% |
| Window-Eyes | 23% | 10.4% | 11.2% | 12.3% | 6.7% | 20.7% |
| NVDA | 8% | 2.9% | 8.6% | 13.7% | 18.6% | 14.6% |
| VoiceOver | 6% | 8.9% | 9.8% | 9.2% | 10.3% | 7.6% |
| SystemAccess | N/A | 4.9% | 4.7% | 10.4% | 7.7% | 1.5% |
| ZoomText | N/A | 2.6% | 3.3% | 2.8% | 1.3% | 22.2% |
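As promised above, here is a minimal sketch of how a 95% margin of error on any one of these shares would be computed, assuming the simple random sampling that a self-selecting survey does not actually have. The respondent count of 1,500 is a hypothetical placeholder, not a figure from any WebAIM report.

```python
import math

def margin_of_error(share_pct, respondents, z=1.96):
    """95% confidence half-width, in percentage points, for a reported share."""
    p = share_pct / 100.0
    return 100.0 * z * math.sqrt(p * (1.0 - p) / respondents)

# e.g., a 30.2% share with a hypothetical 1,500 respondents:
print(round(margin_of_error(30.2, 1500), 1))  # ~2.3 points
```

Even under these generous assumptions, shares a couple of points apart are indistinguishable, which is part of why I prefer the cruder 5-point fudge factor for an uncontrolled survey like this one.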
JAWS Remains On Top, Barely
The WebAIM numbers describing JAWS are perhaps the most interesting of the bunch as they show a steady decline in share from a high of 74% in the January 2009 survey to just over 30% in the July 2015 data. In every period but one (May 2012 to January 2014, where it remained essentially flat), the percentage of people reporting JAWS as their primary screen reader dropped, by roughly 7 to 10 points between each of the earlier surveys and by a full 20 points in the most recent one. In total, the JAWS share reported by the WebAIM survey participants dropped by nearly 45 points in roughly six and a half years.
Recently, we’ve seen Freedom Scientific take some experimental steps to regain its once dominant share. At the NFB convention this past summer, for instance, FS tried selling the approximately $800 JAWS single user, personal license for $75. Clearly, FS is feeling the heat and, as JAWS sales are nearly a pure profit center for the company, it may be preparing to counter the oncoming Window-Eyes, NVDA and others on price. I’m not smart enough to predict how this will unfold but it’s certainly going to be interesting to read and write about the developments when and if they happen.
Window-Eyes Rebounds?
For roughly a year and a half now, anyone with a licensed copy of Microsoft Office has been able to download and use Window-Eyes at no cost. Prior to this offer, as one can see in the table, Window-Eyes showed a steady decline in use as a primary screen reader, dropping from a high of just over 23% in the first WebAIM survey in 2009 to a relatively insignificant seven percent or so in 2014. Then, we see a leap in share back to slightly over 20% that correlates with the period in which individuals and businesses with Office licenses (virtually every business and organization that uses the Windows OS) could get it at no charge. While I predicted that a no cost Window-Eyes had arrived too late to matter, I appear to have been wrong in that assertion. With a share over 20%, AI Squared has made Window-Eyes into a true contender again.
The Window-Eyes number, though, also raises a question about another detail published in the WebAIM Survey Results #6, specifically, the question, “How did you obtain your primary screen reader?” The answer, “I downloaded it free of charge from the Internet,” got just over 17% of the responses. NVDA has always been free and Window-Eyes now comes at no cost. If we add the NVDA share to that of Window-Eyes, we get roughly 35% and, assuming that all NVDA users got their copy for free, this would imply that somewhere around 85 to 90 percent of the Window-Eyes users surveyed in fact paid for their copy when they just as easily could have downloaded it at no cost to themselves or their employer (the quick calculation below makes this explicit). I think this discrepancy shows us the flaws in this kind of survey: when the participants are self-selecting and we cannot establish a solid baseline from a control group, we find ourselves scratching our heads when results that we think should correlate do not.
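A back-of-the-envelope sketch of that arithmetic, using the rounded shares quoted above; with these inputs it lands near 88%, and the unrounded survey values would pull it toward the 85% end of the range.

```python
# All figures are the approximate, rounded shares quoted in the text.
nvda = 14.6          # NVDA primary share, July 2015
window_eyes = 20.7   # Window-Eyes primary share, July 2015
free_download = 17.0 # "downloaded it free of charge", just over 17%

# If every NVDA user downloaded their copy at no charge, only the leftover
# free downloads can possibly be Window-Eyes users.
free_window_eyes = max(free_download - nvda, 0.0)
paid_fraction = (window_eyes - free_window_eyes) / window_eyes
print(f"Roughly {paid_fraction:.0%} of Window-Eyes users apparently paid")
```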
VoiceOver Remains Flat
The question WebAIM asks regarding primary screen reader includes the phrase “on your desktop or laptop computer” so the wording of the question attempts to exclude mobile devices from the survey. Thus, I’ll assume that those who responded by saying that VO was their primary screen reader use it on Macintosh and reserve iOS responses for questions specific to mobile platforms. I may be wrong but this is the assumption I’m making based on the text of the question asked.
If we accept my 5% fudge factor as the margin of error, all we can conclude about VoiceOver is that it jumped out and grabbed a small share early on and has hovered at just under 10% of the population for the entire six and a half years covered by this survey. Anecdotally, this agrees with my personal observations: a bunch of blind people ran out and got Macintoshes when they first became accessible, but that number has remained more or less constant throughout the period.
NVDA Seems To Slump
If we include the 5% fudge factor, we can see that NVDA grew to a solid position but has since stagnated or possibly dropped a bit in share. It’s possible that one might choose a no cost Window-Eyes over a free NVDA, possible that the publicity surrounding the no cost Window-Eyes swamped NVDA’s visibility and possible that, because this is a self-selecting survey, NVDA users were simply less motivated to fill out and submit the form.
The ZoomText Factor
In the first WebAIM screen reader survey, ZoomText, if measured at all, fell into the “Other” category and, in all subsequent surveys prior to the most recent, it polled only in the single digits. Then, with the publication of the sixth WebAIM survey, ZoomText jumps to a share greater than 20%. I can only assume that a jump of roughly 20 points is the result of a change in how the WebAIM survey was conducted and not an actual enormous change in user preferences. It’s probable that more people with low vision, the target audience for ZoomText, participated in this year’s survey than ever before and have therefore skewed the results, even with regard to following the trend lines.
Viewing the data over time provides no reason that ZoomText would have leapt in marketshare other than a larger number of people with low vision participating than in the past. Blindness is certainly a spectrum, and the definition of “legal blindness” falls well within the range of vision a ZoomText user might have. For statistical purity, I wish ZoomText and other software primarily used for magnification had been split out into a separate question. One interesting thought, though, is that many people using ZoomText have degenerative disorders and are moving along the spectrum from magnification to speech, and ZT, in its fancier versions, includes both. When asked whether a person should use a screen reader, a magnifier or both together, we at FS used 8X magnification as the cut-off: people who need 8X or greater should use JAWS; users who run at less than 8X magnification should use MAGic or ZoomText. This was a rule of thumb (sketched as code below) and, as far as I can tell, has no science to support it. So, maybe the WebAIM survey is reaching more people at the place on the spectrum where ZoomText still provides a lot of value, or maybe WebAIM is finding people who are struggling to get by with an AT less suited to their needs than one of the more comprehensive screen readers.
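The old FS rule of thumb, restated as code purely for illustration; as noted, the 8X cut-off had no science behind it, and the function name and wording here are my own invention.

```python
def recommend_access_technology(magnification: float) -> str:
    """Crude FS-era heuristic: 8X magnification was the cut-off."""
    if magnification >= 8.0:
        return "screen reader (JAWS, in the FS formulation)"
    return "magnifier such as MAGic or ZoomText, possibly with speech"

print(recommend_access_technology(10.0))  # screen reader
print(recommend_access_technology(4.0))   # magnifier, possibly with speech
```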
Conclusions About Primary Screen Reader Numbers
By observing the trends in the table above, we see that JAWS, the most expensive screen reader on the market, lost more than half of its share in the roughly six and a half years covered by the six WebAIM surveys. In January 2009, only 26% of survey respondents identified any screen reader other than JAWS as the one they use primarily; by July 2015, approximately 70% identified a different screen reader as their primary one. In January 2009, 14% of participants identified a free or no cost screen reader (NVDA or VoiceOver) as their primary one; in July 2015, roughly 43% identified screen readers available for free or at no cost (NVDA, VoiceOver and Window-Eyes) as their first choice.
I believe that these changes tell us one thing for certain: price matters. As I wrote above, we can’t be certain why Window-Eyes saw such growth from year to year, especially because its growth does not correlate with the number of people who say they downloaded their screen reader at no charge from the Internet.
Other conclusions might be that Window-Eyes and NVDA are catching up to JAWS in terms of feature set or, conversely, that JAWS has deteriorated to a point at which its competitors provide equal or better accessibility in some to many areas. VoiceOver is certainly a factor: it comes at no extra cost with a Macintosh but, in the WebAIM survey results, seems to have leveled out at just under 10% of the market.
Now, Let’s Add The Major Chaos
WebAIM, in addition to publishing the data discussed above regarding which screen reader participants identify as their primary one, also publishes a table describing which screen readers people commonly employ, and that table shows that most of us screen reader users tend to use more than one. This correlates with my personal experience: on any given day, I will use VoiceOver on iOS, VoiceOver on Macintosh, NVDA on Windows 8.x, Orca on a GNOME system and, rarely but not never, Speakup in a text based GNU/Linux distribution.
Commonly Used Screen Readers Over Time
Beginning with their second survey (October 2009), the WebAIM team has asked the question, “Which of the following screen readers do you commonly use, check all that apply?” In the following table, you will see that this data is far more chaotic than the information on primary screen readers and leaves the reader less able to draw any hard and fast conclusions about screen reader usage.
| Screen Reader | January 2009 | October 2009 | December 2010 | May 2012 | January 2014 | July 2015 |
|---|---|---|---|---|---|---|
| JAWS | N/A | 75.2% | 69.6% | 63.7% | 63.9% | 43.7% |
| Window-Eyes | N/A | 23.5% | 19.0% | 20.7% | 13.9% | 29.6% |
| NVDA | N/A | 25.6% | 34.8% | 43.0% | 51.2% | 41.4% |
| VoiceOver | N/A | 14.6% | 20.2% | 30.7% | 36.8% | 30.9% |
| SystemAccess | N/A | 22.3% | 16.2% | 22.1% | 26.2% | 6.9% |
| ZoomText | N/A | 7.5% | 3.3% | 6.8% | 5.3% | 27.5% |
If we add or subtract my 5% fudge factor, this table shows that virtually all of the screen readers mentioned in the survey (excepting SystemAccess) have been moving toward parity over the past few years. Using the fudge factor as a margin of error, JAWS, Window-Eyes, NVDA, VoiceOver and ZoomText fall into a statistical clump (the intervals printed by the sketch below make this easy to see). These numbers also vary more from survey to survey than the primary screen reader results do and, rather than trying to tease a trend out of them, I’ll attribute the fluctuations to problems with the sampling techniques, though I will definitely accept that Window-Eyes use seems to have jumped by a larger amount than statistical biases and sampling error can explain.
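A quick sketch that attaches my ±5 point fudge factor to the July 2015 “commonly used” column from the table above. The printed intervals show two heavily overlapping groups, JAWS with NVDA at the top and Window-Eyes, VoiceOver and ZoomText just below, that very nearly touch each other, with SystemAccess clearly apart; the fudge factor is my own rough device, not a real confidence interval.

```python
# July 2015 "commonly used" shares, copied from the table above.
shares = {
    "JAWS": 43.7, "NVDA": 41.4, "VoiceOver": 30.9,
    "Window-Eyes": 29.6, "ZoomText": 27.5, "SystemAccess": 6.9,
}
FUDGE = 5.0  # my 5-point fudge factor, in percentage points

for name, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} {pct - FUDGE:5.1f} .. {pct + FUDGE:5.1f}")
```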
Why Do We Use So Many Screen Readers?
With five screen readers appearing to converge in a dead heat in these numbers, one must ask why this is the case. In the WebAIM Screen Reader Survey #2, 62% of respondents identified themselves as using more than one screen reader, a number that comes in at 53% in the sixth of these surveys. Once again, adjusted for a margin of error, we’re observing little change over the six and a half year period studied in this series of reports. Plain and simple: roughly half of all screen reader users employ more than one. With so many blind people using mobile devices, it’s obvious that one would need two screen readers, one for the phone or tablet and one for the general purpose computer, but the WebAIM numbers show that our community tends to use more than one screen reader on Windows alone, possibly because some outperform others in different use cases.
The two screen readers that appear to have jumped in this table, Window-Eyes and ZoomText, can probably be explained by Window-Eyes becoming available at no cost and by a larger number of ZoomText users participating in the sixth survey. In truth, we can conclude little from these numbers.
The Big Chaos This Causes
Both of the tables above show something very similar: the marketshares of a number of different screen readers are converging. This is more obvious in the “commonly used” table where, if we adjust for errors in sampling and technique, we see five different products, four of which are traditional screen readers and one of which is also a magnifier, with very similar levels of common use. As one who has screamed about the lack of competition in this field for years, I must say that the end of the JAWS monopoly position and greater diversity in the products used by people with vision impairment pleases me at some level. But, as an accessibility professional, I must also say that I find this convergence alarming.
Our little company, 3 Mouse Technology, earns much of its income by testing web sites and other technologies for accessibility. We usually perform this kind of testing in two phases: automated and hands on. The first pass uses automated web accessibility testing tools like Karl Groves’ Tenon.io or Deque Systems’ aXe. These automated testing tools and their competitors take the HTML for a given web page and generate a report based on the various standards and guidelines, the most objective testing possible. Unfortunately, in their current state, an automated testing tool can describe with great precision what is right and what is wrong in the HTML but cannot inform us about the user’s experience; hence, the hands on testing.
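For readers unfamiliar with how the automated pass works, here is a deliberately generic sketch. The endpoint, API key and response fields below are invented for illustration and are not the real Tenon.io or aXe interfaces; consult those projects’ documentation for actual usage.

```python
import requests

API_URL = "https://a11y-checker.example.com/api/test"  # hypothetical endpoint
API_KEY = "your-api-key"                               # hypothetical credential

def automated_pass(page_url: str) -> list:
    """Submit a page and return its machine-reported standards violations."""
    response = requests.post(API_URL, data={"key": API_KEY, "url": page_url})
    response.raise_for_status()
    return response.json().get("violations", [])

# Each reported violation is precise about the HTML but silent about the
# lived user experience -- which is exactly why the hands-on pass exists.
for violation in automated_pass("https://example.com/"):
    print(violation)
```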
To perform the “hands on,” user centric kind of testing that ensures true accessibility beyond simple compliance with standards, our team, after seeing the problems detected in the automated tests remediated, goes over every element on the web site using “popular” screen access tools. Thus far, our clients have only ever asked and paid us to test with JAWS and IE, NVDA and Firefox, ZoomText with IE and VoiceOver on iOS with Safari. If, based on these survey results, clients start asking us to also test with Window-Eyes and/or VoiceOver on OS X, we’ll bill more hours and earn more money but, at the same time, the user side of accessibility becomes more expensive for our clients and, hence, it becomes less likely that said clients will find the motivation to do accessibility at all. Remember, money talks; accessibility walks.
A More Complex Case
Our little company and the other businesses like WebAIM, PAC, Deque Systems, TPG and the rest can only be positively affected by greater diversity in screen reader usage. As I said above, the more hands on testing we are asked to do, the more hours we bill and the more money we earn. But what happens in larger, more complex companies and government agencies when this level of chaos is inserted into the mix?
Let’s use a large US government agency as an example and say it develops and buys educational technologies and content for its clients. Let’s say it also purchases a lot of third party software for its employees and clients to use. If we’re looking at a really huge agency, such as the Social Security Administration (SSA) or the Veterans Administration (VA), which are required by Section 508 to purchase and publish only technology that is “fully accessible,” we’re looking at a tremendous amount of taxpayer funded work in the agency’s procurement compliance areas just to test the technologies for accessibility.
These agencies employ hundreds of blind people and serve millions of Americans through their different programs. They must work with literally hundreds of technology companies, each with its own unique process for effecting accessibility, while some of them have never considered accessibility at all. Now, let’s say that, because accessibility to educational and other governmental information is so incredibly important, NFB is threatening the agency with lawsuits over its products, and the vendors from which it acquires technology are under international regulatory threats that could prohibit their products from being sold to government agencies, including school systems, in many locales around the world. How does an agency like this address its accessibility concerns?
To make the arithmetic easy, let’s say that our government agency works with 100 technology vendors, each with 10 products that must be made accessible as soon as possible. That’s 1,000 separate software products that are largely incompatible with each other, that run on multiple operating systems and, as we’re talking about government and education here, are subject to the highest possible standards for accessibility. Now, let’s say that each screen reader considered popular enough to be included in a test plan has 50 features apropos to the app or web site being tested: we now have something on the order of 50,000 separate tests to perform, per screen reader, just to determine whether the software our agency needs to acquire is both compliant with standards and usable with the most popular screen readers.
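The arithmetic from that paragraph, made explicit; the counts are the deliberately round numbers from the text, not real procurement data.

```python
vendors = 100
products_per_vendor = 10
features_per_screen_reader = 50
screen_readers_in_plan = 5  # the five products converging in the tables above

products = vendors * products_per_vendor                  # 1,000 products
tests_per_screen_reader = products * features_per_screen_reader
print(tests_per_screen_reader)                            # 50,000 tests
# And that is per screen reader; the full matrix is five times larger:
print(tests_per_screen_reader * screen_readers_in_plan)   # 250,000 tests
```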
Putting this problem into perspective, an agency like the VA publishes literally thousands of technologies used for training its clients. Because many of the VA’s clients have suffered major injuries in combat, and because it therefore serves a disproportionately large number of people with disabilities, these training technologies need to have bulletproof accessibility. The Veterans Administration cannot tell veterans with disabilities what AT they can and cannot use. Thus, the people working at the VA help desk receive calls about accessibility problems in its public facing technologies from people using a panoply of different AT, and the agency is badly burdened by any lack of compliance with standards.
Standards compliance has a major flaw: one can be entirely compliant with standards and guidelines and still not be accessible, due to bugs and, indeed, “features” in some screen readers, in the different browsers and in the different operating systems on which one’s software or web site needs to run. A long time ago, while I was still at FS, we looked at the standards apropos to our work, found that WCAG 1.0 and MSAA 2.0 were inadequate, and invented a number of our own techniques to work around limitations in the standards and around the products and web sites that didn’t even try to comply. This provided JAWS users with an experience vastly superior to that of its competitors, but it also created the myth that “it’s accessible if it works with JAWS,” a notion that has held back growth in other screen readers and, worse, led developers to ignore the official standards in favor of testing only against JAWS.

This wasn’t the worst case scenario when 80% or more of all screen reader users chose JAWS but, with a five way convergence in share, it’s simply absurd for us activists, including NFB, to ask for anything more than strict compliance with the generally accepted standards and best practices for accessibility. Requiring a huge company or government agency to test against five different vision related products, on top of all of the AT required for other disabilities, only so that it can work around defects in the AT and in the browsers and operating systems on which they sit, is an entirely undue burden. The solution is to force the AT vendors either to come into compliance with the standards and the user agent guidelines or, based on objective measures of exactly how compliant they are, to ban their use in governmental installations. If NFB wants to sue companies over problems in accessibility, perhaps it should start with the user agents employed by its membership and take on FS and AI Squared for the aspects of the standards they choose to ignore.
Yes, my loyal readers, standards are important and, given the chaos in the vision impairment AT alone, standards are the most we can expect a big company or government agency to provide. I’d encourage all developers to also do user experience testing but, even then, only do so with screen readers and other user agents that best adhere to the standards and ignore the others as they try to catch up.
Conclusions
While this article illustrates some of the flaws in the WebAIM data, I’d like to repeat my appreciation to WebAIM for doing this work. I’d like to see a couple of things added (the age demographic, for instance), I’d like to see mobile technologies better separated from general purpose computing and I’d like to see the survey performed in more languages. Sadly, while adding an age question to the survey would be easy for WebAIM, going multi-lingual would be a lot of work and, as the survey doesn’t generate revenue for WebAIM, I don’t think it’s even fair for me to ask them to do the extra work.
We can conclude from the statistics analyzed in this article that we’re living in “interesting times.” The convergence in commonly used screen readers is so cloudy that choosing which to include in one’s test plan has become very difficult; including all of them would be nice, but it’s also expensive. Thus, I can only conclude that we need to adhere even more strictly to published standards and best practices and hold all technology vendors, including those who make access technology, to them.
Amanda Rush says
Not sure if this skews the data even further, but AISquared also pimped out this survey heavily, both on Twitter and to their email lists, and that causes me to wonder whether or not the Window-Eyes and ZoomText numbers are even more artificially inflated.
S. Massy says
This makes for a great read and is spot-on in most aspects.
Regarding the self-selecting nature of the survey, I would go further and say that the number of people who know of the survey and, therefore, who are likely to take it, is further restricted by how few people are aware of it beyond a highly concentrated subsection of the blind population. If I weren’t on a number of accessibility-related lists for FLOSS projects and didn’t follow a couple of leading accessibility figures on Twitter, I would never have been made aware that the survey was available. To me, this reinforces the suspicion you voice that the survey is sampling in large part from people who are likely both highly technically literate and highly interested in accessibility to begin with. I’d be willing to wager that, if you sampled average Joes as opposed to the crowd mentioned just now, you would find a higher proportion of JAWS and, possibly, VoiceOver users than what the survey found. There are loads of people out there who have never used but one screen-reader and will never use another, no matter how handicapped this makes them. How, then, do we get average blind Janes and Joes to fill out a survey like this? First, we need to enlist support from organisations like the NFB, CNIB, RNIB and others to spread the word so that it reaches people living outside the tech circle; it’s really a shame that they don’t do more to get their members involved and thus empower them. Secondly, however childish and trivial it sounds, there probably needs to be a chance or two to win an iPod or similar to entice otherwise uninterested folk to take the time and fill this survey.
I followed you through most of the reasoning throughout this article nodding merrily along, but, there was a point near the end where I got somewhat uneasy.
I agree with you that it would be quite unreasonable to expect every software vendor and web service provider to ensure 100% accessibility on every platform and AT combination. Clearly, doing so would be shooting ourselves collectively in the foot. Compliance with standards and best practices is the most we can expect and, as you know all too well, even that will only ever likely happen when hens grow teeth and flying pigs become the standard guide-dog. I also agree that AT vendors have a similar responsibility to adhere to standards and provide the best interaction model they can. Where I sharply disagree is when you suggest that the NFB might do well to sue those AT vendors. What about free software AT like NVDA or Orca? Should the GNOME Foundation and NV Access be sued? Very well, so, say we exempt providers who do not charge for their products and only sue the big boys, we’re still creating a model where only certain products would be expected to be fully usable and, in the long run, curtailing this burgeoning choice from which our community benefits. It seems to me that, before the NFB engages in a legal battle against Freedom Scientific or any other big corporation likely to cost hundreds of thousands of dollars, it would do well to consider expending some of that money on paid development time for projects like NVDA or Orca, or yet paying someone to test and file bugs with open projects like Mozilla or even closed companies like Apple and Google. Sinking money into legal battles will only ever benefit a few, while real investment in FLOSS projects or even into the ecosystem at large benefits the whole world and provides a legacy which we own and on which we can build.
Will Pearson says
I think that the process that was used to recruit participants may have biased the data as well. The main method of recruiting participants was through announcements on websites, web forums, email groups, and social media related to sight loss and accessibility from what I can gather. This likely means that the survey has little or no coverage outside of people who use those communication channels or engage in those communities. This might not be an issue as there may not be a difference in screen reader usage patterns between that population and the wider screen reader user population but we can’t know for sure.
Will Pearson says
I also think that inferring trends over time from the survey might be unreliable due to changes in coverage of the population that may have occurred because of the recruitment process. The segments of the population that were aware of the survey may have changed from year to year. This may have led to differences in how the survey sample reflects the general population in different years. This makes comparing the results obtained in different years unreliable.