Beyond maturity assessments: Proactive measures to gauge the success of government accessibility programs
We don’t have the time. We don’t have the money. We don’t have the expertise. We don’t have a usability lab. We wouldn’t know how to interpret the results.—Steve Krug, ‘The Top five plausible excuses for not testing web sites’ in Don’t Make Me Think (2000), p. 145
This guide introduces the need to add usability testing with people with disabilities to government accessibility programs. Traditional programs have focused on the functional/operability testing that is legally mandated. While usability testing isn’t legally mandated, it is essential for discovering how successful accessibility programs are at meeting the needs of end users who have disabilities (employees and members of the public). Step-by-step guidance is offered on how to establish a new usability testing initiative.
This article was developed as part of
The Accessibility Switchboard Project
from the
National Federation of the Blind Jernigan Institute
June 2018, Version 1.0
Creative Commons License: CC BY-SA 4.0
Introduction and Background
Current ‘measures of success’
How well is your accessibility program meeting the needs of its intended target audience? Historically, two main measures have been taken, and neither answers that question.
The first measure that has been taken is the ‘organizational maturity assessment’. This is designed to gather information on how well the accessibility program is operating. A series of questions and data-gathering exercises establishes whether accessibility has been addressed at each level, or not at all, yielding data points such as “We are just starting out on x (Level 1); we have resources applied to y (Level 2); and we’re succeeding in z (Level 3).” The x, y, and z might be policy, website accessibility, and training. In practice, many such measures of a program are combined to determine ‘how well we’re doing’. In the 2000s, the US Department of Justice would infrequently gather such data by asking other government departments to assess their progress, and then publish the responses as survey results. In the following decade, the Office of Management and Budget (OMB) required departments to conduct maturity assessments and report back to OMB. Although this information is now collected twice a year, OMB does not publish the results. So, the US government supposedly knows how its accessibility programs are doing, but even though that sort of information is valuable for managing and implementing programs, it still doesn’t tell the department (or the public) how well those programs are meeting the needs of their target audience. The target audience is never addressed in any program maturity assessment metric. (For more on this topic, see our article
How can I assess organizational maturity with respect to accessibility?)
The second measure, historically, has been the number of complaints from employees. It was very typical, when asked how a government accessibility program was going, to hear a response like “Terrific! We have had no complaints!”. Again with reference to the US Federal Government, a legal requirement exists to track and address complaints. However, the typical response shows that program managers often mistakenly assume that a lack of complaints equates to satisfaction among end users. It is a mistaken assumption because the likelihood of complaints from the population as a whole is incredibly low, and the likelihood of complaints from people with disabilities is lower still. This makes logical sense when you consider that people with disabilities are the most under-represented marginalized group in society in terms of employment numbers: if you are lucky enough to secure employment, why make waves? So, again, the number of complaints you receive is no measure of how well your program is meeting the needs of its intended target audience. (For a more in-depth discussion of complaints, see our companion guide
Beyond offering employees a complaint process: proactive measures to tackle accessibility issues)
The internal measures of program maturity and complaints don’t provide us with good measures of success. How do we know there are problems, though? The easiest answer is to point to the plethora of published studies on how well government websites perform when tested for accessibility; the results are usually shockingly poor.
Functional/Operable does not equal Usable
The field of usability (a.k.a. human factors and ergonomics) grew out of a need to address problems with technology that functioned well, but couldn’t be effectively used by people. Military aviation was a particularly fertile ground for research and development of usability techniques. Post WWII, the jet age meant more and more functions could be added to aircraft, and even though these new functional elements could be tested on the ground, pilots under stress when something went wrong would press the wrong button at the wrong time, or pull a lever the wrong way, while trying to avoid a crash. As computing became more a part of everyday business, the same principles for making complex flight decks usable were applied to make software easier to use, reduce user mistakes, and provide means for users to correct mistakes.
Broadly speaking, engineers would design and build interfaces to be functional/operable, whereas usability professionals would evaluate and help refine those interfaces to be usable.
Functional/Operable by people with disabilities does not equal usable by people with disabilities
No one expects that military aircraft should be operated by people who are blind. However, these days we do expect that computer software and websites should be operated by people who are blind. We even demand this in legislation. For example, in the US, the Federal government requires that computer interfaces used by its employees, and the public, be operable by people with disabilities. The current functionality/operability requirements are given in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0).
It’s possible to legislate operability. It’s much harder (and practically non-existent) to legislate usability.
Even though usability isn’t legislated, we have compelling reasons to invest in testing the usability of our aircraft (so they’re less likely to crash), and to test the usability of our software (so users are less likely to send out unintended/erroneous messages to the public). But, more often than not, the required operability testing for accessibility isn’t accompanied by user testing with people with disabilities.
In our experience, we find that most government departments don’t have usability testing programs. However, many government departments do have accessibility testing programs (since ensuring operability by people with disabilities can be legislated). Even those departments that have both do not regularly have the two programs liaise with each other. If you want to find out how well your accessibility program is meeting the needs of its intended target audience, you will need to include people with disabilities as participants in your usability testing program. If no such program exists, create a new initiative to conduct usability testing with people with disabilities.
Do you know, for example, if your employees who cannot use a mouse are staying late at their desks to complete their work because it takes 5,000 more keystrokes than their colleagues who can use a mouse to complete their daily tasks? Do you know how often your blind employees have to lean over and bother their sighted colleagues to describe the images on their screen?
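To make that first question concrete, here is a back-of-the-envelope calculation. Every number in it is an assumption for illustration (the keystroke gap comes from the rhetorical question above; the seconds-per-keystroke figure is invented), so substitute figures from your own observations:

```python
# Illustrative arithmetic only: what a keystroke gap can cost in time.
# Both numbers below are assumptions, not measured values.
extra_keystrokes_per_day = 5000  # hypothetical gap vs. mouse-using colleagues
seconds_per_keystroke = 0.25     # assumed average for a practiced keyboard user

extra_minutes = extra_keystrokes_per_day * seconds_per_keystroke / 60
print(f"Extra effort: roughly {extra_minutes:.0f} minutes every day")
# -> Extra effort: roughly 21 minutes every day
```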
Excuses, excuses
The quote that opens this guide lists Krug’s ‘top five plausible excuses’ (no time, no money, no expertise, no lab, no interpretation skills). In his popular and concise book, he guides people on how to overcome each of these excuses. Usability testing, as Krug points out, doesn’t have to be overly time-consuming, nor does it have to be expensive. The expertise to begin the process can be gained from a cheap paperback, or from many free tutorials on the web. You don’t need a lab. And when you get the results, you’ll be able to interpret them just fine:
“Usability testing has been around for a long time, and the basic idea is pretty simple: If you want to know whether your software or your Web site or your VCR remote control is easy enough to use, watch some people while they try to use it and note where they run into trouble. Then fix it, and test it again.”
—Steve Krug, Don’t Make Me Think (2000), p. 143
If you currently have an environment in which people can make excuses, they will. If excuses from staff are stalling your efforts and you are frustrated in getting things going, see our article on
How can I distribute the responsibility and accountability for accessibility?
What usability measurements can you gather?
In gauging success in terms of how well your accessibility program is meeting the needs of its intended target audience, there are several types of usability measurements you can gather:
- Informal usability testing. This can be as simple as sitting with someone at their desk and asking them to do tasks while they tell you what they are finding and thinking as they go.
- Formal usability testing. This can use recordings, timing measures, and targeted data gathering. It can include comparisons between nondisabled and disabled participants on objective measures like speed, number of errors, and task completion, and on subjective measures such as user opinions (a small analysis sketch follows this list).
- Focus groups. These are somewhat unfashionable these days, but gathering groups of employees together to hear their opinions on the ICT they use can be valuable. Employee affinity groups may be useful for finding potential participants.
- Interviews. You are more likely to get uncensored opinions from direct interviews than from focus groups, but only if the interviewees are assured of complete confidentiality and anonymity.
- Surveys. After you have collected data on a small scale with usability studies and interviews, you can verify the results and/or gauge the extent of problems by using those results to inform the design of surveys.
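To make the formal testing option concrete, here is a minimal sketch of how session data from such a test might be summarized. The session records below are invented for illustration; a real study needs an approved protocol (see Step 2), enough participants, and proper statistical treatment before you draw conclusions:

```python
# Minimal sketch: summarizing hypothetical formal usability-test sessions.
# All data here is invented for illustration only.
from statistics import mean

# (participant group, task time in seconds, errors made, task completed?)
sessions = [
    ("screen-reader user", 412, 3, True),
    ("screen-reader user", 530, 5, False),
    ("keyboard-only user", 298, 1, True),
    ("mouse user",         120, 0, True),
    ("mouse user",         140, 1, True),
]

for group in sorted({s[0] for s in sessions}):
    rows = [s for s in sessions if s[0] == group]
    completion = sum(1 for r in rows if r[3]) / len(rows)
    print(f"{group:18} mean time {mean(r[1] for r in rows):5.0f}s  "
          f"mean errors {mean(r[2] for r in rows):.1f}  completed {completion:.0%}")
```

Even a tiny summary table like this makes gaps between groups visible in a way a complaint log never will.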
For more on user testing options, see our article
How do I ensure my products work for people with disabilities?
What other proactive measurements can you gather?
In addition to usability testing, there are also quality checks for operability that you can proactively gather. Say, for example, that you have a software application that passed lab testing for accessibility functionality/operability. You could then visit users of that application in the field and check whether the software still passes the same operability tests.
The test tools and methods that you use may have to be modified for use in the field, but spot-checking ensures that what is in use lives up to what was specified at the time of purchase (or build).
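As one example of a lightweight field spot-check for a web-based application, the sketch below flags two common operability failures: images without text alternatives and form fields without labels. The URL is a placeholder, and a quick script like this only supplements, never replaces, the full test procedures your program already uses:

```python
# Minimal field spot-check sketch for a web page: flags images lacking alt
# text and form inputs lacking an accessible label. Requires the third-party
# 'requests' and 'beautifulsoup4' packages. The URL is a placeholder, and
# these two checks are nowhere near full WCAG 2.0 conformance testing.
import requests
from bs4 import BeautifulSoup

url = "https://intranet.example.gov/timesheet"  # hypothetical deployed app
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

for img in soup.find_all("img"):
    if img.get("alt") is None:
        print(f"Image missing alt attribute: {img.get('src')}")

labelled_ids = {lab.get("for") for lab in soup.find_all("label")}
for field in soup.find_all(["input", "select", "textarea"]):
    if field.get("type") in ("hidden", "submit", "button"):
        continue  # these input types don't need a visible label
    if field.get("id") not in labelled_ids and not field.get("aria-label"):
        print(f"Form field may lack a label: {field.get('name') or field}")
```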
For more on accessibility testing, see our article
What procedures should I use to test my ICT for accessibility?
Step-by-Step: Proactive measures to gauge the success of government accessibility programs
Step 1. Establish what you want to measure, and why
The decision to conduct usability testing is driven by the desire to find out how well your technology meets the needs of end users. The companion to that exercise is, of course, fixing any problems found. The budget (time, money, resources) should cover both. Because there is a budget investment, whether large or small, you and your teams need to understand and agree on what you want to measure, and why.
In the beginning stages, we would suggest you test with the aim of finding out whether there are any big problems that you were unaware of. A later, more refined aim could be comparative testing of one interface versus another to find out which is more efficient. However, that is a more advanced task that should be attempted only once you and your team have some experience under your belts.
Note: If you give your team a budget of a million dollars and a deadline of three years, they’ll spend a million dollars and take three years. If you give them a budget of ten thousand dollars and a deadline of three weeks, they’ll spend ten thousand and take three weeks. Will the extra budget be justified? It’s unlikely. If a proposal sounds outrageously expensive and time-consuming, get another proposal.
Step 2. It’s easier to ask for permission (in this case)
The phrase ‘It’s easier to ask for forgiveness than to ask for permission’ doesn’t apply to usability testing. Before you collect any data, you must seek permission. You can get into big trouble by asking people questions about their work when you don’t have the authority to do so. This is especially true when it comes to disability. Some employees have disabilities but don’t want to disclose that to their peers, or, in many cases, even to Human Resources.
In the history of testing with human subjects, there have been well-intentioned studies that unwittingly caused physical or psychological harm to people, exposed sensitive personal information, and hurt people’s reputations, jobs, and career prospects. As a result, institutions that conduct testing involving human subjects (universities, large corporations, and government departments) will have some version of what is known as an ‘Institutional Review Board’ (IRB). The IRB meets periodically and reviews any and all proposals to conduct research involving humans. So, before you talk to anyone, even in an informal way, make sure you understand your own institution’s rules, and follow any procedures it imposes before it grants you permission. The IRB knows you want to collect data as soon as possible to answer your own questions, but it operates as a check-and-balance on behalf of participants who could inadvertently be harmed.
Step 3. Prepare your questions and tools
Have you ever read a survey and thought “That seems like a leading question!”? Or, have you seen a survey that went on and on and on for page after page after page and wondered “What on earth are these guys looking for… I’m so bored!”?
Online survey tools are readily available and easy to sign up for, so it is tempting to conjure up a few questions and hit send to the whole department. This is a mistake.
It is worth taking the time to (a) learn how to write effective surveys (or conduct effective usability tests, or conduct focus groups, etc.); and (b) learn how to use any tools that you will be employing.
Sometimes overlooked is the need to evaluate the tool itself for accessibility. Conduct an accessibility test on the tool before you send it out to employees who have visual disabilities. You don’t want to be on the receiving end of “Hey, you sent me this opinion survey on accessibility, but the first page only works with a mouse!”. Not only does this inconvenience your participants, it also puts them in the mindset, before you ever try again, that you don’t know what you’re doing. A quick keyboard-only check, like the sketch below, can catch the worst of these problems before the survey goes out.
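Here is a minimal sketch of such a check, assuming Python with the Selenium package and a local browser driver installed; the survey URL is a placeholder. Tabbing through the page and logging what receives focus is a crude test that interactive controls are reachable without a mouse; it does not replace testing with real assistive technology users:

```python
# Minimal keyboard-only smoke test sketch: press Tab repeatedly and report
# which elements receive focus. Requires the third-party 'selenium' package
# and a browser driver (e.g., chromedriver). The URL is a placeholder.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://survey.example.gov/accessibility-opinions")  # placeholder

seen = []
for _ in range(50):                       # press Tab up to 50 times
    ActionChains(driver).send_keys(Keys.TAB).perform()
    el = driver.switch_to.active_element  # whatever now has keyboard focus
    desc = f"<{el.tag_name}> {el.get_attribute('name') or el.text[:30]!r}"
    if desc in seen:                      # focus has looped back around
        break
    seen.append(desc)
    print("Focus reached:", desc)

driver.quit()
```

If a control you can click never shows up in the output, a keyboard-only user probably cannot reach it.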
If needed, consider consulting outside experts for advice and guidance in setting up usability tests involving people with disabilities.
(See also our article
How do I find a knowledgeable consultant?)
Step 4. Do a pilot study
A bedrock principle of any usability test (survey, interview study, etc.) is to conduct a pilot study. Even seasoned usability professionals conduct pilot studies to ensure that their test methods are sound, that they haven’t overlooked anything, that their test tools work as anticipated, and so on. The benefits of conducting a pilot are lower risk, lower cost and higher confidence in the resources you are using. For managers who may be skeptical of usability studies, small-scale pilot studies can provide initial data to justify the collection of more data in a bigger study.
Step 5. Execute the study and analyze the results
In the ‘Go In-Depth’ links below we point to resources for planning and executing usability studies, as well as the data analysis and reporting processes.
Step 6. Fix the problems
While this last step seems obvious, it is still remarkable how many times usability studies are conducted and the design errors found are never fixed. It could be because the usability test was carried out too late in the development process. Or it could be that a working prototype was needed for testing, but the gap between working prototype and product delivery was very short in the project timeline. Or it could be that budget was available to test, but there wasn’t the foresight to budget for the fixes. Prior planning and the anticipation of the need for fixes should be built into any usability study.
Often, when people are introducing usability testing to a department that hasn’t done it before, the first tests will be somewhat rudimentary, and limited to products that are already in place in the field. As the program matures, more advanced testing methods can be applied as early as the conceptual phase of new software development. It’s universally accepted in the usability field that it’s easier and cheaper to fix problems when interfaces are sketched out in Post-it note form than it is to fix problems buried in thousands of lines of underlying code.
The process of conducting tests and fixing problems can and should become institutionalized. For advice on how to institutionalize accessibility programs and initiatives, see our other guides in the Related Sections (below).
Go In-Depth
A resource for quickly getting started in web and mobile usability testing…
Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability (3rd Ed., 2014) is the latest version of the guidebook by Steve Krug. The book employs a fun, informal style to convey the basic principles of testing, as well as how to deal with any push-back from detractors. It includes a chapter on incorporating accessibility into user testing.
A resource for understanding and planning usability tests…
The Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (2nd Ed., 2008) by Jeffrey Rubin and Dana Chisnell provides comprehensive step-by-step guidance on all aspects of conducting traditional usability testing. This is a useful book for beginners to usability testing who want to conduct in-depth testing on computers and other digital devices.
About this article
Authors
This article is published as part of The Accessibility Switchboard Project, an initiative of the National Federation of the Blind Jernigan Institute with support from the members of the Accessibility Switchboard Project Community Of Practice, and from the Maryland Department of Disabilities.
Suggested citation
The Accessibility Switchboard. Beyond maturity assessments: Proactive measures to gauge the success of government accessibility programs. June 2018, Version 1.0. National Federation of the Blind Jernigan Institute. Available: https://www.accessibilityswitchboard.org/
Feedback, additions and updates
The authors welcome feedback on this and other articles in the Accessibility Switchboard. Use the feedback form to provide updates, new case studies, and links to new and emerging resources in this area. The feedback form can also be used to join the mailing list for notification of new content and updates from the Accessibility Switchboard.
Copyright, use and reproduction
Accessibility Switchboard articles are published under the Creative Commons License Attribution-ShareAlike 4.0 International. You are free to share (copy and redistribute the material in any medium or format), and to adapt (remix, transform, and build upon the material) for any purpose, even commercially. This is under the following terms: (1) Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use; (2) ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. For more detail on the license, see CC BY-SA 4.0 on the Creative Commons website.
Picture credits
‘Flight deck of Hawker Siddeley Trident airliner.’ by ‘Nimbus227’. Public Domain.