Why 95% of Websites Still Fail Basic Accessibility Standards

Written by the Crosscheck Content Team · April 17, 2025 · 10 minute read


Every year, WebAIM crawls the top one million websites and checks their home pages for automatically detectable WCAG failures. Every year, the results are roughly the same: around 95–96% of pages fail.

The 2024 report put the website accessibility failure rate at 95.9%. In 2019, when WebAIM first published the Million report, the failure rate was 97.8%. Five years of growing legal pressure, better developer tooling, and public awareness campaigns have moved the needle by less than two percentage points.

This is not a niche problem. It means that if a person who is blind, has low vision, has motor impairments, or relies on a screen reader visits a typical website, the probability that they will encounter a barrier before they even get past the home page is roughly nineteen out of twenty.

This article explains what those failures actually are, why the web industry has proven so resistant to fixing them, what the business consequences are, and what a realistic path forward looks like — whether you are starting from zero or trying to get a mature product closer to compliance.


The Six Most Common Failures

The WebAIM Million data is specific about which WCAG success criteria account for the majority of failures. The six most common on home pages in 2024 were:

1. Low contrast text (80.8% of pages)

The single most frequent failure. Text that does not meet the minimum contrast ratio of 4.5:1 for normal text or 3:1 for large text. This is not an edge case that only affects users who are color blind. Low contrast degrades legibility for users in bright light, on low-quality displays, and simply as people age — nearly everyone experiences reduced contrast sensitivity over time. It is also among the easiest failures to catch with automated tooling, which makes its prevalence particularly striking.
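
The contrast check itself is simple arithmetic, which is part of why automated tools catch it so reliably. As a sketch, the relative-luminance and contrast-ratio formulas from the WCAG definitions can be implemented in a few lines of Python (the function names here are ours, not from any particular library):

```python
# WCAG 2.x contrast ratio between two sRGB colors, following the
# formulas in the WCAG definitions of "relative luminance" and
# "contrast ratio".

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel value to linear light."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color given as '#rrggbb'."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), ranging from 1 to 21."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
# '#767676' on white sits just above the 4.5:1 AA threshold for normal
# text; the very similar '#777777' falls just below it.
```

The narrow margin between those last two grays illustrates why eyeballing contrast is unreliable and why the check belongs in tooling.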

2. Missing alternative text on images (54.5% of pages)

Images without alt attributes, or images with alt attributes that are empty when they should not be, or images with alt attributes that are present when they should be empty. Screen readers encountering an image with no alt attribute will typically read out the filename — something like img_20240312_hero_final_v3.jpg — which tells the user nothing useful. This failure is often introduced not by negligence but by content management workflows that do not enforce alt text at the point of upload.
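
Failures like this are exactly what automated scanners look for. As an illustration, a minimal missing-alt check needs nothing beyond the Python standard library (this toy checker is ours; real tools such as axe and WAVE are far more thorough):

```python
# Minimal sketch of a missing-alt-attribute check using only the
# standard library's HTML parser.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_names = {name for name, _ in attrs}
            # alt="" is legitimate for decorative images, so only a
            # completely absent alt attribute is flagged here.
            if "alt" not in attr_names:
                self.violations.append(dict(attrs).get("src", "(no src)"))

def images_missing_alt(html: str) -> list[str]:
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations
```

For example, `images_missing_alt('<img src="hero.jpg"><img src="logo.png" alt="Acme logo">')` flags only `hero.jpg`. A check this small could run at the point of upload, which is where CMS workflows typically fail to enforce it.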

3. Empty links (47.5% of pages)

Anchor tags with no text content and no aria-label. Icon links and icon-only buttons are the primary culprits. A link that contains only a social media icon, a search icon, or a hamburger menu SVG with no accessible name gives screen reader users no indication of where the link goes or what it does. The user hears "link" with no further context.
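
A simplified version of this check can be sketched as follows, under the assumption that only text content and aria-label count toward the accessible name (a full accessible-name computation also considers aria-labelledby, title, and the alt text of child images):

```python
# Sketch: flag <a> elements whose accessible name would be empty,
# i.e. no text content and no aria-label.
from html.parser import HTMLParser

class EmptyLinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations: list[str] = []
        self._in_link = False
        self._href = ""
        self._text = ""
        self._has_label = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            self._in_link = True
            self._href = a.get("href", "")
            self._text = ""
            self._has_label = bool(a.get("aria-label", "").strip())

    def handle_data(self, data):
        if self._in_link:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._in_link:
            if not self._text.strip() and not self._has_label:
                self.violations.append(self._href)
            self._in_link = False

def empty_links(html: str) -> list[str]:
    checker = EmptyLinkChecker()
    checker.feed(html)
    return checker.violations
```

An icon-only link like `<a href="/search"><svg>…</svg></a>` is flagged; adding `aria-label="Search"` clears it. The same logic applies to icon-only buttons.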

4. Missing form input labels (30.7% of pages)

Form fields without programmatically associated labels. Placeholder text is not a label — it disappears when the user starts typing, it often has insufficient contrast, and it is not consistently surfaced by screen readers as a field label. When a form field has no label at all, screen reader users hear "edit text" with nothing to indicate what information belongs in the field.
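
A sketch of how a scanner can flag such fields, under the simplifying assumption that a label is either an aria-label or a label element whose for attribute references the field's id (wrapping the input inside the label also works in real HTML and is ignored here):

```python
# Sketch: find <input> elements with no programmatic label.
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    # Input types that do not need a visible label association.
    SKIP_TYPES = {"hidden", "submit", "button", "reset", "image"}

    def __init__(self):
        super().__init__()
        self.inputs: list[dict] = []        # candidate form fields
        self.labeled_ids: set[str] = set()  # ids referenced by <label for=...>

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type", "text") not in self.SKIP_TYPES:
            self.inputs.append(a)
        elif tag == "label" and a.get("for"):
            self.labeled_ids.add(a["for"])

def unlabeled_inputs(html: str) -> list[str]:
    checker = LabelChecker()
    checker.feed(html)
    return [
        a.get("name", a.get("id", "(anonymous)"))
        for a in checker.inputs
        if not a.get("aria-label", "").strip()
        and a.get("id") not in checker.labeled_ids
    ]
```

Note that a field carrying only a placeholder attribute still comes back as unlabeled, which matches the WCAG position that placeholders are not labels.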

5. Empty buttons (27.5% of pages)

Button elements with no accessible name — no visible text content, no aria-label, no title. This most commonly occurs with custom-styled buttons where the visual label is an icon or SVG with no text alternative. The user hears "button" and must guess what it does.

6. Missing document language (17.0% of pages)

The <html> element without a lang attribute. This prevents screen readers from applying the correct language profile for text-to-speech synthesis. A screen reader using an English pronunciation engine to read a French page does not render the content intelligibly. This is also one of the simplest fixes — a single attribute on a single element.

These six failure categories alone account for the vast majority of the 95.9% failure rate. None of them require sophisticated accessibility knowledge to understand. None of them are the result of genuinely difficult edge cases in WCAG interpretation. They are basic structural problems.


Why the Website Accessibility Failure Rate Stays So High

The persistence of these failures is not explained by ignorance of accessibility as a concept. Most development teams have at minimum a passing familiarity with WCAG and the legal landscape. The causes are structural.

Accessibility is not enforced at the point of contribution.

A developer writes a button component with no accessible name. It passes a code review where the reviewer is checking for functionality, not accessibility. It passes a manual QA cycle where the tester is verifying business logic, not running a keyboard navigation check. It passes a design review where the focus is on visual consistency. At no point in that workflow did the system require the developer to provide an accessible name for the button. The failure ships.

When accessibility testing happens at all, it typically happens as a downstream audit — either just before a major release or after a complaint arrives. By then, fixing the issue requires reopening completed work, which creates friction that slows remediation and demotivates the investment.

Automated testing catches less than half the problem.

Tools like axe, Lighthouse, and WAVE can automatically detect roughly 30–40% of WCAG failures. The six categories in the WebAIM Million report are largely within that detectable range, which is why they are in the report at all — they are the things that automated scanning can identify. The implication is that the accessibility failures automated tools cannot detect may be worse. Logical reading order, meaningful link text in context, keyboard interaction patterns for custom components, the coherence of focus management in a modal dialog — none of these are reliably detectable by automated scanning. Teams that rely solely on Lighthouse scores are measuring only a fraction of the accessibility surface area.

Design systems and component libraries carry failures forward at scale.

A single inaccessible button component in a shared design system can be instantiated thousands of times across a product. Fixing the source fixes all instances. But the inverse is also true — an accessibility problem baked into a foundational component propagates everywhere it is used, often without the teams building on top of it being aware of the issue. This is a primary reason why even applications built by teams that care about accessibility end up with widespread failures: the underlying building blocks were never audited.

Accessibility expertise is unevenly distributed.

Most QA engineers and developers do not have deep expertise in assistive technology behavior, WCAG interpretation, or screen reader interaction patterns. Accessibility is a specialization, and most teams do not have a dedicated accessibility specialist. The knowledge required to correctly test keyboard focus management, ARIA live regions, and screen reader announcement ordering is not part of standard engineering or QA education. Teams end up relying on automated tools to tell them whether they have accessibility problems, which means they only see the problems those tools can detect.

There is no natural feedback loop from affected users.

When a functional bug makes a feature unusable, typically someone — a power user, a beta tester, a QA engineer — catches it before it ships or files a report soon after. With accessibility failures, that feedback loop is often absent. Users who rely on assistive technologies frequently abandon sites that do not work for them rather than filing reports. Screen reader users develop patterns of avoiding sites that consistently fail them. The absence of complaints does not indicate the absence of barriers — it may indicate that affected users have already left.


What It Actually Costs

The business case for accessibility is often framed in legal terms, which is accurate but incomplete. The legal risk is real: in the United States, ADA-related web accessibility lawsuits exceeded 4,600 in 2023, a number that has grown significantly year over year. The EU Web Accessibility Directive, the UK Equality Act, and the European Accessibility Act (which came into force for products and services in June 2025) all create compliance obligations that vary by jurisdiction and organization type.

But the business impact extends beyond litigation risk.

Market exclusion. The World Health Organization estimates that approximately 1.3 billion people globally live with some form of disability. In the United States, the Centers for Disease Control estimates that roughly one in four adults has a disability. Many of those disabilities affect how people interact with digital interfaces — vision, motor function, cognitive processing. A website that fails basic accessibility standards is not accessible to a significant portion of the population. That is not an abstract social concern — it is a measurable reduction in addressable market.

SEO overlap. Many WCAG success criteria align closely with practices that search engines reward. Descriptive page titles, meaningful link text, proper heading hierarchy, image alt text, mobile-responsive design, fast load times — these are accessibility requirements that are also SEO signals. The correlation is not perfect, but it is consistent enough that accessibility remediation frequently produces measurable SEO improvements as a secondary effect.

Reputational cost. As accessibility becomes a more visible issue — through litigation news, public complaints on social media, and growing awareness among disability advocacy communities — organizations that are publicly identified as having accessible websites have an advantage. The reverse is also increasingly true: public documentation of accessibility failures circulates in disability communities and influences purchasing decisions.

Internal productivity. This is often overlooked. Employees with disabilities use internal tools built by the same teams. Inaccessible internal applications exclude or disadvantage employees who use assistive technologies. The same structural failures that affect external users affect internal users, and the costs there include both talent retention and accommodation-related obligations.


Where to Start

Given the scale of the problem, teams new to accessibility often struggle with where to begin. The answer is not to run a comprehensive audit and attempt to fix everything simultaneously. That approach typically produces a large backlog that gets deprioritized and never completed.

A more durable approach:

Start with the six common failures. The WebAIM Million categories — contrast, alt text, link names, form labels, button names, document language — are the foundation. An automated scan with axe DevTools or Lighthouse will surface most of them. Fix these first. They are the highest-frequency failures, they are relatively straightforward to remediate, and fixing them at the component level cascades improvements across the application.

Audit the design system and component library. Every inaccessible shared component is a multiplier. Before addressing individual page failures, verify that the foundational components — buttons, form fields, navigation menus, modals, accordions, dropdowns — meet WCAG requirements. A remediated button component fixes every button everywhere it is used.

Integrate accessibility into the build pipeline. Add automated accessibility scanning to CI/CD so that known-detectable failures cannot ship without a deliberate override. This does not catch everything, but it prevents regression and forces the team to consciously acknowledge when a known issue is being deferred.
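
As an illustration, a CI gate can be a single job that runs an automated scanner against a preview build and fails on violations. The sketch below uses GitHub Actions syntax with @axe-core/cli; the build and serve commands, port, and flags are placeholders to adapt to your stack and verify against the tool's documentation:

```yaml
# Hedged sketch of a pull-request accessibility gate.
name: accessibility-scan
on: [pull_request]

jobs:
  axe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Build and serve the app locally (placeholder commands).
      - run: npm run build && npm run start &
      # Scan the served page; a nonzero exit on violations fails the
      # job, so known-detectable failures cannot merge silently.
      - run: npx @axe-core/cli http://localhost:3000 --exit
```

Overriding a failed gate then becomes an explicit, reviewable decision rather than a silent omission.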

Add keyboard testing to every QA cycle. Put the mouse down. Navigate the application using Tab, Shift+Tab, Enter, Space, and arrow keys. Try to complete every user flow — sign up, log in, submit a form, navigate the menu, interact with a modal — without touching the mouse. This catches entire categories of failure that automated tools miss, and it requires no specialized equipment or knowledge.

Do at least one screen reader test cycle. NVDA on Windows with Firefox, and VoiceOver on macOS with Safari, cover the majority of real-world screen reader usage. A single QA engineer spending two to three hours learning basic screen reader navigation and running through core user flows will surface failures that nothing else will catch.

Prioritize by user impact, not by WCAG level. A Level A failure on a login form is more urgent than a Level AA failure on an obscure settings page. Prioritize based on the frequency with which affected flows are used and the severity of the barrier for users who encounter it.

Set realistic sprint-cycle targets. Accessibility remediation competes with feature work. The way to sustain progress is to integrate a fixed accessibility quota into every sprint — for example, two accessibility issues resolved per sprint alongside normal feature work — rather than treating it as a special project that runs in parallel and gets cut when timelines compress.


The Measurement Problem

One reason accessibility progress is so slow is that it is hard to measure accurately. A Lighthouse accessibility score is a score for automatically detectable failures only. A 95 on Lighthouse does not mean 95% accessible — it means a particular subset of automatically checkable criteria passed. The actual accessibility of the application is unknown unless manual testing with real assistive technologies has been conducted.

This creates a false sense of progress. Teams improve their Lighthouse scores, celebrate, and move on — while the keyboard interaction failures, screen reader announcement problems, and focus management issues that automated tools cannot detect continue to affect users.

Accurate measurement requires combining automated scanning with structured manual testing across the WCAG success criteria. When failures are found in manual testing, the report needs to include enough information for a developer to reproduce the issue: the exact assistive technology and browser version used, the sequence of interactions that triggered the failure, the DOM state at the moment of the failure, and a recording of the screen and any relevant console output.

Accessibility bugs that are hard to reproduce get deprioritized. Accessibility bugs that come with a full reproduction package — recording, environment details, exact steps — get fixed. The quality of the bug report determines how quickly the issue moves through the engineering queue.


Crosscheck Makes Accessibility Bugs Easier to Fix

When your QA team is running keyboard navigation tests or screen reader checks and finds a failure, the report they file has to be good enough for a developer to reproduce and fix the issue without coming back to ask questions. That is the standard. Most accessibility bug reports do not meet it.

Crosscheck is a browser extension that captures everything at the moment you find a bug: a full screenshot or session replay, the browser console log, every network request, and your complete environment details — browser version, operating system, viewport dimensions. For accessibility bugs, that means the developer who picks up the report can watch a replay of the exact interaction that triggered the failure, with the console state and DOM context attached.

When a focus management issue only reproduces in a specific browser with a specific screen reader active, the replay is the difference between a developer reproducing the failure in minutes and spending two days trying to recreate the conditions. When an ARIA live region fails to announce a status update under a specific timing condition, the console log is the difference between a targeted fix and a guess.

Accessibility work is already harder than most QA work. The tooling around it should not add friction. If your team is serious about moving that 95.9% failure rate — at least for your own product — Crosscheck removes one of the most consistent sources of delay in accessibility bug resolution.

Try Crosscheck free and file your first fully documented accessibility bug today.
