Accessibility commitment and transparency for open-scans
This document defines the accessibility standards, practices, and guidelines for the open-scans project.
Related: See AGENTS.md for AI agent instructions that enforce these standards.
open-scans is an issue-driven accessibility scanning tool that uses GitHub Pages and GitHub Actions to perform automated accessibility scans. The tool helps developers identify and fix accessibility issues by:
- Scanning multiple URLs for WCAG compliance
- Generating actionable reports using five accessibility engines
- Publishing results to GitHub Pages for easy comparison
- WCAG 2.2 Level AA - This project strives to meet WCAG 2.2 Level AA standards for all user-facing interfaces.
- Frontend (index.html, reports.html): Tested with automated tools, keyboard navigation verified
- Generated Reports: Semantic HTML structure, keyboard accessible, proper heading hierarchy
- Scan Results: Machine-readable formats (JSON, CSV) and human-readable formats (Markdown, HTML)
This project uses five accessibility scanning engines:
- axe-core (`@axe-core/playwright`) - Industry-standard accessibility testing; Playwright integration for dynamic content
- Siteimprove Alfa (`@siteimprove/alfa-cli`) - Standards-first approach based on ACT rules; comprehensive WCAG 2.2 coverage; EARL report format support
- IBM Equal Access Checker (`accessibility-checker`) - IBM's comprehensive WCAG checker
- AccessLint (`@accesslint/core`) - Automated accessibility testing with WCAG rule mapping
- QualWeb (`@qualweb/core`) - University of Lisbon WCAG/ACT evaluator
Default: axe-core plus one randomly chosen engine. Use `ALL` in the issue title or `Engine: all` in the body to run all five.
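The default selection described above can be sketched as follows. The `ENGINES` list and the `pickEngines` function are illustrative names for this sketch, not the project's actual module API:

```javascript
// Illustrative sketch of engine selection: axe-core is always included,
// plus one other engine chosen at random. "ALL" in the issue title or
// "Engine: all" in the body selects all five engines.
const ENGINES = ["axe-core", "alfa", "equal-access", "accesslint", "qualweb"];

function pickEngines(issueTitle = "", issueBody = "") {
  if (/\bALL\b/.test(issueTitle) || /^Engine:\s*all\s*$/im.test(issueBody)) {
    return [...ENGINES];
  }
  const others = ENGINES.filter((engine) => engine !== "axe-core");
  const random = others[Math.floor(Math.random() * others.length)];
  return ["axe-core", random];
}

console.log(pickEngines("Scan ALL pages")); // all five engines
console.log(pickEngines("Scan example.com")); // axe-core plus one random engine
```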
All scans run in GitHub Actions workflows:
- scan-request.yml: Triggered on issue creation/edit
- scan-issue-queue.yml: Daily scheduled scans + manual trigger
- scheduled-scan-queue.yml: Timed issues (WEEKLY:, MONTHLY:, etc.)
✅ What is automated:
- WCAG 2.2 rule violations (both Alfa and axe-core)
- Color contrast checking
- Semantic HTML structure validation
- ARIA usage validation
- Keyboard accessibility patterns
❌ What requires manual testing:
- Screen reader compatibility
- Focus management in complex interactions
- Meaningful content descriptions
- Logical reading order
Use ES Modules

In `package.json`:

```json
{
  "type": "module"
}
```

Export Functions for Testing

```js
// Use an import guard to prevent main() execution during testing
if (import.meta.url === `file://${process.argv[1]}`) {
  main();
}
```

Security: Never use command injection patterns

```js
// ✅ CORRECT: Use spawn with argument arrays
import { spawn } from "node:child_process";
const child = spawn(command, [arg1, arg2, arg3]);

// ❌ WRONG: Never use execSync with template strings
// execSync(`command ${userInput}`); // Command injection risk!
```

Semantic HTML
- Use proper heading hierarchy (h1 → h2 → h3)
- Use semantic elements (`<nav>`, `<main>`, `<footer>`, `<article>`)
- Provide labels for all form inputs
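As a sketch of what "proper heading hierarchy" means in practice, a checker can flag any heading that jumps more than one level deeper than the previous one (e.g. an h1 followed directly by an h3). The function below is illustrative, not a module from this repository:

```javascript
// Illustrative heading-hierarchy check: given the numeric levels of a
// page's headings in document order, flag any heading that skips a level.
function checkHeadingOrder(levels) {
  const violations = [];
  let previous = 0;
  for (const level of levels) {
    if (level > previous + 1) {
      violations.push(`h${previous || 1} followed by h${level} skips a level`);
    }
    previous = level;
  }
  return violations;
}

console.log(checkHeadingOrder([1, 2, 3, 2, 3])); // [] — valid hierarchy
console.log(checkHeadingOrder([1, 3])); // one violation: h1 -> h3
```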
Form Validation
- Real-time URL validation with clear error messages
- Server-side validation as defense-in-depth
- Prevent submission of private/localhost URLs
Keyboard Navigation
- All interactive elements are keyboard accessible
- Visible focus indicators
- Logical tab order
Accessible Reports
- Priority sections show pages with most errors first
- Common issues prominently displayed
- Cross-page pattern analysis for recurring problems
- Detailed failure information with replication steps
- Disability impact indicators (visual, hearing, motor, cognitive)
- Functional Performance Specification (FPS) population data
- Cross-engine overlap indicators showing WCAG criteria covered by multiple engines
- "Copy failure details" button on each finding generates a structured GitHub issue report
Multiple Formats
- HTML with semantic structure and ARIA landmarks
- Markdown for GitHub rendering
- CSV for data analysis
- JSON for machine processing
URL Validation Rules
- Reject localhost URLs (`localhost`, `127.0.0.1`, `[::1]`)
- Reject private IPv4 ranges (10.x, 172.16-31.x, 192.168.x, 169.254.x)
- Reject private IPv6 ranges (fe80:, fc00:, fd00:, ::1)
- Accept only HTTP/HTTPS protocols
- Enforce 500 URL maximum per scan request
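The rules above can be sketched with the built-in WHATWG `URL` parser. The function name and the exact range checks are illustrative assumptions, not the project's actual validator module:

```javascript
// Illustrative URL validation following the rules above. Names are
// hypothetical; the real validator may normalize and check differently.
function isScannableUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL
  }
  // Accept only HTTP/HTTPS protocols
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;

  const host = url.hostname.toLowerCase();
  // Reject localhost and loopback addresses
  if (host === "localhost" || host === "127.0.0.1" || host === "[::1]") return false;
  // Reject private IPv4 ranges (10.x, 172.16-31.x, 192.168.x, 169.254.x)
  if (/^(10\.|192\.168\.|169\.254\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) return false;
  // Reject private/link-local IPv6 ranges (fe80:, fc00:, fd00:)
  if (/^\[(fe80|fc00|fd00)/i.test(host)) return false;
  return true;
}

console.log(isScannableUrl("https://example.com/page")); // true
console.log(isScannableUrl("http://192.168.1.5/")); // false — private range
```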
Unit Tests
- All scanner modules have unit tests in `tests/unit/*.test.mjs`
- Use the Node.js built-in test runner
- Run tests with `npm test`
Linting
- All code must pass syntax checking
- Run the linter with `npm run lint`
Quality Gates
```shell
# Run all unit tests
npm test

# Lint all scanner modules
npm run lint

# Test individual modules
npm run run:parse
npm run run:validate
npm run run:scan
```

This project follows the Accessibility Bug Reporting Best Practices guide. The scan reports implement this guidance directly: the "Copy failure details" button on each finding generates a pre-formatted GitHub issue report.
Every finding in a scan report carries two stable, human-readable identifiers that follow the A11Y-xxxxxxxx format (prefix + 8 hex characters):
| Identifier | What it covers | Hash inputs |
|---|---|---|
| Instance ID | A specific defect on a specific page | Page URL + element locator + rule ID |
| Pattern ID | The same defect type across all pages | Element locator + rule ID + colour mode (no page URL) |
- The Instance ID is stable across scans: the same element defect on the same URL will always produce the same ID, enabling "first identified" tracking over time.
- The Pattern ID is stable across pages: if the same broken component appears on 20 pages, all 20 findings share one Pattern ID, making it easy to identify template-level regressions.
Both identifiers appear in each finding card in the HTML report and are included in the text copied by the "Copy failure details" button.
Example:
Instance ID: A11Y-xxxxxxxx (this defect on https://example.com/checkout)
Pattern ID: A11Y-yyyyyyyy (this defect type across all pages)
Both IDs are computed automatically during scanning. Each `x` or `y` represents a hexadecimal character.
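The derivation described above can be sketched with `node:crypto`. The choice of SHA-256, the `|` separator, and the exact normalization of inputs are assumptions for this sketch, not the project's actual implementation:

```javascript
// Illustrative sketch of the A11Y-xxxxxxxx identifiers: a hash over the
// documented inputs, truncated to 8 hex characters. Hash algorithm and
// separator are assumptions, not the real implementation.
import { createHash } from "node:crypto";

function a11yId(...parts) {
  const digest = createHash("sha256").update(parts.join("|")).digest("hex");
  return `A11Y-${digest.slice(0, 8)}`;
}

// Instance ID: page URL + element locator + rule ID
const instanceId = a11yId("https://example.com/checkout", "/html/body/main/img[1]", "image-alt");
// Pattern ID: element locator + rule ID + colour mode (no page URL)
const patternId = a11yId("/html/body/main/img[1]", "image-alt", "light");
console.log(instanceId, patternId);
```

Because the hash inputs are deterministic, re-scanning the same page always reproduces the same Instance ID, and the same component on other pages reproduces the same Pattern ID.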
Every accessibility bug report must include:
| Field | Description |
|---|---|
| Instance ID | A11Y-xxxxxxxx — stable ID for this defect on this page |
| Pattern ID | A11Y-xxxxxxxx — stable ID for this defect type across pages |
| Page URL | Exact URL where the issue was found, including query string and fragment if relevant |
| XPath | Shortest XPath that uniquely identifies the failing element |
| HTML Snippet | Minimal HTML fragment demonstrating the problem |
| WCAG SC | Specific WCAG Success Criterion violated (number + level) |
| Rule ID | Testing tool rule identifier (e.g. axe-core image-alt) |
| Severity | Critical / High (Serious) / Medium (Moderate) / Low (Minor) |
| Frequency | Number of instances on this page; number of pages affected |
| Colour mode | Light / dark — some failures only occur in one mode |
| Viewport | Desktop / mobile — layout issues are often viewport-specific |
| Level | Definition | Example |
|---|---|---|
| Critical | Users cannot complete a core task at all | Modal dialog traps keyboard focus with no close mechanism |
| Serious | Significant barrier that degrades or blocks a key workflow | Form error messages not announced to screen readers |
| Moderate | Noticeable barrier with a workaround available | Focus indicator is missing but Tab order is logical |
| Minor | Minor issue with minimal real-world impact | Decorative icon has a redundant aria-label |
Frequency amplifies effective severity. A Minor issue that appears on every page, or on a high-traffic task (search, sign-in, checkout), should be escalated by one severity level.
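The escalation rule above can be expressed as a small function. The level names follow the severity table; the "appears on every page" threshold and the function name are illustrative assumptions:

```javascript
// Illustrative sketch of frequency-based escalation: a finding present on
// every scanned page, or on a high-traffic task, moves up one severity
// level. Threshold and names are assumptions for illustration.
const LEVELS = ["Minor", "Moderate", "Serious", "Critical"];

function effectiveSeverity(severity, { pagesAffected, totalPages, highTraffic = false }) {
  const index = LEVELS.indexOf(severity);
  const escalate = highTraffic || pagesAffected === totalPages;
  return escalate ? LEVELS[Math.min(index + 1, LEVELS.length - 1)] : severity;
}

console.log(effectiveSeverity("Minor", { pagesAffected: 20, totalPages: 20 })); // "Moderate"
console.log(effectiveSeverity("Minor", { pagesAffected: 2, totalPages: 20 })); // "Minor"
```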
When filing an accessibility bug, use this structure:
## Accessibility Issue: [component] — [failure mode] ([WCAG SC])
**Bug ID:** `A11Y-xxxxxxxx` (instance) / `A11Y-xxxxxxxx` (pattern)
**URL:** [full URL]
**XPath:** `[shortest unique XPath]`
**WCAG SC:** [SC number] — [SC name] (Level [A/AA/AAA])
**Rule:** [engine] — [rule ID] ([rule URL])
**Severity:** [Critical / Serious / Moderate / Minor]
**Frequency:** [N] instance(s) on this page; [N] page(s) affected
**Screen type:** [desktop / mobile] | **Colour mode:** [light / dark]
### HTML Snippet
```html
[minimal failing HTML fragment]
```
### Description
[What is wrong and why it creates a barrier for users]
### Expected Behaviour
[What the correct, accessible experience should be]
### Actual Behaviour
[What currently happens — the accessibility failure]
### Impact
[Disability groups affected and how: e.g. blind users, voice-control users]
### Testing Environment
| Item | Value |
|------|-------|
| Browser | [name and version] |
| Testing tool | [engine name and version] |
### Suggested Fix (optional)
[Code or prose describing the remediation]

The "Copy failure details" button in every scan report automatically generates a report in this format, pre-filled with data from the scan.
This project follows guidance from established accessibility authorities:
- W3C WCAG 2.2 - Web Content Accessibility Guidelines
- W3C ARIA - Accessible Rich Internet Applications
- ACT Rules - Accessibility Conformance Testing
- EARL - Evaluation and Report Language (machine-readable results)
- Section 508 FPS - Functional Performance Specifications
- Siteimprove Alfa - Standards-first accessibility testing
- axe-core - Industry-standard accessibility engine
- IBM Equal Access Checker - IBM's WCAG checker
- QualWeb - University of Lisbon WCAG/ACT evaluator
- Playwright - Cross-browser testing framework
For a comprehensive list of vetted accessibility resources and bug reporting guidance, see:
- TRUSTED_SOURCES.yaml - Machine-readable accessibility resource registry
- Accessibility Bug Reporting Best Practices - Guide to writing actionable accessibility bug reports
All pull requests must:
- ✅ Pass automated accessibility scans (Alfa + axe-core)
- ✅ Include unit tests for new functionality
- ✅ Pass linting checks
- ✅ Follow existing code patterns
- ✅ Update documentation as needed
Found an accessibility barrier? Please:
- Open a GitHub issue with the `accessibility` label
- Describe the barrier and how it affects users with disabilities
- Include steps to reproduce (URL, XPath, HTML snippet)
- Specify your testing environment (browser, OS, tool used)
- Suggest a fix if you have one
Tip: Use the "Copy failure details" button in any scan report to pre-fill a correctly structured bug report.
AI coding assistants (GitHub Copilot, Cursor, Claude, etc.) working in this repository must:
- Read and follow AGENTS.md instructions
- Generate WCAG 2.2 AA compliant code
- Use secure coding patterns (no command injection)
- Follow existing module structure and conventions
- Include unit tests for new functionality
- AGENTS.md - AI agent instructions and coding standards
- README.md - Project overview and usage guide
- .github/copilot-instructions.md - GitHub Copilot specific instructions
- .kittify/AGENTS.md - Spec Kitty project management rules
- ✅ 409/409 unit tests passing
- ✅ All scanner modules lint-clean
- ✅ Five-engine validation (axe-core, Alfa, Equal Access, AccessLint, QualWeb)
- Manual screen reader testing not yet performed
- Limited assistive technology coverage in testing
- Focus on automated testing over manual evaluation
- Expanding test coverage for edge cases
- Adding more comprehensive keyboard navigation tests
- Documenting screen reader testing procedures
- Issues: GitHub Issues
- Repository: mgifford/open-scans
- Community: Join our GitHub Discussions
Last Updated: 2026-04-02
This is a living document. As our accessibility practices evolve, this document will be updated to reflect our current state and commitments.