From 9b034c0edbbba20ef0cbc540f72fa672e50c1b96 Mon Sep 17 00:00:00 2001
From: Bogdan Lazar
Date: Fri, 16 Jan 2026 18:40:24 +0100
Subject: [PATCH 1/3] Fix article reference for Adrian Roselli

Adrian mentioned that he didn't explicitly say LLMs make image
descriptions better. This commit fixes that interpretation.
---
 src/content/en/2025/accessibility.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/en/2025/accessibility.md b/src/content/en/2025/accessibility.md
index d1624d867ab..aab489e1ee6 100644
--- a/src/content/en/2025/accessibility.md
+++ b/src/content/en/2025/accessibility.md
@@ -858,7 +858,7 @@ Experts like Joe Dolson have explored whether
 contrasts how humans and language models approach accessible component code. Humans base HTML, CSS, and ARIA decisions on specifications, user needs, assistive technology behavior, and platform quirks, all guided by intentions for the interface. LLMs instead predict likely code from training data, which is problematic because most existing code has accessibility issues, and the models lack intent or understanding of specific users.
 
-Adrian Roselli acknowledges that recent advances in computer vision and LLMs have brought real benefits, such as better image descriptions and improved captions and summaries. However, he argues these tools still lack context and authorship. They can't know why content was created, what a joke or meme depends on, or how an interface is meant to work. Their descriptions and code suggestions can easily miss the point or mislead users.
+Adrian Roselli acknowledges that recent advances in computer vision and LLMs have brought some benefits and can potentially help readers distill complex articles into understandable summaries. However, he argues these tools still lack context and authorship. They can't know why content was created, what a joke or meme depends on, or how an interface is meant to work. Their descriptions and code suggestions can easily miss the point or mislead users.
 
 AI raises significant ethical concerns that go beyond accessibility.

From 5adb94fd06a6af688af48d4ac91867e94d7b3ea1 Mon Sep 17 00:00:00 2001
From: Bogdan Lazar
Date: Fri, 16 Jan 2026 18:46:43 +0100
Subject: [PATCH 2/3] Fix typo

fixes #4379
---
 src/content/en/2025/accessibility.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/en/2025/accessibility.md b/src/content/en/2025/accessibility.md
index aab489e1ee6..34bc78f3279 100644
--- a/src/content/en/2025/accessibility.md
+++ b/src/content/en/2025/accessibility.md
@@ -942,7 +942,7 @@ The map of TLD ranking is very similar to 2024, but obviously doesn't include th
 {{ figure_markup(
   image="map-accessible-countries-by-tld.png",
-  caption="Map of ccessible countries by Top Level Domain (TLD).",
+  caption="Map of accessible countries by Top Level Domain (TLD).",
   description="Displayed visually in a world map, the most accessible countries are Norway with 87%, Finland with 86%, followed by Canada, USA, UK, Sweden, Ireland, Australia, New Zealand, Austria, Belgium, Switzerland, Denmark, and South Africa. China is the least accessible by Top Level Domain, with close to 67%.",
   chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vQFD-7C6Jv6q1JyviDsKosRlVwaok7g7nRCQ9NGMw5MaAAohL7EcDejVwgp13Z_T2S_57Zi0YaVb7st/pubchart?oid=1554186781&format=interactive",
   sheets_gid="1037208406",

From 91e328ec55464ef8b5e736cfa62fc4e754e5b946 Mon Sep 17 00:00:00 2001
From: Barry Pollard
Date: Sat, 17 Jan 2026 11:13:53 +0000
Subject: [PATCH 3/3] Update src/content/en/2025/accessibility.md

---
 src/content/en/2025/accessibility.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/en/2025/accessibility.md b/src/content/en/2025/accessibility.md
index 34bc78f3279..2a471ab7b50 100644
--- a/src/content/en/2025/accessibility.md
+++ b/src/content/en/2025/accessibility.md
@@ -858,7 +858,7 @@ Experts like Joe Dolson have explored whether
 contrasts how humans and language models approach accessible component code. Humans base HTML, CSS, and ARIA decisions on specifications, user needs, assistive technology behavior, and platform quirks, all guided by intentions for the interface. LLMs instead predict likely code from training data, which is problematic because most existing code has accessibility issues, and the models lack intent or understanding of specific users.
 
-Adrian Roselli acknowledges that recent advances in computer vision and LLMs have brought some benefits and can potentially help readers distill complex articles into understandable summaries. However, he argues these tools still lack context and authorship. They can't know why content was created, what a joke or meme depends on, or how an interface is meant to work. Their descriptions and code suggestions can easily miss the point or mislead users.
+Adrian Roselli acknowledges that recent advances in computer vision and LLMs can potentially help readers distill complex articles into understandable summaries. However, he argues these tools still lack context and authorship. They can't know why content was created, what a joke or meme depends on, or how an interface is meant to work. Their descriptions and code suggestions can easily miss the point or mislead users.
 
 AI raises significant ethical concerns that go beyond accessibility.