Building Accessible AI Assistants That Don't Code People Out

AI assistants are powerful tools. They're also frequently inaccessible. If your assistant only works through text or voice, you're excluding people who are deaf, hard of hearing, visually impaired, or have other disabilities.

Dr. Dédé Tetsubayashi | 10 min read

Key Takeaways

  • Most AI assistants are inaccessible by default. Text-only or voice-only interfaces exclude deaf users, blind users, and others.
  • Multimodal design—supporting text, voice, visual, and alternative input methods—makes AI better for everyone.
  • Screen reader support, proper semantic markup, and keyboard navigation are foundational, not optional.
  • Test with disabled users throughout design and development. Accessibility isn't something you add at the end.
  • Accessible AI assistants are better AI assistants. They're more resilient, more useful, and serve more people.

I was on a call last week with a team that had just launched an AI assistant. They were excited about the capabilities—conversational, fast, helpful. But when I asked about accessibility, there was silence. They'd built it to work beautifully with voice and text. They hadn't thought about deaf users, hard of hearing users, blind users, or people with motor disabilities. They'd accidentally created a system that worked great for some people and was unusable for others.

This is the default pattern. AI assistants are built around a single interaction model—usually voice or text—and accessibility becomes an afterthought, if it happens at all.

It doesn't have to be this way. Building accessible AI assistants is not only the right thing to do—it's also better design. Multimodal interfaces serve more people, they're more resilient to errors, and they work better in diverse environments.

The Accessibility Problem with Most AI Assistants

Most AI assistants are built with one primary interface: voice or text. If your interface is voice-only, you've excluded deaf and hard of hearing users. If it's text-only, you've excluded blind users who rely on screen readers. If interactions require mouse clicks on visually small targets, you've excluded people with motor disabilities or vision limitations.

The broader problem is design by persona. Developers imagine a 'typical user' and optimize for that person's needs. That typical user is rarely disabled. So accessibility gets overlooked until someone complains or a lawsuit forces the issue.

Voice Assistants and Deaf Users

Voice assistants are incredibly useful if you can hear. Deaf and hard of hearing users can't use them. Even transcription only solves half the problem: captions make the assistant's speech readable, but if the only way to issue a command is by voice, deaf users still can't interact with it. They're locked out.

Text Interfaces and Blind Users

Text-based assistants are often built without screen reader support. The interface might be visual—buttons, layouts, formatting. If it's not properly marked up with semantic HTML and ARIA labels, blind users using screen readers can't navigate it. They get text read aloud without context or structure.

Complex Interactions and Motor Disabilities

Some AI interfaces require precise mouse movements, rapid clicking, or touch gestures. People with motor disabilities, tremors, or limited dexterity struggle with these interfaces. They might be able to use keyboard navigation or voice input, but if the interface doesn't support it, they're excluded.

The Multimodal Approach: Accessibility as Design

The solution is multimodal design. This means building your AI assistant to work across multiple input and output modalities: text, voice, visual, keyboard, and alternative input methods. Not all at once for every interaction—but multiple pathways for people to engage.

Multiple Input Modes

Users should be able to interact via voice, text, keyboard, or other input methods depending on their needs and environment. Deaf users need text or gesture input. Blind users need keyboard or voice input. Someone in a noisy environment might prefer text. Someone with arthritis might prefer voice. By supporting multiple input types, you serve everyone better.
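
As a sketch of what this looks like in code (the type and names here are hypothetical, not from any particular framework), the trick is to normalize every modality into one event type before it reaches the assistant's core logic:

```typescript
// Hypothetical sketch: normalize every input modality into one event type
// so the assistant's core logic never needs to know how a request arrived.
type AssistantInput =
  | { mode: "text"; content: string }
  | { mode: "voice"; transcript: string }           // from speech recognition
  | { mode: "keyboard-shortcut"; command: string }  // e.g. "repeat-last-answer"
  | { mode: "switch-device"; selection: string };   // switch access / single-button devices

// Every pathway converges on the same query, so no modality is second-class.
function toQuery(input: AssistantInput): string {
  switch (input.mode) {
    case "text": return input.content;
    case "voice": return input.transcript;
    case "keyboard-shortcut": return input.command;
    case "switch-device": return input.selection;
  }
}
```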

Multiple Output Modes

The assistant should be able to provide output through text, voice, visual formatting, and other means. Transcripts for voice output. Alt text for images. Captions if there's audio or video. Visual cues and text for information that might otherwise be only visual. This redundancy makes your assistant more useful and more accessible.
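
A minimal sketch of the same idea on the output side, again with hypothetical names, using the browser's standard DOM and Web Speech APIs:

```typescript
// Hypothetical sketch: render one assistant response through redundant channels.
interface AssistantResponse {
  text: string;           // canonical form; doubles as transcript and captions
  imageUrl?: string;
  imageAltText?: string;  // required whenever imageUrl is present
}

function render(response: AssistantResponse) {
  const transcript = document.querySelector("#transcript")!;

  // 1. Visual text output.
  const p = document.createElement("p");
  p.textContent = response.text;
  transcript.appendChild(p);

  // 2. Optional image, never shipped without alt text.
  if (response.imageUrl) {
    const img = document.createElement("img");
    img.src = response.imageUrl;
    img.alt = response.imageAltText ?? "";
    transcript.appendChild(img);
  }

  // 3. Spoken output via the Web Speech API; the on-screen text
  //    above serves as its caption and transcript.
  if ("speechSynthesis" in window) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(response.text));
  }
}
```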

Semantic Structure and Markup

If your assistant has a visual interface, it needs proper semantic HTML and ARIA labels so screen readers can navigate it effectively. Buttons should be actual buttons, not divs that look like buttons. Links should be links. Lists should be properly marked up. This matters tremendously for blind users and assistive technology.
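
Here's what that difference looks like in practice, sketched with plain DOM APIs:

```typescript
// Anti-pattern: a div styled to look like a button.
// <div class="send-btn" onclick="send()">➤</div>
// Screen readers announce nothing useful, and it is unreachable by keyboard.

// Accessible version: a real <button> with an explicit accessible name.
const sendButton = document.createElement("button");
sendButton.textContent = "➤";
sendButton.setAttribute("aria-label", "Send message"); // names the icon-only button
sendButton.addEventListener("click", sendMessage);
// A real button is focusable, activatable with Enter or Space, and announced
// as "Send message, button" by screen readers, all for free.

declare function sendMessage(): void; // assistant-specific logic, not shown
```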

Keyboard Navigation

Every function your assistant supports should be accessible via keyboard. Tab order should be logical. Nothing should require a mouse. This is essential for people who can't use a mouse and for users with certain disabilities who rely on keyboard shortcuts.
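
When a native element genuinely won't work, a custom control has to reproduce the keyboard contract that native controls provide automatically. A rough sketch:

```typescript
// Sketch: give a custom interactive element the full keyboard contract.
function makeKeyboardOperable(el: HTMLElement, activate: () => void) {
  el.tabIndex = 0;                    // reachable in the natural tab order
  el.setAttribute("role", "button");  // announced correctly by screen readers
  el.addEventListener("click", activate);
  el.addEventListener("keydown", (e) => {
    // Match native button behavior: Enter and Space both activate.
    if (e.key === "Enter" || e.key === " ") {
      e.preventDefault(); // stop Space from scrolling the page
      activate();
    }
  });
}
```

That said, prefer native elements wherever possible; this pattern exists for the rare cases where they can't work.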

Practical Design Principles

Design for Multiple Sensory Modes

Don't rely on a single sense to communicate information. If something is conveyed through color, also convey it through shape, text, or icon. If information is only available through sound, provide a text alternative. If the interface is only visual, provide voice or text alternatives.
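
A small sketch of what redundant encoding can look like for something as simple as a connection-status indicator (the names are illustrative):

```typescript
// Sketch: a status indicator that never relies on color alone.
type Status = "connected" | "thinking" | "error";

// Each state pairs a color with a distinct icon AND a text label, so the
// information survives color blindness, high-contrast modes, and screen readers.
const statusDisplay: Record<Status, { color: string; icon: string; label: string }> = {
  connected: { color: "green",  icon: "●", label: "Connected" },
  thinking:  { color: "orange", icon: "…", label: "Assistant is thinking" },
  error:     { color: "red",    icon: "✕", label: "Connection lost" },
};

function showStatus(el: HTMLElement, status: Status) {
  const { color, icon, label } = statusDisplay[status];
  el.style.color = color;
  el.textContent = `${icon} ${label}`; // text carries the meaning, color reinforces it
}
```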

Provide Context and Structure

Users of assistive technology often navigate in a non-linear way. A screen reader user might jump from heading to heading. A voice user might skip around. Make sure your interface provides enough context and structure that people can understand what's happening even if they're not reading sequentially.
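
For an AI assistant specifically, one concrete application is announcing new replies to screen reader users as they arrive. A sketch using a standard ARIA live region:

```typescript
// Sketch: announce new assistant messages to screen readers via a live region.
// Without this, a screen reader user has no idea a reply has arrived unless
// they manually re-read the page.
const log = document.createElement("div");
log.setAttribute("role", "log");          // a conversation log, not a bare div
log.setAttribute("aria-live", "polite");  // announce additions without interrupting
document.body.appendChild(log);

function appendMessage(author: "You" | "Assistant", text: string) {
  const msg = document.createElement("p");
  // Prefix the author so each message makes sense out of visual context.
  msg.textContent = `${author}: ${text}`;
  log.appendChild(msg);
}
```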

Test at Every Stage

Accessibility isn't something you validate at the end. It's something you design for from the beginning and test continuously. Early in design, test with disabled users. As you build, test with screen readers, keyboard navigation, voice input. Get disabled people involved throughout development.
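
Automated checks won't replace testing with disabled users, but they catch the mechanical failures cheaply and continuously. A minimal sketch assuming the open-source axe-core library:

```typescript
import axe from "axe-core";

// Run axe against the assistant's UI and log any violations. Automated
// checks find mechanical issues (missing labels, contrast, invalid ARIA);
// manual and user testing cover everything else.
async function checkAccessibility(): Promise<boolean> {
  const results = await axe.run(document);
  for (const violation of results.violations) {
    console.warn(`${violation.id}: ${violation.help}`);
    for (const node of violation.nodes) {
      console.warn(`  at ${node.target.join(" ")}`);
    }
  }
  return results.violations.length === 0;
}
```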

Document Limitations Clearly

No assistant is perfectly accessible to everyone. Be honest about limitations. 'This assistant works with screen readers but not voice input.' 'You can use voice or keyboard, but not mouse.' Clear documentation helps people decide if your assistant will work for them.

Accessible AI Assistant Checklist

Input and Output

  • Support text input and output
  • Support voice input and output (with captions)
  • Support keyboard navigation
  • Support alternative input methods (voice commands, gestures, buttons)
  • Provide transcripts for audio output
  • Provide captions for any video or audio content

Markup and Structure

  • Use semantic HTML correctly (proper headings, lists, buttons, links)
  • Add ARIA labels where semantic HTML isn't sufficient
  • Ensure proper heading hierarchy
  • Test with screen readers (NVDA, JAWS, VoiceOver)
  • Ensure all interactive elements are accessible by keyboard

Visual and Sensory

  • Don't convey information through color alone
  • Ensure sufficient color contrast (WCAG AA minimum; a contrast-ratio sketch follows this list)
  • Provide alt text for all images
  • Support zoom without loss of functionality
  • Support high contrast modes
  • Don't require rapid or timed interactions
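
The contrast requirement above is fully mechanical, so it's easy to check in code. A sketch of the WCAG 2.x formula (AA requires at least 4.5:1 for normal text, 3:1 for large text):

```typescript
// Relative luminance of an sRGB color, per the WCAG 2.x definition.
function channel(v: number): number {
  const c = v / 255;
  return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), ranging from 1 to 21.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [l1, l2] = [luminance(...fg), luminance(...bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Example: mid-gray (#777777) text on white just fails AA for normal text.
console.log(contrastRatio([119, 119, 119], [255, 255, 255]).toFixed(2)); // ≈ 4.48
```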

Testing and Validation

  • Test with disabled users throughout development
  • Run automated accessibility checks regularly
  • Test with screen readers and voice input
  • Test keyboard navigation thoroughly
  • Conduct manual accessibility testing
  • Test with different devices and browsers

Documentation

  • Document supported accessibility features
  • Document known limitations
  • Provide clear instructions for alternative input methods
  • Include accessibility information in help and support

Who You're Serving by Building Accessible Assistants

When you build an accessible AI assistant, you're not just serving people with disabilities. You're also serving:

  • someone using your assistant in a noisy environment, where voice alone won't work
  • someone in a quiet environment where speaking aloud is awkward, and text is better
  • someone with a temporary injury, like a broken arm that rules out the mouse
  • someone using the interface on a different device than it was designed for
  • someone who simply prefers a different modality

Accessibility isn't just a niche concern. It's about making your assistant useful for the widest possible audience, under the widest possible circumstances.

The Business Case

Building accessible AI assistants is also good business. Accessible assistants have larger addressable markets. They're more resilient to changes in how people use them. They're less likely to face legal challenges. And they're more likely to serve users well across diverse environments and use cases.

Beyond that, accessibility forces you to think more clearly about your assistant's design and capabilities. Multimodal interfaces are often clearer, more robust, and easier to use—even for people without disabilities.

The Bottom Line

AI assistants are powerful. They're also powerful tools for inclusion or exclusion, depending on how you build them. If you build them without thinking about accessibility, you're leaving people out. If you build them with accessibility in mind from the start, you create something that works better for everyone.

Start now. Test with disabled users. Build multimodal interfaces. Support keyboard navigation and screen readers. Document your limitations. Your assistant will be better for it, and more people will be able to use it.

About Dr. Dédé Tetsubayashi

Dr. Dédé is a global advisor on AI governance, disability innovation, and inclusive technology strategy. She helps organizations navigate the intersection of AI regulation, accessibility, and responsible innovation.
