Hard question, because the field is so diverse. In a sense, accessibility is much more than just trying to make computers usable for the blind. At a fundamental level, it is about making software flexible enough to be used in different modalities. People with very limited motor abilities are quite capable of looking at a screen, but they need help moving the mouse and perhaps a good predictive on-screen keyboard to be able to type. Blind people, on the other hand, are mostly quite content with a standard keyboard, but they need a totally different kind of output, like tactile braille or synthesized speech.

For the output part, it boils down to having an API which lets a third-party app (like a screen reader) traverse the logical structure of what the application is presenting on-screen. That is mostly a sort of tree which reflects which widget contains what, and the different types of content. Such an API, however, is not only required for screen readers; it is also very useful for things like automated testing. So a web automation or testing framework could actually be written on top of the accessibility APIs, and sometimes actually is.

I am rambling about this to get you in the right mindset. It's so easy to miss the forest for all the trees around...
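To make the tree idea concrete, here is a minimal sketch in Rust of what such a logical structure and its traversal might look like. The types and names are invented for illustration, not any real platform API; UIA, AT-SPI and friends are much richer, but the shape is the same: roles, names, values, and children, consumed alike by a screen reader or a UI test.

```rust
// Hypothetical, simplified model of an accessibility tree.
// Real platform APIs (MSAA/UIA, AT-SPI, NSAccessibility) are richer,
// but the basic shape is roles, names/values, and children.

#[derive(Debug)]
enum Role {
    Window,
    Button,
    TextField,
    StaticText,
}

#[derive(Debug)]
struct AccessibleNode {
    role: Role,
    name: String,          // label a screen reader would speak
    value: Option<String>, // e.g. current text of a text field
    children: Vec<AccessibleNode>,
}

// A screen reader, or equally a UI test, walks the same tree.
fn walk(node: &AccessibleNode, depth: usize) {
    println!(
        "{:indent$}{:?}: {} {}",
        "",
        node.role,
        node.name,
        node.value.as_deref().unwrap_or(""),
        indent = depth * 2
    );
    for child in &node.children {
        walk(child, depth + 1);
    }
}

fn main() {
    let dialog = AccessibleNode {
        role: Role::Window,
        name: "Log in".into(),
        value: None,
        children: vec![
            AccessibleNode {
                role: Role::StaticText,
                name: "User name".into(),
                value: None,
                children: vec![],
            },
            AccessibleNode {
                role: Role::TextField,
                name: "User name".into(),
                value: Some("alice".into()),
                children: vec![],
            },
            AccessibleNode {
                role: Role::Button,
                name: "OK".into(),
                value: None,
                children: vec![],
            },
        ],
    };
    walk(&dialog, 0);
}
```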
That said, if we're talking about web accessibility, the obvious recommendation is the WAI WCAG. It may not be the best reference for learning how to implement things, but it's a good start.
Depending on the platform you're at home with, there are screen readers (NVDA, Orca, BRLTTY) which are open source and can be studied, both on the user side and on the "how is this implemented" side.
Installing NVDA on Windows and turning the monitor off is a good way to get your feet wet. It might feel strange at first, but you will notice that things can actually get done this way. It's also a good way to test a website if you have no specific accessibility know-how yet. Just try to navigate and read its contents.
> [...] and turning the monitor off is a good way to get your feet wet.
Is it weird that I, as someone with normal sight, had never thought of that as a simple way of testing whether your software (and the whole operating system together with it) works correctly with screen readers? It's like there's some sort of unconscious bias which links typing on a computer with its monitor being turned on.
And I lived through the times when most computers didn't come with any pointing device, which led to most software back then being accessible to keyboard-only users (notable exceptions being things like Paintbrush, which required a mouse), so I understand the link between the lack of a device and software being designed to work well without that device.
There seems to be quite widespread confusion regarding input and output devices when it comes to assistive technologies for the blind. I am asked a lot how my "braille keyboard" works, even by people from the tech industry. That's when I typically gently explain that a good secretary doesn't need to look at their keyboard: the faster you type, the more you have to type blindly. Most assistive technologies for the blind are about output, not about input, but the two are frequently confused.
> In a sense, accessibility is much more than just trying to make computers usable for the blind. At a fundamental level, it is about making software flexible enough to be used in different modalities. [...] For the output part, it boils down to having an API which lets a third-party app (like a screen reader) traverse the logical structure of what the application is presenting on-screen. [...] So a web automation or testing framework could actually be written on top of the accessibility APIs, and sometimes actually is.
I agree. If the features are well-designed, then they can be good for many uses, whether or not you are blind.
You could also add a pronunciation file (especially if a document uses unusual words). It is useful if you are blind and using synthesized speech, but also if you are not blind and do not know how a word is pronounced, you can easily learn. (Likewise, if you watch television you can turn on captions in case you do not know how to spell some unusual word (such as someone's name). Captions could also be useful for a "caption scrollback" menu that displays prior captions in a list; I have never seen this implemented, but I think it would be useful.)
Another situation where speech synthesis is often used (by people who are not blind) is GPS-based navigation systems. They often pronounce street names wrong, so adding pronunciation data, and then implementing it properly, would be better.
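As a rough sketch of what "pronunciation data" could mean in practice, here is a toy lookup in Rust that rewrites known words into a spoken form before the text reaches the synthesizer. The words and respellings are made up for illustration; a real system would use a proper phonetic notation (IPA, or SSML phoneme tags) rather than ad-hoc respellings.

```rust
use std::collections::HashMap;

// Hypothetical pronunciation dictionary: written form -> spoken form.
fn pronunciation_overrides() -> HashMap<String, String> {
    HashMap::from([
        ("Worcester".to_string(), "Wuster".to_string()),
        ("Schiphol".to_string(), "Skip-hol".to_string()),
    ])
}

// Replace each whitespace-separated word with its override, if any,
// before handing the text to the speech synthesizer.
fn apply_overrides(text: &str, dict: &HashMap<String, String>) -> String {
    text.split_whitespace()
        .map(|word| dict.get(word).cloned().unwrap_or_else(|| word.to_string()))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let dict = pronunciation_overrides();
    let announcement = "Turn left onto Worcester Road";
    println!("{}", apply_overrides(announcement, &dict));
    // -> "Turn left onto Wuster Road"
}
```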
(I have mentioned before that I think adding an "ARIA view" (with user-defined CSS) might be the best way to get a consistent visual display which uses ARIA instead of the visual styles defined by the web page author; widgets, etc. could then also be rendered consistently instead of each web page having its own widget styles. However, I have not seen such a thing implemented in a good way.)
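To illustrate the idea (not any existing implementation), here is a toy sketch in Rust: assume you already have the ARIA role and accessible name of each element, and the "ARIA view" simply picks one user-chosen presentation per role, ignoring the author's styling entirely.

```rust
// Toy sketch of an "ARIA view": render every element the same way
// based purely on its role. Roles and output are illustrative only;
// a real implementation would live inside a browser or user agent.

struct AriaElement<'a> {
    role: &'a str, // e.g. "button", "checkbox", "heading"
    name: &'a str, // accessible name
}

// One user-chosen presentation per role, applied to every page.
fn render(el: &AriaElement<'_>) -> String {
    match el.role {
        "heading" => format!("== {} ==", el.name),
        "button" => format!("[ {} ]", el.name),
        "checkbox" => format!("( ) {}", el.name),
        "link" => format!("<{}>", el.name),
        other => format!("{}: {}", other, el.name),
    }
}

fn main() {
    let page = [
        AriaElement { role: "heading", name: "Settings" },
        AriaElement { role: "checkbox", name: "Enable captions" },
        AriaElement { role: "button", name: "Save" },
    ];
    for el in &page {
        println!("{}", render(el));
    }
}
```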
I'm currently debating with myself whether I should create GUI programs (with the main code written in Rust) using Qt or Tauri. Tauri is a Rust-based GUI framework built on a webview, similar to Electron, although it can use the OS's native web renderer [1]. Do you think one would be better than the other with regard to accessibility? Or is it mostly a question of how I, as a programmer, make use of the tools? For context, these are currently just small tools and utilities I make on my own and provide as open source.
I don't currently have Windows. Is there a good way for me to test accessibility on Linux? As a fallback, I will get myself Windows once I port the tools to Windows, so I could test accessibility then.
I have no experience with webview-based local apps and their accessibility, nor have I ever looked at Tauri, so I'd have to check whether it works. I am a bit reluctant to recommend Qt because they have let me down in the past at times, but all in all, Qt is mostly accessible, even cross-platform.
Which brings me to your second question. Linux has a GUI accessibility API as well, the AT-SPI. GNOME Orca is the screen reader to use on Linux. If your distro configures things right by default, you should be able to access your Qt application with Orca as a screen reader on Linux.
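If you want a quick sanity check that your session has toolkit accessibility switched on before you start Orca, something like the sketch below might help. It assumes a GNOME-style desktop where the gsettings key org.gnome.desktop.interface toolkit-accessibility exists; recent desktops often enable AT-SPI support unconditionally, so treat this purely as a convenience check.

```rust
use std::process::Command;

// Quick sanity check: ask gsettings whether toolkit accessibility
// (AT-SPI support in toolkit apps) is switched on for this session.
// Assumes a GNOME-style desktop; on other setups this key may not exist.
fn main() {
    let output = Command::new("gsettings")
        .args(["get", "org.gnome.desktop.interface", "toolkit-accessibility"])
        .output();

    match output {
        Ok(out) if out.status.success() => {
            let value = String::from_utf8_lossy(&out.stdout);
            println!("toolkit-accessibility = {}", value.trim());
        }
        _ => println!(
            "Could not query gsettings; check your desktop's accessibility settings manually."
        ),
    }
}
```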