
Aros/Developer/Docs/Devices/Narrator


Introduction


Narrator.device, the Amiga's speech synthesis device, was originally written by SoftVoice and commissioned by Commodore for the early versions of AmigaOS.

Theoretically, the approach is to SetFunction() the IntuiText rendering, but this is only a theory... I think that would be messy, because the output might end up really jumbled, and/or a lot of work would be needed to reconstruct the order in which the strings are placed on screen to make sure the spoken result makes any kind of sense. But I guess it is the "worst case" fallback for situations where we can't get more context. The biggest challenge would be to figure out how to let applications make their text content accessible to the screen reader in a decent way, as well as how to decide what to read. Do you know anything about the accessibility APIs of other platforms? Well, yes: generally the windows can be checked to see whether one of them is on top and whether a given point lies inside it, or, if it is not on top, whether it is partially visible at that position.
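
As a rough illustration of that idea, here is a minimal sketch of what such a patch could look like on classic AmigaOS, using SAS/C-style register annotations (AROS would use its AROS_LH macros instead). The LVO offset should really be taken from the SDK headers, and SpeakText() is a hypothetical screen-reader entry point, not an existing function:

 #include <exec/types.h>
 #include <intuition/intuition.h>
 #include <proto/exec.h>

 /* The LVO offset must match the SDK headers (_LVOPrintIText);
    -216 is the classic intuition.library value. */
 #define LVO_PrintIText (-216)

 extern struct Library *IntuitionBase;   /* must already be open */

 /* Pointer type matching PrintIText()'s m68k register convention
    (SAS/C syntax; AROS would use its AROS_LH macros instead). */
 typedef VOID __asm (*PrintITextPtr)(register __a0 struct RastPort *,
                                     register __a1 struct IntuiText *,
                                     register __d0 LONG,
                                     register __d1 LONG);

 static PrintITextPtr OldPrintIText;

 VOID SpeakText(UBYTE *text);   /* hypothetical screen-reader hook */

 static VOID __asm PatchedPrintIText(register __a0 struct RastPort *rp,
                                     register __a1 struct IntuiText *itext,
                                     register __d0 LONG left,
                                     register __d1 LONG top)
 {
     struct IntuiText *it;

     /* Forward every string in the IntuiText chain to the reader... */
     for (it = itext; it != NULL; it = it->NextText)
         if (it->IText != NULL)
             SpeakText(it->IText);

     /* ...then let the original function do the actual rendering. */
     OldPrintIText(rp, itext, left, top);
 }

 VOID InstallPatch(VOID)
 {
     OldPrintIText = (PrintITextPtr)
         SetFunction(IntuitionBase, LVO_PrintIText, (APTR)PatchedPrintIText);
 }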

Emacs can be operated 100% from the keyboard, and I know there are blind developers using Emacspeak with it. On Windows I usually use TextPad - it is very accessible - though back when I was sighted and worked on the Amiga, I used GoldEd.

Generally there is a need to create several off-screen models...

  • All the text on the screen belonging to any of the windows - catching the IntuiText calls will be good enough for me here.
  • The text in particular controls, like buttons, lists, etc. This should be done individually for each kind of GUI toolkit - GadTools, ReAction/ClassAct, MUI, BGUI, and so on. I think we can do even better by adding an API that lets applications themselves provide more structured semantic information, but I guess starting by hooking into what we have, and seeing what needs to be improved to make it usable, would be a decent start. The idea is really good, but I want to understand whether it is possible without breaking the AmigaOS APIs - maybe a screen reader can be built for AmigaOS without modifying it at all...

Definitely, for GadTools and for MUI it is possible to simply SetFunction() and get at the internal information (see the GadTools sketch after this list), though I am not familiar with the internals of ReAction and BGUI. Maybe new interfaces have appeared since the time I was using AmigaOS...

  • A lot of applications do not support standard Tab cycling between controls - that will be a real problem, and a really hard thing to fix, I believe.
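
As a hedged sketch of the GadTools case mentioned above: one could patch CreateGadgetA() so that every created gadget's label lands in the off-screen model. Again the LVO value should come from the SDK headers, register conventions are written SAS/C-style, and RecordLabel() is a hypothetical hook:

 #include <exec/types.h>
 #include <intuition/intuition.h>
 #include <libraries/gadtools.h>
 #include <utility/tagitem.h>
 #include <proto/exec.h>

 #define LVO_CreateGadgetA (-30)   /* take _LVOCreateGadgetA from the SDK headers */

 extern struct Library *GadToolsBase;   /* must already be open */

 typedef struct Gadget * __asm (*CreateGadgetAPtr)(register __d0 ULONG,
                                                   register __a0 struct Gadget *,
                                                   register __a1 struct NewGadget *,
                                                   register __a2 struct TagItem *);
 static CreateGadgetAPtr OldCreateGadgetA;

 /* Hypothetical off-screen-model hook. */
 VOID RecordLabel(ULONG kind, struct Gadget *gad, UBYTE *label);

 static struct Gadget * __asm PatchedCreateGadgetA(register __d0 ULONG kind,
                                                   register __a0 struct Gadget *prev,
                                                   register __a1 struct NewGadget *ng,
                                                   register __a2 struct TagItem *tags)
 {
     struct Gadget *gad = OldCreateGadgetA(kind, prev, ng, tags);

     /* struct NewGadget carries the visible label, so the off-screen
        model can associate it with the gadget that was just created. */
     if (gad != NULL && ng != NULL && ng->ng_GadgetText != NULL)
         RecordLabel(kind, gad, ng->ng_GadgetText);

     return gad;
 }

 VOID InstallGadToolsPatch(VOID)
 {
     OldCreateGadgetA = (CreateGadgetAPtr)
         SetFunction(GadToolsBase, LVO_CreateGadgetA, (APTR)PatchedCreateGadgetA);
 }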

If I get time, I'd love to look at a simple narrator.device-compatible wrapper for FLite. FLite is formant-based like the original narrator.device, so it ought to be possible to implement some of the same kinds of parameters (setting pitch etc. to get different voices)... The limiting factor is time. Alternatively, I'm happy to help out if you or someone else has time to look at it. I actually got it compiled under AROS (command line with wav-file output, not direct to audio), but there's some bug I need to figure out, as I wasn't able to play the resulting wav file anywhere... Hopefully it's a simple bug - once the wav output works, turning it into a device should be reasonably simple.
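
For reference, driving FLite from C takes only a few lines. A minimal sketch that writes a wav file, assuming the bundled cmu_us_kal voice (link against -lflite_cmu_us_kal -lflite_usenglish -lflite_cmulex -lflite):

 #include <flite.h>

 cst_voice *register_cmu_us_kal(const char *voxdir);

 int main(void)
 {
     cst_voice *voice;

     flite_init();
     voice = register_cmu_us_kal(NULL);

     /* The third argument is "play" for direct audio output, or a
        file name for wav output - the latter sidesteps the audio
        driver entirely, which is handy for testing under AROS. */
     flite_text_to_speech("Hello from the narrator wrapper.", voice, "test.wav");

     return 0;
 }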

The screen reader API would probably be best created as a separate library that could be shipped with the applications that need it. An AmigaOS version could rely on SetFunction() etc. to tap into the necessary libraries, while on AROS there would be more flexibility to integrate it properly. Legacy applications can be supported via the generic methods, while new applications get an API through which they can provide additional information to make it work better.

Windows screen readers usually rely on the keyboard and the Tab order of Windows applications, but the iOS screen reader VoiceOver works quite differently: it announces the object under your finger as you move it across the screen, and the object is activated when you double-tap elsewhere on the screen while continuing to hold the finger over it. Both approaches can be combined here: if an application supports Tab order, the more productive Windows approach can be used; if not, you can move the mouse over the screen (or a finger over the touch-pad) and operate in the VoiceOver manner.

Hmm, I have the following idea:

  • A special library - call it accessibility.library - is developed by me.
  • The structure of this library is very similar to datatypes.library: it has modules like MUI.accessibility, BGUI.accessibility and so on, which crawl inside the corresponding GUI toolkit, gather all the necessary internal information, and hand it to the central module, accessibility.library (see the interface sketch after this list).
  • The final application does not need to care about the differences; it simply uses the library's standard interface.
  • A GUI developer can write the matching module for his own toolkit if he wants.
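
To make the idea more concrete, here is a hypothetical sketch of what the public face of such an accessibility.library could look like. Every name in it is invented for illustration:

 /* Hypothetical public face of the proposed accessibility.library.
    The per-toolkit crawlers (MUI.accessibility, BGUI.accessibility,
    ...) would live behind this interface, much as datatypes hide
    their decoders behind datatypes.library. */

 #include <exec/types.h>
 #include <intuition/intuition.h>

 struct AccessibleElement
 {
     struct Window *ae_Window;    /* owning window           */
     ULONG          ae_Role;      /* button, list, text, ... */
     STRPTR         ae_Label;     /* text to be spoken       */
     WORD           ae_Left, ae_Top, ae_Width, ae_Height;
 };

 /* Enumerate everything the toolkit-specific module could recover
    from a window; the caller never learns whether the data came
    from GadTools, MUI or BGUI. */
 LONG ACC_GetElements(struct Window *win,
                      struct AccessibleElement *buffer, LONG maxElements);

 /* Element under a screen position, for VoiceOver-style exploration
    by mouse or touch-pad. */
 struct AccessibleElement *ACC_ElementAt(struct Screen *scr, WORD x, WORD y);

 /* Optional richer path: a new application registers its semantic
    information itself instead of relying on the crawlers. */
 LONG ACC_RegisterElement(struct Window *win,
                          const struct AccessibleElement *elem);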

Examples

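A classic-style usage sketch for narrator.device together with translator.library, modelled on the original RKM examples (error paths trimmed; field names as in devices/narrator.h):

 #include <exec/exec.h>
 #include <devices/narrator.h>
 #include <proto/exec.h>
 #include <proto/translator.h>
 #include <string.h>

 struct Library *TranslatorBase;

 /* Channel-allocation maps for the four usable left/right pairs. */
 static UBYTE chans[] = { 3, 5, 10, 12 };

 int main(void)
 {
     struct MsgPort     *port;
     struct narrator_rb *nrb;
     UBYTE phonemes[512];

     if ((TranslatorBase = OpenLibrary("translator.library", 0)) == NULL)
         return 20;

     if ((port = CreateMsgPort()) != NULL)
     {
         nrb = (struct narrator_rb *)
               CreateIORequest(port, sizeof(struct narrator_rb));
         if (nrb != NULL)
         {
             if (OpenDevice("narrator.device", 0,
                            (struct IORequest *)nrb, 0) == 0)
             {
                 /* translator.library turns English into phonemes. */
                 if (Translate("hello world", 11,
                               phonemes, sizeof(phonemes)) == 0)
                 {
                     nrb->ch_masks           = chans;
                     nrb->nm_masks           = sizeof(chans);
                     nrb->message.io_Command = CMD_WRITE;
                     nrb->message.io_Data    = phonemes;
                     nrb->message.io_Length  = strlen((char *)phonemes);
                     DoIO((struct IORequest *)nrb);   /* speak, synchronously */
                 }
                 CloseDevice((struct IORequest *)nrb);
             }
             DeleteIORequest((struct IORequest *)nrb);
         }
         DeleteMsgPort(port);
     }
     CloseLibrary(TranslatorBase);
     return 0;
 }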
