I am trying to get VoiceOver to read a specified string when it encounters a text attachment in an NSTextView. By default, it reads “embedded <type> image.” I would like it to instead read e.g. “Square, image.”
After my previous experience with links[1] in an NSTextField, it seemed that the right way to do this was to use proxy objects. I modelled my attempt on what TextEdit does when you drop an image into an RTF file.
I’ve made a sample app[2] which demonstrates the broken behaviour. I would expect the inline green square to be read as “Square”, given that that is its AXDescription. But instead, when VoiceOver selects it, it reads nothing at all.
[2] https://github.com/nk9/TextAttachmentAccessibility
I implemented the protocol methods necessary to stop the compiler from complaining about my NSAccessibilityImage, but it’s still missing some properties which TextEdit includes: AXWindow, AXTopLevelUIElement, AXEnabled, AXFocused, and AXURL. Implementing the first two doesn’t change the behaviour, and I don’t really know how to properly implement the third and fourth. The last one is hopefully unnecessary, given that I don’t want VoiceOver to read the image’s type.
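For reference, a minimal version of the kind of proxy I’m describing might look like the sketch below. The class name is mine, and I’m assuming the hosting NSTextView is passed in as the accessibility parent; AXWindow and AXTopLevelUIElement are supplied explicitly the way TextEdit appears to, even though AppKit can usually derive them from the parent chain:

```swift
import AppKit

// A minimal accessibility proxy for a text attachment image.
// Assumption: the hosting NSTextView passes itself in as `parent`.
class AttachmentAccessibilityElement: NSAccessibilityElement {
    init(label: String, parent: NSView, frame: NSRect) {
        super.init()
        setAccessibilityRole(.image)
        setAccessibilityLabel(label)                 // what VO should read, e.g. "Square"
        setAccessibilityParent(parent)
        setAccessibilityFrameInParentSpace(frame)    // attachment's rect in the parent view

        // AXWindow / AXTopLevelUIElement, mirroring what TextEdit exposes:
        setAccessibilityWindow(parent.window)
        setAccessibilityTopLevelUIElement(parent.window)

        // AXEnabled / AXFocused for a static, non-interactive image:
        setAccessibilityEnabled(true)
        setAccessibilityFocused(false)
    }
}
```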
I’d like to know if I’m approaching this problem from the right direction. Any pointers?
In my real app, I also tried setting the -accessibilityDescription property directly on the NSImage, without any proxy objects, but that didn’t change the behaviour either. I couldn’t try that in the sample app because it creates an NSFileWrapper directly, so there is no NSImage on which to set the description.
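For completeness, that direct-NSImage attempt looks roughly like this (the asset name is hypothetical, with a blank fallback image so the sketch stands alone):

```swift
import AppKit

// Tag the image itself, with no proxy object involved.
// "square" is a hypothetical asset name; fall back to a blank image.
let image = NSImage(named: "square") ?? NSImage(size: NSSize(width: 32, height: 32))
image.accessibilityDescription = "Square"

// Wrap it in a text attachment for insertion into the text view's storage.
let attachment = NSTextAttachment()
attachment.image = image
let attributedSquare = NSAttributedString(attachment: attachment)
```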
Thanks in advance,
-Nick