Re: Questions about document accessibility
- Subject: Re: Questions about document accessibility
- From: "Steve H." <email@hidden>
- Date: Thu, 18 Aug 2011 16:12:32 -0400
Hi Chris,
Thanks for your quick reply! (more inline below)
On 2011-08-18, at 3:45 PM, Chris Fleizach wrote:
>
> On Aug 18, 2011, at 12:34 PM, Steve H. wrote:
>
>> Hi,
>>
>> As I've written before, I've been working on making my PDF reading/annotating app (iAnnotate PDF) accessible. Fortunately, I'm getting close to being finished.
>>
>> However, I have a few outstanding questions that I would like some advice about. My guess is that these are all issues I am just going to have to deal with, but I was hoping that, if others have dealt with them before, there might be some "collective wisdom" here.
>>
>> 1) I am posting UIAccessibilityAnnouncementNotifications in a few places in order to notify the user about asynchronous popups and status messages. For the most part this is working OK, but my testing has revealed that for some of them VoiceOver systematically gets cut off midway through reading the announcement. I think this may be related to other (built-in) UI components posting a "ScreenChanged" or "LayoutChanged" notification around the same time, although I'm not sure. Is there a way I can prevent this from happening?
>>
>
> When VO sees a screen change it will cut off speaking and search for the first element on the screen.
>
> Try not to send screen changes and announcements at the same time, or order the screen change first.
Just to be clear, I'm pretty sure my code is not sending them explicitly in these instances. In other words, I think they may be sent implicitly / automatically by some other (non-custom) UI controls on the screen (although I can't figure out what would be doing it). Does both a "LayoutChanged" (which I *may* be sending) and a "ScreenChanged" notification cut off speech this way, or just a ScreenChanged notification?
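(For anyone following along: one workaround I've seen suggested is to defer the announcement briefly so it lands after any screen/layout-changed notifications posted in the same run-loop turn by other controls. A minimal sketch in modern Swift syntax; the 2011-era equivalent is the C function UIAccessibilityPostNotification with UIAccessibilityAnnouncementNotification, and the 0.5-second delay is an assumed tuning value, not an Apple-documented figure.)

```swift
import UIKit

/// Post a VoiceOver announcement slightly after the current run-loop turn,
/// so it is not immediately interrupted by ScreenChanged/LayoutChanged
/// notifications that other UI components post in the same turn.
func announce(_ message: String, after delay: TimeInterval = 0.5) {
    DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
        UIAccessibility.post(notification: .announcement, argument: message)
    }
}

// Usage (hypothetical): announce("Annotation saved") once the status
// popup has appeared and any automatic screen changes have settled.
```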
>> 1A) Related to these announcements, when they finish reading, VoiceOver will then focus on and read the first element on the page, no matter what it had been reading prior to the announcement. Is there a way I can either (a) get VoiceOver to keep its original focus during the announcement, or (b) programmatically set VoiceOver's focus after the announcement?
>>
>
> This will occur if a screen change happens. It sounds like you might not need a screen change
Right, I agree! I'm just trying to prevent it.
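(Re: (b), programmatically setting VoiceOver's focus: posting a layout-changed notification with a specific element as the argument asks VoiceOver to move focus to that element. A sketch in modern Swift; `previouslyFocusedElement` is a hypothetical reference the app would keep to whatever the user was reading before the popup appeared.)

```swift
import UIKit

/// Steer VoiceOver focus back to a specific element after an announcement
/// by posting a layout-changed notification whose argument is that element.
func restoreVoiceOverFocus(to previouslyFocusedElement: Any) {
    UIAccessibility.post(notification: .layoutChanged,
                         argument: previouslyFocusedElement)
}
```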
>> 2) In addition to reading the PDF text on a page (which currently works nicely), I'd like to make the text selecting / highlighting / underlining / etc. features of iAnnotate accessible. For sighted users, these features involve dragging your stylus or finger over the characters in the text you want to highlight. I haven't figured out how to do this well using VoiceOver. Any tips?
>>
>
> The API would need to be extended for this
OK. I was afraid of that.
>> 2A) One of the issues w.r.t. text markup is that, for most PDFs, I give the text to VoiceOver via the accessibilityValue of a custom PDF-specific page view. Thus, while VoiceOver knows *roughly* how touches correspond to the text in that region, it cannot access the actual character and line that the finger is currently touching. I think I may be able to get around this using the UIAccessibilityReadingContent protocol, but I haven't tried yet; any other suggestions would be appreciated as well. In any event, even if I solve this (which I think I can do eventually with enough work), I am still faced with the general problem from #2 of how to make an essentially "dragging" UI accessible.
>>
>
> that is what the reading content protocol is for. you can achieve the level of granularity you're looking for with that
Good, I'll keep working along that front then. Thanks!
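(For the archives, here is roughly the shape of an UIAccessibilityReadingContent adoption on a PDF page view, in modern Swift syntax. The `Line` model — per-line text plus on-screen frame — is a hypothetical structure the app would build from its own PDF text extraction; the protocol methods themselves are the real iOS 5 API.)

```swift
import UIKit

/// Sketch: adopt UIAccessibilityReadingContent on a PDF page view so that
/// VoiceOver can resolve the exact line of text under the user's finger.
class PDFPageView: UIView, UIAccessibilityReadingContent {
    struct Line {
        let text: String
        let frame: CGRect  // line's frame in screen coordinates
    }
    var lines: [Line] = []  // populated from the app's PDF text extraction

    func accessibilityLineNumber(for point: CGPoint) -> Int {
        // Index of the line whose frame contains the touch point,
        // or NSNotFound if the point lies outside every line.
        lines.firstIndex { $0.frame.contains(point) } ?? NSNotFound
    }

    func accessibilityContent(forLineNumber lineNumber: Int) -> String? {
        lines.indices.contains(lineNumber) ? lines[lineNumber].text : nil
    }

    func accessibilityFrame(forLineNumber lineNumber: Int) -> CGRect {
        lines.indices.contains(lineNumber) ? lines[lineNumber].frame : .null
    }

    func accessibilityPageContent() -> String? {
        lines.map(\.text).joined(separator: "\n")
    }
}
```

One caveat worth noting: the frames handed back to accessibility must be in screen coordinates, so view-local line rects would need converting (e.g. with UIAccessibility.convertToScreenCoordinates(_:in:) in modern Swift) before being stored or returned.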
>> 3) (Related to both #1 and #2.) It would be really nice to be able to get the "thing most recently read" by VoiceOver (as well as the current "Unit" when the user is using the Web Rotor tool). Is this possible in any way?
>>
>> 3A) Even though I know this isn't possible in iOS 5, the ideal annotation interface for me would work in conjunction with the Web Rotor tool. As discussed in a previous thread (and filed in a feature request to Apple), in the ideal world I could make my PDF document structure visible to VoiceOver somehow, such that the user could navigate it using the Web Rotor (as they would a web page). If this were possible, and if I had a solution to #3, the highlighting interface would be simple: the user turns the rotor to pick the "unit" they want to highlight (say, "word" or "sentence"), navigates to the target element with single swipes, and I map the button press to "highlight the current VoiceOver unit". I mention this both because I think it lends more weight to my feature request and because I wanted to give you some idea of what I would ideally like to have for markup annotations, in case people know ways I might at least come close to achieving it.
>>
>
> I think if we could allow the API to support arbitrary rotor elements then this would solve your problems
Yeah, I agree. I'm looking forward to that! :-)
Thanks again for your response!
>> Thanks in advance for any comments/advice on these things!
>>
>> Steve
>> _______________________________________________
>> Do not post admin requests to the list. They will be ignored.
>> Accessibility-dev mailing list (email@hidden)
>> Help/Unsubscribe/Update your Subscription:
>>
>> This email sent to email@hidden
>