Re: Making a crossword grid accessible
- Subject: Re: Making a crossword grid accessible
- From: Kieren Eaton <email@hidden>
- Date: Tue, 16 Dec 2008 08:24:37 +0900
> Hello all, I'm writing for the first time, after recently diving
> into what is turning out to be a challenging accessibility exercise.
> I would appreciate any experienced developers or designers giving me
> some advice on how I could best achieve accessibility in this
> situation.
> I am the developer of Black Ink, an application for downloading and
> solving crossword puzzles on the Mac. It was brought to my
> attention that the application is almost completely inaccessible,
> due largely to the fact that the crossword solving "grid" is a
> custom view which doesn't currently expose any UIElements.
I know you are a seasoned developer, but maybe this will help with
your "faux" elements, specifically the accessibilityIsIgnored stuff,
which returns YES by default in a lot of classes that are deemed not
essential in a UI.
http://developer.apple.com/ue/accessibility/accessibilitycustomviews.html
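The approach in that document can be sketched roughly like this; the class name, ivars, and answer format below are illustrative assumptions, not taken from Black Ink:

```objc
#import <Cocoa/Cocoa.h>

// Hypothetical faux UI element representing one word in the grid.
// A complete element would also answer accessibilityAttributeNames,
// position/size, hit-testing, etc.; this shows only the core idea.
@interface BIWordElement : NSObject {
    NSView   *parentView;  // the real NSView that owns this faux element
    NSString *answer;      // letters entered so far
}
@end

@implementation BIWordElement

// Faux elements must opt in explicitly; objects deemed non-essential
// are ignored by default and never reach VoiceOver.
- (BOOL)accessibilityIsIgnored {
    return NO;
}

- (id)accessibilityAttributeValue:(NSString *)attribute {
    if ([attribute isEqualToString:NSAccessibilityRoleAttribute])
        return NSAccessibilityTextFieldRole;
    if ([attribute isEqualToString:NSAccessibilityRoleDescriptionAttribute])
        return NSAccessibilityRoleDescription(NSAccessibilityTextFieldRole, nil);
    if ([attribute isEqualToString:NSAccessibilityParentAttribute])
        return NSAccessibilityUnignoredAncestor(parentView);
    if ([attribute isEqualToString:NSAccessibilityValueAttribute])
        return answer;
    return nil;
}

@end
```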
> I am planning to "test" the results of this work with a blind user
> who has asked for the functionality, so these assumptions are by no
> means set in stone, but I think some behaviors include:
I am sure a couple of us blind developers, myself included, would be
willing to help in that respect.
> 1. When a square on the puzzle is first selected, the associated
> clue is announced, along with the currently entered answer letters
> for that word, and the cursor position within that word. For
> instance, when the user selects the first word in this puzzle:
> http://www.red-sweater.com/snips/AccessibleXword-20081215-154434.png
> I believe VoiceOver should announce something along the lines of:
> "one across, crack and redden, four letters, first letter selected,
> existing answers blank, H, blank, P."
> 2. As the user enters answer letters, VoiceOver should announce the
> typed letter.
> 3. As the selection within a word changes, VoiceOver should simply
> announce the new position within the word. E.g. if the user presses
> the right arrow key from the state in the image above, I would hope
> for something like "second letter selected, existing answer H".
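For what it's worth, composing the announcements described in (1) and (3) is plain string work once the element itself is focusable. Everything named here is a hypothetical helper working against an assumed puzzle model, not existing API:

```objc
// Build "crack and redden, four letters, letter one selected, ..." text.
// clue, letters, and cursor come from the puzzle model (assumed); an
// empty string in letters means a square with no answer entered yet.
static NSString *BIAnnouncementForWord(NSString *clue, NSArray *letters,
                                       NSUInteger cursor)
{
    NSMutableArray *spoken = [NSMutableArray array];
    for (NSString *letter in letters)
        [spoken addObject:([letter length] ? letter : @"blank")];
    return [NSString stringWithFormat:
        @"%@, %lu letters, letter %lu selected, existing answers %@",
        clue, (unsigned long)[letters count], (unsigned long)(cursor + 1),
        [spoken componentsJoinedByString:@", "]];
}
```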
If you can get VO to focus on the object, then you could just use
NSSpeechSynthesizer to announce whatever is required for the
particular object/word/clue, etc.
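A minimal version of that suggestion; NSSpeechSynthesizer is standard AppKit, and the string is just the example announcement from item 3 above:

```objc
// In practice you would keep one synthesizer around for the view's
// lifetime rather than creating one per announcement.
NSSpeechSynthesizer *synth = [[NSSpeechSynthesizer alloc] initWithVoice:nil];

// Cut off any announcement still in progress, then speak the new state.
[synth stopSpeaking];
[synth startSpeakingString:@"second letter selected, existing answer H"];
```

Note this bypasses VoiceOver's own output entirely, so sighted-plus-VO users may hear it layered over whatever VO decides to say on its own.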
> At first I decided to implement the UI element as a "table" role,
> seeing as how the crossword is comprised of a grid structure and has
> rows and columns. But I believe this is something of an
> implementation detail for sighted users, and not particularly
> effective for conveying the structure of a puzzle to e.g. a blind
> user.
> The way I'm thinking of it now is that the puzzle grid is a group
> that contains "Word" UIElements which are of a text field role. But
> even with this approach, getting the kind of VoiceOver effect I'm
> hoping for seems to be difficult or impossible.
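That "group comprised of words" shape might look roughly like this on the NSView subclass itself; wordElements is an assumed ivar holding the faux word elements:

```objc
// The real NSView subclass plays the group role and vends faux children.

- (BOOL)accessibilityIsIgnored {
    return NO;
}

- (id)accessibilityAttributeValue:(NSString *)attribute {
    if ([attribute isEqualToString:NSAccessibilityRoleAttribute])
        return NSAccessibilityGroupRole;
    if ([attribute isEqualToString:NSAccessibilityChildrenAttribute])
        return NSAccessibilityUnignoredChildren(wordElements);
    return [super accessibilityAttributeValue:attribute];
}
```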
That would be much better for a navigation system; then you could
interact with the "word" for its individual elements.
Another option might be to have a separate window which could be
hidden, and so is not seen by the sighted user, but is still
available to VO users.
This could be accessed from the clues list (I have not checked your
app out (yet)) and would enter data into your main view as characters
were added to it.
> I've found in the archives evidence that it might be impossible to
> get VoiceOver to notice value changes in "faux" UI elements.
> Currently, the puzzle view is a true subclass of NSView, but all of
> my interior constructs for accessibility are faux UI elements.
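For reference, the notification calls themselves are trivial; whether VoiceOver actually honors them when the element is a faux object rather than a real view is exactly the open question from those archive threads. wordElement and puzzleView are assumed names:

```objc
// After the user types a letter into the focused word element:
NSAccessibilityPostNotification(wordElement,
                                NSAccessibilityValueChangedNotification);

// And when keyboard focus moves between words (posted here from the
// containing view; which object should post it is part of the question):
NSAccessibilityPostNotification(puzzleView,
                                NSAccessibilityFocusedUIElementChangedNotification);
```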
> Does anybody have opinions on how I should pursue this? Do you think
> I should stick with the "group comprised of words" approach, or
> should I go back to a strictly grid-oriented representation? Given
> the specific types of puzzle-solving feedback I want to convey to
> users, I'm beginning to think that it will be impossible without
> hacking/abusing some existing role to achieve the ends. For
> instance, I could imagine having my NSView fulfill the "text area"
> role and, in order to get certain audio feedback to the user, having
> their navigation of my UI fake certain manipulations of text. Of
> course, this feels like it would be a means to an end for VoiceOver
> in particular, and may be at the expense of general accessibility
> for other purposes.
> Appreciate any thoughts on this. I realize it's sort of a
> long-winded inquiry, but I hope that somebody who is fascinated by
> this kind of problem will find it worth digging into :)
Sounds interesting. Happy to help where we can, as accessibility is
becoming more prevalent on the Mac now that VO has come of age.
Kieren
Olearia - Bringing Talking Books to Mac OS X
http://olearia.googlecode.com/
Accessibility-dev mailing list (email@hidden)