Re: Making a crossword grid accessible
- Subject: Re: Making a crossword grid accessible
- From: James Dempsey <email@hidden>
- Date: Mon, 15 Dec 2008 17:20:06 -0800
Daniel,
You raise some excellent questions. I've also responded to some other
replies inline below.
As you're finding, games tend to be one of the more challenging
accessibility areas for a few reasons:
1. Custom views that need to be made accessible
2. Custom interaction model specific to the game
3. Elements that don't quite fit into existing roles/subroles
Even a fairly simple game like the Dicey accessibility sample code
example hits upon these challenges.
There is also a known limitation where assistive applications cannot
register to receive notifications on 'faux' UI elements. It is
through notifications that VoiceOver is able to determine that the
value of a text field has changed. So you are correct that VoiceOver
will not read changes to your faux elements. I will contact you off-
list to get some more information from you, to see if there are any
workarounds that would be appropriate for your app.
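For reference, posting a value-changed notification looks roughly like the sketch below (modern Swift convenience API shown for brevity; `wordElement` is a placeholder for whatever object represents the word or cell, and the limitation above is exactly that such posts are not delivered when that object is a faux element):

import AppKit

// Minimal sketch, not drop-in code: tell the accessibility system that an
// element's value changed so VoiceOver can re-read it.
// `wordElement` stands in for the app's own element object.
func letterDidChange(in wordElement: Any) {
    NSAccessibility.post(element: wordElement, notification: .valueChanged)
}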
You might make the crossword accessible as an AXGrid. One benefit is
that AXGrid is a simpler construct than an AXTable, but still provides
a sense of row/column. A drawback to both AXGrid and AXTable is that
a user would need to navigate through the dead spaces on the board.
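To make the shape of that suggestion concrete, here is a rough sketch of a grid of cell elements built with the modern NSAccessibilityElement convenience (in 2008 this would go through the informal protocol instead; all names are illustrative, and real code would also give each cell a frame for hit-testing):

import AppKit

// Sketch only: expose the puzzle as an AXGrid whose children are AXCell
// elements. `rows`, `columns`, and `letterAt` are assumed helpers.
func makeGridElement(for puzzleView: NSView, rows: Int, columns: Int,
                     letterAt: (Int, Int) -> String?) -> NSAccessibilityElement {
    let grid = NSAccessibilityElement()
    grid.setAccessibilityRole(.grid)
    grid.setAccessibilityParent(puzzleView)
    grid.setAccessibilityRowCount(rows)
    grid.setAccessibilityColumnCount(columns)

    var cells: [NSAccessibilityElement] = []
    for row in 0..<rows {
        for column in 0..<columns {
            let cell = NSAccessibilityElement()
            cell.setAccessibilityRole(.cell)
            cell.setAccessibilityParent(grid)
            // A black square reports an empty value here; whether to expose
            // it at all is the "dead space" trade-off mentioned above.
            cell.setAccessibilityValue(letterAt(row, column) ?? "")
            cells.append(cell)
        }
    }
    grid.setAccessibilityChildren(cells)
    // The puzzle view would return `grid` from its own accessibilityChildren.
    return grid
}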
If you can get VO to focus on the object, then you could just use
NSSpeechSynthesizer to announce whatever is required for the
particular object/word/clue, etc.
In general, you don't want to use NSSpeechSynthesizer in addition to
VoiceOver. Using NSSpeechSynthesizer to add speech features to an
application is a great addition, but it is separate from the accessibility
information that VoiceOver reports. For instance,
TextEdit has Edit > Speech > Start Speaking, but that feature is
unrelated to VoiceOver accessing TextEdit. It's better to focus on
ensuring that the information reported to AX is correct.
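To illustrate the distinction, app-level speech looks like this (a minimal sketch; speakClue is a hypothetical helper, and nothing here is seen by VoiceOver):

import AppKit

// App-provided speech, analogous to TextEdit's Start Speaking menu item.
// This is a feature of the app itself, not accessibility information.
let synthesizer = NSSpeechSynthesizer()

func speakClue(_ clue: String) {
    _ = synthesizer.startSpeaking(clue)
}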
Travis, practically speaking, how would a VoiceOver user approach
the "just a table" situation? Do you imagine VoiceOver speaking
something as terse as "5, 8, H" or "7, 12, empty" as they navigate
around? I guess my hesitation is: why simulate a system that was
designed for sighted users, if (and that's the question) there is a
better way of organizing the game for VoiceOver users?
Almost always, the philosophy of the accessibility API on Mac OS X is
to represent what is actually visible in the user interface through
the AX hierarchy. However, I did think for a moment of a window that
provides an alternate view of a crossword puzzle. Possibly just a
table view with clues in one column and answers in the second column.
The answers would hold an asterisk for each letter not yet filled in.
A VO user, or a sighted user, could then just run through the clues
and fill in the answers. When a letter was filled in, the corresponding
asterisk would be replaced in any other answers sharing that square.
(Or two tables,
with an AXLinkedUIElement back and forth, one for across, one for down).
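If you went that route, the cross-linking could be expressed with the linked-UI-elements attribute, roughly like this (a sketch assuming one element per clue row; acrossRow and downRow are hypothetical):

import AppKit

// Sketch: link an across clue's row to the down clue's row (and back) so a
// user can jump between them via AXLinkedUIElement.
func crossLink(_ acrossRow: NSAccessibilityElement,
               _ downRow: NSAccessibilityElement) {
    acrossRow.setAccessibilityLinkedUIElements([downRow])
    downRow.setAccessibilityLinkedUIElements([acrossRow])
}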
I'll send another email off list as well.
-James
On Dec 15, 2008, at 12:58 PM, Daniel Jalkut wrote:
Hello all, I'm writing for the first time, after recently diving
into what is turning out to be a challenging accessibility exercise.
I would appreciate any experienced developers or designers giving me
some advice on how I could best achieve accessibility in this
situation.
I am the developer of Black Ink, an application for downloading and
solving crossword puzzles on the Mac. It was brought to my
attention that the application is almost completely inaccessible,
due largely to the fact that the crossword solving "grid" is a
custom view which doesn't currently expose any UIElements.
I am planning to "test" the results of this work with a blind user
who has asked for the functionality, so these assumptions are by no
means set in stone, but the behaviors I have in mind include:
1. When a square on the puzzle is first selected, the associated
clue is announced, along with the currently entered answer letters
for that word, and the cursor position within that word. For
instance, when the user selects the first word in this puzzle:
http://www.red-sweater.com/snips/AccessibleXword-20081215-154434.png
I believe VoiceOver should announce something along the lines of:
"one across, crack and redden, four letters, first letter selected,
existing answers blank, H, blank, P." (A sketch of composing such an
announcement follows this list.)
2. As the user enters answer letters, VoiceOver should announce the
typed letter.
3. As the selection within a word changes, VoiceOver should simply
announce the new position within the word. E.g. if the user presses
the right arrow key from the state in the image above, I would hope
for something like "second letter selected, existing answer H".
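As a sketch of behavior 1, the announcement could be assembled from the word's state roughly like this (all names hypothetical; the exact wording is just what I described above):

import Foundation

// Hypothetical model of the selected word; only what the announcement needs.
struct WordState {
    var number: Int          // e.g. 1
    var direction: String    // "across" or "down"
    var clue: String         // e.g. "Crack and redden"
    var letters: [String?]   // entered letters, nil for empty squares
    var cursorIndex: Int     // selected square within the word
}

// Builds a string along the lines of:
// "1 across, Crack and redden, 4 letters, letter 1 of 4 selected,
//  existing answers blank, H, blank, P"
func announcement(for word: WordState) -> String {
    let length = word.letters.count
    let position = "letter \(word.cursorIndex + 1) of \(length) selected"
    let answers = word.letters.map { $0 ?? "blank" }.joined(separator: ", ")
    return "\(word.number) \(word.direction), \(word.clue), " +
           "\(length) letters, \(position), existing answers \(answers)"
}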
At first I decided to implement the UI element as a "table" role,
since the crossword consists of a grid structure with rows and
columns. But I believe this is something of an implementation
detail for sighted users, and not particularly effective for
conveying the structure of a puzzle to, say, a blind
user.
The way I'm thinking of it now is that the puzzle grid is a group
that contains "Word" UIElements with a text field role. But
even with this approach, getting the kind of VoiceOver effect I'm
hoping for seems to be difficult or impossible.
I've found in the archives evidence that it might be impossible to
get VoiceOver to notice value changes in "Faux" UI elements.
Currently, the puzzle view is a true subclass of NSView, but all of
my interior constructs for accessibility are faux UI elements.
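For concreteness, one of those faux "Word" elements could look roughly like the sketch below, written against the modern NSAccessibilityElement convenience rather than what a 2008 faux element actually looked like (all names are illustrative, and real code would also supply a frame so VoiceOver can locate the element):

import AppKit

// Sketch of a single "Word" element with a text field role, parented to the
// puzzle view, which would return it from its accessibilityChildren.
func makeWordElement(parent: NSView, label: String,
                     value: String) -> NSAccessibilityElement {
    let word = NSAccessibilityElement()
    word.setAccessibilityRole(.textField)
    word.setAccessibilityParent(parent)
    word.setAccessibilityLabel(label)   // e.g. "1 Across, Crack and redden"
    word.setAccessibilityValue(value)   // e.g. "_H_P"
    return word
}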
Does anybody have opinions on how I should pursue this? Do you think
I should stick with the "group comprised of words" approach, or
should I go back to a strictly grid-oriented representation? Given
the specific types of puzzle-solving feedback I want to convey to
users, I'm beginning to think that it will be impossible without
hacking/abusing some existing role to achieve the ends. For
instance, I could imagine having my NSView fulfill the "text area"
role and, in order to get certain audio feedback to the user, having
their navigation of my UI fake certain manipulations of text. Of
course, this feels like it would be a means to an end for VoiceOver
in particular, and may be at the expense of general accessibility
for other purposes.
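For what it's worth, the shape of that "text area" idea would be roughly the following (purely illustrative, modern Swift; PuzzleView and currentWordText are placeholders, and real code would also report selected-text attributes so cursor movement reads as text navigation):

import AppKit

// Sketch of the role-abuse idea: the view itself claims an AXTextArea role
// and synthesizes its "text" value from the currently selected word.
class PuzzleView: NSView {
    var currentWordText = "_H_P"   // placeholder for real puzzle state

    override func isAccessibilityElement() -> Bool {
        return true
    }

    override func accessibilityRole() -> NSAccessibility.Role? {
        return .textArea
    }

    override func accessibilityValue() -> Any? {
        return currentWordText
    }
}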
Appreciate any thoughts on this. I realize it's sort of a long-
winded inquiry, but I hope that somebody who is fascinated by this
kind of problem will find it worth digging into :)
Daniel
--------------------------------------------------
James Dempsey
AppKit Engineering
Apple
email@hidden