Re: Speech Recognition in Cocoa and Carbon?
- Subject: Re: Speech Recognition in Cocoa and Carbon?
- From: "Ricky A. Sharp" <email@hidden>
- Date: Fri, 17 Aug 2001 22:23:36 -0500
on 8/17/2001 8:32 PM, Lane Schwartz at email@hidden wrote:
> I'm looking for any info on speech recognition in Cocoa. I've looked
> through the archives of this list and didn't find any references to the
> topic. I've also looked through the examples in
> /Developer/Examples/Speech/. As far as I can tell the only way to get
> speech recognition (or synthesis) in Cocoa is via the Carbon APIs. Can
> anyone confirm/deny this?
I haven't seen anything in terms of Cocoa Speech APIs either, so I think
you'll need to call the Carbon APIs.
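The SR calls themselves are plain C, so you can make them directly from an
Objective-C file in your Cocoa project. Here's a bare-bones sketch of the
basic sequence (typed from memory, so double-check SpeechRecognition.h;
error handling is mostly omitted):

#include <Carbon/Carbon.h>

static SRRecognitionSystem  gSystem     = NULL;
static SRRecognizer         gRecognizer = NULL;
static SRLanguageModel      gModel      = NULL;

OSErr StartSimpleRecognition(void)
{
    OSErr err;

    /* Open the recognition system and a recognizer on the default source. */
    err = SROpenRecognitionSystem(&gSystem, kSRDefaultRecognitionSystemID);
    if (err != noErr) return err;

    err = SRNewRecognizer(gSystem, &gRecognizer, kSRDefaultSpeechSource);
    if (err != noErr) return err;

    /* Build a trivial language model with a couple of phrases. */
    err = SRNewLanguageModel(gSystem, &gModel, "Commands", 8);
    if (err != noErr) return err;

    SRAddText(gModel, "open the window", 15, 0);
    SRAddText(gModel, "close the window", 16, 0);

    /* Attach the model and start listening. */
    err = SRSetLanguageModel(gRecognizer, gModel);
    if (err != noErr) return err;

    return SRStartListening(gRecognizer);
}

void StopSimpleRecognition(void)
{
    if (gRecognizer) SRStopListening(gRecognizer);
    if (gModel)      SRReleaseObject(gModel);
    if (gRecognizer) SRReleaseObject(gRecognizer);
    if (gSystem)     SRCloseRecognitionSystem(gSystem);
}

You still need to hook up a notification callback (kSRCallBackParam) or an
Apple Event handler to actually receive results; more on that below.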
FYI, the state of Speech (Synthesis & Recognition) is quite shaky in 10.0.x.
You'll find that many words are not pronounced correctly or not recognized
at all. I've been filing many bugs since the pre-Beta days and, slowly but
surely, things are getting fixed. Hopefully 10.1 will bring speech up to par
with Mac OS 9.x and earlier.
While I can't share the actual code, here's the basic design I used to wire
SR into my Carbon app:
* Create a simple wrapper object around the SR API. I use a speech
notification callback to enqueue each result; an associated timer then
picks things off the queue for processing (see the sketch after this list).
* Create a wrapper around an SR Language Model.
* Create a mixin (I guess this would be a protocol under Cocoa). All
objects that want to use SR then implement the various methods. One of
the methods is responsible for creating an SR Language Model object.
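I can't paste the actual classes, but here's a stripped-down illustration of
the enqueue/drain idea from the first bullet. The queue and MyHandleResult()
are placeholders, and the kSRCallBackParam wiring that actually hands you
the SRRecognitionResult is omitted (see SpeechRecognition.h for that part):

#include <Carbon/Carbon.h>

#define kQueueSize 16

static SRRecognitionResult gQueue[kQueueSize];
static int gHead = 0, gTail = 0;

/* Called from the speech notification callback: just stash the result
   and return quickly. */
void EnqueueRecognitionResult(SRRecognitionResult result)
{
    int next = (gTail + 1) % kQueueSize;
    if (next == gHead) return;            /* queue full; drop (or grow it) */
    gQueue[gTail] = result;
    gTail = next;
}

/* Placeholder for the per-object handling; in my design this ends up in
   whatever object created the active language model. */
static void MyHandleResult(SRRecognitionResult result)
{
    /* ...examine the result, dispatch the matching command, etc... */
    SRReleaseObject(result);              /* results must be released */
}

/* Timer proc: pull everything off the queue and process it. */
static void DrainQueueTimer(EventLoopTimerRef timer, void *userData)
{
    while (gHead != gTail) {
        SRRecognitionResult result = gQueue[gHead];
        gHead = (gHead + 1) % kQueueSize;
        MyHandleResult(result);
    }
}

OSStatus InstallDrainTimer(void)
{
    static EventLoopTimerRef sTimer = NULL;
    return InstallEventLoopTimer(GetMainEventLoop(),
                                 0,                         /* fire delay   */
                                 kEventDurationSecond / 4,  /* every 250 ms */
                                 NewEventLoopTimerUPP(DrainQueueTimer),
                                 NULL, &sTimer);
}

The point of the queue is that the callback does almost no work; everything
interesting happens later, at a known point in the main event loop.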
HTH,
------------------------------------------------------------
Ricky A. Sharp Instant Interactive(tm)
Founder & President
http://www.instantinteractive.com
email@hidden
------------------------------------------------------------