Re: AVAudioPlayer: Second instance of same sound...
- Subject: Re: AVAudioPlayer: Second instance of same sound...
- From: William Stewart <email@hidden>
- Date: Wed, 18 Nov 2009 19:18:38 -0800
On Nov 18, 2009, at 5:57 PM, uɐıʇəqɐz pnoqɥɒɯ wrote:
On Nov 18, 2009, at 4:55 PM, William Stewart wrote:
On Nov 18, 2009, at 4:21 AM, uɐıʇəqɐz pnoqɥɒɯ wrote:
Thank you. I spent a good 14 or more hours trying to see if this
would work, but I couldn't get beyond playing even a single
instance of a 17 MByte file. I might be stumbling on a bug in
afconvert, since my .aif can be played by AVAudioPlayer without
trouble and the .caf versions that afconvert puts out cannot. I
filed a radar: 7404122. (The sounds play fine using QuickTime and
the Finder's preview.)
Did you have a chance to look at this possible bug? The noise the
iPhone makes is pretty nasty, and I can't imagine how a perfectly
good sound file that plays on the Mac could end up doing that on the
iPhone.
not yet, but we will
I'm glad that your responses indicate that I can likely accomplish
my goal. With respect to your second response, could you tell me:
then what does -prepareToPlay do? Does it load up a larger chunk
of data initially? I have never yet noticed a difference in the
playing latency of sounds with or without -prepareToPlay, and am
not sure if I even really need to call it. Can you tell me more
about prepareToPlay?
Yes (if you don't call it, calling play calls it for you). If you
do setTime, it does the preparation for you at the new time.
I realize that, but I was under the impression that calling play
without -prepareToPlay would take a bit longer to start than play
with -prepareToPlay called sometime well in advance, i.e. that
-prepareToPlay would preload data to make a future -play start
immediately. Of course, there is no such thing as immediate play
with AVAudioPlayer, is there? It seems like -play just queues up
playing for after the run loop is done. As we all have been told, if
you want to synchronize sounds, AVAudioPlayer isn't the framework to
use.
well, this is what happens so if you actually measure it you will see
a difference. In some cases (depends on any number of circumstances),
that time will be negligible. This goes back to my point that the
costs you are concerned about may not actually be a concern for you.
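That difference can be sketched roughly like so (file name is hypothetical; error handling and pre-ARC memory management omitted):

```objc
#import <AVFoundation/AVFoundation.h>

// Create the player well ahead of time and prime its buffers,
// so a later -play starts with as little latency as possible.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"c4.Cellos"
                                     withExtension:@"aif"]; // hypothetical file
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url
                                                               error:&error];
[player prepareToPlay];   // preloads data now, instead of inside -play

// ... later, when the sound is actually needed:
[player play];            // starts sooner because buffers are already primed
```

Whether the saving matters depends, as noted above, on the circumstances; measuring both paths is the only way to know.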
Now, given that I want to play an arbitrary part of the sound file
for only a second or two, how should I stop the playback? I am
currently testing with -performSelector:withObject:afterDelay: or can
use an NSTimer. Or do you recommend other ways that are better?
this is the one concern I have for this process. We don't have a
playFor type of call (probably one that we should add) - file a bug
if you don't mind :).
7407192 AVAudioPlayer: Please add a timed play method: -playFor:(NSTimeInterval)seconds
I also added:
7407214 AVAudioPlayer: Please add a fade-in/out method
ok - thanks
In lieu of that, I would set a timer to fire about a half
second or so before you want to finish, and then just check
the current playing time; if it's not there yet, then task the run
loop for a short period of time (CFRunLoopRunInMode) and then come
back and check again.
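A rough sketch of that scheme, assuming a `player` property already exists and `stopTime` is the -currentTime value at which playback should stop (names are illustrative, not from the list post):

```objc
#import <AVFoundation/AVFoundation.h>

// Schedule the check to fire ~0.5 s before the desired end of the excerpt.
[NSTimer scheduledTimerWithTimeInterval:(stopTime - 0.5)
                                 target:self
                               selector:@selector(stopSoon:)
                               userInfo:[NSNumber numberWithDouble:stopTime]
                                repeats:NO];

- (void)stopSoon:(NSTimer *)timer
{
    NSTimeInterval stopTime = [[timer userInfo] doubleValue];
    // Spin the run loop in short slices until the player reaches stopTime.
    while (self.player.playing && self.player.currentTime < stopTime) {
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01, false);
    }
    [self.player stop];
}
```

The short CFRunLoopRunInMode slices keep other run-loop work flowing while the final half second plays out, which is the point of checking late rather than trusting a single long delay.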
However, this really is a lot of extra work, and I'm not entirely
convinced that you've done enough profiling to really prove that
this is going to be a problem. Knuth's comment that "premature
optimisation is the root of all evil" comes to mind (well, some
exaggeration maybe, but the observation is very pertinent). If it
were me, I would just have separate files for each sound you want to
play, and then create players (and dispose of them) as you need to
play the sounds. Games do this all the time, and we handle quite
large numbers of sounds over the course of those activities.
I think you'd find my reason valid. It is not about whether my app
can handle hundreds of sound files; it's about needless
repetition. I now have about 1000 notes that I create in
GarageBand. If I did what I was doing when the number of
notes was only 120, I would select the notes in the GarageBand
document, export them individually, give each a unique name
(c4.Cellos.aif), and click save, 1000 times over. Not only is it
tiring, but it is error prone, quite often ending up with the same
note saved twice while another is forgotten. Instead, I can
save all notes that use the same instrument into one .aif file and
export only as many times as I have instruments I care about; I
only have to repeat this 15 times. Of course, if you someday add
musical-instrument synthesis like GarageBand has, my life would
be oh so much simpler. I've looked at implementing an ADSR-like
mechanism myself, but I think it's over my head.
Yes, but then I wouldn't be using AVAudioPlayer to implement what you
are doing. I'd be using C, I'd be on the render callback and I'd be
managing my own read thread so I can manage my I/Os, and this would
give me the ability to do sample accurate synchronisation/start-ups
between my "notes", envelopes, mixing, panning, etc.
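A minimal sketch of what that lower-level shape looks like, assuming a single mono float-format voice and a hypothetical preloaded sample table; a real sampler would also need a read thread, format negotiation, multiple voices, and envelopes:

```c
#include <AudioUnit/AudioUnit.h>

// Hypothetical per-voice state: a buffer of decoded samples and a play head.
typedef struct {
    float  *samples;     // decoded audio, filled by a separate read thread
    UInt32  frameCount;  // total frames available
    UInt32  playHead;    // next frame to render
    float   gain;        // stand-in for a per-note envelope/mix stage
} Voice;

// RemoteIO render callback, called on the real-time audio thread.
// Mixing here is what gives sample-accurate note starts, envelopes,
// panning, etc. that AVAudioPlayer cannot offer.
static OSStatus RenderNotes(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData)
{
    Voice *voice = (Voice *)inRefCon;
    float *out = (float *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        float sample = 0.0f;
        if (voice->playHead < voice->frameCount) {
            sample = voice->samples[voice->playHead++] * voice->gain;
        }
        out[i] = sample;   // a real sampler would sum many voices here
    }
    return noErr;
}
```

The callback is installed on the output audio unit via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback; no allocation, locking, or file I/O may happen inside it, which is why the separate read thread is needed.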
AVAudioPlayer is a higher-level API for playing back sound files. You
are writing a sampler and to me that is a much lower-level task that I
would want far more control over than AVAudioPlayer will give you.
Bill
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden