Ducking
- Subject: Ducking
- From: John Clayton <email@hidden>
- Date: Mon, 20 Oct 2008 09:33:23 +0200
Hi One and All,
I'd like to perform audio ducking in my app and thought I'd throw the
idea out in the open for criticism / review before I go ahead and
write it - as I'm not sure that this is the best way - perhaps I'm
missing something fundamental.
My app has multiple video/audio 'tracks' (basically just a core-data
object that represents some form of media), and at present each track
contains its own, self-contained series / chain of audio units. The
chain looks like this:
QT Movie -> AUTimePitch -> AUHALOutput
The ducking part of the app calls for an attribute on a track called
'isAutoDucking', to allow any track to be ducked (or not). If this is
set to true, the track should reduce its volume by some user-defined
percentage during playback, but only while there is another non-ducked
track with audio playing at the same time. I could in theory reduce
the volume of ducked tracks by calculating the relative volume of the
other tracks on the fly - but for now I'm keeping the problem simple
and trusting the user to set the amount of ducking [as a %age].
In my opinion, the problem is twofold:
1. figure out when ducking should occur
2. determine by how much a track should be ducked
In my design, (2) is solved by the user specifying a percentage to
duck by, and I'm left thinking that I can implement (1) as follows:
Non-Ducked Tracks:
QT Movie -> AUTimePitch -> [pre-render notification #1 here] ->
AUHALOutput
The pre-render notification #1 - on the non-ducked track - is a way
for me to work out whether or not audio is being processed by a
non-ducked track. I'd likely store a simple boolean in a singleton
somewhere (or maybe a time range of booleans), the goal being to
answer the question: 'is audio being played on a non-ducked track
right now?'
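One wrinkle with hook #1: in the pre-render phase the buffers haven't been filled yet, so an "is there audio?" check that looks at sample data really belongs in the post-render phase of the same notification (AudioUnitAddRenderNotify fires the callback for both phases; you tell them apart via kAudioUnitRenderAction_PreRender / kAudioUnitRenderAction_PostRender in ioActionFlags). A portable sketch of the shared flag and the silence check - the Core Audio plumbing is omitted and the helper names are hypothetical:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <math.h>

/* Shared state: written by the non-ducked tracks' render notifications,
   read by the ducked tracks. An atomic avoids locking on the render
   thread. */
static atomic_bool g_nonDuckedAudioActive;

/* Does this buffer contain anything above a silence threshold?
   (Illustrative helper - in the real callback you'd walk the
   AudioBufferList instead of a bare float array.) */
bool buffer_has_audio(const float *samples, int count, float threshold)
{
    for (int i = 0; i < count; i++)
        if (fabsf(samples[i]) > threshold)
            return true;
    return false;
}

/* What the post-render phase of notification #1 boils down to. */
void note_nonducked_activity(const float *samples, int count)
{
    atomic_store(&g_nonDuckedAudioActive,
                 buffer_has_audio(samples, count, 1.0e-4f));
}
```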
Ducked Tracks:
QT Movie -> AUTimePitch -> [pre-render notification #2 here] ->
AUDynamicsProcessor -> AUHALOutput
I'd then use pre-render notification #2 to set up the
AUDynamicsProcessor to perform the ducking, based on the data produced
by the #1 pre-render hook.
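Hook #2 then just has to turn that shared flag into a gain for the ducked track. A sketch of the per-callback decision, with a simple one-pole ramp so the volume change doesn't click (the smoothing constant is an arbitrary illustrative value):

```c
#include <stdbool.h>

/* Per-render-cycle gain update for a ducked track (sketch). Rather
   than snapping the volume, ramp toward the target each cycle to
   avoid zipper noise. */
float next_duck_gain(float current, bool otherTrackActive,
                     float duckPercent)
{
    float target = otherTrackActive ? 1.0f - duckPercent / 100.0f : 1.0f;
    const float smoothing = 0.2f; /* fraction of the gap closed per cycle */
    return current + smoothing * (target - current);
}
```

Each cycle you'd feed the result to AudioUnitSetParameter on whichever unit controls the ducked track's level; if that's the AUDynamicsProcessor's master gain, convert to dB first.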
My concerns are:
0) Am I missing the point? Is there an easier way to achieve this
with Core Audio?
1) This design offers no synchronization - I can't be sure that any
single track is synchronized with the others - so my ducking will
likely be out of sync by a couple (few, more, hundreds?) of frames.
2) I have outlined two distinct audio unit chains above, but I think
in practice I'd have only one - and that I'd just bypass the
dynamics processor for non-ducked tracks.
I'm keen on any input to this design - feel free to point me to any
docs etc that you think would help me get a better grip on the
subject(s).
Thanks for your time.
--
John Clayton
Skype: johncclayton
_______________________________________________
Coreaudio-api mailing list (email@hidden)