Re: AUGraph deprecation
- Subject: Re: AUGraph deprecation
- From: Benjamin Federer <email@hidden>
- Date: Wed, 11 Jul 2018 16:29:40 +0200
Arshia, Laurent,
have you seen this WWDC session video, which demonstrates AVAudioEngine's
real-time manual rendering mode?
https://developer.apple.com/videos/play/wwdc2017-501/?time=942
Unfortunately the resources only provide sample code for offline manual
rendering mode, and I have a lot more questions than answers after watching
that video. For example, at least on iOS, each engine has only one input node.
Does that mean there can only be one input process? Can there be more than one
input node on macOS? Where does that C++ code come from? How can the input
block *not* be Swift or Objective-C while still being called in a realtime
context?
If anyone knows of a working code sample online or succeeds in doing this,
please post.
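For what it's worth, my best guess at the realtime setup, pieced together from
the session, is sketched below. This is untested: the format and frame counts
are placeholders, and the device-side render callback that would pull the
engine is only hinted at in comments.

```swift
import AVFoundation

let engine = AVAudioEngine()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

// Detach the engine from the hardware; in .realtime mode it renders
// only when its render block is called.
try engine.enableManualRenderingMode(.realtime,
                                     format: format,
                                     maximumFrameCount: 4096)

// Feed the (single) input node. The block runs in the render context,
// so it must be realtime-safe.
_ = engine.inputNode.setManualRenderingInputPCMFormat(format) { frameCount in
    // Return an AudioBufferList with `frameCount` frames of input,
    // or nil when no input is available. Left empty in this sketch.
    return nil
}

try engine.start()

// Capture the render block once, outside the realtime context...
let renderBlock = engine.manualRenderingBlock
// ...and call it from your I/O proc (e.g. a C/C++ render callback):
//   let status = renderBlock(frameCount, ioBufferList, &err)
```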
As a bonus, this video officially rules out Swift for realtime processing:
"... it is not safe to use the Objective-C or Swift runtime from a real-time
context." I hadn't seen or read this from Apple before.
Benjamin
> On 11 Jul 2018, at 15:30, Laurent Noudohounsi <email@hidden> wrote:
>
> Thanks, Benjamin, for the clarification. I thought that `installTapOnBus`
> was the successor of `RenderCallback`.
> To me it did not seem natural to mix an old API like
> `kAudioUnitProperty_SetRenderCallback` into AVAudioEngine.
>
> So as Arshia said, I'm also looking for a way to use real-time processing
> with AVAudioEngine.
>
> On Wed, 11 Jul 2018 at 15:05, Arshia Cont <email@hidden> wrote:
> Interesting thread here!
>
> Has anyone achieved low-latency processing with AVAudioEngine?
>
> The RenderCallback approach seems natural to me (it is the good "old" way of
> doing it with AUGraph), but I'm curious to hear whether anyone has achieved
> real results with AVAudioEngine real-time processing, and how.
>
>
> Arshia
>
>
>> On 11 Jul 2018, at 15:00, Benjamin Federer <email@hidden> wrote:
>>
>> Laurent,
>>
>> `installTapOnBus` is not intended for realtime processing, as a tap only
>> provides the current frame buffer but does not pass it back into the signal
>> chain. The documentation reads: `Installs an audio tap on the bus to record,
>> monitor, and observe the output of the node`.
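>>
>> To illustrate, a tap just hands you buffers to observe; a minimal sketch
>> (assuming an engine whose nodes are attached and connected elsewhere):
>>
>> ```swift
>> import AVFoundation
>>
>> let engine = AVAudioEngine()
>> // ... nodes attached and connected elsewhere ...
>>
>> // A tap only observes: whatever you do to `buffer` here is NOT fed
>> // back into the signal chain.
>> engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
>>     // analyse or record `buffer`; modifications are ignored downstream
>> }
>> ```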
>>
>> Although I have not done that myself yet, my understanding is that for
>> realtime processing you can still retrieve the underlying audio unit from an
>> AVAudioNode (or at least from some nodes?) and attach an input render
>> callback via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback.
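>>
>> Untested, but I would expect the shape of it to be something like this
>> (kept minimal; the callback body must stay realtime-safe):
>>
>> ```swift
>> import AVFoundation
>> import AudioToolbox
>>
>> let engine = AVAudioEngine()
>> // AVAudioIONode exposes its underlying audio unit (may be nil on iOS
>> // before the engine is started).
>> guard let unit = engine.outputNode.audioUnit else { fatalError("no audio unit") }
>>
>> // A capture-free Swift closure can stand in for the C render callback,
>> // though the "no Swift in a realtime context" caveat arguably applies here.
>> var callback = AURenderCallbackStruct(
>>     inputProc: { _, _, _, _, inNumberFrames, ioData -> OSStatus in
>>         // fill ioData with inNumberFrames frames; no locks, no allocation
>>         return noErr
>>     },
>>     inputProcRefCon: nil)
>>
>> AudioUnitSetProperty(unit,
>>                      kAudioUnitProperty_SetRenderCallback,
>>                      kAudioUnitScope_Input,
>>                      0,
>>                      &callback,
>>                      UInt32(MemoryLayout<AURenderCallbackStruct>.size))
>> ```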
>>
>> I assume the other way would be to subclass AUAudioUnit and wrap that into
>> an AVAudioUnit, which is a subclass of AVAudioNode. Yes, it confuses me, too.
>> Random Google result with further information:
>> https://forums.developer.apple.com/thread/72674
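>>
>> If anyone wants to try that route, I believe the registration and wrapping
>> dance goes roughly like this (untested sketch; the subclass is a stub and
>> the component description values are made up):
>>
>> ```swift
>> import AVFoundation
>> import AudioToolbox
>>
>> // Stub AUAudioUnit subclass; a real one overrides internalRenderBlock,
>> // inputBusses, outputBusses, etc.
>> final class MyAU: AUAudioUnit {}
>>
>> // Made-up component description, for illustration only.
>> let desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
>>                                      componentSubType: 0x64656D6F,      // 'demo'
>>                                      componentManufacturer: 0x44656D6F, // 'Demo'
>>                                      componentFlags: 0,
>>                                      componentFlagsMask: 0)
>>
>> AUAudioUnit.registerSubclass(MyAU.self, as: desc, name: "Demo: MyAU", version: 1)
>>
>> // Instantiating through AVAudioUnit yields an AVAudioNode for the engine.
>> AVAudioUnit.instantiate(with: desc, options: []) { avUnit, error in
>>     guard let avUnit = avUnit else { return }
>>     // engine.attach(avUnit); engine.connect(...)
>> }
>> ```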
>>
>> Benjamin
>>
>>
>>> On 11 Jul 2018, at 14:34, Laurent Noudohounsi <email@hidden> wrote:
>>>
>>> Hi all,
>>>
>>> I'm interested in this topic since I've not found any information about it
>>> yet.
>>>
>>> Correct me if I'm wrong, but AVAudioEngine is not able to go lower than
>>> 100 ms of latency. That's what I see in the header file of `AVAudioNode`
>>> for its method `installTapOnBus`:
>>>
>>> @param bufferSize the requested size of the incoming buffers in sample
>>> frames. Supported range is [100, 400] ms.
>>>
>>> Maybe I'm wrong, but I don't see any other way to get lower-latency audio
>>> processing in an AVAudioNode.
>>>
>>> Best,
>>> Laurent
>>>
>>> On Wed, 11 Jul 2018 at 13:57, Arshia Cont <email@hidden> wrote:
>>> Benjamin and list,
>>>
>>> I second Benjamin's request. It would be great if someone from the
>>> CoreAudio team could respond to the question.
>>>
>>> Two years ago, after basic tests, I realised that AVAudioEngine was not
>>> ready for low-latency audio analysis on iOS, so we used AUGraph. I have a
>>> feeling that this is no longer the case and that we can now move to
>>> AVAudioEngine for low-latency audio processing. Can anyone share experience
>>> here? We do real-time spectral analysis and resynthesis of sound and go as
>>> low as 64 samples per cycle if the device allows.
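>>>
>>> (For reference, 64 frames at 44.1 kHz is about 1.45 ms. On iOS we request
>>> such a buffer roughly as sketched below, though the system decides what is
>>> actually granted; untested as written:)
>>>
>>> ```swift
>>> import AVFoundation
>>>
>>> let session = AVAudioSession.sharedInstance()
>>> try session.setCategory(.playAndRecord, mode: .measurement, options: [])
>>> try session.setPreferredIOBufferDuration(64.0 / 44_100.0)  // ≈ 1.45 ms
>>> try session.setActive(true)
>>> // The OS may grant a larger buffer than requested.
>>> print("granted IO buffer duration:", session.ioBufferDuration)
>>> ```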
>>>
>>> Thanks in advance.
>>>
>>>
>>> Arshia
>>>
>>>
>>> PS: I actually brought up the AUGraph deprecation at a local Apple Dev
>>> meeting where the EU director of developer relations was present. According
>>> to him, when Apple announces a deprecation, it WILL happen. My
>>> interpretation of the conversation is that AUGraph is no longer maintained
>>> but is provided as is.
>>>
>>>> On 11 Jul 2018, at 12:36, Benjamin Federer <email@hidden> wrote:
>>>>
>>>> Since it was mentioned in another email thread, I'm giving this topic a
>>>> bump. It would be great if someone at Apple, or anyone else in the know,
>>>> could take the time to respond. The documentation at the link cited below
>>>> still has no indication of deprecation. Will it come with one of the next
>>>> Xcode beta releases?
>>>>
>>>> On another note, I am really interested in how the transition to
>>>> AVAudioEngine is working out for everyone. I know AVAudioEngine on iOS;
>>>> what I am interested in are any macOS-specific issues or hardships.
>>>>
>>>> In my experience AVAudioEngine is relatively robust in handling multiple
>>>> graphs, i.e. separate chains of audio units. I had some issues with
>>>> AVAudioPlayerNode connecting to multiple destinations in that scenario.
>>>> Also, `connect:toConnectionPoints:fromBus:format:` did not work for me,
>>>> as it only connected to one of the destination points. Has anyone else
>>>> experienced problems in that regard?
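>>>>
>>>> For clarity, the fan-out I attempted looked roughly like this (a trimmed
>>>> sketch; `mixerA`/`mixerB` are placeholder names):
>>>>
>>>> ```swift
>>>> import AVFoundation
>>>>
>>>> let engine = AVAudioEngine()
>>>> let player = AVAudioPlayerNode()
>>>> let mixerA = AVAudioMixerNode()
>>>> let mixerB = AVAudioMixerNode()
>>>>
>>>> [player, mixerA, mixerB].forEach { engine.attach($0) }
>>>> engine.connect(mixerA, to: engine.mainMixerNode, format: nil)
>>>> engine.connect(mixerB, to: engine.mainMixerNode, format: nil)
>>>>
>>>> // One source bus feeding two destination points; in my tests only one
>>>> // of the points actually received audio.
>>>> let points = [AVAudioConnectionPoint(node: mixerA, bus: 0),
>>>>               AVAudioConnectionPoint(node: mixerB, bus: 0)]
>>>> engine.connect(player, to: points, fromBus: 0, format: nil)
>>>> ```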
>>>>
>>>> Thanks
>>>>
>>>> Benjamin
>>>>
>>>>
>>>>> On 8 Jun 2018, at 16:59, Benjamin Federer <email@hidden> wrote:
>>>>>
>>>>> Last year at WWDC it was announced that AUGraph would be deprecated in
>>>>> 2018. I just browsed the documentation
>>>>> (https://developer.apple.com/documentation/audiotoolbox?changes=latest_major)
>>>>> but found Audio Unit Processing Graph Services not marked for deprecation.
>>>>> The AUGraph header files rolled out with Xcode 10 beta also have no
>>>>> mention of a deprecation in 10.14. I searched for audio-specific sessions
>>>>> at this year's WWDC but wasn't able to find anything relevant. Has anyone
>>>>> come across new information regarding this?
>>>>>
>>>>> Judging by how many changes and features Apple seems to be holding back
>>>>> until next year, I dare ask: has the AUGraph API deprecation been moved
>>>>> to a later date?
>>>>>
>>>>> Benjamin
>>>>