
Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Mark Heath <email@hidden>
  • Date: Mon, 28 Nov 2011 08:45:09 +1100


On 26/11/2011, at 8:21 AM, Brian Willoughby wrote:


On Nov 23, 2011, at 03:26, Mark Heath wrote:
I have only described my initial problem; I haven't begun to describe my buffering, as I wasn't sure if it was implemented correctly. I wondered how to determine whether my Filter was Process()ing the left or right channel. This appears to be a moot point, as the host application (or parent class, or whatever) starts an instance per channel.
It's not the host application that handles this, but the parent class. The source to the parent class is part of your project, so it is entirely under control of the AudioUnit source code, even if you normally do not see or change that code.

I see this part of the code now. It's all filled in by Xcode, and the how-to (the tremolo filter) on Apple's site doesn't touch it.



E.g. if the partition size was 512 samples and my window size was 16 samples, then I could only output 504.
You seem to still be assuming that your algorithm operates directly on the incoming data. It does not. You must copy the incoming data to a FIFO where it will then be processed by your algorithm. The FIFO consumes from the input buffer and feeds your convolution. Your core convolution code consumes from the input FIFO and feeds the output FIFO (you might be able to combine the two FIFO memory elements if you're clever). Finally, your AU consumes from the output FIFO and feeds the AU output buffer. Between the AU I/O and the FIFO(s), you're dealing with 512 samples in your example. Between the FIFO(s) and your convolution code, you're dealing with 16 samples at a time. You must implement counters to track all of this.

I thought that reading from inSourceP and writing to DestP would be the simplest way to implement this, until I discovered that inSourceP and DestP point to the same place in memory :-/
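
To make the buffering Brian describes concrete, here is a minimal sketch. It is not from either poster's code: the class, member and function names are all hypothetical, and the "convolution" is just a placeholder moving average. The two FIFOs persist across Process() calls, which keeps the window contiguous from one call to the next and also sidesteps the in-place aliasing, because each input sample is copied out of the host buffer before the corresponding output sample is written back:

    #include <cstddef>
    #include <vector>

    // Hypothetical FIFO wrapper around a windowed "convolution".
    class ConvolutionBuffer {
    public:
        // Pre-filling the output FIFO with one window of zeros creates the
        // start-of-stream silence, i.e. the latency the AU will report.
        explicit ConvolutionBuffer(std::size_t windowSize)
            : mWindow(windowSize), mOutFifo(windowSize, 0.0f) {}

        // Called from the AU's Process()/Render(); nFrames may be anything
        // from 1 up, and inSource/dest may point at the same memory.
        void Process(const float *inSource, float *dest, std::size_t nFrames) {
            for (std::size_t n = 0; n < nFrames; ++n) {
                mInFifo.push_back(inSource[n]);         // AU input -> input FIFO

                if (mInFifo.size() >= mWindow)          // a full window is ready:
                    mOutFifo.push_back(                 // convolve -> output FIFO
                        RunWindow(&mInFifo[mInFifo.size() - mWindow]));

                dest[n] = mOutFifo.front();             // output FIFO -> AU output
                mOutFifo.erase(mOutFifo.begin());
            }
            // Keep only the samples that can still contribute to a future window.
            if (mInFifo.size() > mWindow)
                mInFifo.erase(mInFifo.begin(), mInFifo.end() - mWindow);
        }

    private:
        // Placeholder: a boxcar average standing in for the real convolution kernel.
        float RunWindow(const float *w) const {
            float acc = 0.0f;
            for (std::size_t k = 0; k < mWindow; ++k)
                acc += w[k];
            return acc / static_cast<float>(mWindow);
        }

        std::size_t        mWindow;
        std::vector<float> mInFifo;
        std::vector<float> mOutFifo;
    };

In a real Render() callback the std::vector push/erase calls would be replaced with a preallocated ring buffer, since allocating or shifting memory on the audio thread is not real-time safe; the sketch only shows the data flow and the counters Brian mentions.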



It appears that I simply zero pad the first 8 samples of the output partition and set a latency (of 8 / sample rate seconds). Correct?
No. You're missing the fact that you need to keep around samples from preceding calls to Render() so that your window is always contiguous.

If the initialisation fills the buffer with zeros, then the start of the output is zero padded.


Also, why did you start with an example of a 16-sample window, but now are talking about 8 samples of padding and latency?

8 samples of look-ahead, 8 samples behind (I probably should've said a 17-sample window); I only need to read 8 samples to begin output. However, I discovered that it was easier to implement delaying by the whole window size.


I was under the false assumption that I must process input sample 1 into output sample 1's position, but wondered what I did towards the end of the input buffer. I guess that this assumption is only a requirement for non-realtime filters, i.e. that I don't output anything until I have read enough input. This might be flawed when applied to Audio Units (or other realtime audio frameworks).
If you report a latency of 0.0, then you must process input sample 1 into output sample 1's position. However, since convolution necessarily involves latency, you will never be able to have 1-to-1 sample alignment. Instead, your AU reports its latency, and that tells the AU host that the sample positions will not be aligned. Then, your job is to implement code that processes input sample 1 into output sample 16's position, assuming a latency of 16 samples. It has nothing to do with real-time or not.

Non-real-time filters have a definite start to the data, and do not need to output anything until they have enough data to do so, hence output sample 1 will be in input sample 1's position.
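
Purely as an illustration of that alignment (a plain fixed delay, not convolution): with a reported latency of 16 samples, input sample 1 comes out at output sample 17's position and the first 16 output samples are silence. The constant and function names below are hypothetical.

    #include <cstddef>

    const std::size_t kLatency = 16;     // assumed reported latency, in samples
    float delayLine[kLatency] = {0};     // zero-filled => initial silence
    std::size_t writeIdx = 0;            // state persists across render calls

    void DelayOnly(const float *in, float *out, std::size_t nFrames) {
        for (std::size_t n = 0; n < nFrames; ++n) {
            out[n] = delayLine[writeIdx];    // what arrived kLatency samples ago
            delayLine[writeIdx] = in[n];     // will reappear kLatency samples later
            writeIdx = (writeIdx + 1) % kLatency;
        }
    }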



Is this true, that for filters which require look ahead the very start of the output must be padded with 0s?

Yes. There are many ways to implement the details, but they all end up with silence at the beginning for the duration of the latency. The Reset() call is the only clue that the AU has as to when the time line "begins," so your algorithm and state variables must be adjusted properly in the Reset() call.

This is probably the key that I've been missing.
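
Concretely, for the hypothetical ConvolutionBuffer sketched earlier, "adjusted properly in the Reset() call" would just mean forgetting any buffered input and re-arming the zero padding; the AU side would call this from its AUKernelBase::Reset() override (assuming the classic C++ AU SDK):

    // Added to the hypothetical ConvolutionBuffer class from the earlier sketch.
    void Reset() {
        mInFifo.clear();                  // forget input buffered before the restart
        mOutFifo.assign(mWindow, 0.0f);   // re-arm one window of start-of-timeline silence
    }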


If I report 0 as tail time does this mean that I will lose windowsize/2 samples from the end?

My understanding is that the AU host will combine latency and tail time. If you report your latency as windowsize, then the AU host will continue to pull from your AU after the end for windowsize samples. If the tail time is 0, then it stops there, but you haven't lost anything.

Thank you, these are the things that I didn't understand about the implementation in AU.
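
For reference, in the classic C++ AU SDK these two values are reported by overriding a pair of virtuals on the effect class. A sketch, assuming AUEffectBase and a fixed maximum window of 16 samples; the class name and kMaxWindow are hypothetical, while GetLatency(), GetTailTime() and SupportsTail() are the SDK hooks behind kAudioUnitProperty_Latency and kAudioUnitProperty_TailTime:

    #include "AUEffectBase.h"   // classic C++ AudioUnit SDK base class

    class ConvolutionAU : public AUEffectBase {
    public:
        ConvolutionAU(AudioUnit component) : AUEffectBase(component) {}

        // Both values are reported in seconds.
        virtual Float64 GetLatency()   { return kMaxWindow / GetSampleRate(); }
        virtual Float64 GetTailTime()  { return kMaxWindow / GetSampleRate(); }
        virtual bool    SupportsTail() { return true; }

    private:
        static const UInt32 kMaxWindow = 16;   // assumed fixed maximum window size
    };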



This is interesting as the window size is a user settable parameter. According to the example I must be able to handle the user changing these parameters while the filter is running. Does this mean that I cannot change the latency? Or that I must write my filter in such a way that the latency is fixed regardless of the parameters?

You cannot change the latency while the AU host is running, because it will not know that anything changed. You can notify the host of a change to the latency property, but it's doubtful that any host will expect latency to change. Instead, there is an "initialized" state, before which the AU host is supposed to give your AU all the information it needs, such as sample rate and buffer size, so that you can report the proper latency.


Note that latency has nothing to do with the user settable CoreAudio buffer size. If your actual convolution window size is truly user settable, then you'll need to send a notification whenever you change the latency, and you'll probably also need some mechanism to make sure that this only happens when the transport is not running. That's actually a difficult challenge considering that you do not have the basics working yet.

The solution is to limit the window size to a maximum and set the latency to that maximum.
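
And for completeness, if the window size really did have to change at run time, the AU would tell the host through the SDK's property-change mechanism. A hedged sketch: SetWindowSize() and mWindowSize are hypothetical, while PropertyChanged() and kAudioUnitProperty_Latency are the SDK names. With the fixed-maximum approach above, this notification never needs to fire:

    // Hypothetical setter on the ConvolutionAU class sketched earlier.
    void ConvolutionAU::SetWindowSize(UInt32 newWindowSize)
    {
        mWindowSize = newWindowSize;   // illustrative member, not declared above

        // Tell listeners (including the host) that the latency property changed;
        // whether the host re-reads it while running is entirely up to the host.
        PropertyChanged(kAudioUnitProperty_Latency, kAudioUnitScope_Global, 0);
    }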



I've made a buffering implementation that assumes that inFramesToProcess is larger than my window size. There is possibly a case where this is not true, which I have not handled.

Bad assumption. Sample-accurate parameter rendering is a feature that an AU host might implement without notification to your AU other than the parameters to the Render() or Process() call. Your code should be prepared to render a single sample, if necessary, which highlights the need for a separate FIFO beyond the normal buffers that are provided as parameters.

I will need to reimplement my code for this. I still want to check that the theory works and that my filter is not introducing the distortion. Currently the host app is giving me 512 samples per call, which is sufficient for my testing.


Thank you for your detailed descriptions of the parts and concepts of AU (and realtime processing) that I wasn't familiar with.

Mark

