Re: processing question


  • Subject: Re: processing question
  • From: James Chandler Jr <email@hidden>
  • Date: Fri, 5 Jun 2009 19:19:33 -0400

Another classic illustrative example, which probably resides in the same category as Normalize, is Reverse Audio, which rearranges a region to play backwards. Such a function isn't especially useful, but that doesn't keep it from being a fairly common feature <g>.

If (in the host) a user does 'Select All' and then invokes a nonrealtime Reverse Audio function, the Reverse Audio plugin would need free read/write random access to the entire track to get its job done.

In the old Premiere audio plugin scheme, which was primarily designed as a nonrealtime scheme, a compatible host had to be smart enough to give the plugin whatever file data it requested and to write back whatever data the plugin returned. During processing, the plugin was boss <g>.
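To make that random-access requirement concrete, here is a minimal sketch of what a nonrealtime Reverse Audio pass needs from its host. This is not the Premiere (or any real) plugin API; host_track_length(), host_read() and host_write() are hypothetical callbacks, and the audio is assumed to be a single mono float track.

/*
 * Sketch only: reverse an entire track using host-supplied random access.
 * The three host_* functions are hypothetical stand-ins for whatever
 * interface the host actually provides.
 */
#include <stdlib.h>

extern long host_track_length(void);                           /* total frames  */
extern void host_read(long start, long count, float *dst);     /* hypothetical  */
extern void host_write(long start, long count, const float *src);

void reverse_audio(long blockFrames)
{
    long  total = host_track_length();
    float *head = malloc(blockFrames * sizeof(float));
    float *tail = malloc(blockFrames * sizeof(float));

    /* Swap mirror-image blocks from the two ends, working toward the middle. */
    for (long front = 0, back = total; front < back; ) {
        long n = blockFrames;
        if (back - front < 2 * n)
            n = (back - front) / 2;      /* last, possibly partial, pair      */
        if (n == 0)
            break;                       /* a lone middle frame stays put     */

        host_read(front, n, head);
        host_read(back - n, n, tail);

        /* Reverse each block in place before writing it to the opposite end. */
        for (long i = 0; i < n / 2; i++) {
            float tmp = head[i]; head[i] = head[n - 1 - i]; head[n - 1 - i] = tmp;
            tmp = tail[i];       tail[i] = tail[n - 1 - i]; tail[n - 1 - i] = tmp;
        }

        host_write(front, n, tail);
        host_write(back - n, n, head);

        front += n;
        back  -= n;
    }
    free(head);
    free(tail);
}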

James Chandler Jr.

On Jun 5, 2009, at 6:43 PM, Brian Willoughby wrote:


On Jun 5, 2009, at 15:11, Brian Davies wrote:
Thanks for that. It's actually a de-clicker I have in mind, and the user may want to treat only a selection of the file. The main principle of de-clicking is that almost all audio samples are good and must not be changed, and nothing may be inserted or deleted.

Maybe I could set ProcessingInPlace to FALSE, and then pull samples from the host which are beyond (in time) the samples being pulled from my AU. That way, I would set L = T = 0. On the first pull by the host, for B samples, I pull B+N samples, and return the B samples unchanged. Next pull, I pull B samples, which are ahead of the host's pull, and now I can return B treated samples. If I pull beyond the end of the file I will zero-pad, which will work for my algorithms. That way, I would not be at the mercy of the host to do the right thing with L and T samples.

Can I use ProcessingInPlace in such a way?
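A minimal sketch of the look-ahead scheme described above, assuming a mono stream and hypothetical pull_input()/process_block() helpers in place of whatever mechanism the host actually uses to deliver buffers; N is the extra context pulled ahead of the host's position:

/*
 * Sketch only (not Core Audio API calls): the plugin keeps its read position
 * N frames ahead of the host, so every block it returns has already seen
 * N frames of future context. The first block goes out unchanged.
 */
#include <string.h>

#define N        512    /* look-ahead, in frames (assumed)        */
#define FIFO_MAX 8192   /* must be able to hold B + N frames      */

typedef struct {
    float fifo[FIFO_MAX];
    int   filled;       /* valid frames currently buffered        */
    long  readPos;      /* next source frame to request           */
} LookAhead;

/* Hypothetical: fetch 'count' frames starting at 'start', zero-padding
 * anything that lies past the end of the file/selection.              */
extern void pull_input(long start, int count, float *dst);

/* Hypothetical: the de-click algorithm; consumes 'count' frames plus the
 * N frames of look-ahead that follow them in the FIFO.                  */
extern void process_block(const float *fifo, int count, float *out);

void render(LookAhead *s, int B, float *out)
{
    if (s->filled == 0) {
        /* First host pull: fetch B + N frames, return B frames unchanged. */
        pull_input(s->readPos, B + N, s->fifo);
        s->readPos += B + N;
        memcpy(out, s->fifo, B * sizeof(float));
        memmove(s->fifo, s->fifo + B, N * sizeof(float)); /* keep the look-ahead */
        s->filled = N;
    } else {
        /* Later pulls: fetch B more frames (already ahead of the host),
         * then emit B processed frames from the front of the FIFO.       */
        pull_input(s->readPos, B, s->fifo + s->filled);
        s->readPos += B;
        s->filled  += B;
        process_block(s->fifo, B, out);
        memmove(s->fifo, s->fifo + B, (s->filled - B) * sizeof(float));
        s->filled -= B;
    }
}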


ProcessingInPlace is not exactly related to the concepts you're dealing with. ProcessingInPlace merely controls a minor detail of buffering: whether the processed samples are written to a new buffer, or written over the input samples. ProcessingInPlace is true when the input samples are overwritten, which potentially allows a slight performance boost if you're only going to modify a small fraction of the samples. But even when ProcessingInPlace is false, you can still copy almost all of the audio samples to the output buffer without any change in their value. Either way, you still have to address the issues you've raised.
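As a minimal illustration of that buffering detail (sketch only, not AU SDK code): whether the host gives the effect a separate output buffer or asks it to overwrite the input, the algorithm can pass every sample through and then alter only the ones it needs to. The clickIndex list and the interpolation repair below are toy stand-ins.

#include <string.h>

/* 'in' and 'out' may be the same buffer (in-place) or two different
 * buffers; either way the repair touches only the flagged samples.   */
static void render_block(const float *in, float *out, int n,
                         const int *clickIndex, int clickCount)
{
    if (out != in)
        memcpy(out, in, n * sizeof(float));    /* pass everything through */

    for (int i = 0; i < clickCount; i++) {
        int k = clickIndex[i];
        if (k > 0 && k < n - 1)
            out[k] = 0.5f * (out[k - 1] + out[k + 1]); /* toy repair: interpolate */
    }
}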

I think that your biggest issue is the fundamental difference between offline processing such as normalizing, repairing, or de-clicking, where only selected samples are altered, versus online processing where all samples are passed through the same algorithm. There has not been much discussion here of the former category of processing. I believe that this is something which is primarily handled by the host application, which must allow the user to treat only a selection of the file. Perhaps it bears repeating that an AU does not process a file, in general, unless we're talking about a sample player. An AU is handed a stream of buffers by the host, and it is the host that accesses the file to be processed. Thus, the host must be designed to handle the potentially discontinuous data that comes from operating on only a selection of a file, as opposed to the entire file.
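A hedged sketch of that host-side responsibility, with hypothetical read_file()/write_file()/run_effect() helpers (run_effect() standing in for however the host drives the AU): only the selected region is streamed through the effect in render-sized slices and then written back.

/* Host-side sketch (hypothetical helpers, not a real host). */
extern void read_file(long start, int count, float *dst);
extern void write_file(long start, int count, const float *src);
extern void run_effect(const float *in, float *out, int count);

#define SLICE 512

void process_selection(long selStart, long selLength)
{
    float in[SLICE], out[SLICE];

    for (long done = 0; done < selLength; done += SLICE) {
        int n = (int)((selLength - done < SLICE) ? (selLength - done) : SLICE);
        read_file(selStart + done, n, in);    /* only the selection is read    */
        run_effect(in, out, n);               /* stream one slice through      */
        write_file(selStart + done, n, out);  /* and written back in place     */
    }
}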

P.S. Actually, now that I've written this, I realize that there are at least three categories of audio processing. Normalization and certain other gain processing require a two-pass system, so that the algorithm can first analyze every sample in an entire file before it can process the first sample correctly. These algorithms are almost impossible in an AU without some kind of metadata. Then there are selective repair algorithms like yours, which are basically easy enough to handle with an AU, but you do have the special problem of how to treat the edges of the selection so that you don't introduce new clicks at the boundaries. Finally, we have the basic AU which processes a continuous stream of audio samples, and the only exception is Reset(), which clears out any state so that a new stream can be started. The last type of AU is the one most discussed.
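For the edge problem mentioned above, one common approach (a sketch under the assumption that the selection is at least twice the fade length E) is to crossfade the processed audio back into the untouched signal at each boundary:

/* Blend processed ('wet') audio back into the original ('dry') signal over
 * E frames at each edge of the selection; assumes selLen >= 2 * E.          */
void blend_edges(const float *dry, float *wet, int selLen, int E)
{
    for (int i = 0; i < E; i++) {
        float w = (float)i / (float)E;   /* 0 at the very edge, 1 inside */
        wet[i] = w * wet[i] + (1.0f - w) * dry[i];                 /* fade in  */
        int j = selLen - 1 - i;
        wet[j] = w * wet[j] + (1.0f - w) * dry[j];                 /* fade out */
    }
}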

Brian Willoughby
Sound Consulting

  • Follow-Ups:
    • Re: processing question
      • From: Brian Willoughby <email@hidden>
    • Re: processing question
      • From: Doug Wyatt <email@hidden>
  • References:
    • processing question (From: Brian Davies <email@hidden>)
    • Re: processing question (From: Brian Willoughby <email@hidden>)
    • Re: processing question (From: Brian Davies <email@hidden>)
    • Re: processing question (From: Brian Willoughby <email@hidden>)
