

Re: Sequencer project


  • Subject: Re: Sequencer project
  • From: Gregory Wieber <email@hidden>
  • Date: Mon, 16 Aug 2010 19:55:55 -0700

So, in my app each beat is composed of 16 'ticks'. I need that kind of granularity in my sequences. Now, that means that at high BPMs the tick lengths can become shorter than the number of samples calculated in each render callback (512 in this case). You're assuming that I can fill the buffer with samples -- but what I really need to do is call a function at a specific moment in time. 512 samples later is too late. Does this make sense?
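[As a rough sanity check on the tick-length claim above (this arithmetic is illustrative, not from the original post; it assumes a 44.1 kHz sample rate and 16 ticks per beat):]

```c
#include <assert.h>

/* Illustrative helper, not from the original post: tick length in samples
   for a sequencer with 16 ticks per beat at a 44.1 kHz sample rate. */
static double tick_length_samples(double bpm)
{
    const double sample_rate = 44100.0;
    const double ticks_per_beat = 16.0;
    return sample_rate * 60.0 / (bpm * ticks_per_beat);
}
```

At 120 BPM a tick is about 1378 samples, comfortably longer than a 512-sample buffer; the tick length drops below 512 samples only above roughly 323 BPM (44100 * 60 / (16 * 512)).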



On Mon, Aug 16, 2010 at 6:10 PM, Andrew Coad <email@hidden> wrote:
> I've been using time stamps in an audio unit render callback to count samples. Problem with that is that the callback processes 512 frames of audio at once; so it's not really accurate either.

Disagree. The number of frames being processed per callback has no impact on accuracy for an application like a metronome or sequencer where you know ahead of time what needs to be played and at what time. The accuracy of a single metronome click (i.e. the difference between the desired time of the sound and the actual time) will never be worse than one half of one sample period, which is 1/(2 x 44100) s, or about 11.3 us; what that amounts to as a percentage depends on the BPM. At 140 BPM with 16 ticks per beat, a tick lasts about 26.8 ms, so the worst-case error is roughly 0.04% of a tick. You can never get more accurate than this. Even if you calculated timing at the nanosecond level you can only start the sound on sample-time boundaries, so you will still be in error by the same amount. Fortunately, an error that small has no meaning in music applications.
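[One way to reconcile the two posts: because the sequencer knows its tick times in advance, the render callback can place each click at its exact sample offset inside the buffer, even when ticks are shorter than the buffer. A minimal sketch, not the posters' code and not CoreAudio API; an impulse stands in for the click sound and all names are illustrative:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: write a click at its exact sample offset within each
   render buffer.  buf_start is the absolute sample index of buf[0];
   *next_tick is the absolute (possibly fractional) sample index of the next
   tick, advanced by tick_len each time a tick lands in this buffer. */
static void render_ticks(float *buf, uint32_t frames, uint64_t buf_start,
                         double *next_tick, double tick_len)
{
    memset(buf, 0, frames * sizeof(float));
    while (*next_tick < (double)(buf_start + frames)) {
        /* Offset of this tick within the current buffer, truncated to a
           sample boundary (the half-sample error discussed above). */
        uint32_t off = (uint32_t)(*next_tick - (double)buf_start);
        buf[off] = 1.0f;   /* impulse marks the click's first sample */
        *next_tick += tick_len;
    }
}
```

With a tick of 459.375 samples (360 BPM, 16 ticks per beat at 44.1 kHz), the first 512-sample buffer gets clicks at offsets 0 and 459, and the second buffer gets one at offset 406: no tick is ever late by more than a fraction of a sample, regardless of buffer size.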

The number of samples being processed per callback does have an impact in applications where you don't know ahead of time what needs to be played - e.g. all cases of user-initiated real-time sound. For these cases, the worst-case delay in getting the user-initiated sound to actually play is one buffer: (buffer size in samples) / (sample rate) = 512/44100 s, or about 11.6 ms for a 512-sample buffer. I don't know how much delay is perceivable by a human being (it will surely vary person to person) but 11.6 ms is not perceivable. In cases where even this delay is undesirable you can reduce the number of samples per callback (512 is the default) but this increases the frequency at which the callback occurs, so there is a trade-off here.

Andrew Coad

 _______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden


  • Follow-Ups:
    • Re: Sequencer project
      • From: McD <email@hidden>
  • References:
    • Sequencer project (From: Patrick Muringer <email@hidden>)
    • Re: Sequencer project (From: Gregory Wieber <email@hidden>)
    • RE: Sequencer project (From: Andrew Coad <email@hidden>)
