Re: inOffsetSampleFrame?
- Subject: Re: inOffsetSampleFrame?
- From: Takeshi Yokemura <email@hidden>
- Date: Tue, 2 May 2006 10:46:26 +0900
William,
Thank you for the answer!
Then I have another question:
when within a slice is Render() called?
My understanding is...,
if the processing starts from Slice #1, for example,
the first Render() call happens somewhere in Slice #2.
If so,
rendering doesn't necessarily start at the beginning of a slice,
and our queue will also contain some of the next slice's events.
I mean, we'll get queue data like the following when Render() is called.
(Suppose the slice size is 512 and Render() is called at 700 in absolute
time.)
event    queue#   absolute time   slice#   inOffsetSampleFrame
NoteOn   1        200             1        200
NoteOn   2        400             1        400
NoteOn   3        500             1        500
NoteOn   4        512             2        0
NoteOn   5        612             2        100
In this case, we have to process up to queue #3 in the first render,
and #4 and #5 should be left for the next render.
.....is that right?
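The split described above can be sketched like this. (A minimal illustration only; `QueuedNote` and `ConsumeSlice` are hypothetical names, not Core Audio types. The idea is: events whose absolute time falls inside the slice being rendered are consumed now, with their offset relative to the slice start; later events stay queued.)

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical queued event: the absolute sample time at which a NoteOn arrived.
struct QueuedNote {
    uint32_t absoluteTime;
};

// Consume the events that belong to the slice starting at sliceStart
// (sliceFrames long). Returns their offsets into that slice's buffer
// (i.e. the inOffsetSampleFrame values); later events remain in the queue.
static std::vector<uint32_t> ConsumeSlice(std::vector<QueuedNote>& queue,
                                          uint32_t sliceStart,
                                          uint32_t sliceFrames)
{
    std::vector<uint32_t> offsets;
    std::vector<QueuedNote> remaining;
    for (const QueuedNote& n : queue) {
        if (n.absoluteTime < sliceStart + sliceFrames)
            offsets.push_back(n.absoluteTime - sliceStart); // offset into this buffer
        else
            remaining.push_back(n); // belongs to a later slice
    }
    queue = remaining;
    return offsets;
}
```

With the table's numbers, the first render (slice #1, start 0) would consume the events at 200, 400, and 500; the events at 512 and 612 would wait for the next render and come out with offsets 0 and 100.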
William Stewart wrote at 06.5.1 11:21 AM:
>
>On 30/04/2006, at 3:52 AM, Takeshi Yokemura wrote:
>
>> Hi, everyone.
>>
>> I'm now developing a software synthesizer,
>> and it's more or less finished for now.
>> But it doesn't seem to be accurate about time control.
>> The timing of the notes randomly shifts.
>>
>> The amount of the shift seems to be about 10 ms at maximum.
>> As a result, when I play a series of 32nd notes, for example,
>> they don't come at even intervals.
>>
>> I noticed that note start/stop occurs
>> only at the beginning of Render() in my program,
>> because I didn't consider "when in a slice" an event has occurred.
>>
>> To fix this, I have to refer to inOffsetSampleFrame , right?
>
>Yes - and this is the sample offset that should be applied to the
>scheduling of the event in the NEXT render call.
>
>So, a note that is scheduled with say a 200 sample offset, starts 200
>samples into the buffer supplied in the next render.
>
>Bill
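Bill's answer, that the offset positions the note within the next render buffer, can be sketched as follows. (Illustrative only; `RenderWithOffset` is a hypothetical helper, not the actual AudioUnit render callback signature, and a constant 1.0f stands in for the synth voice.)

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Fill a render buffer so that a note scheduled with inOffsetSampleFrame
// is silent for the first inOffsetSampleFrame frames and sounds from
// that frame to the end of the buffer.
static std::vector<float> RenderWithOffset(uint32_t inOffsetSampleFrame,
                                           uint32_t inNumberFrames)
{
    std::vector<float> buffer(inNumberFrames, 0.0f); // silence before the note
    for (uint32_t i = inOffsetSampleFrame; i < inNumberFrames; ++i)
        buffer[i] = 1.0f;                            // note is sounding
    return buffer;
}
```

So a note scheduled with a 200-sample offset produces 200 frames of silence at the top of the next render buffer, then starts.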
>>
>> But I cannot figure out the time relationship between
>> - the timing at which Render() is called
>> - the time origin of the value of inOffsetSampleFrame
>>
>> My understanding so far is:
>>
>>       slice n              slice n+1            slice n+2
>> |--------------------|--------------------|--------------------|
>>         ^Note-on event      ^Render() call
>> |<----->|                                 |<------------------>|
>>  inOffsetSampleFrame                             output
>>                                                  ^Note starts playing
>>
>> - if an event occurs in Slice n, it's put into the queue
>> - somewhere in the next slice (Slice n+1), Render() is called,
>> and the output waveform is written to the buffer
>> - in Slice n+2, the buffer is read and the sound actually comes out
>>
>>
>> Is it right or not??
>
>--
>mailto:email@hidden
>tel: +1 408 974 4056
>________________________________________________________________________
>__
>"Much human ingenuity has gone into finding the ultimate Before.
>The current state of knowledge can be summarized thus:
>In the beginning, there was nothing, which exploded" - Terry Pratchett
>________________________________________________________________________
>__
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden