Re: Core Audio & DP Performance
- Subject: Re: Core Audio & DP Performance
- From: Kurt Bigler <email@hidden>
- Date: Sat, 19 Apr 2003 13:35:02 -0700
on 4/19/03 11:47 AM, Lubor Prikryl <email@hidden> wrote:

> Thanks for your answer Jeff,
>
> I actually process everything in the audio thread. I simply arrange
> buffers and call the rendering function of the plug-ins. But the nature
> of a time-constrained thread doesn't allow me to hand work to a second
> processing thread just in time when it is necessary.
>
> E.g.
> The system calls audio thread A at a certain time,
> it gives me a time interval to process buffers,
> I pass some part of the processing to another thread B, which is not
> time constrained,
> I wait for the scheduler to switch to B...
> ...the time to process A is over
> and I didn't fill the buffers.
>
> Yes, I can increase the priority of B, but it is not guaranteed that
> the task scheduler will give it CPU time just when I need it (in
> contrast to real-time threads). So I think it is impossible to split
> the processing between the original constrained thread and an
> additional, even high-priority, thread.
>
> Lubor, DSound
You just have to process a buffer or more in advance in your other
high-priority (feeder) thread or threads, so that when the audio thread
callback occurs it completes immediately, the data having already been
computed. The feeder thread(s) can "act" as if they were the audio
callback, but the data is really being computed in advance. Then you can
work on subdividing the computation into multiple threads so you can take
advantage of multiple CPUs.
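Concretely, the scheme might look something like the following. This is
only a sketch: a single-producer/single-consumer ring of pre-rendered
buffers, with a Mach semaphore so the feeder blocks while the ring is
full. RenderDSP() and every other name here are illustrative
placeholders, not part of the Core Audio API, and a real implementation
would also want memory barriers on MP hardware.

/*
 * Sketch of the feeder scheme: the feeder thread renders ahead of
 * the hardware; the HAL's IO proc only copies. All names here are
 * placeholders, not Core Audio API.
 */
#include <mach/mach.h>
#include <mach/semaphore.h>
#include <pthread.h>
#include <string.h>

#define NUM_SLOTS  4     /* buffers rendered ahead of the hardware */
#define NUM_FRAMES 128   /* frames per device buffer */

extern void RenderDSP(float *out, unsigned frames);  /* your existing DSP */

typedef struct {
    float slots[NUM_SLOTS][NUM_FRAMES * 2];   /* stereo interleaved */
    volatile unsigned head;   /* next slot the IO proc reads */
    volatile unsigned tail;   /* next slot the feeder fills  */
    semaphore_t emptySlots;   /* Mach semaphore: counts free slots */
} Ring;

/* Feeder thread: "acts" as the audio callback, a few buffers early. */
static void *FeederThread(void *arg)
{
    Ring *r = (Ring *)arg;
    for (;;) {
        semaphore_wait(r->emptySlots);         /* block while the ring is full */
        RenderDSP(r->slots[r->tail % NUM_SLOTS], NUM_FRAMES);
        r->tail++;                             /* publish the filled slot */
    }
    return NULL;
}

/* Called from the HAL's IO proc: completes almost immediately,
 * the data having already been computed. */
static void PullBuffer(Ring *r, float *out)
{
    if (r->head == r->tail) {                  /* feeder fell behind */
        memset(out, 0, sizeof r->slots[0]);    /* emit silence, not a click */
        return;
    }
    memcpy(out, r->slots[r->head % NUM_SLOTS], sizeof r->slots[0]);
    r->head++;
    semaphore_signal(r->emptySlots);           /* hand the slot back */
}

static void StartFeeder(Ring *r)
{
    pthread_t t;
    r->head = r->tail = 0;
    semaphore_create(mach_task_self(), &r->emptySlots,
                     SYNC_POLICY_FIFO, NUM_SLOTS);  /* all slots start empty */
    pthread_create(&t, NULL, FeederThread, r);      /* then raise its priority */
}

The IO proc then reduces to a memcpy plus a semaphore_signal, which is
cheap enough to leave the time-constrained thread idle for almost all of
its cycle.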
I actually wonder whether this might produce less additional latency than
you think. It is really a hardware cycle that you have to keep up with, and
the HAL is supposed to optimize the timings of things automatically. But
what does that really mean?
So here's a question for the CA team. If the HAL recognizes that the audio
thread is completing immediately all the time, won't it then start making
the call later, closer to when the hardware needs the data? In that case
won't the net latency be reduced? Or does the HAL always try to make the
callback as early as possible? If so, then it would seem to me that some
latency is being "wasted", and that there might be some alternative
(supplemental) API model that would better support the HAL's relationship to
a feeder-driven approach.
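In the meantime, the HAL does publish the figures that relate the IO
cycle to the hardware, so you can at least see how much slack is in
play. A sketch that reads them for the default output device (error
handling omitted; whether these numbers answer the scheduling question
above is exactly what I'm asking):

/*
 * Sketch: query the HAL's published timing figures using the
 * AudioDeviceGetProperty API. Error handling omitted for brevity.
 */
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID dev;
    UInt32 frames, latency, safety, size;

    size = sizeof(dev);
    AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                             &size, &dev);

    size = sizeof(frames);   /* frames per IO cycle */
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertyBufferFrameSize,
                           &size, &frames);

    size = sizeof(latency);  /* the device's own presentation latency */
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertyLatency,
                           &size, &latency);

    size = sizeof(safety);   /* extra distance the HAL keeps from the head */
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertySafetyOffset,
                           &size, &safety);

    printf("buffer %lu, latency %lu, safety offset %lu (frames)\n",
           (unsigned long)frames, (unsigned long)latency,
           (unsigned long)safety);
    return 0;
}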
-Kurt Bigler

> On Friday, April 18, 2003, at 11:30 PM, Jeff Moore wrote:
>
>> My guess is that you are doing all your DSP on the HAL's IO thread,
>> yes? That's what the ping-ponging of the CPU load indicates to me. By
>> doing this, you are effectively ignoring the second processor. What
>> you are seeing is the IO thread running on the CPUs. But you will only
>> ever be using one of them at a time to do work. Your performance
>> numbers bear this out.
>>
>> To make use of both processors, you need to come up with a scheme to
>> offload some of your DSP work onto another thread. How you best
>> accomplish this will depend on how you have your rendering pipeline
>> set up. You'll need to figure out the data dependencies and then
>> organize the work accordingly.
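For the common case Jeff describes, where the units have no data
dependencies (say, parallel reverb instances mixed at the end), the
split can be as simple as fanning half the units out to a second thread
and joining before the mix. A sketch, with RenderUnit() and MixUnits()
as placeholders for the plug-in's own DSP, not Core Audio calls:

/*
 * Sketch of fanning independent DSP units out to a second CPU: the
 * render (or feeder) thread wakes a worker, each side processes half
 * of the units, and the render thread joins before mixing. Only safe
 * because the two halves share no data.
 */
#include <mach/mach.h>
#include <mach/semaphore.h>
#include <pthread.h>

#define NUM_UNITS 12                 /* e.g. 12 reverb instances */

extern void RenderUnit(int unit, unsigned frames);
extern void MixUnits(unsigned frames);

static semaphore_t gWork, gDone;     /* worker wake-up / completion */
static unsigned    gFrames;          /* frame count for this pass   */

static void *Worker(void *unused)
{
    for (;;) {
        semaphore_wait(gWork);                   /* wait for a render pass   */
        for (int i = NUM_UNITS / 2; i < NUM_UNITS; i++)
            RenderUnit(i, gFrames);              /* second half of the units */
        semaphore_signal(gDone);                 /* report completion        */
    }
    return NULL;
}

/* Called once per buffer; overlaps the two halves on two CPUs. */
static void RenderAllUnits(unsigned frames)
{
    gFrames = frames;
    semaphore_signal(gWork);                     /* start the worker         */
    for (int i = 0; i < NUM_UNITS / 2; i++)
        RenderUnit(i, frames);                   /* first half, concurrently */
    semaphore_wait(gDone);                       /* join before the mix      */
    MixUnits(frames);
}

static void StartWorker(void)
{
    pthread_t t;
    semaphore_create(mach_task_self(), &gWork, SYNC_POLICY_FIFO, 0);
    semaphore_create(mach_task_self(), &gDone, SYNC_POLICY_FIFO, 0);
    pthread_create(&t, NULL, Worker, NULL);      /* boost its priority too   */
}

Giving the worker the same elevated priority as the render thread keeps
the two halves finishing at about the same time; otherwise the join just
waits on the slower side.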
>>
>> On Friday, April 18, 2003, at 01:58 PM, Lubor Prikryl wrote:
>>
>>> Hi,
>>> although this is a bit off topic for development interests, maybe it
>>> is important.
>>>
>>> I tested the same reverb algorithm on various systems; the criterion
>>> was the maximum number of effects before clicks appeared.
>>> The tested plugin DS-RV1 is available with our GT Player on the Apple
>>> OS X downloads site.
>>> Buffer size was 128 samples; the PCs use ASIO drivers.
>>>
>>> The result is the maximum number of reverbs processed:
>>>
>>> iMac G4 800MHz / MOTU 828                 7
>>> Silver G4 733MHz / M-Audio Delta 2496 AP   6
>>> Dual G4 1.4GHz / M-Audio Delta 2496 AP    12
>>> Celeron 1.2GHz / M-Audio Delta 2496 AP    10
>>> P4 1.8GHz / M-Audio Delta 2496 AP         16
>>>
>>> The problem with the DP G4 is this:
>>> Both processors carry the same load on average, but the load
>>> fluctuates from one processor to the other, so at certain moments one
>>> runs with a very low load and the other with a very heavy one.
>>> The maximum overall load is very far from the 85%-90% of a
>>> single-CPU machine (of course).
>>>
>>> The application itself (GT Player) runs the GUI (e.g. meters) in one
>>> thread, the event loop in another, and MIDI and audio in their own
>>> time-constrained threads. Can a developer schedule threads to make
>>> the performance of both processors more "stable"?
>>>
>>> Lubor, DSound
>>
>> --
>> Jeff Moore
>> Core Audio
>> Apple
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.