Re: System preparation for realtime audio
- Subject: Re: System preparation for realtime audio
- From: Brian Willoughby <email@hidden>
- Date: Wed, 5 Mar 2008 22:28:18 -0800
I am not entirely sure why you are using JACK. My impression is that
JACK is useful for users (not developers) who do not have the source
code for their favorite playback application, but who still want to
manipulate (or record) its output.
In your case, you are developing a DAW, and it would seem that you
have full control of the audio data and processing. Why do you need
JACK? Can you not work directly with the audio output devices or
aggregates?
I suppose that if your system involves integrating e.g. iTunes
playback, instead of simply opening the same file within your DAW and
playing it yourself, then you might need JACK to harvest the audio
streams of other applications. But it would still seem to me that
you would be better off implementing your own playback rather than
integrating the playback of an application for which you do not have
source into your DAW audio stream.
In other words, JACK might be making your problem more difficult than
it needs to be. I am merely suggesting that reworking your design to
use audio output units instead might get your product working
sooner. I have nothing against JACK for the problems it is designed
to solve, but it does seem to overly complicate your design.
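For what it's worth, Brian's suggestion of talking to the output device through an output audio unit might look roughly like the sketch below. This uses the AudioComponent API names (period code would use the Component Manager equivalents such as FindNextComponent); error handling is omitted, and the render callback simply emits silence where a DAW engine would do its mixing. Treat it as a sketch of the approach, not a complete implementation.

```c
// Sketch: open the default output audio unit and feed it from a render
// callback. The render logic here (zeroing the buffers) is a placeholder
// for a DAW's actual mix engine.
#include <AudioUnit/AudioUnit.h>
#include <string.h>

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // A real DAW would fill ioData with mixed audio here.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0,
               ioData->mBuffers[i].mDataByteSize);
    return noErr;
}

int main(void)
{
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);

    // Install the render callback on the unit's input scope.
    AURenderCallbackStruct cb = { RenderCallback, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    /* ... application runs; CoreAudio pulls audio via the callback ... */
    AudioOutputUnitStop(unit);
    AudioUnitUninitialize(unit);
    AudioComponentInstanceDispose(unit);
    return 0;
}
```

With this approach the DAW owns the device I/O directly, and an aggregate device can stand in for multi-interface setups.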
P.S. I scanned your web site, but cannot determine from the
voluminous text whether JACK might be integral to your feature set
somehow. Where's the Executive Summary? ;-)
Brian Willoughby
Sound Consulting
On Mar 5, 2008, at 16:45, alejandro wrote:
We are a post-production DAW manufacturer, and we are porting our
product Audiohive (www.openstudionetworks.com) from Linux to OS X.
Our app uses JACK for audio I/O and uses multiple cores for
processing. The processing threads are decoupled from the JACK
callback threads in a producer/consumer model, and they must have
very good realtime response to avoid dropping audio packets. On
Linux, threads can be locked to cores to avoid cache and scheduling
conflicts, and interrupts can be redirected to particular cores
(the ones that are not processing audio).
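A producer/consumer split like the one Alejandro describes is typically built on a wait-free queue, so the realtime JACK callback never blocks on the processing threads. A minimal single-producer/single-consumer ring buffer in C11 might look like this; it is an illustrative sketch of the pattern, not Audiohive's actual code:

```c
// Single-producer/single-consumer lock-free ring buffer. The realtime
// callback (producer) and a processing thread (consumer) each advance
// only their own index, so no locks are needed. Capacity must be a
// power of two so that masking implements the wraparound.
#include <stdatomic.h>
#include <stddef.h>

#define RING_CAPACITY 1024  /* frames; must be a power of two */

typedef struct {
    float buf[RING_CAPACITY];
    atomic_size_t write_pos;  /* advanced only by the producer */
    atomic_size_t read_pos;   /* advanced only by the consumer */
} spsc_ring;

/* Producer side (e.g. the JACK process callback): never blocks;
 * on overflow it drops samples rather than stalling the callback. */
static size_t ring_write(spsc_ring *r, const float *src, size_t n)
{
    size_t w  = atomic_load_explicit(&r->write_pos, memory_order_relaxed);
    size_t rd = atomic_load_explicit(&r->read_pos, memory_order_acquire);
    size_t free_space = RING_CAPACITY - (w - rd);
    if (n > free_space) n = free_space;
    for (size_t i = 0; i < n; i++)
        r->buf[(w + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&r->write_pos, w + n, memory_order_release);
    return n;
}

/* Consumer side (a processing thread): reads whatever is available. */
static size_t ring_read(spsc_ring *r, float *dst, size_t n)
{
    size_t rd = atomic_load_explicit(&r->read_pos, memory_order_relaxed);
    size_t w  = atomic_load_explicit(&r->write_pos, memory_order_acquire);
    size_t avail = w - rd;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = r->buf[(rd + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&r->read_pos, rd + n, memory_order_release);
    return n;
}
```

The "drop rather than block" choice on the producer side is deliberate: in a realtime callback, a missed write is recoverable, but blocking causes exactly the kind of dropout Alejandro is trying to avoid.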
OS X lacks these features, but what I am experiencing is that all of
these processing threads, scheduled with
THREAD_TIME_CONSTRAINT_POLICY (by far the best policy I have tried,
including the new THREAD_AFFINITY_POLICY), hit a scheduling "stop"
every second, so the CPU meters show a peak. With 512-sample buffers
the peak doubles the CPU consumption, but with 256-sample buffers it
is three times higher!
As you can imagine it is a huge problem for us. Thank you,
Alejandro
Coreaudio-api mailing list (email@hidden)