Re: preferred disk api for high bandwidth sample reading
- Subject: Re: preferred disk api for high bandwidth sample reading
- From: Living Memory <email@hidden>
- Date: Sun, 03 Jan 2010 22:17:35 +0000
I just read my original post; of course I meant 24-bit, not 16 (brainstorm), and yes, the sample data amounts to many gigabytes, far more than can fit into memory.
Paul's observation probably says it all...
>> there is no
>> way to handle data on disk that provides any shortcut around careful
>> buffering&caching design
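In practice that seems to boil down to the usual streaming-sampler shape: keep the start of every sample resident in RAM, and have a dedicated disk thread keep a per-voice ring buffer topped up so the render callback never goes near a file. A very rough sketch of that shape (the names, sizes and synchronisation here are placeholders, not code from a working project; a real version needs proper atomics or memory barriers on the positions):

#include <stdint.h>

/* Very rough sketch; field names, sizes and the synchronisation are
   placeholders. A real version needs proper atomics or memory barriers
   on readFrame/writeFrame, not plain loads and stores. */

#define PRELOAD_FRAMES 32768   /* head of each sample kept permanently in RAM */
#define RING_FRAMES    65536   /* per-voice ring topped up by the disk thread */

typedef struct {
    float   *preload;       /* first PRELOAD_FRAMES, loaded with the patch */
    float   *ring;          /* RING_FRAMES of streamed data */
    int64_t  writeFrame;    /* advanced only by the disk thread */
    int64_t  readFrame;     /* advanced only by the render callback */
    int64_t  nextFileFrame; /* where the next disk read starts */
} Voice;

/* Disk thread: loops over the active voices and tops up any ring that
   has drained below half full. All file I/O lives here. */
static void refill(Voice *v)
{
    int64_t buffered = v->writeFrame - v->readFrame;
    if (buffered > RING_FRAMES / 2)
        return;                 /* still comfortably full */

    /* read up to (RING_FRAMES - buffered) frames starting at
       v->nextFileFrame into v->ring (ExtAudioFileRead, pread,
       whatever the file layer is), then advance v->writeFrame and
       v->nextFileFrame by however many frames arrived. */
}

/* Render callback: plays from v->preload until readFrame passes
   PRELOAD_FRAMES, then only ever copies out of v->ring; no file I/O,
   locks or allocation on this thread. If the ring ever runs dry,
   output silence and count an underrun rather than blocking. */

The preloaded head is what buys the disk thread time: by the time a voice has played past it, the first streamed chunk should already be sitting in the ring.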
But really, what I'm looking for here is other people's experience using Core Audio to develop sample readers of this scale on the Mac; I don't want to reinvent wheels if I can help it, and the previous threads on issues like this are quite old now.
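For concreteness, the obvious Core Audio route for the reads themselves seems to be ExtAudioFile, which hands back floats and does the 24-bit conversion as part of the read. A rough sketch of the disk-thread read path, assuming the files already match the engine sample rate and skipping most error handling:

#include <AudioToolbox/AudioToolbox.h>

/* Sketch only: open one sample and pull a chunk of interleaved stereo
   floats. All of this runs on the disk thread, never in the render
   callback. */

static ExtAudioFileRef open_sample(CFURLRef url)
{
    ExtAudioFileRef af = NULL;
    if (ExtAudioFileOpenURL(url, &af) != noErr)
        return NULL;

    /* Ask for native floats back; ExtAudioFile converts from the
       24-bit file format during the read. */
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = 44100.0;   /* assuming files match the engine rate */
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(float);
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    ExtAudioFileSetProperty(af, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(fmt), &fmt);
    return af;
}

/* Read up to maxFrames starting at fileFrame into dst (interleaved
   stereo); returns the frames actually delivered, 0 at end of file.
   The seek is only needed when jumping (say, back to a loop point);
   sequential reads carry on from wherever the last one stopped. */
static UInt32 read_chunk(ExtAudioFileRef af, SInt64 fileFrame,
                         float *dst, UInt32 maxFrames)
{
    ExtAudioFileSeek(af, fileFrame);

    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = 2;
    abl.mBuffers[0].mDataByteSize   = (UInt32)(maxFrames * 2 * sizeof(float));
    abl.mBuffers[0].mData           = dst;

    UInt32 frames = maxFrames;
    if (ExtAudioFileRead(af, &frames, &abl) != noErr)
        return 0;
    return frames;
}

Whether ExtAudioFile keeps up at this bandwidth, or whether people end up dropping down to uncached reads of the raw file data (F_NOCACHE and friends), is exactly the sort of experience I'm hoping to hear about.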
thanks all
On 3 Jan 2010, at 19:45, tahome izwah wrote:
> Wow, I had no idea that we're talking about RAM requirements of this
> magnitude! I honestly don't know what would be the best way to handle
> this, as I don't think anybody has ever done this before - not in a
> synth that is supposed to have 64-note polyphony anyway...
>
> The only synth I used that did something that came close was the
> Hartmann Neuron. They used a Linux system and converted sounds to
> models that could be pretty big and were resynthesized in realtime
> from HD. Not sure if this helps though... but maybe there is some info
> on it on the 'net.
>
> --th
>
>
> 2010/1/3 Paul Davis <email@hidden>:
>> On Sun, Jan 3, 2010 at 1:49 PM, tahome izwah <email@hidden> wrote:
>>> Just my 2 cents: With memory getting cheaper all the time and MacOS X
> being (mostly) 64-bit now I would read all the (relevant) samples into
>>> memory rather than streaming from HD. The way I see it your app is
>>> going to need a hi end Mac anyway so it could as well use the
>>> available RAM rather than read from HD (which is comparably slow). I
>>> guess there must be a clever way to figure out what data you need and
>>> when, so you can preload at least that part.
>>
>> i am not a recording engineer, so i don't record "real sessions" as i
>> test my host app. nevertheless, i have test sessions of around 15-30GB
>> of audio data, way beyond anything that will fit into RAM. there is no
>> way to handle data on disk that provides any shortcut around careful
>> buffering&caching design. you might get away with "just put it all in
>> RAM" if your application is limited in scope, but otherwise, you're
>> just deferring the point at which things start to break.
>>
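(Coming back to the suggestion above about figuring out what can be preloaded: the back-of-envelope numbers, made up but hopefully in the right ballpark, are that 64 voices of 24-bit stereo at 44.1 kHz only need about 64 x 2 x 44,100 x 3 bytes, roughly 17 MB/s of sustained throughput, which a single drive can manage; the real constraint is worst-case latency, so each sample keeps enough of its start resident to cover the time until the disk thread delivers the rest. Allowing, say, half a second gives about 22,000 frames, around 130 KB per stereo sample if the preload stays in its 24-bit form, or about 130 MB for a thousand samples in a patch, which is a lot but nothing like holding the whole library in RAM.)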