Re: High bandwidth disk management techniques
- Subject: Re: High bandwidth disk management techniques
- From: Mark Gilbert <email@hidden>
- Date: Mon, 2 May 2005 11:33:52 +0100
Thanks Doug!!
This gives me quite a bit to chew on!
- I have now implemented all this, including async reads (which
currently force a wait for completion, whilst I plan a proper async
architecture). I can't really force the 4096 boundaries, but I am not
worried about extra file system CPU or memory overhead - it's the disk
access I am trying to optimise.
- Currently I am reading each file into a general buffer (which may
be multichannel interleaved), waiting for completion, then moving this
data into a set of mono channel buffers. Then I loop around and do
the same for the next file. After all this I move on with my full
set of mono buffers. It's all basically sync in practice at the
moment, and as such it behaves pretty much how it always has.
So - would there be any benefit in starting ALL the async reads one
after the other, then waiting for all the callbacks (one for each
read) to complete, and then moving on to my deinterleaving code? Is
there potential benefit to having 64 async reads scheduled? Will the
file system take advantage of efficiency opportunities?
Incidentally, I tried the kFSNoCacheBit ('noCacheBit' ???) and it
resulted in pure noise instead of audio... I also found the
noCacheMask, which according to the docs is similar - this doesn't
harm the audio, but I am not clear if it's doing anything to the
caching.
On a related topic, when we are WRITING 64 channels to disk (all
working OK), we write once per second, but the disk only accesses
about once every 5 seconds. Is there a way (and any benefit) to
force the system to write more often?
Cheers
Mark
At 1:24 am -0700 2/5/05, Doug Wyatt wrote:
On May 1, 2005, at 2:30, Mark Gilbert wrote:
We have noticed that OSX seems to maintain a HUGE disk cache (as
much as 250 MB) which sits between our calls to disk and the actual
disk access (we call once per second, but the disk is only hit
around once every 5 seconds).
Use the Carbon File Manager.
Unless you're reading loops repeatedly, set kFSNoCacheBit on
positionMode in your calls to FSReadFork. This will prevent caching
of read data that you won't be reading again.
Read into page-aligned buffers (i.e. addresses that are multiples of 4096).
Read in multiples of 4096 bytes.
Read from file positions that are multiples of 4096 from the
beginning of the file.
A fully aligned read should bypass various places in the file system
and File Manager that will do buffering and copying.
Do an unaligned read first if it makes it possible for all of your
subsequent reads on a file to be aligned.
When deciding how large a buffer per file to maintain, consider
drive seek times. A drive with a 10 ms seek time can only read from
100 different files per second.
I've heard of people reading filesystem data structures to get a big
picture of which tracks/sectors will be read during a given time
interval, then reordering the reads to minimize seek times. But
that's beyond my experience.
Cheers,
Doug
--
email@hidden
Tel: +44 208 340 5677
fax: +44 870 055 7790
http://www.gallery.co.uk
New Product ! - Zen MetaCorder
Location Sound Field Recorder
http://www.metacorder.info
Coreaudio-api mailing list (email@hidden)