Re: High bandwidth disk management techniques
- Subject: Re: High bandwidth disk management techniques
- From: kelly jacklin <email@hidden>
- Date: Mon, 2 May 2005 06:32:19 -0700
On May 2, 2005, at 3:33 AM, Mark Gilbert wrote:
This gives me quite a bit to chew on!
I have now implemented all this, including async reads (which are
currently forced to wait for completion, whilst I plan a proper
async architecture). I can't really force the 4096-byte boundaries,
but I am not worried about extra file system CPU or memory overhead -
it's the disk access I am trying to optimise.
This will become a limiting factor in the performance of your I/O.
To get the best I/O performance possible, you must read from page
boundaries in the file into page boundaries in memory, otherwise
de-blocking will occur. De-blocking goes through the unified buffer
cache, so you are incurring filesystem-level caching, which is
inappropriate for non-looped audio files.
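To be concrete about "aligned and grained to page boundaries", here
is a rough, untested sketch (the 4096-byte page size and the helper
names are just my assumptions, not anything from your code):

#include <stdlib.h>

enum { kPageSize = 4096 };   /* assumed VM page size */

/* Allocate a page-aligned read buffer; valloc() returns page-aligned
   memory, and we round the request up to a whole number of pages. */
static void *AllocateAlignedReadBuffer(size_t byteCount)
{
    size_t rounded = (byteCount + kPageSize - 1) & ~(size_t)(kPageSize - 1);
    return valloc(rounded);
}

/* Given an arbitrary byte range in the file, compute the page-aligned
   range that actually has to be read to cover it. */
static void AlignReadRange(unsigned long long offset, size_t length,
                           unsigned long long *alignedOffset,
                           size_t *alignedLength)
{
    unsigned long long start = offset & ~(unsigned long long)(kPageSize - 1);
    unsigned long long end   = (offset + length + kPageSize - 1) &
                               ~(unsigned long long)(kPageSize - 1);
    *alignedOffset = start;
    *alignedLength = (size_t)(end - start);
}

You then read the aligned range and skip the leading (offset - start)
bytes when you de-interleave.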
Currently I am reading each file into a general buffer (which may
be multichannel interleaved), waiting for completion, then moving
this data into a set of mono channel buffers. Then I loop around
and do the same for the next file. After all this I move on with
my full set of mono buffers. It's all basically sync in practice
at the moment, and as such it behaves pretty much how it always has.
Issue the reads, schedule the de-interleave from the completion
proc, and do the de-interleave on another thread (even the main
thread would be better than the completion proc). This lets
multiple IOs be in flight, and does not require that the IOs block
on your de-interleaving.
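The de-interleave itself is just a loop like the following (a sketch
with made-up names, assuming 32-bit float samples); the only point
is that it runs on a worker thread, not in the completion proc:

/* De-interleave one buffer of interleaved float samples into
   per-channel buffers: frame 0 is ch0 ch1 ... chN-1, then frame 1,
   and so on. */
static void DeinterleaveFloats(const float *interleaved,
                               float *const *channelBuffers,
                               unsigned int channelCount,
                               unsigned int frameCount)
{
    unsigned int frame, channel;
    for (channel = 0; channel < channelCount; ++channel) {
        const float *src = interleaved + channel;
        float *dst = channelBuffers[channel];
        for (frame = 0; frame < frameCount; ++frame) {
            dst[frame] = *src;
            src += channelCount;
        }
    }
}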
So - would there be any benefit in starting ALL the async reads one
after the other, then waiting for all the callbacks (one for each
read) to complete, and then moving on to my deinterleaving code?
Or de-interleave the completed reads on another thread.
Incidentally, I tried the kFSNoCacheBit ('noCacheBit' ???) and it
resulted in pure noise instead of audio... I also found the
noCacheMask which according to the docs is similar - this doesn't
harm the audio, but I am not clear if it's doing anything to the
caching.
As you discovered, you use the mask, not the bit... The noCacheMask
will result in the reads avoiding the unified buffer cache in the
kernel, and thus the read can (if the circumstances are right) be
transferred right into your properly aligned user buffer (again,
assuming the read is aligned and grained to page boundaries, and your
buffer is as well). Anything else will result in varying degrees of
pollution of the unified buffer cache, which can also cause page
contention (as the UBC is used for both file caching and VM).
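Concretely, the mask just gets OR'd into the positionMode you pass
to the read call. With the synchronous FSReadFork it looks roughly
like this (a sketch; the refnum, offset and buffer are assumed to
come from FSOpenFork and the aligned setup above):

#include <CoreServices/CoreServices.h>

/* Read alignedLength bytes at alignedOffset, bypassing the unified
   buffer cache.  buffer must be page-aligned, and the offset/length
   page-grained, for the uncached path to kick in. */
static OSErr ReadUncached(SInt16 forkRefNum, SInt64 alignedOffset,
                          ByteCount alignedLength, void *buffer)
{
    ByteCount actualCount = 0;
    return FSReadFork(forkRefNum,
                      fsFromStart | noCacheMask,   /* bypass the UBC */
                      alignedOffset,
                      alignedLength,
                      buffer,
                      &actualCount);
}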
On a related topic, when we are WRITING 64 channels to disk (all
working OK), we write once per second, but the disk only accesses
about once every 5 seconds. Is there a way (and any benefit) to
force the system to write more often?
Sounds like you are not also setting the noCacheMask for writes,
which you should be doing...
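The write side is symmetrical (same caveats: the refnum and the
page-aligned offset/length/buffer are assumptions of this sketch):

/* Write alignedLength bytes at alignedOffset, bypassing the unified
   buffer cache, so the data heads to disk as you issue it rather
   than sitting in the cache until the kernel decides to flush. */
static OSErr WriteUncached(SInt16 forkRefNum, SInt64 alignedOffset,
                           ByteCount alignedLength, const void *buffer)
{
    ByteCount actualCount = 0;
    return FSWriteFork(forkRefNum,
                       fsFromStart | noCacheMask,   /* bypass the UBC */
                       alignedOffset,
                       alignedLength,
                       buffer,
                       &actualCount);
}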
kelly