Re: High bandwidth disk management techniques
- Subject: Re: High bandwidth disk management techniques
- From: Wolfgang Schneider <email@hidden>
- Date: Mon, 2 May 2005 15:04:04 +0200
I thought about the threaded method a while ago too, but have not tried
it yet. My guess is that multiple threads would only yield a better
result with an NCQ (native command queuing) disk, but that's only
theory.
best,
Wolfgang
Am 02.05.2005 um 14:57 schrieb Mark Gilbert:
Interesting.
This sounds like it's perhaps not worth pursuing the multiple
simultaneous reads idea.
Anyone else have any experience with this ?
Cheers
Mark.
At 2:47 pm +0200 2/5/05, philippe wicker wrote:
On May 2, 2005, at 12:33 PM, Mark Gilbert wrote:
Thanks Doug !!
This gives me quite a bit to chew on !
- I have now implemented all this, including async reads (which
currently force a wait for completion, whilst I plan a proper
async architecture). I can't really force the 4096-byte boundaries,
but I am not worried about extra file system CPU or memory overhead -
it's the disk access I am trying to optimise.
- Currently I am reading each file into a general buffer (which may
be multichannel interleaved), waiting for completion, then moving this
data into a set of mono channel buffers. Then I loop around and do
the same for the next file. After all this I move on with my full
set of mono buffers. It's all basically sync in practice at the
moment, and as such it behaves pretty much as it always has.
So - would there be any benefit in starting ALL the async reads one
after the other, then waiting for all the callbacks (one for each
read) to complete, and then moving on to my deinterleaving code? Is
there potential benefit to having 64 async reads scheduled? Will the
file system take advantage of efficiency opportunities?
Some time ago (OS X 10.2.x) I tried to benchmark simultaneous disk
accesses to a number of files. For the record, these files were
25 MByte segments of a Mac OS X update. I did that using two methods.
The first was to sequentially access a 64 KByte chunk in each file in
turn. The second was to trigger the access to each file from a
different thread (one thread dedicated to a given number of files);
this method is a variant of multiple asynchronous reads. The results
were really surprising and unexpected (at least for me): the
"threaded" access gave twice worse - yes, worse - performance on the
internal drive than the "sequential" method, and an equivalent result
on an external FW400 drive. I don't have a definitive explanation for
these results; it appears as if the OS X disk scheduler and/or the
disk driver and/or the disk firmware do not attempt to optimize by
rescheduling physical sector reads. I didn't try using several
spindles; the result may have been different in that case.
Philippe
--
email@hidden
Tel: +44 208 340 5677
fax: +44 870 055 7790
http://www.gallery.co.uk
New Product ! - Zen MetaCorder
Location Sound Field Recorder
http://www.metacorder.info
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
email@hidden
This email sent to email@hidden