In AUSampler, we use <AudioToolbox/ExtAudioFile.h> to do this. It gives you a number of benefits: you talk to the file in the client format, which you specify, so you don't care what the file format is - for your code, both compressed (AAC, ALAC) and uncompressed (LPCM) files look the same.
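A rough sketch of that pattern, for the archives (error handling trimmed; the client-format choices below are just one plausible setup, not AUSampler's actual code):

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Open a file (AAC, ALAC, or LPCM - it doesn't matter which) and read it as
// 32-bit float PCM by setting a client data format on the ExtAudioFile.
static OSStatus ReadFileAsFloat32(CFURLRef url)
{
    ExtAudioFileRef xaf = NULL;
    OSStatus err = ExtAudioFileOpenURL(url, &xaf);
    if (err != noErr) return err;

    // Ask for the file's native format so we can keep its sample rate and channel count.
    AudioStreamBasicDescription fileFmt = {0};
    UInt32 size = sizeof(fileFmt);
    err = ExtAudioFileGetProperty(xaf, kExtAudioFileProperty_FileDataFormat, &size, &fileFmt);
    if (err != noErr) { ExtAudioFileDispose(xaf); return err; }

    // Client format: interleaved 32-bit float PCM at the file's own sample rate.
    AudioStreamBasicDescription clientFmt = {0};
    clientFmt.mSampleRate       = fileFmt.mSampleRate;
    clientFmt.mFormatID         = kAudioFormatLinearPCM;
    clientFmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    clientFmt.mChannelsPerFrame = fileFmt.mChannelsPerFrame;
    clientFmt.mBitsPerChannel   = 32;
    clientFmt.mBytesPerFrame    = 4 * clientFmt.mChannelsPerFrame;
    clientFmt.mFramesPerPacket  = 1;
    clientFmt.mBytesPerPacket   = clientFmt.mBytesPerFrame;
    err = ExtAudioFileSetProperty(xaf, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(clientFmt), &clientFmt);
    if (err != noErr) { ExtAudioFileDispose(xaf); return err; }

    // Pull frames in chunks; any decoding/conversion happens inside ExtAudioFileRead.
    enum { kFramesPerRead = 4096 };
    float *buf = malloc(kFramesPerRead * clientFmt.mBytesPerFrame);
    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = clientFmt.mChannelsPerFrame;

    UInt32 frames = 0;
    do {
        abl.mBuffers[0].mDataByteSize = kFramesPerRead * clientFmt.mBytesPerFrame;
        abl.mBuffers[0].mData = buf;
        frames = kFramesPerRead;
        err = ExtAudioFileRead(xaf, &frames, &abl);
        // ... hand buf (frames * channels floats) to the consumer here ...
    } while (err == noErr && frames > 0);

    free(buf);
    ExtAudioFileDispose(xaf);
    return err;
}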
Bill

On Jun 28, 2012, at 8:49 AM, Philippe Wicker wrote:

On 28 Jun 2012, at 16:54, Paul Davis wrote:
On Thu, Jun 28, 2012 at 10:41 AM, Philippe Wicker <email@hidden> wrote:
Hello,
We are currently investigating different solutions to add a disk streaming engine to some of our products. Along with the development of a proprietary solution, we're also looking at third-party sources. Does anyone know of a library, including a commercial one, that provides a high-performance disk streaming engine?
this question is, well, a bit *undefined*.
what do i mean? well, let's consider the famous Giga pre-reading behaviour that was the subject of a patent for a while. any unix filesystem of the last 30 years has the same behaviour! if you read N bytes from a given location in a file, the OS will actually read N+M bytes and cache the extra M bytes in memory (the unix buffer cache). the values of N and M are configurable, naturally (though they are, unfortunately, constant across all files on the filesystem). this behaviour applies even to apple's rather weak unix filesystems.

but wait, can't you do better than the OS by rolling your own? say, by opening the file in "direct" mode so that the buffer cache is not used, and then doing intelligent caching in an application/data-specific way?
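(For concreteness, here's a small sketch of what "direct" mode looks like in practice; the fcntl/open flags are the real ones on OS X and Linux, but the function name and the decision to disable read-ahead entirely are only illustrative:)

#define _GNU_SOURCE          /* needed for O_DIRECT on Linux */
#include <fcntl.h>
#include <unistd.h>

// Open a sample file with the kernel's caching/read-ahead turned off, so the
// application can do its own data-specific caching instead.
int open_uncached(const char *path)
{
#ifdef __APPLE__
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    fcntl(fd, F_NOCACHE, 1);   /* bypass the unified buffer cache for this fd */
    fcntl(fd, F_RDAHEAD, 0);   /* disable kernel read-ahead for this fd */
#else
    /* Linux: O_DIRECT bypasses the page cache, but requires reads into
       suitably aligned buffers with aligned sizes and offsets. */
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) return -1;
#endif
    return fd;
}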
This is one of the basic ideas behind the "proprietary solution". We'd like to convert from the file format to the native format and apply SRC while reading a chunk from the disk in a dedicated worker thread, so that the audio callback in the plug-in can focus on the kind of processing that cannot be anticipated (e.g. the pitch shifting). Our "intelligent" caching would be there "only" to save the converted and SRC'ed data. Also, the samples we have to read range from relatively small (less than a second) to medium-sized (5 to 10 seconds), so we are thinking of implementing different strategies depending on the size.
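(The worker-thread/audio-callback handoff that implies is usually a wait-free single-producer/single-consumer FIFO, so the callback never takes a lock or touches the disk. A minimal sketch in C11, with made-up names - StreamFIFO, fifo_push, fifo_pop are for illustration only, not from any shipping code:)

#include <stdatomic.h>
#include <stdlib.h>

// Single-producer / single-consumer FIFO of float samples. The worker thread
// (producer) pushes converted + sample-rate-converted audio; the audio
// callback (consumer) pops without locks or system calls.
typedef struct {
    float          *data;
    size_t          capacity;           /* number of floats, power of two */
    _Atomic size_t  write_pos;
    _Atomic size_t  read_pos;
} StreamFIFO;

static int fifo_init(StreamFIFO *f, size_t capacity_pow2)
{
    f->data = calloc(capacity_pow2, sizeof(float));
    f->capacity = capacity_pow2;
    atomic_init(&f->write_pos, 0);
    atomic_init(&f->read_pos, 0);
    return f->data ? 0 : -1;
}

/* Worker thread: push up to n samples, returns how many were accepted. */
static size_t fifo_push(StreamFIFO *f, const float *src, size_t n)
{
    size_t w = atomic_load_explicit(&f->write_pos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&f->read_pos,  memory_order_acquire);
    size_t free_space = f->capacity - (w - r);
    if (n > free_space) n = free_space;
    for (size_t i = 0; i < n; ++i)
        f->data[(w + i) & (f->capacity - 1)] = src[i];
    atomic_store_explicit(&f->write_pos, w + n, memory_order_release);
    return n;
}

/* Audio callback: pop up to n samples, returns how many were available. */
static size_t fifo_pop(StreamFIFO *f, float *dst, size_t n)
{
    size_t r = atomic_load_explicit(&f->read_pos,  memory_order_relaxed);
    size_t w = atomic_load_explicit(&f->write_pos, memory_order_acquire);
    size_t avail = w - r;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; ++i)
        dst[i] = f->data[(r + i) & (f->capacity - 1)];
    atomic_store_explicit(&f->read_pos, r + n, memory_order_release);
    return n;
}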
well, good luck with that. Oracle certainly pulls this off with their databases, but there have been many, many studies where people have tried to do better than the buffer cache and discovered that in real world scenarios, they can't. if you were to accept that to be the end of the story (it probably isn't), then on OS X at least, you wouldn't plan on using any "disk streaming engine" at all - you'd just do regular system calls to read/write and let the OS take care of the rest.
to get any kind of an answer to this question, i suspect you need to describe in more detail what you mean by "a high performance disk streaming solution".
The minimum requirement is to be able to stream, say, 40 samples simultaneously, including the conversion from the file format to the native 32-bit float format and the SRC. In the future we may have to double or triple that number for one instance of the client plug-in. And of course the engine must be cross-platform (Mac and Windows).
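(As a rough ballpark, assuming 44.1 kHz stereo sources delivered as 32-bit float: 40 voices × 44,100 frames/s × 2 channels × 4 bytes ≈ 14 MB/s of decoded audio, so roughly 28-42 MB/s if that count doubles or triples - before any read-ahead margin, and with the on-disk data smaller if the files are compressed.)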