virtual audio microphone
- Subject: virtual audio microphone
- From: Tuviah Snyder <email@hidden>
- Date: Fri, 29 Jun 2012 19:18:42 +0000
- Thread-topic: virtual audio microphone
Hello,
I need to develop an audio driver which reads audio from shared memory that my application writes to, and is exposed as a normal audio device to other applications such as Skype and Hangouts. Since there is no need to communicate with actual hardware, just read-only access to shared memory, should I write this as a kernel CoreAudio driver or a user-mode CFPlugin audio driver?
What would be the advantages and disadvantages of either approach?
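For context, the shared-memory layout I have in mind is roughly the following. This is only a sketch to illustrate the data path (the names, sizes and the single-writer/single-reader assumption are placeholders, not working driver code):

/* Shared-memory ring buffer: the application writes audio into it and
 * the virtual device's IO proc reads from it.  Single writer (app),
 * single reader (driver).  Sketch only; names and sizes are made up. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_FRAMES   16384          /* capacity in sample frames      */
#define RING_CHANNELS 2              /* interleaved stereo, float32    */

typedef struct {
    _Atomic uint64_t write_pos;      /* total frames written (app)     */
    _Atomic uint64_t read_pos;       /* total frames consumed (driver) */
    float samples[RING_FRAMES * RING_CHANNELS];
} SharedRing;

/* Driver side: copy up to 'frames' frames into 'dst', zero-filling on
 * underrun so the device always produces something. */
static void ring_read(SharedRing *r, float *dst, uint32_t frames)
{
    uint64_t wr = atomic_load(&r->write_pos);
    uint64_t rd = atomic_load(&r->read_pos);
    for (uint32_t i = 0; i < frames; i++) {
        for (int c = 0; c < RING_CHANNELS; c++) {
            dst[i * RING_CHANNELS + c] =
                (rd < wr) ? r->samples[(rd % RING_FRAMES) * RING_CHANNELS + c]
                          : 0.0f;     /* underrun: output silence */
        }
        if (rd < wr)
            rd++;
    }
    atomic_store(&r->read_pos, rd);
}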
best,
Tuviah
________________________________________
From: coreaudio-api-bounces+tuviahs=email@hidden [coreaudio-api-bounces+tuviahs=email@hidden] on behalf of email@hidden [email@hidden]
Sent: Friday, June 29, 2012 12:00 PM
To: email@hidden
Subject: Coreaudio-api Digest, Vol 9, Issue 181
Send Coreaudio-api mailing list submissions to
email@hidden
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.apple.com/mailman/listinfo/coreaudio-api
or, via email, send a message with subject or body 'help' to
email@hidden
You can reach the person managing the list at
email@hidden
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Coreaudio-api digest..."
Today's Topics:
1. Re: Disk streaming library (Ross Bencina)
2. Re: Disk streaming library (Philippe Wicker)
3. Re: Disk streaming library (Patrick Shirkey)
4. Re: Disk streaming library (Paul Davis)
5. Please check out my new company, Directr... (Brian Lambert)
6. CoreMidi MTC (Patrick Cusack)
----------------------------------------------------------------------
Message: 1
Date: Fri, 29 Jun 2012 17:50:44 +1000
From: Ross Bencina <email@hidden>
To: Paul Davis <email@hidden>
Cc: email@hidden
Subject: Re: Disk streaming library
Message-ID: <email@hidden>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 29/06/2012 12:54 AM, Paul Davis wrote:
> this question is, well, a bit *undefined*.
Agreed.
> but wait, can't you do
> better that the OS by rolling your own? say, by opening the file in
> "direct" mode such that the buffer cache is not used, and then doing
> intelligent caching in an application/data-specific way?
On Windows you can do better, in my experience.
I'm not sure what metrics are being applied here but there are a few to
consider: Even if the same disk throughput can be achieved using the
native file system cache, a workload-specific prefetch system might use
less CPU, give lower read latency, use less RAM etc.
With an N+M heuristic I would suggest that M needs to be controlled
based on the workload (ie file read-rate). Application level
caching/prefetch may still be needed for non-linear reads (ie looping),
or the "keep the first few seconds in RAM" thing.
One thing you *do* want is to have as many io-ops in flight at once as
possible. Afaik the interfaces for doing this are completely different
across Linux, MacOS and Windows.
> well, good luck with that. Oracle certainly pulls this off with their
> databases, but there have been many, many studies where people have
> tried to do better than the buffer cache and discovered that in real
> world scenarios, they can't.
Links please?
There are other reasons to roll your own: asynchrony, determinism, i/o
prioritisation, portability to platforms that don't meet your
Linux-centric assumptions.
> if you were to accept that to be the end of
> the story (it probably isn't), then on OS X at least, you wouldn't plan
> on using any "disk streaming engine" at all - you'd just do regular
> system calls to read/write and let the OS take care of the rest.
>
> to get any kind of an answer to this question, i suspect you need to
> describe in more detail what you mean by "a high performance disk
> streaming solution".
Agreed. And I also agree with the "simplest is best" approach until
proven otherwise by practical experiment.
The main thing is to decouple computation from i/o so you can keep the
io-op pipeline full. Anything that forces you to synchronously
interleave computation with io is not so good for this (*cough* libsndfile).
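A minimal sketch of the kind of decoupling I mean, using POSIX threads and a single-producer/single-consumer queue (names, sizes and the polling sleep are placeholders; a real engine would use a condition variable or semaphore to wake the i/o thread):

/* Sketch: the compute/audio side enqueues read requests; a dedicated
 * i/o thread services them with pread() so computation never blocks
 * on disk.  Start the thread once with
 *   pthread_create(&tid, NULL, io_thread, &queue);
 * Single producer, single consumer; illustrative only. */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define QUEUE_SLOTS 64

typedef struct {
    int      fd;                 /* file to read from                 */
    off_t    offset;             /* byte offset within the file       */
    size_t   length;             /* bytes to read                     */
    void    *dest;               /* caller-owned destination buffer   */
    _Atomic int done;            /* set to 1 by the i/o thread        */
} ReadRequest;

typedef struct {
    ReadRequest     *slots[QUEUE_SLOTS];
    _Atomic uint32_t head, tail;
} RequestQueue;

static int queue_push(RequestQueue *q, ReadRequest *r)   /* producer side */
{
    uint32_t h = atomic_load(&q->head), t = atomic_load(&q->tail);
    if (h - t == QUEUE_SLOTS)
        return 0;                            /* full: caller retries later */
    q->slots[h % QUEUE_SLOTS] = r;
    atomic_store(&q->head, h + 1);
    return 1;
}

static void *io_thread(void *arg)                        /* consumer side */
{
    RequestQueue *q = arg;
    for (;;) {
        uint32_t t = atomic_load(&q->tail);
        if (t == atomic_load(&q->head)) { usleep(1000); continue; }  /* idle */
        ReadRequest *r = q->slots[t % QUEUE_SLOTS];
        pread(r->fd, r->dest, r->length, r->offset);  /* short reads ignored here */
        atomic_store(&r->done, 1);
        atomic_store(&q->tail, t + 1);
    }
    return NULL;
}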
Ross.
------------------------------
Message: 2
Date: Fri, 29 Jun 2012 11:04:11 +0200
From: Philippe Wicker <email@hidden>
To: Ross Bencina <email@hidden>
Cc: email@hidden
Subject: Re: Disk streaming library
Message-ID: <email@hidden>
Content-Type: text/plain; charset=windows-1252
On 29 Jun 2012, at 09:50, Ross Bencina wrote:
> On 29/06/2012 12:54 AM, Paul Davis wrote:
>> this question is, well, a bit *undefined*.
>
> Agreed.
>
>> but wait, can't you do
>> better that the OS by rolling your own? say, by opening the file in
>> "direct" mode such that the buffer cache is not used, and then doing
>> intelligent caching in an application/data-specific way?
>
> On Windows you can better in my experience.
>
> I'm not sure what metrics are being applied here but there are a few to consider: Even if the same disk throughput can be achieved using the native file system cache, a workload-specific prefetch system might use less CPU, give lower read latency, use less RAM etc.
>
> With an N+M heuristic I would suggest that M needs to be controlled based on the workload (ie file read-rate). Application level caching/prefetch may still be needed for non-linear reads (ie looping), or the "keep the first few seconds in RAM" thing.
Yes. We already reached the conclusion that we'll have to cache data around the loop locators. The "keep the first few seconds in RAM" thing is - to my knowledge - the only way to guarantee low latency (e.g. when a note-on is played).
>
> One thing you *do* want is to have as many io-ops in flight at once. Afaik the interfaces for doing this are completely different accross Linux, MacOS and Windows.
I'm not aware of all the APIs that could enable that, but I'd think that an asynchronous read API is needed to give the OS (the disk driver maybe) a chance to reorganise (reschedule) a bunch of IO requests.
>
>
>> well, good luck with that. Oracle certainly pulls this off with their
>> databases, but there have been many, many studies where people have
>> tried to do better than the buffer cache and discovered that in real
>> world scenarios, they can't.
>
> Links please?
>
> There are other reasons to roll your own: asynchrony, determinism, i/o prioritisation, portability to platforms that don't meet your Linux-centric assumptions.
>
>
>> if you were to accept that to be the end of
>> the story (it probably isn't), then on OS X at least, you wouldn't plan
>> on using any "disk streaming engine" at all - you'd just do regular
>> system calls to read/write and let the OS take care of the rest.
>>
>> to get any kind of an answer to this question, i suspect you need to
>> describe in more detail what you mean by "a high performance disk
>> streaming solution".
>
> Agreed. And I also agree with the "simplest is best" approach until proven otherwise by practical experiment.
>
> The main thing is to decouple computation from i/o so you can keep the io-op pipeline full. Anything that forces you to synchronously interleave computation with io is not so good for this (*cough* libsndfile).
Hmmm. The need for data is uncorrelated across the streamed samples: you need to start streaming a sample (or a group of samples) when a note-on (or a similar event) is received. From the plug-in's point of view these events are random and unpredictable. The rate at which a sample has to be read also depends on a lot of parameters (the note value, the pitch modulation…). Disk read commands have their origin in the audio callback because this is where events are processed. It may happen that a few read commands are sent to the worker thread at roughly the same time. Do you mean, then, that we should wait for the completion of all the asynchronous IOs (assuming an asynchronous disk API) before processing the raw sample data (conversion and SRC)?
>
> Ross.
>
>
>
------------------------------
Message: 3
Date: Fri, 29 Jun 2012 11:04:46 +0200 (CEST)
From: Patrick Shirkey <email@hidden>
To: email@hidden
Subject: Re: Disk streaming library
Message-ID:
<email@hidden>
Content-Type: text/plain; charset=iso-8859-1
On Fri, June 29, 2012 9:50 am, Ross Bencina wrote:
> On 29/06/2012 12:54 AM, Paul Davis wrote:
>> this question is, well, a bit *undefined*.
>
> Agreed.
>
>> but wait, can't you do
>> better that the OS by rolling your own? say, by opening the file in
>> "direct" mode such that the buffer cache is not used, and then doing
>> intelligent caching in an application/data-specific way?
>
> On Windows you can better in my experience.
>
> I'm not sure what metrics are being applied here but there are a few to
> consider: Even if the same disk throughput can be achieved using the
> native file system cache, a workload-specific prefetch system might use
> less CPU, give lower read latency, use less RAM etc.
>
> With an N+M heuristic I would suggest that M needs to be controlled
> based on the workload (ie file read-rate). Application level
> caching/prefetch may still be needed for non-linear reads (ie looping),
> or the "keep the first few seconds in RAM" thing.
>
> One thing you *do* want is to have as many io-ops in flight at once.
> Afaik the interfaces for doing this are completely different accross
> Linux, MacOS and Windows.
>
>
>> well, good luck with that. Oracle certainly pulls this off with their
>> databases, but there have been many, many studies where people have
>> tried to do better than the buffer cache and discovered that in real
>> world scenarios, they can't.
>
> Links please?
>
> There are other reasons to roll your own: asynchrony, determinism, i/o
> prioritisation, portability to platforms that don't meet your
> Linux-centric assumptions.
>
>
>> if you were to accept that to be the end of
>> the story (it probably isn't), then on OS X at least, you wouldn't plan
>> on using any "disk streaming engine" at all - you'd just do regular
>> system calls to read/write and let the OS take care of the rest.
>>
>> to get any kind of an answer to this question, i suspect you need to
>> describe in more detail what you mean by "a high performance disk
>> streaming solution".
>
> Agreed. And I also agree with the "simplest is best" approach until
> proven otherwise by practical experiment.
>
> The main thing is to decouple computation from i/o so you can keep the
> io-op pipeline full. Anything that forces you to synchronously
> interleave computation with io is not so good for this (*cough*
> libsndfile).
>
Is Erik around to defend that?
Just wondering if the assessment is any different on iOS?
--
Patrick Shirkey
Boost Hardware Ltd
------------------------------
Message: 4
Date: Fri, 29 Jun 2012 09:12:26 -0400
From: Paul Davis <email@hidden>
To: Philippe Wicker <email@hidden>
Cc: email@hidden
Subject: Re: Disk streaming library
Message-ID:
<email@hidden>
Content-Type: text/plain; charset="windows-1252"
On Fri, Jun 29, 2012 at 5:04 AM, Philippe Wicker <
email@hidden> wrote:
>
>
> Yes. We already reached the conclusion that we'll have to cache data
> around the loop locators. The "keep the first few seconds in RAM" thing is
> - to my knowledge - the only way to guarantee a low latency (e.g. when a
> note on is played).
>
I should probably stress that in addition to relying on the OS for low
level disk i/o strategy, Ardour also buffers a big chunk of data for both
capture and playback (default: 5 seconds worth). This is necessary to deal
with stalls in disk i/o that, while rare, do seem to happen on different
platforms. Linux actually seems to be the worst, though admittedly the last
time I checked it was many years ago (at that time, disk i/o could stall
out for *seconds*). The basic strategy is very simple: a single-reader,
single-writer FIFO between the audio callback and disk i/o, with both audio
callback and disk i/o occurring in separate threads. We tried using multiple
threads for disk i/o a few years back and on *nix like systems we didn't
find that it really helped. Instead, the disk i/o thread does its best to
ensure reasonably equitable round-robin serving of all the FIFOs by reading
min(BLOCKSIZE,FIFO-SPACE) rather than FIFO-SPACE which would allow
starvation. In Ardour 3.0, the size of these FIFOs is dynamically variable
by the user (it matters more in Ardour which is often being used
exclusively for HDR, in which case using huge amounts of RAM to buffer
against any chance of a dropout in the to-disk pipeline is entirely
reasonable).
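To make the refill policy concrete, here is a rough sketch of that round-robin pass (the names, FIFO layout and sizes are invented for illustration; Ardour's actual code is organised quite differently):

/* Sketch: one single-reader/single-writer FIFO per playback stream and
 * a single disk thread that tops each FIFO up by at most BLOCKSIZE
 * frames per pass, i.e. min(BLOCKSIZE, FIFO space), so one large empty
 * FIFO cannot starve the others.  Illustrative only. */
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define FIFO_FRAMES 262144               /* ~5 s of mono float at 48 kHz   */
#define BLOCKSIZE     4096               /* max frames per stream per pass */

typedef struct {
    int    fd;                           /* audio file being streamed    */
    off_t  file_pos;                     /* next byte to read            */
    float  buf[FIFO_FRAMES];
    _Atomic uint64_t wr, rd;             /* total frames written / read  */
} Stream;

static uint64_t fifo_space(const Stream *s)      /* frames free to write */
{
    return FIFO_FRAMES - (atomic_load(&s->wr) - atomic_load(&s->rd));
}

static void disk_thread_pass(Stream **streams, size_t n)
{
    float tmp[BLOCKSIZE];
    for (size_t i = 0; i < n; i++) {
        Stream  *s     = streams[i];
        uint64_t space = fifo_space(s);
        uint64_t chunk = space < BLOCKSIZE ? space : BLOCKSIZE;   /* min() */
        if (chunk == 0)
            continue;
        ssize_t got = pread(s->fd, tmp, chunk * sizeof(float), s->file_pos);
        if (got <= 0)
            continue;                    /* EOF or error: try again later */
        s->file_pos += got;
        uint64_t frames = (uint64_t)got / sizeof(float);
        uint64_t wr = atomic_load(&s->wr);
        for (uint64_t f = 0; f < frames; f++)      /* copy, handling wrap */
            s->buf[(wr + f) % FIFO_FRAMES] = tmp[f];
        atomic_store(&s->wr, wr + frames);
    }
}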
> One thing you *do* want is to have as many io-ops in flight at once.
Afaik the interfaces for doing this are completely different accross Linux,
MacOS and Windows.
Yes, although we also found that the improvements from doing this were
nowhere near as large as some of the papers/docs on this had led us to
suspect. It also means using asynchronous I/O APIs which are more complex
and not anywhere near as well tested and evaluated as their older cousins.
As a result, Ardour specifically does *not* do this, and we still get
pretty excellent i/o performance, even on Windows (where we are using POSIX
API calls, not native Windows ones).
> >> well, good luck with that. Oracle certainly pulls this off with their
> >> databases, but there have been many, many studies where people have
> >> tried to do better than the buffer cache and discovered that in real
> >> world scenarios, they can't.
> >
> > Links please?
>
Ross, I wish I could help, but it's been a few years since I was working in
a CS department and reading this stuff all the time. I could be wrong now.
>
> Hmmm. The need of data on all the streamed samples is uncorrelated between
> the samples. I mean that you need to start streaming a sample (or a group
> of samples) when a note ON (or a similar event) is received. From the
> plug-in point of view these events are random, unpredictable. Also the rate
> at which a sample has to be read depends on a lot of parameters (the note
> value, the pitch modulation…). Disk read commands have their origin in the
> audio callback because this is where events are processed.
You're not really discussing a streaming engine at all then. You just need
a very smart disk caching mechanism, which is related, but different.
------------------------------
Message: 5
Date: Fri, 29 Jun 2012 08:15:18 -0700
From: Brian Lambert <email@hidden>
To: email@hidden
Subject: Please check out my new company, Directr...
Message-ID:
<email@hidden>
Content-Type: text/plain; charset="windows-1252"
Hey!
I'm part of a team starting a new company called Directr. We're turning personal moviemaking upside down by allowing anyone to create beautiful, short, shareable movies in just a few minutes.
Starting today, we're giving our friends a sneak peek into what we're working on and announcing that we've raised seed funding from a top-notch group of investors.
Check out the TechCrunch article here:
http://techcrunch.com/2012/06/29/directr/
Directr will launch later this summer but, in the meantime, we would love your support!
It would be awesome if you could:
- Visit our site at www.directr.co where you can reserve your username
- Follow us on twitter at http://twitter.com/directrfilms to stay updated and
- Like us on Facebook at http://www.facebook.com/directrfilms
Thanks!
Brian
------------------------------
Message: 6
Date: Fri, 29 Jun 2012 08:41:25 -0700
From: Patrick Cusack <email@hidden>
To: email@hidden
Subject: CoreMidi MTC
Message-ID: <email@hidden>
Content-Type: text/plain; CHARSET=US-ASCII
I have set up a MIDI input port in my code and have attached a callback for reading the MIDI data received. That is all working fine. I am reading MIDI Timecode and parsing it in my callback. What I have noticed is that, depending on when I start my application, I can be as much as one second behind the device that is transmitting the MTC. Sometimes it is a frame behind. Regardless, it is inconsistent and frustrating. I am not doing any blocking or Obj-C calls in my readProc. I have even gone to the trouble of disconnecting my USB MIDI device after running my application to see if there is any weird IOKit stuff going on. I could really use some help, even wild-eyed theories. I feel as if MIDI timestamps are useless as there is no objective reference to compare them to.
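For reference, here is a rough sketch of the check I have been trying in the readProc (placeholder names; the actual MTC parsing is omitted): comparing each packet's timeStamp against AudioGetCurrentHostTime(), which is the closest thing to an objective reference I have found.

/* Sketch: log how far behind "now" each packet's timestamp is at the
 * moment the readProc sees it.  Placeholder names; MTC quarter-frame
 * parsing and error handling omitted. */
#include <CoreMIDI/CoreMIDI.h>
#include <CoreAudio/HostTime.h>
#include <stdio.h>

static void MyReadProc(const MIDIPacketList *pktlist,
                       void *readProcRefCon, void *srcConnRefCon)
{
    (void)readProcRefCon;
    (void)srcConnRefCon;
    const MIDIPacket *pkt = &pktlist->packet[0];
    for (UInt32 i = 0; i < pktlist->numPackets; i++) {
        UInt64 now = AudioGetCurrentHostTime();
        /* timeStamp == 0 means "no timestamp"; otherwise it is host ticks. */
        if (pkt->timeStamp != 0 && pkt->timeStamp <= now) {
            UInt64 ageNanos = AudioConvertHostTimeToNanos(now - pkt->timeStamp);
            printf("packet is %.3f ms old at delivery\n", ageNanos / 1.0e6);
        }
        /* ... parse the MTC quarter-frame bytes in pkt->data here ... */
        pkt = MIDIPacketNext(pkt);
    }
}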
Thanks,
MidiMadAndSad
------------------------------
_______________________________________________
Coreaudio-api mailing list
email@hidden
https://lists.apple.com/mailman/listinfo/coreaudio-api
End of Coreaudio-api Digest, Vol 9, Issue 181
*********************************************
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden