Re: CoreAudio vs CoreData
- Subject: Re: CoreAudio vs CoreData
- From: Gregory Wieber <email@hidden>
- Date: Fri, 14 Oct 2011 11:29:42 -0700
If you're going to use Core Data to read audio information that is time-specific, you could do it on another thread and write the information into a C array (or a custom struct) that both the Core Data part of your application and the render callback have access to. That way you avoid calling Objective-C from within your render callback. You could increment a timer in your callback and make that variable accessible to the Core Data thread, to keep the two in sync.
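A minimal sketch of that shared-struct idea, assuming C11 atomics (the names here are illustrative, not from any Apple API): the render callback advances an atomic sample counter and reads only plain C data, while the background thread that talks to Core Data fills the array ahead of time and watches the counter to stay in sync.

```c
#include <stdatomic.h>
#include <stdint.h>

#define EVENT_CAPACITY 256

typedef struct {
    double values[EVENT_CAPACITY];   /* precomputed, time-indexed data  */
    _Atomic uint64_t sampleTime;     /* advanced by the render callback */
} SharedAudioState;

/* Called from the render callback (realtime thread): no locks, no ObjC.
   Advances the counter by the number of frames rendered and returns the
   precomputed value for the new position. */
static inline double shared_state_advance(SharedAudioState *s, uint32_t frames)
{
    uint64_t t = atomic_fetch_add_explicit(&s->sampleTime, frames,
                                           memory_order_relaxed) + frames;
    return s->values[t % EVENT_CAPACITY];
}

/* Called from the Core Data thread: read the current position so it can
   fill slots ahead of where the callback is reading. */
static inline uint64_t shared_state_time(const SharedAudioState *s)
{
    return atomic_load_explicit(&s->sampleTime, memory_order_relaxed);
}
```

The key property is that the realtime side never allocates, locks, or touches Objective-C; the Core Data side only ever writes slots that the callback has not yet reached.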
Think of it, metaphorically, as how you might use a system like OpenGL to render graphics that are based on audio information. You wouldn't have to render the audio data on the graphics card in order to create a graphical representation of that data -- you would simply give OpenGL access to that information and keep the information retrieval in sync with your frame rate. (Again, not a perfect comparison, but perhaps will help you brainstorm your architecture.)
Core Data is very fast in my experience, but you'll get into locking trouble if you try to use it directly within the render callback.
Greg Wieber
On Fri, Oct 14, 2011 at 5:51 AM, Paul Davis <email@hidden> wrote:
On Fri, Oct 14, 2011 at 8:43 AM, patrick machielse <email@hidden> wrote:
> Op 14 okt. 2011, om 02:54 heeft Paul Davis het volgende geschreven:
>
>> suck the data out ahead of time, and make it available to the render
>> callback via some other (lock- and block-free) method.
>>
>> CoreData is not a suitable place to store dynamic, realtime data
>> (where "realtime" is in the audio sense, not the financial trading
>> sense). if you need to "store" modifications by the user, don't go
>> directly to CoreData except as a serialization technique.
>
> If I interpret correctly, that approach would erase most of the advantages we would gain by using CoreData in the first place (e.g. easy undo support) -- and add more code. Your advice would be to not use CoreData and just use a custom data representation?
you need to conceptually separate editing from what takes place in the
render callback. whatever data you use in the render callback needs to
be accessible via lock-free, block-free methods. accessing data via
CoreData does not provide that, so you should not be using it there.
there is nothing to stop you from using CoreData in the UI-driven
(note that I did not say GUI, just UI) side of your app, but you must
make whatever data is needed by the render callback available via some
other mechanism/representation.
ardour, for example, has a complex and sophisticated mechanism to
support undo/redo, but its existence and implementation is completely
invisible to the backend that does audio rendering. we also have a
complex and not very sophisticated RCU (read-copy-update)
architecture for quite a bit of the data used by the backend, which is
what makes it possible for the GUI to be messing around with the data
without using locks in the backend.
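The read-copy-update pattern Paul describes can be sketched in a few lines of C11 (names are illustrative, not Ardour's actual code): the GUI thread copies the current data, edits the copy, and atomically publishes it, while the render callback only ever performs a single atomic load.

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    float gain;
    /* ... whatever else the render callback needs ... */
} RenderData;

static _Atomic(RenderData *) g_current;

/* Render callback side: one acquire load, no locks, no allocation. */
static inline const RenderData *render_data_read(void)
{
    return atomic_load_explicit(&g_current, memory_order_acquire);
}

/* GUI side: copy, update, swap. The old copy must NOT be freed here,
   because the render thread may still be reading it; a real implementation
   defers reclamation until it is provably safe (e.g. a trash list emptied
   between render cycles). The previous pointer is returned to the caller,
   who owns that deferred free. */
static RenderData *render_data_update(float new_gain)
{
    RenderData *fresh = malloc(sizeof *fresh);
    const RenderData *old =
        atomic_load_explicit(&g_current, memory_order_acquire);
    if (old)
        memcpy(fresh, old, sizeof *fresh);
    else
        memset(fresh, 0, sizeof *fresh);
    fresh->gain = new_gain;
    return atomic_exchange_explicit(&g_current, fresh, memory_order_acq_rel);
}
```

The hard part of RCU is always the reclamation of the old copy, not the swap itself; the sketch above pushes that responsibility onto the updating (non-realtime) thread, which is the one place where waiting is acceptable.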
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden