
(no subject)


  • Subject: (no subject)
  • From: Herbie Robinson <email@hidden>
  • Date: Thu, 20 Jan 2005 23:46:47 -0500

I'm working on an AudioUnit.

I'd basically like to know if storing an int/float/double is atomic.

More exactly: does the store instruction commit all of the bytes to memory at
once?

For a single CPU this is most likely true, since a context switch cannot
happen in the middle of an instruction (AFAIK). I don't mind if a thread
gets an old copy of the data; I just don't want it to get a corrupted copy.

What happens on a dual CPU? When one is storing to memory and the other is
reading from it, is it possible that the reader gets a half-new/half-old copy?

I guess the question is: can both CPUs access the memory at the same time,
even if one of them is writing to it? When one CPU writes, the cache line for
the other has to be updated.

I know it isn't good programming style to rely on such specific behavior,
but I'd like to avoid waiting a complete audio render cycle before applying
new parameters.

You can encapsulate the accesses with macros or inline functions in C++. That isolates the machine dependencies.
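
Something along these lines (a hypothetical sketch; StoreParam/LoadParam are made-up names) keeps the "is this store atomic?" assumption in a single place:

#include <stdint.h>
#include <string.h>

// Assumption: on the target CPUs a naturally aligned 32-bit load/store is a
// single memory operation, so a float can be published by moving its bits
// through an aligned volatile int32_t slot.
inline void StoreParam(volatile int32_t *slot, float value)
{
    int32_t bits;
    memcpy(&bits, &value, sizeof bits);
    *slot = bits;                     // one aligned 32-bit store
}

inline float LoadParam(const volatile int32_t *slot)
{
    int32_t bits = *slot;             // one aligned 32-bit load
    float value;
    memcpy(&value, &bits, sizeof value);
    return value;
}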


Someplace in the API there are also atomic routines for threading and incrementing that are good for more complicated operations.
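
On Mac OS X those are presumably the OSAtomic* routines in <libkern/OSAtomic.h> (10.4 and later); the following is a sketch under that assumption, not a quote from the reply:

#include <libkern/OSAtomic.h>

static volatile int32_t gPendingChanges = 0;

// Atomically bump a counter; the Barrier variants also order the surrounding
// memory operations, which matters on multiprocessor PowerPC machines.
void NoteParameterChange()
{
    OSAtomicIncrement32Barrier(&gPendingChanges);
}

// Compare-and-swap: succeeds only if no other thread raced us to the reset.
bool ClaimChanges(int32_t expected)
{
    return OSAtomicCompareAndSwap32Barrier(expected, 0, &gPendingChanges);
}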

If you need a consistent set of parameters available to the rendering engine, then you can have two vectors, each holding a full set of parameters. The rendering routine would pick up a shared pointer to one vector and read the parameters from it. The setting routine would fill in the "other" set and then change the pointer. It might be a good idea to make sure each vector is in a different cache line.
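
A minimal sketch of that double-buffer scheme (all names invented; it assumes a single setter thread and that an aligned pointer store is itself not torn; otherwise publish the pointer with one of the atomic routines above):

struct ParamSet {
    float gain;
    float cutoff;
    float resonance;
};

static ParamSet            gSets[2];
static ParamSet * volatile gCurrent = &gSets[0];   // what the render thread reads

// Setter thread: fill in the set the renderer is NOT using, then publish it
// with a single pointer store. On PowerPC a write barrier (or one of the
// Barrier atomic routines) should precede the publish so the new values are
// visible before the new pointer is.
void SetParams(float gain, float cutoff, float resonance)
{
    ParamSet *other = (gCurrent == &gSets[0]) ? &gSets[1] : &gSets[0];
    other->gain      = gain;
    other->cutoff    = cutoff;
    other->resonance = resonance;
    gCurrent = other;
}

// Render thread: grab the pointer once per render cycle, then read a
// consistent set of parameters through it.
void Render()
{
    const ParamSet *p = gCurrent;
    // ... use p->gain, p->cutoff, p->resonance for this cycle ...
}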

When thinking about multi-CPU performance, think in terms of cache lines, not variables. What is really going on when one shares data is that entire cache lines bounce back and forth between the CPUs. If one puts unrelated data in the same cache line, one can get a ping-pong game going that sucks up the bus bandwidth.
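
One way to keep the two sets off the same line (a sketch; the 128-byte figure matches the G5's cache line, other CPUs differ, so treat the constant as an assumption):

// Pad and align each parameter set so the two sets never share a cache line;
// writes to the "other" set then cannot invalidate the line the render
// thread is currently reading.
struct PaddedParamSet {
    float gain;
    float cutoff;
    float resonance;
    char  pad[128 - 3 * sizeof(float)];
} __attribute__((aligned(128)));

static PaddedParamSet gPaddedSets[2];   // each element starts on its own line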
--
-*****************************************
** http://www.curbside-recording.com/ **
******************************************