On Jul 5, 2014, at 12:48 , Gerriet M. Denkmann <email@hidden> wrote:
I have a heavily used array called maskArray ( maskArray[i] = 0x1 << i ).
I summed this array 100 million times in Objective-C. It took 0.05 sec, which is 1.5 times faster than not using maskArray and instead computing the value directly (0.075 sec):
uint32_t maskArray[32];
I did the same in Swift. Using maskArray took 15 sec. This is 160 times slower than computing the value directly (0.09 sec, which is itself 1.25 times slower than Obj-C), and 300 times slower than the Obj-C maskArray version:
var maskArray = UInt32[](count: Int(32), repeatedValue: 0)
Perhaps you could post more of the Swift code?
Note that it took me a while to work out that when you said “Obj-C” you really meant “C”. That is, you’re using a C array, not an NSArray.
Keep in mind that Swift arrays are structs, not classes, so they’re passed around by value, not by reference. On top of that, Swift arrays are subject to being (shallow) copied whenever the compiler decides a copy is needed, so the way you handle the array will affect your performance. That’s why it would be helpful to see the larger context in which the array is being used.
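To illustrate the value semantics point (using current Swift syntax for the sketch, not the syntax from the original post):

```swift
// Swift arrays have value semantics: assigning or passing one
// conceptually copies it (copy-on-write under the hood).
var original: [UInt32] = [1, 2, 4, 8]
var copy = original   // no sharing, from the caller's point of view
copy[0] = 99          // mutating the copy...

// ...leaves the original untouched, unlike an NSArray reference:
print(original[0])    // 1
print(copy[0])        // 99
```

With an NSArray (a class), both names would refer to the same object; with a Swift Array (a struct), the mutation triggers a copy, and in hot loops those copies are exactly where performance can go missing.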
If you’ll forgive some editorializing …
I suspect that what you’re trying to do is — semantically — something that doesn’t exist in the Swift language itself. In C, easy access to raw memory makes it feasible to cache these kinds of pre-calculated values as a performance optimization. In Swift, arrays are *collections*, not … er, well … arrays, in the C sense.
Just as you’d be unlikely to use an NSArray for this kind of performance optimization, it may make no sense to use a Swift array either.
Instead, the correct approach (assuming that a lookup-based performance optimization is called for) may be to get hold of some raw memory (e.g. an NSData object of suitable size, or an unsafe pointer) and index into its bytes directly.
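A rough sketch of that raw-memory pattern, using UnsafeMutablePointer and current Swift API names (the names here are illustrative, not from the original post):

```swift
// Build the 32-entry mask table in raw memory and index into it
// directly, bypassing Swift's Array machinery entirely.
let count = 32
let maskTable = UnsafeMutablePointer<UInt32>.allocate(capacity: count)
defer { maskTable.deallocate() }   // caller owns the memory

for i in 0..<count {
    maskTable[i] = UInt32(1) << i   // maskTable[i] = 0x1 << i
}

// Summing via the table, analogous to the C benchmark loop:
var sum: UInt32 = 0
for i in 0..<count {
    sum = sum &+ maskTable[i]       // &+ = wrapping add
}
print(sum)   // 4294967295 (0xFFFFFFFF: all 32 mask bits set)
```

Subscripting an UnsafeMutablePointer is a plain pointer dereference with no bounds checking or copy-on-write bookkeeping, which is much closer to what the C version is actually doing — at the obvious cost of manual memory management and no safety net.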