CFSwapInt32HostToBig(error) takes the value of error and converts it from the host's native byte order to big-endian format.
It then writes the binary value of the error, in big-endian form, into bytes 1 through 4 of the 20-character error string.
(errorString + 1) is a pointer to byte 1 of that 20-character buffer.
(UInt32 *)(errorString + 1) casts this pointer to be a pointer to a UInt32.
*(UInt32 *)(errorString + 1) dereferences this UInt32 pointer, allowing it to be used as an l-value.
So this allows the UInt32 value returned by CFSwapInt32HostToBig(error) to be assigned into bytes 1 through 4 of the 20-character error string as though they were a single UInt32.
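In other words, that single assignment just copies the four bytes of the swapped value into errorString[1] through errorString[4]. As an illustrative equivalent (not from the book; the helper name here is made up), the same write could be expressed with memcpy:

#include <string.h>
#include <CoreFoundation/CoreFoundation.h>   /* CFSwapInt32HostToBig, UInt32, OSStatus */

void writeFourCharCode(char *errorString, OSStatus error)
{
    /* Same effect as: *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error); */
    UInt32 bigEndian = CFSwapInt32HostToBig((UInt32)error);
    memcpy(errorString + 1, &bigEndian, sizeof(bigEndian));
}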
Then the code goes on to examine the bytes using isprint.
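For the fuller picture, here is a minimal, self-contained sketch of a CheckError-style helper in the spirit of the book's listing (a reconstruction, not the verbatim code): write the swapped bytes into the buffer, test them with isprint, wrap them in single quotes if all four are printable, and otherwise fall back to printing the error as a plain decimal number.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <CoreFoundation/CoreFoundation.h>   /* CFSwapInt32HostToBig, OSStatus, UInt32, noErr */

void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char errorString[20];
    /* See if it appears to be a 4-char-code: put the big-endian bytes in errorString[1..4]. */
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig((UInt32)error);

    /* Cast to unsigned char so isprint() always receives a non-negative value. */
    if (isprint((unsigned char)errorString[1]) && isprint((unsigned char)errorString[2]) &&
        isprint((unsigned char)errorString[3]) && isprint((unsigned char)errorString[4])) {
        /* All four bytes are printable: surround them with single quotes, e.g. 'fmt?'. */
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        /* Not a printable code: format the raw value as a decimal integer instead. */
        sprintf(errorString, "%d", (int)error);
    }

    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}

A typical call site would look something like CheckError(AudioFileOpenURL(...), "AudioFileOpenURL failed"): an error whose four bytes are ASCII prints between quotes, while a value such as -50 falls back to the decimal form.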
On Wed, May 16, 2012 at 4:00 AM, Ben
<email@hidden> wrote:
I'm running through the examples in the book 'Learning Core Audio' and I came across these two lines...
char errorString[20];
// see if it appears to be a 4-char-code
*(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
The second line looks pretty alien to me (rusty on C); can someone explain what is going on, please? I understand what the function CFSwapInt32HostToBig() is doing, but what is going on with errorString?
Thanks in advance.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden