long double data type
- Subject: long double data type
- From: Luigi Castelli <email@hidden>
- Date: Tue, 15 Jun 2010 08:26:50 -0700 (PDT)
Hi folks,
I am on a MacBook Pro Intel Core 2 Duo, running Snow Leopard 10.6.2.
I am working on a DSP audio mastering application that needs to perform mathematical operations at very high precision.
(Yes, I did my tests, and 64-bit double precision just doesn't cut it.)
To accomplish this task I am thinking about performing all my calculations in quad precision, using the long double data type.
After a little research and experimentation, I found that in Xcode 3.2.1 this data type is 128 bits.
As a matter of fact, when I write the following:
printf("%zu", sizeof(long double));
I get 16 bytes, which equals 128 bits, the size of an IEEE 754 quadruple-precision (binary128) value:
Sign bit: 1
Exponent width: 15
Significand precision: 113 (112 explicitly stored)
All clear so far.
However, I've also been told that internally the floating-point unit of my processor only uses the first 80 bits.
...and that's where I start getting confused.
So, I have a few questions to shed some light on the issue:
1 - is it true that the floating-point unit of a MacBook Pro Intel Core 2 Duo only uses the first 80 bits?
2 - if so, do those 80 bits cover the exponent + mantissa, or only the mantissa?
3 - if the 80 bits cover both exponent and mantissa, how are they subdivided? (how many bits go to the exponent and how many to the mantissa)
4 - when I store a long double in memory, does my computer store the full 128 bits of the original type or only the first 80 bits?
5 - regardless of the real resolution, does the long double data type work natively on the Intel processor, or is it implemented in software?
Thanks for any clarification.
- Luigi Castelli
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)