Re: 32-bit on 10.8
- Subject: Re: 32-bit on 10.8
- From: Vincent Habchi <email@hidden>
- Date: Sun, 12 Aug 2012 10:57:44 +0200
On 12 August 2012, at 04:09, koko <email@hidden> wrote:
> Is 64-bit the end or will there be 128-bit?
The term "64-bit" is misleading. There are several parameters that can be quantified by a number of bits: internal width of registers, of the data and address buses, both internal and external, of the various ALUs, etc. Now, what is most of the time meant by 64-bit is that integer registers, the one that are used to perform addressing, are 64-bit wide.
In the old days of, let's say, the 68000, that was about it: there were no other registers inside the machine. The 68000 was a 32-bit CPU embodied in a 16-bit package: at that time, the so-called "fakir" package, the PGA, had not yet been devised, so Motorola pushed the DIP as far as it could go (64 pins). With 64 pins you could not bring out 32-bit data and address buses together, so they compromised, leaving 16 pins for data and 23 for addresses (a 24-bit address space); that's why the 68000 was most of the time referred to as a 16/32-bit processor. The 68020 and the 80386 were the first 32-bit internal chips to be packaged in a fully 32-bit-capable physical interface. Note, though, that apart from some minor extensions to the instruction set and the addition of the micro-coded coprocessor dialog (the line-F instructions), nothing in the 68020 was really new compared to the bare 68000. On the other hand, since most peripheral chips were only 8 bits wide, and memory was scarce and expensive, reduced versions of the 68000 appeared (the 68008: 8-bit data bus, 48 pins, used in the Sinclair QL). By the way, the Z80 (and the 6809) could compute on (and to some extent transfer) 16-bit quantities, but nobody ever dared call them 16-bit processors.
I think that's one of the reasons we still count in bytes. Nobody cares about nibbles anymore, because no hardware has relied on 4-bit exchanges in eons. But bytes remained important even in the 32-bit era because CPUs had to talk to 8-bit peripherals, whose registers were never widened, mainly because their pin count had to stay low and there was no need for greater width (the serial and parallel standards were both 8-bit oriented). The other reason was the ASCII standard: processing "chars" and strings required 8-bit operations. Legacy 8-bit instructions were kept for this purpose, and the granularity never increased from 8 to 16 and 32 bits as it had from 4 to 8. This is a pain, because exchanging 8 or 64 bits takes the same time, and octet-based management requires a lot of extra hardware to maintain cache operation and coherency. Besides, these days we should all be using modern Unicode, whose code units are at least 16 bits wide (as in UTF-16); there is no real need for octets anymore.
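To make the granularity point concrete, here is a minimal C sketch (my own illustration; it assumes nothing beyond a C99 compiler) showing that the byte is still the smallest unit C can address, while a single UTF-16 code unit already needs a wider type:

    #include <limits.h>   /* CHAR_BIT */
    #include <stdint.h>   /* uint16_t */
    #include <stdio.h>

    int main(void)
    {
        /* The 8-bit byte remains C's smallest addressable unit: */
        printf("bits per char: %d\n", CHAR_BIT);   /* 8 on mainstream hardware */

        /* A UTF-16 code unit no longer fits in a char: */
        uint16_t e_acute = 0x00E9;   /* U+00E9, 'e' with an acute accent */
        printf("sizeof a UTF-16 unit: %zu\n", sizeof e_acute);   /* 2 */
        return 0;
    }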
One can argue there was no significant leap between 8 and 16-bit processors. That's mainly true: widening registers and buses is not a technological breakthrough, just an evolution. Yet there was one major step forward: the debut of the "protected" or "supervisor" mode. That was the key to implementing all modern OSes, which fence kernel space off from user space and thereby provide a robustness that was impossible to achieve on 8-bit CPUs.
With current architectures, the portrait is not so simple to draw. If we stay in the integer realm, it's still fairly straightforward: EM64T-capable CPUs have 64-bit integer registers used to hold both data and addresses (unlike the 68000, which had separate data and address registers that could, in theory, be of different sizes), so we speak of a "64-bit processor", even though the full 64-bit address bus is not wired externally (it is used internally by the MMU and the TLB). In the floating-point kingdom, 64 bits (a.k.a. double precision) has been the norm for years, at least for FPU operations. But we all know that SIMD instructions, beginning with MMX, then SSE, now AVX, use larger data widths (128 bits with SSE, 256 bits with AVX), so from the floating-point point of view, CPUs are more 128 or 256-bit than 64-bit. I'm not sure, but I would guess the internal data paths between the AVX ALUs and the caches have been widened accordingly. With the Haswell processors due out next year and AVX2, 256-bit data support will be extended to integer operations. We could therefore call these future CPUs 256/64-bit.
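To show how that 128-bit width surfaces to the programmer, here is a minimal sketch (mine; it assumes an SSE2-capable compiler such as gcc or clang) using the intrinsics from <emmintrin.h>: a single _mm_add_pd adds two pairs of doubles in one 128-bit operation:

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void)
    {
        /* Two 128-bit XMM registers, each holding two 64-bit doubles: */
        __m128d a = _mm_set_pd(1.0, 2.0);   /* lanes: {2.0, 1.0} */
        __m128d b = _mm_set_pd(3.0, 4.0);   /* lanes: {4.0, 3.0} */

        /* One instruction adds both lanes at once: */
        __m128d sum = _mm_add_pd(a, b);

        double out[2];
        _mm_storeu_pd(out, sum);
        printf("%g %g\n", out[0], out[1]);   /* prints: 6 4 */
        return 0;
    }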
That's from the hardware side. Unfortunately, since most people gave up programming in assembly (what a loss! ;)) in favor of C, its derivatives, or various other high-level languages, there was a need to nail down the widths of the various data types. So we ended up with an 'int' 32 bits wide, which was the most reasonable choice at the time C was standardized, because processors had 32-bit data and address registers; as a side benefit, we could bet that sizeof(int) == sizeof(void *). When registers moved to 64 bits, the C 'int' should have followed, since both data and address registers were doubled, but it was not done (why? because we would have lacked a 32-bit qualifier? some other side effect?). So we ended up with sizeof(int) != sizeof(void *). And I guess that is mainly what '64-bit' means from the developer's point of view, besides, of course, using libraries compiled for the 64-bit instruction set.
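A minimal sketch of that divergence (mine; compile it once 32-bit, e.g. with gcc -m32, and once 64-bit to see the two models side by side):

    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(int)    = %zu\n", sizeof(int));      /* 4 on ILP32 and on LP64 */
        printf("sizeof(long)   = %zu\n", sizeof(long));     /* 4 on ILP32, 8 on LP64 */
        printf("sizeof(void *) = %zu\n", sizeof(void *));   /* 4 on ILP32, 8 on LP64 */
        return 0;
    }

Any code that stuffed a pointer into an int, relying on the old equality, silently truncates addresses under the 64-bit (LP64) model.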
All of this just to say that 128-bit is almost here. We have 128-bit registers and instructions, albeit presently just for floating-point computations (integer support will follow next year), and once we depart from the equation sizeof(int) == sizeof(void *), we can write C programs for any architecture width. The transition from 32 to 64 bits is difficult because we relied on bad habits. It's like moving from classical physics to quantum mechanics or relativity: it requires changing almost all our habits of thought, but once that is done, it is fairly rewarding.
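For instance, the standard C99 headers already give us a width-agnostic vocabulary; a minimal sketch (again my own illustration):

    #include <stdint.h>     /* intptr_t, int64_t, INT64_C */
    #include <inttypes.h>   /* PRIdPTR, PRId64 */
    #include <stdio.h>

    int main(void)
    {
        int value = 42;

        /* An integer type guaranteed wide enough to hold a pointer,
           whatever the architecture width: */
        intptr_t addr = (intptr_t)&value;
        printf("address as integer: %" PRIdPTR "\n", addr);

        /* Fixed-width arithmetic instead of guessing sizeof(int): */
        int64_t wide = INT64_C(1) << 40;
        printf("2^40 = %" PRId64 "\n", wide);
        return 0;
    }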
Vincent