Why I don't think that 64 bit will be enough "for a long time"

Tue, 22 Sep 2009 23:46:01 +0200

At the moment, there seems to be a trend towards managed code. Microsoft pushes its .NET Framework, scientific and functional languages are built on top of the JVM, and Apple develops LLVM. This is not really surprising: runtime speed and memory are no longer a bottleneck in most cases, while there is a huge need for better security, reusability, maintainability and portability of code.

It is surprising, though, that x86-based architectures are still found in virtually all computers, even though a RISC architecture would be more convenient for managed code. But that's another story.

Meanwhile, the x86_64 architecture is spreading, and it seems that 32-bit architectures will be deprecated soon. That is a good thing. Still, one could already think about "x86_128", a 128-bit architecture. Many people have told me that 64 bit will be "enough for many years". But actually, I don't see why.

Having a 128-bit bus can double the speed of handling huge amounts of data, which is one bottleneck, as far as I know. Usually it is not a problem to store data, or to work with it once you have it; the problem is getting it quickly from one place to the other.

Having more registers (a consequence of a 128-bit architecture, or so I hope) is also a good thing. As far as I know, registers are still a lot faster than even the memory cache. Algorithms for media compression or encryption often work with many local values that change frequently, which can be optimized very well when many registers are available. Garbage collectors can profit from this as well, as far as I know.

Talking about garbage collectors, there is another way in which they, and memory management in general, can profit. Having an address space that is much wider than the actual physical memory makes it possible to divide the memory into regions with different purposes. You can divide it by struct size, for example, or into object spaces, etc., and therefore distinguish objects by the pointers to them (a strategy used, for example, in SBCL).

And I think the increased complexity of operating systems managing multiple processor cores (which also seems to be a trend) could be handled better, too.

So I do think that something like "x86_128" will evolve and spread within the next 10 years, even for consumer PCs.



Tue, 22 Sep 2009 04:10:02 +0200

I wonder how many people I am making myself unpopular with now...