With the advent of 64-bit processors and the operating systems that virtualize them, we are going to see some amazing new technologies emerge in the next 5-10 years. I had the pleasure of interviewing some folks involved in the 64-bit revolution today and really left the interview feeling excited about the future of applications and what we will be able to do on our personal computers thanks to powerful 64-bit processors coupled with cheap memory. You'll be able to see the interview on Channel 9 in a few weeks.
With 64-bit computing, there is no longer the 32-bit memory barrier that hampers things like fast analysis of very large datasets (on a 32-bit system you can't hold them in memory without causing significant performance problems or hitting a dreaded out-of-memory exception...). The system also benefits generally from the larger working address space. When apps start being written in 64-bit mode from scratch (that is, not ported from 32-bit), the possibilities of what we will see are truly compelling, like new types of games and powerful video/audio processing applications for your 64-bit laptop.
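Here's a rough little C sketch of what that barrier looks like in practice; the 6 GB dataset size is just an illustrative number I'm picking, not anything from the interview:

```c
/* Rough sketch: try to hold a 6 GB dataset in a single in-memory buffer.
 * On a 32-bit build, size_t can't even represent the request; on a 64-bit
 * build with enough RAM, the allocation just works. The 6 GB figure is an
 * arbitrary example size. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("pointer width: %u bits\n", (unsigned)(sizeof(void *) * 8));

    if (sizeof(size_t) < 8) {
        /* 32-bit build: a >4 GB dataset can never fit in one address space. */
        printf("32-bit build: dataset is larger than the address space\n");
        return 1;
    }

    size_t dataset = (size_t)6 * 1024 * 1024 * 1024;   /* 6 GB */
    char *buf = malloc(dataset);
    if (buf == NULL) {
        printf("64-bit build, but not enough memory available right now\n");
        return 1;
    }
    printf("allocated a 6 GB buffer; the whole dataset fits in memory\n");
    free(buf);
    return 0;
}
```

On a 64-bit box with enough RAM, the second branch succeeds and the whole dataset can be scanned at memory speed instead of being streamed in chunks from disk.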
What new applications might we expect as a result of 64-bit computing?
Is there a biological barrier to 64-bit? I mean, will I, a normal computer user, need 64-bit? Or is 64-bit the domain of server applications? Yeah, I know it'll make Photoshop run faster, but do I need that extra speed? Who would want/need 64-bit?
Posted by: Anonymous Coward | February 15, 2005 at 11:47 PM
AC, that's a good question, one that was asked by users running 16-bit systems shortly after the advent of 32-bit. Years from now, you'll ask yourself why anybody would still run 32-bit.
Posted by: Carmine | February 16, 2005 at 12:57 AM
I don't think the move to 64-bit will be as much of an advantage for the common user as the jump from 16-bit to 32-bit was. The main advantage of 64-bit is being able to address more than 4 gigabytes of memory without serious swapping or workarounds. In terms of address space, the leap to 64-bit is orders of magnitude larger than the one from 16-bit to 32-bit.
Most of the applications that need to address that much memory (databases, scientific computing, video processing, high-resolution image manipulation, application servers) can already take advantage of that on high-end servers and workstations. Alpha has been around for a long time, Sun is still around, and now you can pick up a PowerMac G5 or an Athlon 64 and set up a cheap little 64-bit box with Linux or Solaris. When 64-bit chips start becoming common in your average Joe's PC, they're not going to notice much difference at all, and may even see a decrease in performance due to the doubled size of pointers.
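To put a rough number on that pointer bloat, here's a tiny made-up C illustration (the tree node is hypothetical, not from any real application): the same struct roughly doubles in size when its pointers go from 4 bytes to 8.

```c
/* Made-up example struct: its two pointers are 4 bytes each on a 32-bit
 * build and 8 bytes each on a 64-bit build, so the node grows from about
 * 12 bytes to about 24 bytes (alignment pads the int up to pointer width). */
#include <stdio.h>

struct tree_node {
    int key;                  /* 4 bytes on both builds         */
    struct tree_node *left;   /* 4 bytes on 32-bit, 8 on 64-bit */
    struct tree_node *right;  /* 4 bytes on 32-bit, 8 on 64-bit */
};

int main(void)
{
    /* Typically prints 12 on a 32-bit build and 24 on a 64-bit build. */
    printf("sizeof(struct tree_node) = %u bytes\n",
           (unsigned)sizeof(struct tree_node));
    return 0;
}
```

Same data, roughly twice the memory per node, so pointer-heavy structures get less out of each cache line and each megabyte of RAM on a 64-bit build.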
By the time I can order a computer with 4+ GB RAM for under $1000, I'm sure solitaire will be able to fully take advantage of that extra memory. As the old saying goes, "any program will expand to fill available memory." Or as JWZ said, "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can." Developers will always find a new way to use/waste memory.
My guess is this will lead to higher-level languages replacing more and more old low-level code. Applications will be developed faster and at the same time be more maintainable. I think this may have something to do with your post "What happened to OS research?". As you stated, operating systems have become mostly stable, providing a solid base for the next step. The majority of innovation is being done at a higher level, in the application layer. The research that was being done on operating systems has moved to language design and automatic memory management. Managed code allows developers to work on the actual function of an application, rather than needing to control every tiny detail. This process of building on the stable foundation will only lead us to higher and higher levels of abstraction. Base components will continue to get minor tweaks or the occasional reimplementation, but the majority of new design will be on the upper layers, tossing around these components.
Posted by: David | February 16, 2005 at 02:51 AM
Am I correct in thinking memory is miles behind processor technology, and that this will affect 64-bit processors as well? I'm thinking of the L1/L2 caches, although I could be getting it completely wrong!
Posted by: chris | February 16, 2005 at 08:30 AM
David, well-stated analysis. I would only argue that 64-bit will enable a whole new class of user-mode applications that are at this point in time unknown. You are entirely correct that the most obvious short-term benefactors of 64-bit will be server applications like SQL Server and Oracle, which are already running on 64-bit today on high-end machines, as you mention. There will be powerful low-end machines too when consumer-grade 64-bit computers become widely available.
Let's also not forget that there will be marked performance and stability improvements in certain classes of current 32-bit client applications when they are ported to 64-bit.
Software is typically way ahead of hardware, and those of us in the software world often find ourselves limited by what the hardware enables for us. With the advent of 64-bit, software has a wide-open field to play in, much of it unexplored...
It will be really interesting to see what new ideas and subsequent breeds of desktop applications arise from the possibilities 64-bit opens up. It's quite compelling.
Posted by: Carmine | February 16, 2005 at 03:13 PM
Chris, if you had mentioned hard drives as a limiting factor, I would definitely agree with you. You could argue that the size of the on-chip caches is lagging, but I don't think the speed is a problem. The biggest single bottleneck in most computers is disk I/O. A disk access takes orders of magnitude longer than a simple memory access. Larger caches, or even more layers in the cache hierarchy such as L3, can lessen the impact of memory accesses and RAM misses, but the caches have to get loaded from disk at some point, usually on the first cache/RAM miss. Most hard drives now include caches of their own to provide another buffer in the memory hierarchy. I/O will become an even bigger bottleneck as multicore CPUs and multiprocessor systems become more commonplace and each processor fights over the available resources.
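As a rough way to see that gap, here's a small C timing sketch. The 64 MB size and the scratch file name are arbitrary choices of mine, it assumes a POSIX clock_gettime, and the OS page cache can hide the true disk latency unless the data is cold, so treat the numbers as illustrative only.

```c
/* Rough illustration: time a pass over data already in RAM versus reading
 * the same amount of data back through the filesystem. The 64 MB size and
 * the scratch file name are arbitrary choices for this sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define DATA_SIZE (64u * 1024 * 1024)   /* 64 MB */

static double now(void)   /* wall-clock seconds (POSIX) */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    unsigned char *buf = malloc(DATA_SIZE);
    if (buf == NULL) return 1;
    for (size_t i = 0; i < DATA_SIZE; i++) buf[i] = (unsigned char)i;

    /* Pass 1: walk the data in memory. */
    double t0 = now();
    unsigned long long sum = 0;
    for (size_t i = 0; i < DATA_SIZE; i++) sum += buf[i];
    double t1 = now();

    /* Pass 2: write it out, then read it back through the filesystem. */
    FILE *f = fopen("scratch.bin", "wb");
    if (f == NULL) { free(buf); return 1; }
    fwrite(buf, 1, DATA_SIZE, f);
    fclose(f);

    double t2 = now();
    f = fopen("scratch.bin", "rb");
    size_t got = fread(buf, 1, DATA_SIZE, f);
    fclose(f);
    double t3 = now();

    printf("memory pass: %.4f s (sum=%llu)\n", t1 - t0, sum);
    printf("file pass:   %.4f s (%zu bytes read)\n", t3 - t2, got);

    remove("scratch.bin");
    free(buf);
    return 0;
}
```

Run it against a file that isn't already sitting in the page cache and the gap becomes far more dramatic.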
The main problem with hard disks is that they are approaching the physical limits of density and how fast they can reliably operate before physics kicks in and you end up with scattered bits from a head crash.
The move to 64-bit processors will make room for more solid-state RAM drives. I/O-bound workloads would benefit from a solid-state RAM filesystem: non-transactional databases like data warehouses (transactional databases need to persist the data and logs in case of failure), application compilation (reading and writing many generated files), and high-resolution video editing.
Posted by: David | February 17, 2005 at 01:27 AM
I can't believe it... really.
You guys are really talking about "what will be the advantage of 64-bit computing"?
You're just 10 years late. Sun, SGI, and Nintendo have treated 64-bit as an advantage for years and continue to do so.
While research on 128-bit processors is only beginning, you are wondering what advantage a 10-year-old technology brings us.
And drive I/O isn't a factor at all; an FC-AL array in RAID 5 can produce a hell of a lot of I/O.
Posted by: Eric boutin | April 19, 2005 at 10:12 PM
Well, your question is only 10 years too late. The 64-bit CPU was introduced more than 10 years ago; the MIPS R4000 family as well as the UltraSPARC CPUs are mainstream CPUs that have been available for more than 10 years now. Full-fledged virtualization was not available in 64-bit mode, but it has been available in 32-bit mode for a few years now, and for most people it merely gives them the ability to run multiple operating systems on one machine.
For the majority of people the jump to 64-bit mode will mean little, since the 32-bit processor architecture already provides a 4 GB address space. Some OSes provide this per task, while other, lesser OSes only provide a shared 4 GB for all tasks. Perhaps the real advancement would be mainstream OSes that better utilize the processing power already available to them, instead of moving to a larger 64-bit architecture that includes all the extra bloat of doubling the size of every pointer.
Posted by: James Dickens | April 20, 2005 at 02:37 AM