64-Bit Computing

Moore is merrier: for power users everywhere, your chip has come in.

Copernicus, Newton, Kepler, and Einstein all developed “laws” or theorems so fundamental to our understanding of the universe that they are known to every schoolchild. Moore’s Law isn’t quite up there with E=mc² on the greatest-hits list, but it isn’t far behind. The notion that computing power would double about every 18 months as engineers figure out how to build ever-faster microprocessors has held up amazingly well.

Many of those advances, however, have been made within the confines of the “conventional” (if one dares apply such a word to microchips) 32-bit architecture. Since the mid-’90s, some computers have enjoyed far larger memory capacities and faster processing, thanks to a 64-bit architecture, but they have tended to be expensive, proprietary, complex machines used mainly by scientists and engineers. Now relatively inexpensive 64-bit processors bring this speed advantage to the everyday computers that power Websites, corporate applications, and even your home PC.

Sixty-four-bit computing represents the third major architecture shift since the invention of the microprocessor. The first shift, in the early 1980s, brought processors from 8-bit to 16-bit computing. The second came in the late ’80s through the early ’90s with the move from 16-bit to 32-bit computing. The third began in the mid-’90s, when the first proprietary 64-bit processors appeared. More recently, Intel, Advanced Micro Devices, and Apple Computer have introduced 64-bit processors for desktop computers and servers running the Windows, Linux, and Mac operating systems.

How It Works

The term 64-bit refers to the size of the addresses the processor uses to organize the system’s main memory. Sixty-four-bit systems must use wider registers so that the programs running on the computer can calculate these larger addresses. But for reasons we won’t go into here, the expansion of the main-memory address range works wonders on the chip’s overall ability to crunch data. A 32-bit processor can directly address as much as 4 gigabytes (billion bytes) of main memory. By contrast, a 64-bit system can address as much as 16 exabytes—that’s 16 billion gigabytes.
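As a quick sanity check on those figures, here is a short Python sketch (our illustration, not anything supplied by the chipmakers) that derives both ceilings from the address width alone:

    # Back-of-the-envelope arithmetic behind the 32- and 64-bit memory ceilings.
    def addressable_bytes(address_width_bits: int) -> int:
        """Maximum number of bytes a flat address space of this width can reach."""
        return 2 ** address_width_bits

    GIB = 2 ** 30  # one gigabyte, in the binary sense used for memory sizes
    EIB = 2 ** 60  # one exabyte: roughly a billion gigabytes

    print(addressable_bytes(32) // GIB)  # 4  -> the 4-gigabyte limit of 32-bit chips
    print(addressable_bytes(64) // EIB)  # 16 -> the 16-exabyte limit of 64-bit chips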

That may sound like nerdy nattering, but computers with 64-bit processors can run database and other business programs faster, manage larger data files and databases, allow more concurrent users and applications to access data, and reduce software-licensing fees. Basically, the more memory a processor can access at a time, the less it relies on information stored on the computer’s disk drive. By analogy, consider how quickly you can call a friend whose telephone number you’ve memorized; now consider the time needed to call someone whose number you have to look up.
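To make that telephone-number analogy concrete, the toy Python sketch below (our own illustration; the file name and record counts are arbitrary) times a lookup against data already held in memory versus a lookup that has to reread a file from disk:

    # Toy comparison of an in-memory lookup versus a from-disk lookup.
    import os
    import tempfile
    import time

    # Build a small "database" in memory and also write it out to disk.
    records = {f"key{i}": f"value{i}" for i in range(100_000)}
    path = os.path.join(tempfile.gettempdir(), "toy_db.txt")
    with open(path, "w") as f:
        for k, v in records.items():
            f.write(f"{k},{v}\n")

    def lookup_from_disk(key: str) -> str:
        # Scan the file line by line, as if nothing were cached in memory.
        with open(path) as f:
            for line in f:
                k, v = line.rstrip("\n").split(",", 1)
                if k == key:
                    return v
        raise KeyError(key)

    start = time.perf_counter()
    lookup_from_disk("key99999")             # worst case: last record in the file
    disk_seconds = time.perf_counter() - start

    start = time.perf_counter()
    records["key99999"]                      # same record, already resident in memory
    memory_seconds = time.perf_counter() - start

    print(f"disk: {disk_seconds:.6f}s  memory: {memory_seconds:.6f}s")

On a typical machine the in-memory lookup wins by several orders of magnitude, which is the whole argument for raising the processor’s memory ceiling.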

“The less I have to go to disk, the faster my applications run,” explains Jeff Jones, director of strategy for IBM DB2 information-management software. “The faster my applications run, the quicker I can make business decisions.”

As memory costs have dropped (10 years ago, 4 gigabytes of memory cost about $100,000; today the same 4 gigs go for under $2,000), the processor has become more of a bottleneck. The notion of a chip that can tackle 16 exabytes may seem like too much of a good thing, but companies are already figuring out ways to optimize their older applications to take advantage of the larger address space. Microsoft and others are also developing new applications written expressly for 64-bit architectures.
