How to build your own supercomputer

The term “supercomputer” is a loose one. There’s no official definition, so there’s nothing
preventing you from applying the term to your desktop PC, laptop or digital watch.
Broadly, though, it refers to a computer that’s much more powerful than the typical
hardware of its period.
The first supercomputer is often said to be the CDC 6600, designed in the early 1960s by Seymour Cray (whose name would become synonymous with supercomputing). It could
perform calculations at a rate of around one megaflops – that is, one million floating-
point arithmetical operations per second; roughly five times the performance of a
contemporary mainframe such as the IBM 7090.
Today, the term might refer to a system such as the Fujitsu K computer, capable of more
than ten petaflops – a staggering ten-billionfold increase over Cray’s original design. The two figures aren’t perfectly comparable, since the systems performed quite different tasks, but it’s
clear we’re dealing with vast amounts of power.
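To get a feel for what a flops figure means in practice, here’s a rough, single-core sketch in Python. The loop and the number it prints are purely illustrative, and interpreted Python runs far below the hardware’s true peak, so treat the result as a toy figure rather than a benchmark.

```python
import time

def estimate_flops(n=10_000_000):
    """Time a simple multiply-and-add loop and report a (very rough) flop rate."""
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x          # one multiply + one add = two floating-point operations
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

rate = estimate_flops()
print(f"~{rate / 1e6:.1f} megaflops (interpreted Python, single core)")
```

Even this crude measurement will typically report tens of megaflops on a modern PC – already past the CDC 6600 – while optimised native code on the same machine would report thousands of times more.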
Supercomputing applications
It might not be immediately obvious what anybody might do with such incredible
computational power, but there are a number of real-world tasks that will devour all the
processing resources you can throw at them. In scientific research, supercomputers
can be used to test fluid dynamic or aerodynamic models without the need to
build expensive prototypes. At CERN, supercomputers perform simulated
subatomic experiments. Seismologists use supercomputer resources to model the
effects of earthquakes, and meteorologists can rapidly analyse large quantities of
sensor data to predict how weather systems will develop.
Supercomputing is at the forefront of new technologies, too. Creating a computer
interface that responds to natural language, for example, is an extremely challenging task,
owing to the immense variety of sounds, situations and nuances that must be understood;
the more horsepower that can be thrown at the problem, the better the results will be.
Looking further ahead, supercomputing could even deliver the holy grail of artificial
intelligence. Back in 1997, IBM’s Deep Blue supercomputer notoriously defeated
grandmaster Garry Kasparov at chess. Its Blue Gene/P supercomputer, unveiled in 2007,
has been used to simulate a neural network of 1.6 billion neurons, representing around
1% of the complexity of the human brain. And last year, IBM’s Watson computer appeared
as a contestant on US game show Jeopardy!, defeating two former champions to walk
away with a million-dollar prize.
A supercomputer at home
Few of us run seismology labs, or develop artificial intelligence systems. However, there
are domestic roles for supercomputing, too. If you’re a budding film-maker, you’ll find that creating sophisticated cinematic effects involves a great deal of intensive computation. The
more power you have on hand, the more quickly you can try things out and see results.
With enough grunt, you could recreate the photorealistic animations of Michael Bay’s
Transformers movies, or the fantastically detailed world of Wall-E – but even for a
dedicated studio such as Pixar, each frame of an animated movie can take around 90
minutes to render. The precise figure varies from frame to frame, depending on its
complexity, and the computing resources available. Many scenes are rendered
simultaneously – otherwise a film such as Toy Story 3 would take decades to render.
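As a back-of-the-envelope check on that “decades” figure, the short sketch below uses assumed round numbers rather than any studio’s real ones: a roughly 100-minute feature at 24 frames per second, and the 90-minute per-frame render time quoted above.

```python
# Assumed round figures, not any studio's real numbers.
frames = 100 * 60 * 24              # ~100-minute feature at 24 frames per second
minutes_per_frame = 90              # the ballpark render time quoted above
total_minutes = frames * minutes_per_frame

years_serial = total_minutes / (60 * 24 * 365)
print(f"{frames:,} frames, one at a time: ~{years_serial:.0f} years")

machines = 1_000                    # a hypothetical render farm
days_parallel = total_minutes / machines / (60 * 24)
print(f"Spread across {machines:,} machines: ~{days_parallel:.0f} days")
```

At those rates a single machine would need roughly 25 years, while a farm of a thousand machines brings the job down to days – which is exactly why frames are rendered in parallel.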
With a high-performance computer, you can also play a big part in distributed projects
such as SETI@home and Folding@home. These projects let you use your computer to
analyse raw data for worthy causes; in the case of SETI@home, you’ll be analysing radio
telescope data for possible evidence of extraterrestrial life. The Folding@home project
uses volunteer computing power to conduct simulated experiments that could lead to
treatments for diseases such as Alzheimer’s and Parkinson’s.
You don’t need a supercomputer to participate in these distributed efforts, but by donating
an exceptional quantity of computing power, you can make a significant contribution to
research that could change the world. There’s also the cachet to be gained from working
your way up the leaderboards of the most active contributors: the faster your PC, the
higher you’ll be placed.
Building your own supercomputer
If you fancy getting stuck into tasks such as these, you could buy dedicated hardware from
the likes of HP or Cray, but this is probably overkill, and would certainly be tremendously
expensive. The Cray XK6, for example, can perform more than one petaflop, but system
prices start at around half a million dollars. A cheaper option is to make use of hosted
computing services such as Microsoft Azure or Amazon Web Services. But if you want to
own and control your own hardware, a home-brew approach can provide a usable
measure of supercomputing power at a comparatively realistic price.
What does a homemade supercomputer look like? As we’ve noted, there’s no formal
definition of a supercomputer. One thing that’s likely to characterise your hardware,
however, is parallelisation: historically, parallel processing has been the means by which supercomputers achieve their exceptional levels of performance.
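To make the idea concrete, here’s a minimal parallel-processing sketch using Python’s standard multiprocessing module. The heavy_task function and the job sizes are invented stand-ins for real work, but the pattern – splitting a job into pieces and handing one piece to each core – is the same one supercomputers apply on a vastly larger scale.

```python
import os
import time
from multiprocessing import Pool

def heavy_task(n):
    # Stand-in for a real unit of work: a simulation step, a frame, a chunk of data.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 16
    print(f"This machine reports {os.cpu_count()} logical cores")

    start = time.perf_counter()
    serial = [heavy_task(n) for n in jobs]
    print(f"One core at a time: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:            # defaults to one worker process per core
        parallel = pool.map(heavy_task, jobs)
    print(f"All cores at once:  {time.perf_counter() - start:.2f}s")

    assert serial == parallel       # same answers, just computed faster
```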
Almost every modern CPU on the market has two or more physical cores built directly into the chip package, so arguably you could install a mainstream CPU in a regular
motherboard and call it a supercomputer. Indeed, a modern Core i7 system will deliver
computing power on a similar scale to that of a real supercomputer from 20 years ago,
such as the Intel Paragon, which cost a million dollars and filled half a room.
However, the term supercomputer implies something beyond the norm, and these days,
an eight-core system is comparatively run-of-the-mill. A 16-core system might qualify. A
48-core system? Now we’re getting somewhere.
How do you go about assembling a system like this? One option is to invest in a
motherboard that supports multiple processors. Another is to combine many computers
into a cluster that functions as a single supercomputer. Alternatively, you could look
beyond the CPU to dedicated add-on cards that put huge quantities of raw number-crunching power at your system’s disposal. Or you could use the hundreds of stream processors on a
graphics card to the same end. Let’s look at each of these approaches in turn.
Multiple CPUs
Mainstream desktop chips aren’t ordinarily used in multiprocessor configurations, and
you’ll find very little hardware support for doing so. If you want to run multiple CPUs in
parallel, you’re basically limited to workstation or server architectures. On Intel hardware,
this means LGA 2011 chips, most of which come under the Xeon brand. If you prefer
AMD, you can use the still-supported Socket G34 platform, or the newer Socket C32 that
supports the latest Opteron models.
None of this is cheap – the hardware is aimed at businesses, which are typically willing to
pay for heavy-duty hardware. Dual-socket Intel LGA 2011 motherboards start at around $350,
and processors at $300+ each for the Core i7-3820. Move up to the top-of-the-range eight-
core Xeon E5-2690 and you’re looking at much more.
This approach has one major benefit, however: Windows is designed to “just work” in
multiprocessor environments, so any program that can make sensible use of a dual-core
processor should automatically scale up to run in a 16-core environment. This makes a
multiprocessor model appealing if you want to use your supercomputer to run
mainstream multithreaded applications such as 3D-rendering tools or media encoders.
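As a rough illustration of that automatic scaling, the sketch below sizes its pool of worker processes from whatever core count the operating system reports, so the same code spreads itself across two cores or 48 without modification. The encode_chunk function is a made-up stand-in for a CPU-heavy step such as encoding one segment of video.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def encode_chunk(chunk_id):
    # Hypothetical stand-in for a CPU-heavy step, e.g. encoding one video segment.
    total = sum(i % 7 for i in range(3_000_000))
    return chunk_id, total

if __name__ == "__main__":
    workers = os.cpu_count()        # 2 on a laptop, 16 or 48 on a multi-socket box
    chunks = range(64)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(encode_chunk, chunks))
    print(f"Processed {len(results)} chunks using {workers} worker processes")
```

The operating system’s scheduler does the rest, distributing the worker processes across however many physical processors and cores are installed.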
