Originally appeared in HPCwire, July 25, 2003

WILL EARTH SIMULATOR'S GRIP BE SHORT-LIVED?

Jack Dongarra is the timekeeper in a race that has no finish line and no winner's circle -- just leaders whose tenuous hold on first place is determined by trillionths of a second.

As custodian of the industry's "Top 500 List" (see http://www.tgc.com/hpcwire/top500/top500.html) for more than a decade, Dongarra, a professor at the University of Tennessee and a researcher at Oak Ridge National Laboratory, has watched the speed of the world's fastest computers double every 18 months or so.

But Dongarra says he is even more surprised by the rate at which cutting-edge supercomputing has moved into everyday life.

"The laptop computer I'm using today -- an IBM Think Pad with a Pentium III processor -- would have ranked as one of the 500 fastest computers in the world in 1995," he says. "The exponential growth is just staggering."

Most people probably aren't even aware of the ways in which supercomputing touches their daily lives. It helps forecast the weather, design new drugs, search for oil, animate Hollywood's cartoon movies and improve homeland security.

And sometimes, supercomputers even help answer the most trivial questions.

"The 15,000 processors that the Google search engine uses to search the Web would surely rank among the top 500 supercomputers," Dongarra says. "But its system is so busy that they can't stop long enough to run our benchmark test."

Two decades ago, a "gigaOPS" was the Holy Grail of supercomputing -- a then-blazing computational speed of a billion calculations a second, first attained by a Cray-2 supercomputer in 1985. For under $1,000, anyone today can buy a personal computer capable of several gigaOPS.

But if today's desktop computers crunch numbers at prodigious rates, today's supercomputers purée them -- performing calculations so swiftly that speeds must now be measured in "teraOPS," trillions of operations a second.

And they are likely to pass the next major milestone -- a "petaOPS," or a quadrillion operations a second -- before the end of the decade.

For the moment, the apex of Dongarra's closely watched list -- supercomputing's equivalent of the Tour de France's yellow jersey -- is occupied by Japan's Earth Simulator, a $500 million machine that consumes more electricity than most office buildings and churns out more than 35 trillion calculations a second.

Earth Simulator, built by the Japanese government to model global temperatures, predict natural disasters and simulate the entire solid Earth as a system, is housed in a specially cooled four-story building in Yokohama.

With more than 5,000 NEC processors linked by 1,900 miles of cable, it computes faster than the combined efforts of the next four fastest machines, all of which are American. After leading the pack for a decade, U.S. computer makers were shocked when the Japanese leapfrogged into first place last year.

But Dongarra says Earth Simulator's grip on first is likely to be short-lived.

Sandia National Laboratories and Cray Inc. are already hard at work on Red Storm, a "massively parallel" computing machine -- a system that divides big problems into small pieces for simultaneous processing. When it is up and running next year, it is expected to do 40 trillion calculations a second. Its primary use will be simulating nuclear weapons explosions.
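The "divide and conquer" idea behind massively parallel machines can be shown in miniature. The toy sketch below splits one big sum into chunks and hands each chunk to a separate worker process; the problem size and worker count are arbitrary, and real systems like Red Storm coordinate thousands of nodes with message-passing libraries rather than a handful of local processes.

    # Toy illustration of massively parallel processing: split the job into
    # chunks, compute the pieces simultaneously, then combine the results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))   # each worker handles one slice

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # recombine the pieces
        print(total)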

Even bigger and faster machines are in the wings. Under a $216 million contract with the U.S. Department of Energy, IBM is building ASCI Purple, a 100 teraOPS supercomputer that DOE laboratories will use to model the behavior of high explosives, air turbulence and materials properties.

ASCI Purple could be the first supercomputer to approach the power of the human brain. It will have 400,000 times as much memory as the typical desktop computer and disk storage for the equivalent of 1 billion books.

But ASCI Purple will soon be dwarfed by another IBM machine, a supercomputing behemoth called Blue Gene/Lite. With 130,000 linked processors, it is designed to do 367 trillion calculations a second -- fast enough, IBM says, to be used in genetic research to simulate the folding behavior of complex proteins.

The Lite designation provides a hint of where IBM thinks it is headed. Later generations of the Blue Gene supercomputer will be built with an eye on breaking the quadrillion-calculations-a-second barrier.

For companies and countries, supercomputing prowess carries undeniable bragging rights. India proudly proclaimed this year that its Ultimate Lotus supercomputer in Bangalore made it the latest country to join the "teraOPS club," which includes the United States, Japan, France, Germany, Great Britain, Switzerland, China and South Korea.

As supercomputer speeds have increased, the uses of such massive amounts of raw computing power have grown more diverse. Supercomputers now model the structure of the early universe and help predict the consequences of a terrorist attack.

High-performance computers have become the virtual test tubes of the 21st century. Data, in bits and bytes, and mathematical formulas serve as proxies for everything from lab mice to ocean currents.

With enough accurate data, scientists can model processes that are too big, such as global climate, or too dangerous, such as wildfires, to experiment with in the real world.

Supercomputers run the models for disease outbreaks, urban smog, thunderstorms, ozone holes, El Niños, melting icecaps and exploding stars. They help map the human genome, search for extraterrestrial intelligence, eavesdrop on potential terrorists and shape the flow of urban traffic.

"Every time you turn on your television set and watch the weather forecast, you should realize that a supercomputer is behind the scene helping to make the prediction," says Dongarra. "It's not always accurate, but it's getting better all the time."

Just last month, the National Weather Service dedicated its newest supercomputer system -- Frost and Snow. Running at a relatively sedate 450 billion calculations a second, it will help produce severe weather outlooks three days in advance, hurricane forecasts and warnings out to five days, and severe winter storm forecasts up to seven days in advance.

Saudi Aramco, the Saudi national oil company, last month installed a new supercomputer -- a cluster of more than 1,800 Pentium III processors -- to process the growing flood of seismic data from its oil and gas explorations. Companies now using at least one of the fastest 500 supercomputers in the world include Johnson & Johnson, Charles Schwab, Sprint, State Farm, Nike, Nestlé and Avon.

The most significant supercomputing advances, however, have been spurred by the government. The nuclear test ban treaty forced the United States to find a way to assure the integrity of its nuclear weapons without testing them.

In 1995, the Department of Energy launched the Accelerated Strategic Computing Initiative, or ASCI, program, whose goal was to evaluate and maintain the aging nuclear weapons in the U.S. arsenal.

Last year, scientists at the Lawrence Livermore and Los Alamos national laboratories completed the first detailed three-dimensional simulation of the first second of a nuclear explosion. ASCI White took more than four months of around-the-clock computing to complete the work. On a high-end PC it would have taken 750 years.
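A quick back-of-the-envelope check shows the speedup implied by that comparison; the figures below come straight from the article, and the arithmetic is only a rough consistency check, not an official performance number.

    # Implied speedup of ASCI White over a high-end PC on this simulation.
    pc_years = 750
    supercomputer_months = 4
    speedup = pc_years * 12 / supercomputer_months
    print(f"Implied speedup: roughly {speedup:,.0f}x")   # about 2,250x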

"These are big, big simulations," says Stephen Lee, deputy director of Los Alamos' computer-science division. "But when you compare them to all of the variables that you have to run in a high-fidelity model of something like the world's oceans, there's clearly a growing need."

As raw processing power grows, he says, engineers must also design connections with greater capacity, new ways of "mining" data from vast amounts of storage, and software that can put all that power to use more efficiently.

With machines capable of a quadrillion operations a second on the horizon, is the end of the race in sight? Lee doesn't think so.

It may, in fact, be time to start getting acquainted with the lexicon of future supercomputing achievement -- the exaOPS (quintillions of operations a second), the zettaOPS (sextillions of operations) and the yottaOPS (septillions).
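For reference, each of those names corresponds to a power of ten; the short listing below simply spells out the scale of the article's units, from today's desktop gigaOPS to the speculative yottaOPS.

    # The metric prefixes behind the names, as operations per second.
    prefixes = {
        "gigaOPS":  1e9,    # billions
        "teraOPS":  1e12,   # trillions
        "petaOPS":  1e15,   # quadrillions
        "exaOPS":   1e18,   # quintillions
        "zettaOPS": 1e21,   # sextillions
        "yottaOPS": 1e24,   # septillions
    }
    for name, rate in prefixes.items():
        print(f"{name:>9}: {rate:.0e} operations per second")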

"This is like a perpetual-motion machine," he says. "Supercomputing is driven by need. And what we feel we need is limited only by the human imagination."

Copyright 2003, HPCwire. All Rights Reserved.
Mirrored with permission.
