Computational Science and Advanced Computing

Improving Access to the Supercomputer
Revolutionizing Science by Computer
Detecting Proteins by Computer
Helping Industry Crunch Numbers
Computer Library
Fast Machines on Fast Networks
Virtual Labs
High-Performance Storage System
GMR Research
Mysteries of Magnetism
How Solids Melt
Evaluating "Nanomachines"


Buddy Bland checks the operation of the Intel Paragon XP/S 150, a massively parallel computer at ORNL. Photograph by Lynn Freeny.

Computational science adds a new dimension to the more traditional experimental and theoretical approaches to scientific research. The use of computational tools has become vital to most fields of science and engineering and to many parts of the educational enterprise.

High-speed, large-scale computation has become the primary technology enabling advanced research in many areas of science and engineering. For this reason, blue-ribbon panels studying U.S. technological competitiveness have emphasized the critical importance of furthering our nation's traditional lead in high-performance computing technology.

Indeed, in many applications of interest to DOE and ORNL, computer simulations are the only feasible method of scientific investigation. Conventional methods would require prohibitively expensive experimental facilities and decades of effort. Leadership in this area requires the capacity to integrate advanced mathematical and computational techniques, data management and analysis methods, software tools, communications technologies, and high-performance computing systems.

At ORNL, we focus our strengths on scientific Grand Challenges and other highly complex computing issues. How? We integrate expertise in basic and applied research with the outstanding high-performance computing systems and infrastructure of our Center for Computational Sciences. Our strengths range from the ability to develop realistic mathematical models of complex phenomena and scalable algorithms for their solution to the availability of massively parallel processors and storage systems accessed by high-performance computing environments.

ORNL has long been a leader in computational plasma physics and materials science, nuclear physics and transport calculations, matrix computations, geographic information systems, and environmental information management. More recently, the Laboratory has become a leader in algorithms for parallel computers, informatics with emphasis on biosciences, global climate simulations, groundwater contaminant transport, distributed computing tools and interfaces, high-performance parallel computers, and data storage systems.

As a result of this leadership, we're working closely with major universities and with both computer and applications industries to conduct collaborative research and to commercialize technology. Automobile manufacturers, for example, are sponsoring computerized car-crash simulations at ORNL. And oil and aerospace companies are relying on ORNL's parallel virtual machine software to solve some of their most complex problems by linking heterogeneous computers into high-performance, high-speed unified systems.

Speeding Up Access to the Paragon Supercomputer

Researchers seeking elusive answers to some very complex problems are now succeeding more quickly in Oak Ridge. A new software system developed for ORNL's Intel Paragon XP/S 150, one of the world's most powerful supercomputers, makes it possible for people doing more modest computing tasks and code development to work simultaneously with those performing huge production computing jobs.

Called overlapping partitions, the software system was designed by staff at ORNL's Center for Computational Sciences. The system was installed and demonstrated by Intel in January 1995 as the final milestone of a $16-million contract between ORNL and Intel Corporation. The new system allows users to simultaneously share the Paragon's 3072 parallel processors efficiently, automatically, and flexibly. Working together, these processors can perform 150 billion calculations per second.

Typically, researchers from around the nation use ORNL's supercomputer during the day in a "hands-on" fashion to work on codes and to perform a variety of computations from relatively simple to elaborate. The Paragon performs these computations in seconds, minutes, or hours, depending on their complexity. Meanwhile, the computer also runs huge "production jobs," which require days, weeks, or months of computing time. These jobs include global climate modeling, studies of how solids melt, and simulations of magnetic alloys and nano-scale machines at the atomic level.

Our new software system enables our supercomputer
to handle big and small jobs simultaneously.

The new software allows quick access to the Paragon for these "hands-on" jobs without the need to stop longer-running production jobs. The overlapping partitions system frees the Paragon to perform calculations researchers need quickly during the day while simultaneously running the ongoing production jobs at various levels of capacity 24 hours a day, depending on priority and processor availability. This capability makes more efficient use of the computer's processors, allows more researchers to get computed results faster, and satisfies more customers. We can't always provide results as fast as the scientists wish, but we have taken an important step in the right direction.
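
The idea can be pictured with a toy scheduler: a large production job holds most of the machine while smaller interactive jobs start on whatever nodes remain free. The Python sketch below is purely illustrative and is not the actual overlapping-partitions software; its job names, node counts, and policy are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Job:
        name: str
        nodes_needed: int
        priority: int                 # a real scheduler would use this to decide
                                      # who waits when nodes are contended

    @dataclass
    class Machine:
        total_nodes: int
        running: list = field(default_factory=list)   # (job, nodes) pairs

        def free_nodes(self):
            return self.total_nodes - sum(n for _, n in self.running)

        def submit(self, job):
            # An interactive job starts at once if idle nodes exist; it never
            # has to stop the long production job holding the rest of the machine.
            if job.nodes_needed <= self.free_nodes():
                self.running.append((job, job.nodes_needed))
                return f"{job.name}: started on {job.nodes_needed} nodes"
            return f"{job.name}: queued until {job.nodes_needed} nodes drain"

    paragon = Machine(total_nodes=3072)
    print(paragon.submit(Job("climate-production", 2048, priority=1)))
    print(paragon.submit(Job("code-development", 64, priority=9)))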

Funding for the Center for Computational Sciences (CCS) at ORNL is provided by DOE's Mathematical, Information and Computational Sciences Division and the Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Program for global change. Extensive information about the CCS, the Paragons, and overlapping partitions can be found on the World Wide Web at http://www.ccs.ornl.gov.

Revolutionizing Science by Computer

You're a scientist and you want to simulate the synthesis of a new material on parallel computers rather than do time-consuming trial-and-error experiments in the lab. So you enter parameters such as how to mix the ingredients, when and how much to heat them and at what pressures, and when and how much to cool the molten material to strengthen it. Then you go home and rest while your simulation runs overnight. When you return the next morning, you're disappointed in the calculated results. You spend the next few days consulting with your collaborators and trying to figure out what went wrong.

Our CUMULVS software allows scientists to view and influence a simulated experiment in progress.

Image of a shock wave reflecting off an underground rock layer.

Things might have turned out differently if the same simulation had been augmented with the new ORNL-developed CUMULVS software system, named after cumulus clouds because collaborative environments and distributed computing are often depicted as a cloud. This system could revolutionize the way science is done by computers. CUMULVS was built to enhance collaboration and efficient simulation by allowing multiple scientists (possibly in remote locations) to view and influence the same simulation as it progresses. Each scientist can view the same or different aspects of the simulated process while it runs (interactive visualization) and can remotely change parameters to "steer" the process toward a desired result. For example, you might "see" that delaying the cooling of the molten material is weakening it, so you decide to interactively change the simulation to cool the material more quickly and see if it gets stronger.
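
The flavor of this "computational steering" can be shown with a toy simulation loop that periodically publishes data to viewers and polls for parameter changes. This is a conceptual sketch only; the function and parameter names are invented and are not the CUMULVS interface.

    import random

    params = {"cooling_rate": 0.5}          # a value a remote viewer may change
    temperature = 1000.0

    def poll_steering_requests():
        # A real system would receive parameter changes sent by remote viewers
        # over the network; here we simply fake one adjustment now and then.
        if random.random() < 0.05:
            params["cooling_rate"] = 0.8    # a scientist "steers" the run

    def publish_snapshot(step, temp):
        # A viewer attached to the running job would render this as it arrives.
        print(f"step {step:4d}  T = {temp:8.2f}  cooling_rate = {params['cooling_rate']}")

    for step in range(100):
        temperature -= params["cooling_rate"]      # stand-in for real physics
        poll_steering_requests()
        if step % 20 == 0:
            publish_snapshot(step, temperature)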

CUMULVS also gives programmers the ability to protect their simulations from computer crashes, a critical capability when the simulation is executing in parallel across many computers. If requested, CUMULVS will save essential data (checkpointing) and automatically restart a failed component on a new computer (task migration).
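
Checkpointing itself is simple to picture: periodically write out the state needed to resume, and on startup read it back if it exists. The sketch below shows the bare idea for a single process; CUMULVS does the equivalent for data spread across many parallel tasks, and the file name and state variables here are invented.

    import json, os

    CHECKPOINT = "sim_state.json"

    def save_checkpoint(step, state):
        with open(CHECKPOINT, "w") as f:
            json.dump({"step": step, "state": state}, f)

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):                # resuming after a crash
            with open(CHECKPOINT) as f:
                data = json.load(f)
            return data["step"], data["state"]
        return 0, {"temperature": 1000.0}             # fresh start

    step, state = load_checkpoint()
    while step < 1000:
        state["temperature"] -= 0.5                   # stand-in for real physics
        step += 1
        if step % 100 == 0:
            save_checkpoint(step, state)              # limits lost work to 100 steps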

Two large parallel applications have already incorporated the power of CUMULVS. An acoustic wave propagation simulation, which is of interest to oil exploration firms doing seismic analysis, models the transmission of sound waves traveling through and reflected by underground rocks of different densities. The second application aims at improving magnetic recording properties of nickel-copper alloys by performing an atomistic simulation of candidate mixtures using only first principles of quantum physics.

At the Supercomputing '95 conference, where it was demonstrated in these two applications, CUMULVS won the award for best interface and fault tolerance in the High Performance Computing Challenge. The CUMULVS program is also likely to win over many scientists.

The development of CUMULVS was supported by DOE's Mathematical, Information, and Computational Sciences Division and by DOE's Office of Energy Research, Basic Energy Sciences.

Computer Procedure Forgives Errors, Detects Proteins

Genetic information that determines who we are, what we look like, and how we function is encoded in the sequence of four different chemical bases forming the steps of the twisted ladder of deoxyribonucleic acid (DNA). The traditional procedure for DNA sequencing for the Human Genome Project—gel electrophoresis—is highly efficient, but its error rate can be high. Errors in the sequence introduce bases that are not really there or skip bases that are.

These errors make it difficult to identify proteins, which are made of various combinations of amino acids. Each of the 20 amino acids is coded for by a particular group of three bases. So, a sugar-digesting enzyme that contains 300 amino acids is a product of 900 bases. The base sequence, or nucleic acid alphabet, spells out the protein's amino-acid sequence, or protein sequence.
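
The bookkeeping is simple enough to check in a couple of lines of Python; the enzyme size is the example used above.

    BASES_PER_CODON = 3                   # one amino acid per triplet of bases
    amino_acids = 300                     # the sugar-digesting enzyme above
    print(amino_acids * BASES_PER_CODON)  # 900 bases of coding sequence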

An ORNL algorithm accurately identifies proteins
from erroneous DNA sequences.

If a base is erroneously missing in the sequence, the groups of three bases coding for each amino acid will fall out of sequence—a frame shift. Multiple frame shifts could prevent identification of a protein by its sequence. Repeating the sequencing 10 times allows accurate identification, but it's expensive.

To cut costs, ORNL has developed a frame-shift-tolerant protein sequence comparison algorithm that accurately detects proteins from a one-time DNA sequence. This step-by-step computerized procedure compares the experimental sequence with sequences in a database, considers all possibilities for errors, and finds the best match. In this way, proteins can be identified from corrupted sequences and errors can be determined.
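
A toy example shows why a single missing base is so damaging and why scoring shifted reading frames helps. The sequences and the codons function below are invented for illustration; the real algorithm is a full dynamic-programming comparison against a protein database, not this simple demonstration.

    def codons(seq, frame):
        # Split a DNA sequence into triplets, starting at the given frame (0-2).
        s = seq[frame:]
        return [s[i:i+3] for i in range(0, len(s) - 2, 3)]

    true_seq = "ATGGCTTACGGA"     # hypothetical correct sequence: ATG GCT TAC GGA
    bad_seq  = "ATGGTTACGGA"      # the same sequence with one base lost

    print(codons(true_seq, 0))    # ['ATG', 'GCT', 'TAC', 'GGA']
    print(codons(bad_seq, 0))     # codons downstream of the error are scrambled
    # A frame-shift-tolerant comparison also scores the shifted frames, where the
    # downstream codons reappear intact:
    print(codons(bad_seq, 2))     # ['GGT', 'TAC', 'GGA']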

The algorithm is part of the recently released Version 1.3 of the Gene Recognition and Analysis Internet Link (GRAIL) system, which is used by more than 1000 biomedical laboratories and biotechnology firms. The ORNL-developed GRAIL, which uses statistical analysis to separate meaningful words from genetic gibberish, finds genes in sequences through pattern recognition and through database comparisons for which the new algorithm is used. Recently, GRAIL pinpointed a gene responsible for a genetic disorder that can lead to paralysis, muteness, and death in boys—a theme of the movie Lorenzo's Oil.

Funding for this research has come from DOE's Office of Health and Environmental Research.

Crunching Numbers To Solve Difficult Industrial Problems

For some industrial firms, a new or improved product or process is not possible without solving complex problems. Sometimes these solutions can be obtained only by writing computer codes that run on parallel computers built from many nodes. To enter the esoteric world of codes, nodes, and other aspects of high-performance computing, industrial firms often require the help of appropriate computer experts.

Providing support and assistance to U.S. industry to smooth its path into high-performance computing is a prime expectation of ORNL's CCS. To help meet this goal, the CCS launched the Computational Center for Industrial Innovation (CCII). This DOE national user facility, established in August 1994, hosts ORNL-industry collaborations in projects featuring high-performance computing. Thanks to our computational capabilities, CCII users are solving challenging, industrially relevant problems that previously eluded solution because of insufficient computational power or the lack of suitable software.

A number of user agreements have been signed with a variety of businesses, software vendors, and other federal agencies. Consider these two examples of collaborations that illustrate CCII's impact.

The Intel Paragon is being used by industrial researchers
to model advanced military aircraft and
aluminum production processes.


Image of an advanced short takeoff and vertical landing fighter aircraft simulated by Lockheed Martin Skunk Works' scientists using computers available at the Computational Center for Industrial Innovation at ORNL. The yellow beads are particle traces of the exhaust from thruster jets during a simulated takeoff.

Advanced military aircraft are being designed to take off and land quickly without the need for long runways. To explore aerodynamic properties of generic "advanced short takeoff and vertical landing" fighter aircraft, Lockheed Martin Skunk Works' scientists are using CCII facilities. Large, complex, three-dimensional models of this type of aircraft are simulated using sophisticated computational fluid dynamics codes. Shortened takeoff distances and vertical landings for these advanced fighters are made possible by using small jet outlets under the aircraft's fuselage and wings to provide a large vertical thrust. Investigating a range of aircraft options using conventional experimental techniques is difficult and expensive. By using the high-performance computational facilities of CCS, Skunk Works' scientists can rapidly and accurately simulate many aircraft systems and flight strategies while reducing the number of costly physical experiments that must be performed.

Reynolds Metals' scientists are using CCII facilities to model industrial magnetohydrodynamic processes in which a magnetic field interacts with a conducting fluid. These processes are widely used in the aluminum industry for stirring, confinement, and control of liquid metal before and during casting operations. In addition, after the aluminum solidifies, inductive heating devices are frequently used both in the rolling of the aluminum ingots into strips and in the final heat treatment of the strips. Accurate modeling of these processes is important both for control of the existing manufacturing processes and design of future enabling technologies. This modeling, however, is computationally intensive because of the strong coupling among the various physical phenomena—heat transfer, electromagnetism, and fluid flow. Differences in magnitude between the size of the processes (typically meters) and the scale of change of the parameters that must be modeled (often millimeters) further complicate the calculations. Preliminary modeling of these complex industrial processes has been achieved by using the powerful Intel Paragon computers in the CCS.
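
A back-of-the-envelope estimate shows how quickly that disparity of scales inflates the problem. The numbers below are illustrative assumptions (a uniform three-dimensional grid), not figures from the Reynolds Metals model.

    domain_size  = 1.0        # meters: rough size of the casting process
    feature_size = 1e-3       # meters: scale on which the fields change
    cells_per_axis = domain_size / feature_size
    print(f"{cells_per_axis**3:.0e} grid cells")   # about 1e9 cells for a uniform mesh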

Other projects under way at CCII involve automobile safety, materials processing and design, engineering design, nuclear reactor modeling, and manufacturing strategies. Additional companies are joining the center and many others, upon learning of CCII capabilities and accomplishments, are considering membership. By calling upon the capabilities and systems provided by the CCS to help open doors to industry, CCII is meeting an important U.S. need.

Computer Library Puts ORNL in Record Books

Imagine a special think tank in which hundreds of mathematical geniuses rapidly perform calculations all day long. Each mathematician has a brilliant personal assistant and a unique collection of notebooks, textbooks, and reference books in a cubicle. All the books make up the shared, or distributed, memory of the think tank. All personal assistants know the locations of all books in the think tank and the information in them. So, when Dr. Smith asks his personal assistant for special data, the assistant fetches the information by "borrowing" books from Dr. Jones and Dr. Miller and copying the appropriate pages for Dr. Smith.

At ORNL's CCS, the hundreds of parallel processors that make up the Intel Paragon XP/S 150 are like the think tank's mathematicians, except they work all night, too, and together can perform 150 billion calculations per second. Also, each processor has access to a clone of one brilliant assistant, whose job is to retrieve needed data from the shared memory; this shared assistant is the ORNL-developed Distributed Object Library (DOLIB). Such a collection of programs and routines available on each processor has enabled the Paragon to break a record.
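
The library's role can be pictured with a toy "global array" whose pieces live on different processors; any processor can gather an arbitrary slice without knowing where the pieces are stored. Everything below (class name, methods, block distribution) is invented for illustration and is not the DOLIB interface.

    class DistributedArray:
        def __init__(self, data, nprocs):
            # Block distribution: each "processor" owns one contiguous slice.
            self.nprocs = nprocs
            self.chunk = (len(data) + nprocs - 1) // nprocs
            self.local = [data[p*self.chunk:(p+1)*self.chunk] for p in range(nprocs)]

        def owner(self, index):
            return index // self.chunk      # which processor holds this element

        def gather(self, start, stop):
            # Like Dr. Smith's assistant: fetch copies of the needed "pages"
            # from whoever owns them and hand them back as one local array.
            return [self.local[self.owner(i)][i % self.chunk]
                    for i in range(start, stop)]

    ga = DistributedArray(list(range(1000)), nprocs=8)
    print(ga.gather(120, 130))     # spans data owned by two different processors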

A growing number of university and national laboratory researchers are interested in computer modeling of molecular dynamics. Their goal: to discover the physical behavior and properties of a system of molecules in motion in response to various forces. Until very recently, the largest molecular dynamics simulation ever undertaken involved six hundred million atoms. Using DOLIB, a team of researchers from ORNL and the state of New York shattered that record by simulating a system of 1 billion atoms of argon, calculating the forces involved as the gas atoms naturally approach and repel each other in a box.

Our DOLIB programs have enabled our supercomputer to break a record in molecular dynamics modeling.

DOLIB has also helped researchers using the Paragon predict the flow and fate of contaminant particles in groundwater, as well as changes in water balance in future climates under global warming scenarios. For these two problems, processors needed access to the shared memory, which contains all available information on horizontal flow (advection) of particles in groundwater or moisture in air.

The DOLIB team also surmounted another obstacle: the unacceptably long times required to receive data from the mass storage disk and get calculated results from the computer during a computation. To solve this input/output (I/O) problem, the team developed the Distributed Object Network I/O Library (DONIO), which makes use of DOLIB. DOLIB finds the processors that have the needed data in memory, and DONIO makes copies at high speed of the portion of the mass storage disk holding the data (as much as 100 gigabytes).
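
The caching idea can be sketched in a few lines: read the needed region of the slow disk once into memory, let the computation read and write that copy at memory speed, and flush it back once at the end. The class below is a conceptual stand-in, not the DONIO interface; in the real library the cached copy is spread across the memories of many processors.

    class CachedFile:
        def __init__(self, path, size):
            self.path = path
            self.cache = bytearray(size)         # stands in for distributed memory
            try:
                with open(path, "rb") as f:      # one bulk read from slow storage
                    data = f.read(size)
                    self.cache[:len(data)] = data
            except FileNotFoundError:
                pass                             # file does not exist yet

        def read(self, offset, n):
            return bytes(self.cache[offset:offset + n])     # memory-speed access

        def write(self, offset, data):
            self.cache[offset:offset + len(data)] = data    # memory-speed access

        def close(self):
            with open(self.path, "wb") as f:     # one bulk write back to storage
                f.write(self.cache)

    f = CachedFile("wavefield.dat", size=1 << 20)
    f.write(0, b"seismic trace header")
    print(f.read(0, 7))
    f.close()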

In an acoustic-wave propagation application of interest to the oil industry, DONIO reduced I/O time from more than 9000 seconds to 273 seconds (a reduction of 97%) and in a molecular dynamics application, from 1200 seconds to 50 seconds (a reduction of 96%). Now, the time it takes to move the data in and the results out is almost as short as the brief time required to do the actual calculations. When it comes to minding the storage, ORNL soon may be breaking other records.

Computing Future: Fast Machines on Fast Networks

The building blocks of a lightning-fast parallel supercomputer are small computers linked together that simultaneously solve pieces of a complex problem. So most computing experts agree that the next logical step in high-performance computing is to link two of these fast parallel supercomputers together. But what if two of the world's fastest computers are separated by hundreds of miles? Then the challenge becomes finding a way to connect them through a very-high-speed network to create one of the world's largest virtual computers.

By linking ORNL and Sandia supercomputers
via a high-speed network, a virtual
computer will be created to solve
nearly impossible problems.

Scientists from ORNL's CCS and colleagues from Sandia National Laboratories (SNL) are preparing to demonstrate the effectiveness of such an arrangement. These two laboratories possess a striking level of computing power, making such a demonstration noteworthy.

The basic idea is to link the two largest Intel Paragons (an 1840-processor XP/S 140 at SNL and a 2048-processor XP/S 150 in the CCS) over a high-speed Asynchronous Transfer Mode (ATM) network to solve problems too large for either machine alone: extraordinarily formidable problems relevant to both ORNL and SNL. By extending computational parallelism into the network and surmounting technical hurdles on the path to virtual high-speed computing, the researchers will advance the technology.

To use this distributed computing power effectively requires codes developed with the ORNL-SNL communication time in mind. One such code is a materials science code written by ORNL researchers to model the magnetic structure of complex magnetic alloys. Another is a global change code that couples the atmosphere (code prepared at ORNL) and the ocean (code prepared at Los Alamos National Laboratory) to provide a superior climate simulation over extended times. Additional codes that address the safety of defense weapons are being prepared.
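
Why the codes must be written with the ORNL-SNL communication time in mind can be seen from a simple latency-plus-bandwidth cost model. The numbers below are placeholders, not measured ATM or ESNet figures.

    latency_s     = 0.030       # assumed one-way latency over the wide-area link
    bandwidth_Bps = 10e6        # assumed sustained wide-area bandwidth, bytes/s

    def transfer_time(message_bytes):
        return latency_s + message_bytes / bandwidth_Bps

    # Many small exchanges pay the latency over and over; one bulk exchange doesn't.
    print(f"{1000 * transfer_time(1_000):.1f} s for 1000 small messages")
    print(f"{transfer_time(1_000_000):.2f} s for one aggregated message")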

To enable the two Paragon computers to work together, ATM boards were specially designed and built for them by a small company, GigaNet, with support from SNL. The excellence of these boards was demonstrated through high-performance links connecting the SNL, ORNL, and Intel booths at the Supercomputing '95 exhibition in San Diego.

Our work is focused on ensuring compatibility among codes, operating systems, ATM, and parallel virtual machine software and addressing performance issues so that scientific goals can be met. Significant questions concerning network connections and network availability are being addressed. The support of the Energy Sciences Network (ESNet), DOE's branch of the Internet, continues to be important. Given the significance of scientific problems that can be solved only by high-performance systems linked by a high-speed network, anticipation for a functioning virtual supercomputer is high.

Virtual Labs Soon Will Be Reality

Can scientists scattered all over the United States do collaborative research at ORNL using neutron scattering? ORNL hopes to show soon that an experiment can be performed at one of its user facilities from a distance.

Researchers working throughout the United States might do better science if they collaborated with their colleagues rather than competed against them. More effective collaborations could eliminate duplication of effort while improving the quality of research results and technology developments.

The high cost of travel and organizational roadblocks often prevent many potentially valuable collaborations. However, these barriers can be eliminated through use of computing and communication technologies that permit remote operation of research equipment, bringing user facilities to users "over the network." Such virtual laboratories, or "collaboratories" that transcend physical distance and organizational structure, are one goal of the DOE 2000 program.

Emerging from the DOE Offices of Defense Programs and Energy Research, the program's goal is to provide tools and systems, software and hardware, to make possible highly effective collaborative research projects involving geographically dispersed scientists and facilities. What are the benefits? Reduced travel costs. Improved quality and efficiency of research. A new opportunity to exploit DOE strengths in high-performance computing and communication. A way to maximize collective use of DOE national user facilities and other resources by appropriate researchers at each site and from far away.

Scientists elsewhere may soon remotely operate
research equipment at ORNL facilities.

Mona Yethiraj, an ORNL scientist, checks a small-angle neutron scattering spectrometer that will be operated remotely for neutron science experiments at ORNL's High Flux Isotope Reactor.

ORNL and other DOE national laboratories are implementing the DOE 2000 program by establishing a remotely accessible environment through video links, cameras, interactive laboratory notebooks, and software to control instruments, adjust samples, and view and manipulate data from home pages of the Internet's World Wide Web. Two ORNL user facility instruments have been put on line for remote operation. They are the HF 2000 cold field-emission transmission electron microscope at the High Temperature Materials Laboratory and a small-angle neutron scattering spectrometer at the High Flux Isotope Reactor (HFIR). By late 1996 we hope to have widely scattered non-ORNL users examining the structure of specimens and doing neutron scattering experiments over the network. We hope they find collaborative research from a distance as pleasant, effective, and collegial as that on site.

High-Performance Storage System for Fast Computers

As computers become faster at a remarkable rate, there is a corresponding surge in the amount and availability of electronic information. For the CCS, the quantity of digital information that must be retained, properly characterized and catalogued, and readily accessed—all with absolute accuracy—is already enormous. And the rate of growth is phenomenal.

To deal properly with the onslaught of information, the CCS in collaboration with the DOE Atmospheric Radiation Measurement Project has assembled a storage environment with a capacity of about 100 terabytes (tera means trillion, or a million million). The software that coordinates and structures the data within this hierarchical disk-and-tape storage system is currently NSL-Unitree. However, this serial software will be far too slow in the near future.

Recognizing the impending demands on storage software some years ago, a consortium from ORNL, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, and IBM Government Systems designed and initiated the development of the High-Performance Storage System (HPSS). Its purpose is to address the storage access and management needs of very large, high-performance computing and data management environments. It is designed to move large data files between storage devices and parallel or clustered computers faster, more efficiently, and more reliably than today's commercial storage system software products. To accomplish this goal, HPSS uses a network-centered architecture and parallel transfers with target data rates reaching many gigabytes per second.
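
The payoff of parallel transfers comes from striping a large file across several devices or data movers so the pieces travel concurrently. The sketch below conveys only that idea; it uses threads writing ordinary files and has nothing to do with the actual HPSS software.

    from concurrent.futures import ThreadPoolExecutor

    def write_stripe(device_id, chunk):
        # Each "device" gets its own file; real movers would be tape or disk servers.
        with open(f"stripe_{device_id}.dat", "wb") as f:
            f.write(chunk)
        return len(chunk)

    def parallel_store(data, n_devices=4):
        stripe = (len(data) + n_devices - 1) // n_devices
        chunks = [data[i*stripe:(i+1)*stripe] for i in range(n_devices)]
        with ThreadPoolExecutor(max_workers=n_devices) as pool:
            written = list(pool.map(write_stripe, range(n_devices), chunks))
        return sum(written)

    print(parallel_store(b"x" * 10_000_000), "bytes stored across 4 stripes")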

ORNL has helped develop an advanced system to
rapidly retain and access digital information.

The ORNL-CCS team working on HPSS was responsible for developing strategies and systems for the entire storage environment. Initial tests of HPSS within the CCS system will begin soon. The capabilities of HPSS for parallel transfers will be of particular value in the CCS because of the 14 tape drives in our storage environment.

A number of major computational centers are implementing HPSS; all of the development laboratories have plans to do so. Sandia National Laboratories is using HPSS for storage with its Intel Paragon. We anticipate that HPSS will become the de facto storage system software standard for high-performance computers.

The work is supported by DOE, Defense Programs Office, Accelerated Strategic Computing Initiative.

DOE Award for GMR Research

Using parallel computers in ORNL's CCS, scientists better understand a physical effect that already has magnetic appeal: it allows more data to be packed on computer disks. These insights into the giant magnetoresistance (GMR) effect have earned three ORNL researchers the prestigious DOE-Basic Energy Sciences Division of Materials Sciences Award for Outstanding Scientific Accomplishment in Metallurgy and Ceramics for 1995.

Discovered in France in 1988, GMR is a large change in a magnetic material's resistance to the flow of electrical current, caused by an applied magnetic field. It was found then that electrical resistivity of a layered iron-chromium film was lowered when the material was placed in a magnetic field. This effect allows GMR "read sensors" to read data crammed into high-density disks as tiny regions of magnetization.

We're working with IBM on use of the giant
magnetoresistance effect to make
higher-density disk drives.

ORNL's Bill Butler (front), Xiaoguang Zhang, and Don Nicholson received the DOE-Basic Energy Sciences Division of Materials Sciences Award for Outstanding Scientific Accomplishment in Metallurgy and Ceramics for 1995. Much of their work was done using ORNL's Kendall Square Research parallel computer (in the background). Photograph by John Smith.

By working at the atomic level, performing first-principle calculations of variations of electrical resistivity in metal alloys, we hope to improve magnetic storage systems. We are now working with IBM on the use of GMR to make higher-density disk drives.

In our computer modeling of the GMR effect in materials that have structures "layered" at the atomic scale, we found that electrical resistivity drops when the directions of two layers' magnetisms are aligned by an external magnetic field. We used parallel computers to calculate conductivity (the inverse of resistivity) and to calculate the magnetic field strengths required to align fields in magnetic layers in systems in which copper layers are embedded in cobalt.
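
One common way to put a number on the effect is the magnetoresistance ratio, the relative change in resistance between the antiparallel and field-aligned configurations. The resistance values below are placeholders for illustration, not calculated or measured data.

    # GMR ratio: (R_antiparallel - R_parallel) / R_parallel
    R_parallel     = 1.0     # layer magnetisms aligned by the applied field
    R_antiparallel = 1.6     # alignment destroyed when the field is removed
    print(f"GMR = {(R_antiparallel - R_parallel) / R_parallel:.0%}")   # 60% here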

Our calculations showed a waveguide effect for electrons in the cobalt-copper system. Like light waves moving a long distance in an optical fiber for telecommunications, some electrons travel far in copper without being scattered because they are trapped in the copper, which has lower resistivity than cobalt.

Comparison of our calculations with IBM's measurements implied that, if smoother interfaces could be made between alternating layers of cobalt and copper, exciting developments could follow. Examples might be computer operating memory that is immune to power disruptions and ionizing radiation and GMR motion sensors that increase the efficiency and safety of our home appliances, automobiles, and factories. Future magnetoelectronic devices could be attractive.

The research was sponsored by DOE's Office of Defense Programs, Technology Management Group, through a cooperative research and development agreement with IBM and DOE's Office of Energy Research, Basic Energy Sciences.

Computer Code Probes Mysteries of Magnetism

About 25 years ago, when Malcolm Stocks began working at ORNL as a postdoctoral scientist, he was puzzled by some new neutron scattering results obtained by Joe Cable (now retired).

The results offered fresh insights into the magnetism of a copper-nickel alloy—a simple, well-studied metallic alloy made up of copper and nickel atoms almost randomly distributed on an underlying crystal (face-centered cubic) lattice. Nickel is a ferromagnetic element, and copper is nonmagnetic. If copper is added to nickel, the alloy becomes less magnetic, losing its magnetism when it's 50% copper.

Over the next several years Stocks and others developed sophisticated methods for calculating electronic properties of disordered alloys. Although these methods were successful, the calculations based on them described the magnetism of copper-nickel alloys only in terms of an "average" magnetism (magnetic moment) associated with nickel atoms, which decreased as copper was added to the alloy. But results from Cable's experiments at ORNL's HFIR suggested that nickel sites throughout the alloy sample had magnetisms ranging from high to low, depending on their environment. Cable's findings indicated that a nickel site's magnetism is much higher if it has nickel atoms nearby and much lower if it has copper neighbors.

Calculations by a new supercomputer code confirm neutron data on magnetism in a copper-nickel alloy.

Magnetisms of individual atoms in a copper-nickel alloy are calculated using a new computational method and ORNL's massively parallel Intel Paragon supercomputer.

These curious results stuck with Stocks for years. In the 1990s, when the Intel Paragon supercomputer was being installed, he saw an opportunity to use it to model alloy magnetism at a more detailed level and simulate the neutron scattering experiment directly. But first he and his colleagues had to develop a completely new type of computer code that could perform first-principle calculations on hundreds to thousands of atoms. In this study they carried out first-principles calculations on 256 copper and nickel atoms randomly distributed in a box to simulate the disordered alloy. The calculations were performed on 256 parallel-processing nodes of the Intel supercomputer.
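
A toy version of the setup shows what "randomly distributed in a box" means for counting local environments. A small periodic simple-cubic grid stands in for the real face-centered-cubic lattice, the site count is smaller than 256, and no electronic-structure physics is attempted.

    import random
    from collections import Counter

    L = 4                                        # 4 x 4 x 4 = 64 sites
    random.seed(0)
    species = {(x, y, z): random.choice(["Cu", "Ni"])
               for x in range(L) for y in range(L) for z in range(L)}

    def neighbors(site):
        x, y, z = site
        steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        return [((x+dx) % L, (y+dy) % L, (z+dz) % L) for dx, dy, dz in steps]

    # Histogram: how many Ni sites have 0, 1, ..., 6 Ni neighbors?  Cable's data
    # and the Paragon calculation both say the moment grows with that count.
    hist = Counter(sum(species[n] == "Ni" for n in neighbors(s))
                   for s, kind in species.items() if kind == "Ni")
    print(dict(sorted(hist.items())))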

Stocks was amazed and pleased to learn that the Paragon's calculations strongly support Cable's results. The calculation not only gave excellent agreement with the measured neutron-scattering cross sections but also provided an atom-by-atom picture of magnetism. From this it can be seen that the magnetic moments are higher for nickel sites whose neighbors are nickel atoms having strong magnetism than for nickel sites surrounded by (1) nickel atoms having weak magnetism or (2) virtually nonmagnetic copper atoms.

The winning combination of code and computer is now being applied to more mysteries of magnetism, such as why and how ferromagnetic materials lose their magnetism at specific temperatures and whether the direction of atomic-level magnetic moments in alloys and magnetic multilayers can vary in previously unsuspected ways. Results from our investigations using nodes and neutrons could help improve the design of alloys to give them better magnetic properties for data storage and power generation.

The research was sponsored by DOE's Office of Energy Research, Basic Energy Sciences.

Explaining How Solids Melt

Chocolate melts in our mouths, but exactly how it turns from solid to liquid isn't clear in our minds. We don't really understand precisely how ice melts as it cools our drinks. Some of the world's top scientists have tried—and failed—to explain melting at the atomic level.

Recently, three ORNL researchers have used the Intel Paragon XP/S 35 to help confirm a theory developed in the 1970s about how substances melt in two dimensions. Using an interatomic force model often used for rare-gas solids, they simulated two-dimensional systems containing 576, 4,096, 16,384, 36,864, or 102,400 atoms.

Results for the two largest systems show the existence of a new "hexatic" phase between the solid and liquid phases, as predicted by the theory. This two-dimensional simulation helps explain the melting process. It also may help researchers gain an understanding of how substances melt in the real world—in three dimensions.
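
One standard diagnostic for telling these phases apart is the sixfold bond-orientational order parameter, which measures how well the "bonds" from each atom to its nearest neighbors line up with a sixfold pattern. The sketch below evaluates it for an interior site of a perfect triangular lattice, where it should be essentially 1; it is a generic textbook quantity, not code from the ORNL study.

    import cmath, math

    # A small triangular lattice of points (a perfect 2-D crystal).
    a = 1.0
    pts = [(i*a + 0.5*a*(j % 2), j*a*math.sqrt(3)/2)
           for i in range(10) for j in range(10)]

    def psi6(k, positions, n_neigh=6):
        # Average of exp(6i*theta) over the angles to the six nearest neighbors.
        # Its correlations are long-ranged in the solid, quasi-long-ranged in the
        # hexatic phase, and short-ranged in the liquid.
        x0, y0 = positions[k]
        near = sorted((p for p in positions if p != positions[k]),
                      key=lambda p: (p[0]-x0)**2 + (p[1]-y0)**2)[:n_neigh]
        return sum(cmath.exp(6j * math.atan2(y-y0, x-x0)) for x, y in near) / n_neigh

    print(abs(psi6(55, pts)))   # ~1.0 for an interior site of the perfect lattice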

The research was supported by ORNL's Laboratory Directed Research and Development Program.

Computational Checks on "Nanomachines"

In this holographic representation of a computational simulation of a nano-device the size of a dust mote, the tube is formed from carbon atoms (blue tennis balls), the soccer ball is a buckyball, and the green squash balls are flowing helium atoms that penetrate and push the buckyball "piston" through the tube.

When you hold ORNL's plastic card for guests to the light—the Kodamotion card made by Kodak—you see what looks like a soccer ball and a few green squash balls rolling in a tube made up of blue tennis balls glued together. In this holographic representation of a computational simulation of a nanodevice (from nanometer, a billionth of a meter), the tube is formed from carbon atoms, the soccer ball is a buckyball, and the green balls are flowing helium atoms that can penetrate and push the buckyball "piston" through the tube. By turning the Kodamotion card slowly, you see interactions among all the atoms, even the naturally vibrating "balls" in the graphite tube.

Nanotechnology futurists dream of microbe-sized machines built from a few million atoms. They envision robots the size of a dust mote that manufacture therapeutic drugs or swim along eating stream pollutants or slaying cancer cells in the bloodstream. They envision "smart" materials embedded with nanosensors that alert operators to atomic-level defects, providing adequate time to repair or replace equipment before it can fail. Already on the scene are "optical tweezers," laser beams that spin microscopic rotors in liquid and can be used to dissect bacteria and manipulate molecules at room temperature.

We have developed computational tools to do reality
checks on designs for nano-scale devices.

But will the dreamers' designs work as intended? Will the devices be stable? What are their operational limits? Our chemical physicists have developed tools to do reality checks on these ideas. They have adapted algorithms to solve molecular dynamics equations quickly on parallel computers—in a minute for every hour previously required. Now, computational simulations of the mechanical properties of molecular bearings and gears are feasible. Interactions of a gas, liquid, or laser light with a nanomachine part can be precisely modeled.
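
The core loop such tools accelerate is the repeated integration of Newton's equations for every atom. Below is a generic velocity-Verlet time step with a two-atom spring as a stand-in force model; it is a textbook integrator, not the ORNL code, and the parameters are arbitrary.

    def velocity_verlet(x, v, f, forces, dt, mass=1.0):
        # Advance positions, recompute forces, then advance velocities.
        n = len(x)
        x = [x[i] + v[i]*dt + 0.5*f[i]/mass*dt*dt for i in range(n)]
        f_new = forces(x)
        v = [v[i] + 0.5*(f[i] + f_new[i])/mass*dt for i in range(n)]
        return x, v, f_new

    def spring_forces(x, k=1.0, r0=1.0):
        # Toy interatomic model: two atoms joined by a spring of rest length r0.
        stretch = (x[1] - x[0]) - r0
        return [k*stretch, -k*stretch]

    x, v = [0.0, 1.2], [0.0, 0.0]
    f = spring_forces(x)
    for _ in range(1000):
        x, v, f = velocity_verlet(x, v, f, spring_forces, dt=0.01)
    print(x)    # the pair oscillates about its rest separation of 1.0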

Computational simulations can lead to new discoveries. ORNL's simulation of a design for a nano-scale "frictionless" bearing revealed the new phenomenon of atomic-scale friction, suggesting that a redesign is needed to reduce this effect.

Our chemical physicists are now working under a CRADA with Angstrom Tools, LLC, to develop user-friendly software to design and test proposed nanostructures and devices. They are convinced that practical nanomachines are in the cards.

The research has been supported by a grant from ORNL's Laboratory Directed Research and Development Fund and by DOE's Office of Energy Research, Basic Energy Sciences, Materials Sciences Division.

