Computational Science and Advanced Computing

Remote Control of ORNL Microscopes
Electronic Notebooks for Scientists
Challenges of Linking Four Supercomputers
Operating Supercomputers Seamlessly
Speeding Up Storage and Retrieval of Supercomputer Data
Advanced Lubricants and Supercomputers
Modeling the Invar Alloy

Simulation of an internal combustion engine.

If you need to solve a difficult mathematical problem, you can count on ORNL for help. The Laboratory is a long-time leader in computational plasma physics and materials science, nuclear physics and transport calculations, matrix computations, geographic information systems, and management of environmental information. We have developed additional strengths in parallel computing, informatics, global climate simulation, groundwater contaminant transport, and computing tools and data storage systems.

The Office of Computational and Technology Research within DOE's Office of Energy Research supports ORNL developments of parallel processing algorithms; tools to facilitate the use of parallel and distributed computing systems; and applied mathematical, statistical, and computational methods for analyses of physical processes. The DOE office also funds the operation of ORNL’s Center for Computational Sciences (CCS); the heart of CCS is the massively parallel Intel Paragon XP/S 150 supercomputer, one of the world’s fastest computers, which is used by the DOE user facility at ORNL called the Computational Center for Industrial Innovation. A distributed computing initiative combines resources and expertise at CCS and Sandia National Laboratories to support DOE’s missions in national security and science. Because of this expertise and equipment, ORNL has become a notable computational resource for a variety of DOE missions.

Remote Control of ORNL Microscopes

Larry Allard (sitting) and Edgar Voelkl demonstrate the remote operation of an electron microscope. Photograph by Tom Cerniglio.

In November 1996, students in an electron microscopy class at Lehigh University focused on a virtual place in cyberspace. They took turns remotely operating an electron microscope by computer. Tapping a key or two on the computer keyboard, the curious students moved the specimen about, changed the image magnification, and adjusted the microscope’s focus. On the computer screen they studied the sample as its different images appeared, transmitted across the Internet from the microscope located at ORNL. They also had live video and audio contact with their ORNL collaborator, using teleconferencing tools.

ORNL researchers are helping to show
the potential of using the Internet
for remote operation of
research equipment.

ORNL researchers and other scientists in the DOE laboratory system have been demonstrating the potential of using the Internet or other networking technology for remote operation of research equipment. At ORNL the Hitachi HF-2000 field emission transmission electron microscope has been used in demonstrations from Lehigh University, Detroit, San Diego, and Washington, D.C. Remote operation of the ORNL microscope was demonstrated to (now former) Secretary of Energy Hazel O’Leary at a DOE conference in March 1996 in Reston, Virginia.

The HF-2000 microscope and three other ORNL electron microscopes are now among nearly a dozen microscopes at DOE facilities that will be remotely operated as part of the new Materials MicroCharacterization Collaboratory pilot project supported by the DOE 2000 initiative. Other ORNL projects receiving DOE 2000 initiative funding are the development of electronic notebooks (see next story) and remote collaborations involving experiments at ORNL’s High Flux Isotope Reactor and the Advanced Photon Source at DOE’s Argonne National Laboratory.

According to DOE, a collaboratory is “an open laboratory spanning multiple geographical areas where collaborators interact via electronic means—‘working together apart.’ ” It enables researchers to conduct experiments remotely and view and discuss the results immediately over the Internet.

In the materials collaboratory, ORNL researchers will be working on microscopy projects with researchers from Argonne, Lawrence Berkeley National Laboratory, and the DOE-funded microscopy center at the University of Illinois at Urbana-Champaign. Contributing partners in the research will be the National Institute of Standards and Technology and six manufacturers of microscopes and control systems: Philips; JEOL; Hitachi; R. J. Lee; Gatan, Inc.; and EMispec.

The collaborators will try not only to bring their “user facilities” to the user but also to make them more user friendly. They will automate routine functions as much as possible and provide an easy, effective, mouse-driven user interface. They will also add features to ensure data security and keep unauthorized users off the system.

Collaborative research projects among scientists at the participating laboratories will embrace critical problems involving surfaces and interfaces, which are important in controlling the behavior of advanced materials. Specific microscope research will focus on catalysts used to control emissions from automobiles and diesel trucks, as well as interfaces between substrates and coatings designed to protect them against corrosion. That ORNL will be a key player in the early development of virtual laboratories is a virtual certainty.

The research is supported by the Office of Mathematical, Information and Computational Sciences, the Office of Basic Energy Sciences, and the Office of Transportation Technology.

Electronic Notebooks for Scientists

Closeup of a page of ORNL’s electronic notebook on the computer screen. Photograph by Tom Cerniglio.

Scientists traditionally use paper notebooks to record their ideas for experiments and notes on experimental setups, observations, and research results. These notebooks are kept on bookshelves or in file cabinets.

Is there a more efficient alternative in the age of computers and the Internet? Why manually copy documentation into a paper notebook when you can cut and paste it electronically and later do searches to quickly find a special entry?

Some of our researchers recommend a new recordkeeping tool—the electronic notebook. An electronic notebook is a repository for objects that document scientific research. It can be used to enter, retrieve, or query objects such as text, sketches, images, tables, spreadsheets, and graphs. Electronic notebooks are not calculators, nor are they chat spaces. They hold a static record of ideas, experiments, and results.

We have developed an electronic notebook
that’s especially useful for scientific
collaborations involving the remote
operation of research equipment.

A notebook accessed by computer offers scientists all the features of the traditional paper notebook, plus the capability to accept multimedia input (audio and video clips) and computer-generated images, tables, and graphs placed by drag and drop. Furthermore, electronic notebooks can be useful for scientific collaborations, especially those involving the remote operation of research equipment. They can be easily searched for information. They can contain hyperlinks to other information, such as a research paper or raw data stored elsewhere on the Internet.

Our electronic notebook can be easily searched for information.

We have developed a prototype for an electronic notebook that is being used by more than 30 different groups around the country. Commercial interest will quickly grow once the obstacle of legal acceptance of the electronic notebook concept is overcome. We anticipate that electronic notebooks will soar in popularity not only for collaborating groups but also for private users.

ORNL is collaborating with researchers from DOE’s Lawrence Berkeley National Laboratory and Pacific Northwest National Laboratory (PNNL) to design a common notebook architecture that will allow interoperation of the different notebooks. That is, an ORNL researcher could use the Laboratory’s notebook interface to view entries that were written by a colleague at PNNL using his or her own notebook. Also, the two researchers could share input tools they each developed.

In developing the Web-based electronic notebook architecture, ORNL researchers are focusing on ensuring the security of the notebook. Electronic notebook entries can be digitally authenticated and signed, individually or collectively. They can be electronically time stamped and notarized. While entries cannot be modified once signed, the pages can be annotated and forward referenced. Entries can be secured by encryption, both in transit and in storage. All these security actions can be performed transparently to the users, thus adding no complexity to the user interface.

The ORNL prototype uses Common Gateway Interface scripts to access notebook pages. The researchers are developing Java applets (mini-programs written in the Java programming language developed by Sun Microsystems) to enter objects in the notebook, such as a pen-based sketch pad. Of course, as they make further developments in this project, they record their progress—in their electronic notebooks.

The research was initially supported by ORNL’s internal Laboratory Directed Research and Development Program. Funding now comes from DOE’s Mathematical, Information, and Computational Sciences Division. A demonstration version of ORNL’s notebook is available on the World Wide Web; it can be accessed by any authorized user from any type of computer that has a Web browser.

Challenges of Linking Four Supercomputers

It would be nice to be able to solve any scientific problem using one computer. But one ordinary single-processor computer in an office isn’t always enough: new problems continually demand more power and speed. Some scientific problems are so complex that the only approach is to divide and conquer. The problem is broken into small parts, and hundreds or thousands of computer processors are linked together to attack pieces of the problem at the same time for a rapid solution. The linked processors form one computer—but it’s a parallel supercomputer that fits in a large room.

Parallel supercomputers in Oak Ridge, Pittsburgh, and Albuquerque are linked by a high-speed network to rapidly solve complex scientific and national security problems. Images by Daniel Pack and Ross Toedte.

In the quest to solve increasingly complex problems, a project is under way to link geographically separated parallel supercomputers by a high-speed network to form one distributed supercomputer. An ORNL team is working with researchers from Sandia National Laboratories and the Pittsburgh Supercomputing Center to develop this system linking four massively parallel supercomputers: ORNL’s XP/S 150 (2048 individual processors) and XP/S 35 (512 processors) Intel Paragons, one Intel Paragon at Sandia (1840 processors), and Pittsburgh’s Cray T3E (512 processors).

Although the concept is simple, the task is not. For example, the supercomputers at the three centers use different operating systems and compilers. A program written for any one of the computers must be able to run on two to four machines simultaneously despite the system differences, a feat somewhat like flawless simultaneous translation into three languages at lightning speed. This translation is done using Parallel Virtual Machine (PVM) software developed at ORNL.

Another challenge is speeding up communication among supercomputers so that no processor is idle while waiting for data from another computer. The asynchronous transfer mode (ATM) interface card, developed by GigaNet, increased the intermachine communication rate from the 300 kilobytes per second possible with Ethernet to 72 megabytes per second (MB/s). But there are other bottlenecks.

Think of data exchange between computers as pumping water through a hose. One slowdown has been the pumping rate: the intermachine message routing systems currently move data through a service node—a “doorway” out of the machine—that carries only 17 MB/s. A PVM modification called “direct to ATM” routing will send data directly to the ATM network instead, enabling applications to use nearly all of the 72-MB/s bandwidth (the hose) between machines. The change will accelerate communication between the ORNL Paragons to about 72 MB/s.
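The arithmetic behind these bottlenecks is easy to check. This short script (illustrative only; the rates are the ones quoted above) compares how long a 1-gigabyte data set would take to move at the Ethernet, service-node, and direct-to-ATM rates:

```python
# Transfer times for a 1-GB data set at the three rates quoted
# in the text (illustrative arithmetic only).

DATA_BYTES = 1_000_000_000  # 1 GB

rates = {
    "Ethernet": 300 * 1_000,          # 300 kilobytes/s
    "service node": 17 * 1_000_000,   # 17 MB/s
    "direct to ATM": 72 * 1_000_000,  # 72 MB/s
}

for name, bytes_per_s in rates.items():
    seconds = DATA_BYTES / bytes_per_s
    print(f"{name:>13}: {seconds:8.1f} s")
```

The same gigabyte that needs nearly an hour over Ethernet moves in about a minute through the service node and in about 14 seconds over the direct ATM path.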

The primary slowdown between ORNL and Sandia, on the other hand, has been the hose. The inter-site Energy Sciences Network (ESnet) moves only 1 to 2 MB/s between the two sites now; however, network capacity upgrades will increase the data flow to 12 MB/s soon and to 72 MB/s in a few months.

The two ORNL Paragons are already linked and running applications. Linking the geographically separated computers is more complicated: for some applications, individual processors must be added incrementally, the system tested, and the bugs worked out.

The first phase of a materials application for modeling the magnetic behavior of nickel-copper alloys is running on ORNL’s Paragons. The materials code is set up to run one atom of the model per computer processor. A production run on 1372 processors across the ORNL Paragons completed more than 2.2 quadrillion mathematical floating point operations. Plans include a 2048-processor run that will involve the ORNL XP/S 150 and Sandia’s Paragon, as well as a 2916-processor run that will involve the ORNL XP/S 150, possibly the XP/S 35, Sandia’s Paragon, and Pittsburgh’s Cray T3E.

A shock physics code developed to address nuclear weapons safety is running across the ORNL and Sandia Paragons at a gradually increasing scale. It involves predicting the response of a nuclear weapon to a hypothetical nearby chemical explosion to assess whether a sympathetic detonation would be likely. This code has run on 1024 processors at ORNL and Sandia, and will also be used for fundamental scientific studies such as the consequences for global climate of an asteroid striking an ocean.

A third application, a global climate model, will include two different codes (one for the atmosphere and one for the ocean) communicating with each other while running on multiple computers. Work on this application is focused on determining the best procedures for approaching such a complex calculation, which will predict climate over an extended time, in a distributed computing setup.

When the distributed supercomputer is complete, it will be big enough to tackle a host of complex problems that are virtually impossible to solve today. We will be counting on distributed computing using ever larger machines to solve the biggest problems in the future.

This project is funded by DOE’s Mathematical, Information, and Computational Sciences Division.

Operating Supercomputers Seamlessly

ORNL is working to make the multi-site
supercomputing environment “seamless”
so that linked machines can be used
as if they were one computer.

As research problems become larger and more complex, their solutions require more and more computing power. They inspire the development of advanced computing systems such as massively parallel supercomputers in which many processors are linked together. Some of these machines at different sites are becoming even more powerful because they are linked together by a high-speed network, forming a distributed supercomputer. One of the current challenges in computing research is making such a complicated environment seamless so that a user can run an application across these linked machines as if they were one computer.

In the quest to solve increasingly complex
problems, geographically separated
parallel supercomputers are being
linked by a high-speed network
to form one “distributed
supercomputer.”

ORNL is helping develop a seamless environment that spans the parallel supercomputers at ORNL, Sandia National Laboratories, and the Pittsburgh Supercomputing Center. The research team wants to make distributed supercomputing systems like this one so easy to use that they are accessible to any researcher whose project needs them, not just to computer scientists.

To achieve the goal of seamlessness, several complications are being worked out. For example, all three supercomputer centers use different operating systems, access policies, and communication networks. Connecting these heterogeneous computers is Parallel Virtual Machine (PVM) software developed at ORNL. PVM, which won an R&D 100 award in 1994, is specifically designed to make a heterogeneous cluster of computers appear as a single computational resource. As the seamless environment becomes more of a reality, a user will be able to specify the general requirements for an application (e.g., amount of memory, processing time, storage), and then PVM will decide the most efficient way to run it, do the necessary translations, and port it to the appropriate processors without further instruction from the user.
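The actual PVM machinery is far more sophisticated, but the basic idea of matching an application's stated requirements to heterogeneous resources can be sketched in a few lines. Everything here — the machine names, numbers, and selection logic — is hypothetical, not PVM's real scheduling algorithm:

```python
# Hypothetical sketch of requirement-driven resource selection,
# in the spirit of (but much simpler than) what a system like PVM
# does when mapping an application onto a heterogeneous cluster.

machines = [
    {"name": "paragon-a", "free_procs": 512, "mem_per_proc_mb": 32},
    {"name": "paragon-b", "free_procs": 2048, "mem_per_proc_mb": 16},
    {"name": "cray-t3e", "free_procs": 256, "mem_per_proc_mb": 128},
]

def select(machines, procs_needed, mem_needed_mb):
    """Keep only machines meeting the per-processor memory
    requirement, then allocate processors (largest pool first)
    until the request is filled."""
    plan = []
    remaining = procs_needed
    for m in sorted(machines, key=lambda m: -m["free_procs"]):
        if m["mem_per_proc_mb"] < mem_needed_mb:
            continue
        take = min(remaining, m["free_procs"])
        if take:
            plan.append((m["name"], take))
            remaining -= take
        if remaining == 0:
            return plan
    raise RuntimeError("request cannot be satisfied")

print(select(machines, procs_needed=600, mem_needed_mb=32))
```

Here the largest machine is skipped because its per-processor memory is too small, and the 600-processor request is split across the two machines that qualify — the user states requirements, and the system decides placement.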

The formidable security systems and access restrictions around all three computer centers are another obstacle. Sandia, particularly, because it is a weapons laboratory, has a practically impenetrable firewall around its computing system designed to deflect anything from the outside. Methods had to be devised to get data through the security systems without compromising their effectiveness and to prevent interception of data traveling over the network. Sufficient progress has been made to allow shared computer runs between ORNL and Sandia.

One of the highest hurdles remaining is scheduling run time on the machines. Competition is stiff for time on even one of the supercomputers; negotiating simultaneous computing time and preparing each machine for shared runs is a logistical nightmare. At present, an expert must schedule each linked run manually. The goal is eventually to automate the entire scheduling process from input of code to output of results.

Most of the low-level infrastructure for the distributed system is in place. The seamless environment work is aimed at making access to the system progressively easier so that the user will see and use the entire collection of resources as a single computer. At present, only about five persons have the skills to set up an application to run at more than one site; the long-term goal is for any scientist with an appropriate application to be able to do so.

Several scientific problems are already being solved in our multiple-center distributed computing environment. The supercomputing team is working with researchers to meet their needs for more computing power and speed as it chips away at the obstacles to running these supercomputer clusters seamlessly in parallel.

This project is funded by DOE’s Mathematical, Information, and Computational Sciences Division.

Speeding Up Storage and Retrieval of Supercomputer Data

The speed of parallel computer processors isn’t the only thing that dictates how fast supercomputing applications can run. Just as essential is the speed with which processors can store and access data. In fact, given the development of lightning-fast parallel computers and the exponential growth in the sizes of data sets, some computing experts think the time required for data storage and retrieval will set the pace for computing for the foreseeable future.

The excellence of HPSS has just
been recognized in a 1997
R&D 100 Award.

The pace has been quickened by the development of the High-Performance Storage System (HPSS) by IBM and a consortium of national laboratories, including ORNL. IBM began marketing the system in late 1996. The HPSS moves very large data files among high-performance computers, networked workstation clusters, and storage systems many times faster than was possible before.

Powerful parallel computers generate vast quantities of data (including results of calculations), and systems are needed to accept, store, catalog, and retrieve those data rapidly and with absolute reliability. The HPSS addresses those demands using standard network and storage technologies and vendor products (top-of-the-line ones, for optimal effectiveness). It is largely a huge software application—about a million lines of code—that controls a user’s storage hardware and network devices and generates and maintains the “metadata” (information that identifies the stored files in detail—labels, locations, sizes, access limitations, etc.) needed by a particular site.
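The kind of per-file metadata the text describes might be modeled, very roughly, like this. The field names and structure are hypothetical illustrations, not HPSS's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    """Hypothetical sketch of per-file metadata of the kind an
    archival storage system maintains: label, location, size,
    and access limits. Not HPSS's real data model."""
    label: str
    storage_class: str            # e.g. "disk" or "tape"
    location: str                 # device/volume, site-specific
    size_bytes: int
    allowed_users: set = field(default_factory=set)

    def may_read(self, user: str) -> bool:
        """Enforce the access limitation recorded in the metadata."""
        return user in self.allowed_users

record = FileMetadata("climate-run-042", "tape", "silo3/vol17",
                      2_500_000_000, {"alice"})
print(record.may_read("alice"), record.may_read("bob"))
```

The point is that the storage system, not the user, owns this bookkeeping: every stored file carries a record saying what it is, where it lives, and who may touch it.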

An HPSS package consists of servers (central computers in local area networks) and “data movers,” software modules that transfer data streams between processors and storage devices. The package is completely modular; any module can be upgraded or replaced without affecting the rest of the system. Like most storage systems, it is hierarchical: From the computer, it routes data into disk storage arrays (high speed but modest capacity) and later, for archiving, onto tape (high capacity but slower access). The HPSS manages these different classes of storage devices as a single system.
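A hierarchical policy of this kind — keep recently used files on fast disk, migrate stale ones to high-capacity tape — reduces to something like the following toy sketch. The file names and the 7-day threshold are hypothetical:

```python
# Toy sketch of hierarchical storage management: files start on
# disk and are migrated to tape once they go unused longer than a
# threshold. All names and the 7-day threshold are made up.

MIGRATE_AFTER_DAYS = 7

def migrate(files, now_day):
    """files: dict name -> {"tier": ..., "last_access_day": ...}.
    Move stale disk files to tape; return the migrated names."""
    moved = []
    for name, info in files.items():
        stale = now_day - info["last_access_day"] > MIGRATE_AFTER_DAYS
        if info["tier"] == "disk" and stale:
            info["tier"] = "tape"
            moved.append(name)
    return moved

files = {
    "fresh.dat": {"tier": "disk", "last_access_day": 98},
    "old.dat": {"tier": "disk", "last_access_day": 80},
    "archived.dat": {"tier": "tape", "last_access_day": 10},
}
print(migrate(files, now_day=100))   # only old.dat moves
```

To the user all three files look alike; the system quietly decides which class of device holds each one.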

One secret to the HPSS’s success is that all computer processors and storage devices are connected directly to the network so that data move directly between them at network speed. (Conventional storage systems route data through a server and a control interface, a big bottleneck.) Another HPSS plus is parallelism: Many data streams can move simultaneously among multiple computing processors and multiple storage devices, or a single huge file can be split into smaller subfiles that are transferred simultaneously. These advantages give HPSS the capability to transfer data at rates of gigabytes (billions of bytes) per second; the actual speeds for a given site are limited only by the amount of available hardware.
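The parallel-striping idea — split one large file into subfiles that can each travel over its own path to its own device, then reassemble them — can be shown with a toy byte-level sketch (illustrative only, not HPSS code):

```python
def stripe(data: bytes, n: int) -> list:
    """Round-robin the data across n stripes, as if each stripe
    traveled over its own network path to its own device."""
    stripes = [bytearray() for _ in range(n)]
    for i, byte in enumerate(data):
        stripes[i % n].append(byte)
    return stripes

def reassemble(stripes: list) -> bytes:
    """Interleave the stripes back into the original byte order."""
    total = sum(len(s) for s in stripes)
    out = bytearray(total)
    for k, s in enumerate(stripes):
        out[k::len(stripes)] = s
    return bytes(out)

payload = b"supercomputer data set"
print(reassemble(stripe(payload, 4)) == payload)   # round trip is lossless
```

With four stripes moving concurrently, the aggregate rate approaches four times that of a single path — which is exactly why transfer speed at a site scales with the amount of hardware available.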

Advanced techniques are used to ensure security and protection of data. Client-server processes are structured as “transactions,” related groups of functions that must occur together to maintain the integrity of the data set. If all parts of a transaction are not completed, the entire transaction is redone.
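The all-or-nothing behavior described above can be sketched as a simple transaction wrapper — illustrative only; HPSS's actual transaction machinery is far more elaborate:

```python
# Toy sketch of transactional updates: either every operation in
# the group is applied, or none is (the state is rolled back and
# the transaction must be redone).

import copy

def run_transaction(state: dict, operations):
    """Apply all operations to state; on any failure, restore the
    original state and report that the transaction failed."""
    snapshot = copy.deepcopy(state)
    try:
        for op in operations:
            op(state)
        return True
    except Exception:
        state.clear()
        state.update(snapshot)
        return False

catalog = {"files": 2}
ok = run_transaction(catalog, [
    lambda s: s.update(files=s["files"] + 1),
    # Simulated device failure partway through the transaction:
    lambda s: (_ for _ in ()).throw(IOError("device lost")),
])
print(ok, catalog)   # False {'files': 2} -- the update was rolled back
```

Because the second operation failed, the first operation's effect was discarded too, so the catalog never disagrees with the data it describes.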

The HPSS is scalable and almost infinitely extensible. It can assimilate virtually any number, speed, and capacity of processors and storage devices.

ORNL’s Center for Computational Sciences is converting its entire storage system from NSL-Unitree to HPSS. Sandia, Lawrence Livermore, and Los Alamos national laboratories already have adopted the system, as have a number of other high-performance computing centers. It will be used by these DOE defense labs to carry out their responsibility of ensuring the integrity of the U.S. nuclear stockpile in a world without nuclear testing.

The HPSS will be useful in any environment that involves high-speed transmission of substantial amounts of data, such as hospitals, corporations, universities, and some types of online services. (Cable services that provide movies on demand, for example, would be a natural for the HPSS.) ORNL is a member of HOST, a consortium of medical care institutions, medical groups, vendors, and research institutions interested in using the HPSS for storing and rapidly accessing medical records. The requirements for the task would be enormous: all medical records in the country must be retrievable quickly through one index, and security measures must eventually be in place to prevent unauthorized access to or tampering with records.

ORNL is developing additional features for HPSS and is working with StorageTek, a maker of tape libraries, to ensure that HPSS works properly with that company’s very fast Redwood drives. Work is under way on other compatibility issues to ensure that the HPSS is adaptable to platforms across the computer industry.

Given its unique combination of speed, parallelism, and scalability, the HPSS is expected to define the state of the art in advanced storage system software for some time. It will be setting the pace for high-speed computing into the 21st century.

The project was supported initially through a CRADA with IBM and now is funded through the Accelerated Strategic Computing Initiative, part of DOE’s Defense Programs.

Advanced Lubricants and Supercomputers

The squeaky wheel gets the grease to reduce friction between moving parts and prevent them from burning up. But it’s less widely known that motor oils used in our cars today will not be able to take the heat as lubricants in tomorrow’s highly efficient vehicles. Because the lean, clean cars being developed for the U.S. Partnership for a New Generation of Vehicles will operate at higher temperatures and engine speeds, today’s motor oils would break down too fast to be reliable. Thus, industry is searching for advanced lubricants that will stand up to the harsh conditions of advanced vehicles. The problem is that it would take many decades for researchers in the industry to synthesize and test billions of different hydrocarbon liquids to identify the most promising candidates for advanced lubricants. An attractive complement to experimentation is computation.

ORNL has shown that computer simulations accurately predict the performance of lubricants.

ORNL researchers have shown that molecular simulation calculations on our Intel Paragon supercomputers can accurately predict the performance of advanced lubricants. We have developed algorithms and parallel codes for use on the Intel Paragon so that we can simulate the behavior of lubricant molecules. This computational approach should provide industry with a much faster, cheaper way to identify suitable lubricants for advanced vehicles.

Many different lubricant candidates (i.e., hydrocarbon liquids) can be synthesized, and even more can be simulated. The properties of a hydrocarbon liquid depend in a complex way on two factors: the way in which molecules interact with each other and the conformations of the individual molecules themselves in the liquid—that is, the spatial arrangements of atoms in a molecule that stem from free rotation of the atoms about a single chemical bond. Complicating matters further is that these two factors are intimately interrelated (molecules interacting with each other affect their conformations, and vice versa). As a result, hydrocarbon liquid properties are found to depend on the number of carbon atoms in the molecule’s backbone and the number, position, and length of branches along the backbone, as well as on temperature and pressure.

In our simulations, we have studied how the viscosity of various hydrocarbon liquids varies with all these factors. Viscosity is a measure of resistance to flow; it’s high in corn syrup and low in alcohol. If the viscosity of oil changes during use, problems could ensue. If the oil viscosity is too high before you start your car on a cold morning, the engine might not start; if it’s too low when you’re driving on the interstate highway during a summer afternoon, your engine might fail.

Viscosity of a lubricant can be changed by a number of factors. Because different fluids have different viscosities, the structure of the hydrocarbon molecules—backbone length, branching, etc.—is expected to affect viscosity. Temperature is a factor—liquids usually have a lower viscosity at a high temperature. A third important factor is the shear rate. The shear rate in a lubricant is determined by how rapidly the two solid surfaces being lubricated move past one another and how far apart the moving surfaces are. With a faster speed and narrower distance, the shear rate climbs.
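Shear rate is simply the relative speed of the two surfaces divided by the gap between them. A quick calculation (the speed and gap here are hypothetical round numbers, not measurements from any particular engine) shows how enormous engine shear rates can become:

```python
def shear_rate(relative_speed_m_s: float, gap_m: float) -> float:
    """Shear rate (1/s) = relative speed of the surfaces / gap
    between them."""
    return relative_speed_m_s / gap_m

# Hypothetical example: surfaces sliding past each other at
# 10 m/s, separated by a 1-micrometer oil film.
rate = shear_rate(10.0, 1e-6)
print(f"{rate:.0e} per second")
```

Ten million reciprocal seconds from such ordinary-sounding numbers — halving the film thickness or doubling the speed doubles the shear rate again.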

At low shear rates most liquids, including lubricants, exhibit constant viscosity when the shear rate changes. This constant viscosity behavior is called Newtonian behavior because it is described by Newton’s law of viscous flow. On the other hand, at the high shear rates that occur in automobile engine lubricants, most liquids (including lubricants) exhibit “shear-thinning” behavior—their viscosity decreases as the shear rate increases. The experimentally measured viscosity of a lubricant is usually its Newtonian viscosity because the shear rates at which the shear-thinning behavior occurs (shear rates that occur in your automobile engine today) are far too high for experimental viscosity measurement.
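This Newtonian-plateau-then-thinning behavior is often summarized with a standard empirical expression such as the Carreau model; here is a sketch using that form, with parameter values that are purely hypothetical rather than fitted to any real oil:

```python
def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """Carreau model: viscosity sits at the Newtonian plateau eta0
    at low shear rates and shear-thins (power-law exponent n < 1)
    toward eta_inf at high shear rates. lam is a relaxation time."""
    return eta_inf + (eta0 - eta_inf) * (
        1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

# Hypothetical parameters for illustration only.
eta0, eta_inf, lam, n = 0.1, 0.001, 1e-6, 0.5

low = carreau_viscosity(1.0, eta0, eta_inf, lam, n)    # ~eta0: Newtonian
high = carreau_viscosity(1e9, eta0, eta_inf, lam, n)   # thinned well below eta0
print(low, high)
```

At a shear rate of 1 per second the model returns essentially the Newtonian viscosity; at a billion per second — the regime of a running engine — the predicted viscosity has dropped far below it, which is exactly why the measured Newtonian value alone cannot characterize an engine lubricant.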

Finally, previous experiments and molecular simulations by other researchers have suggested that a lubricant’s viscosity may change as the distance between the two lubricated surfaces decreases to molecular dimensions. Of course, this finding would be vitally important because engine wear occurs where surfaces are closest to one another.

This visualization is created from a molecular simulation of squalane molecules in confined flow between moving surfaces. The red balls are the atoms of the moving surfaces; the top surface moves to the right, and the bottom surface moves to the left. The balls of other colors represent the carbon atoms of the squalane molecules (the hydrogen atoms are not shown). The black balls are the carbon atoms at each end of the 24-carbon backbone of the squalane molecules. The gray balls are the single-carbon branches along the backbone. The squalane molecules are shown in different colors so that it is easier to distinguish the carbon atoms of one molecule from those of another. Careful examination of the squalane molecules reveals that they are somewhat aligned in layers parallel to the moving surfaces rather than being randomly oriented as would be the case in the static bulk liquid.

We have simulated the performance of squalane, a hydrocarbon molecule typical of many lubricants. The squalane molecule has 24 carbon atoms in its molecular backbone and 6 short side branches (each 1 carbon atom long) symmetrically placed along the backbone. One measure of lubricant performance is the viscosity index, a measure used by industry to describe how viscosity changes with temperature. The viscosity index of squalane predicted by our simulations is in excellent agreement with experimental measurements. This was the first time that a substance’s viscosity index had been predicted by molecular simulation. It was also the first time that accurate models had been extended to calculating the viscosity of molecules as large as those used in motor oils.
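The viscosity index itself comes from a standardized calculation (ASTM D2270): the oil's kinematic viscosity at 40°C, U, is compared with the 40°C viscosities L and H of two reference oils that have the same viscosity as the oil at 100°C. In practice L and H are read from published tables; the values below are made up purely to show the arithmetic:

```python
def viscosity_index(U, L, H):
    """Basic viscosity-index formula (valid for VI <= 100).
    U = oil's kinematic viscosity at 40 C;
    L, H = 40 C viscosities of the low-VI and high-VI reference
    oils whose 100 C viscosity matches the oil's.
    L and H come from standard tables; the call below uses
    made-up values for illustration."""
    return 100.0 * (L - U) / (L - H)

print(viscosity_index(U=60.0, L=100.0, H=50.0))   # 80.0
```

A higher index means the oil's viscosity changes less with temperature — the property the ORNL simulations were able to predict before any oil was synthesized.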

Such calculations were beyond the capabilities of earlier computers and computer codes. With this success, industry can now use molecular simulations with confidence to predict the viscosity index of new lubricants, even before they have been synthesized.

At the University of California at Santa Barbara and the University of Illinois at Urbana-Champaign, research groups have experimentally studied the viscosity of liquids between surfaces moving past one another separated by only a few atomic diameters. They found that the viscosity measured in this so-called “confined flow” appeared to be much higher than that of bulk oil, and they inferred that the flowing oil’s molecules may be lined up in layers parallel to the moving surfaces. This behavior occurs at shear rates much lower than those in your engine.

Previous molecular simulations (using models of unknown accuracy) apparently supported these interpretations; however, our molecular simulations of squalane in confined flow (using models proven to be accurate) have challenged the earlier conclusions. We found that the viscosities of the confined and bulk oil are identical when correct attention is paid to all of the factors involved in either simulations or experiments. We concluded that at high shear rates, shear flow alone is sufficient to align the molecules, and that the aligning effect of narrowly spaced surfaces influences the viscosity only at very low shear rates.

Our accurate computer simulations have challenged the consensus from results of previous experiments and simulations, but from this challenge has come a new level of molecular understanding. Just as the squeaky wheel gets the grease, we believe the exciting new results of molecular simulations will attract the attention of both experimental and theoretical scientists and of industry, as well.

The research was sponsored by the Laboratory Directed Research and Development Program at ORNL.

Modeling the Invar Alloy

Malcolm Stocks and Bill Shelton view a computer image of the direction and magnitude of magnetic moments calculated for a disordered nickel-iron alloy called Invar. Inset: A simulation visualization of the Invar alloy. Photograph by Tom Cerniglio.

Like people, some materials behave normally and others do not. Take a normal metal or alloy and heat it up. It will expand because the more the material is heated, the more its atoms vibrate and push apart. Take a normal magnet made of ferromagnetic metals such as iron, nickel, or cobalt. Its atoms are magnetic—like compass needles, they can be aligned up or down to north or south poles, producing a large magnetic field.

Now consider a disordered alloy of nickel and iron called Invar (short for “invariable”). Discovered 100 years ago by the Swiss-born French physicist Charles Guillaume (who received a Nobel Prize for physics in 1920 for his discovery), Invar is not normal. Heat it within a certain temperature range and it won’t expand or contract (it’s invariable). This ability to maintain its dimensions over a wide range of temperatures makes it useful for highly precise Swiss watches, pendulums in clocks, standards of measure, high-precision instruments, shadow masks for televisions and computer monitors (to reduce glare), and tubing surrounding fiber-optic cables.

Invar is also a magnetic material—but not a normal one. For one thing, based on neutron and X-ray scattering data obtained by ORNL researchers, Invar’s resistance to contraction has been definitively linked to magnetic pressure: natural repulsion by iron atoms as they approach each other during cooling keeps the vibrating atoms from moving even closer together, nearly canceling the contraction effect. For another thing, the orientations of its atomic magnets are unusual for a magnetic material; they are not simply up or down—sometimes they point outwards at an angle, like a compass needle pointing north-northwest. Because it is so abnormal, Invar is one of the most studied and most complicated of materials.

ORNL scientists are using computer simulation to better understand the complex magnetic behavior of a disordered nickel-iron alloy in which the atoms are randomly arranged within a face-centered cubic structure. Using an ORNL-developed computer code on 256 nodes of the Intel Paragon massively parallel supercomputer, they calculated the orientations of 256 magnetic atoms of an Invar alloy that is 36% nickel and 64% iron. Such a material is highly magnetic but on the borderline—small changes in volume or composition (e.g., an increase in iron content) could result in loss of magnetism.

We started with atoms whose magnetic moments (orientations with respect to the direction of magnetization) point in random directions. Our computer code, which knows the position of each nickel and iron atom, performs the quantum mechanical calculations that determine which way the magnetic moment of each atom should point. The code calculates electronic charge density (how electrons arrange themselves about sets of nuclei), magnetization density (the imbalance in up and down electron spins), and the average orientation of the magnetization associated with each atom.
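The spirit of that iterative search can be conveyed by a loose classical analogue (not the first-principles quantum code itself): start a chain of unit "moments" in random directions, then repeatedly point each one along the field produced by its neighbors. With a ferromagnetic coupling the moments settle into a common direction; all names and parameters here are illustrative.

```python
import math
import random

def relax_moments(n=16, sweeps=200, seed=1):
    """Toy classical analogue of the moment-direction search: a chain of
    unit moments (2-D for simplicity) starts in random directions, then
    each moment is repeatedly realigned with the exchange field of its
    neighbors (ferromagnetic coupling, so the field is the neighbor sum)."""
    rng = random.Random(seed)
    moments = []
    for _ in range(n):
        t = rng.uniform(0.0, 2.0 * math.pi)
        moments.append((math.cos(t), math.sin(t)))
    for _ in range(sweeps):
        for i in range(n):
            fx = fy = 0.0
            if i > 0:                       # left neighbor
                fx += moments[i - 1][0]
                fy += moments[i - 1][1]
            if i < n - 1:                   # right neighbor
                fx += moments[i + 1][0]
                fy += moments[i + 1][1]
            norm = math.hypot(fx, fy)
            if norm > 1e-12:                # keep old direction if field cancels
                moments[i] = (fx / norm, fy / norm)
    return moments

# After relaxing, the net magnetization per moment approaches 1:
# the chain has found a common (ferromagnetic) direction.
```

Each realignment lowers the local exchange energy, so the sweep converges to a self-consistent arrangement; in the real alloy, competing nickel-iron and iron-iron interactions are what make the converged arrangement noncollinear rather than uniform.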

When the volume of the Invar alloy is decreased or its iron content is increased, two changes might occur, according to earlier studies. The alloy can remain ferromagnetic, with all its magnetic moments pointing in the same direction, parallel to the direction of magnetization (collinear), but with the moments shrinking in magnitude, accounting for the sudden drop in its magnetic strength. Or the magnetic moments of atoms on the cube corners of the face-centered cubic alloy could point up while the magnetic moments of atoms at the center of each face point down, making the alloy antiferromagnetic but still collinear.

Our calculations suggest that some magnetic moments are antiferromagnetic and some are noncollinear (at an angle to the direction of magnetization) in the nickel-iron Invar alloy. All nickel magnetic moments point in the same direction as the magnetization, but some iron magnetic moments point at angles to it. In shells where 1 iron atom is surrounded by 12 iron atoms, the magnetic moment of the central atom wants to point down, not up like its neighbors. This neighbor shell is antiferromagnetic. If a central iron atom has fewer than 12 iron neighbors (8 to 11), the atoms in this neighbor shell act confused, unable to decide whether their magnetic moments should point up or down; instead they become noncollinear, each pointing at an angle to the direction of magnetization. Whether an atom’s magnetic moment flips upside down or becomes noncollinear, in both cases it is smaller in magnitude than the moments that point up.

To improve our simulations, we will be collaborating with scientists at DOE’s Ames Laboratory to understand the effects of temperature on the dynamics of magnetic moments, including the Curie temperature at which magnetism disappears. We have been working with other ORNL scientists performing neutron scattering experiments on Invar alloys at the High Flux Isotope Reactor to understand the complex magnetic structure of the alloys. And we have collaborated with ORNL scientists doing high-energy X-ray experiments using the National Synchrotron Light Source at Brookhaven National Laboratory to study observed displacements of Invar atoms off their ideal lattice sites.

Scientists have long suspected that the Invar alloy contains a mix of magnetic orientations (ferromagnetic, antiferromagnetic, and noncollinear). Our simulations lend theoretical support to that idea. Thanks to the combination of experimentation and computation, we expect that advances in understanding this invariable alloy will be almost constant.

The research is sponsored by DOE, Office of Energy Research, Office of Basic Energy Sciences, Division of Materials Sciences, and Office of Computational and Technology Research, Division of Mathematical, Information, and Computational Sciences.
