Energy Production and End-Use Technologies

Fusion Plasma Turbulence Can Be Suppressed
ORNL Reviews Nuclear Plant Containments
ORNL Leads Study on Using Reactors To Burn Plutonium
ORNL and Industry Seek Commercially Acceptable Wire
Highly Efficient Refrigerator Design
ORNL’s Soft-Switching Inverters Will Help Save Energy
U.S. Vulnerable To Another Oil Price Shock
ORNL’s Noisy Chaos Approach May Improve Engine Efficiency

Fossil and fission energy now and fusion energy in the future. Efficient distribution of electricity by utilities and efficient use of energy in industrial processes. Leak-tight buildings and refrigerators on a strict energy diet. Transportation vehicles that go farther on less fuel to reduce our nation’s need for imported oil. These are some of the targets of ORNL’s energy-related research.

ORNL is one of the world’s premier centers for research and development (R&D) work on energy production, distribution, and use and on the effects of energy technologies and decisions on society. As a primary performer of DOE-sponsored R&D in energy efficiency, ORNL brings to bear its remarkable capabilities in materials science, biotechnology, engineering, and technology development and evaluation. The objects of our expertise are transportation systems, buildings and building materials, industrial processes, and utility distribution systems. Our research on fission, fossil, and fusion technologies applies the Laboratory’s strengths in physics and engineering to the improvement of existing systems and the development of new science and technology.

Unique facilities for energy-related R&D are used both for technology development and for fundamental investigations in the basic energy sciences that underpin the technology work. ORNL’s scientific, engineering, environmental, economic, and social science expertise is integrated to supply the information needed in making decisions that ensure a sustainable energy future.

Fusion Plasma Turbulence Can Be Suppressed

In fusion energy, a key to getting heat out is to make sure that heat is initially locked in. That’s why fusion energy researchers have tried to find ways to eliminate turbulence in experimental fusion devices.

Turbulence at the DIII-D tokamak edge amplifies a global shear flow that, in turn, suppresses turbulence. These two snapshots of a time evolution of turbulence at the plasma edge show fully developed turbulence just before a transition (left) and suppression of turbulence at the edge (right).

Turbulence is a major cause of heat loss in fusion plasmas in doughnut-shaped tokamak devices. Heat losses must be avoided because the goal of a fusion energy device is to obtain an output of energy (heat to make steam to generate electrical power) that far exceeds the input of energy—electrical power. Electricity is needed to operate magnets that confine the charged particles (electrons and nuclei) making up the gaseous hydrogen plasma so that they are in close contact (high density). Also needed is electrical power to heat the plasma to high temperatures so the closely packed hydrogen nuclei overcome their natural repulsion and fuse, forming helium and releasing neutrons and large amounts of usable heat.

In the 1980s, it was found that as the power used to heat the plasma is increased, plasma turbulence also increases. This irregular fluctuation in plasma velocity and pressure results in loss of heat (what fusion energy researchers call “energy transport”). The decline in plasma heating efficiency with increasing power became a major concern. That was the bad news.

ORNL-led experiments showed that plasma turbulence and associated heat losses can be reduced in a properly operated fusion device.

Now, here’s the good news. Experiments led by ORNL in 1996 at the DIII-D tokamak in San Diego, together with experiments performed at the Tokamak Fusion Test Reactor in Princeton, New Jersey, showed that plasma turbulence can be suppressed by operating experimental fusion devices in the most advanced “plasma confinement mode.” It was found that turbulence could be almost eliminated at certain power levels by properly controlling plasma temperature, density, and current. One change essential to turbulence suppression was ramping the current to make it peak toward the plasma edge rather than in the center. In future experiments, researchers plan to use radio-frequency power for this purpose and to sustain the plasma discharges in a steady state.

Four successive stages in the evolution of plasma turbulence. The first snapshot shows the turbulent eddies. In the second, turbulence amplification of shear flow has started, with the eddies being distorted by the flow. The distortion increases and the eddies are sheared in the third frame. Finally, turbulence is totally suppressed.

In parallel, a theoretical effort involving scientists at ORNL, the University of California at San Diego, and the Princeton Plasma Physics Laboratory developed an explanation for the reduced turbulence seen on DIII-D. It was proposed that, under certain conditions, a plasma can “self-organize” and move from the state of chaotic turbulence to a state of order. A turbulent plasma consists of eddies, or small vortices, that increase the rate at which the energy leaks out. Flows in plasma, such as plasma rotation, are driven by the electric field generated by currents in the plasma in interaction with the tokamak’s magnetic field. If plasma conditions are properly controlled, the same turbulence can amplify flows in different plasma locations to create a global “sheared flow” in which each eddy is pulled in two directions, breaking it up. In this way, turbulence and the resulting energy losses from the plasma can be nearly eliminated.

The ORNL-led turbulence suppression experiments enabled the DIII-D machine to achieve a record performance. Overall, the project was considered a shear success.

The research was sponsored by DOE, the Office of Energy Research, Office of Fusion Energy Sciences.

ORNL Reviews Nuclear Plant Containments

In this example of a concrete containment being constructed for a nuclear power plant, some of the primary construction materials are evident.

In the United States, the biggest challenge to a concrete containment at a nuclear power plant was the 1979 Three Mile Island accident. In this case, a sequence of equipment malfunctions and operator errors caused a loss of reactor core cooling that led to partial melting of the fuel and a release of radioactivity inside the concrete containment. But the containment structure did its job. Despite the temperature and pressure buildup from the accident, the concrete containment limited the release of radioactivity into the atmosphere to a level low enough to cause the public no harm.

Today’s nuclear power plants have a large containment structure made of either concrete or steel to limit radiation releases to the surrounding public. Of the 110 U.S. nuclear power plants licensed for commercial operation, over 60% have a concrete containment. The others use a steel containment but rely on concrete structures to provide additional radiation shielding and protection against environmental effects.

Although these plants’ concrete structures have performed well and will continue to meet their intended functions, some in all likelihood will undergo degradation from exposure to hostile environments, just as some U.S. concrete bridges and highways are deteriorating. Examples of potential threats to the integrity of nuclear-plant containment structures are listed in a document prepared in 1996 by ORNL and Johns Hopkins University for the U.S. Nuclear Regulatory Commission (NRC). The document is titled Report on Aging of Nuclear Power Plant Reinforced Concrete Structures.

One potential threat to these structures is corrosion of the reinforcing steel bars used to compensate for the low strength of concrete when it is loaded in tension. Another threat would be an attack by chemicals, which can either erode the concrete or cause harmful expansive reactions of its constituents. Also, concrete located in regions where moisture accumulates can crack from exposure to freezing and thawing conditions. Finally, if the concrete is exposed to elevated temperatures or irradiation, it can crack and lose its strength and rigidity.

The report’s authors have documented examples of degradation already present in some nuclear power plant containment structures—corrosion of steel reinforcement in cooling-water-intake structures, greater than estimated losses of forces used to precompress the concrete in prestressed concrete containment designs, and cracking and spalling of containment dome concrete as a result of weather-induced freezing and thawing. As nuclear power plants continue to age, the authors state, the instances of degradation can be expected to increase.

By the end of this decade, more than 60 U.S. commercial nuclear power plants will be more than 20 years old, with some nearing the end of their initial operating license period. Faced with the large costs of shutting down and cleaning up reactors and replacing lost generating capacity with other sources, many U.S. utilities are expected to seek extensions of their initial plant operating licenses (nominally a 40-year period). Although mechanical and electrical equipment in a plant can be replaced, it would be extremely difficult and economically unattractive to replace a concrete containment structure. To get approval from the NRC for a continuation of service, utilities must provide evidence that the concrete structures will continue to perform as designed.

ORNL has developed a methodology for determining if concrete containments at aging nuclear power plants will continue to perform satisfactorily.

Since 1988, ORNL researchers have been developing a methodology the NRC can use as part of the evaluation process for nuclear power plants seeking to continue operation. Under this program, ORNL’s Structural Materials Information Center was set up to collect and disseminate, both on paper and electronically, data on how the properties of materials vary over time under the influence of environmental stressors. Currently, more than 140 materials are being evaluated in the center. We have developed an aging assessment methodology that uses ranking criteria to identify structural components and degradation factors of primary importance for managing aging structures. This methodology enables utilities to focus their inspection programs on structures or structural components most important to aging and identifies the type of degradation that might be expected.

We have established guidelines and criteria for assessing the condition of concrete containment structures. Also, a reliability-based approach has been developed that can be applied in evaluation of these structures to estimate their current and future performance. One application of this approach would be in the development of optimized in-service inspection and maintenance programs.

In addition, we have conducted in-depth evaluations of (1) several concrete-related technologies, such as knowledge-based systems for concrete and concrete-related materials; (2) in-service inspection and condition assessment techniques and methodologies for their application; (3) corrosion of metals embedded in concrete, including criteria for applying methods to halt or prevent corrosion (e.g., cathodic systems that protect the embedded steel reinforcement by forcing corrosion to occur at another location so the structure is not affected); and (4) ways to repair degraded concrete structures, such as filling cracks with epoxy or polyester materials, using chemical grouts to halt water seepage, and using inorganic and organic materials to replace spalled concrete materials.

Results of this program are summarized in the ORNL–Johns Hopkins University report mentioned earlier. Although this activity addressed concrete structures in nuclear power plants, our program results could be applied to buildings, bridges, roadways, and other infrastructure-related facilities. We know enough to provide sound advice on managing aging facilities, even though not all the answers are cast in concrete.

The project was sponsored by the NRC’s Office of Nuclear Regulatory Research, Division of Engineering Technology.

ORNL Leads Study on Using Reactors To Burn Plutonium

Now that the arms race is over, the United States hopes that Russia will join a new race: ridding the world of excess weapons-grade plutonium as quickly as possible. This potentially hazardous material, along with highly enriched uranium in surplus nuclear weapons, is a legacy of the end of the Cold War between the United States and the former Soviet Union. To give the race a jump start, President Clinton announced his latest nuclear weapons nonproliferation strategy, whose formulation was influenced by an ORNL-led study.

On January 14, 1997, the Clinton Administration announced a two-pronged strategy to ensure that plutonium from dismantled U.S. and Russian nuclear weapons will never again be used in weapons production. A $2.3 billion, 20- to 30-year program was proposed for putting 50 metric tons of U.S. surplus plutonium out of harm’s way. It was decided to permanently store some of the bomb-grade material by immobilizing it in glass or ceramic logs and mixing it with highly radioactive waste for storage in canisters in a U.S. repository. The remainder of the plutonium would be used as mixed-oxide (MOX) fuel in existing electricity-generating commercial reactors, which ordinarily are fueled with slightly enriched uranium.

In January 1994, President Clinton and Russian President Boris Yeltsin asked experts to jointly study options for the long-term disposition of fissile materials, particularly plutonium, taking into account the issues of nonproliferation, environmental protection, safety, and technical and economic factors. The Department of Energy’s (DOE’s) Fissile Materials Disposition Program was established soon after to implement the Presidents’ directive. ORNL then became DOE’s lead laboratory for characterization, assessment, and development of reactor-based plutonium disposition options.

In collaboration with utilities, other national laboratories, and the Canadian government, ORNL’s Fissile Materials Disposition Program, managed by Sherrell Greene, led a 3-year study to identify and evaluate U.S. reactor options for plutonium disposition. The study, recently documented in a series of ORNL Reactor Alternative Summary Reports, examined the challenges and consequences of producing and burning MOX fuels. This study provided much of the scientific basis for President Clinton’s recent decision to burn some of the plutonium as MOX fuel in American commercial light-water reactors.

The plan is to burn MOX fuel in 3 to 6 reactors for 10 to 15 years to dispose of surplus plutonium as quickly as possible. The plutonium would be used only once; the spent fuel from the reactor would not be reprocessed to extract residual plutonium and maximize the fuel’s energy value, as is done in Europe. Instead, the spent fuel (which is unusable as weapons material) would go into long-term storage in a U.S. geological repository.

Partly because plutonium and uranium absorb neutrons differently, the ORNL-led study found that relatively minor reactor modifications would be required, along with some modifications in fuel-handling and spent-fuel systems, for selected reactors to burn MOX fuel. Also, safeguard and security measures must be upgraded at these reactors to ensure that plutonium-containing fuel is kept out of the hands of people bent on making an atomic weapon.

An ORNL-led study provided a scientific basis for President Clinton’s decision to burn plutonium in U.S. power reactors.

The technologies needed to fabricate MOX fuel from dismantled nuclear weapons were described in the ORNL-led study. The heart of each nuclear warhead is a plutonium metal pit, a sphere smaller than a bowling ball. In one facility, each pit will be converted chemically to a mixed oxide, using a process being developed at DOE’s Los Alamos National Laboratory: First, hydrogen will be added to produce plutonium hydride; the hydrogen will then be driven off, leaving plutonium metal, which will be oxidized to a fine plutonium oxide powder. In a second facility, depleted uranium will be added to the powder to make mixed-oxide fuel elements.

The ORNL-led study concluded that new technology must be developed for the efficient conversion of plutonium pits to powder. Estimates indicate the facilities for fabricating MOX fuel could be ready in 10 years.

On the international front, the Joint Russian-U.S. Plutonium Disposition Options Study was recently issued, thanks partly to ORNL leadership. ORNL staff members Bruce Bevard and David Moses wrote major sections of this report. ORNL’s Jim Stiegler sat on the 9-member U.S./Russian Steering Committee on Plutonium Disposition, which provided oversight for this report. Stiegler has since been replaced by Gordon Michaels, director of ORNL’s Nuclear Technology Programs. Co-chairs of the committee are Nikolai Egorov, deputy minister of Minatom (equivalent to the nuclear part of DOE), and Bruce MacDonald of the White House Office of Science and Technology Policy.

In addition, ORNL has begun managing the implementation of two multiyear, multinational programs to demonstrate MOX use in three different reactor types: American light-water reactors, Canadian deuterium-uranium heavy-water reactors, and Russian VVER pressurized water-cooled reactors. The Russian government, which is not enthusiastic about the immobilization option, plans to dispose of at least 50 metric tons of surplus plutonium by burning it as MOX fuel in electricity-generating reactors. Thanks to the moxie of ORNL and other U.S. participants, the race to disarm should forge ahead.

The ORNL research was sponsored by DOE, Office of Fissile Materials Disposition.

ORNL and Industry Seek Commercially Acceptable Wire

Mariappan Paranthaman recently used electron-beam evaporation to make a superconducting wire 7 centimeters long. He shows the nickel substrate on which buffer layers are deposited by e-beam evaporation. Photograph by Tom Cerniglio.

ORNL has developed industrially appealing processes for producing high-temperature superconducting tapes. Now, we are working with industry to determine how to produce wire of industrial strength—and an industrial length.

ORNL is working with industry to determine how best to fabricate longer, stronger high-temperature superconducting wire.

On April 10, 1996, at a scientific meeting in San Francisco, ORNL researchers rolled out a short superconducting tape that, when chilled to 77 K by liquid nitrogen, can carry large amounts of current without energy-wasting resistive losses. They announced the development of a process for making the backbone, or substrate, of superconducting wire using a pair of rollers, heat, and thin ceramic films. The rolling-assisted biaxial textured substrates (RABiTS™) process generated excitement among researchers in the electrical industry. It represented a leap forward in the race to develop fabricable superconducting wire. The substrate can be made with equipment like that used to produce labels on soft drink cans, videotapes, and liners inside snack food bags.

The ORNL process is faster and probably cheaper than competitive processes. It conditions the substrate, or template, upon which a high-temperature superconducting film of yttrium-barium-copper oxide is grown. The substrate is made of textured nickel covered with buffer layers of cerium oxide and yttria-stabilized zirconia that are 350 times thinner than a sheet of paper. These oxide layers are needed as a chemical barrier to prevent the substrate’s nickel atoms from dislodging the superconductor’s copper atoms. But the layers must be put down uniformly so that their crystalline structure closely mimics that of the nickel tape. The buffered substrate aligns crystalline grains in the superconducting film as it grows. Such a superconducting “sandwich” allows efficient flow of electricity in the presence of high magnetic fields if it is chilled by liquid nitrogen (which costs only 2% of the price of liquid helium, the coolant for low-temperature superconductors).

The goal is to develop industrially appealing processes, ranging from electron-beam evaporation to a chemical coating process, to produce these tapes, which on a laboratory scale are centimeters long, in kilometer lengths. Such wires could be used in transmission cables, transformers, steel and paper mill motors, generators, and magnet-containing devices such as medical diagnostic machines.

A nonexclusive license agreement has been signed with Midwest Superconductivity to use the technology in research and development, with an option for wire and tape commercialization rights. Co-developer Westinghouse is producing the roll-textured nickel base metal. Recently, 3M Company and Southwire Company have announced plans to further develop RABiTS™-based wires for transmission cable. Our collaborations with industry should shorten the time it takes to make a durable superconducting wire that’s acceptably long.

The RABiTS™ team received a NOVA Award for Teamwork from Lockheed Martin Corporation and captured top honors in the technical achievement category of the corporation’s Oak Ridge awards competition. The project was conducted under joint sponsorship by DOE’s Office of Energy Efficiency and Renewable Energy and DOE’s Office of Energy Research.

Highly Efficient Refrigerator Design

Ed Vineyard checks the instrumentation in the highly efficient refrigerator model. Photograph by Tom Cerniglio.

The “fridge of the future” will use half as much energy as today’s refrigerator-freezers, and it will change the way we chill our foods, easing fears about chewing holes in the protective ozone layer and warming the globe. At ORNL a popular refrigerator model has already been put on a strict energy diet, exceeding one turn-of-the-century goal. The refrigerator-freezer was altered to reduce its energy use by 50%, from 2 kilowatt-hours per day (kWh/d) to 1 kWh/d. This reduction in energy use exceeds the limit called for in a new rule announced by the federal government on April 24, 1997. The rule requires refrigerators sold in 2001 to use 30% less electricity than those on the market today.

A popular refrigerator model has been altered at ORNL to cut its energy use in half.

This laboratory prototype is the product of work by ORNL researchers and refrigerator manufacturer engineers in a cooperative research and development agreement (CRADA). The CRADA between ORNL and the Appliance Research Consortium (a subsidiary of the Association of Home Appliance Manufacturers) achieved a dramatic energy reduction in a standard 20-cubic-foot refrigerator with a freezer on top. More than 60% of the refrigerators sold in the United States are “top-mounted” refrigerators like the lab model.

Some 125 million refrigerators in the United States consume approximately 1.5% of the energy used in the country. If the energy used in units currently in homes were reduced to 1 kWh/d, refrigerators would consume only about one-half of one percent of the nation’s energy, saving almost $6.5 billion annually. The accompanying decrease in demand for electricity from coal-fired power plants would also significantly reduce greenhouse gas emissions.
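The savings figure above can be checked with a back-of-envelope calculation. The sketch below assumes, beyond what the article states, that the existing fleet averages about 3 kWh per day per unit and that residential electricity costs about $0.07/kWh:

```python
# Rough check of the national refrigerator-savings arithmetic.
# Assumed (not from the article): 3 kWh/day fleet average, $0.07/kWh.

FRIDGES = 125e6             # refrigerators in U.S. homes (from the article)
CURRENT_KWH_PER_DAY = 3.0   # assumed average use of today's units
TARGET_KWH_PER_DAY = 1.0    # ORNL prototype target (from the article)
PRICE_PER_KWH = 0.07        # assumed average residential rate, $/kWh

saved_kwh_per_year = FRIDGES * (CURRENT_KWH_PER_DAY - TARGET_KWH_PER_DAY) * 365
saved_dollars = saved_kwh_per_year * PRICE_PER_KWH

print(f"Energy saved: {saved_kwh_per_year / 1e9:.0f} billion kWh/year")
print(f"Dollar savings: ${saved_dollars / 1e9:.1f} billion/year")
```

With these assumed inputs the total comes out near the article’s figure of almost $6.5 billion a year; a different electricity price or fleet average would shift it proportionally.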

The CRADA’s “technically feasible model,” which exceeds the goal of government standards scheduled to go into effect in 2001, is more efficient because of five changes. Vacuum insulation panels were used around the freezer section to reduce heat gain. Polyurethane foam was added to the doors, doubling their thickness. Also, a high-efficiency compressor was installed. Three motors that used to operate on alternating current to drive two fans and the compressor were replaced with three direct-current, electrically commutated motors, which use less electricity and release less waste heat. Finally, the automatic defrost control, which daily removes ice from refrigerant coils to improve their heat transfer, was replaced with adaptive defrost, so that defrosting occurs only when needed—perhaps every other day in summer and once a week in winter, depending on the humidity and number of times the refrigerator door is opened.

ORNL and its CRADA partners—Amana, General Electric, Maytag, Sub-Zero, Sanyo, W. C. Wood, and Whirlpool—also developed a second model that is more cost-effective than the initial prototype. It has all the extra features except for vacuum insulation around the freezer and increased evaporator area. This second model would result in a savings of approximately $4.5 billion annually.

Vacuum insulation (such as the powder-evacuated panels being studied at ORNL) may be revisited in an ongoing CRADA between ORNL and Frigidaire. The CRADA’s goal is a production model, leading to a highly efficient refrigerator on the market in three to four years; such an appliance should pay for itself in three years through reductions in electricity bills. The problem is that the hydrochlorofluorocarbon HCFC-141b that is now used as the blowing agent to insulate refrigerators will be banned in 2003. The replacement insulation will likely be less energy-efficient, making vacuum insulation look more cost-effective.

Both the technically feasible and cost-effective models use the ozone-friendly refrigerant R-134a, which has been designated to replace CFC-containing refrigerants in new refrigerators because of its lack of chlorine.

Because it is a greenhouse gas and thus contributes to global warming, R-134a may also have to be replaced. One proposed replacement is a hydrocarbon, but hydrocarbons are flammable, raising the risk of house fires. Engineering problems must be solved to avoid safety hazards in the home and adverse effects on the global climate. The idea is to keep our food cold without making the globe too warm.

The research was funded by DOE’s Office of Building Technologies, State and Community Programs, and by the Appliance Research Consortium of the Association of Home Appliance Manufacturers.

ORNL’s Soft-Switching Inverters Will Help Save Energy

This compact, reliable, ORNL-developed soft-switching inverter is a key technology for efficient conversion of electrical power from one form to another (e.g., direct current to alternating current).

Electrical power is often delivered or stored in a form different from what’s needed for a particular use. So, to get power to the people, a device called an inverter is needed to convert incoming power from, say, direct current (dc) to alternating current (ac) at variable frequencies and voltages. Unfortunately, such conversions waste electrical energy and generate heat.

ORNL is developing highly efficient, compact, and reliable soft-switching inverters to convert dc to ac.

To address this problem, a new type of power inverter is being devised—one that may be a building block for technologies ranging from electric buses, to more efficient heat pumps, to safer brain surgery techniques. Called a soft-switching inverter (SSI), this device is more efficient, more compact, and more reliable than conventional inverters. Also, it produces little electromagnetic interference—the stray emissions that disrupt the proper operation of nearby electronic devices.

The first SSI developed at ORNL was the resonant snubber inverter (RSI), which has been patented. Now, our researchers are developing the next generation of SSIs, which are expected to have a wide array of industrial and military uses.

A conventional hard-switching inverter uses six semiconductor transistors (switches) that open and close up to 20,000 times per second to create an alternating current. Every time a switch is turned on or off with full current or voltage running through it, high, instantaneous power losses are generated. These power spikes wear out switches and equipment and produce waste heat. The RSI adds small auxiliary components that temporarily divert the power from the main switches so that they are turned on and off without power loss.
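The loss mechanism described above can be sketched with the textbook hard-switching approximation, in which each transition dissipates energy proportional to the voltage-current overlap. All component values below are assumed for illustration; only the six-switch count and the up-to-20,000-per-second switching rate come from the article:

```python
# Illustrative hard-switching loss estimate using the standard
# approximation P_sw = 0.5 * V * I * (t_rise + t_fall) * f_sw per switch.
# V_BUS, I_LOAD, T_RISE, and T_FALL are assumed example values.

V_BUS = 300.0       # dc bus voltage, volts (assumed)
I_LOAD = 50.0       # switched current, amperes (assumed)
T_RISE = 0.5e-6     # turn-on transition time, seconds (assumed)
T_FALL = 0.5e-6     # turn-off transition time, seconds (assumed)
F_SW = 20e3         # switching frequency (article: up to 20,000 times/second)
N_SWITCHES = 6      # a three-phase inverter uses six switches (from the article)

loss_per_switch = 0.5 * V_BUS * I_LOAD * (T_RISE + T_FALL) * F_SW
total_loss = N_SWITCHES * loss_per_switch
print(f"Hard-switching loss: {total_loss:.0f} W")
```

Diverting current away from the main switches during each transition, as the RSI’s auxiliary components do, drives the voltage-current overlap, and hence this loss term, toward zero.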

Because the RSI operates more efficiently than a conventional inverter (98% vs 94% efficiency at high power), it produces less waste heat. It loses only 2% of the energy at high speeds and 20% at low speeds, compared with conventional inverter losses of 6% at high speeds and 30 to 40% at low speeds. Lower heat losses decrease the possibility of equipment degradation and failure and allow a more compact design. SSIs use lighter, cheaper heat sinks to absorb the operating heat, and device components can be safely placed closer together. Compared with the newest conventional inverter, the SSI weighs only about one-third as much and occupies one-tenth the volume.

Because the SSI is smaller and lighter, it may be used in electric cars or buses. Its greatest efficiency gains are at the mid-power range at which it would operate in an automobile.

An SSI-equipped heat pump would run continually at varying fan speeds instead of cycling on and off. It would use less power because a heat pump draws about five times as much power when cycling on as it consumes during normal operation. Also, it would run more quietly while offering improved comfort.
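A rough sketch of the cycling argument, assuming (beyond the article’s five-times start-up figure) a 3-kW unit, a 10-second surge, six starts per hour, and a 50% run fraction, all hypothetical numbers:

```python
# Hypothetical comparison of a cycling heat pump vs. an SSI-driven
# continuous one. Only SURGE_FACTOR comes from the article; every other
# number is an assumed example value.

NORMAL_KW = 3.0           # assumed normal running draw
SURGE_FACTOR = 5          # start-up draw vs. normal (from the article)
SURGE_SECONDS = 10.0      # assumed duration of each start-up surge
STARTS_PER_HOUR = 6       # assumed cycling rate
RUN_FRACTION = 0.5        # assumed fraction of each hour the unit runs

# Cycling unit: normal running energy plus the extra energy of each surge.
run_kwh = NORMAL_KW * RUN_FRACTION
surge_extra_kwh = (SURGE_FACTOR - 1) * NORMAL_KW * (SURGE_SECONDS / 3600) * STARTS_PER_HOUR
cycling_kwh = run_kwh + surge_extra_kwh

# Continuous unit: runs steadily at reduced speed for the same average
# output, with no start-up surges.
continuous_kwh = NORMAL_KW * RUN_FRACTION

print(f"Cycling: {cycling_kwh:.2f} kWh/h, continuous: {continuous_kwh:.2f} kWh/h")
```

This sketch counts only the surge energy; a variable-speed unit’s better part-load efficiency would add further savings on top of it.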

ORNL has a cooperative research and development agreement with Stereotaxis involving use of the RSI in a medical procedure that employs superconducting magnets to route tiny magnetic devices through the brain. The technology could be used to inject medicines, perform biopsies, thermally destroy targeted tissue, or deliver radioisotopes to a tumor, minimizing the amount of brain tissue involved in surgery or radiation treatment.

SSIs may also be used in industrial machinery such as pumps, compressors, and conveyor belts. Such inverters should improve the reliability of adjustable speed drives, increasing the efficiency of much industrial equipment.

ORNL researchers are developing SSIs in support of the DOE/U.S. Navy Power Electronics Building Blocks Program. The program’s goal is to do for power electronics what integrated circuits did for computers: revolutionize the technology by making it possible to build compact, cool-running inverters. Such technology will empower people to use energy more efficiently.

The ORNL development of soft-switching inverters was initiated under ORNL’s Laboratory Directed Research and Development Program. It is now being supported by DOE’s Office of Energy Efficiency and Renewable Energy, Office of Transportation Technologies, as part of DOE’s contribution to the DOE/U.S. Navy Power Electronics Building Blocks Program.

U.S. Vulnerable To Another Oil Price Shock

David Greene, who led the ORNL oil study, surveys the gasoline storage tanks near Middlebrook Pike in Knoxville. The tanks are filled with fuel from the Colonial pipeline. Oil is refined into gasoline near the Gulf of Mexico and piped north by the pipeline. Photograph by Tom Cerniglio.

The oil price shocks of the 1970s and ’80s sparked worldwide inflation and caused many nations to slip into recession. Could such a disruptive drop in oil supplies and rapid rise in prices happen again? Yes, according to an ORNL study. The United States is increasingly vulnerable to another oil price shock. Even the U.S. Strategic Petroleum Reserve wouldn’t provide much of a buffer unless the disruption were very small and short.

The Organization of Petroleum Exporting Countries (OPEC), which represents many Persian Gulf nations, is poised to control more than half the world petroleum market again within 10 years. OPEC nations, which have most of the world’s proven oil reserves, dominate world oil trade. Because they are drawing down their reserves half as fast as non-OPEC states, their share of oil resources is actually growing. In the future, various situations could disrupt oil supplies from the Persian Gulf region: a revolution in a major oil-producing country, a Mideast war, terrorist activities, a political boycott, or the deliberate manipulation of supplies.

ORNL found that a price shock in the oil market now would have a serious economic impact on the United States.

ORNL researchers examined changes in oil market fundamentals since the 1970s to see if a comparable price shock now would have a serious economic impact. They found that it would. Although some factors had improved (the rate of growth of oil demand is slower, for example), others were unchanged, and still others had worsened (e.g., the United States now imports more oil than ever before). The ORNL team simulated the economic effects of a 2-year price shock in 2005 through 2006, assuming a 10% OPEC pumping slowdown in 2005 and an additional 7% cut the next year. OPEC output then would increase at 0.5% per year through 2010. The reductions are roughly the same as those during 1979 through 1980, when OPEC repeatedly cut production to maintain high oil prices.
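The assumed OPEC production path can be sketched from the figures above. This is an illustrative toy calculation only, not the ORNL simulation model itself; the baseline level and function name are assumptions for illustration.

```python
# Toy sketch of the OPEC output path assumed in the ORNL simulation:
# a 10% pumping slowdown in 2005, a further 7% cut in 2006, then
# growth of 0.5% per year through 2010 (relative to 2004 output = 1.0).

def opec_output(base=1.0):
    """Return relative OPEC output for 2004-2010 under the assumed shock."""
    path = {2004: base}
    path[2005] = path[2004] * 0.90           # 10% slowdown in 2005
    path[2006] = path[2005] * 0.93           # additional 7% cut in 2006
    for year in range(2007, 2011):
        path[year] = path[year - 1] * 1.005  # 0.5% annual growth thereafter
    return path

for year, level in opec_output().items():
    print(year, round(level, 3))
```

Even by 2010, output in this sketch remains well below its 2004 level, which is why the simulated price stabilizes above its pre-shock value.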

The simulation showed oil prices would jump from about $21 per barrel in 2004 to $54 per barrel in 2005, fall to $46 in 2006, and stabilize at $28 to $30 in the next few years. In contrast, if markets remain stable, the Energy Information Administration expects oil to hold steady at around $20 to $25 per barrel through 2015. In the ORNL simulation, the cost of the price shock to the United States economy, and the windfall to OPEC, would amount to about $500 billion through 2010.

The ORNL analysis also shows oil dependency has changed little since the 1970s. Almost half (46%) of the oil consumed annually in the United States is imported, equal to the record highs of the 1970s. Although many power plants have switched to other fuels, the transportation sector, which accounts for two-thirds of U.S. oil use, remains 95% dependent on petroleum for energy. Increased travel has outstripped fuel efficiency gains in automobiles.

What can be done to counter the threat of oil price shocks? The best defense appears to be increasing the price elasticity of oil (the ability of economies to respond effectively to changing oil prices). If demand is highly elastic, buyers can respond to a price hike by buying less until the price drops. If supplies are more elastic, OPEC production cuts can be compensated for by increasing supplies from the rest of the world.
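The role of demand elasticity can be made concrete with a standard constant-elasticity demand curve. The elasticity values and quantities below are illustrative assumptions, not figures from the ORNL study.

```python
# Illustrative sketch of price elasticity of demand, using the standard
# constant-elasticity form Q = k * P**elasticity: the percent change in
# quantity demanded is governed by the elasticity of the percent change
# in price. Elasticities here are arbitrary illustrative values.

def demand_after_price_rise(base_demand, price_ratio, elasticity):
    """New demand after price is multiplied by price_ratio."""
    return base_demand * price_ratio ** elasticity

# Suppose a supply disruption doubles the price (price_ratio = 2.0):
inelastic = demand_after_price_rise(100.0, 2.0, -0.1)  # demand barely falls
elastic = demand_after_price_rise(100.0, 2.0, -0.9)    # demand falls sharply
print(round(inelastic, 1), round(elastic, 1))
```

With highly elastic demand, buyers cut consumption sharply when prices jump, which undercuts the profitability of a production cut and pushes prices back down.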

A key to increasing price elasticity is developing new technologies. Needed are vehicles that use fuel more efficiently to make a barrel of oil go farther, and alternative fuels to substitute for oil when prices rise. Needed also are easier and cheaper methods of finding and producing oil, such as three-dimensional seismic imaging and advanced drilling technologies. These advances would both reduce the demand for oil and enable the rest of the world to supply more fuel faster and more cheaply if OPEC supplies decline. With highly elastic demand, non-OPEC nations should be able to respond effectively to future shocks and reduce the economic costs by half or more.

The research was sponsored by DOE’s Office of Policy.

ORNL’s Noisy Chaos Approach May Improve Engine Efficiency

Stuart Daw conducts an experiment using an internal combustion engine to produce data to improve ORNL’s “noisy chaos” computer models of engines. Photograph by Tom Cerniglio.

Lay a drinking straw on the kitchen table, flick its middle gently with your finger, and watch it roll. Now flick it twice as hard and note that it rolls twice as far. This is what scientists and engineers call a linear system. Now, put the straw on its end so it balances on the table. Blow on it gently—a little change. Watch the straw topple to the table—a big change. The balanced straw is an example of a nonlinear system, something that’s very sensitive to small perturbations.

Now, turn on the kitchen’s gas stove. For a second, nothing happens and then, when the gas concentration reaches a critical level, poof!—up pops a flame. Like the balanced straw, the gas flame is nonlinear. Unlike the straw, however, the gas flame continues to fluctuate between unstable states, a process called flicker. In a sense, flicker is caused by the flame continually going out and relighting itself, over and over. A flickering flame is an example of chaos, the seemingly random fluctuation of nonlinear systems arising from their sensitivity to small variations in their past.

ORNL’s noisy chaos theory considers external and internal “noises” that may affect the behavior of chaotic systems such as flames and engines.

One of the most important recent scientific discoveries has been that even very simple nonlinear processes can exhibit chaos. Chaos theory (or more properly, deterministic chaos theory) is concerned with how seemingly random processes can be explained using simple mathematical models that include nonlinear effects. Scientists have demonstrated that such models come surprisingly close to reproducing complex behavior seen every day, even when there is nothing explicitly random in the models. The apparent randomness comes from the nonlinear sensitivity feeding back into itself from moment to moment, causing a never-ending repetition of slightly different patterns. In the flame, flicker results from a repeating cycle: burning depletes the fuel, the flame shrinks, fuel builds up again, the flame suddenly flares up, and so on.
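The classic textbook demonstration of this discovery is the logistic map, a one-line nonlinear rule that produces chaos. It is a standard example, not a model from the ORNL work; the parameter values below are conventional choices.

```python
# The logistic map, x -> r*x*(1-x), is the textbook example of a very
# simple nonlinear rule exhibiting deterministic chaos: nothing random
# appears in the rule, yet nearby starting points diverge rapidly.

def logistic_orbit(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits starting a millionth apart soon differ by order one,
# even though the rule itself is perfectly deterministic:
a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)
print(max(abs(x - y) for x, y in zip(a, b)))
```

The same run always reproduces the same orbit exactly; the "randomness" is entirely a product of the nonlinear sensitivity the text describes.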

ORNL researchers have added a new twist to chaos theory. We also take into account external and internal “noises” that perturb the nonlinear processes. Any real system is never completely isolated from its surroundings; likewise, other processes are always going on in the system at scales below our level of observation. Such noise, whether it comes from outside or inside, disturbs the ideal chaotic patterns that would otherwise exist, causing them to become “fuzzy” but not eliminating them completely. In the case of a flame, noise comes from the hiss of the gas jet and air currents in the room. To be more accurate, models of the flame should include how it responds both to its past history and to these noisy effects. We think this “noisy chaos” approach more accurately reflects how real systems behave.
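The noisy-chaos idea can be sketched by perturbing the same simple nonlinear rule with a small random kick each step. This is a loose illustration of the concept, not the ORNL engine model; the noise amplitude, clamping, and seed are arbitrary assumptions.

```python
import random

# Sketch of "noisy chaos": the deterministic logistic map, disturbed
# each step by small Gaussian noise. The ideal chaotic pattern is not
# eliminated, but it becomes "fuzzy" -- the record keeps the same broad
# structure while differing in detail from the noise-free version.

def noisy_logistic_orbit(x0, r=3.9, steps=200, noise=0.001, seed=1):
    rng = random.Random(seed)
    x = x0
    xs = []
    for _ in range(steps):
        x = r * x * (1.0 - x) + rng.gauss(0.0, noise)
        x = min(max(x, 0.0), 1.0)  # keep the state inside [0, 1]
        xs.append(x)
    return xs

clean = noisy_logistic_orbit(0.4, noise=0.0)    # ideal chaotic record
noisy = noisy_logistic_orbit(0.4, noise=0.001)  # same rule, perturbed
```

Comparing the two records shows why modeling a real system requires accounting for both its deterministic history and the noise riding on top of it.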

We are applying our noisy chaos concept to better understand the behavior of internal combustion engines in today’s cars and trucks. Like the gas flame, the combustion in your car engine fluctuates from moment to moment because of its nonlinearity. Such fluctuations are a problem because they increase pollution and reduce fuel efficiency. If combustion were the only process going on in the engine, it would probably be relatively easy to redesign the engine to reduce or eliminate the fluctuations. The problem is not so simple, however, because of the vehicle’s other noisy processes, such as flexing belts, rattling gears, and pumping pistons. All these other processes perturb the combustion, making its chaotic patterns fuzzy and difficult to observe. If we can discern the underlying patterns in spite of their fuzziness, it should still be possible to improve engine performance.

Improving the performance of internal combustion engines has both strategic importance for U.S. energy consumption and potential commercial applications. Thus, through a cooperative research and development agreement (CRADA), ORNL and Ford Motor Company researchers are measuring the noisy chaotic patterns in internal combustion engines and developing improved computer models for explaining them. We believe the noisy chaos approach could be an important key to designing cars that use less fuel and emit less pollution. We also believe that the noisy chaos concept will have much broader applications than just automotive engines. For example, it may be applicable to utility boilers, precision machining, anti-skid brakes, and cardiac pacemakers.

Making nonlinear energy processes more efficient and less polluting is important but not easy. Results so far indicate that ORNL’s noisy chaos theory could be a sound approach to this difficult challenge.

The research was supported by DOE’s Office of Energy Research and Ford Motor Company.

Next article