Scientific Visualization at ORNL
by Ross Toedte and Dianne Wooten


Ross Toedte points out a detail in the visualization of a car crash simulated by ORNL researchers. Photograph by Tom Cerniglio.

Interpreting the world around us as visual images is an activity humans have engaged in throughout history; even our cave-wall pictures tell a story. Visualization of the physical world is one manifestation of this activity. The ultimate goal of science is insight into physical phenomena, and visualization helps tell that story: it aids in verifying expected results and highlights the unexpected. As Richard Hamming, senior lecturer at the Naval Postgraduate School, put it so succinctly, “The purpose of computing is insight, not numbers.” Moreover, sharing information visually is one of the most effective ways people communicate. The role of the Center for Computational Sciences Visualization Laboratory at ORNL is to enhance insight through visualization and virtual (immersive) techniques, supported by state-of-the-art software and hardware.


In 1990, the Visualization Lab at ORNL (see the Review, Vol. 23, No. 2, 1990), dubbed the VizLab, was formed to help ORNL researchers develop scientific visualizations of their data and results of their calculations. The VizLab then was part of the Graphics Development Section of Martin Marietta Energy Systems, Inc. Our primary mission was to acquire and demonstrate visualization technology, educate ORNL users, and provide proof-of-principle visualizations for researchers to justify the use of such technology in their own work. Visualization was promoted as a tool for science.

In 1994, ORNL’s Center for Computational Sciences (CCS), which is part of what is now Lockheed Martin Energy Research Corporation, absorbed the Visualization Lab because of the unique fit between its ongoing computational science interests and the VizLab’s capabilities. With the mission of ORNL in mind, the VizLab now focuses on being a key participant in the scientific discovery process. By providing state-of-the-art visualization hardware, software, and techniques, the VizLab partners with scientists to augment their investigations. This focus has helped raise awareness across ORNL of the importance of visualization. In addition, researchers from most ORNL divisions have taken advantage of the VizLab’s resources to produce significant results.

Through CCS we have had the opportunity to work on numerous ORNL projects in areas such as materials science, groundwater remediation, car crash safety, and nanotechnology. One such project was with Malcolm Stocks of the Metals and Ceramics Division, who conducted a study of the magnetic properties of alloys. The VizLab was able to visualize a lattice of copper and nickel atoms with vectors attached to each atom. Each vector represented the magnitude and direction of the atom’s magnetic moment. As these vectors changed over time, an animation was created that was useful in helping researchers understand the unique magnetic behavior of the alloy.

Visualization’s Role in Scientific Discovery

The roots of computer-based visualization and virtual (immersive) reality can be traced to such pioneers as Ivan Sutherland in the 1960s. Sutherland, viewed by many as the father of computer graphics, cofounded Evans and Sutherland Computer Corporation, and his early research contributed to the development of military flight simulators. Since that time, visualization has had an ever more pronounced effect on scientific discovery at research institutions around the world. (For examples of discoveries in which ORNL visualizations played a role, see Figs. 1, 2, and 3.) From its early use in simulating terrain navigation, visualization has affected the very scientific method it serves. Computer-assisted visualization has become commonplace and has been used to advantage in nearly every type of research; we now hardly consider how this silicon- and software-based technology has changed the process in which it is embedded. Examples of visualization can be readily found in microscopy, earth sciences, materials science, and every discipline in between. Today, the interpretation of visual results often determines whether an experiment is continued, terminated, or modified and restarted.


Fig. 1. Joe Carpinelli (Ph.D., May 1997), a physics student under the direction of University of Tennessee-ORNL Distinguished Scientist Ward Plummer, documented the first glimpse into how a charge density wave is formed at the surface of a crystal. Carpinelli explains that a temperature change can sometimes induce a crystal to lower its symmetry. “The electrons rearrange themselves to form something of a wave,” he says. He used VizLab resources to enhance a visualization of such phenomena to produce this image, which appeared on the cover of the May 1996 issue of Nature magazine.

Fig. 2. This visualization, dubbed the “blue tornado,” represents the results of calculations by visiting scientist Paul Miller concerning the vortex state that arises when a magnetic field penetrates a superconductor, creating normal, nonsuperconducting regions.



Fig. 3. This simulation, by G. Mahinthakumar of CCS, on the Intel Paragon shows the impact of heterogeneity on pump-and-treat remediation of contaminated groundwater.

This fundamental change in the experimental process has occurred only partly because of the availability of computing technology. Another important contributing factor has been the rapid decline in computer prices, which has encouraged researchers to use computers alongside, or even in place of, more traditional equipment in their labs. Recent reductions in federal research funding have further spurred interest in purchasing low-priced, graphics-capable computers for experimental laboratories.

Essential Tool for High-Performance Computing Environments

Visualization has been intimately linked to high-performance computing since the latter’s advent in the 1970s. Whether they employ large monolithic processors, vector processors, massively parallel processors, or combinations thereof, high-performance computing engines have one consistent trait—lots of data. A 1989 report funded by the National Science Foundation entitled Visualization in Scientific Computing states, “The human brain cannot interpret gigabytes of data each day, so much information goes to waste.” Since the publication of that report, and because of continued advances in computer performance, it is reasonable to replace the prefix “giga” with higher-order prefixes such as “tera” or even “peta.” Future advances along the computational power curve will undoubtedly result in even larger data volumes, necessitating increased reliance on new visualization and multisensory techniques for interpretation.

Visualization is critical to operating these computing environments in a way that is effective for researchers. It has a role not only in understanding the results of a computer simulation after it has concluded but also in viewing results as they are produced. If the results are not desirable, it may be necessary to change the simulation parameters while the simulation is still running. This ability to change parameters is called “computational steering.” The technique was first exhibited at the 1989 SIGGRAPH computer graphics conference in Boston, where simulations running on computers in Illinois were changed on the fly with input from a terminal on site at the conference.
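
The idea can be illustrated with a short, present-day sketch in Python. This is not the software demonstrated at SIGGRAPH; the file name, the steering parameter, and the toy diffusion calculation are all assumptions made purely for illustration. A running simulation periodically re-reads a parameter file so a researcher can adjust a value without restarting the run.

    # Hypothetical sketch of computational steering: the simulation re-reads
    # a parameter file each step so a researcher can change a value
    # (here, a diffusion coefficient) while the run is in progress.
    import json, pathlib
    import numpy as np

    PARAMS = pathlib.Path("steering_params.json")   # illustrative file name
    field = np.zeros(100)
    field[50] = 1.0                                  # initial spike to diffuse

    for step in range(1000):
        if PARAMS.exists():                          # pick up steering input, if any
            diffusion = json.loads(PARAMS.read_text()).get("diffusion", 0.1)
        else:
            diffusion = 0.1
        # one explicit diffusion step on a 1-D grid
        field[1:-1] += diffusion * (field[2:] - 2 * field[1:-1] + field[:-2])
        if step % 100 == 0:
            print(f"step {step}: diffusion={diffusion}, peak={field.max():.4f}")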

Elements of Visualization

When a scientist comes to us for help in presenting data, we first determine how the visualization is going to be used. The technique we recommend depends partly on whether the visualization is to be used as part of a presentation or as an analysis tool. A very high-quality static image or video animation is more costly to generate, but it may be better suited for a conference or a presentation seeking future funding for a project. A simple, inexpensive image or animation may be the best solution for a researcher who needs to analyze data from multiple computer simulations. We also ask a prospective user to identify, if possible, the most important phenomenon he or she wants to observe or show. We can then tune the visualization to highlight that aspect of the science.


Dianne Wooten analyzes visualizations of atomic-scale interactions in important materials. Photograph by Tom Cerniglio.

Consider a materials science problem of finding the region of highest electrical charge density within a three-dimensional (3D) volume. One way to simplify the process of looking into a volume is to break the problem down into a series of two-dimensional (2D) slices running along one of the axes of the volume. Each slice shows the charge density as a color contour map within that plane. The original volume can be represented by the outline of a rectangular box. Rendering a perspective view of this box with each slice correctly oriented inside it, one slice after another, produces a series of images that can be played back as an animation.
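
As a rough sketch of this slicing approach, the short Python example below cuts a synthetic "charge density" array into 2D slices along one axis and writes each slice as a color contour image; the density formula and file names are invented for illustration, not taken from an actual materials calculation.

    # Slice a synthetic 3-D charge-density volume into 2-D color contour maps,
    # saving each slice as an image frame that could later be animated.
    import numpy as np
    import matplotlib.pyplot as plt

    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    density = np.exp(-8 * ((x - 0.2)**2 + y**2 + (z + 0.3)**2))  # one charge blob

    for k in range(0, density.shape[2], 8):          # every 8th slice along z
        plt.figure(figsize=(4, 4))
        plt.contourf(density[:, :, k], levels=20, cmap="viridis")
        plt.title(f"charge density, slice z={k}")
        plt.savefig(f"slice_{k:03d}.png")            # frames for an animation
        plt.close()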

Another method of visualizing data in three dimensions is to generate isosurfaces. An isosurface is a surface generated by connecting all data in the volume that have the same value. In groundwater contamination modeling, a 3D volume of concentrations can be simplified by generating isosurfaces of different contaminant concentrations.
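
For readers who want to experiment, the hedged sketch below uses the marching cubes routine from the scikit-image library to extract such a surface from a synthetic concentration volume; the plume formula and the 0.5 threshold are assumptions for illustration, not data from an actual groundwater model.

    # Extract an isosurface from a synthetic contaminant-concentration volume
    # using marching cubes (scikit-image). The threshold value is illustrative.
    import numpy as np
    from skimage.measure import marching_cubes

    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    concentration = np.exp(-4 * (x**2 + 2 * y**2 + z**2))   # synthetic plume

    # Vertices and triangular faces of the surface where concentration == 0.5
    verts, faces, normals, values = marching_cubes(concentration, level=0.5)
    print(f"isosurface has {len(verts)} vertices and {len(faces)} triangles")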

Numerous visualization techniques are now available, ranging from simple x/y graphs to complex virtual reality environments. Images produced using 3D bar charts, contour plots, isosurfaces, molecular models, 2D cutting planes for 3D volumetric data, vector flow fields, and particle tracing can greatly enhance the understanding of science. We choose the best technique for presenting as much data as possible in an image without confusing or distorting the original data. In addition, annotation is important for clarifying what the scientist is trying to communicate. Just as a “picture is worth a thousand words,” so is a picture worth millions of numbers.

VizLab Infrastructure and Capabilities

Once you have decided what you want to visualize and how to convey your meaning to your intended audience, you have to consider how to achieve your end product. Visualization is an activity that is highly leveraged by hardware and software: the availability of suitable software tools and appropriate computing hardware are the keys to effective visualization. CCS has made significant investments in both areas in the quest to create a flexible visualization environment adaptable to the needs of ORNL and any of its affiliate organizations involved in computational science. On the software side, these investments can be categorized as modeling, rendering, and animation; the principal hardware components include graphics generation, networking, and storage equipment. These elements do not make up the entire visualization infrastructure of CCS, but they represent most of the major functional categories of an infrastructure that has helped facilitate collaborative computational science activities involving ORNL, other national laboratories, universities, and industrial centers.

Modeling for visualization is not to be confused with modeling a physical process. The goal of the former is to meaningfully represent data produced by the latter using any of a variety of software packages.

One way of classifying these packages is by the maximum spatial dimensionality of the data representations they can produce. The packages mentioned in this article are not the only solutions, but we have proven to ourselves that they are well suited to particular visualization needs. Interactive Data Language (IDL) is an excellent package for looking at 1D data (e.g., x/y charts) and 2D data (images); it also has an animation capability for revealing how particular data change over time. Advanced Visual Systems (AVS) is another package that can be used for these types of data, but it is rather lean in its annotation capability. Where AVS really shines is in its flexibility in combining different visual cues into a complex but cohesive visualization application that can handle time-transient data. On the high end, we use the Wavefront Advanced Visualizer package to import complex 3D models (e.g., car bodies or manufactured parts) or to build from scratch models composed of large numbers of simple geometric primitives such as triangles, polygons, and hexahedra.

Rendering is the process of creating a realistic view of a 3D scene for projection onto a 2D computer screen. To do this effectively, all factors that affect a particular view of the world should be considered, including surface smoothness, light intensities, perspective, and reflectance. AVS has a rudimentary rendering capability, but it ignores some ingredients of realistic scenes, such as shadows. Wavefront Advanced Visualizer has an excellent renderer for creating photo-realistic scenes. The tradeoff, of course, is that rendering a complex scene with this degree of realism takes longer—up to half a day on a mid-range workstation.
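
One of the ingredients named above, reflectance, can be sketched in a few lines of Python: Lambert's law says the diffuse brightness of a surface point depends on the angle between its surface normal and the direction to the light. The vectors and coefficients below are illustrative only and are not drawn from any of the packages mentioned.

    # Diffuse (Lambertian) shading of a single surface point: brightness falls
    # off with the angle between the surface normal and the light direction.
    import numpy as np

    def diffuse_shade(normal, light_dir, light_intensity=1.0, reflectance=0.8):
        """Return the diffuse brightness of a surface point."""
        n = normal / np.linalg.norm(normal)
        light = light_dir / np.linalg.norm(light_dir)
        return light_intensity * reflectance * max(np.dot(n, light), 0.0)

    # A point facing straight up, lit from a 45-degree angle
    print(diffuse_shade(np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])))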

Ensuring that the visualization end product meets but does not exceed the need is critical for precisely this reason. A rough-cut visualization of airflow over a wireframe model of an aircraft wing is quite sufficient for on-screen inspection by a team of researchers working in close physical proximity. Save the half-day renderings for audiences at conferences and meetings with program sponsors. At these venues, the important technical points have been identified in advance, possibly through visualization. The degree of visualization “polish” needed for these purposes typically goes beyond that needed during scientific exploration.

Animation uses physical time as an axis along which to examine a particular data parameter. Animation of a time-transient 3D problem is, for this reason, considered to be four-dimensional visualization. Often the animation parameter is the time step or iteration of the simulation that produced the data, but any data parameter can be suitable. For example, geologic core sample data could be animated as a function of the depth of the sample, possibly isolating a particular mineral in a slice of the core sample in a manner difficult to produce using other techniques, such as isosurfacing.
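
A hedged sketch of animating along a parameter other than time is shown below: matplotlib's FuncAnimation steps through depth in a synthetic core-sample array. The mineral-fraction data, depth range, and output file name are all invented for illustration.

    # Animate a stack of 2-D slices as a function of depth rather than time.
    # Saving as a GIF requires the Pillow package to be installed.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    depths = np.linspace(0, 100, 200)                    # metres, illustrative
    core = np.random.rand(200, 32, 32)                   # stand-in mineral fraction

    fig, ax = plt.subplots()
    image = ax.imshow(core[0], cmap="copper", vmin=0, vmax=1)

    def show_depth(frame):
        image.set_data(core[frame])
        ax.set_title(f"mineral fraction at depth {depths[frame]:.1f} m")
        return [image]

    anim = FuncAnimation(fig, show_depth, frames=len(depths), interval=50)
    anim.save("core_by_depth.gif", writer="pillow")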

We employ a number of animation tools in the VizLab for handling various animation file formats. Hardware support for graphics varies greatly from workstation to workstation. The graphics engines used in the CCS VizLab range from single-processor Macintoshes to high-performance multiprocessor Silicon Graphics workstations. A number of our workstations have specialized chip sets for expediting graphics processing. For example, hardware z-buffering is used for fast rendering of 3D scenes: by subdividing, or “slicing,” a scene along its depth (z) axis and determining which slices each object intersects, hidden objects can be resolved more easily, simplifying the scene before rendering.
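
The depth-comparison idea at the heart of z-buffering can also be sketched in software, bearing in mind that graphics hardware performs this work per pixel in specialized chips. In the toy Python example below, every pixel remembers the nearest depth drawn so far, so a closer square hides part of a farther one; the scene and values are made up for illustration.

    # Toy software z-buffer: each pixel keeps the nearest depth seen so far,
    # so fragments of closer objects overwrite fragments of farther ones.
    import numpy as np

    W, H = 8, 8
    depth = np.full((H, W), np.inf)        # z-buffer, initialized to "far"
    color = np.zeros((H, W), dtype=int)    # 0 = background

    def draw_square(x0, y0, size, z, c):
        for y in range(y0, y0 + size):
            for x in range(x0, x0 + size):
                if z < depth[y, x]:        # nearer than anything drawn so far?
                    depth[y, x] = z
                    color[y, x] = c

    draw_square(1, 1, 5, z=2.0, c=1)       # far square
    draw_square(3, 3, 4, z=1.0, c=2)       # near square hides part of the far one
    print(color)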

Texture mapping, a hallmark of virtual reality environments, is useful for keeping 3D scenes geometrically simple without sacrificing apparent realism. The objects are wrapped with digital images to give them a more varied and natural look. Often, these images can be produced procedurally to create a particular pattern. Say you want to model an orange: you can model it geometrically as a simple smooth sphere, but it becomes realistic only when you drape it with a digital image of a fine-grained, semi-random pattern of hues ranging from orange to brown, mimicking the minute ridges and valleys of the orange peel.
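
In the spirit of that orange-peel example, the hedged Python sketch below generates a semi-random orange-to-brown pattern directly from noise; a renderer could then wrap the resulting image onto a smooth sphere. The color values, smoothing step, and file name are illustrative assumptions.

    # Procedural "orange peel" texture: blend between orange and brown hues
    # using smoothed random noise, then save the result as an image.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    noise = rng.random((256, 256))
    # smooth the noise a little so the speckle looks like fine ridges
    noise = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3.0

    orange = np.array([1.0, 0.55, 0.0])
    brown = np.array([0.45, 0.25, 0.05])
    texture = orange + noise[..., None] * (brown - orange)   # blend per pixel

    plt.imsave("orange_peel.png", texture)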

The VizLab works as an integral piece of the CCS computational science infrastructure, which places demands on networking and storage to provide the speed and capacity needed for the problems being solved by CCS and its collaborators. Currently, we use fiberoptic-based networking on our higher-end machines to achieve data transfers an order of magnitude faster than Ethernet. We anticipate an investment in more advanced networking technologies to boost this by nearly another order of magnitude. A total of 60 gigabytes of local disk storage is used to handle a wide range of data requirements.

We have several novel hardware devices that also deserve mention. Stereo viewing is essential for depicting 3D scenes realistically. Humans perceive depth naturally because each eye views a scene from a slightly different position; the VizLab simulates this sense of depth in two ways. In 1995, we acquired a high-resolution desktop stereo display. Called the Fakespace PUSH, this device has two 1280- by 1024-pixel 1-inch cathode-ray-tube displays, which the user looks at through a flexible face piece that resembles a SCUBA mask. The optics are mounted on an articulated tripod stand, and the optics head moves independently from the tripod. The user moves virtually through a scene by pushing, pulling, and twisting the optics head while looking at the displays. Simpler but less natural stereo viewing is done with shutter glasses, which alternately block the view through one lens and then the other. The shuttering is synchronized to the graphics board of the computer via infrared light pulsing at a frequency of 60 Hz. On the positive side, these glasses are inexpensive and feel like regular glasses on your head. The downside is the mismatch between physical motion and virtual motion—that is, movement through the scene is driven by hand motion rather than head motion.

Tool for Public Communication

At its most basic, visualization is intended to facilitate understanding and communication, both of which are concepts intimately tied to education. It naturally follows that visualization is an invaluable tool for communication at all education levels. VizLab visualizations have been used in high school and university courses to support education in computer science, visualization, and various other science disciplines. In recent years, the Saturday Academy for Computing and Mathematics (SACAM) program at ORNL enabled high school students from across the state to experience the Laboratory on a personal level. One of the SACAM technical seminars focused on the use of high-performance computing and visualization to address real-world problems. A half-day course introduced the students to scientific problems and ways to model them, visualize the results of the computational models, and interpret the resulting visualizations to gain understanding. The benefits these young adults obtained from the course were numerous: a greater appreciation of math and science, a realization of the role interpersonal communication plays in a complex working environment, and exposure to new and exciting technologies.

Conveying to the general public the complex nature of research activities at laboratories like ORNL is very difficult. Many of these research activities potentially have direct effects on society, yet many people have trouble grasping their societal implications, and insufficient communication of the ramifications of this work breeds misunderstanding.

Visualization is a natural and effective way of remedying this problem. Take, for example, the VizLab’s role in ORNL’s computational nanotechnology effort. Nanotechnology involves research in the physical properties of objects miniaturized to a scale of thousands of atoms. Through visualization, it becomes obvious how different customized molecular structures can be useful in medicine, communications, and environmental applications. This enhanced communication can help create a sense of partnership between research organizations and the public.

Visualization and the Entertainment Industry

Toy Story, the full-length computer-animated movie produced by Pixar, recently made its way from the box office to the corner video store. With this event, more people are being exposed to the leading edge of Hollywood’s computer prowess, which has blossomed into a special effects industry that has helped rescue California’s ailing economy. Few people realize, however, that Hollywood is a leading source of computer graphics research. This research, in turn, migrates into commercial packages including those for visualization.


Dianne Wooten uses the VizLab’s video capabilities to show molecular dynamics results from a project with researchers Don Noid and Bobby Sumpter of the Chemical and Analytical Sciences Division. Photograph by Tom Cerniglio.

In the mid-1980s, computer graphics artists working for movie producer George Lucas at Lucasfilm were instrumental in the development of fast hidden-surface and shading algorithms. Particle flow models developed in the early 1990s were initially used to simulate natural waterfalls; these same algorithms have since found practical applications in areas such as automobile manufacturing and fluid-flow modeling. The RenderMan rendering language, developed by Pixar and now available in commercial off-the-shelf products, is responsible for the realism seen in individual frames of Toy Story and past Pixar productions. Industrial Light and Magic is the Lucasfilm subsidiary responsible for blockbuster special effects seen over the past 15 years in movies such as Star Wars, The Abyss, and The Mask. Star Wars took 3D modeling to a new level with its highly detailed depictions of spacecraft and space travel. The Abyss provided a new approach to 3D morphing; the “virtual” star of the show was an alien pseudopod, seemingly composed entirely of sculpted water.

Because of its innovative chip designs, Silicon Graphics has been a driving force in the computer hardware market since the early 1980s. These designs are the platform of choice for the entertainment companies previously mentioned. Whether chip design precedes or follows societal demand for new forms of entertainment is debatable and probably unresolvable. As for visualization and virtual reality, it takes little mental effort to imagine the impacts of these technologies on the scientific research community. Moreover, as you watch these effects unfold in the movies or on television, you can envision applications that have significant implications for society. Imagine, for example, a driving simulator so immersive that it could affect human behavior in addition to helping measure it. Current technology already makes such a system possible: the Iowa Driving Simulator at the University of Iowa uses state-of-the-art graphics along with audio and a motion platform to examine a range of engineering and safety issues relevant to designers of cars and highway systems.

Immersive Reality

Immersive reality, also known as virtual reality, can be viewed as an extension of visualization. A tool for understanding high-volume and complex technical data, it involves the visual sense and often requires high-performance graphics. However, immersive reality differs from visualization because it appeals to more than just the visual sense to enhance understanding. It delivers realistic stimuli to the senses—especially sight, hearing, and touch—and minimizes overall system latency.

Immersive reality was originally developed by the military as a way to give personnel experience in varied, high-stress environments without exposing them to potential harm. More recently, it has been used as a way to experience theoretical circumstances and proposed products prior to their manufacture.

Specialized input/output hardware, some unique to immersive reality, has evolved for working in simulated worlds. Head-mounted displays (HMDs) are used to deliver stereo views of a 3D scene to the user. Navigation, pointing, and selection are handled by various hand-oriented input devices. Both HMDs and hand input devices use embedded positioning devices known as trackers to communicate location to the computer. The computer generates views of the 3D scene that are appropriate for the tracked position and actions.

Sound is useful as both an input and output medium. Although several systems exist for receiving and interpreting voice commands for the computer, each has a practical vocabulary limitation of several hundred words. The systems must be programmed to understand the speech nuances of the user, such as accent, pronunciation, and tempo. Directional sound can be simulated by replicating a sound across several speakers and varying the amplitude between speakers.
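
The amplitude-varying approach to directional sound can be sketched briefly in Python. The constant-power panning law used here is one common choice, and the test tone, sample rate, and pan position are illustrative assumptions rather than details of any particular VizLab system.

    # Directional sound by amplitude panning: the same signal is replicated
    # to two channels with gains that depend on the desired direction.
    import numpy as np

    samplerate = 44100
    t = np.linspace(0, 1.0, samplerate, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t)            # 440 Hz test signal

    pan = 0.25                                    # 0 = full left, 1 = full right
    left_gain = np.cos(pan * np.pi / 2)           # constant-power panning law
    right_gain = np.sin(pan * np.pi / 2)

    stereo = np.stack([left_gain * tone, right_gain * tone], axis=1)
    print(stereo.shape, f"left gain {left_gain:.2f}, right gain {right_gain:.2f}")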

A recent development in immersive reality is the availability of haptic, or touch, devices. These devices simulate the sense of feel through various means. The most effective of these uses a motorized, articulated arm. By varying the resistance of one or more degrees of freedom of the arm, you can simulate properties such as gravity and kinetic energy.
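
As a rough, hypothetical sketch of the kind of force computation such an arm might apply, the Python function below combines a constant gravity term with a spring-damper contact force when the virtual tool penetrates a surface; the gains are invented and not taken from any actual device.

    # Toy haptic force model: gravity plus a spring-damper contact force
    # whenever the tool position drops below a virtual surface.
    def haptic_force(position, velocity, surface=0.0,
                     stiffness=200.0, damping=5.0, gravity=-1.0):
        """Return the 1-D force (newtons) to command on one joint."""
        force = gravity                              # constant gravity pull
        if position < surface:                       # tool has penetrated surface
            penetration = surface - position
            force += stiffness * penetration - damping * velocity
        return force

    print(haptic_force(position=-0.01, velocity=0.2))   # light contact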

Future Directions

Our foremost near-term goal is the development of visualization as an element of seamless computing environments. Interfaces that transparently enable researchers to derive meaning from, and exert control over, computing processes are critical to visualization’s role as a research tool. We are continuing to investigate and integrate new products into our lab for immersive exploration of scientific worlds. We are also interested in exploring Web-based technologies, especially those that fully support 3D; Java and VRML have particularly interesting features for distributing visualization and immersive reality applications. Lastly, we are keenly interested in robust software that can be integrated with our visualization tools. The value of these tools will be measured by the extent to which they enable our collaborators to dynamically modify the worlds they see and the way they see them. Ultimately, our future research will be driven by the tools that offer the greatest contribution to the evolving cycle of scientific discovery through visualization of computational and experimental results.

BIOGRAPHICAL SKETCHES

ROSS J. TOEDTE is the visualization manager for ORNL’s Center for Computational Sciences (CCS). His current activities include development of advanced graphics techniques for computational science, graphics systems integration, and visualization education. He has a B.S. degree in computer science from Southern Illinois University and has done work for an M.S. degree in computer science at the University of Tennessee at Knoxville. He has held a variety of positions at ORNL since joining the staff in 1981. His current position involves managing the CCS Visualization Lab (VizLab). In addition to the CCS staff, the VizLab serves affiliates of the Computational Center for Industrial Innovation, Partnerships in Computer Science (PICS) partners, and the ORNL research community. Before taking his current position, he was a computing specialist in the Computing and Telecommunications Division (C&TD). He has worked as a visualization practitioner and promoter since cofounding the visualization lab in C&TD in 1989. As a member of the Graphics Development Section from 1983 to 1989, he was involved in a wide range of graphics activities including corporate graphics device support and mainframe graphics consulting. He has also worked as a computing analyst in on-line and database systems. His professional interests include virtual reality, scientific visualization, animation, and video technology.

DIANNE WOOTEN is a computing specialist in CCS specializing in scientific visualization. She has a B.S. degree in mathematics from Tennessee Technological University and an M.S. degree in computer science from the University of Tennessee at Knoxville. At the CCS Visualization Lab, she provides visualizations in the form of prints, overheads, animations (on computers and on video), and virtual reality. Her interests include developing Web technologies and working with ORNL scientists to provide scientific insight through visualization. Before her current assignment, she worked in a system support capacity, maintaining and developing computer codes used for computer graphics. She has worked with a number of ORNL scientists over the years, providing visualizations of their scientific data for analysis and for presentation materials. Most recently, she has developed Web-based applications that provide multiplatform accessibility to corporate data.

