Visualization and Virtual Environments Research

by Raymond E. Flanery, Jr., Nancy W. Grady, Joel W. Reed, and Daniel R. Tufano


Dan Tufano and Ray Flanery use computer technology to transform data and calculated results into visualizations such as the one on the computer screen. It shows the amount of precipitable water predicted to be present over one part of the earth under a global warming scenario. This visualization was produced as part of the Computer Hardware, Advanced Mathematics, and Model Physics Program. Photograph by Tom Cerniglio.

Visualization and virtual environments research at ORNL is being spearheaded by the Advanced Visualization Research Center in ORNL’s Computer Science and Mathematics Division. Previous research has included visualization for climate modeling, seismic modeling, and melting simulations, as well as visual interfaces for data mining. Current and future research areas include three-dimensional medical imaging, Web-based visualization, and multimodal interfaces for virtual environments.


Until recently, there were two kinds of scientists: theorists and experimentalists. But science is entering a new era in which research is classified in three ways: theory, experiment, and computer simulation. With the maturation of simulation as a full partner in the research enterprise comes an increasing need for visual communication techniques between the computer and the researcher. The Advanced Visualization Research Center (AVRC) in ORNL’s Computer Science and Mathematics Division (CSMD) seeks to address this need. The Grand Challenge class of simulations strains the limits of hardware and software because of its computational intensity: the large size of the data sets involved and the large bandwidth needed for data transfer. Further, the increasing complexity of the data sets requires more sophisticated techniques for optimal human perception. With these research needs in mind, CSMD created AVRC.

Large Data Sets


Fig. 1. View of precipitable water in the atmosphere, generated using the commercial visualization package AVS. The distribution of precipitable water is a key to important climatic trends such as drought, heavy rainfall, and destructive storms. See additional images on the back cover.

One of the first issues to be addressed was the handling of large data sets. ORNL is a participant in the Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Program with Argonne National Laboratory (ANL) and the National Center for Atmospheric Research (NCAR). This DOE program seeks to rapidly advance the science of climate prediction over decadal and longer time scales, linking the emerging technologies in high-performance computing to the development of computationally efficient and numerically accurate climate prediction models. At full resolution, this project will generate 72 terabytes (TB) of data for a single 100-year climate simulation. This level of data creation has necessitated techniques that use disk space as memory and tape storage as disk, to make sufficient resources available for visualizing these data. Figure 1 shows a view of precipitable water in the atmosphere generated using the commercial visualization package AVS; the distribution of precipitable water is a key to important climatic trends such as drought, heavy rainfall, and destructive storms. Producing a single frame requires extracting the surface pressure and three-dimensional (3D) moisture fields from the approximately 60 variables stored at each step of the climate model; these variables are then used to calculate precipitable water at each step, and a texture mesh with varying transparency is superimposed on the globe to represent the precipitable water visually. (For video representations, see http://www.epm.ornl.gov/vis/avrc.html for three MPEG videos of precipitable-water calculations spanning nine days to two months.) Modules were created within the AVS package to accept data fed from the simulation running on the Intel Paragon XP/S 150 at ORNL’s Center for Computational Science, processing data at a rate of one frame per simulation CPU hour to create movies.
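The precipitable-water computation itself is a straightforward column integral of the moisture field. The Python sketch below shows the general idea; it is only a sketch, assuming a moisture field stored on sigma levels (pressure normalized by surface pressure), and the function and variable names are illustrative rather than CHAMMP's actual ones.

```python
import numpy as np

G = 9.81            # gravitational acceleration (m/s^2)
RHO_WATER = 1000.0  # density of liquid water (kg/m^3)

def precipitable_water(q, p_surface, sigma):
    """Column precipitable water, in meters of liquid water.

    q         : (nlev, nlat, nlon) specific humidity (kg/kg)
    p_surface : (nlat, nlon) surface pressure (Pa)
    sigma     : (nlev,) sigma coordinate, p / p_surface, of each level
    """
    # Pressure thickness of each model layer: dp = d(sigma) * p_surface.
    dsigma = np.abs(np.gradient(sigma))                   # (nlev,)
    dp = dsigma[:, None, None] * p_surface[None, :, :]    # (nlev, nlat, nlon)
    # PW = (1 / (rho_w * g)) * integral of q dp over the column.
    return (q * dp).sum(axis=0) / (RHO_WATER * G)
```

The resulting two-dimensional field is what would be mapped, frame by frame, onto the semitransparent texture mesh over the globe.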

Parallel Algorithms


Nancy Grady and Joel Reed discuss the results presented in a visualization (shown on the screen) of interest to the gas and oil industry. Produced as part of the Gas and Oil National Information Infrastructure Project, this visualization reveals subsurface structure based on measurements of the energy of acoustic waves passed through underground rock formations. Photograph by Tom Cerniglio.

A second project took a different approach to the visual analysis of large data sets. The Gas and Oil National Information Infrastructure project was a multilaboratory effort to develop the infrastructure technologies, such as computational steering, remote collaborative tools, and distributed data visualization, that are important both to the gas and oil industry and to DOE. As part of this project, supercomputers at four national laboratories—Livermore, Los Alamos, Oak Ridge, and Sandia—were used to produce a consistent data set containing synthetic seismic data. The idea was to create a visualization that reveals subsurface structure using measurements of the energy of acoustic waves passed through and reflected by underground rock formations. To facilitate processing of this 2- to 4-TB data set housed at the National Storage Laboratory at Livermore, data-parallel visualization techniques were developed to spread across a cluster of processors the computational load of extracting isosurfaces (surfaces of constant velocity, indicating subsurface structure) from these data. The data were divided into slabs (contiguous subvolumes), and a marching-cubes-based isosurface algorithm was used to process the data in parallel. Marching cubes is a contouring algorithm that creates surfaces of constant scalar value in three dimensions [W. E. Lorensen and H. E. Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” Computer Graphics 21(3): 163–169, July 1987]. The results were recombined for rendering on the host visualization machine. Figure 2 shows the subsurfaces generated from this synthetic seismic data.
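A minimal data-parallel sketch of this slab decomposition appears below, in Python. It is not the project's code: scikit-image's marching_cubes stands in for the isosurface extractor, a local process pool stands in for the laboratory clusters, and adjacent slabs share one plane of overlap so that the extracted triangles meet cleanly at the seams.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from skimage.measure import marching_cubes  # stand-in marching-cubes implementation

def _slab_isosurface(args):
    slab, level, z0 = args
    try:
        verts, faces, _, _ = marching_cubes(slab, level=level)
    except ValueError:  # the isovalue does not occur anywhere in this slab
        return np.empty((0, 3)), np.empty((0, 3), dtype=np.int64)
    verts[:, 0] += z0   # shift slab-local coordinates back into the full volume
    return verts, faces

def parallel_isosurface(volume, level, n_slabs=8):
    """Extract one isosurface slab by slab in parallel, then stitch the pieces."""
    bounds = np.linspace(0, volume.shape[0] - 1, n_slabs + 1).astype(int)
    # Adjacent slabs share one plane so that no marching-cubes cell is lost.
    jobs = [(volume[lo:hi + 1], level, lo)
            for lo, hi in zip(bounds[:-1], bounds[1:])]
    all_verts, all_faces, offset = [], [], 0
    with ProcessPoolExecutor() as pool:
        for verts, faces in pool.map(_slab_isosurface, jobs):
            all_faces.append(faces + offset)  # re-index into the merged vertex list
            all_verts.append(verts)
            offset += len(verts)
    return np.vstack(all_verts), np.vstack(all_faces)
```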


Fig. 2. This visualization reveals subsurface structure, using measurements of the energy of acoustic waves passed through and reflected by underground rock formations.

In Situ Vitrification

Fig. 3. Visual representation of data used to predict the final melt shape after an in situ vitrification experiment to seal waste in the ground. Shown are a sphere for each temperature-measuring thermocouple (colored by temperature), cylinders showing the position at each time step of the four graphite rods delivering electrical energy, and a surface showing the melt front.

In situ vitrification (ISV) was developed by DOE’s Pacific Northwest National Laboratory (PNNL) to stabilize radioactively contaminated soils in place. An electrical current is used to melt radioactive waste in underground soil, forming a leach-resistant glassy material. AVRC has developed codes that allow researchers to track the melt front during an ISV experiment. The code uses information such as temperature profiles (measured by a set of thermocouples) and the temperature and position of the four graphite rods used to deliver the electrical energy. The final melt shape predicted by these codes matched extremely well the final shape of the solid as measured after the experiment. The codes employ techniques from statistical analysis, numerical analysis, and visualization to take a spatially sparse set of temperature histories (about 90 values at each of more than 8000 time steps) and approximate, for each time step, the temperature distribution throughout a 64 × 64 × 64 volume, with an isosurface representing the melt front in the soil, as shown in Fig. 3. The data are represented visually as a sphere for each thermocouple (colored by temperature), cylinders showing the position of the graphite rods at each time step, and a surface showing the melt front. The researcher can then interact with this visual representation either by rotating the image or by selecting a viewing position interactively and observing the development of the melt front over time as an animation.
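A rough sketch of the per-time-step reconstruction is given below in Python. The normalized coordinates, the assumed melt temperature, and the use of SciPy's griddata interpolator are stand-ins for illustration only; the actual ISV codes use statistical and numerical techniques not detailed here.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.measure import marching_cubes

N = 64           # grid resolution used by the ISV codes
T_MELT = 1700.0  # assumed melt temperature (deg C); purely illustrative

def melt_front(tc_xyz, tc_temps):
    """One time step: ~90 thermocouple readings -> melt-front surface.

    tc_xyz   : (n, 3) thermocouple positions, normalized to [0, 1]
    tc_temps : (n,)   temperatures measured at those positions
    """
    axis = np.linspace(0.0, 1.0, N)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    # Linear interpolation inside the hull of thermocouples, nearest outside.
    temps = griddata(tc_xyz, tc_temps, (gx, gy, gz), method="linear")
    fill = griddata(tc_xyz, tc_temps, (gx, gy, gz), method="nearest")
    temps = np.where(np.isnan(temps), fill, temps)
    # The melt front is the isosurface at the melt temperature
    # (assumes T_MELT actually occurs somewhere inside the volume).
    verts, faces, _, _ = marching_cubes(temps, level=T_MELT)
    return verts, faces
```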

Mathematical Algorithms for Surface Rendering

Fig. 4. ORNL researchers can reconstruct the probable appearance of a face from a number of marker points on the skull, using a neural net trained on measurements from recent MRI scans of living volunteers. As shown here, these points are used to generate a mesh for the skin surface, which is then smoothed, colored, and texture-mapped for realism to improve the chances that the victim will be identified.

The entertainment industry is becoming one of the leading users of computers as a medium for creating its art; indeed, the special effects industry is transforming computer visualization. The computer provides a great deal of flexibility for rendering and making precise drawings, and it makes changes easy. A highly visible example is the movie Toy Story, in which a mesh describing a face is defined and manipulated to animate facial features. At ORNL, a current exploratory project in computational forensics seeks to develop tools to automate the facial reconstruction process. The idea is to use the computer to reconstruct the face of a mutilated or decaying body from skull measurements, in the hope of identifying an unknown victim and possibly the killer.

Ed Uberbacher, head of ORNL’s Computational Biosciences Section, Richard Mural of the Life Sciences Division, and Reinhold Mann, director of the Life Sciences Division, have created a database of facial tissue-thickness data using measurements from magnetic resonance imaging (MRI) scans of living volunteers. Researchers in ORNL’s Informatics Group are developing neural network techniques to predict skin thickness at a number of marker points on a human skull. Instead of basing the reconstruction on only a dozen or so points, they use the computer to plot thousands of points, so that the surfaces of the face are mathematically based on the shape of the entire skull, not just a few landmark points directly below the skin. Researchers in the Visual and Information Sciences Group use these points to generate a mesh for the skin surface and then smooth, color, and texture-map this surface for realism, as shown in Fig. 4. This approach supports experimental perception research, conducted within the Human Systems Research Group, aimed at understanding the critical factors for facial recognition. This interdisciplinary team hopes to prototype a process for automatically generating facial features that the user can control mathematically, thus improving the chances of a reconstructed face being recognized.
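As an illustration of the thickness-prediction step only, the Python sketch below uses scikit-learn's MLPRegressor as a stand-in network; the feature layout, network size, and normal-offset placement of skin points are assumptions, since the article does not describe the group's actual architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_thickness_net(X, Y):
    """X: (n_volunteers, n_skull_features) measurements from the MRI scans.
       Y: (n_volunteers, n_markers) tissue thickness at the marker points."""
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000)
    return net.fit(X, Y)

def skin_points(net, skull_features, marker_xyz, marker_normals):
    """Predict tissue thickness for a new skull, then push each marker
    point outward along its surface normal to place the skin surface."""
    thickness = net.predict(skull_features[None, :])[0]  # (n_markers,)
    return marker_xyz + thickness[:, None] * marker_normals
```

The resulting point set is what the meshing, smoothing, and texture-mapping steps would then operate on.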

Visual Interface for Time Series Data Analysis

Visual analysis of time series data becomes very difficult as large quantities of data are generated. Statistical techniques developed by George Ostrouchov, Darryl Downing, Max Morris, and Val Fedorov, all of the Statistics Group, can be used to extract useful or interesting information from data sets too large for normal browsing. A researcher cannot effectively monitor large data streams and pick out regions of interest by eye. To allow automated monitoring and subsequent user browsing of interesting regions, a visual interface was created that uses statistical filters to “score” the data, stores the data around each region of interest in an underlying database, and presents the user with a graphical interface to the scoring metadata.

The system begins by reading the raw data in intervals, applying the feature extractor to these chunks of data, and writing a representation of the filtered data to the database. During this process, as shown in Fig. 5(a), a histogram and box plots of the raw data sections are displayed, allowing the user to tailor parameters to better reduce or filter the resulting data representation.
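A minimal sketch of this ingest phase, in Python with SQLite as the underlying database, might look as follows. The deviation-based score is a placeholder: the actual statistical filters developed by the Statistics Group are not described here.

```python
import sqlite3

import numpy as np

def score(chunk):
    """Placeholder filter: largest deviation from the chunk's own mean,
    in units of its standard deviation."""
    return float(np.abs(chunk - chunk.mean()).max() / (chunk.std() + 1e-12))

def ingest(stream_path, db_path, chunk_size=1024):
    """Read the raw series in intervals, score each interval, and store
    the score and the interval's location as browsable metadata."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS chunks
                  (start INTEGER, length INTEGER, score REAL,
                   mean REAL, low REAL, high REAL)""")
    data = np.fromfile(stream_path, dtype=np.float32)
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        db.execute("INSERT INTO chunks VALUES (?, ?, ?, ?, ?, ?)",
                   (start, len(chunk), score(chunk), float(chunk.mean()),
                    float(chunk.min()), float(chunk.max())))
    db.commit()
```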

Fig. 5. (a) The displayed histogram and box plots of the raw data sections allow the user to examine the filtered data. (b) A pop-up window displays the original raw data for researchers trying to understand more fully the data described in the metadata display.

In the visualization and analysis phase, the power of scientific visualization is used to discover unusual or interesting features of the original large data set in an easily comprehensible yet compact format. Through the visual interface, the user displays one of the filtered representations of the original data; the system generates the necessary query-language commands to load the appropriate filtered data from the database. While perusing the filtered data representation, the user can select a particularly interesting trend or point that appears worth more detailed examination. The system, via query-language commands, retrieves location information about the corresponding original raw data values; the original raw data for the selection are then loaded and displayed in a pop-up window, as shown in Fig. 5(b). This type of automated visual interface is important when researchers need to quickly analyze large amounts of data to determine which regions should be examined.
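The drill-down step can be sketched the same way: a query against the stored metadata returns interval locations, which are then used to fetch the raw samples behind a selection. The score threshold below is an illustrative stand-in for the selection a user would make graphically.

```python
import sqlite3

import numpy as np

def drill_down(db_path, stream_path, min_score):
    """Fetch the raw samples behind every interval scoring above min_score."""
    db = sqlite3.connect(db_path)
    rows = db.execute("SELECT start, length FROM chunks WHERE score > ?",
                      (min_score,)).fetchall()
    data = np.fromfile(stream_path, dtype=np.float32)
    return [(start, data[start:start + length]) for start, length in rows]
```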

Web-Based Visualization


Fig. 6. A prototype 3D space was developed at ORNL and shown at both the Supercomputing ’95 and Supercomputing ’96 conferences. The booths at the conferences were modeled with computer-assisted drafting tools and imported into a drawing package that translated them into the Virtual Reality Modeling Language. Links for additional information on the posters, demonstrations, and systems were inserted by hand to connect to available Web pages.

One newly emerging visual medium is the Internet’s World Wide Web. The Web has exploded in the past two years as an information exchange medium. Currently, it is dominated by the display of text-based information and simple graphics. A number of researchers are developing new capabilities that would allow them to present information within 3D spaces that can be navigated and manipulated. The Virtual Reality Modeling Language (VRML) was designed for the presentation of 3D worlds on the Web; it allows the visitor to interact with and manipulate the objects contained in the virtual environment. A 3D environment was prototyped in VRML 2.0 for ORNL’s participation in the Supercomputing ’96 conference. Models of the objects in ORNL’s booth were created using computer-assisted drafting tools, translated into VRML 2.0, and then combined to create the virtual booth shown in Fig. 6. Each poster in the scene has a behavior attached to it: when the poster is activated by a mouse click, it highlights itself and then plays an audio file describing itself. Tools for developing 3D displays are evolving rapidly. This world may be seen at http://www.epm.ornl.gov/SC96. Although VRML is now largely used to display 3D scenes, it is expected to mature into a fully functional visualization medium.
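The click-to-play behavior maps directly onto VRML 2.0's event model: a TouchSensor's touchTime event is routed to an AudioClip's set_startTime. The Python sketch below writes a minimal poster of this kind; the geometry and file names are hypothetical, and the highlighting step is omitted (it would require a Script node to change the poster's material on a click).

```python
# Write a minimal VRML 2.0 poster that plays a narration when clicked.
# The box dimensions and the .wav file name are hypothetical.
POSTER = """#VRML V2.0 utf8
DEF POSTER Transform {
  children [
    Shape {
      appearance Appearance { material Material { diffuseColor 0.9 0.9 0.8 } }
      geometry Box { size 2.0 1.5 0.05 }
    }
    DEF TOUCH TouchSensor { }
    Sound {
      source DEF CLIP AudioClip { url "poster_narration.wav" }
    }
  ]
}
ROUTE TOUCH.touchTime TO CLIP.set_startTime
"""

with open("poster.wrl", "w") as f:
    f.write(POSTER)
```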

Future Directions


Ray Flanery views a virtual reality model of a crystal through a head-mounted display. Photograph by Tom Cerniglio.

Some recent activities have broadened visualization research at ORNL into the area of synthetic environments, a term that encompasses both workstation-based visualization and the more fully immersive user interface known as virtual reality. To build this capability, CSMD has acquired a head-mounted display, a head-tracking system, and virtual-world development software. A powerful new Silicon Graphics computer will provide the foundation for the research planned in this area. This research will examine ways in which humans can interact with complex data sets using multiple sensory channels (e.g., vision, hearing) and input modalities (e.g., speech, touch, gesture). The effects of synthetic environments on human perception and motor control, and the interaction of the two, will be addressed by a program of experimental research. New application areas for these capabilities will include information visualization for networking and for data mining. Because the goal of developing synthetic environments is to create an interface that is naturally suited to human users, a multidisciplinary team has been assembled combining expertise in visualization, simulation, and perceptual psychology. We are ready for simulations to provide information as valid as that from theory and experiment.

BIOGRAPHICAL SKETCHES

RAYMOND E. FLANERY, JR., is director of ORNL’s Advanced Visualization Research Center in the Computer Science and Mathematics Division (CSMD). His research interests include visualization of computational applications, visualization interfaces for data mining, and multimodal interfaces for virtual environments. He received his M.S. degree in mathematics at Youngstown State University in Ohio and came to ORNL in 1987.

NANCY W. GRADY joined ORNL in 1987 as a Wigner Fellow in the Metals and Ceramics Division. She is currently leader of the Visual and Information Sciences Group in CSMD. She holds a B.S. degree in physics and honors mathematics from the University of Tennessee and a Ph.D. degree in mathematical physics from the University of Virginia. Her current research interests include scientific visualization and mathematical techniques for data mining, Web-based collaborative technologies, and information visualization.

JOEL W. REED joined the CSMD research support staff in 1995. He holds B.S. and M.S. degrees in computer science from the University of Tennessee. His interests include using Virtual Reality Modeling Language technology as a solution to Web-based visualization problems.

DANIEL R. TUFANO is leader of CSMD’s Human Systems Research Group. He received a B.S. degree in psychology from Georgetown University and M.A. and Ph.D. degrees in psychology from Princeton University. He spent four years performing training effectiveness research at the Army Research Institute. The following ten years he worked at Grumman Aircraft Systems, managing the Advanced Cockpit Technology program. At ORNL his research responsibilities are in three areas: management and display of information to automobile drivers; recognition of computationally reconstructed faces; and locomotion and visual perception of spatial layout in virtual environments. His research is broadly concerned with human perception and performance.

 
