Analysis of Material Performance
in Automotive Applications

By Srdan Simunovic, Gustavo Aramayo, and Thomas Zacharia


Srdan Simunovic (left), Gustavo Aramayo, and Thomas Zacharia view a computer simulation of a collision between two cars. Photograph by Tom Cerniglio.

Cars and trucks will use fuel more efficiently if they are built of materials lighter than steels used today. But will these materials absorb energy as well as or better than steel to protect passengers during a vehicle collision? Crash testing of prototypes can provide answers. However, computer modeling of materials and vehicles yields solutions in less time and at a lower cost. ORNL researchers have shown that complex models can be developed and modified to run rapidly on massively parallel computers.


 
Crunching Numbers vs Crunching Metal to Explore Vehicle Safety

A spanking new Ford Explorer delivered to ORNL has come apart, but studies of its parts should help ensure that tomorrow’s automobiles meet or exceed established safety standards. Future cars and trucks will be made of lighter materials to help increase their fuel efficiency, so design changes may be needed to guarantee that lighter vehicles will be as safe in collisions as the heavier steel vehicles of today.

Researchers at the Laboratory dismantled one model of the nation’s top-selling sport utility vehicle and weighed and measured each piece. They plugged that information into an ORNL computer program to build a computer model that the Department of Transportation (DOT) can provide to U.S. automakers to help them improve future vehicles.

“Our role isn’t to confirm or deny the safety of the Ford Explorer,” says Thomas Zacharia of ORNL’s Computer Science and Mathematics Division. “We’ve built a computer model of the Ford Explorer and run simulated 35-mile-per-hour crash tests. Results from these simulations will be used to ensure that future, lighter vehicles meet safety requirements.” ORNL, which performed similar tests on a Ford Taurus a few years ago, has gained expertise in automobile crash simulations using its powerful Intel Paragon XP/S 150 supercomputer. That’s why DOT is funding the current project.

“It’s a lot less expensive to crunch numbers on a computer than it is to crunch metal in a crash test,” says Zacharia. Once ORNL researchers validate their simulated offset head-on crash against an actual controlled crash in Buffalo, New York, they will be able to simulate any number of crashes and generate a variety of data. Simulated crashes will provide information identical to what researchers could gain from actual crash tests, which cost up to $75,000 each.

While ORNL is developing a vehicle model for the Explorer, other institutions are doing the same with other top-selling vehicles—the Chevrolet Lumina, Chrysler Concorde, Honda Accord, and Dodge Neon.

ORNL’s work is related to a national effort to develop a vehicle that gets 80 miles per gallon—triple the fuel efficiency of today’s cars—without sacrificing performance, utility, cost of ownership, or the safety that consumers demand. “To achieve this goal, we’re talking about a weight reduction of 40 percent,” Zacharia says. “That dramatic reduction requires the use of lightweight metals, plastics, and composites that offer new challenges for automobile engineers.”—Ron Walli


Thomas Zacharia examines a Ford Explorer during disassembly. Parts are measured and weighed to help researchers predict vehicle energy dissipation during collisions simulated by computer.

Intense competition in the mass-production vehicle market is driving down the cycle time for new vehicle development. The cycle currently takes between 2 and 4 years and consists of conceptual design; mechanical design and analysis; materials and process selection; tooling and equipment development and setup; product manufacturing and assembly validation; and, finally, production operations. Very little time is allowed for the development of new materials and processes because changes in these areas can ripple through every stage of the operation and cause significant delays or other negative consequences. For the design team to accept a new material or process, it must pose no risk in manufacturing or use at the time the commitment is made. An in-depth understanding of the science of materials synthesis, processing, and performance is therefore a fundamental need of the automobile industry.

Accurate materials models are essential for the development of realistic vehicle deformation simulations. They are derived from a collection of experimental data and accumulated experience with the material. Knowledge of material behavior for conventional automotive structural materials such as mild steel is considered reasonably complete. However, further improvements in vehicle efficiency mandate the introduction of stronger and lighter materials—such as higher-strength steels, aluminum and magnesium alloys, and composite materials—into the load-bearing systems. The behavior of these materials under the variety of conditions that vehicles experience in service is not yet understood well enough for them to be used confidently in mass automotive production.

One project currently under way aims to accelerate the introduction of lightweight materials to automotive applications through advanced computational simulations for assessing design and performance. The project includes (1) development of improved computer codes for deformation analysis, (2) development of improved vehicle models with experimental validation of the models, (3) improved understanding of design methodologies and vehicle assembly procedures, and (4) development of a lightweight materials database. The combination of computational simulations on supercomputers and rigorous experimental validation will enable assessment of the performance of low-weight materials in automobiles more economically and in a much shorter time than the trial-and-error approach would require.

Development of Vehicle Computational Model

The structure of mass production automobiles has been changing rapidly during the past two decades. Most new cars no longer are made of separate frame and body structures; instead each has an integral system known as a unitized body (unibody). The unibody, which consists of a large number of welded stamped metal parts, is the main energy-absorbing structure of the vehicle. Front and rear subframes are attached to the unibody very late in the assembly process. The front subframe usually carries the transversely positioned engine, transmission, front suspension, and wheel assembly; the rear subframe carries the rear suspension and rear axle. The complexity of the unibody-subframe structure does not allow for obvious simplifications in simulation models as in cases where a clear distinction exists between the primary frame and the secondary structures.

Perhaps the most dramatic automotive design verification comes from actual vehicle collision tests. They not only bring perspective to everyday driving, but also make us more appreciative of the challenges that face vehicle designers. The dissipation of energy and the extent of deformation in collisions are often critical design considerations. A clear understanding of material behavior is essential to the design of structures and mechanisms that protect vehicle occupants. From the standpoint of a design engineer, vehicle impact simulation must meet three essential requirements: accuracy, versatility, and computational feasibility. The first two requirements usually translate into large, detailed finite element (FE) models that are not feasible on single-processor workstations because of long computation times (around 600 CPU hours) and large memory requirements. Because the best vehicle models capture complex deformation during impact, it is not unusual for them to have 50,000 or more FEs.

Over the past several years, the National Highway Traffic Safety Administration (NHTSA) has been developing an FE model of a midsize sedan. The model has been obtained by first disassembling the vehicle and then scanning the shape and measuring the mass and inertia of each component. The FE model is derived from the geometric model by discretizing each digitized part using FEs and connecting them into the final model. The separation of geometrical representation from the computational FE allows for flexibility in model modifications and the addition of complex constraints.

Research under way at ORNL in collaboration with NHTSA and George Washington University addresses essential requirements for developing detailed vehicle computational models. Vehicle models are combined with lightweight materials models and are used to analyze material performance in a wide variety of impact situations.

Fig. 1. Single-car offset impact with a rigid barrier, showing the results of a car crashing into and glancing off a rigid barrier such as a wall or post.
Fig. 2. Two-car frontal offset impact, showing the results of a head-on collision between two cars (in which the headlight of one car and the front center of the other collide).
Fig. 3. Two-car oblique offset impact, showing the results of a collision between two cars in which the front of one comes in at an angle to the front of the other.

Several different impact situations are currently being investigated to optimize the vehicle models: single car offset impact with a rigid barrier (Fig. 1), two-car frontal offset impact (Fig. 2), and two-car oblique offset impact (Fig. 3). Simulations are compared with test data that consist of high-speed films of vehicle collisions and traces from accelerometers that are placed throughout the vehicles. In addition, the crashed vehicles are disassembled and analyzed so that the main mechanisms for the dissipation of impact energy can be identified and quantified.

Fig. 4. Bottom view of test vehicle after two-vehicle crash.
Fig. 5. Deformed shape of the front underside of the vehicle after collision—simulation result.

Deformed parts are extracted and digitized so that they can be directly compared with simulation results. Figures 4 and 5 show the deformed shape of the front underside of the vehicle after collision and the corresponding simulation result. In the comparison process, these images are scaled and overlaid to assess the quality of the model and to identify areas of the model that need further improvement. This well-organized and detailed approach provides credibility to the simulations and builds confidence in findings obtained by simulating new, not yet experimentally tested materials and impact conditions.

Parallel Computers and Crashworthiness Analysis

Crash simulations are being performed using the massively parallel version of LS-DYNA3D software. A research collaboration has been established between ORNL and the software vendor to improve the software’s performance on distributed-memory massively parallel computers. The program calculates accelerations, velocities, deformations of components, and forces acting on vehicles, taking into consideration variables such as different materials, impact interactions, complex constraints, and spot welds. The program employs an explicit time integration scheme with mass matrix diagonalization, making the matrix factorizations trivial and requiring no significant interprocessor communication. The downside to this approach is that the computation is only conditionally stable: the time step increment within which the entire state of deformation must be computed is proportional to the size of the smallest element in the FE model. For example, the simulation of a 120-millisecond collision of a car with a rigid barrier requires more than 130,000 such time increments. The FE model for the car used in this study involves 27,000 to 30,000 FEs and approximately the same number of nodes. Processing one FE during a computational increment takes approximately 1000 floating-point operations and several hundred “words” of memory.
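A quick back-of-the-envelope calculation, using only the figures quoted above, makes concrete how small the stable time increment is and how large the total operation count becomes:

```python
# Rough cost estimate for the single-car simulation described above.
# 28,500 elements is the midpoint of the 27,000-30,000 FE range.
time_steps = 130_000        # increments for the 120-millisecond collision
elements = 28_500           # finite elements in the car model
flops_per_element = 1_000   # floating-point operations per element per step

dt = 120e-3 / time_steps    # stable time increment, in seconds
total_flops = time_steps * elements * flops_per_element

print(f"time increment ~ {dt * 1e6:.2f} microseconds")
print(f"total cost     ~ {total_flops:.1e} floating-point operations")
```

The result, on the order of trillions of floating-point operations for a single simulated crash, is what makes massively parallel processing necessary.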

Fig. 6. An example of domain decomposition using RSB is shown here.
The domain decomposition approach has been employed in the program as the principal method for exploiting concurrent distributed-memory processing. Using this approach, different parts of the structure are assigned to different processors for computation of their deformation. At certain points in the program, information must be exchanged between processors to account for interaction between the subdomains (adjacency and contact interactions) and to synchronize computation. The efficiency of the computation is governed by the ratio between the balanced computational load assigned to the processors and the amount of communication needed between them. For a given problem size, as the number of processors increases, the subdomains (and therefore the computational load per processor) shrink while communication becomes dominant. Once the computation time becomes comparable to the communication time, the simulation time cannot be further reduced by increasing the number of processors. Three different decomposition methods can be used in the program: (1) recursive spectral bisection (RSB), (2) recursive coordinate bisection, and (3) the greedy algorithm. An example of domain decomposition using RSB is shown in Fig. 6; the subdomain-to-processor assignment for the car is shown in exploded view to illustrate the approach. Usually, the average size of the interfaces, or “cuts,” between subdomains is directly proportional to the communication the program will require. The low communication resulting from RSB makes it the method of choice for parallel processing on distributed-memory computers when there is no unilateral contact between the FEs. However, in situations where structural parts interact through contact and the spatial relations change drastically, the advantages of RSB may not be so apparent.
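Of the three methods, recursive coordinate bisection is the easiest to illustrate. The sketch below is a simplified, hypothetical illustration (not the production code): it recursively splits element centroids at the median of the coordinate axis with the largest spread, yielding 2^depth subdomains with equal computational load.

```python
def recursive_coordinate_bisection(centroids, depth):
    """Partition element centroids into 2**depth balanced subdomains
    by recursively splitting at the median of the longest axis."""
    if depth == 0:
        return [centroids]
    # Choose the coordinate axis (x, y, or z) with the largest spread.
    axis = max(range(3), key=lambda a: max(c[a] for c in centroids)
                                     - min(c[a] for c in centroids))
    ordered = sorted(centroids, key=lambda c: c[axis])
    mid = len(ordered) // 2  # median split balances the load
    return (recursive_coordinate_bisection(ordered[:mid], depth - 1)
          + recursive_coordinate_bisection(ordered[mid:], depth - 1))

# 16 processors -> 4 levels of bisection (synthetic centroid data).
centroids = [(x * 0.1, (x * 7 % 13) * 0.1, (x * 3 % 5) * 0.1)
             for x in range(1024)]
subdomains = recursive_coordinate_bisection(centroids, 4)
print(len(subdomains), [len(s) for s in subdomains][:4])
```

Coordinate bisection balances the element counts well but, unlike RSB, ignores the mesh connectivity, so its "cuts" (and hence communication) can be larger.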

Vehicle impact simulation involves computing the deformation of vehicle parts as a result of their contact with impacting structures as well as with other vehicle parts. If two interacting parts reside on different processors, this interaction must be carried through interprocessor communication. Because it is very difficult to determine in advance which parts will come into contact, such information cannot be embedded into the domain decomposition ahead of the computation. In effect, contact detection requires global geometrical reasoning, which is difficult in a distributed-memory environment consisting of largely independent processors, each with a limited spatial scope. The contact algorithm employed in the program is based on frequent spatial sorting of the contact entities and redistribution of the sorted position information between processors. The problem space is divided into regular subdomains (“buckets”) in the x, y, and z physical directions. The process involves extensive communication between processors and, if performed every time step, may create a computational bottleneck. Reducing the number of “bucket” sorts can speed up computations considerably; then, if errors in computation are noticed, the sorting frequency can be increased to maintain accuracy.
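The bucket-sorting idea can be sketched as follows. This is an illustrative toy version in a single-processor setting (the function names are assumptions, not the LS-DYNA3D implementation): nodes are hashed into regular cells, and only nodes in the same or an adjacent cell are kept as candidates for an exact contact check.

```python
from collections import defaultdict

def bucket_sort(positions, cell):
    """Hash node positions into regular cells ('buckets') along the
    x, y, and z physical directions."""
    buckets = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        buckets[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return buckets

def contact_candidates(positions, cell):
    """Return node pairs close enough (same or neighboring bucket) to
    warrant an exact contact check; all other pairs are skipped."""
    buckets = bucket_sort(positions, cell)
    pairs = set()
    for (cx, cy, cz), members in buckets.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in buckets.get((cx + dx, cy + dy, cz + dz), []):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
    return pairs

nodes = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (5.0, 5.0, 5.0)]
print(contact_candidates(nodes, cell=1.0))  # only nodes 0 and 1 are close
```

In the parallel version, the sorted bucket contents must additionally be redistributed among processors, which is the communication cost the text describes.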

Fig. 7. Because the problem cannot fit on one processor, 16 processors were used as the base case.
Fig. 8. An example of CPU time per problem time increment throughout the simulation using 128 processors.
Fig. 9. When representative CPU times for different numbers of processors are normalized with the CPU time needed for 16 processors, computational efficiency increases up to 64 processors.

The timing results for different numbers of processors simulating the offset impact of a car with a rigid barrier are shown in Fig. 7. The time axis represents the average CPU time spent per problem time increment; the representative CPU time was averaged over a number of time steps because it fluctuates with simulation conditions and machine load. An example of CPU time per problem time increment throughout the simulation using 128 processors is shown in Fig. 8. When the representative CPU times for different numbers of processors are normalized with the CPU time needed for 16 processors (see Fig. 9), a reasonable increase in computational efficiency is seen up to 64 processors. A further increase in the number of processors did not significantly reduce the overall computation time because communication between processors became dominant. In the two-car crash, the efficiency increased up to approximately 128 processors; this can be attributed in part to the larger average computational subdomain assigned to each processor relative to the required communication. Computational simulations for the single- and two-car impacts have been run numerous times to identify deficiencies of the existing models and to improve their performance.
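The normalization described above amounts to dividing the measured speedup over the 16-processor base case by the ideal speedup (the proportional increase in processor count). A minimal sketch, using made-up timing numbers (the actual measured values appear only in Figs. 7-9):

```python
def relative_efficiency(cpu_times, base=16):
    """Speedup over the base case divided by the ideal speedup,
    i.e., (t_base / t_p) / (p / base). 1.0 means perfect scaling."""
    t_base = cpu_times[base]
    return {p: (t_base / t) / (p / base) for p, t in cpu_times.items()}

# Illustrative (not measured) average CPU times per increment, in seconds.
times = {16: 8.0, 32: 4.4, 64: 2.6, 128: 2.2}
eff = relative_efficiency(times)
for p in sorted(eff):
    print(f"{p:4d} processors: relative efficiency {eff[p]:.2f}")
```

With numbers shaped like these, efficiency stays reasonable through 64 processors and then drops sharply at 128, mirroring the behavior reported for the single-car simulation.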

Conclusions

Computational requirements for simulations are becoming more important as complex materials models for lighter materials are employed in new vehicle models. Ongoing research at ORNL will result in accurate vehicle and materials models that can be used for a host of applications, primarily the evaluation of the performance of lightweight materials in vehicles, especially those subjected to collisions. Massively parallel computing plays an essential role in this research because it allows for rapid development of these models. Scalability studies on current vehicle models indicate a threshold for the number of processors that can be efficiently used for a given simulation. This threshold can be related to the average number of finite elements that is assigned to each computer node of a massively parallel computer.


BIOGRAPHICAL SKETCHES

SRDAN SIMUNOVIC is a research engineer in the Modeling and Simulation Group of ORNL’s Metals and Ceramics Division. He holds Ph.D. and M.S. degrees in structural mechanics from Carnegie Mellon University and a B.S. degree in civil engineering from the University of Split in Croatia. He joined the ORNL staff in 1994. His research activities include modeling of composite materials, automobile impact simulations using the finite element method, massively parallel computing, and modeling of casting processes. He is a member of the U.S. Society for Computational Mechanics and ASM International.

GUSTAVO A. ARAMAYO is a senior staff engineer in the Engineering Analysis Group of ORNL’s Engineering Technology Division. He holds an M.S. degree in engineering mechanics and a B.S. degree in civil engineering from the University of Alabama at Tuscaloosa. He joined ORNL in 1974. His current activities involve the analysis and modeling of impact problems in fuel and weapons transportation packages and passenger vehicles, mechanical and thermal analysis of transfer rollers, modeling and simulation of refractories in kilns, and modeling and analysis of sheet metal processes. He is a member of the American Society of Civil Engineers, the Metals Society, the American Association for the Advancement of Science, and the American Institute of Aeronautics and Astronautics.

THOMAS ZACHARIA is director of ORNL’s Computer Science and Mathematics Division. He received his Ph.D. degree from Clarkson University. He came to ORNL in 1987 to work on weld process modeling. His research interests involve advanced computational modeling and simulation of materials and processes. He played an active role in developing, establishing, and leading the Modeling and Simulation Group of ORNL’s Metals and Ceramics Division. His research has resulted in several cooperative programs with industry and academia. He has chaired or co-chaired several international meetings and conferences in process modeling. He has presented invited keynote addresses for international conferences and serves as a committee member for several technical societies. Zacharia has received numerous awards for his research.

 
