One-Page Summaries of Presentations

Monday Morning 10-minute Presentations: Draft Schedule

Participants

William Klein -- Boston University

It is well known that numerical investigation of materials properties,
whether in Geosciences or Materials Science, must deal with phenomena on
many time and length scales. The non-linear nature of the systems,
coupled with the need to understand extremely long time scales, combines
to make numerical techniques of limited use in many problems of
practical importance. However, numerical investigation is at present
the only method we have that will allow us to probe dynamical
mechanisms on microscopic length scales, information that is essential
to the understanding of processes such as aging, fracture and
degradation.

To get some idea of the magnitude of the problem, consider the question
of molecular dynamics simulation of simple models, such as
Lennard-Jones, that form glasses. Glasses of technological interest are
expected to retain their properties on time scales of the order of
decades ($\sim 10^{9}$ seconds). Molecular dynamics simulations for
$10^{5}$ particles can model these simple systems for roughly
$10^{-5}$-$10^{-4}$ seconds.

Hardware advances alone will not bridge this gap of 13-14 orders of
magnitude in the near future as long as the mismatch between the clock
of the silicon device and the rate of molecular vibration remains. The
maximum clock speed available, which is limited by the physics of the
hardware, is approximately $10^{8}$ Hz.  If we assume a time step of
$10^{-15}$ seconds to treat the fastest molecular vibrations
faithfully, 100 operations per interacting pair of particles, and 100
particles per processor, we arrive at a crude estimate of $10^{11}$
seconds of CPU time for one second of real time.  (In fact, real codes
have a difficult time operating at this efficiency.) Even if the
estimate is off by several orders of magnitude, the point is clear:
$10^{8}-10^{10}$ seconds in real (physical) time translates to
approximately $10^{11}-10^{13}$ years of computer time with present
technology, and is not reachable in the foreseeable future by either
advances in hardware or by conventional algorithms.
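
The arithmetic behind this crude estimate can be reproduced in a few
lines of Python. The numbers below are those quoted above; the
assumption of roughly one interacting pair per particle per step is
ours, added only to make the bookkeeping explicit.

  # Back-of-the-envelope estimate of CPU time needed per second of
  # simulated MD time, using the figures quoted in the text.
  clock_hz           = 1e8    # operations per second per processor
  dt                 = 1e-15  # seconds per MD time step
  ops_per_pair       = 100    # operations per interacting pair
  particles_per_proc = 100    # particles handled by each processor
  pairs_per_particle = 1      # assumed average (not stated in the text)

  steps_per_real_second = 1.0 / dt                          # 1e15 steps
  ops_per_step = (particles_per_proc * pairs_per_particle
                  * ops_per_pair)                           # 1e4 operations
  cpu_seconds = steps_per_real_second * ops_per_step / clock_hz  # ~1e11 s
  print("%.1e CPU-seconds per second of real time" % cpu_seconds)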

It is essential, then, to develop algorithms that will exploit the
understanding of the underlying physical mechanisms to accelerate the
dynamics.  Such algorithms have been developed to look at equilibrium
properties of simple systems, but no one has been able to adapt this
approach to treat the more complicated models needed to understand
materials, or to treat even simple models far from equilibrium. The
problems that need to be overcome are:

1) Adapting the algorithms to continuum systems with a high degree of
spatial heterogeneity 

2) Dealing with the ``frustration'' found in systems where the
repulsive part of the interaction potential dominates the physics and

3) Making the algorithm {\sl dynamically faithful,} that is, modifying
the dynamics to obtain acceleration without destroying the ability to
obtain information about the long-time evolution of the system under
its natural dynamics.

The time line for developing these algorithms is hard to predict since
it requires understanding the fundamental relation between ``natural''
dynamical evolution and the way this evolution might be modeled on a
computer. However, the possible benefits are substantial. The algorithms
would be useful for understanding materials in geological systems as
well as materials used in civilian and military technologies.
They would also be useful in other large-scale computational problems
such as the temporal evolution of networks of earthquake faults.  The
cost of such development would be small, as is the risk.

Kenneth Larner -- Colorado School of Mines

Subsurface Imaging with Reflection Seismology

Imaging the Earth's subsurface by means of reflection seismology,
whether for environmental applications in the near-surface or for
hydrocarbon exploration at larger depths, poses computational
challenges on a grand scale.  Tens of terabytes of data are recorded in
modern 3D offshore seismic surveys, and the processing needed to
mitigate shortcomings in data acquired in the uncontrolled laboratory
that is the Earth, and to image the Earth's structure, lithology, etc.,
requires another 3-4 orders of magnitude more computation.  For a
typical such survey, full 3D prestack depth migration, the key step in
mapping the recorded seismic wave field into an image of the subsurface,
can take 6-12 months of CPU time on a 32-node SGI.  Moreover, the
nonlinear inversion processes for estimating the seismic wave speed
necessary to do accurate depth migration typically require many
iterations of the costly depth migration process --- and ideally one
would like to do the estimation of wave speed interactively.

Over the years, research has been aimed at developing algorithmic
approaches that minimize compromises in the quality of processing and
imaging while introducing dramatic shortcuts in the required
computation.  Large as these inversion and depth-migration problems
are, however, they are currently founded on simplistic models of
seismic wave propagation.  They generally assume that the Earth is
a fluid, thus ignoring the presence of shear waves; they do not
seriously model attenuation of the signal during propagation; and until
recently, they have had no ability to take into account the fact that
the Earth's strata are anisotropic.  Moreover, attempts to characterize
fluid-bearing reservoirs falter because relatively little is known
about wave propagation in fluid-filled porous media.  Likewise, no
adequate experimental results or effective-medium theory presently
exists for understanding wave propagation in media that are strongly
heterogeneous at all scales and thus likely to entail strong multiple
scattering of waves. Furthermore, even for today's relatively
simplistic theories, computational tools are not capable of handling
the problem of relating macroscopic observations to microscopic
complexity.

These issues are of first order in attempting to understand the
extremely heterogeneous near-surface of the Earth, which is the target
in environmental investigations and through which waves must travel en
route to deep exploration targets.  Coupled processes such as
seismo-electric phenomena and 4D multi-phase fluid flow in reservoirs
are also poorly understood.  Likewise, in a related method used in
applied geophysics, only the surface has been scratched in the large
problem of forward 3D vector electromagnetic wave propagation, let
alone inversion, in heterogeneous media.

Advances in these areas would measurably aid the finding and recovery
of dwindling and ever-harder-to-find energy resources.  The benefits to
the DOE's mission and to society generally are thus large.

The next 6-8 years should see measurable progress in the development of
the mathematical tools for describing wave propagation in fractured and
highly heterogeneous media. To make progress in understanding the
physics and the necessary mathematics, however, will require more time
as well as teams of physicists, mathematicians, and computation experts.
Developments in inversion of seismic data to take transverse isotropy
into account are happening at present, as are extensions to
orthorhombic media.  These will likely produce valuable spin-offs over
the next few years, along the way toward more comprehensive solutions.
Most of these problems will require higher-speed networks as well as
codes that operate on distributed parallel, likely heterogeneous,
computer systems.

Ki Ha Lee -- LBNL

 
Joint analysis of electromagnetic (EM), seismic, and hydrological data

Quantitative analysis of geophysical data has been an essential tool in
understanding various subsurface phenomena, such as changes in crustal
dynamics over long periods of time, the behavior of reservoir properties
in producing petroleum and geothermal fields, and the near-surface
hydrology that is critically important for understanding environmental
problems.  Imaging techniques used to analyze realistic geophysical
data require a great deal of computational resources.  The number of
unknowns involved in a typical imaging problem is on the order of 100
million.

Fluids play important roles in determining the physical properties of
rocks and soil.  For most crustal rocks the electrical resistivity
depends on ionic conduction in the pore fluids; hence resistivity is a
strong function of porosity and interconnectedness (fluid
permeability).  These are the very parameters that are important in
fault mechanisms, changes in reservoir properties, and the near-surface
fluid flow that transports toxic chemicals.  It has been observed that
seismic velocity and amplitude are also strongly affected by these
parameters.  Velocity distributions determined from 3-D seismic surveys
have been used to estimate formation porosity between wells.

If seismic and EM data are available at a common site, it is possible to
jointly analyze the combined data set to obtain better estimates of the
flow parameters: saturation, porosity, and fluid type.  As each
geophysical parameter is individually related to a combination of flow
parameters, only the simultaneous analysis of a combined data set would
result in the optimum estimation of flow parameters.  A critical
component for this approach to be successful is, however, the knowledge
that quantitatively relates geophysical parameters such as electrical
conductivity and seismic velocity to the flow parameters.  One can
obtain this information through laboratory experiments on rock samples
and well tests.  The relationships are very site-specific and
scale-dependent.

Joint analysis involves the combination of four interrelated disciplines:
EM, seismic, geohydrology, and rock physics.  The first three require
data acquisition and individual imaging capability on a common scale.
The hydrologic inverse problem is notoriously ill-posed.  However, when
used in conjunction with other information, it may provide valuable
information on well-field-scale permeability variations, which is often
the key parameter needed to design an effective environmental
remediation scheme.  The rock physics part will focus on laboratory
experiments under simulated conditions.  The resulting data can be
statistically analyzed to provide empirical relationships between
geophysical and hydrological parameters.

The approach to the joint analysis consists of minimizing the combined
misfits in the EM, seismic, and geohydrological data.  Empirical
relationships are used as constraints that ultimately connect the
parameters.  Due to the nonlinear coupling of the different parameters
involved, the anticipated scale of the joint analysis problem is at
least 10 times that of the individual problems.  Initial estimates
suggest that one needs access to a computing facility that can handle
one billion parameters at any given time.  The computing environment
may not have to be massively parallel; a multi-tasking mode of operation
can be equally effective.  Software that allows users to access these
computing facilities without having to modify their scalar algorithms
would be greatly useful.
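
A schematic form of such a combined objective function (the notation is
introduced here for illustration and is not taken from the summary) is
$$ \Phi(m) \;=\; w_{EM}\,\|d_{EM}-F_{EM}(\sigma(m))\|^2
      \;+\; w_{S}\,\|d_{S}-F_{S}(v(m))\|^2
      \;+\; w_{H}\,\|d_{H}-F_{H}(k(m))\|^2 \;+\; \lambda R(m), $$
where $m$ collects the flow parameters (saturation, porosity, fluid
type) on the imaging grid; $F_{EM}$, $F_{S}$, and $F_{H}$ are the EM,
seismic, and hydrologic forward models; $\sigma(m)$, $v(m)$, and $k(m)$
are the electrical conductivity, seismic velocity, and permeability
obtained from $m$ through the laboratory-derived empirical
relationships; and $R(m)$ is a regularization term.  The weights $w$
and $\lambda$ control the relative influence of each data set and of
the constraints.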

A 4-year effort may be required to complete the proof-of-concept
stage.  Three tasks are involved in the joint analysis: task 1, data
acquisition; task 2, joint inversion; and task 3, laboratory experiments
on rock samples with additional well testing.  The annual cost estimate
is $200K for task 1, $400K for task 2, and $300K for task 3.  The total
cost over the 4-year period is $3.6M.  This estimate does not include
the cost of computer time.

Larry R Myer -- LBNL

Questions:

What are the relationships between the microscopic properties and
characteristics of earth materials and their macroscopic properties,
where macroscopic refers to the scales of engineering interest, and
properties refers to mechanical, hydrologic, and geophysical
properties?  How can discontinuities be accounted for in these
relationships?

Benefits:

Simulation and prediction of the behavior of earth materials for
engineering applications impacts every aspect of society.  Energy and
resource extraction encompasses mining of minerals, oil and gas
recovery, and geothermal resource exploitation.  Simulation and
modeling are involved in exploration for these resources as well as in
their extraction.  Management and utilization of water resources
impacts society in many ways and is closely tied to energy extraction
and utilization; simulation and prediction of flow and transport are
integral to water management.  Design, construction, and maintenance of
infrastructure such as roads, tunnels, dams, and buildings require
prediction of the mechanical behavior of earth materials.  Evaluation
of the environmental impact of society's activities, as well as
remediation and mitigation of previous activities, requires simulation
of earth materials incorporating complex chemical and biological
processes.

The impact of a significantly improved understanding of macroscopic
earth material properties will be large.  Engineers have, in general,
learned to compensate for uncertainty through conservative design and
the application of empirical rules derived on a trial-and-error basis.
Major difficulties, expense, and mistakes occur when unprecedented
conditions or requirements arise.  A recent example of this is the
nuclear waste program.  Improved understanding of material properties
will lead to more cost-effective design of civil structures, discovery
of new mineral, oil, and gas resources, and more efficient and complete
extraction of these resources.

Problem Definition:

The major difficulty in deriving macroscopic properties from
microscopic ones lies in identifying a representative volume of material
whose properties can be scaled to a larger volume.  This difficulty
arises in large part from the extreme heterogeneity of earth
materials and, in particular, the presence of discontinuities at all
scales.  Microscopic discontinuities include grain boundaries and
inter- and intra-granular cracks.  Fractures and joints are ubiquitous
at the macroscopic scale, and tectonic faults and plate boundaries are
examples of meso-scale discontinuities.  Though there is some evidence
(hope) that properties such as length, stiffness, and conductance of
discontinuities may be fractal, the question of how the properties of
discontinuities scale is largely unanswered, and how to account for
these effects in extrapolating measurements from microscopic to
macroscopic scales is unresolved.

Direct measurement of properties at different scales is required.
However, these are extremely difficult and expensive experiments.  The
hope is that computational experiments will replace some significant
proportion of the required physical experiments.  At present such
experiments cannot be performed: if the complex mechanical, thermal,
hydrologic, and chemical processes are adequately modeled, then the size
and geometric structure must be oversimplified, and conversely.

Specific research areas:

1) Deformation and fracture of discontinuous earth materials.  Codes
are needed which can analyze deformation and fracture in assemblages of
blocks under arbitrary external and pore level loads.  The blocks could
be grains at the microscopic scale or blocks defined by fractures at
the macroscopic scale.  The physics is pretty well defined.
Significant advances in computational efficiency are required for
particle problems.  Domain decomposition and parallel computing methods
need to be applied.

2) Simulation of flow and transport.  Pore level computations have been
carried out but not for sufficiently large models to draw conclusions
about scale.  Nor have the simulations been efficient enough to be of
practical engineering use.  The physics and chemistry of many
multiphase multicomponent processes needs to be better understood and
incorporated into models.  Efficient ways of converting the complex
topography of the pore space into a numerical grid are needed.

3) Simulation of wave propagation in discontinuous media.  The effects
of discontinuities on wave propagation are only partially understood.
For some combinations of frequency, discontinuity stiffness, and
spacing it is appropriate to define effective-medium properties.
However, for other combinations, discontinuities need to be explicitly
included in models even at macroscopic engineering scales.

Simulations for the latter conditions have been limited by
computational cost, and 3-D simulations have not been performed.
Domain decomposition and parallel computing techniques need to be
applied.
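
A minimal quantitative illustration of the effective-medium regime
mentioned above, under the standard linear-slip
(displacement-discontinuity) assumption of parallel discontinuities
with spacing $H$ and specific stiffness $\kappa$ embedded in a matrix
of modulus $M$, is the long-wavelength effective modulus
$$ \frac{1}{M_{\rm eff}} \;=\; \frac{1}{M} \;+\; \frac{1}{\kappa H}. $$
This description holds only when the wavelength is much larger than the
discontinuity spacing; as frequency rises and the wavelength approaches
$H$, the effective-medium picture breaks down and the discontinuities
must be represented explicitly, which is the computationally demanding
case noted above.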

4) Visualization.  As model complexity increases, so does the
difficulty of interpreting results.  Development of visualization
capabilities coupled to the simulators is essential.  Huge benefits
would be gained, for example, from being able to visualize or "observe"
the movement of a particle in a 3-D pore space while at the same time
modifying input parameters to the model.  In solid mechanics it would
be beneficial to be able to visualize changes in the propagation of
fractures as loading conditions change.

5) Computational efficiency.  More efficient equation solvers and
adaptive gridding techniques need further development and application
to earth materials.  New approaches to numerical simulation may be
required.  For example, efficiency may be gained by integrating
equation solving and mesh generation with knowledge of the structure of
the material.  In both mechanical and flow problems, the heterogeneity
of the material results in localized regions that contribute more or
less to the process being modeled.  New methods for evaluating where
these regions are and how they are connected are needed.  It may thus
be possible to turn heterogeneity into an advantage in modeling.

David Yuen -- University of Minnesota

Geodynamical modeling: a computational challenge for the 21st century

The surface manifestations of geodynamical processes are revealed over
a wide range of spatial and temporal scales, from days to millions of
years and from inches to thousands of miles.  Because of the inherent
nonlinearity of the properties of rocks, numerical modeling plays an
important role in our understanding of the dynamical processes in the
shallow and deep parts of our planet, which are coupled by virtue of
crustal-mantle flow processes.

Numerical modeling of geodynamical processes, such as fault movements
and volcanic eruptions, demands greater and greater computer power as
we obtain more observational data and experimental constraints from
mineral physics.  The computational challenges facing geophysical
modelers are no less demanding than those confronting nuclear
engineers or astrophysicists.  Geoscientists also have many problems
requiring at least 10^9 grid points.

Although the fundamental physical laws may be different, the
nonlinearities in the governing equations are no less daunting in the
earth science realm. In geodynamics one faces nonlinear problems with
an intrinsic multiple-scale nature in both space and time because of
the many feedback processes.  Indeed, the problems involve multiple
physical and chemical processes.

Earth scientists have to face problems spanning from the atomic to the
geological scale.  For instance, rheology depends on many state
variables, one of them being grain size, which is coupled to the local
chemical kinetics and temperature. This is a problem which
geoscientists have already tackled, and tougher ones will yet arise.
In the propagation of faults, we have to consider thermal-mechanical
coupling in the same manner as materials scientists working on steel
alloys.

There are indeed many opportunities for geoscientists to learn and
benefit from an integrated multidisciplinary effort.  For instance,
recently wavelet transforms were applied in geophysics, not for data
compression, but for solving non-linear partial differential equations
with sharply varying physical properties, using fewer grid points.
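
As a toy illustration of why such wavelet representations pay off (this
Python sketch is ours and uses the PyWavelets package; it is not the
algorithm referred to above), a field with a sharp internal layer can
be represented to high accuracy by a small fraction of its wavelet
coefficients:

  import numpy as np
  import pywt

  # A 1-D field with a sharply varying internal layer on 1024 points.
  x = np.linspace(0.0, 1.0, 1024)
  u = np.tanh((x - 0.5) / 0.005)

  # Wavelet decomposition, then discard coefficients below a threshold.
  coeffs = pywt.wavedec(u, 'db4', level=6)
  thresh = 1e-3 * max(np.abs(c).max() for c in coeffs)
  kept = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]

  n_active = sum(int(np.count_nonzero(c)) for c in kept)
  u_rec = pywt.waverec(kept, 'db4')
  print(n_active, "active coefficients out of", u.size)
  print("max reconstruction error:", np.abs(u_rec - u).max())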

Finally, we will emphasize the dire need for developing cost-effective
solutions for visualization of extremely large (terabyte) data sets,
while at the same time maintaining a vigorous effort on developing
better and more robust numerical algorithms to get "more bang for the
buck" on our investments in large-scale computational facilities.

Arthur J Freeman -- Northwestern University

Large Scale Electronic Structure and Properties of Pure and 
Defective Materials: Modeling Linking Spatial and Energy Scales.

As distinct from pure (ideal) crystals, for which there is a well
developed theory, the term "real crystal" is used to emphasize the
crucial importance of considering specific defects, including: (i)
impurity or vacancy defects; (ii) dislocations (including misfit
dislocations at interfaces) or other faults; (iii) grain boundaries
(GB); and (iv) domain walls (for magnetic materials).  The structure of
these defects, their interactions, and their energetics are
characteristics which govern the macroscopic properties of real
materials on a fundamental level.  Extended defects such as
dislocations and GBs most directly manifest themselves in mechanical
properties, including the ductility and fracture properties of various
structural materials.
Interactions of extended and point defects determine fundamental
mechanisms of solid-solution hardening.  The structure and energetics
of domain walls and their interactions with lattice defects are
fundamental characteristics which determine remagnetization processes.
There are also a number of "secondary" effects determined by these
defects which for some problems or materials may become "primary".
Such phenomena as leakage current at metal/insulator interfaces, the
effects of dislocation density in semiconductor devices, or the
well-known electromigration problem are determined on a fundamental
level by specific defect properties.

Modeling the fundamental properties of such defects will play an
increasing role, since the progress of experimental techniques is much
slower than the development of hardware and software.  The greatest
challenge for modeling real-crystal properties is determined by the
multiscale nature of the problem when formulated on a very general
level.  On the one hand, these defects are too "big" to belong on the
microscale, but they are too "small" to be considered as macro-objects.
On the other hand, recent experience with atomistic simulations clearly
demonstrates that details of the complex non-central, non-pairwise
interactions in metals and intermetallics, which cannot be described
with simple models of interatomic interactions (such as pair potentials
or the embedded atom method), may have a strong effect on the
characteristics of such defects.  Thus, the problem cannot be reduced
in a universal way to multi-scale modeling (micro-scale electronic
structure calculations to fit potentials, followed by
mesoscale/atomistic simulations with millions of atoms on parallel
computers) but requires the development of methods for modeling based
on a natural linking of different length-scale descriptions.

Thus, the development of a strategy which comprises databases, improved
computational methods, and models which start from the electronic
structure and proceed one step up in essential length scales should be
a focus of the CSI.  One has to emphasize that it is a very timely and
reliable investment of effort: timely since it focuses on the next
step after the micro-scale to build a solid basis for further
macroscale modeling; reliable since it is based on microscale methods
that were developed in recent years, including ab-initio techniques and
expected improvements for large-scale simulations.

Obviously, however, the most rewarding strategy in this direction could
be the use of hybrid approaches.  The general idea of such approaches
can be illustrated using the Peierls-Nabarro model for dislocations.
Despite a number of assumptions and its simplicity, this model gives an
unprecedented example of naturally linked atomistic and continuum
descriptions.  The idea is exceptionally rich and goes far beyond this
particular case and its Peierls implementation.  This strategy should
be explored for various problems, including magnetic defects such as
domain walls.
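
Schematically, and written here for an edge dislocation (constants and
sign conventions vary among references), the Peierls-Nabarro energy
functional for the disregistry $f(x)$ across the glide plane reads
$$ E[f] \;=\; -\,\frac{\mu}{4\pi(1-\nu)}\int\!\!\int
   \frac{df}{dx}\,\frac{df}{dx'}\,\ln|x-x'|\,dx\,dx'
   \;+\; \int \gamma\bigl(f(x)\bigr)\,dx , $$
where the first term is the continuum elastic energy of the smeared-out
dislocation and the second is the atomistic misfit energy.  The natural
linking referred to above lies in the fact that the $\gamma$-surface
(generalized stacking-fault energy) entering the second term can be
taken directly from ab-initio calculations, while the elastic constants
in the first term come from continuum theory.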

Different approaches for hybrid simulation techniques may include:

1. Continuum/quasi-continuum mesoscopic models parametrized using
ab-initio methods.  Models in the spirit of the Peierls model are
suited for natural, feasible parametrization using ab-initio methods.
(The simplest, almost trivial, but successful examples of such models
are Rice-Wang for interfacial embrittlement and Rice-Thomson for
brittle-ductile behavior.)  Further analysis of these parametrized
models using both numerical and analytic techniques, with the
possibility of solving inverse problems, needs to be emphasized as
potentially very useful for computer-aided design of materials.

2. Atomistic simulations with adjustable interatomic interaction
potentials.  This will require the development of new concepts for
constructing interatomic potentials suitable for adjustment at each
step of the system's evolution, in keeping with results of electronic
structure calculations for the local environment.  In contrast with the
standard EAM, this should include not only the total charge density but
also components coming from particular chemical bonds that vary in
direction.

3. A hybrid of an ab-initio (full-potential) description in the critical
region (the area near a crack tip or an interface impurity) with
continuum elasticity employed for the rest of the simulation sample.

The benefits are clear:  

a) A consistent, reliable description of defect properties starting 
   from electronic structure.  
b) A solid basis for microstructure/macroscale simulations.  
c) A powerful universal methodology of hybrid computational methods.  
d) A basis for understanding particular defect properties and how to 
   relate results to chemical bonding or other information that may
   prove insightful for designers of materials.

As another example, to improve the mechanical properties of
high-temperature structural intermetallic alloys of great importance to
the aerospace industry, we will investigate the microscopic mechanisms
governing the deformation and fracture properties of these materials
using first-principles electronic structure methods.  These results
will be used in the design of new advanced materials with improved
mechanical properties.  We will develop effective ab-initio real-space
techniques and algorithms to overcome the drawbacks of both
band-structure and semiempirical methods for electronic structure
calculations.  These techniques will allow multi-scale simulations of
"real" materials, whose properties are essentially determined by their
defective structure, and will bridge the gap between a microscopic
quantum-mechanical description of the electronic structure and chemical
bonding and the mesoscopic phenomena which govern the mechanical
response of intermetallics.  Examples include dislocations and
impurity-dislocation interactions.  These real-space approaches will
permit large-scale investigations of the role of deformations and
defects in the mechanical, magnetic, transport, and optical properties
of a wide class of materials, including magnetic materials (giant
magneto-resistance materials, hard magnets, magneto-optical materials,
etc.), semiconductors, and high-temperature superconductors (HTSC).
Future theoretical developments of ab-initio real-space methods will
include corrections to the local density approximation, such as LDA+U
and beyond, effects of many-body interactions, and full-potential
corrections.

Taking advantage of the order-N scalability of real-space techniques,
highly effective and optimized parallel codes will be developed and
used for large-scale simulations.  As an example of the current status,
we have calculated the electronic structure of an edge dislocation,
modeled with a core of 100-200 non-equivalent atoms in a cluster of
~10,000 atoms; the time on a single ORIGIN 2000 processor is ~45 min.
for one iteration, with 50-100 iterations needed.  The scalability with
respect to the number of non-equivalent atoms is linear, and is almost
linear with respect to the number of processors.

Bruce N Harmon -- Ames Laboratory

Materials, Methods, Microstructure and Magnetism

The impact on societies and economies of the development of new
materials with unique or improved properties has a history extending
back thousands of years.  The material of choice may exhibit a desired
property only a few percent better than the next candidate material,
yet this may be enough to dominate a market and to generate sufficient
income to justify sizable research expenditures.  Often, in the process
of incrementally improving an existing material a new material will be
discovered that causes a revolution in technology.

The availability of computers operating in the teraflop range owes much
to the dramatic long-term improvements in materials and processing
techniques.  Besides manufacturing and market forces, there are
fundamental science issues demanding explanations of how large
aggregates of atoms behave collectively, particularly when the behavior
depends critically on temperature and on microstructure.  The strength
of materials, the quality of permanent magnets, and even the function
of biological molecules all depend on microscopic interactions over
distances larger than the scale of individual atoms and bonds.

Magnetism is an area where microstructure is vital but poorly
understood.  Consider this quote from a Reviews of Modern Physics
article:  "The technical magnetic characteristics... are extrinsic
inasmuch as they depend crucially on the microstructure of the
material.  The microstructure involves the size, shape, and orientation
of crystallites of the compound and also the nature and distribution of
secondary phases, which usually control domain-wall formation and
motion and hence determine the magnetization and demagnetization
behavior."  The market for magnetic materials is $10 billion/year and
$100 billion for all items containing magnets in the United States
alone.

The long-term goal for this topic is the understanding of the
microscopic, quantum mechanical interactions governing the magnetic
properties of technically important materials.  This entails accurate
and detailed determination of the relevant microstructures:
dislocations, grain boundaries, interfaces, impurities, and surfaces.
For this work a hierarchy of techniques is needed to span the length
scales involved.  Already, very accurate first-principles calculations
(hundreds of atoms) are used to create databases to fit parameters for
environmental tight-binding molecular dynamics methods capable of
accurate 10,000-atom simulations.  The development of such accurate
empirical methods is still an art.  Extending calculations to larger
numbers of atoms and to larger length scales is necessary for many
problems, but testing of algorithms for speed and accuracy is
progressing now.

The magnetic calculations need to proceed along a similar hierarchy.
First-principles methods have been developed to allow non-collinear
magnetic structures, and have been extended to parallel machines for
1024-atom simulations.  Running such codes with a stochastic thermal
bath will require teraflop-level computing.  For Heisenberg models with
exchange parameters determined from first principles, stochastic
equations of motion can be run for thousands of atoms on modern
parallel machines.  Accurate calculations of temperature-dependent
magnetic interactions near defects (of various sizes) will require
first-principles results feeding into more empirical methods.  Within a
few years the atomistic approach will overlap the programs of the
micro-magnetics community, and they will no longer have to rely on
continuum models with empirically determined parameters.
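
A minimal Python sketch of what such a stochastic spin-dynamics
calculation looks like is given below (an illustration written for this
summary, not the codes described above; the coupling, damping, and
temperature values are arbitrary): damped, thermally driven precession
of classical Heisenberg spins whose exchange field would, in practice,
be built from first-principles exchange parameters.

  import numpy as np

  # Langevin dynamics for a 1-D chain of classical Heisenberg spins.
  # Effective field on spin i: H_i = J*(S_{i-1} + S_{i+1}) plus a
  # random thermal field.
  rng = np.random.default_rng(0)
  N, J, alpha, dt, kT, steps = 256, 1.0, 0.1, 0.01, 0.1, 2000
  S = np.tile([0.0, 0.0, 1.0], (N, 1))   # start from a ferromagnetic state

  for _ in range(steps):
      # thermal field with variance set (schematically) by
      # fluctuation-dissipation
      h = rng.normal(scale=np.sqrt(2.0 * alpha * kT / dt), size=S.shape)
      H = J * (np.roll(S, 1, axis=0) + np.roll(S, -1, axis=0)) + h
      # damped precession: dS/dt = -S x H - alpha * S x (S x H)
      dS = -np.cross(S, H) - alpha * np.cross(S, np.cross(S, H))
      S = S + dt * dS
      S /= np.linalg.norm(S, axis=1, keepdims=True)   # keep |S_i| = 1

  print("average magnetization per spin:", S.mean(axis=0))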

John J Rehr -- University of Washington

Real-Space Many-body Methods for Optical Response

I. Proposal: Development of robust real-space codes for excited-state
electronic structure calculations that go beyond the independent-electron
approximation, in particular GW, TDLDA, etc. Applications are intended
primarily for excited states probed by the interaction between photons
and condensed matter, e.g., in synchrotron radiation experiments.  We
propose a systematic development of real-space many-body methods
applicable to general condensed systems and their implementation in
portable codes.

1) GW and Dielectric Response: The GW approximation provides a natural
"quasi-particle" description of excited states that is directly
applicable to optical response.  The GW approach is analogous to density
functional theory, but with an energy-dependent self-energy rather than
a local exchange approximation.  GW calculations require the dielectric
response function of a material, which can be evaluated using density
functional theory (DFT) calculations.  This additional calculational
overhead increases the complexity of the calculations compared to DFT
by an order of magnitude or more.
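
For orientation, the central quantity is the GW self-energy, which may
be written schematically as
$$ \Sigma(\mathbf{r},\mathbf{r}';\omega) \;=\; \frac{i}{2\pi}\int
   d\omega'\, e^{i\delta\omega'}\,
   G(\mathbf{r},\mathbf{r}';\omega+\omega')\,
   W(\mathbf{r},\mathbf{r}';\omega'), \qquad W=\epsilon^{-1}v , $$
where $G$ is the one-electron Green's function, $v$ the bare Coulomb
interaction, and $\epsilon$ the dielectric matrix referred to above.
Quasiparticle energies then follow from an equation of DFT form in
which the local exchange-correlation potential is replaced by the
energy-dependent, non-local $\Sigma(\omega)$.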

2) Dynamic Response - The GW approximation is a good approximation at
high excitation energies.  However, low-energy optical excitations can
also be treated with the TDLDA (time-dependent local density
approximation).  Also, calculations of optical response near an
absorption edge can depend on the relaxation of the system to the
creation of a core hole and photoelectron, i.e., the transient response
of the system.  A treatment of this problem requires algorithms for
treating time-dependent response and the cross-over from the sudden to
the adiabatic limit.  Generalizations of GW and the TDLDA will be
developed for such dynamic screening problems.

Benefits: This effort addresses the fundamental, computationally
challenging problems of optical and dielectric response, which are of
great importance in many scientific fields.  Presently there is a great
need for such an effort to complement the experimental work at several
major synchrotron radiation photon sources probing the properties of
complex materials.  This initiative is a timely and natural sequel to
current ground-state density functional approaches, addressing a range
of problems that go beyond the one-electron approximation.  The effort
would yield a quantitative many-body treatment of synchrotron photon
spectroscopies, e.g., theories of x-ray absorption and emission, thus
improving the utility of the major synchrotron facilities.  For
example, x-ray absorption spectra are used to determine the geometrical
structure of complex materials, and edge spectra contain electronic
and magnetic information.  These spectroscopies also provide
quantitative tests of theoretical developments.  Finally, our approach
would be based on the development of robust, automated, modular codes
that could be used reliably by other scientists, in a way similar to
our group's real-space x-ray absorption spectroscopy codes (the FEFF
codes).

Computational tools needed: algorithms for calculating dielectric response
based, for example, on full potential real space multiple scattering
algorithms. The real space approach is well suited for parallelized
algorithms. Parallelized large matrix utilities are also needed.
Effects of local vibrations and disorder could be added using results from
other CSI developments, e.g., algorithms for MD simulations.

Advances/barriers: Efficient algorithms for real space calculations of
dielectric response and efficient full potential, all electron real space
electronic structure algorithms are needed.  Though formal theories exist,
efficient algorithms for transient response calculations are not well
developed.

Projected time scale: 10 years.  The algorithms can be developed
hierarchically, starting, e.g., from a local electron-gas approximation
and successively adding refinements.  Time line: 2001-3, full-potential
generalization of our real-space multiple scattering codes with local
GW, and adaptation of our codes to TDLDA calculations; 2002-5,
successive improvements in both G and W; 2005-7, development of dynamic
response functions based on generalizations of GW and TDLDA; 2007-2010,
implementation in general optical response codes.

Cross-cutting: This work parallels interests in Workshop area 3: large
quantum mechanical systems and chemistry.  Presently our local
real-space multiple scattering muffin-tin based codes already require
complex non-sparse matrices of dimension of order 16Nx16N, where N is
the number of atoms in a cluster (typically 100-500), and require
storage of typically about 100 MB for an adequate treatment.  Present
calculations can be done on modern workstations in CPU hours.
Extensions to full potential and full GW will increase the complexity
by several orders of magnitude.

Priya Vashishta -- Louisiana State University

Billion-atom molecular dynamics simulations of processing, mechanical 
behavior, and fracture of nanostructured ceramics and ceramic matrix 
composites

Proposed research:

Atomic MD-FE Continuum Simulation of Dynamic Fracture:  Understanding
mechanical failure in ceramics and ceramic-matrix composites requires
microscopic examination of plasticity due to dislocation emission and
of the interaction of cracks with defects such as grain boundaries.  In
this connection, it is also important to have knowledge of stress
inhomogeneities in the system.  The hybrid approach will link atomistic
molecular dynamics (MD) simulations with continuum thermodynamic
approaches.  This is the single most challenging problem in the entire
field of simulations.  The atomistic-continuum hybrid simulation
techniques will cover time scales from a fraction of a femtosecond to
microseconds and length scales from angstroms to microns.  They will
have a profound impact on simulations of high-temperature ceramics,
ceramic-matrix composites, and MEMS.

Effect of Environment - Aging due to Oxidation:  Oxidation is one of
the major causes of damage, especially at high temperatures and under
stress.  For example, oxidation embrittlement of ceramic matrix
composites involves ingress of oxygen through matrix cracks in the
composite, and it drastically changes the structural performance.
Design and lifetime prediction of materials depend crucially on
understanding the effects of oxidation.  Most metals and alloys are not
stable against oxidation.

Processing of Nanostructured Ceramics and Ceramic/Matrix Composites:
The morphology and the micro- and macrostructure of the material as a
function of cooling rates, chemical additives, and environmental gases
will be investigated.  Simulation of the evolution of microstructure
from the cooling of ceramic melts is possible by using the hybrid
atomistic MD-continuum method on teraflop parallel computers.

Micro-Electro-Mechanical Systems (MEMS):  Currently there is a great
deal of interest in the fabrication of MEMS.  Efforts are underway in
many different laboratories to design complex systems, including micro
robots, by integrating sensors, processing circuitry, and actuators on
the same chip.  Using a hybrid electronic-atomistic-continuum approach,
it will be possible to simulate the electro-mechanical behavior of the
system under a variety of extreme conditions and hostile environments.

Relation to DOE mission and benefits:  

Lifetime extension of structural components requires simulation-based
prediction of aging problems before nondestructive or other evaluation
programs detect them.  Relevant materials include various ceramic
materials and ceramic-matrix composites with complex microstructures.
Reliable methods to include the effects of aging and microstructure in
the assessment of subcritical growth of flaws are essential.  The
proposed program will implement integrated, scalable MD and FE software
capable of incorporating these effects, which will be valuable to other
scientists at DOE laboratories.

Time scale for proposed tasks:

2001:  Billion-atom MD simulation of mechanical behavior and 
       fracture in silicon nitride in extreme environments.

2003:  Billion-atom MD plus FE continuum approach to dynamic fracture 
       in nanostructured ceramics and ceramic/matrix composites.
       Materials will include silicon nitride, silicon carbide, and
       alumina.

2005:  Evolution of nanostructures to microstructures using 
       atomistic/continuum hybrid simulations.

2007:  Effects of oxidation and corrosion on mechanical behavior and 
       fracture incorporating immersive, interactive visualization and 
       real-time remote collaboration using CAVE.

2010:  Simulation of MEMS on emerging Petaflop architectures -- friction,
       wear, and lubrication in submicron-size moving components.


High-performance computing resources for billion-atom MD:

Teraflop Computers:  A teraflop machine consists of 4,000 to 8,000
processors, each with a performance of 125 to 250 Mflops, connected via
a high-speed, high-bandwidth interconnect.  Whereas theoretical peak
teraflop performance on parallel supercomputers has been claimed,
sustained teraflop performance on a variety of applications will be
attained in the period 1999-2000.  There are large-scale computational
problems (simulations of real materials processing, high-temperature
ceramics, ceramic-matrix composites, and MEMS) which could benefit from
sustained teraflop computing.

Petaflop Computers:  Present thinking in the high-performance computing
community is that it is feasible to build a petaflop machine with
components which will be available in the years 2007-2010, assuming the
current growth rate in the performance of memory and processors.
This would imply a processor clock speed of 1.25 GHz and eight
eight-way parallel complex CPUs per processor chip, giving 80 gigaflops
of performance per node.  The machine would have 8,000 processing
nodes, giving a theoretical maximum performance of 0.64 petaflops.  A
superconducting design with a 200 GHz superconducting CPU and a
conventional memory subsystem is also feasible.  A more promising
architecture is the processor-in-memory (PIM) model.
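
The node and machine figures quoted above follow from a simple product
(assuming one floating-point operation per cycle per CPU), as this
short check illustrates:

  # Sanity check of the projected petaflop-machine figures quoted above.
  clock_hz   = 1.25e9              # projected processor clock speed
  cpus       = 8 * 8               # eight eight-way parallel CPUs per chip
  node_flops = clock_hz * cpus     # 8.0e10 = 80 gigaflops per node
  peak       = 8000 * node_flops   # 6.4e14 = 0.64 petaflops for 8,000 nodes
  print(node_flops / 1e9, "Gflops per node;", peak / 1e15, "petaflops peak")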

Parallel simulation algorithms and their implementations will have to
take into consideration these emerging developments if the goal of
simulation based virtual materials design is to be achieved in the
early part of the 21st century.

Need for scalable parallel-computing and visualization tools:

Space-Time Multiresolution Algorithms:  Molecular dynamics (MD) is a
powerful tool for the atomistic understanding of long-range
stress-mediated phenomena, phonon properties, and mechanical failure of
nanostructures.  For realistic modeling of these systems, however, the
scope of simulations must be extended to large system sizes and long
simulated times.  New space-time algorithms and physical models
encompassing multiple levels of abstraction are being developed.

Fast Multipole Methods:  The most prohibitive computational problem in
simulations is associated with the calculation of the long-range part
of the interatomic potentials.  To overcome this problem, space-time
multiresolution algorithms have been designed.  These include the
computation of the Coulomb interaction with the Fast Multipole Method
(FMM), which reduces the computation from O(N^2) to O(N) for an N-atom
system.  A multiple time-scale (MTS) approach is used to exploit the
disparate time scales associated with slowly and rapidly varying parts
of the interatomic interactions.  These multiresolution algorithms have
been implemented on various parallel computers using spatial
decomposition.
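
The multiple time-scale idea can be illustrated with a short Python
sketch (written for this summary; the force routines are placeholders
standing in for the cell-list and FMM kernels, not the actual
simulation code).  The expensive, slowly varying long-range force is
recomputed only once per outer step, while the cheap short-range force
is recomputed at every inner step.

  import numpy as np

  def short_range_force(x):
      # placeholder for an O(N) cell-list pair force (rapidly varying)
      return -x

  def long_range_force(x):
      # placeholder for the O(N) FMM Coulomb force (slowly varying)
      return 0.01 * np.sin(x)

  def mts_step(x, v, f_short, f_long, dt, n_inner):
      """One reversible multiple-time-step (r-RESPA style) update."""
      v = v + 0.5 * (dt * n_inner) * f_long      # half-kick, slow force
      for _ in range(n_inner):                   # inner loop, fast force
          v = v + 0.5 * dt * f_short
          x = x + dt * v
          f_short = short_range_force(x)
          v = v + 0.5 * dt * f_short
      f_long = long_range_force(x)
      v = v + 0.5 * (dt * n_inner) * f_long      # closing half-kick
      return x, v, f_short, f_long

  x = np.random.default_rng(1).normal(size=1000)
  v = np.zeros_like(x)
  f_s, f_l = short_range_force(x), long_range_force(x)
  for _ in range(100):
      x, v, f_s, f_l = mts_step(x, v, f_s, f_l, dt=0.001, n_inner=5)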

Dynamic Load Balancing:  Realistic simulations of fracture and
materials processing are characterized by irregular atomic
distributions.  One practical problem in simulating such irregular
systems on parallel computers is that of load imbalance which degrades
computing efficiency. This necessitates a dynamic load-balancing scheme
in which workloads are repartitioned adaptively during the simulation.

High Performance Programming Environments and Message-Passing
Interfaces:  Past efforts in developing large scale, massively parallel
applications have been hampered by the lack of a stable,
high-performance programming environment.  MPI provides an effective
mechanism for developing grand challenge materials simulations by
supporting code modularity.

Tera Scale Data Management:  Disk space and I/O speed present a major
bottleneck in large-scale materials simulations, which require storing
positions and velocities of billions of atoms.  This problem can be
addressed using data compression.  However, common compression schemes
perform poorly for this specific kind of data. Scalable compression
algorithms are needed to optimize the storage of molecular-dynamics
simulation data.

Interactive and Immersive Visualization:  A critical aspect of
large-scale simulations is the ability to represent the information
contained in massive amounts of data in a form, and via media, that
enhance both understanding and visual appreciation of the scientific
content.  Towards this objective, the use of a CAVE -- a fully immersive
and interactive, multi-viewer environment that links human perception
(audio, visual, and tactile) to the simulated world on parallel
machines -- is highly desirable.  The CAVE will address the primary
visualization paradigms for the materials-simulation effort by providing
adequate visual bandwidth for real-time interaction and immersion in
very large atomistic simulations.  High-performance network connections
will enable collaborative exploration of the large-scale datasets
resulting from simulation work.

Art F Voter -- LANL

Extending Simulation Time Scales

Recently there has been broadening interest in developing simulation
approaches to link the disparate length scales that control material
behavior.  Assuming that this problem is well addressed by other
authors on this panel, I focus on a related but sometimes overlooked
issue, that of time scales.

The use of molecular dynamics (MD) simulations in materials science has
increased rapidly in the last decade, due to both the improved quality
of available interatomic potentials and the increasing speed of
computers.  In addition, massively parallel computers (available to a
subset of researchers) now allow simulations of very large systems
(10^7-10^9 atoms) that seemed inaccessible just a few years ago.  In
contrast, the accessible time scales have increased only in proportion
to the speed of a single processor, and hence have remained anchored in
the nanosecond range (picoseconds for first-principles descriptions or
for very large systems).  These times are too short to study many of
the interesting and critical processes involved in plastic deformation,
transport, or annealing.

Recent developments offer hope of overturning this paradigm.  For
systems whose dynamical evolution can be characterized by infrequent
transition events, two new methods have been presented that can extend
the time scale significantly, reaching microseconds and perhaps
milliseconds.  The first approach, termed hyperdynamics, accelerates
the transition rate (e.g., for diffusive events) using a biased
potential surface in which basins are made less deep.  An especially
appealing feature is that, in some cases, the accessible simulation
time increases superlinearly with computer speed.  With the development
of more general bias potentials, this approach should become especially
powerful over the next 10 years.  The second approach, also for
infrequent-event systems, harnesses (for the first time) the power of
parallel computers to achieve longer time scales instead of larger
length scales.  Again, this should become dramatically useful within
the next few years as the typical desktop workstation evolves into a
platform with tens or hundreds of parallel processors.  Finally, I note
that these two methods can be used in conjunction to achieve a
multiplicative gain in simulation speed.
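
In the hyperdynamics formulation, for example, the physical time
accumulated by the boosted trajectory is estimated (schematically) as
$$ t \;\approx\; \sum_{i} \Delta t_{\rm MD}\;
   e^{\,\Delta V(\mathbf{r}_i)/k_B T} , $$
where $\Delta V(\mathbf{r}_i)$ is the bias potential evaluated at MD
step $i$.  The average boost factor $\langle e^{\Delta V/k_B T}\rangle$
is what grows as stronger and more general bias potentials become
affordable, which is the origin of the superlinear gain noted above.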

A research investment to investigate and develop these approaches, and
related methods for finding transition states efficiently, should have
a profound impact on our ability to connect to time scales that have
previously seemed hopelessly distant. This in turn assists in the
connection of length scales.  For example, in the annealing of a
radiation-damaged region, no adequate approach exists for
characterizing the complicated defect dynamics during the first few
microseconds, yet such an understanding is critical as input to kinetic
Monte Carlo and continuum models that can then predict macroscopic
properties over human time scales.  Other processes where extended time
scales are needed include the motion of dislocation kinks in a bcc
crystal, the dynamics of a crack tip at low strain rate, and the growth
of thin films.  These studies will clearly impact energy-related
problems, especially in the design of improved materials for fission
and fusion environments.

It is interesting to note the synergistic relationship between the
advance of computer speeds (and increasing parallelism) and the
simulation approaches that best take advantage of them.  For example,
neither of the two methods discussed above would have been very useful
if invented ten years ago, but with present computer speeds they begin
to offer a significant gain.  A similar effect can be seen in the
development of the new, powerful first-principles approaches that scale
as the number of atoms (N).  Until recently, no computer could have run
a case large enough to reach the breakeven point where N-scaling was
more efficient than a traditional algorithm.  Taking advantage of this
natural time evolution of the most efficient approach requires an
ongoing investment in method development, and the nature of the payoff
is not necessarily predictable.

John W Wilkins -- OSU

	Model potential suite for multi-scale modeling.

Proposal: systematic development of a suite of model potentials
for the simulation of long-time molecular dynamics of large-scale
defected materials, including semiconductor and metallic alloys,
composites, polymers, and proteins.

Types of potentials: range of applicability.  (presented in increasing
	order of difficulty of application and potential accuracy
	in mimicking first-principles calculations)

(1) classical potentials -- pair potentials plus three-body interactions:
	equilibrium structure especially for molecular-dynamics time-evolved 
	defected material; "prediction" of phase diagrams; starting 
	structures for next two potential types.

(2) effective-medium/embedded-atom potentials -- simplest treatment of 
	electronic degrees of freedom: defect energies in bulk and on
 	surfaces;  molecular dynamics of processes such as diffusion
	of defects in bulk and clusters on surfaces

(3) tight-binding potentials -- treats orbital character of
	electronic wavefunctions:  bonding in insulators, semiconductors
	and metals; defect energies and energetics.

Benefits: 

(a) Validation of potential suite for individual atoms or atom pairs
    allows systematic calculation of increasingly larger systems by 
    using computationally less-intensive potentials.

(b) Study of materials in non-equilibrium situations such as high
    temperature, strain, and time-dependent forces.

(c) Study of large-scale composites, multi-phase materials, strongly
    defected material. 

(d) Molecular dynamics studies on macroscopic time scales (microsecond
    to millisecond).

Computational tools needed: automatic first-principles and
model-potential codes running over many structures to build up a
database that allows parameter optimization.
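
A minimal sketch of that fitting workflow is given below (the
Morse-like pair form, the parameter names, and the stand-in database
entries are illustrative choices made here, not part of the proposal):
model-potential parameters are optimized by least squares against a
database of first-principles energies computed for many structures.

  import numpy as np
  from scipy.optimize import least_squares

  def pair_energy(params, structure):
      """Toy Morse-like pair potential summed over tabulated distances."""
      D, a, r0 = params
      r = structure["pair_distances"]
      return np.sum(D * (1.0 - np.exp(-a * (r - r0)))**2 - D)

  # Stand-in database: geometry data plus a reference ab-initio energy.
  database = [
      {"pair_distances": np.array([2.35, 2.35, 3.84]), "E_ref": -10.2},
      {"pair_distances": np.array([2.50, 2.50, 2.50, 4.08]), "E_ref": -12.9},
      {"pair_distances": np.array([2.35, 2.50, 3.84, 4.08]), "E_ref": -11.6},
      # ... many more structures: bulk, surfaces, defects, molten snapshots
  ]

  def residuals(params):
      return [pair_energy(params, s) - s["E_ref"] for s in database]

  fit = least_squares(residuals, x0=[3.0, 1.5, 2.4])
  print("fitted parameters (D, a, r0):", fit.x)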

Advances/barriers: understanding of what features of different
structures dominate parameter selection and model form.

Projected time scale:

The example of silicon -- the most studied single structure -- helps
indicate the time scale.  The classical potential, after the initial
Stillinger-Weber breakthrough, is still evolving to handle an
increasing range of problems: dimers on surfaces, the molten state,
defected material.  The EM/EA potential is in a more primitive state,
due less to lack of interest than to the greater attention paid to
classical and tight-binding potentials.  Much work needs to be done on
tight-binding potentials to cover a wide range of situations.  The
biggest challenge is the energetics of different bonding geometries;
for example, explaining the relevant geometries of bulk, graphitic, and
fullerene structures.

2001: agreement on criteria for judging potential form and parameters
2003: tight-binding potentials for single element structures
2004: tight-binding potentials for a few alloys
2005: pair-potentials for same elemental structures and same alloys
2007: effective-medium/embedded atoms potentials for above
2010: steady progress on all three schemes to build up potential suite

Computational infrastructure advances:  

Assume that scalable computing will steadily advance: specifically,
compilers and preprocessors/optimizers that are effective across
different platforms; shared memory architecture.

Cross-cutting computational science needs:

Most urgent need is for effective tools for debugging and monitoring
parallel code development and use; visual tools are essential.

The sparseness of tight-binding potentials calls for advances in
handling parallel sparse matrix-matrix operations.  Matrix operations
involving only subsets of the indices of the objects require advances
in optimizer schemes to produce efficient parallel code.  For example,
some of the indices could involve fast Fourier transforms, and the
optimizers must be able to handle these together with the sparse
operations.

Richard G Hoagland -- Washington State University

The Relation between Microstructure and Mechanical Properties:
Challenges to Multiscale Modeling

Modeling is making important contributions to understanding the
origins of mechanical properties of crystalline solids in critical
areas where experimental techniques are unable (currently and for the
foreseeable future) to extract key pieces of information about the unit
processes that influence strength, toughness, and ductility over a
range of environments and temperatures. I will cite three areas:

1. Defect interactions - atomic-scale problems involving elastically
    nonlinear interactions between defects.

2. Hardening - mesoscale phenomena involving the interaction between
    very large groups of dislocations.

3. Composites - continuum-level problems involving the deformation and
    fracture of inhomogeneous and multiphase materials.

Some relevant caveats and observations:  There are many problems that
remain to be explored even though they may require relatively modest
computational horsepower. Such problems are often overlooked in a rush
to find an application for new hardware and/or where there exists a
disconnect between programmers and materials scientists and engineers.
For example, some of the most fundamental issues concerning
defect-defect interactions have yet to be explored, even though
qualitative descriptions of many of these interactions have been around
for decades.  Good empirical interatomic potentials, such as the
embedded atom method, are OK for probing generic features of
atomic-scale problems.  In general, exploration of specific materials
requires better (faster) ab initio methods and probably enormous
improvements in hardware.  Two-dimensional problems are generally
computationally convenient at all scales.  3D problems typically grow
beyond the capability of commonly available hardware when typical
computational algorithms are used.

A short (and very incomplete) list of examples of areas where modeling
is currently making contributions (and could become a critical factor
in material development):

1. superplasticity and creep - suggest favorable grain boundary
    structures to augment sliding kinetics

2. ultra-high strength materials - suggest critical length scales and
    interfacial properties in nanophase and layered structures.

3. fracture - suggests methods for changing crack tip processes

4. high temperature composites - suggest types of microstructural
    arrangements that improve both low and high temperature properties.

Areas that would benefit from collaborative improvements include:

1. fluid mechanics - behavior of fluids in small interstices.

2. hybrid calculations - mixed EAM, ab initio, continuum.

3. parallelization of code.

Roger E Stoller -- ORNL

Primary Damage Formation and Microstructural Evolution in Irradiated
Materials

When materials are exposed to high-energy neutrons, the energy of the
incident particle is dissipated in a series of billiard-ball-like
elastic collisions among the atoms in the material. This series of
collisions is called a displacement cascade. In the case of crystalline
materials, the cascade leads to the formation of two types of point
defects: empty lattice sites called vacancies and atoms left in the
interstices of the lattice which are called interstitials. Small
clusters that contain several vacancies or interstitials can also be
formed. Although the time and spatial scales characteristic of
displacement cascades are only about $10^{-11}$ s and $10^{-8}$ m, respectively,
the time scale required for radiation-induced mechanical property
changes can range from weeks to years and the size of the affected
components can be as large as several meters in height and diameter.
For example, radiation-induced void swelling can lead to density
changes greater than 50% in some grades of austenitic stainless steels
and changes in the ductile-to-brittle transition temperature greater
than 200 C have been observed in the low-alloy steels used in the
fabrication of reactor pressure vessels. These phenomena, along with
irradiation creep and radiation-induced solute segregation have been
extensively investigated by both theoretical modeling and irradiation
experiments for a number of years.

The differences in the time and spatial scales of the phenomena
involved in radiation-induced microstructural evolution have led to the
use of several different methods of computer simulation to model
different components of the problem. For example, recent improvements
in computer technology and the interatomic potentials used to describe
atomic systems have broadly advanced the state of the art in
displacement cascade simulation using the method of molecular dynamics
(MD). Molecular dynamics simulations involving more than 1,000,000
atoms have been carried out in order to study high-energy displacement
cascades. The results of these simulations are quite detailed, but are
limited to simulation times of only about 100 ps. Monte Carlo (MC)
methods have been used to extend the effective time scale of the
atomistic simulations long enough (~10s of seconds) to investigate
point defect and solute atom diffusion, and some aspects of solute
segregation. Finally, kinetic models such as those based on reaction
rate theory have been used to investigate long-range diffusion and
microstructural evolution on the time scale of years and the spatial
scale of tens of micrometers. In order to relate the predicted
microstructural changes to mechanical property changes, simple
dislocation barrier hardening models are typically used.
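
To make the reaction-rate-theory component just mentioned concrete, the
sketch below integrates the standard mean-field balance equations for
vacancy and interstitial concentrations.  It is an illustration of the
class of model referred to, not the actual codes, and every parameter
value is a placeholder.

    # Hedged sketch: mean-field rate-theory balance equations for vacancy
    # (Cv) and interstitial (Ci) concentrations; all numbers are placeholders.
    from scipy.integrate import solve_ivp

    G = 1.0e-6                 # defect production rate (illustrative)
    R = 1.0e2                  # vacancy-interstitial recombination coefficient
    kv, ki = 1.0e-3, 1.0e-3    # loss rates to fixed sinks (illustrative)

    def rhs(t, y):
        Cv, Ci = y
        recombination = R * Cv * Ci
        return [G - recombination - kv * Cv,   # dCv/dt
                G - recombination - ki * Ci]   # dCi/dt

    sol = solve_ivp(rhs, (0.0, 1.0e7), [0.0, 0.0], method="LSODA", rtol=1e-8)
    print(sol.y[:, -1])        # approach to quasi-steady-state concentrations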

Although current models are fairly robust, there are significant
limitations in each area. The most detailed atomistic modeling (MD and
MC) has employed embedded-atom type potentials and has been limited to
pure metals. Simulations with higher-order interatomic potentials (such
as tight-binding potentials) are needed to verify the details of
defect energies and the behavior of point defect clusters, particularly
in transition metals such as iron. Interatomic potentials for metallic
alloys and ceramics are needed to investigate the behavior of materials
that are relevant to engineering structures. Both of these improvements
will increase the need for higher-speed computers and/or the
development of improved parallel computing algorithms to compensate for
the higher level of numerical complexity. The effective-medium kinetic
models are generally limited by a lack of detailed thermodynamic
information. They are able to simulate some of the effects of long
irradiation exposures by averaging out the details of primary damage
formation and the spatial dependence that would arise from local
composition variations. As such, they cannot properly account for
radiation-induced phase decomposition and precipitation. Improved
models relating mechanical properties to microstructure are also
required to account for the superposition of incremental changes in
complex, radiation-induced microstructures and to explain effects such
as radiation-induced flow localization. Although the use of more
detailed models in either of these latter two areas would increase
computational requirements, the greater need is for model development.
In each area, the need for visualization tools increases as data sets
become larger and more complex.

The specific needs of radiation damage modeling are directly related to
fundamental issues in other areas of materials science; e.g., questions
of defect properties (point defects, solutes, dislocations, ...) and
defect-defect interactions generally control material behavior.

Dieter Wolf -- ANL

Materials Durability and Lifetime Prediction: Connecting 
Atomic, Mesoscopic and Macroscopic Properties

(1) Opportunity.  A massive computational effort is needed to develop
models that will allow the prediction of the degradation behavior and
time to failure of polycrystalline materials, coatings and components
from fundamental, atomic-level materials properties. This behavior is
intimately tied to the evolution of polycrystalline microstructures
(e.g., the grain sizes and grain shapes, interfacial cracks, porosity)
under the influences of stress and temperature, giving rise to
irreversible processes (such as grain growth and recrystallization,
stress development, crack nucleation and growth, plastic deformation)
that result in the degradation and, ultimately, the failure of the
component. The main challenge is to establish two key links among the
three different length scales involved. First, the physical behavior at
the mesoscale, i.e., at the level of the interfaces, grain junctions
and dislocations in the microstructure, has to be linked to the
underlying atomic-level structure and composition of these key defects.
Second, the overall materials response to thermal and mechanical
driving forces has to be linked to the interplay between the underlying
interfacial and dislocation processes (involving for example grain
sliding, grain-boundary migration, cavitation, dislocation and crack
nucleation and propagation). Information on these types of processes is
inherently difficult or impossible to obtain from experiments (see Sec.
4). The conceptual advances needed to link the three length scales
therefore have to come from a hierarchically structured modeling
approach, combined with theory. No such approach is currently available.

- Atomistic modeling is limited by available computational resources to
  systems which are too small to be representative of an actual component.

- Modeling at the mesoscopic level is limited by insufficient atomic-level 
  understanding of the nature of the inhomogeneous regions of the material; 
  i.e., of the underlying interfacial processes and atomic-level mechanisms 
  that govern key aspects of microstructural evolution.

- Phenomenological theories can only be applied to highly simplified 
  microstructural models, with virtually no information on a variety of 
  crucial effects known only from atomistic modeling.

What is particularly missing at this stage of development is the
ability to simulate properties at the mesoscale, using the results of
large-scale atomistic simulations as input, in order to predict
macroscopic behavior for comparison with existing phenomenological
models and to guide the development of new ones.

(2) Approach.  In a small effort developed in recent years at ANL,
molecular dynamics simulations are used for the synthesis of
controlled, fully dense or porous, bulk or thin-film microstructures by
growth from a melt into which small, more or less randomly oriented
crystalline seeds are inserted. Being able to control the
misorientations and initial positions of the seed grains provides the
unique capability to manipulate the microstructure, for example, via
tailoring the distributions in the grain size and grain shapes, the
porosity, as well as the types of grain boundaries in the system. With
presently available computational resources the grain size and the
number of grains that can be considered are too small for any realistic
comparison with key experiments. However, the significant increase in
computer power expected from the Computational Sciences Initiative
would enable key processes and mechanisms taking place in model
microstructures to be identified at the atomic level. The insights and
certain key parameters that could be extracted from such simulations
could then be used as input for mesoscopic-level simulations.
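
As a hedged illustration of the seeded-melt setup described above (not
the ANL code itself), the sketch below generates only the two
quantities the text says are controlled, the initial positions and the
misorientations of the seed grains; everything else, including the MD
growth run, is omitted, and all values are placeholders.

    # Hedged sketch: seed positions and random misorientations for a
    # "growth from a seeded melt" simulation; the MD itself is not shown.
    import numpy as np

    rng = np.random.default_rng(0)
    n_seeds, box = 8, 200.0                     # illustrative values

    positions = rng.uniform(0.0, box, size=(n_seeds, 3))

    def random_rotation(rng):
        # Haar-random orthogonal matrix from QR of a Gaussian matrix,
        # sign-fixed and flipped if needed to give a proper rotation.
        q, r = np.linalg.qr(rng.normal(size=(3, 3)))
        q = q * np.sign(np.diag(r))
        if np.linalg.det(q) < 0:
            q[:, 0] = -q[:, 0]
        return q

    orientations = [random_rotation(rng) for _ in range(n_seeds)]
    # Each seed would be cut from a perfect crystal, rotated by its matrix,
    # placed at its position, and surrounded by melt before the MD run.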

(3) Benefits.  A dramatically improved fundamental understanding of how
tailored polycrystalline microstructures evolve under the effects of
temperature and stress will provide the guidance needed for a
systematic approach to the development of microstructurally and
interfacially engineered materials. Such guidance is likely to
facilitate major breakthroughs, for example, in the design of
fracture-resistant cast steels, toughened yet creep-resistant
high-temperature structural ceramics, hard and corrosion-resistant
coatings for cutting tools, thermal-barrier coatings for turbine-engine
applications, etc. Industrial companies, such as Caterpillar and
McDonnell-Douglas, have expressed a strong interest in this type of
simulation approach, which could provide a core tool linking
atomic-level simulations with macroscopic industrial-design tools,
such as the DOE sponsored Casting Process Simulator, CaPS.

(4) Parallel Experimental Efforts.  As mentioned above, experimental
information on the interfacial processes and atomic-level mechanisms
that control microstructural evolution is extremely difficult or
impossible to obtain. Moreover, even simple parameters characterizing
an evolving microstructure, such as the average grain size and
information on the grain shapes and grain junctions, are very difficult
to access. The modeling program outlined above should therefore be
accompanied by an experimental program on the non-destructive
characterization of evolving microstructures. In one such attempt,
presently in its infancy, high-energy x-ray scattering at the
Advanced Photon Source combined with advanced robotics techniques would
be used to image the grain boundaries and grain junctions in an
evolving microstructure. This effort is presently being formed among
teams from Carnegie-Mellon University, the University of Riso, the
European Synchrotron Radiation Facility in Grenoble and ANL.

(5) Computational Resources, New Developments.  In order to bridge the
length-scale gap to the mesoscale, the atomic-level simulations will
require the massive computational and graphic visualization resources
which the Computational Sciences Initiative would provide.
Simultaneously, however, key conceptual theoretical advances are needed
to identify exactly what type of  atomic-level information is needed
and the manner in which it is fed into the mesoscopic-level Monte-Carlo
type simulations.  Guidance for the development of such a conceptual
framework will come from in-depth analysis of key atomic-level
simulations on the evolution of designed model microstructures, which
will provide insights into the critical aspects in the physical
behavior of individual interfaces and grain junctions that control the
evolution of the microstructure as a whole.
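
One concrete (and deliberately oversimplified) example of the
mesoscopic Monte-Carlo class of simulation mentioned above is the
Q-state Potts grain-growth model sketched below.  Atomistically
computed boundary energies and mobilities would enter through the
energy function, which here is reduced to a uniform unlike-neighbour
penalty; all numbers are placeholders and the code is not the ANL
approach itself.

    # Hedged sketch: a Q-state Potts grain-growth Monte Carlo, the kind of
    # mesoscale model that atomistic input on boundary energies could feed.
    import numpy as np

    rng = np.random.default_rng(1)
    L, Q, kT = 64, 20, 0.5
    spins = rng.integers(0, Q, size=(L, L))    # grain-orientation label per site

    def boundary_energy(s, i, j, val):
        # unlike-neighbour count; a uniform boundary energy is assumed here,
        # whereas atomistic input would make it misorientation dependent
        nbrs = (s[(i+1) % L, j], s[(i-1) % L, j], s[i, (j+1) % L], s[i, (j-1) % L])
        return sum(1 for n in nbrs if n != val)

    def mc_sweep(s):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            new = rng.integers(0, Q)
            dE = boundary_energy(s, i, j, new) - boundary_energy(s, i, j, s[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                s[i, j] = new

    for _ in range(10):
        mc_sweep(spins)      # grains coarsen as total boundary length decreases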

Anthony C Hess -- PNL

Connection between geoscience and materials; computational issues.

The techniques used by both the material science community and the
geophysics/geochemistry community are identical up to certain time and
length scales (years and meters). This includes atomic scale methods,
such as solid state quantum mechanics, molecular dynamics, and Monte
Carlo techniques, micro-scale models that describe microstructural
evolution, and macroscopic strategies such as  finite-element
approaches and computational fluid dynamics. Understanding the response
of a metal or alloy to external temperature and pressure conditions
could, under the proper circumstances, be treated with the same
methodologies regardless of the origin of the problem. Researchers in
the earth sciences, however, regularly work on length and time scales
that are vastly larger and longer than those that appear in typical
material science applications. Natural systems are also
thermodynamically open, unbelievably heterogeneous and contain
biological systems. The strategies adopted by researchers working on
these larger time and length scales or directly in the field have no
direct parallel in the material sciences.

A range of techniques, including atomic scale theories, computational
fluid dynamics, finite element simulations, seismic imaging, and
transport models, will require machines of the order of 1-10 sustained
Teraflops in the near term, with data requirements in the tens of
petabytes. In addition to the computer hardware, a strong, sustained
commitment to scientific software development is needed, covering basic
tools (message passing, global arrays, languages, math libraries, etc.)
as well as end-user application codes.  Significant resources must also
be allocated for software development and for maintaining the large
research teams, across the host of application areas, needed to do the
scientific work. Interactive 3D visualization technology is also necessary to
understand the results generated by many disciplines. Today's computing
engines can easily generate gigabytes of data; tomorrow's methods and
computing machines will generate petabytes.  High speed networks
capable of moving this amount of data between researchers in reasonable
amounts of wall time must also be established (transfer rates in excess
of gigabytes/sec).
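
A rough order-of-magnitude check (not a figure from the talk)
illustrates why: moving a single petabyte ($10^{15}$ bytes) at a
sustained 1 gigabyte/sec takes $10^{6}$ seconds, roughly twelve days,
so data sets in the petabyte range demand either transfer rates well
beyond a gigabyte/sec or analysis and visualization close to where the
data are generated.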

William A Shelton -- ORNL

Computational Issues

A major focus of materials research in the future will be to establish
the microscopic foundations for the relationship between technical
magnetic properties and microstructure.  The achievement of the above
goals will require overcoming a number of major problems involving
microstructure (independent of magnetism), magnetism (independent of
microstructure), giant magneto-resistance, and thermal and transport
properties. The quality of permanent magnets depends on understanding
how large aggregates of atoms behave collectively, particularly when
the behavior depends critically on temperature and microstructure
(e.g.  defects and interfaces).  Accurate first principles and
semi-empirical methods for evaluating the properties of large systems
of atoms are needed.  The success of future theoretical  models will be
predicated on exploiting the power of massively parallel supercomputers
and developing a software environment for performing large scale
quantum simulations on microstructural length and time scales.

Achieving the above long-term goals will require significant new code
and algorithm developments that push the envelope of what is possible
in terms of length and time scales, as well as complexity of phenomena
that can be treated.  This would require the development of finite
temperature spin dynamics (non-collinear magnetic moments) within both
the O(N) ab-initio methods and the tight-binding molecular dynamics
(TBMD).  The tight binding molecular dynamics method also needs to be
extended to multi-component systems, transition metals and magnetic
materials for performing simulations with a sufficient number of atoms
to accurately treat long range magnetic correlations in alloys.  This
would ultimately require spin-orbit coupling, intrasite exchange, and
single site anisotropy TBMD parameters.

Developing these new codes and algorithms will require the
development of new models based on advanced computational methods.  For
example, the quasi-classical spin dynamics formalism can be cast as a
stochastic differential equation and requires the development of
advanced numerical methods for the solution of stochastic differential
equations that include various types of random components, either
additive or multiplicative.  In addition, new large scale first
principles methods can be developed using a wavelets basis which
results in a large complex nonsymmetric sparse matrix formalism that
that could be solved using preconditioned iterative methods or
nonsymmetric complex sparse matrix methods. Developing robust
non-linear optimization algorithms is necessary since the number SCF
iterations necessary to achieve convergence rapidly grows with
increasing system size.  Equally important is the need for improved
time stepping algorithms in order to perform the dynamical simulations
over the appropriate length of time.
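
To illustrate where the additive and multiplicative random components
enter, the toy sketch below integrates a scalar stochastic differential
equation with the simplest (Euler-Maruyama) scheme.  It is not the
spin-dynamics formalism itself; for multiplicative noise the choice of
stochastic calculus and higher-order schemes (e.g., stochastic Heun)
become exactly the kind of advanced numerical issue referred to above.
All parameter values are placeholders.

    # Hedged sketch: Euler-Maruyama steps for a toy SDE with additive or
    # multiplicative noise; the spin-dynamics equations are not reproduced.
    import numpy as np

    rng = np.random.default_rng(2)
    gamma, sigma, dt, nsteps = 1.0, 0.3, 1.0e-3, 10000

    def euler_maruyama(multiplicative):
        x = 1.0
        for _ in range(nsteps):
            dW = rng.normal(0.0, np.sqrt(dt))                 # Wiener increment
            noise = sigma * (x if multiplicative else 1.0) * dW
            x += -gamma * x * dt + noise                      # drift + diffusion
        return x

    print(euler_maruyama(False), euler_maruyama(True))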

Computational tools are needed to aid materials scientists in
developing or adapting computer codes that are not necessarily their
own.  A graphical user interface that graphically represents the
individual routines of an entire code would be useful when adapting a
large, unfamiliar code: it would allow one to plug new algorithms into
the code or to select and combine different routines.  This
environment would allow the materials scientist to evaluate various
routines of differing complexity within the entire code.  Finally, a
seamless computing environment would allow the materials scientist to
submit a job specifying only the time by which they would like it to
finish.  This would relieve the materials scientist of the
burdens associated with worrying about resource management issues
including scheduling, task migration, load balancing and  fault
tolerance.  This type of system would allow the scientist to
concentrate more on the science rather than on the computer science.

Jeffrey N Brooks -- ANL

Plasma/Surface Interaction Analysis for Fusion*

Motivation:  Understanding and control of the plasma surface
interaction (PSI) is probably the single most critical issue for
magnetic fusion power development.  The key PSI issues are boundary
material erosion by plasma particles, hydrogen and helium recycling,
and plasma contamination.  Integrated computer codes have been
successfully developed to analyze limited aspects of PSI phenomena.
Advanced computers and numerical techniques would permit us to
substantially advance our PSI predictive capability and aid the choice
and optimization of fusion surface materials and plasma regimes.  A
"virtual" tokamak, where numerical experiments could be conducted, is a
possibility with advanced computing.

Problem:  The phenomena to be analyzed are (1) net sputtering erosion
of fusion surface materials by charged and neutral particles, (2) heat
and particle removal at the surface, (3) surface evolution, melting,
and vaporization due to plasma high-power transients, and (4) effect of
plasma surface interactions on the edge and core plasmas.  These
require a wide range of integrated models/codes particularly for edge
plasma parameters and magnetic field geometry, sheath physics,
molecular dynamics and/or binary collision sputtering/reflection,
material thermal and mechanical response, 3-D line radiation transport,
and atomic and molecular processes of surface materials in the plasma.
Analysis of the self-consistent full boundary problem, not to mention
the non-linear problem (surface materials significantly changing the
plasma), requires substantially faster computers and numerical methods
than presently available.

Opportunity:  A two to three order of magnitude increase in computer
power, together with associated advances in numerical techniques, would
substantially improve the study of plasma surface interactions in
fusion reactors.  Since prototype fusion power reactors will cost ~5-10
billion dollars, and PSI issues are critical to their design and
operation, there is a very high potential for cost savings from
advanced computation.

Daryl Chrzan -- UC Berkeley

Statistical Mechanics at the Microscale

It is apparent that many outstanding problems in materials science
require theories to span many orders of magnitude in both space- and
time-scales.  A survey of existing theories suggests that our
understanding of the atomic and macroscopic scales is quite advanced.
What we really lack is the ability to connect these two types of
calculations.  This is, and has always been, one of the greatest
challenges faced by materials theorists.  The following is motivated by
a desire to develop a fundamental understanding of mechanical
properties in a broad range of materials.

The rapid increase in available computing power suggests new ways in
which we might approach these types of problems.  One way is to
envision creating a general theory capable of modeling a wide variety
of materials.  These theories are likely to involve some type of hybrid
technique in which atomic scale calculations are used to model small
scale behavior, and finite-element techniques are used to couple that
atomic scale behavior to larger scales.  (The work of Ortiz and
Phillips represents one
attempt to move in this direction.)  Certainly, these approaches should
be pursued.  If developed successfully, they offer a powerful modeling
tool destined to be applied in numerous fields.  (The equivalent of a
band-theory for mechanical properties?)

An alternate approach is to couple the scales on a material by material
basis.  The proposed technique is best understood through example.
Suppose we wish to understand the contribution of dislocation motion to
deformation of a BCC material under high temperatures and stresses.
Our first task is to determine the nature of dislocations in BCC
materials.  These are most often observed to be long, straight screw
segments.  It is thought that the screw segments are not mobile because
of the noncompact nature of the core.  In order for the dislocation to
move, the core must become compact, a thermally activated process.  It
is unlikely that the entire core will constrict, and this necessitates
the development of double kink pairs:  Segments of dislocations will
reside in adjacent Peierls valleys, and they will be joined by
near-edge oriented kinks.  The motion of the dislocation, then, takes
place through the lateral motion of these kinks.

At this point, a statistical analysis is necessary to connect the
scales.  One envisions a model in which double-kink pairs are formed
through a thermally activated process, and then move according to a set
of appropriately defined rates.  Their formation rates and mobilities
are determined by the applied stress and the long-ranged interaction
stresses.  (These interaction stresses can be computed from calculated
elastic constants and elasticity theory.) The kink formation
energies and rates may be obtained from atomic scale calculations of
the sort suggested by others at this workshop.  An important step,
then, becomes the application of ideas borrowed from statistical
mechanics to the analysis of the interactions of the microscale
particles, the dislocation segments.  A suitable analysis may yield
equations of motion for the dislocations that can then be used as input
for larger scale calculations. In addition, these simulations allow one
to determine which atomic-scale parameters are relevant to the larger
scale dislocation dynamics, and will serve to focus the atomic-scale
efforts.
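
A minimal kinetic Monte Carlo sketch of the picture just described is
given below: a dislocation line advances by thermally activated
kink-pair nucleation and by lateral kink migration.  The rates are
placeholders, and the long-range interaction stresses discussed above
are omitted; the point is only to show the structure such a statistical
model would have.

    # Hedged sketch: kinetic Monte Carlo for a dislocation line advancing by
    # double-kink nucleation and lateral kink migration; rates are placeholders
    # and long-range kink-kink interaction stresses are omitted.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 200                               # segments along the dislocation line
    h = np.zeros(N, dtype=int)            # Peierls-valley index of each segment
    nu_nuc, nu_mig = 1.0e-3, 1.0          # nucleation and migration rates (placeholders)

    def migratable(h):
        # a segment can advance by kink glide if a neighbour is one valley ahead
        return [i for i in range(N) if h[(i-1) % N] > h[i] or h[(i+1) % N] > h[i]]

    t = 0.0
    for _ in range(20000):
        mig = migratable(h)
        rates = (nu_nuc * N, nu_mig * len(mig))     # two event classes
        total = sum(rates)
        t += rng.exponential(1.0 / total)           # residence-time algorithm
        if rng.random() < rates[0] / total:
            h[rng.integers(N)] += 1                 # nucleate a kink pair
        elif mig:
            h[rng.choice(mig)] += 1                 # a kink glides sideways

    print("mean advance (valleys):", h.mean(), "in time", t)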

The above is just one example.  One can envision similar statistical
analyses being applied to superplasticity and reactive metal
infiltration of ceramic preforms to create metal ceramic composites.
Also, a statistical analysis of the many-dislocation problem may be at
hand. These calculations may now be possible: computers have advanced
to the point where a single dislocation simulation will run on a
single processor.  Parallel processing machines allow one to obtain the
necessary statistics.

The tools necessary to accomplish this task are at hand.  Monte Carlo
techniques are advancing at a rapid pace, and they are easily
"parallelized."  In terms of other tools, what would be really nice is
a "toolbox" which would allow for rapid prototyping of statistical
models.  (The "toolbox" might be something along the lines of MatLab's
Simulink, but allow for more complicated models involving thousands of
elements, and flexible couplings between them.) As with all modeling
efforts, experiments must be designed and executed to verify/disprove
the ideas arising from the analysis.  In situ microscopy experiments
seem the most natural means to check many of these ideas.  The overlap
between these ideas, and those necessary for some geophysical
calculations is evident.  (For example, one may view reactive metal
infiltration as an invasion percolation model.)
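
As a purely illustrative sketch of the invasion-percolation picture
mentioned in passing (with uniform random thresholds and no
material-specific physics), the fragment below always invades the
perimeter site with the smallest threshold until the invading phase
spans the sample.

    # Hedged sketch: site invasion percolation on a square lattice, the kind
    # of model suggested above for reactive metal infiltration.
    import heapq
    import numpy as np

    rng = np.random.default_rng(4)
    L = 100
    threshold = rng.random((L, L))               # resistance of each site to invasion
    invaded = np.zeros((L, L), dtype=bool)

    frontier = [(threshold[L // 2, 0], L // 2, 0)]   # invade from the left edge
    while frontier:
        r, i, j = heapq.heappop(frontier)            # easiest perimeter site first
        if invaded[i, j]:
            continue
        invaded[i, j] = True
        if j == L - 1:                               # breakthrough: spanning cluster formed
            break
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and not invaded[ni, nj]:
                heapq.heappush(frontier, (threshold[ni, nj], ni, nj))

    print("fraction invaded at breakthrough:", invaded.mean())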

Stephen Foiles -- Sandia NL, Livermore

Importance of Entropy in Materials Modeling

One of the current goals of materials modeling is to bridge between
length scales.  One approach to this problem is to use atomic scale
simulations to determine critical materials properties that are then
input into continuum level descriptions.  As examples, the energy of
interphase interfaces is a crucial input to models of the nucleation
and growth of second phase precipitates.  Another is the importance of
fault energies to the determination of the structure and mechanical
properties of dislocations.  In most cases, the quantity needed is not
an internal energy, but rather a free energy.  The general problem of
the determination of the free energy of material defects in the
relevant case of complex multi-component systems and finite temperature
does not have a general computationally tractable and quantitatively
accurate solution.

There are various approaches to these issues that are currently
available.  Each of them has significant drawbacks, though.  MD or MC
approaches are typically performed based on approximate interatomic
interactions.  The accuracy of these interactions limits the accuracy of
the simulations.  Further, the free energy is not a simple ensemble
average, so the determination of the free energy of the defects in
general requires time consuming thermodynamic integrations.  Current
computational hardware and algorithms do not allow first-principles
based MD or MC to be performed on complex systems with high statistical
accuracy.  Methods based on cluster expansions of the energy in terms
of lattice variables are useful for coherent interfaces and faults
where there is a common lattice.  These models can use ab initio
methods to determine the energy parameters.  However, these methods
cannot treat general structural defects and only treat the
compositional entropy, not entropy due to nuclear motion.  With regard
to nuclear motion, it is now possible to compute the contribution of
phonons to thermodynamic properties based on ab initio density
functional techniques.  Again, computational limitations restrict these
applications to simple systems and this does not treat the
compositional entropy.  The development of computational techniques to
treat this class of problems accurately would be of great value.
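
The thermodynamic integration referred to above has the standard
coupling-parameter form $\Delta F = \int_{0}^{1} \langle \partial
U/\partial \lambda \rangle_{\lambda}\, d\lambda$.  The sketch below
shows only the quadrature structure; each ensemble average, in reality
a full MD or MC run, is replaced by a placeholder function, and the
integrand is hypothetical.

    # Hedged sketch: thermodynamic integration over a coupling parameter,
    #   dF = integral_0^1 <dU/dlambda>_lambda dlambda,
    # with each ensemble average (in practice an MD or MC run) replaced by
    # a placeholder so only the quadrature structure is shown.
    import numpy as np

    def ensemble_average_dU_dlambda(lam):
        return 1.0 - 0.5 * lam                        # hypothetical smooth integrand

    nodes, weights = np.polynomial.legendre.leggauss(8)   # Gauss-Legendre on [-1, 1]
    nodes = 0.5 * (nodes + 1.0)                           # map nodes to [0, 1]
    weights = 0.5 * weights

    dF = sum(w * ensemble_average_dU_dlambda(x) for x, w in zip(nodes, weights))
    print("free-energy difference (arbitrary units):", dF)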

