Large-scale Normal Mode Analysis for Polymer Materials

Joint work with Donald Noid and Bobby Sumpter at ORNL's Chemical and Analytical Sciences Division
and
Robert Tuzun at the State University of New York

1. Introduction

Normal Mode Analysis (NMA) is a classical technique for studying the vibrational and thermal properties of various molecular structures at the atomic level.

Although this technique is widely used for molecular systems consisting of a small number of atoms, performing NMA on large-scale systems is computationally challenging. Mathematically, the motion of the molecule near equilibrium is often described by a second-order ordinary differential equation

    M d^2x/dt^2 = -F x,

where x is the vector of atomic displacements in Cartesian coordinates, M is the diagonal matrix of atomic masses, and the matrix F is a force constant matrix derived from the second derivatives of the potential with respect to the Cartesian coordinates.

The standard procedure for solving this equation is to diagonalize the matrix F by computing its eigenvalues and eigenvectors. Each eigenvector is often referred to as a normal mode with a certain vibrational frequency, which is determined by the corresponding eigenvalue. The overall dynamics of the molecular system can be described by a superposition of a number of linearly independent normal modes.
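As a minimal illustration of this procedure in Python, the sketch below diagonalizes a small dense force constant matrix; the random symmetric matrix is only a stand-in for the F produced by the derivative calculation described later.

    import numpy as np

    # Stand-in force constant matrix: symmetric positive semi-definite, 9x9.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((9, 9))
    F = A @ A.T

    # Diagonalize F: eigenvalues give squared vibrational frequencies,
    # eigenvectors are the normal modes.
    eigvals, modes = np.linalg.eigh(F)
    omega = np.sqrt(np.clip(eigvals, 0.0, None))

    # Any small-amplitude motion is a superposition of the modes:
    #   x(t) = sum_k c_k * cos(omega_k * t + phi_k) * v_k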

It is well known that the low-temperature behavior of the system (vibrations near an equilibrium configuration) consists mainly of the low-frequency normal modes. Therefore, the small eigenvalues and corresponding eigenvectors are of most interest. The lowest six eigenvalues are known to be exactly zero; the eigenvectors associated with these eigenvalues represent the pure translational and rotational modes. We have recently developed a numerical procedure for calculating the lowest 100 normal modes of polyethylene-type polymer particles and crystals with up to 24,000 atoms. The following figure shows a 6000-atom polyethylene particle we have studied.

In addition to studying the motion of each atom, we are also interested in the thermal properties of the material. These can be partially described by the heat capacity C_v, defined as

    C_v = k_B ∫ (hcw / k_B T)^2 exp(hcw / k_B T) / [exp(hcw / k_B T) - 1]^2 g(w) dw,

where k_B is Boltzmann's constant, c is the speed of light, h is Planck's constant, T is the temperature, w is the frequency in cm^-1, and g(w) is the probability density function of the frequency. Computing C_v requires us to determine the eigenvalue distribution of F.
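As a small illustration of this formula, the sketch below evaluates the discrete-mode analogue of the integral from a list of vibrational frequencies in cm^-1. The function name and the example frequencies are placeholders; in practice the frequencies would come from the eigenvalues of F.

    import numpy as np

    kB = 1.380649e-23       # Boltzmann's constant, J/K
    h  = 6.62607015e-34     # Planck's constant, J s
    c  = 2.99792458e10      # speed of light, cm/s (so h*c*w is in J when w is in cm^-1)

    def heat_capacity(freqs_cm, T):
        """Harmonic heat capacity (J/K) summed over the supplied frequencies."""
        w = np.asarray(freqs_cm, dtype=float)
        w = w[w > 0.0]                       # drop the six zero (translation/rotation) modes
        x = h * c * w / (kB * T)
        return kB * np.sum(x**2 * np.exp(x) / (np.exp(x) - 1.0)**2)

    # Example with made-up frequencies at 300 K:
    print(heat_capacity([50.0, 120.0, 300.0], T=300.0))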

2. Computational Challenge

Efficient computational methods have been developed to construct polymer particles that are as similar as possible to those generated experimentally. The geometry of the particle is optimized to reach a near-equilibrium configuration before NMA is performed. For polyethylene-type particles, the potential field is chosen to be a combination of
  • Harmonic/Morse potentials for bond stretches;
  • A harmonic potential for the bending angle between two bonds;
  • A truncated Fourier series for the torsional potential;
  • A Lennard-Jones 6-12 potential for the nonbonded interactions.
The model parameters have been well tested and shown to provide a realistic representation of the structure and vibrational spectroscopy of a number of polymer systems.
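For concreteness, the sketch below writes out simple Python forms of the potential terms listed above. The functional forms follow the list, but all parameter values are placeholders, not the fitted polyethylene parameters.

    import numpy as np

    def harmonic_bond(r, r0=1.53, k=300.0):
        return 0.5 * k * (r - r0)**2

    def morse_bond(r, r0=1.53, D=80.0, alpha=2.0):
        return D * (1.0 - np.exp(-alpha * (r - r0)))**2

    def harmonic_bend(theta, theta0=np.deg2rad(113.0), k=60.0):
        return 0.5 * k * (theta - theta0)**2

    def torsion(phi, a=(1.0, -0.5, 1.2)):
        # truncated Fourier series in the dihedral angle phi
        return sum(a_n * np.cos((n + 1) * phi) for n, a_n in enumerate(a))

    def lennard_jones(r, eps=0.1, sigma=3.4):
        sr6 = (sigma / r)**6
        return 4.0 * eps * (sr6**2 - sr6)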

Once the potential model is established, the major computational tasks in NMA include:

  • Taking the second derivatives of the potential with respect to the Cartesian coordinates. This yields the force constant matrix F;
  • Computing desired eigenvalues and eigenvectors of F.

2.1. Second derivative calculation

The brute-force way of calculating the second derivatives is quite complicated because the bending and torsional angles are often nontrivial functions of the Cartesian coordinates. Therefore, computing each entry of F may require hundreds of lines of code. Noid, Sumpter and Tuzun have recently devised a method that utilizes internal coordinates and the translational invariance principle to reduce the amount of calculation. These techniques lead to a very efficient way of generating the force constant matrix. The details can be found in reference [].
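For comparison, a brute-force alternative is sketched below: a finite-difference Hessian that can be used to spot-check a few entries of F. This is not the internal-coordinate method referred to above, and it is far too slow to be used on large particles.

    import numpy as np

    def numerical_hessian(potential, x, h=1e-5):
        """Finite-difference second derivatives of `potential` (a callable taking
        a flat array of Cartesian coordinates) -- for validation only."""
        n = x.size
        F = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                xpp = x.copy(); xpp[i] += h; xpp[j] += h
                xpm = x.copy(); xpm[i] += h; xpm[j] -= h
                xmp = x.copy(); xmp[i] -= h; xmp[j] += h
                xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
                F[i, j] = F[j, i] = (potential(xpp) - potential(xpm)
                                     - potential(xmp) + potential(xmm)) / (4.0 * h * h)
        return F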

Because non-bonded interactions are turned off when the distance between atoms exceeds a threshold, the F matrix is sparse. If the atoms in the system are ordered in an appropriate sequence, the matrix entries corresponding to the bonded interactions will appear near the diagonal. The following figure shows the nonzero pattern of the F matrix obtained from the derivative calculation performed on a 3000-atom polyethylene particle. Since the matrix is symmetric, only the lower triangular part is shown.
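A sketch of how such a sparse F can be assembled and inspected in Python with scipy.sparse is given below. The `interactions` list of (i, j, 3x3 block) triples is hypothetical; it stands in for the output of the derivative calculation.

    import scipy.sparse as sp

    def assemble_force_constants(n_atoms, interactions):
        """interactions: iterable of (i, j, block) with block the 3x3 coupling
        between atoms i and j (i == j gives a diagonal block)."""
        rows, cols, vals = [], [], []
        for i, j, block in interactions:
            for a in range(3):
                for b in range(3):
                    rows.append(3 * i + a); cols.append(3 * j + b); vals.append(block[a][b])
                    if i != j:  # mirror the block to keep F symmetric
                        rows.append(3 * j + b); cols.append(3 * i + a); vals.append(block[a][b])
        n = 3 * n_atoms
        return sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

    # matplotlib's spy(F) then shows the banded nonzero pattern when bonded
    # atoms are numbered consecutively.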

2.2. Large-scale eigenvalue calculation

Currently, we are mainly interested in the low-temperature behavior of the molecule. As mentioned above, this can be accurately described by the normal modes associated with low-frequency vibrations. Therefore, we need to compute the lowest few eigenvalues and eigenvectors of F.

Because we need only a small number of eigenvalues, and because F is large and sparse, a Krylov subspace method such as the Lanczos algorithm is an appropriate tool. However, it is well known that small and clustered eigenvalues converge slowly in the Lanczos process. Therefore, additional acceleration strategies are needed to improve the computational efficiency. We list some of them below.

The main tool we have used in our eigenvalue calculation is ARPACK.

Implicit Restart
ARPACK uses implicit restart to filter out the unwanted (high frequency) eigencomponents from the starting vector of the Lanczos process, thereby improving the convergence of the desired eigenvalues and eigenvectors. It allows us to compute the desired eigenvalues and eigenvectors within a Krylov subspace of small dimension.
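A sketch of this step in Python: scipy.sparse.linalg.eigsh wraps the same ARPACK routines, so the direct (unaccelerated) calculation of the 100 smallest eigenpairs looks roughly as follows, assuming F is the sparse force constant matrix from the assembly sketch above. The which='SM' option converges slowly for small, clustered eigenvalues, which is what motivates the accelerations below.

    from scipy.sparse.linalg import eigsh

    # Implicitly Restarted Lanczos via ARPACK; ncv bounds the Krylov subspace size.
    vals, vecs = eigsh(F, k=100, which='SM', ncv=250, maxiter=5000)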

Shift-invert
Further improvement in the convergence rate can be achieved by using the technique of shift-invert. That is, we compute eigenvalues and eigenvectors of the shifted and inverted matrix (F − σI)^-1, where σ is a target shift near the eigenvalues we are interested in. Under the rational transformation μ = 1/(λ − σ), the eigenvalues λ of F that are near σ become dominant eigenvalues μ of (F − σI)^-1. The eigenvectors remain the same. The dominant eigenvalues of the transformed problem and the corresponding eigenvectors converge rapidly in the Implicitly Restarted Lanczos (IRL) process. Eigenvalues of the original problem can be recovered from μ by λ = σ + 1/μ.

However, in order to carry out IRL on the transformed matrix, we must factor F − σI into LDL^T, where L is unit lower triangular and D is diagonal. Sparse triangular solves must then be performed at each step of the Lanczos iteration in place of the sparse matrix-vector (MATVEC) multiplication used in an ordinary Lanczos process.
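In Python this spectral transformation can be sketched as follows, again assuming F from the assembly step above. SciPy factors the shifted matrix with a sparse LU rather than the LDL^T factorization described above, but the role of the factorization, one solve per Lanczos step in place of a MATVEC, is the same. The shift value is a placeholder placed just below zero so that the factored matrix is nonsingular despite the six zero eigenvalues.

    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh, splu, LinearOperator

    sigma = -1.0e-3                                   # placeholder shift near the wanted eigenvalues
    n = F.shape[0]
    lu = splu((F - sigma * sp.identity(n)).tocsc())   # factor once
    OPinv = LinearOperator((n, n), matvec=lu.solve)   # sparse solves at each Lanczos step

    vals, vecs = eigsh(F, k=100, sigma=sigma, OPinv=OPinv, which='LM')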

Because the lower triangular factor L usually contains more nonzero elements than F, additional memory is required to store L. The following table shows the CPU time and memory required to compute the lowest 100 eigenvalues and eigenvectors of F matrices with different dimensions. The computation is carried out on a 2Gflop vector machine.

    n        CPU seconds    Memory (MB)
    9000     145            224
    18000    515            459
    36000    1472           1333


Our experience indicates that with shift-invert, the eigenvalue calculation can be done in a reasonable amount of time. However, the memory requirement is quite high. For a 24,000-atom system, more than 2 GB of memory is needed to compute the lowest 100 eigenvalues and corresponding eigenvectors.

Polynomial Transformation
An alternative to shift-invert is to work with p(F), where p is a polynomial. The advantage of a polynomial transformation is that no matrix factorization is required in the Lanczos iteration. The single matrix-vector (MATVEC) multiplication at each step of the ordinary Lanczos iteration is replaced by a sequence of MATVECs that yield y = p(F) x. For a symmetric eigenvalue problem, it is not difficult to find a polynomial that maps the small eigenvalues of F to dominant eigenvalues of p(F). However, separating clustered eigenvalues by a polynomial transformation can be difficult. This is shown in the following figure, in which a Chebyshev polynomial maps a few small eigenvalues of F to almost the same location on the y-axis.

In order to achieve better separation, a high-degree polynomial is often needed. As a consequence, more work is required at each step of the Lanczos iteration. Finding the optimal choice of degree is not an easy task.

Another problem associated with a high-degree polynomial is rounding error. When the degree of the polynomial becomes very high, computing y = p(F) x accurately can be difficult.
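A sketch of polynomial acceleration in Python is given below. It applies a Chebyshev polynomial of F to a vector with the three-term recurrence and hands the resulting operator to ARPACK. The damped interval [a, lam_max] and the degree are placeholders that would have to be tuned as discussed above; F is again assumed to be the sparse force constant matrix.

    import numpy as np
    from scipy.sparse.linalg import eigsh, LinearOperator

    def chebyshev_operator(F, a, b, degree):
        """LinearOperator computing T_degree((F - c I)/h) x, with [a, b] mapped to [-1, 1]."""
        n = F.shape[0]
        c, h = 0.5 * (a + b), 0.5 * (b - a)

        def matvec(x):
            y_prev = x                            # T_0 x
            y = (F @ x - c * x) / h               # T_1 x
            for _ in range(2, degree + 1):        # T_{k+1} = 2 z T_k - T_{k-1}
                y_prev, y = y, 2.0 * (F @ y - c * y) / h - y_prev
            return y

        return LinearOperator((n, n), matvec=matvec)

    lam_max = float(eigsh(F, k=1, which='LA', return_eigenvectors=False)[0])
    op = chebyshev_operator(F, a=1.0, b=lam_max, degree=20)  # even degree maps small
                                                             # eigenvalues to large positive ones
    vals_p, vecs = eigsh(op, k=100, which='LA')

    # Eigenvectors are unchanged, so the original eigenvalues can be recovered
    # from Rayleigh quotients:
    vals = np.array([v @ (F @ v) for v in vecs.T])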

A comparison of the direct IRL, shift-invert IRL, and polynomial-accelerated IRL methods is shown below. The calculation is performed on a 6000-atom (matrix size 18,000) polyethylene particle.

    Method          CPU seconds    Memory (MB)
    IRL             9204           74
    Shift-invert    515            459
    Polynomial      2859           74

Although the polynomial-accelerated IRL method appears to be a reasonable compromise between shift-invert and the direct IRL approach, we would like to further reduce the cost of the eigenvalue calculation.

Other Factorization-free Methods
Currently, we are looking into other factorization-free methods to improve the efficiency of the eigenvalue computation. In particular, we are investigating the feasibility of applying the Jacobi-Davidson or the Truncated-RQ algorithm to F. Both of these algorithms require solving a sequence of projected linear systems. However, it has been shown that these systems do not need to be solved to full accuracy. Therefore, a preconditioned iterative solver is preferred. The key to achieving satisfactory performance in these methods is to identify a good preconditioner.
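The sketch below shows the kind of inner solve these methods rely on: given an approximate eigenpair (theta, v) with residual r = F v − theta v, a correction is obtained by solving (F − theta I) t = −r only approximately with a preconditioned iterative method (the orthogonal projection against v used in full Jacobi-Davidson is omitted here for brevity). The diagonal (Jacobi) preconditioner is only a placeholder; identifying a genuinely effective preconditioner is the open question mentioned above.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import minres, LinearOperator

    def approximate_correction(F, theta, r, maxiter=50):
        """Inexact solve of (F - theta I) t = -r, truncated after `maxiter` steps."""
        n = F.shape[0]
        A = (F - theta * sp.identity(n)).tocsr()
        d = A.diagonal()
        d[np.abs(d) < 1e-12] = 1.0                           # guard against zero diagonal entries
        M = LinearOperator((n, n), matvec=lambda x: x / d)   # Jacobi preconditioner
        t, info = minres(A, -r, M=M, maxiter=maxiter)
        return t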

3. Beyond Eigenvalue Calculation

Once the eigenvalue calculation is completed, we would like to examine the lowest normal modes to identify interesting vibrational features of the particle. This will enable us to gain additional insight into building new and better materials. In order to do that, we must have an effective way to visualize normal modes with tens of thousands of elements. We have been working with ORNL's Viz Lab to develop effective visualization tools for displaying and manipulating normal modes. The following two figures provide a clear description of the two lowest normal modes computed. The left figure corresponds to a pure rotational mode, and the right figure to a pure translational mode.
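One simple way to feed a computed mode into such a visualization is sketched below: displace the equilibrium geometry back and forth along the eigenvector and write the frames as a multi-frame XYZ file that a molecular viewer can animate. The function name and file-format choice are illustrative, not the Viz Lab tools themselves.

    import numpy as np

    def write_mode_animation(filename, symbols, coords, mode, amplitude=0.5, n_frames=20):
        """coords, mode: (n_atoms, 3) arrays; symbols: list of element labels."""
        mode = mode / np.abs(mode).max()          # scale for a visible displacement
        with open(filename, 'w') as fh:
            for k in range(n_frames):
                s = amplitude * np.sin(2.0 * np.pi * k / n_frames)
                frame = coords + s * mode
                fh.write(f"{len(symbols)}\nmode animation frame {k}\n")
                for sym, (x, y, z) in zip(symbols, frame):
                    fh.write(f"{sym} {x:.6f} {y:.6f} {z:.6f}\n")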