# Continuum Mechanics 3: Spectral Representation

## Introduction

This article is part 3 in my series on continuum mechanics. The focus is on the spectral decomposition of the deformation gradient, and the left and right Cauchy-Green tensors. The goal of this series is to cover the necessary theory for understanding how modern constitutive models are written. This will allow you to read and understand articles and theory manuals related to material models.

The spectral decomposition is useful because it explicitly indicates the basis (coordinate system) in which a quantity is expressed!

## Example 1: Eigenvalue Definition

One way to define eigenvalues and eigenvectors is the following expression: $$\mathbf{A} \hat{\mathbf{n}}_i = \lambda_i \hat{\mathbf{n}}_i$$. I think that a useful way to better understand any theory is to code it up. This can make the theory less abstract, and more alive. For that reason, I will show Python implementations of most equations in this article. In the example below I first generate a random symmetric 3×3 matrix. I then calculate the eigenvalues and eigenvectors of this matrix. Note that since the matrix is symmetric, the eigenvalues will be real numbers. Finally I show that the left- and right-hand sides of the equation are equal. Cool.

Python Code⬇️

```python
import numpy as np

def make_sym_3x3():
    # Adding a random matrix to its own transpose gives a symmetric matrix
    Tmp = np.random.rand(3, 3)
    return Tmp + Tmp.T

def print_arr(label, var):
    print(label)
    print(np.array_str(var, precision=5))
    print("")

A = make_sym_3x3()
print_arr("A=", A)

# Symmetric matrix => real eigenvalues
eigval, eigvec = np.linalg.eig(A)
print_arr("eigval=", eigval)
print_arr("eigvec=", eigvec)

# The eigenvectors are the columns of eigvec
n1 = eigvec[:, 0]
print_arr("n1=", n1)

# Both sides of A n1 = lambda1 n1 should match
print_arr("A * n1 =", np.matmul(A, n1))
print_arr("lambda1 * n1 =", eigval[0] * n1)
```



Output ⬇️

```text
A=
[[0.45849 1.37849 0.56475]
 [1.37849 1.7532  0.71148]
 [0.56475 0.71148 1.27939]]
eigval=
[ 3.08211 -0.4234   0.83237]
eigvec=
[[ 0.48947  0.85261 -0.18296]
 [ 0.74785 -0.51834 -0.41479]
 [ 0.44849 -0.0662   0.89133]]
n1=
[0.48947 0.74785 0.44849]
A * n1 =
[1.5086  2.30495 1.3823 ]
lambda1 * n1 =
[1.5086  2.30495 1.3823 ]
```




## Example 2: Eigenvalue Decomposition

In this second example I will examine the equation: $$\mathbf{A} = \mathbf{Q}\boldsymbol{\Lambda} \mathbf{Q}^{\top}$$. This equation shows that a symmetric matrix A can be factored into an orthogonal matrix Q (whose columns are the eigenvectors), a diagonal matrix Λ (containing the eigenvalues), and the transpose of the same orthogonal matrix. Here’s the code:

Python Code⬇️

```python
import numpy as np

def make_sym_3x3():
    Tmp = np.random.rand(3, 3)
    return Tmp + Tmp.T

def print_arr(label, var):
    print(label)
    print(np.array_str(var, precision=5))
    print("")

A = make_sym_3x3()
print_arr("A=", A)

eigval, eigvec = np.linalg.eig(A)
print_arr("eigval=", eigval)
print_arr("eigvec=", eigvec)

# Diagonal matrix with the eigenvalues on the diagonal
Lambda = np.array([[eigval[0], 0, 0],
                   [0, eigval[1], 0],
                   [0, 0, eigval[2]]])
print_arr("Lambda=", Lambda)

# eigvec * Lambda * eigvec.T should reproduce A
Test1 = np.matmul(eigvec, Lambda)
Test2 = np.matmul(Test1, eigvec.T)
print_arr("eigvec * Lambda * eigvec.T=", Test2)
```




Output ⬇️

```text
A=
[[1.23124 1.0465  0.57428]
 [1.0465  0.53636 0.72278]
 [0.57428 0.72278 1.86985]]
eigval=
[ 2.82836  1.06325 -0.25416]
eigvec=
[[-0.55476 -0.63829  0.53369]
 [-0.46983 -0.28906 -0.83409]
 [-0.68666  0.71346  0.13954]]
Lambda=
[[ 2.82836  0.       0.     ]
 [ 0.       1.06325  0.     ]
 [ 0.       0.      -0.25416]]
eigvec * Lambda * eigvec.T=
[[1.23124 1.0465  0.57428]
 [1.0465  0.53636 0.72278]
 [0.57428 0.72278 1.86985]]
```




## Example 3: Spectral Decomposition of a Symmetric Matrix

Now, let’s take a look at the spectral representation of a symmetric 3×3 matrix: $$\mathbf{A} = \sum_{i=1}^3 \lambda_i \hat{\mathbf{n}}_i \otimes \hat{\mathbf{n}}_i.$$ In the code below I use the numpy einsum function to calculate the dyadic products. The output from running the code demonstrates that the equation is valid.

Python Code⬇️

```python
import numpy as np

def make_sym_3x3():
    Tmp = np.random.rand(3, 3)
    return Tmp + Tmp.T

def print_arr(label, var):
    print(label)
    print(np.array_str(var, precision=5))
    print("")

A = make_sym_3x3()
print_arr("A=", A)

eigval, eigvec = np.linalg.eig(A)
print_arr("eigval=", eigval)
print_arr("eigvec=", eigvec)

# The eigenvectors are the columns of eigvec
n1 = eigvec[:, 0]
n2 = eigvec[:, 1]
n3 = eigvec[:, 2]
print_arr("n1=", n1)
print_arr("n2=", n2)
print_arr("n3=", n3)

# einsum('i,j', a, b) is the dyadic (outer) product a ⊗ b
M1 = eigval[0] * np.einsum('i,j', n1, n1)
print_arr("M1=", M1)
M2 = eigval[1] * np.einsum('i,j', n2, n2)
M3 = eigval[2] * np.einsum('i,j', n3, n3)

# The sum of the three terms should reproduce A
MM = M1 + M2 + M3
print_arr("MM=", MM)
```




Output ⬇️

```text
A=
[[0.34217 1.64694 0.58637]
 [1.64694 1.28252 0.81616]
 [0.58637 0.81616 0.02437]]
eigval=
[ 2.87874 -0.90127 -0.32841]
eigvec=
[[ 0.56646  0.79112 -0.23074]
 [ 0.75424 -0.61053 -0.24163]
 [ 0.33203  0.03716  0.94254]]
n1=
[0.56646 0.75424 0.33203]
n2=
[ 0.79112 -0.61053  0.03716]
n3=
[-0.23074 -0.24163  0.94254]
M1=
[[0.92374 1.22994 0.54144]
 [1.22994 1.63764 0.72092]
 [0.54144 0.72092 0.31736]]
MM=
[[0.34217 1.64694 0.58637]
 [1.64694 1.28252 0.81616]
 [0.58637 0.81616 0.02437]]
```




## Singular Value Decomposition

After going through those examples, I can now finally attack the super cool Singular Value Decomposition (SVD). The SVD is like the eigenvalue decomposition, but it also works for unsymmetric matrices! That is quite handy, since the deformation gradient is unsymmetric. In short, the SVD can be written: $$\mathbf{F} = \mathbf{Q}_1 \boldsymbol{\Lambda} \mathbf{Q}_2^\top$$. Note that $$\mathbf{Q}_1$$ and $$\mathbf{Q}_2$$ are both orthogonal, but not the same. Also recall that an orthogonal matrix has the following property: $$\mathbf{Q}^{-1} = \mathbf{Q}^\top$$. The spectral representation of the deformation gradient can therefore be written: $$\mathbf{F}=\mathbf{Q}_1 \left[ \sum_{i=1}^3 \lambda_i \hat{\mathbf{e}}_i \otimes \hat{\mathbf{e}}_i \right] \mathbf{Q}_2^\top,$$ which can be rewritten as $$\mathbf{F}=\sum_{i=1}^3 \lambda_i (\mathbf{Q}_1\hat{\mathbf{e}}_i) \otimes (\mathbf{Q}_2\hat{\mathbf{e}}_i).$$ Finally, this can be simplified to: $$\mathbf{F}=\sum_{i=1}^3 \lambda_i\hat{\mathbf{n}}_i \otimes\hat{\mathbf{N}}_i.$$ In Python (numpy) the singular value decomposition can be obtained from the np.linalg.svd() function.
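As a quick numerical check (a minimal sketch; F below is just a random example matrix standing in for a deformation gradient), the following code verifies both that np.linalg.svd reproduces F and that the spectral sum over the singular values and vectors does too:

```python
import numpy as np

# Random (generally unsymmetric) 3x3 matrix as an example "F"
F = np.random.rand(3, 3)

# numpy returns F = Q1 @ diag(lam) @ Q2t, where Q2t is already transposed
Q1, lam, Q2t = np.linalg.svd(F)

# Reconstruct F directly from the three factors
F_rebuilt = Q1 @ np.diag(lam) @ Q2t

# Spectral form: F = sum_i lam_i * n_i ⊗ N_i,
# with n_i = Q1[:, i] and N_i = Q2t[i, :] (rows of Q2t = columns of Q2)
F_spectral = sum(lam[i] * np.outer(Q1[:, i], Q2t[i, :]) for i in range(3))

print(np.allclose(F, F_rebuilt))   # True
print(np.allclose(F, F_spectral))  # True
```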

## Cauchy-Green Tensors

Any general deformation can be uniquely decomposed into a rotation followed by a stretch, or a stretch followed by a rotation: $$\mathbf{F} = \mathbf{R}\mathbf{U} = \mathbf{v} \mathbf{R}.$$ You should read these equations from right to left. That is, first the stretch U is applied, and then the rotation R. Alternatively, the rotation R is applied first, and then the stretch v. This shows that U is expressed in the original (reference) frame, while v is expressed in the current (rotated) frame.
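The polar factors can be computed numerically from the SVD. The sketch below assumes $$\mathbf{F} = \mathbf{Q}_1 \boldsymbol{\Lambda} \mathbf{Q}_2^\top$$, so that $$\mathbf{R} = \mathbf{Q}_1\mathbf{Q}_2^\top$$, $$\mathbf{U} = \mathbf{Q}_2 \boldsymbol{\Lambda} \mathbf{Q}_2^\top$$, and $$\mathbf{v} = \mathbf{Q}_1 \boldsymbol{\Lambda} \mathbf{Q}_1^\top$$. The matrix F is just a random example (a real deformation gradient additionally has a positive determinant):

```python
import numpy as np

# Random example standing in for a deformation gradient
F = np.random.rand(3, 3) + np.eye(3)

Q1, lam, Q2t = np.linalg.svd(F)

R = Q1 @ Q2t                    # rotation (orthogonal)
U = Q2t.T @ np.diag(lam) @ Q2t  # right stretch (symmetric)
v = Q1 @ np.diag(lam) @ Q1.T    # left stretch (symmetric)

print(np.allclose(F, R @ U))            # True: F = R U
print(np.allclose(F, v @ R))            # True: F = v R
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthogonal
```

Note that both decompositions share the same rotation R; only the stretch tensor changes depending on whether it acts before or after the rotation.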

Since the deformation gradient F contains both stretches and rotations, it cannot directly be used to calculate stresses or strains. Instead, depending on the type of stress and strain, it needs to be expressed in terms of either U or v. Let’s define the Right Cauchy-Green tensor C: $$\mathbf{F}^\top \mathbf{F} = (\mathbf{U}^\top \mathbf{R}^\top) (\mathbf{R} \mathbf{U}) = \mathbf{U}^2 \equiv \mathbf{C},$$and the Left Cauchy-Green tensor (b): $$\mathbf{F}\mathbf{F}^\top = (\mathbf{v} \mathbf{R}) (\mathbf{R}^\top \mathbf{v}^\top) = \mathbf{v}^2 \equiv \mathbf{b}.$$ In other words, both Cauchy-Green tensors contain only stretching (no rotation). The Right Cauchy-Green tensor is expressed in the reference configuration, and the Left Cauchy-Green tensor is in the current configuration.
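These two products are easy to form numerically. In this sketch (F is again a random example matrix) the code also checks that both Cauchy-Green tensors share the same eigenvalues, namely the squared principal stretches $$\lambda_i^2$$:

```python
import numpy as np

F = np.random.rand(3, 3)  # random example deformation gradient

C = F.T @ F  # Right Cauchy-Green tensor (reference configuration)
b = F @ F.T  # Left Cauchy-Green tensor (current configuration)

# The principal stretches lam_i are the singular values of F
lam = np.linalg.svd(F, compute_uv=False)

# Both C and b have eigenvalues lam_i**2 (real and non-negative)
print(np.allclose(np.sort(np.linalg.eigvalsh(C)), np.sort(lam**2)))  # True
print(np.allclose(np.sort(np.linalg.eigvalsh(b)), np.sort(lam**2)))  # True
```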

If we introduce the following rule: $$(\mathbf{a} \otimes \mathbf{b}) (\mathbf{c} \otimes \mathbf{d}) = (\mathbf{b} \cdot \mathbf{c}) \mathbf{a} \otimes \mathbf{d}$$, then it becomes easy to show that the Right Cauchy-Green tensor can be written: $$\mathbf{C} = \mathbf{U}^2 =\mathbf{F}^\top \mathbf{F} = \sum_{i=1}^3 \lambda_i^2 \hat{\mathbf{N}}_i \otimes \hat{\mathbf{N}}_i.$$ And similarly, the Left Cauchy-Green tensor can be written: $$\mathbf{b} = \mathbf{v}^2 =\mathbf{F}\mathbf{F}^\top = \sum_{i=1}^3 \lambda_i^2 \hat{\mathbf{n}}_i \otimes \hat{\mathbf{n}}_i.$$
The unit vectors $$\hat{\mathbf{N}}_i$$ form the basis in the reference configuration, and $$\hat{\mathbf{n}}_i$$ form the basis in the current configuration. Finally, since the deformation gradient is expressed in terms of both sets of basis vectors, it is called a two-point tensor.
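To close the loop, these spectral sums can be checked numerically (again a sketch with a random example F), using the two sets of singular vectors as the two bases:

```python
import numpy as np

F = np.random.rand(3, 3)         # random example deformation gradient
Q1, lam, Q2t = np.linalg.svd(F)  # n_i = Q1[:, i], N_i = Q2t[i, :]

# C = sum_i lam_i^2 * N_i ⊗ N_i (reference basis)
C = sum(lam[i]**2 * np.outer(Q2t[i, :], Q2t[i, :]) for i in range(3))

# b = sum_i lam_i^2 * n_i ⊗ n_i (current basis)
b = sum(lam[i]**2 * np.outer(Q1[:, i], Q1[:, i]) for i in range(3))

print(np.allclose(C, F.T @ F))  # True
print(np.allclose(b, F @ F.T))  # True
```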