\(\newcommand{\beps}{\boldsymbol\varepsilon}\) \(\newcommand{\bsig}{\boldsymbol\sigma}\) \(\newcommand{\ud}{d}\) \(\newcommand{\us}{\mathrm{s}}\) \(\newcommand{\ba}{\mathbf{a}}\) \(\newcommand{\bb}{\mathbf{b}}\) \(\newcommand{\bc}{\mathbf{c}}\) \(\newcommand{\bt}{\mathbf{t}}\) \(\newcommand{\bu}{\mathbf{u}}\) \(\newcommand{\bw}{\mathbf{w}}\) \(\newcommand{\bN}{\mathbf{N}}\) \(\newcommand{\bB}{\mathbf{B}}\) \(\newcommand{\bD}{\mathbf{D}}\) \(\newcommand{\bK}{\mathbf{K}}\) \(\newcommand{\pder}[2]{\frac{\partial #1}{\partial #2}}\) \(\newcommand{\iD}{\boldsymbol{\mathcal{D}}}\) \(\newcommand{\mbf}[1]{\mathbf{#1}}\) \(\newcommand{\mrm}[1]{\mathrm{#1}}\) \(\newcommand{\bs}[1]{\boldsymbol{#1}}\) \(\newcommand{\T}{^\mathrm{T}}\) \(\newcommand{\myVec}[1]{\left\{ \begin{matrix} #1 \end{matrix} \right\}}\) \(\newcommand{\myMat}[1]{\left[ \begin{matrix} #1 \end{matrix} \right]}\) \(\newcommand{cA}[1]{\textcolor[RGB]{1,113,136}{#1}}\) \(\newcommand{cB}[1]{\textcolor[RGB]{195,49,47}{#1}}\) \(\newcommand{cC}[1]{\textcolor[RGB]{0,102,162}{#1}}\) \(\newcommand{cD}[1]{\textcolor[RGB]{0,183,211}{#1}}\) \(\newcommand{cE}[1]{\textcolor[RGB]{0,163,144}{#1}}\) \(\newcommand{cF}[1]{\textcolor[RGB]{97,164,180}{#1}}\) \(\newcommand{cG}[1]{\textcolor[RGB]{130,215,198}{#1}}\) \(\newcommand{cH}[1]{\textcolor[RGB]{153,210,140}{#1}}\) \(\newcommand{cI}[1]{\textcolor[RGB]{235,114,70}{#1}}\) \(\newcommand{cJ}[1]{\textcolor[RGB]{241,190,62}{#1}}\) \(\newcommand{cK}[1]{\textcolor[RGB]{231,41,138}{#1}}\)
1.1. Tensor basics#
In this page, we will briefly cover basic tensor concepts which will be used throughout the rest of the book. Give this page a quick look first, and keep referring back to it as you go through the rest of the content whenever the notation is unclear to you.
We start with a brief comment on the different notation systems we will be using. Scalars are always represented by lower-case symbols, for instance \(a\), \(b\) or \(c\).
For vectors and matrices, apart from writing down all their components in full, we will be using the following three notation systems:

Matrix notation:
Vectors represented by lower-case bold characters: \(\bb\)
Matrices represented by upper-case bold characters: \(\mbf{A}\)
Higher-order tensors cannot be represented.
Vector operations made unique by transposing them: \(a=\bb\T\bc\)

Tensor notation:
Vectors represented by lower-case bold characters: \(\bb\)
Matrices represented by upper-case bold characters: \(\mbf{A}\)
Higher-order tensors represented by bold calligraphic characters: \(\iD\)
Vector operations made unique through different symbols: \(a=\bb\cdot\bc\)

Index notation:
Vectors represented by a single subscript: \(b_i\)
Matrices represented by two subscripts: \(A_{ij}\)
Higher-order tensors represented by more subscripts: \(\mathcal{D}_{ijkl}\)
Operations made unique by summing over repeated indices: \(a=b_ic_i\)

where the repeated index \(i\) in \(a=b_ic_i\) implies that summation takes place: \(b_ic_i\equiv\sum_ib_ic_i\).
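To make the implied summation concrete, here is a minimal Python/NumPy sketch (an illustration added here, not part of the original text; the vectors b and c hold arbitrary values) that evaluates \(a=b_ic_i\) both with an explicit sum over \(i\) and with np.einsum, whose subscript string mirrors index notation:

```python
import numpy as np

# Arbitrary example vectors standing in for b_i and c_i
b = np.array([1.0, 2.0, 3.0])
c = np.array([4.0, 5.0, 6.0])

# Explicit summation over the repeated index i
a_explicit = sum(b[i] * c[i] for i in range(3))

# The same contraction with einsum; 'i,i->' sums over the repeated index i
a_einsum = np.einsum('i,i->', b, c)

print(a_explicit, a_einsum)  # both print 32.0
```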
As scalars require no further explanation, we can move straight to vectors.
Vectors#
First-order tensors are also known as vectors. In space \(\mathbb{R}^m\), they may be expressed in terms of their components. For instance, in a 3D space (\(m=3\)):
\[
\ba = \myVec{a_1 \\ a_2 \\ a_3} \quad\text{or}\quad a_i
\]
where in index notation it is usual to adopt indices \(i\), \(j\), \(k\) and \(l\) ranging between \(1\) and the number of dimensions of the PDE we are solving (i.e. \(m=2\) for 2D problems and \(m=3\) for 3D problems).
We often need to compute the \(L_2\) norm of vectors, given by:
\[
\|\ba\| = \sqrt{a_ia_i} = \sqrt{a_1^2 + a_2^2 + \cdots + a_m^2}
\]
which is therefore the square root of the sum of the squared vector components. When using index notation, remember that summation (in this case over \(i\)) is implied.
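As a quick numerical check (an added sketch assuming NumPy; the vector is an arbitrary example), the norm can be computed directly from its definition or with NumPy's built-in routine:

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])        # arbitrary example vector

norm_definition = np.sqrt(a @ a)     # square root of the sum of squared components
norm_builtin = np.linalg.norm(a)     # NumPy's built-in L2 norm

print(norm_definition, norm_builtin)  # both print 5.0
```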
Matrices#
Second-order tensors are also known as matrices. A matrix in space \(\mathbb{R}^{m\times n}\) may be expressed in terms of its components. For instance, with \(m=n=3\) we have:
\[
\mbf{A} = \myMat{A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33}}
\]
A matrix is said to be square when it has as many rows as columns (\(m=n\)). A square matrix is said to be diagonal when only its diagonal entries are non-zero:
\[
\mbf{A} = \myMat{A_{11} & 0 & 0 \\ 0 & A_{22} & 0 \\ 0 & 0 & A_{33}}
\]
and one important diagonal matrix is the identity matrix:
\[
\mbf{I} = \myMat{1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1}
\]
whose entries can be represented in index notation by the Kronecker delta function:
\[
\delta_{ij} =
\begin{cases}
1 & \text{if } i = j \\
0 & \text{if } i \neq j
\end{cases}
\]
A matrix is said to be symmetric when its off-diagonal entries are mirrored with respect to the diagonal:
\[
\mbf{A} = \myMat{A_{11} & A_{12} & A_{13} \\ A_{12} & A_{22} & A_{23} \\ A_{13} & A_{23} & A_{33}}
\]
By interchanging rows and columns we can get the transpose of a matrix:
\[
\left(\mbf{A}\T\right)_{ij} = A_{ji}
\]
which also implies that for a symmetric matrix \(\mbf{A}\) we can write \(\mbf{A}\T=\mbf{A}\). For a symmetric matrix we can also write in index notation:
\[
A_{ij} = A_{ji}
\]
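The following short NumPy sketch (illustrative values only, not taken from the text) verifies these definitions: the identity matrix has Kronecker-delta entries, and a symmetric matrix equals its own transpose:

```python
import numpy as np

I = np.eye(3)                     # identity matrix: I[i, j] = 1 if i == j else 0
print(I[0, 0], I[0, 1])           # 1.0 0.0

A = np.array([[2.0, 1.0, 0.0],    # an arbitrary symmetric matrix
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])

print(np.array_equal(A.T, A))     # True, since A_ij = A_ji
print(A.T[1, 2] == A[2, 1])       # True: transposing swaps row and column indices
```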
Finally, a matrix is said to be sparse if most of its entries are zero. For instance, diagonal matrices are intrinsically sparse, as are several important matrices involved in the Finite Element Method. The opposite of a sparse matrix is a dense matrix.
Coding FEM
The fact that certain matrices are sparse becomes important when implementing FEM in computer code. It is much cheaper computationally to only store and operate on the non-zero entries of sparse matrices instead of on their full versions. There are several clever ways this can be coded, for instance:
Coordinate list (COO): Instead of storing the full matrix, we store three 1D arrays with row indices i, column indices j, and values v that map all non-zero entries A[i,j]=v of sparse matrix \(\mbf{A}\);
Compressed sparse row (CSR): Similar to COO, but the array with row indices is compressed to size \(m+1\), where \(m\) is the number of matrix rows;
Compressed sparse column (CSC): Similar to COO, but the array with column indices is compressed to size \(n+1\), where \(n\) is the number of matrix columns.
Different encodings lead to different performance for certain operations (e.g. matrix multiplication) and work better or worse depending on the size and sparsity pattern of the original matrix.
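As an illustration of these storage schemes (a sketch assuming SciPy's scipy.sparse module; the matrix values are arbitrary and not taken from the text), a matrix can be built from (i, j, v) triplets in COO format and then converted to CSR or CSC:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Non-zero entries of an otherwise empty 4x4 matrix, stored as triplets
i = np.array([0, 1, 2, 3, 3])                 # row indices
j = np.array([0, 1, 2, 0, 3])                 # column indices
v = np.array([4.0, 5.0, 7.0, 1.0, 2.0])       # values

A = coo_matrix((v, (i, j)), shape=(4, 4))     # COO storage

A_csr = A.tocsr()   # compressed sparse row
A_csc = A.tocsc()   # compressed sparse column

print(A_csr.indptr)           # compressed row pointer array, size m + 1 = 5
print(A_csr @ np.ones(4))     # sparse matrix-vector product using only non-zeros
```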
Higher-order tensors#
We will also encounter tensors of order higher than two. They are more difficult to write down and visualize, so we will instead represent them using either tensor notation (e.g. \(\bs{\mathcal{D}}\)) or index notation (e.g. \(\mathcal{D}_{ijkl}\)).
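For completeness, here is a small NumPy sketch (random illustrative values, not from the text) showing that a fourth-order tensor \(\mathcal{D}_{ijkl}\) can be stored as an array with four indices and contracted with a second-order tensor using an index-notation-style einsum string:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((3, 3, 3, 3))   # fourth-order tensor D_ijkl: four indices
B = rng.random((3, 3))         # second-order tensor B_kl: two indices

# Double contraction A_ij = D_ijkl B_kl (summation over the repeated k and l)
A = np.einsum('ijkl,kl->ij', D, B)
print(A.shape)                 # (3, 3): the result is a second-order tensor
```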