\(\newcommand{\beps}{\boldsymbol\varepsilon}\) \(\newcommand{\bsig}{\boldsymbol\sigma}\) \(\newcommand{\ud}{\mathrm{d}}\) \(\newcommand{\us}{\mathrm{s}}\) \(\newcommand{\ba}{\mathbf{a}}\) \(\newcommand{\bb}{\mathbf{b}}\) \(\newcommand{\bc}{\mathbf{c}}\) \(\newcommand{\bt}{\mathbf{t}}\) \(\newcommand{\bu}{\mathbf{u}}\) \(\newcommand{\bw}{\mathbf{w}}\) \(\newcommand{\bN}{\mathbf{N}}\) \(\newcommand{\bB}{\mathbf{B}}\) \(\newcommand{\bD}{\mathbf{D}}\) \(\newcommand{\bK}{\mathbf{K}}\) \(\newcommand{\pder}[2]{\frac{\partial #1}{\partial #2}}\) \(\newcommand{\iD}{\boldsymbol{\mathcal{D}}}\) \(\newcommand{\mbf}[1]{\mathbf{#1}}\) \(\newcommand{\mrm}[1]{\mathrm{#1}}\) \(\newcommand{\bs}[1]{\boldsymbol{#1}}\) \(\newcommand{\T}{^\mathrm{T}}\) \(\newcommand{\inv}{^{-1}}\) \(\newcommand{\myVec}[1]{\left\{ \begin{matrix} #1 \end{matrix} \right\}}\) \(\newcommand{\myMat}[1]{\left[ \begin{matrix} #1 \end{matrix} \right]}\) \(\newcommand{cA}[1]{\textcolor[RGB]{1,113,136}{#1}}\) \(\newcommand{cB}[1]{\textcolor[RGB]{195,49,47}{#1}}\) \(\newcommand{cC}[1]{\textcolor[RGB]{0,102,162}{#1}}\) \(\newcommand{cD}[1]{\textcolor[RGB]{0,183,211}{#1}}\) \(\newcommand{cE}[1]{\textcolor[RGB]{0,163,144}{#1}}\) \(\newcommand{cF}[1]{\textcolor[RGB]{97,164,180}{#1}}\) \(\newcommand{cG}[1]{\textcolor[RGB]{130,215,198}{#1}}\) \(\newcommand{cH}[1]{\textcolor[RGB]{153,210,140}{#1}}\) \(\newcommand{cI}[1]{\textcolor[RGB]{235,114,70}{#1}}\) \(\newcommand{cJ}[1]{\textcolor[RGB]{241,190,62}{#1}}\) \(\newcommand{cK}[1]{\textcolor[RGB]{231,41,138}{#1}}\)
# 1.3. Tensor calculus
On this page we go through a few basic vector and tensor calculus results that will be used throughout the rest of the book. We build upon the notation and linear algebra concepts from the previous pages.
## Derivatives, gradients, divergence
In FEM we will often deal with spatial derivatives, and we will represent partial derivatives in two different notations. Let \(u(\mbf{x})\) be a function of the spatial coordinates \(\mbf{x}\). Some partial spatial derivatives of this function could be:
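For instance, writing the components of \(\mbf{x}\) as \(x_1, x_2, \dots\), a first and a second derivative of \(u\) can be written in full form or in the more compact comma notation used further below (the particular derivatives shown here are just illustrative examples):

$$
\pder{u}{x_1} = u_{,1},
\qquad\qquad
\pder{^2u}{x_1\partial x_2} = u_{,12}
$$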
We can gather the first-order derivatives into a gradient vector, which we represent using the operator \(\nabla\). Assuming \(\mbf{x}\) is three-dimensional, we have:
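$$
\nabla u = \myVec{ \pder{u}{x_1} \\ \pder{u}{x_2} \\ \pder{u}{x_3} }
$$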
which we can also neatly represent using index notation:
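$$
\left(\nabla u\right)_i = u_{,i} = \pder{u}{x_i}
$$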
Note above how the operator \(\nabla\) increases the dimensionality of its operand by one. We can generalize this result by taking the gradient of a vector, which results in a matrix:
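$$
\nabla\bu = \myMat{ \pder{u_1}{x_1} & \pder{u_2}{x_1} & \pder{u_3}{x_1} \\ \pder{u_1}{x_2} & \pder{u_2}{x_2} & \pder{u_3}{x_2} \\ \pder{u_1}{x_3} & \pder{u_2}{x_3} & \pder{u_3}{x_3} }
\qquad\text{or, in index notation,}\qquad
\left(\nabla\bu\right)_{ij} = u_{j,i}
$$

Here the convention is adopted that the derivative direction provides the first (row) index; the transposed definition is also commonly used.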
We can also define a divergence operator using the same symbol \(\nabla\). It plays the opposite role: it aggregates derivatives in all directions, reducing the dimensionality of its operand by one. For a vector it is defined as:
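$$
\nabla\cdot\ba = a_{i,i} = \pder{a_1}{x_1} + \pder{a_2}{x_2} + \pder{a_3}{x_3}
$$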
resulting in a scalar. We can define an analogous operation for a matrix, which then results in a vector:
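$$
\left(\nabla\cdot\bsig\right)_j = \sigma_{ij,i} = \pder{\sigma_{1j}}{x_1} + \pder{\sigma_{2j}}{x_2} + \pder{\sigma_{3j}}{x_3}
$$

where the sum runs over the first index of \(\bsig\); for a symmetric tensor (e.g. the stress tensor) contracting over the second index would give the same result.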
## Divergence theorem and integration by parts
An important result for deriving the Finite Element Method is the divergence theorem, also known as Gauss's theorem. It states that integrating the divergence of a vector \(\mbf{a}\) over a domain \(\Omega\) is equivalent to integrating the flux \(\mbf{a}\cdot\mbf{n}\) over the boundary \(\Gamma=\partial\Omega\) of the domain:
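$$
\int_\Omega \nabla\cdot\ba \,\ud\Omega = \int_\Gamma \ba\cdot\mbf{n} \,\ud\Gamma
$$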
where \(\mbf{n}\) is the outward unit normal vector to the boundary \(\Gamma\).
We can use this result to arrive at useful integration by parts expressions. Starting from the product of a scalar field \(a\) and a vector field \(\bb\), integrating its divergence over \(\Omega\) and employing the derivative product rule \((fg)'=f'g + fg'\), we have:
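$$
\int_\Omega \nabla\cdot\left(a\bb\right) \,\ud\Omega = \int_\Omega \nabla a\cdot\bb \,\ud\Omega + \int_\Omega a\,\nabla\cdot\bb \,\ud\Omega
$$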
Now, applying the divergence theorem of Eq. (1.29) with \(\mbf{a}=a\mbf{b}\), we can substitute its result into the left-hand side of Eq. (1.30) to get:
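$$
\int_\Gamma a\,\bb\cdot\mbf{n} \,\ud\Gamma = \int_\Omega \nabla a\cdot\bb \,\ud\Omega + \int_\Omega a\,\nabla\cdot\bb \,\ud\Omega
$$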
Finally, rearranging terms gives an equality that will be used in finite element derivations:
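$$
\int_\Omega a\,\nabla\cdot\bb \,\ud\Omega = \int_\Gamma a\,\bb\cdot\mbf{n} \,\ud\Gamma - \int_\Omega \nabla a\cdot\bb \,\ud\Omega
$$

Note how the derivative has effectively been moved from \(\bb\) to \(a\), at the cost of an additional boundary term.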
We can proceed in exactly the same way for integrating the divergence of a second-order tensor. In that case the divergence theorem reads:
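$$
\int_\Omega \nabla\cdot\bsig \,\ud\Omega = \int_\Gamma \bsig\T\mbf{n} \,\ud\Gamma
\qquad\text{or, in index notation,}\qquad
\int_\Omega \sigma_{ij,i} \,\ud\Omega = \int_\Gamma \sigma_{ij} n_i \,\ud\Gamma
$$

The transpose is a consequence of the index convention adopted above; for a symmetric tensor \(\bsig\T=\bsig\).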
and again using the product rule, now applied to the product of \(\bsig\) with a vector field \(\bw\), we arrive at the integration by parts expression:
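$$
\int_\Omega \bw\cdot\left(\nabla\cdot\bsig\right) \,\ud\Omega = \int_\Gamma \bw\cdot\bsig\T\mbf{n} \,\ud\Gamma - \int_\Omega \nabla\bw:\bsig \,\ud\Omega
$$

where \(\nabla\bw:\bsig = w_{j,i}\sigma_{ij}\) denotes the double contraction of two second-order tensors.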
Check the expression above and make sure you can see that all integrals evaluate to scalars.