# Linear Controls Notes

## 1 Linear Algebra

### 1.1 Determinant

$\left| \begin{array}{cc} x_{11} & x_{12} \\ x_{21} & x_{22} \end{array}\right| = x_{11}x_{22} - x_{12}x_{21}$

### 1.2 Null Space

${\rm null}({\bf A})\ \equiv\ \{ {\bf x}\ \arrowvert\ {\bf A}: m\times n \ \land\ {\bf x} \in \Re^n\ \land\ {\bf A} {\bf x} = 0 \}$

### 1.3 Span

${\rm span}({\bf v_1}, {\bf v_2}, \dots, {\bf v_n})\ \equiv\ \{ {\bf x}\ \arrowvert\ {\bf x} = c_1{\bf v_1} + c_2{\bf v_2} + \dots + c_n {\bf v_n},\quad c_1, c_2, \dots, c_n \in \Re \}$

### 1.4 Rank

${\rm rank}({\bf A})\ \equiv\ {\rm number\ of\ linearly\ independent\ columns\ of\ }{\bf A}$ ${\rm rank}({\bf A}) = {\rm rank}({\bf A}^T)$
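A quick numerical check of the rank identity (numpy assumed; the matrix is illustrative):

```python
import numpy as np

# Third column is the sum of the first two, so only two columns
# are linearly independent.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])

rank_A = np.linalg.matrix_rank(A)
rank_AT = np.linalg.matrix_rank(A.T)
```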

### 1.5 Column Space

$\mathcal{R}({\bf A})\ \equiv\ {\rm span}({\rm columns}({\bf A}))$

### 1.6 Row Space

${\rm row\_space}({\bf A})\ \equiv\ {\mathcal R}({\bf A}^T)$

### 1.7 Eigen

Let $$\lambda$$ be an eigenvalue of matrix $$\bf A$$ and $$\bf x$$ a corresponding eigenvector.

${\bf A}{\bf x} = \lambda {\bf x}$

Characteristic Equation:

$\lvert {\bf A} - \lambda {\bf I} \rvert = 0$ $({\bf A} - \lambda {\bf I}) {\bf x} = 0$
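A numerical sketch of the eigen relation (numpy assumed; the symmetric matrix is illustrative):

```python
import numpy as np

# Illustrative symmetric matrix; its eigenvalues are the roots of
# |A - lambda*I| = 0, here 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each column x of eigvecs satisfies A x = lambda x, so the
# residual A X - X diag(lambda) is zero.
residuals = A @ eigvecs - eigvecs * eigvals
```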

### 1.8 Cayley-Hamilton

Every matrix satisfies its own characteristic equation: with $$p(\lambda) = \lvert {\bf A} - \lambda {\bf I} \rvert$$,

$p({\bf A}) = {\bf 0}$

Thus, you can substitute $$\bf A$$ into its own characteristic equation. This can be used to reduce the order of a matrix polynomial: the characteristic equation expresses a high-order matrix power as a linear combination of lower-order powers of that matrix.
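A sketch of the order reduction for a $$2\times 2$$ matrix, where the characteristic polynomial is $$\lambda^2 - {\rm tr}({\bf A})\lambda + \lvert{\bf A}\rvert$$ (numpy assumed; the matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

tr = np.trace(A)           # coefficient from lambda^2 - tr*lambda + det = 0
det = np.linalg.det(A)

# Cayley-Hamilton: A^2 - tr*A + det*I = 0, so A^2 is a linear
# combination of A and I.
A2 = tr * A - det * np.eye(2)
```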

### 1.9 Similarity

${\bf P}^{-1} {\bf A} {\bf P} = {\bf B}\ \iff\ {\bf P} {\bf B} {\bf P}^{-1}= {\bf A}$

Similarity Transform: $${\bf A} \to {\bf P}^{-1} {\bf A} {\bf P}$$

### 1.10 Orthogonal Vectors

${\bf z} \in {\bf x}^\perp \iff {\bf z}^T{\bf x} = 0$

### 1.11 Matrix Exponential

$e^{\bf A} = \sum_{k=0}^\infty \frac{{\bf A}^k}{k!}$ $e^{{\bf A} t} = \mathscr{L}^{-1}\left\{(s {\bf I} - {\bf A} )^{-1}\right\}$
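The series definition can be checked against a library implementation by truncating the sum (numpy/scipy assumed; the matrix is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Rotation generator; expm(A) is a rotation by 1 radian.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Truncated power series: S accumulates I + A + A^2/2! + ...
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ A / k    # term is now A^k / k!
    S = S + term
```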

### 1.12 Inverse

${\bf A} {\bf A}^{-1} = {\bf I}$

For matrix $${\bf A} \in \Re^{2\times 2}$$: \begin{eqnarray} {\bf A} = \left[\begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right] \nonumber \\ {\bf A}^{-1} = \frac{1}{\left|{\bf A}\right|} \left[\begin{array}{cc} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{array}\right] \nonumber \end{eqnarray}

### 1.13 Higher Order Functions

Higher order functions – the derivative, integral, and Laplace transform – operate on matrices elementwise.

\begin{eqnarray} \frac{d {\bf A}}{dt} = \left[\begin{array}{cccc} \frac{ d a_{11}}{dt} & \frac{d a_{12}}{dt} & \cdots & \frac{d a_{1n}}{dt} \\ \frac{ d a_{21}}{dt} & \frac{d a_{22}}{dt} & \cdots & \frac{d a_{2n}}{dt} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{ d a_{n1}}{dt} & \frac{d a_{n2}}{dt} & \cdots & \frac{d a_{nn}}{dt} \\ \end{array}\right] \end{eqnarray}

### 1.14 Diagonalizable

${\bf A} = \left[ \begin{array}{cccc} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots &a_n \end{array} \right] \implies e^{\bf A} = \left[ \begin{array}{cccc} e^{a_1} & 0 & \cdots & 0 \\ 0 & e^{a_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{a_n} \end{array} \right]$ ${\bf A} = {\bf P}{\bf D}{\bf P}^{-1} \implies e^{\bf A} = {\bf P}e^{\bf D}{\bf P}^{-1}$

### 1.15 Nilpotent Matrix

If matrix $${\bf A}$$ is nilpotent with index $$q$$, i.e. $${\bf A}^q = 0$$, then the exponential series terminates:

$e^{\bf A} = \sum_{k=0}^{q-1} \frac{{\bf A}^k}{k!}$
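A sketch with a strictly upper-triangular (hence nilpotent) matrix, where the series needs only $$q$$ terms (numpy assumed; the matrix is illustrative):

```python
import math
import numpy as np

# Strictly upper-triangular, so A^3 = 0 (nilpotent with q = 3).
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(np.linalg.matrix_power(A, 3), 0)

# The exponential series terminates after q terms: I + A + A^2/2!.
eA = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(3))
```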

## 2 Stability

### 2.1 Definitions

stable
$$\forall\ \epsilon>0,\quad \exists\, \delta(\epsilon,t_0) > 0\ \arrowvert\ \lVert{\bf x}(t_0)\rVert < \delta \implies \lVert{\bf x}(t)\rVert < \epsilon\ \forall\ t \geq t_0$$
asymptotically stable
stable and $$\exists\, \delta'(t_0) > 0\ \arrowvert\ \lVert{\bf x}(t_0)\rVert < \delta' \implies \lim_{t \to \infty } {\bf x}(t) = {\bf 0}$$
global asymptotically stable
$$\bf x \to \bf 0\ {\rm as}\ t\to\infty \ \forall\ \bf x(t_0) = \bf x_0$$

### 2.2 Criterion

#### 2.2.1 Continuous

$\bf \dot{x} = Ax$ $\lambda_i = \beta_i + \jmath \omega_i$
Unstable
$$\exists\,\beta_i > 0$$ for a simple eigenvalue OR $$\exists\,\beta_i\geq 0$$ for a repeated eigenvalue
Stable
$$\beta_i \leq 0$$ for all simple eigenvalues AND $$\beta_i < 0$$ for all repeated eigenvalues
Asymptotically Stable
$$\beta_i < 0$$ for all eigenvalues
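The continuous criterion amounts to checking the real parts of the eigenvalues of $$\bf A$$ (numpy assumed; the damped second-order system is illustrative):

```python
import numpy as np

# xdot = A x for a damped oscillator; characteristic polynomial
# lambda^2 + lambda + 4, so both eigenvalues have real part -0.5.
A = np.array([[0.0, 1.0],
              [-4.0, -1.0]])

beta = np.linalg.eigvals(A).real
asymptotically_stable = bool(np.all(beta < 0))
```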

#### 2.2.2 Discrete

${\bf x}(k+1) = {\bf A}{\bf x}(k)$
Unstable
$$\exists\,\lvert\lambda_i\rvert > 1$$ for a simple root OR $$\exists\,\lvert\lambda_i\rvert\geq 1$$ for a repeated root
Stable
$$\lvert\lambda_i\rvert \leq 1$$ for all simple roots AND $$\lvert\lambda_i\rvert < 1$$ for all repeated roots
Asymptotically Stable
$$\lvert\lambda_i\rvert < 1$$ for all roots

### 2.3 Lyapunov's Direct Method

Employs a Lyapunov function, $$V({\bf x}): \Re^n \mapsto \Re$$. Analogous to physical energy, but it must satisfy certain mathematical conditions.

positive definite
for some region $$\Omega$$ about $${\bf x} = {\bf 0}$$, $\left(V({\bf 0}) = 0 \right)\ \land\ \left( V({\bf x}) > 0\ \forall\ \{{\bf x} \arrowvert {\bf x} \neq {\bf 0} \land {\bf x} \in \Omega \} \right)$
positive semidefinite
for some region $$\Omega$$ about $${\bf x} = {\bf 0}$$, $\left(V({\bf 0}) = 0 \right)\ \land\ \left( V({\bf x}) \geq 0\ \forall\ \{{\bf x} \arrowvert {\bf x} \neq {\bf 0} \land {\bf x} \in \Omega \} \right)$
negative definite
for some region $$\Omega$$ about $${\bf x} = {\bf 0}$$, $\left(V({\bf 0}) = 0 \right)\ \land\ \left( V({\bf x}) < 0\ \forall\ \{{\bf x} \arrowvert {\bf x} \neq {\bf 0} \land {\bf x} \in \Omega \} \right)$
negative semidefinite
for some region $$\Omega$$ about $${\bf x} = {\bf 0}$$, $\left(V({\bf 0}) = 0 \right)\ \land\ \left( V({\bf x}) \leq 0\ \forall\ \{{\bf x} \arrowvert {\bf x} \neq {\bf 0} \land {\bf x} \in \Omega \} \right)$
Stable
positive definite $$V({\bf x})$$ and negative semidefinite $$\dot{V}({\bf x})$$: the system is stable about the origin
Asymptotically Stable
positive definite $$V({\bf x})$$ and negative definite $$\dot{V}({\bf x})$$: the system is asymptotically stable about the origin
GAS
$V({\bf x}) > 0\ \forall\ {\bf x}\neq {\bf 0} \quad\land\quad V({\bf 0})=0 \quad\land\quad \dot{V}({\bf x})<0\ \forall\ {\bf x}\neq {\bf 0} \quad\land\quad V({\bf x}) \to \infty\ {\rm as}\ \lVert{\bf x}\rVert \to \infty$

## 3 State Transition Matrix

\begin{eqnarray} \dot{\bf x} = {\bf A}(t) {\bf x}(t) + {\bf B}(t) {\bf u}(t)\\ {\bf x}(t) = {\bf \Phi}(t,t_0) {\bf x}(t_0) + \int_{t_0}^{t} {\bf \Phi}(t,s){\bf B}(s){\bf u}(s) ds \end{eqnarray}

If $$\bf A$$ is constant:

• $${\bf \Phi}(t,t_0) = e^{(t-t_0){\bf A}}$$
• $${\bf \Phi}(t,0) = \mathscr{L}^{-1}\left\{\left({\bf I}s - {\bf A}\right)^{-1}\right\}$$
• $${\bf \Phi}(t,t_0) = {\bf \Phi}(t-t_0,0)$$

Otherwise:

• Let $${\bf B}(t,t_0) = \int_{t_0}^{t}{\bf A}(s)ds$$.
• If $${\bf A} {\bf B} = {\bf B} {\bf A}$$, then $${\bf \Phi}(t,t_0) = e^{{\bf B}(t,t_0)}$$
• $${\bf A} {\bf B} = {\bf B} {\bf A}$$ when $$\bf A$$ is constant or diagonal
• In general, if $${\bf A} = {\bf P} {\bf D} {\bf P}^{-1}$$ where $$\bf D$$ is diagonal, then $$e^{\bf A} = {\bf P} e^{\bf D} {\bf P}^{-1}$$.
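For constant $$\bf A$$, the transition matrix and its composition property $${\bf \Phi}(t_2,t_0) = {\bf \Phi}(t_2,t_1){\bf \Phi}(t_1,t_0)$$ can be sketched numerically (numpy/scipy assumed; the matrix is illustrative):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def Phi(t, t0):
    """State transition matrix for constant A: e^((t - t0) A)."""
    return expm((t - t0) * A)

# Composition: propagating 0 -> 2 equals propagating 0 -> 1 then 1 -> 2.
P = Phi(2.0, 0.0)
Q = Phi(2.0, 1.0) @ Phi(1.0, 0.0)
```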

## 4 Reachability and Observability

Note that complete controllability or observability allows arbitrary pole placement, while the lack thereof does not necessarily mean a given set of poles is impossible to produce.

### 4.1 Reachability Grammian

$W(t_0,t_1) = \int_{t_0}^{t_1} \Phi(t_1,s) B(s) B^T(s) \Phi^T(t_1,s) ds$ $x(t_0) = x_0 \to x(t_1) = x_1 \iff x_1 - \Phi(t_1,t_0)x_0 \in {\mathcal R}(W(t_0,t_1))$

### 4.2 Reachability Matrix

${\bf \Gamma} = \left[ {\bf A}^0 {\bf B}\ {\bf A}^1 {\bf B}\ \dots\ {\bf A}^{n-1} {\bf B} \right]$ ${\rm reachable}({\bf x}) \iff {\bf x} \in \mathcal{R}({\bf \Gamma})$ ${\rm controllable}({\bf x}) \iff e^{{\bf A}t}{\bf x} \in \mathcal{R}({\bf \Gamma})$ ${\rm rank}({\bf \Gamma}) = n \implies {\rm completely\ controllable}$
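Building $$\bf \Gamma$$ and applying the rank test can be sketched as follows (numpy assumed; the double-integrator system is illustrative):

```python
import numpy as np

# Double integrator: force input drives both position and velocity.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
# Gamma = [B, A B, ..., A^(n-1) B]
Gamma = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
completely_controllable = np.linalg.matrix_rank(Gamma) == n
```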

### 4.3 Observability Matrix

$\bf \Omega = \left[ \begin{array}{c} \bf C \\ \bf C \bf A \\ \bf C \bf A^2 \\ \vdots \\ \bf C \bf A^{n-1} \end{array} \right]$ ${\rm rank}(\bf \Omega) = n \implies {\rm completely\ observable}$
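The dual rank test on $$\bf \Omega$$ looks like this (numpy assumed; the position-only measurement is illustrative):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])   # measure position only

n = A.shape[0]
# Omega = [C; C A; ...; C A^(n-1)]
Omega = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
completely_observable = np.linalg.matrix_rank(Omega) == n
```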

## 5 Decomposition

### 5.1 Kalman Decomposition

Transformation by $\bf T = \left[\begin{array}{cccc} {\bf v}_{co} & {\bf v}_{c\bar{o}} & {\bf v}_{\bar{c}o} & {\bf v}_{\bar{c}\bar{o}} \end{array}\right]$ where: \begin{eqnarray} {\bf v}_{co} \equiv \left\{{\bf x} \arrowvert {\bf x} \in \mathcal{R}({\bf \Gamma}) \cap \mathcal{N}({\bf \Omega})^\bot\right\} \nonumber \\ {\bf v}_{c\bar{o}} \equiv \left\{{\bf x} \arrowvert {\bf x} \in \mathcal{R}({\bf \Gamma}) \cap \mathcal{N}({\bf \Omega})\right\} \nonumber \\ {\bf v}_{\bar{c}o} \equiv \left\{{\bf x} \arrowvert {\bf x} \in \mathcal{R}({\bf \Gamma})^\bot \cap \mathcal{N}({\bf \Omega})^\bot\right\} \nonumber \\ {\bf v}_{\bar{c}\bar{o}} \equiv \left\{{\bf x} \arrowvert {\bf x} \in \mathcal{R}({\bf \Gamma})^\bot \cap \mathcal{N}({\bf \Omega})\right\} \nonumber \end{eqnarray} with: \begin{eqnarray} \tilde{\bf x} = \bf T^{-1} \bf x \nonumber \\ \dot{\tilde{\bf x}} = \bf T^{-1} \bf A \bf T \tilde{\bf x} + \bf T^{-1} \bf B \bf u \nonumber \\ \bf y = \bf C \bf T \tilde{\bf x} \end{eqnarray} Noting that \begin{eqnarray} \bf C \bf T = \left[\begin{array}{cccc} \bf c_1 & 0 & \bf c_3 & 0 \end{array}\right] \nonumber \\ \bf T^{-1} \bf B = \left[ \begin{array}{c} \bf b_1 \\ \bf b_2 \\ 0 \\ 0 \end{array}\right] \nonumber \\ \bf T^{-1} \bf A \bf T = \left[ \begin{array}{cccc} A_{11}& 0 &A_{12} & 0 \\ A_{21}& A_{22}&A_{23} &A_{24} \\ 0 & 0 &A_{33} & 0 \\ 0 & 0 &A_{43} & A_{44} \end{array}\right] \nonumber \end{eqnarray} Since:

• Controllable states cannot affect uncontrollable states
• Uncontrollable states may affect controllable states
• Observable states may affect unobservable states
• Unobservable states cannot affect observable states

#### 5.1.1 McMillan Degree

The McMillan degree is given by the dimension of the intersection between the controllable and observable subspaces. That is, the $${\bf v}_{co}$$ portion of the Kalman Decomposition.

${\rm dim}\left(\mathcal{R}({\bf \Gamma}) \cap \mathcal{N}({\bf \Omega})^\bot \right)$

## 6 Controllers

With our system

$\dot{{\bf x}} = {\bf A}{\bf x} + {\bf B} {\bf u}\quad {\rm and}\quad {\bf y} = {\bf C}{\bf x}$

We assume that we will determine $${\bf u}$$ from the current state via full-state feedback, $${\bf u} = -{\bf K}{\bf x}$$:

\begin{eqnarray} \dot{{\bf x}} = {\bf A}{\bf x} + {\bf B} \left(-{\bf K} {\bf x}\right) \nonumber \\ \dot{{\bf x}} = \left( {\bf A} - {\bf B} {\bf K}\right) {\bf x} \end{eqnarray}

### 6.1 Pole Placement

The idea is to pick eigenvalues, $$\lambda$$, giving the desired behavior, then determine a gain matrix $$\bf K$$ that will give the controlled system those eigenvalues. This is done by solving the characteristic equation for $$\bf K$$. This gives you one scalar equation for each $$\lambda$$.

$\left| (\bf A - \bf B \bf K) - \lambda \bf I \right| = 0$

An equivalent but computationally easier approach is to match the coefficients of the characteristic polynomial:

$\left| (\bf A - \bf B \bf K) - \lambda \bf I \right| = \prod_{j=1}^n(\lambda - \lambda_j)$
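Pole placement is available off the shelf; a sketch using scipy (not part of the original notes; the double-integrator system and desired poles are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Place the closed-loop poles of A - B K at -1 and -2.
desired = np.array([-1.0, -2.0])
K = place_poles(A, B, desired).gain_matrix

closed_loop = np.linalg.eigvals(A - B @ K)
```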

### 6.2 Optimal Control

We define some metric that our controller must minimize. For the LQ problem, we minimize:

$J = \int_{t_0}^{t_1} ({\bf x}^T{\bf Q}{\bf x} + {\bf u}^T {\bf R} {\bf u})dt + {\bf x}^T(t_1) {\bf S} {\bf x}(t_1)$

This gives:

${\bf u}(t) = -{\bf R}^{-1}{\bf B}^T{\bf P}(t){\bf x}(t)$

Where $${\bf P}(t)$$ solves the Riccati Equation:

\begin{eqnarray} \dot{\bf P} = -{\bf A}^T {\bf P} - {\bf P}{\bf A} + {\bf P}{\bf B} {\bf R}^{-1}{\bf B}^T{\bf P} - {\bf Q} \nonumber \\ {\bf P}(t_1) = {\bf S}\nonumber \end{eqnarray}

#### 6.2.1 Infinite Horizon

At steady state, $$\dot{\bf P} = 0$$, so the Riccati equation becomes:

$0 = -{\bf A}^T {\bf P} - {\bf P}{\bf A} + {\bf P}{\bf B} {\bf R}^{-1}{\bf B}^T{\bf P} - {\bf Q}$
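The algebraic Riccati equation has standard solvers; a sketch using scipy (not part of the original notes; the system and weights are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Steady-state P from the algebraic Riccati equation, then the
# infinite-horizon LQR gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Residual of the ARE; should be numerically zero.
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
```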

## 7 Observers

### 7.1 Luenberger

With our system

$\dot{{\bf x}} = {\bf A}{\bf x} + {\bf B} {\bf u}\quad {\rm and}\quad {\bf y} = {\bf C}{\bf x}$

We have an observed state, $$\hat{\bf x}$$:

\begin{eqnarray} \hat{ \bf y } = {\bf C} \hat{\bf x}\\ \dot{ \hat{\bf x}} = {\bf A} \hat{\bf x} + {\bf B} {\bf u} + {\bf L} \left( {\bf y} - \hat{\bf y} \right) \\ \end{eqnarray}

Where $${\bf L}$$ is the gain matrix for the Luenberger observer.

\begin{eqnarray} {\bf e} = {\bf x} - \hat{\bf x} \\ \dot{{\bf e}} = \dot{{\bf x}} - \dot{\hat{\bf x}} \nonumber \\ \dot{{\bf e}} = ({\bf A} - {\bf L} {\bf C}) {\bf e} \end{eqnarray}

Where $${\bf e}$$ is the error. The derivation of $$\dot{\bf e}$$ is simple algebra. Thus, the problem of observation reduces to the problem of control. Note that the poles for $${\bf A} - {\bf L} {\bf C}$$ can essentially be arbitrarily fast as the observer system only exists inside a computer. In practice, observer poles should be about 10 times faster than controller poles.
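Because observation reduces to control, $$\bf L$$ can be found by pole placement on the dual pair $$({\bf A}^T, {\bf C}^T)$$; a scipy sketch (the system and observer poles are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Duality: placing poles of A^T - C^T L^T also places poles of A - L C.
L = place_poles(A.T, C.T, np.array([-10.0, -20.0])).gain_matrix.T

observer_poles = np.linalg.eigvals(A - L @ C)
```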

### 7.2 Kalman

• $${\bf x} \in \Re^n$$ – state vector
• $${\bf u} \in \Re^{m}$$ – input vector
• $${\bf z} \in \Re^s$$ – measurement vector
• $${\bf A} \in \Re^{n \times n}$$ – process matrix
• $${\bf B} \in \Re^{n \times m}$$ – input gain matrix
• $${\bf H} \in \Re^{s \times n}$$ – state to measurement mapping matrix (${\bf z} = {\bf H} {\bf x}$)
• $${\bf K} \in \Re^{n \times s}$$ – Kalman gain
• $${\bf P} \in \Re^{n \times n}$$ – Error Covariance
• $${\bf Q} \in \Re^{n \times n}$$ – Process Noise Covariance
• $${\bf R} \in \Re^{s \times s}$$ – Measurement Noise Covariance

#### 7.2.1 Predict

\begin{eqnarray} {\hat{\bf x}^-}_k = {\bf A} \hat{\bf x}_{k-1} + {\bf B} {\bf u}_{k-1} \\ {\bf P}^-_k = {\bf A}{\bf P}_{k-1}{\bf A}^T + {\bf Q} \end{eqnarray}

#### 7.2.2 Correct

\begin{eqnarray} {\bf K}_k = {\bf P}^-_k {\bf H}^T \left({\bf H} {\bf P}^-_k {\bf H}^T + {\bf R}\right)^{-1} \\ \hat{\bf x}_k = \hat{\bf x}^-_k + {\bf K}_k \left({\bf z}_k - {\bf H}\hat{\bf x}^-_k\right) \\ {\bf P}_k = \left({\bf I} - {\bf K}_k {\bf H}\right) {\bf P}^-_k \end{eqnarray}
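One predict/correct cycle can be sketched directly from the equations above (numpy assumed; the constant-velocity model and all numbers are illustrative):

```python
import numpy as np

# Constant-velocity toy model with a scalar position measurement.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

x = np.array([[0.0], [1.0]])   # state estimate
P = np.eye(2)                  # error covariance
u = np.zeros((1, 1))
z = np.array([[1.2]])          # measurement

# Predict
x_pred = A @ x + B @ u
P_pred = A @ P @ A.T + Q

# Correct
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
x_new = x_pred + K @ (z - H @ x_pred)
P_new = (np.eye(2) - K @ H) @ P_pred
```

Incorporating the measurement should reduce the estimate's uncertainty, i.e. shrink the covariance.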

### 7.3 Separation Principle

The separation principle states that one may design a feedback controller assuming real $$\bf x$$ and an observer to find $$\hat{\bf x}$$ independently, then feed $$\hat{\bf x}$$ to the controller and have everything still work. This is explained by:

\begin{eqnarray} \dot{\bf x} = \bf A \bf x - \bf B \bf K \hat{\bf x} = (\bf A - \bf B \bf K)\bf x + \bf B \bf K \bf e \nonumber\\ \dot{\bf e} = (\bf A - \bf L \bf C)\bf e \nonumber \end{eqnarray}

These equations can then be combined into the following system:

$\left[ \begin{array}{c} \dot{\bf x}\\ \dot{\bf e} \end{array}\right] = \left[ \begin{array}{cc} \bf A - \bf B \bf K & \bf B \bf K \\ 0 & \bf A - \bf L \bf C \end{array}\right] \left[ \begin{array}{c} {\bf x}\\ {\bf e} \end{array}\right]$

Because this is an upper triangular block matrix, its eigenvalues are exactly those of the diagonal blocks, and the controller and observer designs already made those two blocks stable.
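The block-triangular eigenvalue claim can be verified numerically (numpy assumed; the gains are illustrative, chosen to place controller poles at -1, -2 and observer poles at -10, -20):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 3.0]])        # poles of A - B K: -1, -2
L = np.array([[30.0], [200.0]])   # poles of A - L C: -10, -20

# Combined closed-loop/error dynamics from the separation principle.
M = np.block([[A - B @ K, B @ K],
              [np.zeros((2, 2)), A - L @ C]])

# Eigenvalues of the block-triangular M are the union of the
# controller and observer eigenvalues.
eigs = np.linalg.eigvals(M)
```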

Date: 2013-07-24 17:25:21 EDT

Org version 7.8.11 with Emacs version 23
