Solutions of Exercise 7.2#

Consider the system:

\[\begin{split} \begin{dcases} \dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x \end{dcases} \end{split}\]
  1. Construct a state observer whose state estimation error decays as \(\varepsilon(t) \propto e^{-10t}\).

  2. Suppose that \(u(t) = 0\), and let \(\hat{y} = C \hat{x}\). Compute the transfer function between \(y\) and \(\hat{y}\).

Estimation error decay rate

Recall that the state estimation error evolves according to \(\dot{\varepsilon} = (A-KC) \varepsilon\).

Then, \(\varepsilon\) decays at least as fast as \(e^{-10t}\) if all eigenvalues of \(A-KC\) satisfy \(\Re (\lambda) \leq -10\).
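As a quick numerical illustration (a sketch in Python with NumPy/SciPy; it anticipates the gain \(K = [20, 100]^\top\) computed in Question 1 below), we can propagate an initial error through \(\varepsilon(t) = e^{(A-KC)t}\varepsilon(0)\) and watch it shrink:

```python
import numpy as np
from scipy.linalg import expm

# System matrices and the observer gain found in Question 1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[20.0], [100.0]])

F = A - K @ C  # error dynamics: eps_dot = F eps

# Propagate an (arbitrary) initial error eps(0) = [1, 1]
eps0 = np.array([1.0, 1.0])
for t in (0.1, 0.5, 1.0):
    eps_t = expm(F * t) @ eps0
    print(f"t = {t:.1f}  ->  ||eps(t)|| = {np.linalg.norm(eps_t):.2e}")
```

With both eigenvalues at \(-10\), the norm of the error drops by several orders of magnitude within one second.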


Solution#

Question 1#

As explained in the callout Estimation error decay rate, requiring the state estimation error to decay \(\propto e^{-10t}\) is equivalent to requiring that the slowest eigenvalue of \(A - KC\) is \(\lambda = -10\). The other eigenvalue could be the same, or it could be faster (e.g., \(\lambda = -20\)), but not slower.

Let's pick, for simplicity, \(\lambda_1 = \lambda_2 = -10\). Our desired characteristic polynomial will hence be

\[ \varphi^\star(\lambda) = (\lambda + 10)^2 = \lambda^2 + 20 \lambda + 100 \]

The characteristic polynomial of the state estimation error is

\[\begin{split} \varphi(\lambda) = \det( \lambda I - (A - KC)) = \det \begin{bmatrix} \lambda + \kappa_1 & -1 \\ \kappa_2 & \lambda \end{bmatrix} = \lambda^2 + \kappa_1 \lambda + \kappa_2 \end{split}\]

Matching the coefficients of \(\varphi(\lambda)\) with those of \(\varphi^\star(\lambda)\) gives \(\kappa_1 = 20\) and \(\kappa_2 = 100\). The innovation gain is therefore

\[\begin{split} K = \begin{bmatrix} 20 \\ 100 \end{bmatrix} \end{split}\]
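As a sanity check (a minimal sketch in Python with NumPy; variable names are my own), we can verify that this gain places both eigenvalues of \(A - KC\) at \(-10\):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[20.0], [100.0]])  # innovation gain found above

# Eigenvalues of the error dynamics matrix A - KC
eigs = np.linalg.eigvals(A - K @ C)
print(eigs)  # both eigenvalues should sit at -10
```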

Question 2#

The dynamical equation of the state observer is

(38)#\[ \dot{\hat{x}} = A \hat{x} + B u + K(y - C \hat{x}) \]

Because

  • \(u(t) = 0 \, \forall t\),

  • differentiation maps to multiplication by \(s\) (\(\dot{\hat{x}} \rightarrow s \hat{X}(s)\), assuming zero initial conditions),

(38) can be rewritten in the Laplace domain, resulting in

\[ s \hat{X}(s) = A \hat{X}(s) - KC \hat{X}(s) + K Y(s) \]

Moving all the terms depending upon \(\hat{X}(s)\) to the left-hand side, we get

\[ (sI - A + KC) \hat{X}(s) = K Y(s) \quad \rightarrow \quad \hat{X}(s) = (sI - A + KC)^{-1} K Y(s) \]

We know that the estimated output is \(\hat{y} = C \hat{x}\). This means

\[ \hat{Y}(s) = \underbrace{C \left( s I - A + KC \right)^{-1} K}_{G_{\text{obsv}}(s)} \, Y(s) \]

Substituting \(A\), \(K\), and \(C\), we get the following transfer function:

\[\begin{split} \begin{aligned} G_{\text{obsv}}(s) &= \begin{bmatrix} 1 & 0 \end{bmatrix} \, \begin{bmatrix} s + 20 & -1 \\ 100 & s \end{bmatrix}^{-1} \begin{bmatrix} 20 \\ 100\end{bmatrix} \\ &= \frac{1}{s^2 + 20s + 100} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s & 1 \\ -100 & s + 20 \end{bmatrix} \begin{bmatrix} 20 \\ 100\end{bmatrix} \\ &= \frac{1}{s^2 + 20s + 100} \begin{bmatrix} s & 1 \end{bmatrix} \begin{bmatrix} 20 \\ 100\end{bmatrix} \\ &= \frac{20s + 100}{s^2 + 20s + 100} \end{aligned} \end{split}\]
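This result can be cross-checked numerically (a sketch in Python using `scipy.signal.ss2tf`; the observer is treated as a state-space system with input \(y\) and output \(\hat{y}\)):

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[20.0], [100.0]])

# Observer as a system driven by y:
#   x_hat_dot = (A - KC) x_hat + K y,   y_hat = C x_hat
num, den = ss2tf(A - K @ C, K, C, np.zeros((1, 1)))
print(num, den)  # expect (20 s + 100) / (s^2 + 20 s + 100)
```

The returned coefficients match \(G_{\text{obsv}}(s) = \dfrac{20s + 100}{s^2 + 20s + 100}\) derived above.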