Algorithms for Walking, Running, Swimming, Flying, and Manipulation

© Russ Tedrake, 2020


**Note:** These are working notes used for a course being taught
at MIT. They will be updated throughout the Spring 2020 semester. Lecture videos are available on YouTube.


A great deal of work in the control of underactuated systems has been done in the context of low-dimensional model systems. These model systems capture the essence of the problem without introducing all of the complexity that is often involved in more real-world examples. In this chapter we will focus on two of the most well-known and well-studied model systems--the Acrobot and the Cart-Pole. After we have developed some tools, we will see that they can be applied directly to other model systems; we will give a number of examples using Quadrotors. All of these systems are trivially underactuated, having fewer actuators than degrees of freedom.

The Acrobot is a planar two-link robotic arm in the vertical plane
(working against gravity), with an actuator at the elbow, but no actuator at
the shoulder. It was first described in detail in

The Acrobot is representative of the primary challenge in underactuated robots. In order to swing up and balance the entire system, the controller must reason about and exploit the state-dependent coupling between the actuated degree of freedom and the unactuated degree of freedom. It is also an important system because, as we will see, it closely resembles one of the simplest models of a walking robot.

Figure 3.1 illustrates the model parameters used in our analysis. $\theta_1$ is the shoulder joint angle, $\theta_2$ is the elbow (relative) joint angle, and we will use $\bq = [\theta_1,\theta_2]^T$, $\bx = [\bq,\dot\bq]^T$. The zero configuration is with both links pointed directly down. The moments of inertia, $I_1,I_2$ are taken about the pivots. The task is to stabilize the unstable fixed point $\bx = [\pi,0,0,0]^T$.

We will derive the equations of motion for the Acrobot using the method of Lagrange. The locations of the center of mass of each link, $\bp_{c1}, \bp_{c2},$ are given by the kinematics: \begin{equation} \bp_{c1} = \begin{bmatrix} l_{c1} s_1 \\ -l_{c1} c_1 \end{bmatrix}, \quad \bp_{c2} = \begin{bmatrix} l_1 s_1 + l_{c2} s_{1+2} \\ -l_1 c_1 - l_{c2} c_{1+2} \end{bmatrix} . \end{equation} The energy is given by: \begin{gather} T = T_1 + T_2, \quad T_1 = \frac{1}{2} I_1 \dot{q}_1^2 \\ T_2 = \frac{1}{2} ( m_2 l_1^2 + I_2 + 2 m_2 l_1 l_{c2} c_2 ) \dot{q}_1^2 + \frac{1}{2} I_2 \dot{q}_2^2 + (I_2 + m_2 l_1 l_{c2} c_2) \dot{q}_1 \dot{q}_2 \\ U = -m_1 g l_{c1} c_1 - m_2 g (l_1 c_1 + l_{c2} c_{1+2}) \end{gather} Entering these quantities into the Lagrangian yields the equations of motion: \begin{gather} (I_1 + I_2 + m_2 l_1^2 + 2m_2 l_1 l_{c2} c_2) \ddot{q}_1 + (I_2 + m_2 l_1 l_{c2} c_2)\ddot{q}_2 - 2m_2 l_1 l_{c2} s_2 \dot{q}_1 \dot{q}_2 \\ \quad -m_2 l_1 l_{c2} s_2 \dot{q}_2^2 + m_1 g l_{c1}s_1 + m_2 g (l_1 s_1 + l_{c2} s_{1+2}) = 0 \\ (I_2 + m_2 l_1 l_{c2} c_2) \ddot{q}_1 + I_2 \ddot{q}_2 + m_2 l_1 l_{c2} s_2 \dot{q}_1^2 + m_2 g l_{c2} s_{1+2} = \tau \end{gather} In standard, manipulator equation form: $$\bM(\bq)\ddot\bq + \bC(\bq,\dot\bq)\dot\bq = \btau_g(\bq) + \bB\bu,$$ using $\bq = [\theta_1,\theta_2]^T$, $\bu = \tau$ we have: \begin{gather} \bM(\bq) = \begin{bmatrix} I_1 + I_2 + m_2 l_1^2 + 2m_2 l_1 l_{c2} c_2 & I_2 + m_2 l_1 l_{c2} c_2 \\ I_2 + m_2 l_1 l_{c2} c_2 & I_2 \end{bmatrix},\label{eq:Hacrobot}\\ \bC(\bq,\dot{\bq}) = \begin{bmatrix} -2 m_2 l_1 l_{c2} s_2 \dot{q}_2 & -m_2 l_1 l_{c2} s_2 \dot{q}_2 \\ m_2 l_1 l_{c2} s_2 \dot{q}_1 & 0 \end{bmatrix}, \\ \btau_g(\bq) = \begin{bmatrix} -m_1 g l_{c1}s_1 - m_2 g (l_1 s_1 + l_{c2}s_{1+2}) \\ -m_2 g l_{c2} s_{1+2} \end{bmatrix}, \quad \bB = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{gather}
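These matrices are straightforward to transcribe directly. Below is a minimal numerical sketch (using NumPy; the function and parameter names are hypothetical, and the moments of inertia are taken about the pivots, as above):

```python
import numpy as np

def acrobot_manipulator(q, qd, params):
    """Manipulator-equation matrices for the Acrobot,
    M(q) qdd + C(q, qd) qd = tau_g(q) + B u.
    `params` holds m1, m2, l1, lc1, lc2, I1, I2, g (names are illustrative)."""
    m1, m2 = params["m1"], params["m2"]
    l1, lc1, lc2 = params["l1"], params["lc1"], params["lc2"]
    I1, I2, g = params["I1"], params["I2"], params["g"]
    s1, s2, c2 = np.sin(q[0]), np.sin(q[1]), np.cos(q[1])
    s12 = np.sin(q[0] + q[1])

    M = np.array([[I1 + I2 + m2 * l1**2 + 2 * m2 * l1 * lc2 * c2,
                   I2 + m2 * l1 * lc2 * c2],
                  [I2 + m2 * l1 * lc2 * c2, I2]])
    C = np.array([[-2 * m2 * l1 * lc2 * s2 * qd[1],
                   -m2 * l1 * lc2 * s2 * qd[1]],
                  [m2 * l1 * lc2 * s2 * qd[0], 0.0]])
    tau_g = np.array([-m1 * g * lc1 * s1 - m2 * g * (l1 * s1 + lc2 * s12),
                      -m2 * g * lc2 * s12])
    B = np.array([[0.0], [1.0]])
    return M, C, tau_g, B
```

A quick sanity check: at the upright configuration $\bq = [\pi, 0]^T$ with zero velocity, $\btau_g$ and $\bC\dot\bq$ both vanish, confirming that the upright is a fixed point.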

You can experiment with the Acrobot dynamics in

The other model system that we will investigate here is the cart-pole system, in which the task is to balance a simple pendulum around its unstable equilibrium, using only horizontal forces on the cart. Balancing the cart-pole system is used in many introductory courses in control, including 6.003 at MIT, because it can be accomplished with simple linear control (e.g. pole placement) techniques. In this chapter we will consider the full swing-up and balance control problem, which requires a full nonlinear control treatment.

The figure shows our parameterization of the system. $x$ is the horizontal position of the cart, $\theta$ is the counter-clockwise angle of the pendulum (zero is hanging straight down). We will use $\bq = [x,\theta]^T$, and $\bx = [\bq,\dot\bq]^T$. The task is to stabilize the unstable fixed point at $\bx = [0,\pi,0,0]^T.$

The kinematics of the system are given by \begin{equation}\bx_1 = \begin{bmatrix} x \\ 0 \end{bmatrix}, \quad \bx_2 = \begin{bmatrix} x + l\sin\theta \\ -l\cos\theta \end{bmatrix}. \end{equation} The energy is given by \begin{align} T=& \frac{1}{2} (m_c + m_p)\dot{x}^2 + m_p \dot{x}\dot\theta l \cos{\theta} + \frac{1}{2}m_p l^2 \dot\theta^2 \\ U =& -m_p g l \cos\theta. \end{align} The Lagrangian yields the equations of motion: \begin{gather} (m_c + m_p)\ddot{x} + m_p l \ddot\theta \cos\theta - m_p l \dot\theta^2 \sin\theta = f \\ m_p l \ddot{x} \cos\theta + m_p l^2 \ddot\theta + m_p g l \sin\theta = 0 \end{gather} In standard, manipulator equation form: $$\bM(\bq)\ddot\bq + \bC(\bq,\dot\bq)\dot\bq = \btau_g(\bq) + \bB\bu,$$ using $\bq = [x,\theta]^T$, $\bu = f$, we have: \begin{gather*} \bM(\bq) = \begin{bmatrix} m_c + m_p & m_p l \cos\theta \\ m_p l \cos\theta & m_p l^2 \end{bmatrix}, \quad \bC(\bq,\dot{\bq}) = \begin{bmatrix} 0 & -m_p l \dot\theta \sin\theta \\ 0 & 0 \end{bmatrix}, \\ \btau_g(\bq) = \begin{bmatrix} 0 \\ - m_p gl \sin\theta \end{bmatrix}, \quad \bB = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \end{gather*} In this case, it is particularly easy to solve directly for the accelerations: \begin{align} \ddot{x} =& \frac{1}{m_c + m_p \sin^2\theta}\left[ f+m_p \sin\theta (l \dot\theta^2 + g\cos\theta)\right] \label{eq:ddot_x}\\ \ddot{\theta} =& \frac{1}{l(m_c + m_p \sin^2\theta)} \left[ -f \cos\theta - m_p l \dot\theta^2 \cos\theta \sin\theta - (m_c + m_p) g \sin\theta \right] \label{eq:ddot_theta} \end{align} In some of the analysis that follows, we will study the form of the equations of motion, ignoring the details, by arbitrarily setting all constants to 1: \begin{gather} 2\ddot{x} + \ddot\theta \cos\theta - \dot\theta^2 \sin\theta = f \label{eq:simple}\\ \ddot{x}\cos\theta + \ddot\theta + \sin\theta = 0. \label{eq:simple2} \end{gather}
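As a sanity check, the closed-form accelerations can be verified numerically against the manipulator form. A minimal sketch (NumPy; the parameter defaults are illustrative):

```python
import numpy as np

def cartpole_accel(x, f, mc=1.0, mp=1.0, l=1.0, g=9.81):
    """Closed-form cart-pole accelerations from eqs. (ddot_x)/(ddot_theta).
    State x = [cart position, theta, xdot, thetadot]; f is the cart force."""
    _, th, _, thd = x
    s, c = np.sin(th), np.cos(th)
    d = mc + mp * s**2
    xdd = (f + mp * s * (l * thd**2 + g * c)) / d
    thdd = (-f * c - mp * l * thd**2 * c * s - (mc + mp) * g * s) / (l * d)
    return xdd, thdd
```

The same accelerations follow from solving $\bM\ddot\bq = \btau_g + \bB u - \bC\dot\bq$ with the matrices above.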

You can experiment with the Cart-Pole dynamics in

Quadrotors have become immensely popular over the last few years -- advances in outrunner motors from the hobby community made them powerful, lightweight, and inexpensive! They are strong enough to carry an interesting payload (e.g. of sensors for mapping / photographing the environment), but dynamic enough to move relatively quickly. The most interesting dynamics start showing up at higher speeds, when the propellers start achieving lift from the airflow across them due to horizontal motion, or when they are close enough to the ground to experience significant ground-effect, but we won't try to capture those effects (yet) here.

When the quadrotor revolution started to happen, I predicted that it would be followed quickly by a realization that fixed-wing vehicles are better for most applications. Propellers are almost optimally efficient for producing thrust -- making quadrotors very efficient for hovering -- but to be efficient in forward flight you probably want to have an airfoil. Wings are a really good idea! But I was wrong -- quadrotors have completely dominated fixed-wings for commercial UAVs. Perhaps it's only because they are easier to control? Maybe there is still hope...

We can get started with an extremely simple model of a quadrotor that is restricted to live in the plane. In that case, it actually only needs two propellers, but calling it a "birotor" doesn't have the same ring to it. The equations of motion are almost trivial, since it is only a single rigid body, and certainly fit into our standard manipulator equations: \begin{gather} m \ddot{x} = -(u_1 + u_2)\sin\theta, \label{eq:quad_x}\\ m \ddot{y} = (u_1 + u_2)\cos\theta - mg, \label{eq:quad_y}\\ I \ddot\theta = r (u_1 - u_2) \label{eq:quad_theta} \end{gather}
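A minimal sketch of these dynamics as a state-derivative function (NumPy; the mass, inertia, and rotor-arm values are illustrative placeholders, not parameters from the text):

```python
import numpy as np

def planar_quadrotor_dynamics(state, u, m=0.5, I=0.005, r=0.15, g=9.81):
    """Planar 'birotor' dynamics. state = [x, y, theta, xdot, ydot, thetadot],
    u = [u1, u2] are the two propeller thrusts."""
    x, y, th, xd, yd, thd = state
    u1, u2 = u
    xdd = -(u1 + u2) * np.sin(th) / m
    ydd = (u1 + u2) * np.cos(th) / m - g
    thdd = r * (u1 - u2) / I
    return np.array([xd, yd, thd, xdd, ydd, thdd])
```

Note that hovering at $\theta = 0$ with $u_1 = u_2 = mg/2$ gives a fixed point, which we will linearize about later.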

See `drake/examples/Quadrotor`. I will fill in the text description/derivation here soon! The most interesting part is that the model needs to include the moment produced by the rotating propellers; otherwise the system linearized about the hovering configuration is actually not controllable.

For both the Acrobot and the Cart-Pole systems, we will begin by designing a linear controller which can balance the system when it begins in the vicinity of the unstable fixed point. To accomplish this, we will linearize the nonlinear equations about the fixed point, examine the controllability of this linear system, then use linear quadratic regulator (LQR) theory to design our feedback controller.

Although the equations of motion of both of these model systems are relatively tractable, the forward dynamics still involve quite a few nonlinear terms that must be considered in any linearization. Let's consider the general problem of linearizing a system described by the manipulator equations.

We can perform linearization around a fixed point, $(\bx^*, \bu^*)$, using a Taylor expansion: \begin{equation} \dot\bx = {\bf f}(\bx,\bu) \approx {\bf f}(\bx^*,\bu^*) + \left[ \pd{\bf f}{\bx}\right]_{\bx=\bx^*,\bu=\bu^*} (\bx - \bx^*) + \left[ \pd{\bf f}{\bu}\right]_{\bx=\bx^*,\bu=\bu^*} (\bu - \bu^*) \end{equation} Let us consider the specific problem of linearizing the manipulator equations around a (stable or unstable) fixed point. In this case, ${\bf f}(\bx^*,\bu^*)$ is zero, and we are left with the standard linear state-space form: \begin{align} \dot\bx =& \begin{bmatrix} \dot\bq \\ \bM^{-1}(\bq) \left[ \btau_g(\bq) + {\bf B}(\bq)\bu - \bC(\bq,\dot\bq)\dot\bq \right] \end{bmatrix},\\ \approx& {\bf A}_{lin} (\bx-\bx^*) + \bB_{lin} (\bu - \bu^*), \end{align} where ${\bf A}_{lin}$ and $\bB_{lin}$ are constant matrices. Let us define $\bar\bx = \bx - \bx^*, \bar\bu = \bu - \bu^*$, and write $$\dot{\bar\bx} = {\bf A}_{lin}\bar\bx + \bB_{lin}\bar\bu.$$ Evaluation of the Taylor expansion around a fixed point yields the following, very simple equations, given in block form by: \begin{align} {\bf A}_{lin} =& \begin{bmatrix} {\bf 0} & {\bf I} \\ \bM^{-1} \pd{\btau_g}{\bq} + \sum_{j} \bM^{-1}\pd{\bB_j}{\bq} u_j & {\bf 0} \end{bmatrix}_{\bx=\bx^*,\bu=\bu^*} \\ \bB_{lin} =& \begin{bmatrix} {\bf 0} \\ \bM^{-1} \bB \end{bmatrix}_{\bx=\bx^*, \bu=\bu^*} \end{align} where $\bB_j$ is the $j$th column of $\bB$. Note that the term involving $\pd{\bM^{-1}}{q_i}$ disappears because $\btau_g + \bB\bu - \bC\dot{\bq}$ must be zero at the fixed point. All of the $\bC\dot\bq$ derivatives drop out, too, because $\dot{\bq}^* = 0$; centripetal and centrifugal forces are zero when the velocity is zero. In many cases, including both the Acrobot and Cart-Pole systems (but not the Quadrotors), the matrix $\bB(\bq)$ is a constant, so the $\pd{\bB}{\bq}$ terms also drop out.
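The block formula can be checked numerically: linearize the nonlinear cart-pole dynamics (derived above) by finite differences and compare against the block form at the upright fixed point. A sketch, assuming the state ordering $[\bq, \dot\bq]$ with unit constants for the cart-pole:

```python
import numpy as np

def cartpole_f(x, u, mc=1.0, mp=1.0, l=1.0, g=9.81):
    """Nonlinear cart-pole dynamics xdot = f(x, u), with
    x = [cart position, theta, xdot, thetadot] and u the cart force."""
    _, th, xd, thd = x
    s, c = np.sin(th), np.cos(th)
    d = mc + mp * s**2
    xdd = (u + mp * s * (l * thd**2 + g * c)) / d
    thdd = (-u * c - mp * l * thd**2 * c * s - (mc + mp) * g * s) / (l * d)
    return np.array([xd, thd, xdd, thdd])

def linearize_fd(f, x0, u0, h=1e-6):
    """A and B by central finite differences about a fixed point (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = h
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * h)
    B = ((f(x0, u0 + h) - f(x0, u0 - h)) / (2 * h)).reshape(n, 1)
    return A, B

# Block formula at the upright fixed point (mc = mp = l = 1):
x_star = np.array([0.0, np.pi, 0.0, 0.0])
A_fd, B_fd = linearize_fd(cartpole_f, x_star, 0.0)
Minv = np.array([[1.0, 1.0], [1.0, 2.0]])      # inverse of M = [[2, -1], [-1, 1]]
dtau_dq = np.array([[0.0, 0.0], [0.0, 9.81]])  # d tau_g / d q at the upright
A_block = np.block([[np.zeros((2, 2)), np.eye(2)],
                    [Minv @ dtau_dq, np.zeros((2, 2))]])
B_block = np.vstack([np.zeros((2, 1)), Minv @ np.array([[1.0], [0.0]])])
```

The two linearizations agree to finite-difference accuracy.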

Linearizing around the (unstable) upright point, we have: \begin{gather} \left[\pd{\bf \tau_g}{\bq}\right]_{\bx=\bx^*} = \begin{bmatrix} g (m_1 l_{c1} + m_2 l_1 + m_2 l_{c2}) & m_2 g l_{c2} \\ m_2 g l_{c2} & m_2 g l_{c2} \end{bmatrix} \end{gather} The linear dynamics follow directly from these equations and the manipulator form of the Acrobot equations.

Linearizing around the unstable fixed point in this system, we have: \begin{gather} \left[\pd{\btau_g}{\bq}\right]_{\bx=\bx^*} = \begin{bmatrix} 0 & 0 \\ 0 & m_p g l \end{bmatrix} \end{gather} Again, the linear dynamics follow simply.

Studying the properties of the linearized system can tell us some things about the (local) properties of the nonlinear system. For instance, having a stable linearization implies local exponential stability of the nonlinear system.

For the linear system $$\dot{\bx} = {\bf A}\bx + \bB\bu,$$ we can integrate the dynamics in closed form, so it is possible to derive exact conditions for controllability. In particular, for linear systems it is sufficient to demonstrate that there exists a control input which drives any initial condition to the origin.

Let us first examine a special case, which falls short as a general tool but may be more useful for understanding the intuition of controllability. Let's perform an eigenvalue analysis of the system matrix ${\bf A}$, so that: $${\bf A}{\bf v}_i = \lambda_i {\bf v}_i,$$ where $\lambda_i$ is the $i$th eigenvalue, and ${\bf v}_i$ is the corresponding (right) eigenvector. There will be $n$ eigenvalues for the $n \times n$ matrix ${\bf A}$. Collecting the (column) eigenvectors into the matrix ${\bf V}$ and the eigenvalues into a diagonal matrix ${\bf \Lambda}$, we have $${\bf A}{\bf V} = {\bf V}{\bf \Lambda}.$$ Here comes our primary assumption: let us assume that each of these $n$ eigenvalues takes on a distinct value (no repeats). With this assumption, it can be shown that the eigenvectors ${\bf v}_i$ form a linearly independent basis set, and therefore ${\bf V}^{-1}$ is well-defined.

We can continue our eigenmodal analysis of the linear system by defining the modal coordinates, ${\bf r}$, with: $$\bx = {\bf V}{\bf r},\quad \text{or}\quad {\bf r} = {\bf V}^{-1}\bx.$$ In modal coordinates, the dynamics of the linear system are given by $$\dot{\bf r} = {\bf V}^{-1} {\bf A} {\bf V} {\bf r} + {\bf V}^{-1} \bB \bu = {\bf \Lambda} {\bf r} + {\bf V}^{-1}\bB \bu.$$ This illustrates the power of modal analysis; in modal coordinates, the dynamics diagonalize yielding independent linear equations: $$\dot{r}_i = \lambda_i r_i + \sum_j \beta_{ij} u_j,\quad {\bf \beta} = {\bf V}^{-1} \bB.$$
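A small numerical sketch of this modal transformation (NumPy; the example ${\bf A}$ and $\bB$ are arbitrary, chosen to have distinct eigenvalues):

```python
import numpy as np

def modal_form(A, B):
    """Diagonalize xdot = A x + B u into rdot = Lambda r + beta u,
    assuming distinct eigenvalues so that V is invertible."""
    lam, V = np.linalg.eig(A)
    beta = np.linalg.solve(V, B)  # beta = V^{-1} B
    return lam, V, beta
```

For a single-input example with every entry of ${\bf \beta}$ nonzero, the input influences every eigenmode.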

Now the concept of controllability becomes clear. Input $j$ can influence the dynamics in modal coordinate $i$ if and only if ${\bf \beta}_{ij} \neq 0$. In the special case of non-repeated eigenvalues, having control over each individual eigenmode is sufficient to (in finite time) regulate all of the eigenmodes.

A more general solution to the controllability issue, which removes our assumption about the eigenvalues, can be obtained by examining the time-domain solution of the linear equations. The solution of this system is $$\bx(t) = e^{{\bf A}t} \bx(0) + \int_0^{t} e^{{\bf A}(t - \tau)} \bB \bu(\tau) d\tau.$$ Without loss of generality, let's consider the case where the final state of the system is zero. Then we have: $$\bx(0) = - \int_0^{t_f} e^{-{\bf A}\tau}\bB \bu(\tau) d\tau.$$ You might be wondering what we mean by $e^{{\bf A}t}$; a scalar raised to the power of a matrix? Recall that $e^{z}$ is actually defined by a convergent infinite sum: $$e^{z} =1 + z + \frac{1}{2} z^2 + \frac{1}{6} z^3 + ... .$$ The notation $e^{{\bf A}t}$ uses the same definition: $$e^{{\bf A}t} = {\bf I} + {\bf A}t + \frac{1}{2}({\bf A}t)^2 + \frac{1}{6}({\bf A}t)^3 + ... .$$ Not surprisingly, this has many special forms. For instance, if ${\bf A}$ is diagonalizable, $e^{{\bf A}t} = {\bf V}e^{{\bf\Lambda}t}{\bf V}^{-1},$ where ${\bf A} = {\bf V \Lambda V}^{-1}$ is the eigenvalue decomposition of ${\bf A}$. In Drake, you can compute the controllability matrix of a linear system using `ControllabilityMatrix`, or check the rank condition directly using `IsControllable`.
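For reference, the rank condition in question is the classical Kalman test (this is what a routine like `IsControllable` evaluates): the pair $({\bf A}, \bB)$ is controllable if and only if the controllability matrix has full row rank, $$\text{rank}\begin{bmatrix} \bB & {\bf A}\bB & {\bf A}^2\bB & \cdots & {\bf A}^{n-1}\bB \end{bmatrix} = n.$$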

Note that linear feedback to change the eigenvalues of the eigenmodes is not sufficient to accomplish our goal of getting to the goal in finite time. In fact, the open-loop control to reach the goal is easily obtained with a final-value LQR problem, and (for ${\bf R}={\bf I}$) is actually a simple function of the controllability Gramian.

Analysis of the controllability of both the Acrobot and Cart-Pole systems reveals that the linearized dynamics about the upright are, in fact, controllable. This implies that the linearized system, if started away from the zero state, can be returned to the zero state in finite time. This is potentially surprising - after all, the systems are underactuated. For example, it is interesting and surprising that the Acrobot can balance itself in the upright position without having a shoulder motor.

The controllability of these model systems demonstrates an extremely important point: *An underactuated system is not necessarily an uncontrollable system.* Underactuated systems cannot follow arbitrary trajectories, but that does not imply that they cannot arrive at arbitrary points in state space. However, the trajectory required to place the system into a particular state may be arbitrarily complex.

The controllability analysis presented here is for linear time-invariant (LTI) systems. A comparable analysis exists for linear time-varying (LTV) systems. We will even see extensions to nonlinear systems; although it will often be referred to by the synonym of "reachability" analysis.

Controllability tells us that a trajectory to the fixed point exists, but does not tell us which one we should take or what control inputs cause it to occur. Why not? There are potentially infinitely many solutions. We have to pick one.

The tools for controller design in linear systems are very advanced.
In particular, as we describe in the linear
optimal control chapter, one can easily design an optimal feedback
controller for a regulation task like balancing, so long as we are
willing to linearize the system around the operating point and define
optimality in terms of a quadratic cost function: $$J(\bx_0) =
\int_0^\infty \left[ \bx^T(t) \bQ \bx(t) + \bu^T(t) \bR \bu(t) \right]dt,
\quad \bx(0)=\bx_0, \bQ=\bQ^T>0, \bR=\bR^T>0.$$ The linear feedback
matrix $\bK$ used as $$\bu(t) = - \bK\bx(t),$$ is the so-called optimal
linear quadratic regulator (LQR). Even without understanding the
detailed derivation, we can quickly become practitioners of LQR.

Drake provides the function

```
K = LinearQuadraticRegulator(A, B, Q, R)
```

for linear systems. It also provides a version

```
controller = LinearQuadraticRegulator(system, context, Q, R)
```

that will linearize the system for you around an equilibrium and return the controller (in the original coordinates). Therefore, to use LQR, one
simply needs to define the symmetric positive-definite cost matrices,
${\bf Q}$ and ${\bf R}$. In their most common form, ${\bf Q}$ and ${\bf
R}$ are positive diagonal matrices, where the entries $Q_{ii}$ penalize
the relative errors in state variable $x_i$ compared to the other state
variables, and the entries $R_{ii}$ penalize actions in $u_i$.
Take a minute to play around with the LQR controller for the Acrobot and the Cart-Pole.

Make sure that you take a minute to look at the code which runs during these examples. Can you set the ${\bf Q}$ and ${\bf R}$ matrices differently, to improve the performance?
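If you want to compute the same gain outside of Drake, the continuous-time algebraic Riccati equation can be solved directly with SciPy. A sketch, applied to the cart-pole linearized about the upright with $m_c = m_p = l = 1$ (state ordered as $[x, \theta, \dot{x}, \dot\theta]$); the identity cost matrices are placeholder choices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Continuous-time LQR gain for u = -K x, via the algebraic Riccati equation."""
    S = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ S)  # K = R^{-1} B^T S
    return K, S

# Cart-pole linearized about the upright (mc = mp = l = 1, g = 9.81).
g = 9.81
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, g, 0, 0],
              [0, 2 * g, 0, 0]], dtype=float)
B = np.array([[0.0], [0.0], [1.0], [1.0]])
K, S = lqr(A, B, np.eye(4), np.eye(1))
```

The resulting closed-loop matrix ${\bf A} - \bB\bK$ is stable (all eigenvalues in the left half-plane).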

Simulation of the closed-loop response with LQR feedback shows that the task is indeed completed - and in an impressive manner. Oftentimes the state of the system has to move violently away from the origin in order to ultimately reach the origin. Further inspection reveals that the (linearized) closed-loop dynamics are in fact non-minimum phase (the Acrobot has 3 right-half-plane zeros, the Cart-Pole has 1).

LQR works essentially out of the box for Quadrotors, if linearized around a nominal fixed point (where the non-zero thrust from the propellers is balancing gravity).

Note that LQR, although it is optimal for the linearized system, is not necessarily the best linear control solution for maximizing the basin of attraction of the fixed point. That question is the subject of the theory of *robust control*, which explicitly accounts for the difference between the linearized model and the true nonlinear dynamics.

In the introductory chapters, we made the point that underactuated systems are not feedback equivalent to $\ddot{q} = \bu$. Although we cannot always simplify the full dynamics of the system, it is still possible to linearize a portion of the system dynamics. The technique is called *partial feedback linearization*.

Consider the cart-pole example. The dynamics of the cart are affected by the motions of the pendulum. If we know the model, then it seems quite reasonable to think that we could create a feedback controller which would push the cart in exactly the way necessary to counter-act the dynamic contributions from the pendulum - thereby linearizing the cart dynamics. What we will see, which is potentially more surprising, is that we can also use a feedback controller for the cart to feedback linearize the dynamics of the passive pendulum joint.

We'll use the term *collocated* partial feedback linearization to
describe a controller which linearizes the dynamics of the actuated joints.
What's more surprising is that it is often possible to achieve
*non-collocated* partial feedback linearization - a controller which
linearizes the dynamics of the unactuated joints. The treatment presented
here follows from

Starting from the equations \ref{eq:simple} and \ref{eq:simple2}, we have \begin{gather*} \ddot\theta = -\ddot{x}c - s \\ \ddot{x}(2-c^2) - sc - \dot\theta^2 s = f \end{gather*} Therefore, applying the feedback control \begin{equation} f = (2 - c^2) \ddot{x}^d - sc - \dot\theta^2 s \end{equation} results in \begin{align*} \ddot{x} =& \ddot{x}^d \\ \ddot{\theta} =& -\ddot{x}^dc - s, \end{align*} which are valid globally.

Look carefully at the resulting equations. Of course, it says that we can impose whatever accelerations we like on the cart. But even the resulting equations of the pole happen to take a nice form here: they have been reduced to the equations of the simple pendulum (without a cart), where the torque input is now given instead by $\ddot{x}c$. It's as if we have a simple pendulum with torque control, except our command is modulated by a $\cos\theta$ term, and this $\cos\theta$ term is fundamental -- it's true that our control authority goes to zero when the pole is horizontal, because no amount of force on the cart in that configuration can act like a torque on the pole.
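The claim that these equations hold exactly (not just locally) is easy to verify numerically. A sketch for the unit-constant cart-pole (function names are hypothetical):

```python
import numpy as np

def collocated_pfl(x, xdd_des):
    """Collocated PFL input for the cart-pole with all constants set to 1:
    f = (2 - c^2) xdd_des - s c - thetadot^2 s."""
    th, thd = x[1], x[3]
    s, c = np.sin(th), np.cos(th)
    return (2 - c**2) * xdd_des - s * c - thd**2 * s

def unit_cartpole(x, f):
    # State derivative for 2 xdd + thdd*c - thd^2*s = f,  xdd*c + thdd + s = 0.
    th, xd, thd = x[1], x[2], x[3]
    s, c = np.sin(th), np.cos(th)
    xdd = (f + s * c + thd**2 * s) / (2 - c**2)
    thdd = -xdd * c - s
    return np.array([xd, thd, xdd, thdd])
```

At any state, applying the PFL force yields exactly $\ddot{x} = \ddot{x}^d$ and $\ddot\theta = -\ddot{x}^d c - s$.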

Starting again from equations \ref{eq:simple} and \ref{eq:simple2}, we have \begin{gather*} \ddot{x} = -\frac{\ddot\theta + s}{c} \\ \ddot\theta(c - \frac{2}{c}) - 2 \tan\theta - \dot\theta^2 s = f \end{gather*} Applying the feedback control \begin{equation} f = (c - \frac{2}{c}) \ddot\theta^d - 2 \tan\theta - \dot\theta^2 s \end{equation} results in \begin{align*} \ddot\theta =& \ddot\theta^d \\ \ddot{x} =& -\frac{1}{c} \ddot\theta^d - \tan\theta. \end{align*} Note that this expression is only valid when $\cos\theta \neq 0$. Once again, we know that the force cannot create a torque when the pole is perfectly horizontal. In fact, the controller we have written will "blow-up" -- requesting infinite force at $\cos\theta = 0$; so make sure you saturate the command before you implement it on hardware (or even in simulation). Although it may come as a small consolation, at least we have that $(c - \frac{2}{c})$ never goes to zero; in fact you can check for yourself that $|c - \frac{2}{c}| \ge 1$.

For systems that are trivially underactuated (torques on some joints, no torques on other joints), we can, without loss of generality, reorganize the joint coordinates in any underactuated system described by the manipulator equations into the form: \begin{align} \bM_{11} \ddot{\bq}_1 + \bM_{12} \ddot{\bq}_2 &= \btau_1, \label{eq:passive_dyn}\\ \bM_{21} \ddot{\bq}_1 + \bM_{22} \ddot{\bq}_2 &= \btau_2 + \bu, \label{eq:active_dyn} \end{align} with $\bq \in \Re^n$, $\bq_1 \in \Re^l$, $\bq_2 \in \Re^m$, $l=n-m$. $\bq_1$ represents all of the passive joints, and $\bq_2$ represents all of the actuated joints, and the $\btau = \btau_g - \bC\dot\bq$ terms capture all of the Coriolis and gravitational terms, and $$\bM(\bq) = \begin{bmatrix} \bM_{11} & \bM_{12} \\ \bM_{21} & \bM_{22} \end{bmatrix}.$$ Fortunately, because $\bM$ is uniformly (i.e., for all $\bq$) positive definite, $\bM_{11}$ and $\bM_{22}$ are also positive definite, by the Schur complement condition for positive definiteness.

Performing the same substitutions into the full manipulator
equations, we get:
\begin{gather}
\ddot\bq_1 = \bM_{11}^{-1} \left[ \btau_1 - \bM_{12} \ddot\bq_2
\right] \\
(\bM_{22} - \bM_{21} \bM_{11}^{-1} \bM_{12})
\ddot\bq_2 - \btau_2 + \bM_{21} \bM_{11}^{-1} \btau_1 = \bu
\end{gather}
The matrix $(\bM_{22} - \bM_{21} \bM_{11}^{-1} \bM_{12})$ is invertible; it is the Schur complement of $\bM_{11}$ in the positive definite matrix $\bM$, and is therefore itself positive definite.

\begin{gather} \ddot\bq_2 = \bM_{12}^+ \left[ \btau_1 - \bM_{11} \ddot\bq_1 \right] \\ (\bM_{21} - \bM_{22} \bM_{12}^+ \bM_{11}) \ddot\bq_1 - \btau_2 + \bM_{22} \bM_{12}^+ \btau_1 = \bu \end{gather} where $\bM_{12}^+$ is a Moore-Penrose pseudo-inverse. This inverse provides a unique solution when the rank of $\bM_{12}$ equals $l$, the number of passive degrees of freedom in the system (it cannot be more, since the matrix only has $l$ rows). The rank condition in this context is sometimes called the property of "Strong Inertial Coupling". It is state dependent; in the cart-pole example above $\bM_{12} = \cos\theta$ and drops rank exactly when $\cos\theta = 0$. A system has Global Strong Inertial Coupling if it exhibits Strong Inertial Coupling in every state.

In general, we can define some combination of active and passive joints that we would like to control. This combination is sometimes called a "task space". Let $\by = {\bf h}(\bq) \in \Re^p$ be the task-space output, and define $\bH(\bq) = \pd{{\bf h}}{\bq}$, so that $\dot\by = \bH\dot\bq$, with $\bH_1$ and $\bH_2$ denoting the blocks of $\bH$ that multiply $\dot\bq_1$ and $\dot\bq_2$, respectively.

If the actuated joints are commanded so that \begin{equation} \ddot\bq_2 = \bar\bH^+ \left [\ddot\by^d - \dot\bH\dot\bq - \bH_1 \bM_{11}^{-1}\btau_1 \right], \label{eq:q2cmd} \end{equation} where $\bar{\bH} = \bH_2 - \bH_1 \bM_{11}^{-1} \bM_{12}$ and $\bar\bH^+$ is the right Moore-Penrose pseudo-inverse, $$\bar\bH^+ = \bar\bH^T (\bar\bH \bar\bH^T)^{-1},$$ then we have \begin{equation} \ddot\by = \ddot\by^d, \end{equation} subject to the rank condition \begin{equation}\text{rank}\left(\bar{\bH} \right) = p. \label{eq:rank_condition}\end{equation}

**Proof Sketch.**
Differentiating the output function we have \begin{gather*} \dot\by =
\bH \dot\bq \\ \ddot\by = \dot\bH \dot\bq + \bH_1 \ddot\bq_1 + \bH_2
\ddot\bq_2. \end{gather*} Solving \ref{eq:passive_dyn} for the dynamics
of the unactuated joints we have: \begin{equation} \ddot\bq_1 =
\bM_{11}^{-1} (\btau_1 - \bM_{12} \ddot\bq_2) \label{eq:q1cmd}
\end{equation} Substituting, we have \begin{align} \ddot\by =& \dot\bH
\dot\bq + \bH_1 \bM_{11}^{-1}(\btau_1 - \bM_{12}\ddot\bq_2) + \bH_2
\ddot\bq_2 \\ =& \dot\bH \dot\bq + \bar{\bH} \ddot\bq_2 + \bH_1
\bM_{11}^{-1}\btau_1 \\ =& \ddot\by^d \end{align} Note that the last
line required the rank condition ($\ref{eq:rank_condition}$) on
$\bar\bH$ to ensure that the rows of $\bar{\bH}$ are linearly
independent, allowing $\bar\bH \bar\bH^+ = \bI$.

In order to execute a task space trajectory one could command $$\ddot\by^d = \ddot{\bar{\by}}^d + \bK_d (\dot{\bar{\by}}^d - \dot\by) + \bK_p (\bar{\by}^d -\by).$$ Assuming the internal dynamics are stable, this yields convergent error dynamics in $(\bar{\by}^d - \by)$ when $\bK_p,\bK_d > 0$.

Consider the task of trying to track a desired kinematic trajectory with the endpoint of the pendulum in the cart-pole system. With one actuator and kinematic constraints, we might be hard-pressed to track a trajectory in both the horizontal and vertical coordinates. But we can at least try to track a trajectory in the vertical position of the end-effector.

Using the task-space PFL derivation, we have: \begin{gather*} y = h(\bq) = -l \cos\theta \\ \dot{y} = l \dot\theta \sin\theta \end{gather*} If we define a desired trajectory: $$\bar{y}^d(t) = \frac{l}{2} + \frac{l}{4} \sin(t),$$ then the task-space controller is easily implemented using the derivation above.

The task space derivation above provides a convenient generalization of partial feedback linearization (PFL).
If we choose ${\bf y} = \bq_1$ (non-collocated), we have $$\bH_1 = \bI, \bH_2 = 0, \dot\bH = 0, \bar{\bH}=-\bM_{11}^{-1}\bM_{12}.$$ The rank condition ($\ref{eq:rank_condition}$) requires that $\text{rank}(\bM_{12}) = l$, in which case we can write $\bar{\bH}^+=-\bM_{12}^+\bM_{11}$, reducing the rank condition to precisely the "Strong Inertial Coupling" condition described above.

Recall in the last chapter, we used energy shaping to swing up the simple pendulum. This idea turns out to be a bit more general than just for the simple pendulum. As we will see, we can use similar concepts of "energy shaping" to produce swing-up controllers for the acrobot and cart-pole systems. It's important to note that it only takes one actuator to change the total energy of a system.

Although a large variety of swing-up controllers have been proposed
for these model systems (c.f.

Let's try to apply the energy-shaping ideas to the cart-pole system. The basic idea is to use collocated PFL so that, as we saw above, the pole dynamics reduce to $\ddot\theta = -uc - s$ (taking $u = \ddot{x}^d$): the equations of a *decoupled* pendulum, just with a slightly odd control input that is modulated by $\cos\theta$.

Let us regulate the energy of this decoupled pendulum using energy shaping. The energy of the pendulum (a unit mass, unit length, simple pendulum in unit gravity) is given by: $$E(\bx) = \frac{1}{2} \dot\theta^2 - \cos\theta.$$ The desired energy, equivalent to the energy at the desired fixed-point, is $$E^d = 1.$$ Again defining $\tilde{E}(\bx) = E(\bx) - E^d$, we now observe that \begin{align*} \dot{\tilde{E}}(\bx) =& \dot{E}(\bx) = \dot\theta \ddot\theta + \dot\theta s \\ =& \dot\theta [ -uc - s] + \dot\theta s \\ =& - u\dot\theta \cos\theta. \end{align*}

Therefore, if we design a controller of the form $$u = k \dot\theta\cos\theta \tilde{E},\quad k>0$$ the result is $$\dot{\tilde{E}} = - k \dot\theta^2 \cos^2\theta \tilde{E}.$$ This guarantees that $| \tilde{E} |$ is non-increasing, but isn't quite enough to guarantee that it will go to zero. For example, if $\theta = \dot\theta = 0$, the system will never move. However, if we have that $$\int_0^t \dot\theta^2(t') \cos^2 \theta(t') dt' \rightarrow \infty,\quad \text{as } t \rightarrow \infty,$$ then we have $\tilde{E}(t) \rightarrow 0$. This condition, a version of LaSalle's theorem that we will develop in our notes on Lyapunov methods, is satisfied for all but the trivial constant trajectories at fixed points.
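A quick numerical sketch of this controller acting on the decoupled pendulum (unit constants; the gain, step size, and initial condition are arbitrary choices), using simple Euler integration:

```python
import numpy as np

def swingup_step(th, thd, k=1.0, dt=1e-3):
    """One Euler step of the decoupled pendulum thdd = -u*c - s under the
    energy-shaping controller u = k * thd * c * Etilde (unit constants)."""
    s, c = np.sin(th), np.cos(th)
    E = 0.5 * thd**2 - c
    u = k * thd * c * (E - 1.0)  # desired energy E^d = 1
    thdd = -u * c - s
    return th + dt * thd, thd + dt * thdd
```

Starting from a small perturbation and iterating, $|\tilde{E}|$ shrinks toward zero as the pendulum pumps itself up toward the homoclinic orbit.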

Now we return to the full system dynamics (which includes the cart). A simple strategy is to add PD terms on the cart position to the energy-shaping command, e.g. $$\ddot{x}^d = k_E \dot\theta \cos\theta \tilde{E} - k_p x - k_d \dot{x},$$ with $k_E, k_p, k_d > 0$.

Swing-up control for the acrobot can be accomplished in much the same
way.

Use collocated PFL to command the actuated joint, $\ddot{q}_2 = \ddot{q}_2^d$. Define the energy $$E(\bx) = \frac{1}{2}\dot\bq^T\bM\dot{\bq} + U(\bq),$$ and the desired energy $$E^d = U(\bx^*).$$ Then apply the energy-shaping input $$\bar{u} = \dot{q}_1 \tilde{E},$$ combined with PD terms on the actuated joint: $$\ddot{q}_2^d = - k_1 q_2 - k_2 \dot{q}_2 + k_3 \bar{u}.$$

The extra PD terms prevent a proof of asymptotic convergence to the homoclinic orbit; a proof does exist for another energy-based controller.

The energy shaping controller for swing-up presented here is a pretty faithful representative from the field of nonlinear underactuated control. Typically these control derivations require some clever tricks for simplifying or canceling out terms in the nonlinear equations, then some clever Lyapunov function to prove stability. In these cases, PFL was used to simplify the equations, and therefore the controller design.

These controllers are important, representative, and relevant. But clever tricks with nonlinear equations seem to be fundamentally limited. Most of the rest of the material presented in this book will emphasize more general computational approaches to formulating and solving these and other control problems.

Although the "swing up and balance" task is quite interesting for the acrobot and cart-pole systems, quadrotors tend to inspire a slightly different question. First of all, stabilizing a hovering fixed point is relatively easy on a quadrotor -- even our LQR controller will have an extremely large region of attraction. But people also tend to want to do more with their quadrotors than just have them hover in place. The task we'll consider here is trajectory planning: how can you find a feasible trajectory through state space for the quadrotor, even if there are obstacles to avoid that are only known at runtime?

Trajectory design, and especially trajectory optimization, is a big idea that we will explore more thoroughly later in the text. But there is one idea that I would like to present here, because in addition to being a very satisfying solution for quadrotors, it is philosophically quite close to the idea of partial feedback linearization. That idea is called differential flatness.

One of the most important lessons from partial feedback linearization is the idea that if you have $m$ actuators, then you basically get to control exactly $m$ quantities of your system. With the "task space" partial feedback linearization, we showed that these can be chosen in a fairly general way. Of course there are some requirements -- captured above by the rank condition, which checks for "inertial coupling". Differential flatness exploits this same idea (choosing $m$ outputs), but in the opposite direction. The idea is that, for some systems, if you give me a trajectory in those $m$ coordinates, it may in fact dictate what all of the states/actuators must have been doing. The fact that you can only execute a subset of possible trajectories can, in this case, make trajectory planning much easier!

Let's start with an example...

The planar quadrotor model described above has 3 degrees of freedom ($x,y,\theta$) but only two actuators (one for each propeller). My claim is that, if you give me a trajectory for just the location of the center of mass: $x(t), y(t), \forall t \in [t_0, t_f],$ then I will be able to infer what $\theta(t)$ must be over that same interval in order to be feasible. Furthermore, I can even infer $\bu(t)$. There is one technical condition: the trajectories you give me for $x(t)$ and $y(t)$ need to be four times continuously differentiable.

To see this, first observe that from ($\ref{eq:quad_x}$) and ($\ref{eq:quad_y}$) we have $$\frac{-m \ddot{x}}{ m \ddot{y} + mg} = \frac{(u_1 + u_2)\sin\theta}{(u_1+u_2)\cos\theta} = \tan\theta.$$ In words, given $\ddot{x}(t)$ and $\ddot{y}(t)$, I can solve for $\theta(t)$. I can differentiate this relationship (in time) twice more to obtain $\ddot\theta$. Using ($\ref{eq:quad_theta}$) with ($\ref{eq:quad_x}$) or ($\ref{eq:quad_y}$) gives us two linear equations for $u_1$ and $u_2$, which are easily solved.

Now you can see why we need the original trajectories to be smooth -- the solution to $\bu(t)$ depends on $\ddot\theta(t)$ which depends on $\frac{d^4 x(t)}{dt^4}$ and $\frac{d^4 y(t)}{dt^4}$; we need those derivatives to exist along the entire trajectory.

What's more -- if we ignore input limits for a moment -- *any*
sufficiently smooth trajectory of $x(t), y(t)$ is feasible, so if I can
simply find one that avoids the obstacles, then I have designed my state
trajectory. As we will see, optimizing even high-degree
piecewise-polynomials is actually an easy problem (it works out to be a
quadratic program), assuming the constraints are convex. In practice,
this means that once you have determined whether to go left or right
around the obstacles, trajectory design is easy and fast.

I've coded up a simple example of that for you here:
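As a minimal sketch of the single-segment case: smooth boundary conditions on one flat output already pin down a degree-7 polynomial through a small linear solve (the waypoint values and time horizon below are assumptions):

```python
import numpy as np

# One smooth segment for a single flat output: a degree-7 polynomial is
# uniquely determined by position, velocity, acceleration, and jerk at
# both endpoints.  Boundary values and the horizon are assumptions.
t0, tf = 0.0, 2.0
bc0 = np.array([0.0, 0.0, 0.0, 0.0])   # [pos, vel, acc, jerk] at t0
bcf = np.array([1.0, 0.0, 0.0, 0.0])   # [pos, vel, acc, jerk] at tf

def deriv_row(t, d, degree=7):
    # row of the constraint matrix for the d-th derivative at time t
    row = np.zeros(degree + 1)
    for p in range(d, degree + 1):
        coef = np.prod(np.arange(p, p - d, -1)) if d > 0 else 1.0
        row[p] = coef * t**(p - d)
    return row

A = np.vstack([deriv_row(t0, d) for d in range(4)] +
              [deriv_row(tf, d) for d in range(4)])
coeffs = np.linalg.solve(A, np.concatenate([bc0, bcf]))
p = np.polynomial.Polynomial(coeffs)
```

With multiple segments and a cost on, e.g., the integrated snap, the same setup becomes the quadratic program mentioned above; obstacle-avoidance constraints enter as (convex) linear constraints on the coefficients.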

The example above demonstrates that the planar quadrotor system is
differentially flat in the outputs $x(t)$, $y(t)$. More generally, if we
have a dynamical system $$\dot{\bx} = f(\bx, \bu),$$ and we design some
output coordinates (essentially "task-space"): $$\bz(t) = h\left(\bx, \bu,
\frac{d\bu}{dt}, ..., \frac{d^k\bu}{dt^k}\right),$$ such that we can write
$\bx$ and $\bu$ purely as a function of the output and its time
derivatives, \begin{gather*} \bx (t) = \bx\left(\bz, \frac{d\bz}{dt}, ...,
\frac{d^k\bz}{dt^k}\right), \\ \bu (t) = \bu\left(\bz, \frac{d\bz}{dt}, ...,
\frac{d^k\bz}{dt^k}\right),\end{gather*} then we say that the system $f$ is
differentially flat in the outputs $\bz$.

I'm not aware of any numerical recipes for showing that a system is differentially flat, nor for finding potential flat outputs, but I admit I haven't worked on it nor searched for those recipes. That would be interesting! I think of differential flatness as a property that one must discover for a given system -- typically via a little mechanical intuition and a lot of algebra. Once found, it can be very useful.

Probably the most famous example of differential flatness is for the
full 3D quadrotor.

A few things to note about these examples, just so we also understand the limitations. First, the technique above is great for designing trajectories, but additional work is required to stabilize those trajectories (we'll cover that topic in more detail later in the notes). Trajectory stabilization benefits greatly from good state estimation, and the examples above were all performed in a motion capture arena. Also, the "simple" version of the trajectory design that is fast enough to be used for flying through moving hoops is restricted to convex optimization formulations -- which means one has to hard-code a priori the decisions about whether to go left or right / up or down around each obstacle.

The acrobot and cart-pole systems are just two of the model systems used heavily in underactuated control research. Other examples include:

- Pendubot
- Inertia wheel pendulum
- Furuta pendulum (horizontal rotation and vertical pendulum)
- Hovercraft

For this exercise you will work exclusively in this notebook. You will be asked to complete the following steps.

- (a) Derive the state-space dynamics $\dot{\bx} = f(\bx, \bu)$ of the cart-pole system.
- (b) Linearize the dynamics from point (a) around the unstable equilibrium point.
- (c) Analyze the linearization error for different values of the state $\bx$ and the control $\bu$.
- (d) Among the states from point (c), identify which ones are stabilized to the origin by the LQR controller.
