The equations of motion for a standard robot can be derived using the
method of Lagrange. Using $T$ as the total kinetic energy of the system,
$U$ as the total potential energy of the system, $L = T - U$, and $Q_i$
as the element of the generalized force vector corresponding to
$q_i$, the Lagrangian dynamic equations are:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = Q_i.$$
You can think of them as a generalization of Newton's equations. For a
particle, we have $T = \frac{1}{2}m\dot{q}^2$ and $U = U(q)$, so
$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = m\ddot{q}$ and
$\frac{\partial L}{\partial q} = -\frac{\partial U}{\partial q} = F$,
amounting to $F = m\ddot{q}$. But the Lagrangian derivation works in generalized
coordinate systems and for constrained motion.
Consider the simple double pendulum with torque actuation at both
joints and all of the mass concentrated in two points (for simplicity).
Using $q = [\theta_1, \theta_2]^T$, and $p_1, p_2$ to denote
the locations of the masses $m_1, m_2$, respectively, the kinematics of this system
are:
Note that $s_1$ is shorthand for $\sin\theta_1$, $c_{1+2}$ is shorthand for
$\cos(\theta_1 + \theta_2)$, etc. From this we can write the kinetic and potential
energy:
Taking the partial derivatives $\frac{\partial L}{\partial q}$ and $\frac{\partial L}{\partial \dot{q}}$,
and plugging them into the Lagrangian dynamic equations, reveals the equations of motion:
As we saw in chapter 1, numerically integrating (and animating) these
equations in Drake produces the expected result.
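If you'd rather let a computer do the cranking, the Lagrangian recipe is easy to automate with a symbolic math package. The sketch below (using sympy) assumes point masses $m_1, m_2$ at the ends of massless links of lengths $l_1, l_2$, with angles measured from the downward vertical; the symbol names are my own choices for illustration.

import sympy as sp

t = sp.symbols("t")
m1, m2, l1, l2, g = sp.symbols("m1 m2 l1 l2 g", positive=True)
u1, u2 = sp.symbols("u1 u2")                       # joint torques
th1 = sp.Function("theta1")(t)
th2 = sp.Function("theta2")(t)

# Kinematics of the two point masses (x to the right, y up, angles measured
# from the downward vertical).
p1 = sp.Matrix([l1 * sp.sin(th1), -l1 * sp.cos(th1)])
p2 = p1 + sp.Matrix([l2 * sp.sin(th1 + th2), -l2 * sp.cos(th1 + th2)])
v1, v2 = p1.diff(t), p2.diff(t)

# Lagrangian L = T - U.
T = (m1 * v1.dot(v1) + m2 * v2.dot(v2)) / 2
U = m1 * g * p1[1] + m2 * g * p2[1]
L = sp.simplify(T - U)

# Lagrange's equations: d/dt(dL/d(qdot_i)) - dL/dq_i = Q_i.
q = [th1, th2]
Q = [u1, u2]
eom = [sp.simplify(L.diff(qi.diff(t)).diff(t) - L.diff(qi) - Qi)
       for qi, Qi in zip(q, Q)]
sp.pprint(eom)   # each entry equals zero along trajectories of the system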
If you are not comfortable with these equations, then any good book
chapter on rigid body mechanics can bring you up to speed.
[1] is an excellent practical guide to robot
kinematics/dynamics. But [2] is my favorite mechanics
book, by far; I highly recommend it if you want something more. It's also
ok to continue on for now thinking of Lagrangian mechanics simply as a
recipe that you can apply in a great many situations to generate the
equations of motion.
I've also included a few more details about the connections between
these equations and the variational principles of mechanics in the notes below, to the extent they may prove useful in our
quest to write better optimizations for mechanical systems.
If you crank through the Lagrangian dynamics for a few simple robotic
manipulators, you will begin to see a pattern emerge - the resulting
equations of motion all have a characteristic form. For example, the
kinetic energy of your robot can always be written in the form
$$T = \frac{1}{2}\dot{q}^T M(q)\,\dot{q},$$
where $M(q)$ is the state-dependent inertia matrix
(aka mass matrix). This observation affords some insight into general
manipulator dynamics - for example, we know that $M(q)$ is always
positive definite and symmetric [3, p.107] and has a
beautiful sparsity pattern [4] that we should take
advantage of in our algorithms.
Continuing our abstractions, we find that the equations of motion of a
general robotic manipulator (without kinematic loops) take the form
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} = \tau_g(q) + Bu,$$
where $q$ is the joint position vector, $M$ is the inertia matrix, $C$ captures
Coriolis forces, and $\tau_g$ is the gravity vector. The matrix $B$
maps control inputs $u$ into generalized forces. Note that we pair
$M\ddot{q} + C\dot{q}$ together on the left side because these terms
together represent the force of inertia; the
contribution from kinetic energy to the Lagrangian:
$$\frac{d}{dt}\frac{\partial T}{\partial \dot{q}} - \frac{\partial T}{\partial q} = M(q)\ddot{q} + C(q,\dot{q})\dot{q}.$$
Whenever I write the manipulator equation above, I see $f = ma$.
The equations of motion from Example 1 can be written compactly in this
manipulator form; note that the choice of the matrix $C$ is not unique.
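Since these notes lean on Drake, it's worth noting that you can evaluate each of these terms numerically for a model you have loaded. The sketch below is only an outline: the URDF filename is a placeholder, and the method names are worth double-checking against your installed Drake version.

import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import MultibodyPlant

plant = MultibodyPlant(time_step=0.0)               # continuous-time model
Parser(plant).AddModels("double_pendulum.urdf")     # placeholder model file
plant.Finalize()

context = plant.CreateDefaultContext()
plant.SetPositions(context, [1.0, 0.5])
plant.SetVelocities(context, [0.2, -0.1])

M = plant.CalcMassMatrixViaInverseDynamics(context)   # M(q)
Cv = plant.CalcBiasTerm(context)                      # C(q, qdot) * qdot
tau_g = plant.CalcGravityGeneralizedForces(context)   # tau_g(q)

# The structural properties discussed above: M is symmetric positive definite.
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0)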
The manipulator equations are very general, but they do define some
important characteristics. For example, $\ddot{q}$ is (state-dependent)
linearly related to the control input, $u$. This observation justifies
the control-affine form of the dynamics assumed throughout the notes. There
are many other important properties that might prove useful. For instance,
we know that the $i$th element of $C(q,\dot{q})\dot{q}$ is given by [3, §5.2.3]:
$$\left[C(q,\dot{q})\dot{q}\right]_i = \sum_{j,k}\frac{1}{2}\left(\frac{\partial M_{ij}}{\partial q_k} + \frac{\partial M_{ik}}{\partial q_j} - \frac{\partial M_{jk}}{\partial q_i}\right)\dot{q}_j\dot{q}_k$$
(e.g., it is also quadratic in $\dot{q}$). For the appropriate choice of
$C$, we have that $\dot{M} - 2C$ is
skew-symmetric [5, §9.1].
Note that we have chosen to use the notation of second-order systems
(with $\dot{q}$ and $\ddot{q}$ appearing in the equations) throughout
this book. Although I believe it provides more clarity, there is an
important limitation to this notation: it is impossible to describe 3D
rotations in "minimal coordinates" using this notation without introducing
kinematic singularities (like the famous "gimbal lock"). For instance, a
common singularity-free choice for representing a 3D rotation is the unit
quaternion, described by 4 real values (plus a norm constraint). However
we can still represent the rotational velocity without singularities using
just 3 real values. This means that the length of our velocity vector is
no longer the same as the length of our position vector. For this reason,
you will see that most of the software in Drake uses the more
general notation with $v$ to represent velocity, $q$ to represent
positions, and the manipulator equations are written as
$$M(q)\dot{v} + C(q,v)v = \tau_g(q) + Bu,$$
which is not necessarily a second-order system. $\dot{q}$ and $v$
are related linearly by $\dot{q} = N(q)v$. Although $N(q)$
is not necessarily square, its left Moore-Penrose
pseudo-inverse can be used to invert that relationship
without residual error, provided that $\dot{q}$ is in the range space of
$N(q)$ (that is, if it could have been produced as $\dot{q} = N(q)v$ for
some $v$). See [6] for a nice discussion of this topic.
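As a tiny illustration of the $q$/$v$ distinction, consider a single floating rigid body: the position vector has 7 entries (a unit quaternion plus a translation) while the velocity has 6. The sketch below is an assumption-laden outline; the constructor and method names are worth checking against your Drake version.

import numpy as np
from pydrake.multibody.plant import MultibodyPlant
from pydrake.multibody.tree import SpatialInertia

plant = MultibodyPlant(time_step=0.0)
plant.AddRigidBody("brick", SpatialInertia.SolidBoxWithMass(1.0, 0.1, 0.2, 0.3))
plant.Finalize()
context = plant.CreateDefaultContext()

print(plant.num_positions(), plant.num_velocities())   # 7 vs. 6

# qdot = N(q) v, and the pseudo-inverse maps back without error here because
# this qdot is (by construction) in the range space of N(q).
v = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 1.0])
qdot = plant.MapVelocityToQDot(context, v)
v_again = plant.MapQDotToVelocity(context, qdot)
assert np.allclose(v, v_again)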
Recursive Dynamics Algorithms
The equations of motions for our machines get complicated quickly.
Fortunately, for robots with a tree-link kinematic structure, there are
very efficient and natural recursive algorithms for generating these
equations of motion. For a detailed reference on these methods, see
[7]; some people prefer reading about the
Articulated Body Method in [8]. The implementation in
Drake uses a related formulation from [9].
Bilateral Position Constraints
If our robot has closed-kinematic chains, for instance those that arise
from a four-bar
linkage, then we need a little more. The Lagrangian machinery above
assumes "minimal coordinates"; if our state vector contains all of
the links in the kinematic chain, then we do not have a minimal
parameterization -- each kinematic loop adds (at least) one constraint so
should remove (at least) one degree of freedom. Although some constraints
can be solved away, the more general solution is to use the Lagrangian to
derive the dynamics of the unconstrained system (a kinematic tree without
the closed-loop constraints), then add additional generalized forces that
ensure that the constraint is always satisfied.
Consider the constraint equation $$h(q) = 0.$$ For example, in the case of the kinematic closed chain,
this can be the kinematic constraint that the location of one end of the
chain is equal to the location of the other end of the chain. In the
following derivation, we'll be using the time derivatives of this
constraint: $$\dot{h} = H(q)\dot{q} = 0, \qquad \ddot{h} = H(q)\ddot{q} + \dot{H}(q)\dot{q} = 0,$$
where $H(q) = \frac{\partial h}{\partial q}$ is the "Jacobian
with respect to $q$". Note that $\dot{H}(q)\dot{q}$ is best
calculated directly (rather than trying to solve for $\dot{H}(q)$ as
an object by itself).
The equations of motion can be written
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} = \tau_g(q) + Bu + H^T(q)\lambda,$$
where $\lambda$ is the constraint force. Let's use the shorthand
$$\tau = \tau_g(q) + Bu - C(q,\dot{q})\dot{q}.$$ In an
optimization-based formulation (for instance, to implement this
constraint in a trajectory optimization), we can simply make $\lambda$ a
decision variable, and use the dynamics and the constraint equations above
as constraints.
In other cases, it can be desirable to solve explicitly for
$\lambda$. Inserting the dynamics into $\ddot{h} = 0$
yields $$\lambda = -\left(H M^{-1} H^T\right)^+\left(H M^{-1}\tau + \dot{H}\dot{q}\right).$$ In many cases, this
matrix will be full rank (for instance, in the case of multiple
independent four-bar linkages) and the traditional inverse could have
been used. When the matrix drops rank (multiple solutions), then the
pseudo-inverse will select the solution with the smallest constraint
forces in the least-squares sense.
To combat numerical "constraint drift", one might like to add a
restoring force in the event that the constraint is not satisfied to
numerical precision. To accomplish this, rather than solving for
$\ddot{h} = 0$ as above, we can instead solve for $$\ddot{h} = -2\alpha\dot{h} - \alpha^2 h,$$
where $\alpha > 0$ is a stiffness parameter. This is
known as Baumgarte's stabilization technique, implemented here with a
single parameter to provide a critically-damped response. Carrying this
through yields
$$\lambda = -\left(H M^{-1} H^T\right)^+\left(H M^{-1}\tau + \left(\dot{H} + 2\alpha H\right)\dot{q} + \alpha^2 h\right).$$
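As a reference implementation of this recipe, here is a small numpy sketch. The function signature and argument names are my own; the quantities (the inertia matrix, the shorthand $\tau$, the constraint Jacobian, $\dot{H}\dot{q}$, $h$, and $\dot{h}$) are assumed to be evaluated from your model at the current state.

import numpy as np

def constrained_dynamics(M, tau, H, Hdot_qdot, h, hdot, alpha=10.0):
    """Solve M qddot = tau + H.T @ lam, with the constraint stabilized so that
    hddot = -2*alpha*hdot - alpha**2 * h (alpha = 0 recovers the unstabilized
    solution above)."""
    Minv = np.linalg.inv(M)
    lam = -np.linalg.pinv(H @ Minv @ H.T) @ (
        H @ Minv @ tau + Hdot_qdot + 2 * alpha * hdot + alpha**2 * h)
    qddot = Minv @ (tau + H.T @ lam)
    return qddot, lam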
There is an important optimization-based derivation/interpretation of
these equations, which we will build on below, using Gauss's
Principle of Least Constraint. This principle says that we can
derive the constrained dynamics as
$$\min_{\ddot{q}} \ \frac{1}{2}\left(\ddot{q} - \ddot{q}_0\right)^T M \left(\ddot{q} - \ddot{q}_0\right) \quad \text{subject to} \quad H\ddot{q} + \dot{H}\dot{q} = 0,$$
where $\ddot{q}_0 = M^{-1}\tau$ is the "unconstrained acceleration"
[10]. The constrained equations of motion come right
out of the optimality conditions, with $\lambda$ acting precisely as the
Lagrange multiplier. This is a convex quadratic program with equality
constraints, which has a closed-form solution that is precisely the
equation for $\lambda$ above. It is also illustrative to look at the
dual formulation of this optimization, which can be written as
$$\min_{\lambda} \ \frac{1}{2}\lambda^T H M^{-1} H^T \lambda + \lambda^T\left(H\ddot{q}_0 + \dot{H}\dot{q}\right).$$
Observe that the
linear term contains the acceleration of $h$ if the dynamics evolved
without the constraint forces applied; let's call that $\ddot{h}_0 = H\ddot{q}_0 + \dot{H}\dot{q}$.
The primal formulation has accelerations as
the decision variables, and the dual formulation has constraint forces as
the decision variables. For now this is merely a cute observation; we'll
build on it more when we get to discussing contact forces.
Bilateral Velocity Constraints
Consider the constraint equation $$h_v(q,\dot{q}) = 0,$$ where $\frac{\partial h_v}{\partial \dot{q}} \neq 0$. These are less common,
but arise when, for instance, a joint is driven through a prescribed
motion. Again, we will use the derivative
$$\dot{h}_v = \frac{\partial h_v}{\partial q}\dot{q} + \frac{\partial h_v}{\partial \dot{q}}\ddot{q} = 0,$$
and the manipulator equations given by
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} = \tau_g(q) + Bu + \frac{\partial h_v}{\partial \dot{q}}^T\lambda.$$
These constraints can be added directly to an
optimization with $\lambda$ as a decision variable. Alternatively, we
can solve for $\lambda$ explicitly, yielding
$$\lambda = -\left(\frac{\partial h_v}{\partial \dot{q}} M^{-1} \frac{\partial h_v}{\partial \dot{q}}^T\right)^+\left(\frac{\partial h_v}{\partial \dot{q}} M^{-1}\tau + \frac{\partial h_v}{\partial q}\dot{q}\right).$$
Again, to combat constraint drift, we can ask instead for $\dot{h}_v = -\alpha h_v$, which yields
$$\lambda = -\left(\frac{\partial h_v}{\partial \dot{q}} M^{-1} \frac{\partial h_v}{\partial \dot{q}}^T\right)^+\left(\frac{\partial h_v}{\partial \dot{q}} M^{-1}\tau + \frac{\partial h_v}{\partial q}\dot{q} + \alpha h_v\right).$$
Consider the simple, but important, case of "locking" a joint
(making a particular joint act as if it is welded in the current
configuration). One could write this as the bilateral velocity constraint
$h_v = \dot{q}_i = 0$.
Since $\frac{\partial h_v}{\partial \dot{q}} = e_i^T$ (the vector with $1$ in the $i$th element and zeros everywhere
else), the explicit solution for $\lambda$ reduces to a scalar, and the
constraint force impacts the equation for $\ddot{q}_i$ and
\emph{only} the equation for $\ddot{q}_i$. The resulting equations
can be obtained by simply removing the terms associated with $\dot{q}_i$
and $\ddot{q}_i$ from the manipulator dynamics (and setting $\dot{q}_i = 0$).
Hybrid models via constraint forces
For hybrid models of contact, it can be useful to work with a
description of the dynamics in the floating-base coordinates and solve
for the contact forces which enforce the constraints in each mode. We'll
define a mode by a list of pairs of contact geometries that are in
contact, and even define whether each contact is in sticking contact
(with zero relative tangential velocity) or sliding contact. We can ask
our geometry engine to return the contact points on the bodies,
$p_A$ and $p_B$, where $p_A$ is the contact
point (or closest point to contact) in body $A$ and $p_B$ is the contact
point (or closest point to contact) in body $B$. More generally, we will
refer to the corresponding frames as contact frames, with their origins at these
points, their $z$-axis pointing out along the contact normal, and their
$x$- and $y$-axes chosen as an orthonormal basis orthogonal to the normal.
For each sticking contact, we have a bilateral position constraint of
the form $h(q) = p_A - p_B = 0$. We can assemble all of these constraints into a
single bilateral constraint vector $h(q)$ and use almost
precisely the recipe above. Note that applying Baumgarte stabilization
here would only stabilize in the normal direction, by definition the
vector between the two points, which is consistent with having (only) a
zero-velocity constraint in the tangential direction. For sliding
contacts, we have just the scalar constraint $\phi(q) = 0$, which
says that the distance is zero in the normal direction, plus a constraint
that imposes maximum dissipation,
which can be accommodated when we solve for $\lambda$.
The dynamics of multibody systems that make and break contact are closely
related to the dynamics of constrained systems, but tend to be much more
complex. In the simplest form, you can think of non-penetration as an
inequality constraint: the signed distance between collision bodies
must be non-negative. But, as we have seen in the chapters on walking, the
transitions when these constraints become active correspond to collisions,
and for systems with momentum they require some care. We'll also see that
frictional contact adds its own challenges.
There are three main approaches used to modeling contact with "rigid"
body systems: 1) rigid contact approximated with stiff compliant contact,
2) hybrid models with collision event detection, impulsive reset maps, and
continuous (constrained) dynamics between collision events, and 3) rigid
contact approximated with time-averaged forces (impulses) in a time-stepping
scheme. Each modeling approach has advantages/disadvantages for different
applications.
Before we begin, there is a bit of notation that we will use throughout.
Let $\phi(q)$ indicate the relative (signed) distance between two rigid
bodies. For rigid contact, we would like to enforce the unilateral
constraint $$\phi(q) \geq 0.$$ We'll use $n = \frac{\partial \phi}{\partial q}$
to denote the "contact normal", as well as $t_1$ and $t_2$ as a basis for the tangents at the contact
(orthonormal vectors in the Cartesian space, projected into joint space),
all represented as row vectors in joint coordinates.
Contact coordinates in 2D. (a) The signed distance between
contact bodies, $\phi(q)$. (b) The normal ($n$) and tangent ($t$) contact vectors -- note that these can be drawn in 2D when $q$ is
the position of one of the bodies, but more generally the vectors
live in the configuration space. (c) Sometimes it will be helpful for us to
express the tangential coordinates using a pair of opposing tangent directions; this will make
more sense in the 3D case. Contact coordinates in 3D.
We will also find it useful to assemble the contact normal and tangent
vectors into a single matrix, $J$, that we'll call the contact
Jacobian: $$J(q) = \begin{bmatrix} n \\ t_1 \\ t_2 \end{bmatrix}.$$ As written, $J\dot{q}$ gives the relative velocity of the
closest points in the contact coordinates; it can also be extended with
three more rows to output a full spatial velocity (e.g. when modeling
torsional friction). The generalized force produced by the contact is given
by $J^T\lambda$, where $\lambda = \begin{bmatrix} f_n & f_{t_1} & f_{t_2}\end{bmatrix}^T$.
Compliant Contact Models
Most compliant contact models are conceptually straightforward: we
will implement contact forces using a stiff spring (and
damper [11]) that produces forces to resist penetration (and
grossly model the dissipation of collision and/or friction). For
instance, for the normal force, $f_n$, we can use a simple (piecewise)
linear spring law:
$$f_n = \begin{cases} -k\,\phi(q) & \text{if } \phi(q) < 0, \\ 0 & \text{otherwise}. \end{cases}$$
Coulomb friction is described by two parameters -- $\mu_{static}$ and
$\mu_{dynamic}$ -- which are the coefficients of static and dynamic
friction. When the contact tangential velocity (given by $t\dot{q}$) is
zero, then friction will produce whatever force is necessary to resist
motion, up to the threshold $$|f_t| \leq \mu_{static}\, f_n.$$ But once this threshold is
exceeded, we have slip (non-zero contact tangential velocity); during
slip friction will produce a constant drag force $|f_t| = \mu_{dynamic}\, f_n$ in the direction opposing the motion. This behavior is not a simple
function of the current state, but we can approximate it with a continuous
function as pictured below.
(left) The Coulomb friction model. (right) A continuous
piecewise-linear approximation of friction (blue) and the Stribeck
approximation of Coulomb friction (green); the $x$-axis is the contact
tangential velocity, and the $y$-axis is the friction coefficient.
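To make this concrete, here is a crude Python sketch of a compliant normal force with a smooth, Stribeck-style friction coefficient. The particular functional forms and all of the constants are arbitrary illustrative choices, not the model used by any particular simulator.

import numpy as np

def normal_force(phi, phidot, k=1e4, d=1e2):
    """Spring-damper normal force; nonzero only during penetration (phi < 0)."""
    if phi >= 0:
        return 0.0
    return max(0.0, -k * phi - d * phidot)

def friction_coefficient(v_t, mu_static=0.9, mu_dynamic=0.7, v_s=1e-3):
    """A smooth, Stribeck-like curve: roughly linear in |v_t| below the
    Stribeck velocity v_s (regularized stiction), decaying to mu_dynamic
    above it."""
    s = abs(v_t) / v_s
    if s <= 1.0:
        return mu_static * s
    return mu_dynamic + (mu_static - mu_dynamic) * np.exp(1.0 - s)

def friction_force(v_t, f_n):
    """Tangential force opposing the sliding direction."""
    return -np.sign(v_t) * friction_coefficient(v_t) * f_n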
With these two laws, we can recover the contact forces as relatively
simple functions of the current state. However, the devil is in the
details. There are a few features of this approach that can make it
numerically unstable. If you've ever been working in a robotics simulator
and watched your robot take a step only to explode out of the ground and
go flying through the air, you know what I mean.
In order to tightly approximate the (nearly) rigid contact that most
robots make with the world, the stiffness of the contact "springs" must be
quite high. For instance, I might want my 180kg humanoid robot model to
penetrate into the ground no more than 1mm during steady-state standing.
The challenge with adding stiff springs to our model is that this results
in stiff
differential equations (it is not a coincidence that the word
stiff is the common term for this in both springs and ODEs). As a
result, the best implementations of compliant contact for simulation make
use of special-purpose numerical integration recipes (e.g.
[12]) and compliant contact models are often difficult to
use in e.g. trajectory/feedback optimization.
Note that there is one other type of friction that we talk about in
robotics. In joints we also, additionally, model joint damping, often as
a linear force proportional to the velocity which resists motion.
Phenomenologically, this is due to a viscous (fluid) drag force produced
by lubricants in the joint. The static and dynamic friction described
above are sometimes collectively referred to as "dry" friction.
But there is another serious numerical challenge with the basic
implementation of this model. Computing the signed-distance function,
$\phi(q)$, when it is non-negative is straightforward, but robustly
computing the "penetration depth" once two bodies are in collision is not.
Consider using $\phi(q)$ to represent the maximum penetration
depth of, say, two boxes that are contacting each other. With compliant
contact, we must have some penetration to produce any contact force. But
the direction of the normal vector, $\frac{\partial \phi}{\partial q}$, can
easily change discontinuously as the point of maximal penetration moves,
as I've tried to illustrate here:
(Left) Two boxes in penetration, with the signed distance
defined by the maximum penetration depth. (Right) The contact normal for
various points of maximal penetration.
If you really think about this example, it actually reveals even more
of the foibles of trying to even define the contact points and normals
consistently. It seems reasonable to define the point on body B as the
point of maximal penetration into body A, but notice that as I've drawn
it, that isn't actually unique! The corresponding point on body A should
probably be the point on the surface with the minimal distance from the
max penetration point (other tempting choices would likely cause the
distance to be discontinuous at the onset of penetration). But this
whole strategy is asymmetric; why shouldn't I have picked the vector
going with maximal penetration into body B? My point is simply that
these cases exist, and that even our best software implementations get
pretty unsatisfying when you look into the details. And in practice,
it's a lot to expect the collision engine to give consistent and smooth
contact points in every situation.
Some of the advantages of this approach include (1) the fact that it is
easy to implement, at least for simple geometries, (2) by virtue of being
a continuous-time model it can be simulated with error-controlled
integrators, and (3) the tightness of the approximation of "rigid" contact
can be controlled through relatively intuitive parameters. However, the
numerical challenges of dealing with penetration are very real, and they
motivate our other two approaches that attempt to more strictly enforce
the non-penetration constraints.
Rigid Contact with Event Detection
Impulsive Collisions
The collision event is described by the zero-crossings (from positive
to negative) of $\phi$. Let us start by assuming frictionless
collisions, allowing us to write $$M\ddot{q} = \tau + n^T\lambda,$$ where $n$ is the "contact normal"
we defined above and $\lambda$ is now a (scalar) impulsive force
that is well-defined when integrated over the time of the collision
(denoted $t_c^-$ to $t_c^+$). Integrate both sides of the equation over
that (instantaneous) interval: $$\int_{t_c^-}^{t_c^+} M\ddot{q}\, dt = \int_{t_c^-}^{t_c^+}\left(\tau + n^T\lambda\right) dt.$$
Since $M$, $n$, and $\tau$ are constants over this interval (and the
non-impulsive $\tau$ therefore integrates to zero over the instantaneous
interval), we are left with $$M\dot{q}^+ - M\dot{q}^- = n^T\int_{t_c^-}^{t_c^+}\lambda\, dt,$$ where
$\dot{q}^+$ is short-hand for $\dot{q}(t_c^+)$. Multiplying both
sides by $n M^{-1}$, we have $$n\dot{q}^+ - n\dot{q}^- = n M^{-1} n^T\int_{t_c^-}^{t_c^+}\lambda\, dt.$$ After the collision, we have $n\dot{q}^+ = -e\, n\dot{q}^-$,
with $e \in [0,1]$ denoting the
coefficient of restitution, yielding
$$-(1+e)\, n\dot{q}^- = n M^{-1} n^T\int_{t_c^-}^{t_c^+}\lambda\, dt,$$ and therefore
$$\int_{t_c^-}^{t_c^+}\lambda\, dt = -(1+e)\left(n M^{-1} n^T\right)^\dagger n\dot{q}^-.$$
I've used the notation $(\cdot)^\dagger$
here to denote the pseudoinverse (I would normally write
$(\cdot)^+$, but have changed it for this section to avoid confusion with the
post-impact superscript).
Substituting this back in above results in
$$\dot{q}^+ = \dot{q}^- - (1+e)\, M^{-1} n^T\left(n M^{-1} n^T\right)^\dagger n\dot{q}^-.$$
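Numerically, the frictionless impact update is a one-liner. Here is a small numpy sketch (with the contact normal stored as a row vector), followed by the sanity check of a point mass striking flat ground.

import numpy as np

def frictionless_impact(M, n, qdot_minus, e=0.0):
    """M: (nq, nq) inertia matrix; n: (1, nq) contact normal as a row vector;
    e: coefficient of restitution. Returns the post-impact velocity."""
    Minv = np.linalg.inv(M)
    impulse = -(1 + e) * np.linalg.pinv(n @ Minv @ n.T) @ (n @ qdot_minus)
    return qdot_minus + Minv @ n.T @ impulse

# A point mass hitting flat ground: the vertical velocity flips (scaled by e),
# the horizontal velocity is untouched.
print(frictionless_impact(np.eye(2), np.array([[0.0, 1.0]]),
                          np.array([1.0, -2.0]), e=0.5))   # -> [1. 1.]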
We can add friction at the contact. Rigid bodies will almost always
experience contact at only a point, so we typically ignore torsional
friction [7], and model only tangential
friction by extending $n$ to a matrix $J$ to capture the
gradient of the contact location tangent to the contact surface. Then
$\lambda$ becomes a Cartesian vector representing the contact impulse,
and (for infinite friction) the post-impact velocity condition becomes
zero relative tangential velocity together with the restitution law in the
normal direction, resulting in the corresponding equations for $\dot{q}^+$.
Often in optimization,
we can directly use the implicit form of these impact equations. If $\lambda$ is
restricted to a friction cone, as in Coulomb friction, in general we
have to solve an optimization to solve for $\dot{q}^+$ subject to the
friction cone constraints on $\lambda$.
Imagine a ball (a hollow-sphere) in the plane with mass , radius
. The configuration is given by The equations of motion are
where are the contact forces (which are zero except
during the impulsive collision). Given a coefficient of restitution
and a collision with a horizontal ground, the post-impact
velocities are:
Putting it all together
We can put the bilateral constraint equations and the impulsive
collision equations together to implement a hybrid model for unilateral
constraints of the form $\phi(q) \geq 0$. Let us generalize $\phi(q)$ to be the vector of all pairwise (signed) distance
functions between rigid bodies; this vector describes all possible
contacts. At every moment, some subset of the contacts are active, and
can be treated as a bilateral constraint ($\phi_i = 0$). The guards of
the hybrid model are when an inactive constraint becomes active
($\phi_i$ reaches zero with $\dot{\phi}_i < 0$), or when an active constraint
becomes inactive (the constraint force $\lambda_i$ required to maintain
$\phi_i = 0$ reaches zero and $\ddot{\phi}_i > 0$). Note that collision events will (almost always) only
result in new active constraints when $e = 0$, i.e. when we have purely
inelastic collisions, because elastic collisions will rarely permit
sustained contact.
If the contact involves Coulomb friction, then the transitions
between sticking and sliding can be modeled as additional hybrid
guards.
Time-stepping Approximations for Rigid Contact
So far we have seen two different approaches to enforcing the
inequality constraints of contact, and the friction
cone. First we introduced compliant contact which effectively enforces
non-penetration with a stiff spring. Then we discussed event detection as
a means to switch between different models which treat the active
constraints as equality constraints. But, perhaps surprisingly, one of
the most popular and scalable approaches is something different: it
involves formulating a mathematical program that can solve the inequality
constraints directly on each time step of the simulation. These
algorithms may be more expensive to compute on each time step, but they
allow for stable simulations with potentially much larger and more
consistent time steps.
Complementarity formulations
What class of mathematical program do we need to simulate contact?
The discrete nature of contact suggests that we might need some form of
combinatorial optimization. Indeed, the most common transcription is to
a Linear Complementarity Problem (LCP) [13], as
introduced by [14] and [15]. An
LCP is typically written as: given a matrix $A$ and a vector $b$, find vectors $w$ and $z$
satisfying $$w = Az + b, \quad w \geq 0, \quad z \geq 0, \quad w^T z = 0.$$
They are directly related to the optimality conditions of
a (potentially non-convex) quadratic program, and
[15] showed that the LCPs generated by our
rigid-body contact problems can be solved by Lemke's algorithm. As
short-hand we will write these complementarity constraints as
$$0 \leq w \perp z \geq 0.$$
Rather than dive into the full transcription, which has many terms
and can be relatively difficult to parse, let me start with two very
simple examples.
Consider our favorite simple mass being actuated by a horizontal
force (with the double integrator dynamics), but this time we will add
a wall that will prevent the cart position from taking negative
values: our non-penetration constraint is $q \geq 0$. Physically, this
constraint is implemented by a normal (horizontal) force, $\lambda$,
yielding the equations of motion
$$m\ddot{q} = u + \lambda.$$
$\lambda$ is defined as the force required to enforce the non-penetration
constraint; certainly the following are true: $\lambda \geq 0$ and
$q\,\lambda = 0$. The latter is the "complementarity constraint", and you
can read it here as "either $q$ is zero or $\lambda$ is zero" (or both); it
is our "no force at a distance" constraint, and it is clearly
non-convex. It turns out that satisfying these constraints, plus $q \geq 0$, is sufficient to fully define $\lambda$.
To define the LCP, we first discretize time, by approximating
$$q[n+1] = q[n] + h\,\dot{q}[n+1], \qquad \dot{q}[n+1] = \dot{q}[n] + \frac{h}{m}\left(u[n] + \lambda[n]\right).$$
This is almost the standard
Euler approximation, but note the use of $\dot{q}[n+1]$ in the right-hand
side of the first equation -- this is actually a semi-implicit
Euler approximation, and this choice is essential in the
derivation.
Given $q[n]$ and $\dot{q}[n]$, we can solve for $\lambda[n]$ and $q[n+1]$
simultaneously, by solving the following LCP:
$$0 \leq q[n+1] \perp \lambda[n] \geq 0, \qquad q[n+1] = q[n] + h\,\dot{q}[n] + \frac{h^2}{m}\left(u[n] + \lambda[n]\right).$$
Take a moment and convince
yourself that this fits into the LCP prescription given above.
Note: Please don't confuse this visualization with the more common
visualization of the solution space for an LCP (in two or more
dimensions) in terms of "complementary cones"[13].
Perhaps it's also helpful to plot the solution, as a function of . I've done it here for :
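For this scalar example, the LCP can be solved in closed form at every time step: either the unconstrained semi-implicit step keeps $q[n+1] \geq 0$ and $\lambda[n] = 0$, or we pin $q[n+1] = 0$ and back out the (nonnegative) force that achieves it. A sketch of the resulting time-stepping loop (variable names are mine):

def step(q, qdot, u, h=0.01, m=1.0):
    """One semi-implicit step of the cart-and-wall model, solving the scalar
    LCP  0 <= q[n+1]  perp  lambda[n] >= 0  in closed form."""
    q_free = q + h * (qdot + h * u / m)        # candidate q[n+1] with lambda = 0
    if q_free >= 0.0:
        lam = 0.0
    else:
        lam = -(m / h**2) * (q + h * qdot) - u   # force that pins q[n+1] at 0
    qdot_next = qdot + h * (u + lam) / m
    q_next = q + h * qdot_next
    return q_next, qdot_next, lam

# Drive the cart into the wall and watch the contact force turn on.
q, qdot = 1.0, 0.0
for n in range(300):
    q, qdot, lam = step(q, qdot, u=-1.0)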
In the (time-stepping, "impulse-velocity") LCP formulation, we write
the contact problem in its combinatorial form. In the simple example
above, the complementarity constraints force any solution to lie on
either the positive x-axis or the positive
y-axis. The equality constraint is simply a line that
will intersect with this constraint manifold at the solution. But in
this frictionless case, it is important to realize that these are simply
the optimality conditions of a convex optimization problem: the
discrete-time version of Gauss's principle that we used above. Using
$v$ as shorthand for $\dot{q}[n+1]$, substituting $q[n+1] = q[n] + hv$, and
scaling the objective, we have
$$\min_v \ \frac{1}{2}m\left(v - v^0\right)^2 \quad \text{subject to} \quad \frac{q[n]}{h} + v \geq 0,$$
where we denote the next-step velocity that would have occurred before the
contact impulse is applied as $v^0 = \dot{q}[n] + \frac{h}{m}u[n]$.
The LCP for the frictionless
contact dynamics is precisely the optimality conditions of this convex
(because $m > 0$) quadratic program, and once again the contact
impulse, $h\lambda[n]$, plays the role of the Lagrange multiplier
(with units $N \cdot s$).
So why do we talk so much about LCPs instead of QPs? Well, LCPs can
also represent a wider class of problems, which is what we arrive at
with the standard transcription of Coulomb friction. In the limit of
infinite friction, we could add an additional constraint that the
tangential velocity at each contact point is equal to zero (but these
equations may not always have a feasible solution). Once we admit limits on the magnitude of the friction forces, the non-convexity of the disjunctive form rears its ugly head.
We can use an LCP to find a feasible solution with Coulomb
friction, but it requires some gymnastics with slack variables to make
it work. For this case in particular, I believe a very simple
example is best. Let's take our brick and remove the wall and the
wheels (so we now have friction with the ground).
The dynamics are the same as our previous example, but this time I'm using $f$ for the friction force, which is
inside the friction cone if $\dot{q} = 0$ and on the boundary of the
friction cone if $\dot{q} \neq 0$; this is known as the principle of
maximum dissipation [14]. Here the magnitude of the
normal force is always $mg$, so we have $|f| \leq \mu mg$, where
$\mu$ is the coefficient of friction. And we will use the same
semi-implicit Euler approximation to cast this into discrete time.
Now, to write the friction constraints as an LCP, we must introduce
some slack variables. First, we'll break the force into a positive
component and a negative component: $f[n] = f^+[n] - f^-[n]$. And we will
introduce one more variable, $v_{abs}$, which will be non-zero if the
velocity is non-zero (in either direction). Now we can write the LCP,
where each instance of $|\dot{q}[n+1]|$ is written
out using these slack variables. It's enough
to make your head spin! But if you work it out, all of the
constraints we want are there. For example, for $\dot{q}[n+1] > 0$, we
must have $v_{abs} = \dot{q}[n+1]$, $f^+ = 0$, and $f^- = \mu mg$. When
$\dot{q}[n+1] = 0$, we can have $v_{abs} = 0$, any $f^+, f^- \in [0, \mu mg]$, and
those forces must add up to make $\dot{q}[n+1] = 0$.
We can put it all together and write an LCP with both normal forces
and friction forces, related by Coulomb friction (using a polyhedral
approximation of the friction cone in 3d)[14].
Although the solution to any LCP can
also be represented as the solution to a QP, the QP for this problem
is non-convex.
Anitescu's convex formulation
In [16], Anitescu described the natural convex
formulation of this problem by dropping the maximum dissipation
inequalities and allowing any force inside the friction cone. Consider
the following optimization:
$$\min_v \ \frac{1}{2}\left(v - v^0\right)^T M\left(v - v^0\right) \quad \text{subject to} \quad \frac{\phi(q)}{h} + \left(n + \mu t_i\right)v \geq 0, \ \forall i,$$
where the $t_i$ are a set of tangent row vectors in joint coordinates
parameterizing the polyhedral approximation of the friction cone (as in
[14]). Importantly, for each $t_i$ we must also
have $-t_i$ in the set. By writing the Lagrangian and checking the
stationarity condition
$$M\left(v - v^0\right) = \sum_i \lambda_i\left(n + \mu t_i\right)^T,$$
we can see that the
Lagrange multiplier, $\lambda_i$, associated with the $i$th
constraint is the magnitude of the impulse (with units $N \cdot s$) in
the direction $n + \mu t_i$, where $n$ is the contact normal; the sum of the forces is an
affine combination of these extreme rays of the polyhedral friction
cone. As written, the optimization is a QP.
But be careful! Although the primal objective was strictly convex, and
dual problems are always convex, the dual objective here can be merely
positive semi-definite. This isn't a mistake -- [16]
describes a few simple examples where the solution for $v$ is unique,
but the impulses that produce it are not (think of a table with four
legs).
When the tangential velocity is zero, this constraint is tight; for
sliding contact the relaxation effectively causes the contact to
"hydroplane" up out of contact, because the constraints require
$\frac{\phi(q)}{h} + nv \geq \mu\left|tv\right| > 0$ during sliding. It seems like quite a reasonable
approximation to me, especially for small $h$!
Let's continue on and write the dual program. To make the notation a
little better, let us stack the Jacobian vectors into $J_\mu$, such
that the $i$th row is $n + \mu t_i$, so that we have
$$M\left(v - v^0\right) = J_\mu^T\lambda, \qquad \lambda \geq 0.$$
Substituting this back into the Lagrangian gives us the dual program:
$$\min_{\lambda \geq 0} \ \frac{1}{2}\lambda^T J_\mu M^{-1} J_\mu^T\lambda + \lambda^T\left(J_\mu v^0 + \frac{\phi(q)}{h}\mathbf{1}\right).$$
One final note: I've written just one contact point here to simplify
the notation, but of course we repeat the constraint(s) for each contact
point; [16] studies the typical case of only
including potential contact pairs with $\phi(q) \leq \epsilon$, for
small $\epsilon > 0$.
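Here is a sketch of the (primal) Anitescu QP for a single 2D contact, written with cvxpy. All of the problem data below are placeholders you would compute from your model, and the dual variables returned by the solver are the impulse magnitudes along the friction-cone generators.

import cvxpy as cp
import numpy as np

M = np.diag([1.0, 1.0])        # inertia matrix (placeholder data)
v0 = np.array([0.5, -1.0])     # next-step velocity with no contact impulse
n = np.array([0.0, 1.0])       # contact normal (row) in generalized coordinates
t = np.array([1.0, 0.0])       # tangent direction; -t is also in the set
phi, h, mu = 0.0, 0.01, 0.8

v = cp.Variable(2)
constraints = [phi / h + (n + mu * ti) @ v >= 0 for ti in (t, -t)]
prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(v - v0, M)), constraints)
prob.solve()

print("next-step velocity:", v.value)
# Dual variables = impulse magnitudes along the friction-cone generators.
print("impulses:", [float(c.dual_value) for c in constraints])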
Todorov's regularization
According to [17], the popular MuJoCo simulator uses a slight
variation of this convex relaxation, with the optimization:
where is a "desired contact velocity", and
describes the friction cone. The friction cone
constraint looks new but is not; observe that , where is a clever parameterization of
the friction cone in the polyhedral case. The bigger difference is in
the linear term: [17] proposed with
and stabilization gains that are chosen to
yield a critically damped response.
How should we interpret this? If you expand the linear terms, you
will see that this is almost exactly the dual form we get from the
position equality constraints formulation, including the Baumgarte
stabilization. It's interesting to think
about the implications of this formulation -- like the Anitescu
relaxation, it is possible to get some "force at a distance" because we
have not in any way expressed that $\lambda = 0$ when $\phi(q) > 0$.
In Anitescu, it happened in a velocity-dependent boundary layer; here it
could happen if you are "coming in hot" -- moving towards contact with
enough relative velocity that the stabilization terms want to slow you
down.
In dual form, it is natural to consider the full conic description of
the friction cone, $\sqrt{\lambda_{t_1}^2 + \lambda_{t_2}^2} \leq \mu\,\lambda_n$. The resulting dual optimization has a quadratic
objective and second-order cone constraints (an SOCP).
There are a number of other important relaxations that happen in
MuJoCo. To address the positive indefiniteness in the objective,
[17] relaxes the dual objective by adding a small term
to the diagonal. This guarantees convexity, but the convex optimization
can still be poorly conditioned. The stabilization terms in the
objective are another form of relaxation, and [17]
also implements adding additional stabilization to the inertia matrix,
called the "implicit damping inertia". These relaxations can
dramatically improve simulation speed, both by making each convex
optimization faster to solve, and by allowing the simulator to take
larger integration time steps. MuJoCo boasts a special-purpose solver
that can simulate complex scenes at seriously impressive speeds -- often
orders of magnitude faster than real time. But these relaxations can
also be abused -- it's unfortunately quite common to see physically
absurd MuJoCo simulations, especially when researchers who don't care
much about the physics start changing simulation parameters to optimize
their learning curves!
The Semi-Analytic Primal (SAP) solver
The contact solver that we use in Drake further extends and improves upon this line of work... [18].
What equations do we get if we simulate the simple cart colliding
with a wall example that we used above?
Following [17], the first relaxation we find is
the addition of "implicit damping inertia", which effectively adds
some inertia and some damping, parameterized by the regularization
term , to the explicit form of the equations. As a result our
discrete-time cart dynamics are given by
What remains is to compute , and here is where things get
interesting. Motivated by the idea of "softening the Gauss principle"
(e.g. the principle of least constraint [10]) by
replacing the hard constraints with a soft penalty term, we end up
with a convex optimization which, in this scalar frictionless case,
reduces to a small convex optimization involving $v^0$, the next-step
velocity before the contact impulse, and a "desired contact velocity" that
acts as another regularization term, implemented like Baumgarte
stabilization. This introduces two additional
relaxation parameters; MuJoCo calls these the "impulse
regularization scaling" and the "error-reduction time
constant".
Also, a single point force cannot capture effects like torsional
friction, and performs badly in some very simple cases (imagine a box
sitting on a table). As a result, many of the most effective algorithms
for contact restrict themselves to very limited/simplified geometry.
For instance, one can place "point contacts" (zero-radius spheres) in
the four corners of a robot's foot; in this case adding forces at
exactly those four points as they penetrate some other body/mesh can
give more consistent contact force locations/directions. A surprising
number of simulators, especially for legged robots, do this.
In practice, most collision detection algorithms will return a list
of "collision point pairs" given the list of bodies (with one point pair
per pair of bodies in collision, or more if they are using the
aforementioned "multi-contact" heuristics), and our contact force model
simply attaches springs between these pairs. Given a point-pair,
and , both in world coordinates, ...
The field of analytic mechanics (or variational mechanics), has given us
a number of different and potentially powerful interpretations of these
equations. While our principle of virtual work was very much rooted in
mechanical work, these variational principles have proven to be much more
general. In the context of these notes, the variational principles also
provide some deep connections between mechanics and optimization. So let's
consider some of the equations above, but through the more general lens of
variational mechanics.
Virtual work
Most of us are comfortable with the idea that a body is in equilibrium
if the sum of all forces acting on that body (including gravity) is zero:
$$\sum_i f_i = 0.$$ It turns out that there are some limitations
in working with forces directly: forces require us to (arbitrarily)
define a coordinate system, and to achieve force balance we might need to
consider all of the forces in the system, including internal
forces like the forces at the joints that hold our robots together.
So it turns out to be better to think not about forces directly, but
to think instead about the work, $W$, done by those forces. Work
is force times distance. It's a scalar quantity, and it is invariant to
the choice of coordinate system. Even for a system that is in
equilibrium, we can reason about the virtual work that would occur
if we were to make an infinitesimal change to the configuration of the
system. Even better, since we know that the internal forces do no work,
we can ignore them completely and only consider the virtual work that
occurs due to a variation in the configuration variables. A multibody
system in equilibrium must have zero virtual work:
$$\delta W = Q^T\delta q = 0, \quad \forall \delta q.$$ Notice that these forces are the
generalized forces (they have the same dimension as $q$). We can map
between a Cartesian force, $f$, applied at a point $p(q)$, and the
generalized force using a kinematic
Jacobian: $$Q = \left(\frac{\partial p}{\partial q}\right)^T f.$$ In particular,
forces due to gravity, or any forces described by a "potential energy"
function, $U(q)$, can be written as $$Q = -\left(\frac{\partial U}{\partial q}\right)^T.$$
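As a quick sanity check of that last statement, here is a short sympy snippet that maps the Cartesian gravity force on the first point mass of the double pendulum through the kinematic Jacobian and compares it against the gradient of the potential energy (same illustrative symbols as the earlier sketch).

import sympy as sp

t = sp.symbols("t")
m1, l1, g = sp.symbols("m1 l1 g", positive=True)
th1 = sp.Function("theta1")(t)
th2 = sp.Function("theta2")(t)
q = sp.Matrix([th1, th2])

p1 = sp.Matrix([l1 * sp.sin(th1), -l1 * sp.cos(th1)])   # Cartesian position
J = p1.jacobian(q)                                      # kinematic Jacobian
f = sp.Matrix([0, -m1 * g])                             # Cartesian gravity force

Q = J.T * f                                             # generalized force
U = m1 * g * p1[1]                                      # potential energy
assert sp.simplify(Q + sp.Matrix([U]).jacobian(q).T) == sp.zeros(2, 1)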
If all of the generalized forces can be derived from such a potential
function, then we can go even further. For a system to be in equilibrium,
we must have that variations in the potential energy function are
zero: $$\delta U = \frac{\partial U}{\partial q}\,\delta q = 0, \quad \forall \delta q.$$
This follows simply from $Q = -\left(\frac{\partial U}{\partial q}\right)^T$ and the principle of virtual work. Moreover, for
the system to be in a stable equilibrium, $q$ must not only be a
stationary point of the potential, but a minimum of the potential.
This is our first important connection in this section between mechanics
and optimization.
D'Alembert's principle and the force of
inertia
What about bodies in motion? d'Alembert's principle states that we can
reduce dynamics problems to statics problems by simply including a "force
of inertia", which is the familiar mass times acceleration for simple
particles. In order to have $\delta W = 0$ for all $\delta q$, we must
include the inertial contribution in the generalized forces, so
for multibody systems we know that the force of inertia must be
$$Q_{inertia} = -\left(M(q)\ddot{q} + C(q,\dot{q})\dot{q}\right).$$ But where did that term come from? Is it
the result of nature "optimizing" some scalar function?
A natural candidate would be the kinetic energy: $$T = \frac{1}{2}\dot{q}^T M(q)\,\dot{q}.$$ What would it mean to take
a variation of the kinetic energy, $\delta T$? So far we've only had
variations with respect to $q$, but the kinetic energy depends
on both $q$ and $\dot{q}$. We cannot vary these variables
independently, but how are they related?
The solution is to take a variation not at a single point, but along a
trajectory, where the relationship between $q(t)$ and $\dot{q}(t)$ is well
defined. If we consider $$A = \int_{t_0}^{t_1} T\left(q(t), \dot{q}(t)\right) dt,$$ then a variation of $A$ (using the calculus of
variations) w.r.t. any trajectory $q(t)$ will take the form
$$\delta A = \int_{t_0}^{t_1}\left[\frac{\partial T}{\partial q}\,\delta q + \frac{\partial T}{\partial \dot{q}}\,\delta\dot{q}\right] dt,$$
where $\delta q(t)$ is defined to be an
arbitrary (small) differentiable function in the space of $q$ that is
zero at $t_0$ and $t_1$.
The derivation takes a few steps, which can be found in most
mechanics texts; here is my version.
We can expand the $\delta\dot{q}$ term in the first integral using
$\delta\dot{q} = \frac{d}{dt}\delta q$.
Substituting this back in and
simplifying leaves $$\delta A = \int_{t_0}^{t_1}\left[\frac{\partial T}{\partial q}\,\delta q + \frac{\partial T}{\partial \dot{q}}\,\frac{d}{dt}\delta q\right] dt.$$ Now use integration by parts on the second
term: $$\int_{t_0}^{t_1}\frac{\partial T}{\partial \dot{q}}\,\frac{d}{dt}\delta q\, dt = \left[\frac{\partial T}{\partial \dot{q}}\,\delta q\right]_{t_0}^{t_1} - \int_{t_0}^{t_1}\left(\frac{d}{dt}\frac{\partial T}{\partial \dot{q}}\right)\delta q\, dt,$$
and observe that the first term on the right-hand side is zero
since $\delta q(t_0) = \delta q(t_1) = 0$. This leaves
$$\delta A = \int_{t_0}^{t_1}\left[\frac{\partial T}{\partial q} - \frac{d}{dt}\frac{\partial T}{\partial \dot{q}}\right]\delta q\, dt.$$ If our goal
is to analyze the stationary points of this variation, then this
quantity must integrate to zero for any $\delta q$, so we must have
$$\frac{\partial T}{\partial q} - \frac{d}{dt}\frac{\partial T}{\partial \dot{q}} = 0.$$
The result is that we find that the natural generalization of
$\frac{\partial U}{\partial q}\,\delta q = 0$ for stationary points of variations of quantities
that also depend on velocity is $\frac{\partial T}{\partial q} - \frac{d}{dt}\frac{\partial T}{\partial \dot{q}} = 0$. This is exactly (the negation of) our
familiar inertial terms, $M(q)\ddot{q} + C(q,\dot{q})\dot{q}$ -- in other
words, the force of inertia.
Principle of Stationary Action
The integral quantity $A$ that we used in the last section is commonly
called the "action" of the system. More generally, the action is defined
as $$A = \int_{t_0}^{t_1} L\left(q(t), \dot{q}(t)\right) dt,$$ where $L$ is
an appropriate "Lagrangian" for the system. The Lagrangian has units of
energy, but it is not uniquely defined. Any function which generates the
correct equations of motion, in agreement with physical laws, can be
taken as a Lagrangian. For our mechanical systems, the familiar form of
the Lagrangian is what we presented above: $L = T - U$ (kinetic energy
minus potential energy). Since the terms involving $U$ do not
depend directly on $\dot{q}$, they only contribute the $-\frac{\partial U}{\partial q}$
terms that we expect, and taking $\delta A = 0$ for this Lagrangian is a
way to generate the entire equations of motion in one go. For our
purposes, where we care about conservative forces but also control inputs
(as generalized forces $Q$), the principle of virtual work tells us
that we will have $$\delta A = -\int_{t_0}^{t_1} Q^T\delta q\, dt.$$
The viewpoint of variational mechanics is that it is this scalar
quantity that should be the central object of study (as
opposed to coordinate dependent forces), and this has proven to
generalize beautifully. [20] says
The principle of least action -- really the principle of
stationary action -- is the most compact form of the classical laws of
physics. This simple rule (it can be written in a single line)
summarizes everything! Not only the principles of classical mechanics,
but electromagnetism, general relativity, quantum mechanics, everything
known about chemistry -- right down to the ultimate known constituents of
matter, elementary particles.
It's only a stationary point, not a minimum. Can I say that stable
trajectories are minima (as the natural extension of the potential
energy case)? In what sense, precisely, are they stable?
Hamiltonian Mechanics
In some cases, it might be preferable to use the Hamiltonian
formulation of the dynamics, with state variables $q$ and
$p = \left(\frac{\partial L}{\partial \dot{q}}\right)^T$ (instead of $\dot{q}$), where the Hamiltonian
$$H(q,p) = p^T\dot{q} - L(q,\dot{q})$$
results in the dynamics $$\dot{q} = \left(\frac{\partial H}{\partial p}\right)^T, \qquad \dot{p} = -\left(\frac{\partial H}{\partial q}\right)^T + Q.$$ Recall
that for our mechanical systems the Hamiltonian is the total energy, $H = T + U$; these equations of motion are the familiar canonical (Hamilton's) equations.
Suppose you have a block A of width 1, block B of width 1, interacting on a friction-less surface. The state of a single block in our system is , which represents the horizontal position (where the position point is located at the geometric center of the block) and horizontal velocity of the associated block. We have as our initial condition and . In other words, at time step block A is moving towards a stationary block B located at the origin.
Derive the positions, velocities, and contact forces at time steps 1, 2, and 3 for the two block collision scenario described at the start of this problem, under implicit Euler integration. Use time step and block masses . Hint: See Example B.4 in the Appendix setting up the LCP optimization problem for a single cart and wall under semi-implicit integration. In implicit Euler integration, both the position and velocity are integrated with backward finite differences (), whereas in the semi-implicit integration, only the position is integrated with backward finite differences. Hint: To solve your LCP by hand, you should use the fact that time-stepping LCP produces perfectly inelastic collisions. What does the perfectly inelastic nature of collision imply about the signed distance during collision?
Since the block B is stationary, we might expect . Why is this not the case?
What would you expect the velocities to be post-collision for perfectly inelastic collision and under conservation of momentum in continuous-time physics? How do these velocities compare with your time-stepped velocities?
References
John Craig,
"Introduction to Robotics: Mechanics and Control", Addison Wesley
, January, 1989.
Cornelius Lanczos,
"The variational principles of mechanics", University of Toronto Press
, no. 4, 1970.
H. Asada and J.E. Slotine,
"Robot Analysis and Control",
pp. 93-131, 1986.
Roy Featherstone,
"Efficient Factorization of the Joint Space Inertia Matrix for Branched Kinematic Trees",
International Journal of Robotics Research, vol. 24, no. 6, pp. 487–500, 2005.
Jean-Jacques E. Slotine and Weiping Li,
"Applied Nonlinear Control", Prentice Hall
, October, 1990.
Vincent Duindam,
"Port-Based Modeling and Control for Efficient Bipedal Walking Robots",
PhD thesis, University of Twente, March, 2006.
Roy Featherstone,
"Rigid Body Dynamics Algorithms", Springer
, 2007.
Brian Mirtich,
"Impulse-based Dynamic Simulation of Rigid Body Systems",
PhD thesis, University of California at Berkeley, 1996.
Abhinandan Jain,
"Robot and Multibody Dynamics: Analysis and Algorithms", Springer US
, 2011.
Firdaus E Udwadia and Robert E Kalaba,
"A new perspective on constrained motion",
Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, vol. 439, no. 1906, pp. 407--410, 1992.
K. H. Hunt and F. R. E. Crossley,
"Coefficient of restitution interpreted as damping in vibroimpact",
Journal of Applied Mechanics, vol. 42 Series E, no. 2, pp. 440-445, 1975.
Alejandro M Castro and Ante Qu and Naveen Kuppuswamy and Alex Alspach and Michael Sherman,
"A Transition-Aware Method for the Simulation of Compliant Contact with Regularized Friction",
IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1859--1866, 2020.
Katta G Murty and Feng-Tien Yu,
"Linear complementarity, linear and nonlinear programming", Citeseer
, vol. 3, 1988.
D.E. Stewart and J.C. Trinkle,
"AN IMPLICIT TIME-STEPPING SCHEME FOR RIGID BODY DYNAMICS WITH INELASTIC COLLISIONS AND COULOMB FRICTION",
International Journal for Numerical Methods in Engineering, vol. 39, no. 15, pp. 2673--2691, 1996.
M. Anitescu and F.A. Potra,
"Formulating dynamic multi-rigid-body contact problems with friction as solvable linear complementarity problems",
Nonlinear Dynamics, vol. 14, no. 3, pp. 231--247, 1997.
Mihai Anitescu,
"Optimization-based simulation of nonsmooth rigid multibody dynamics",
Mathematical Programming, vol. 105, no. 1, pp. 113--143, jan, 2006.
Emanuel Todorov,
"Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo",
2014 IEEE International Conference on Robotics and Automation (ICRA) , pp. 6054--6061, 2014.
Alejandro Castro and Frank Permenter and Xuchen Han,
"An Unconstrained Convex Formulation of Compliant Contact",
arXiv preprint arXiv:2110.10107, 2021.
Ryan Elandt and Evan Drumwright and Michael Sherman and Andy Ruina,
"A pressure field model for fast, robust approximation of net contact force and moment between nominally rigid objects",
pp. 8238-8245, November 2019.
Leonard Susskind and George Hrabovsky,
"The theoretical minimum: what you need to know to start doing physics", Basic Books
, 2014.