Underactuated Robotics

Algorithms for Walking, Running, Swimming, Flying, and Manipulation

Russ Tedrake

© Russ Tedrake, 2024
How to cite these notes, use annotations, and give feedback.

Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Spring 2024 semester. Lecture videos are available on YouTube.


Imitation Learning

Imitation learning, also known as "learning from demonstrations" (LfD), is the problem of learning a policy from a collection of demonstrations. For state-based feedback, these demonstrations take the form of a set of state-action sequences, $\left[ \bx[\cdot], \bu[\cdot]\right]$. For the richer class of output feedback, this takes the form of observation-action sequences, $\left[ \by[\cdot], \bu[\cdot]\right]$. Note that we do not require any explicit definition of a cost or reward function; in most cases we assume that the demonstrations are obtained from an optimal or near-optimal policy.

Broadly speaking, most approaches to imitation learning can be categorized as either behavioral cloning (BC) or inverse reinforcement learning (IRL). Behavior cloning attempts to learn a policy directly from the data using supervised learning. Inverse RL (aka inverse optimal control) attempts to learn a cost function from the data, and then uses potentially more traditional optimal control approaches to synthesize a policy for this cost function, in the hopes of generalizing significantly beyond the demonstration data. See Argall09+Bagnell15+Osa18 for some fairly recent surveys, though admittedly they appeared a bit before the latest boom.

I think it's fair to say that today, in 2024, behavior cloning is once again taking the robotics world by storm (especially in manipulation research), and it now seems like the shortest path to building "robot foundation models". We'll devote most of this chapter to it.

Behavior cloning

Behavior cloning Pomerleau88+Bain95 attempts to learn a policy directly from the policy's input-output data using supervised learning. In the case of full-state feedback, where we know that the optimal policy can be represented as a simple function of the state, this is typically cast as a simple regression problem, e.g. $\min_\theta \sum_{i} \|\bu_i - \pi_\theta(\bx_i)\|^2.$ In the richer case of output feedback, where we have learned to think about a policy as a dynamical system, this becomes a sequence learning problem (given a sequence of observations, predict a sequence of actions).
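
A minimal sketch of this regression formulation is below, assuming the demonstrations have already been collected into arrays of states and actions; the array names, network size, and optimizer settings are illustrative placeholders, not a prescription.

import numpy as np
import torch
import torch.nn as nn

# Placeholder demonstration data: 1000 state-action pairs with x in R^4, u in R^2.
X = np.random.randn(1000, 4).astype(np.float32)
U = np.random.randn(1000, 2).astype(np.float32)

# A small MLP policy, pi_theta(x).
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

x, u = torch.from_numpy(X), torch.from_numpy(U)
for epoch in range(200):
    opt.zero_grad()
    # min_theta sum_i ||u_i - pi_theta(x_i)||^2 (here averaged over the batch).
    loss = ((policy(x) - u) ** 2).sum(dim=1).mean()
    loss.backward()
    opt.step()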

Famously, Large Language Models are trained with behavior cloning (and then fine-tuned to make them more aligned with human preferences Ziegler19). OpenAI's GPT models are autoregressive models that predict the next token given a "context" of recent tokens Radford18+Radford19+Brown20+Bubeck23. More recently, we've seen the extension to multi-modal models, such as visually-conditioned language models (like GPT-4V and LLaVa Liu23) with an increasing wealth of open-source reproductions Karamcheti24+Laurençon24.

Is predicting actions fundamentally different from next-token prediction in language? There are a few reasons why it might be. Actions are continuous and high-dimensional, whereas language tokens are discrete. Our control systems get put into the feedback loop with physics, and have to deal with stochasticity from the environment that LLMs don't experience. However, I would say that in 2024 there were two main lines of work which, I believe, made people realize that predicting future actions may not be so fundamentally different from predicting language, at least for dexterous manipulation. Those were the Diffusion Policy Chi24 and the Action Chunking Transformer (ACT) from the ALOHA project Zhao23. Of course, these were built on a long line of progress which started anew in the age of deep learning (e.g. Levine16+Florence20+Florence22).

Diffusion Policy started as an intern project(!) by Cheng Chi at TRI, and blossomed into a great collaboration with Shuran Song.

Diffusion Policy and ALOHA seem to have been the watershed results for dexterous manipulation in robotics. Since then, the internet has become rich with videos of highly dexterous manipulation from all sorts of robots, up to and including humanoid robots with dexterous hands.

In one sense, it might seem a little disappointing that, after we've spent so much time in these notes exploring the rich mathematical foundations of dynamics and control, behavior cloning from human teleop demonstrations can outperform some of our best methods, at least for some class of problems (which are arguably more about understanding the world than about dynamics and control). But think of it this way: using supervised learning is an awfully clever way to explore the space of policy parameterizations, and it has accelerated us into bigger questions about using cameras in the feedback loop, learning multitask/foundation models, and whether or not to leverage structure (such as 3D geometry / objectness) in our representations. I must say that the success of LLMs (and now multimodal models) is undeniable, and it would be a mistake to ignore the great new possibilities that have opened up. I'm very confident that our knowledge of dynamics and control will help us penetrate the new vistas enabled by these high-capacity models.

Let me make one important point. Sometimes people say that BC is fundamentally limited because it can never outperform the human demonstrator. But that is not strictly true. Cite the classic result; I forget which classic paper it came from. Chowdery23 figure 6 is relevant, but they seem to hedge a bit because perhaps the metric isn't great. Certainly ChatGPT can produce text output that by some metrics surpasses the capabilities of any one human.

There is also a precedent for combining BC with other methods to improve beyond the original demonstrations. For DeepMind's AlphaGo Silver16, behavior cloning was only the first step -- it was enough to get us to a strong Go player, but not enough to be championship level. Adding "self-play" and improvement via Monte-Carlo tree search was fundamental to "mastering the game of Go".

Visuomotor policies (aka control from pixels)

In 2016, a few years after the start of the deep learning revolution, Levine16 introduced the concept of a "visuomotor policy", and produced fairly stunning videos of robots performing complicated tasks. The name "visuomotor" is chosen to emphasize that the policy is trained to predict actions directly from RGB camera images, using architectures similar to the one pictured below; in a sense this was a return to the early ideas of Pomerleau88, but now powered by deep learning architectures and pretraining.

The (now) classical visuomotor policy architecture, adapted from Levine16+Florence20. The emphasis here is that $z$ is a learned state representation which encodes the state of the environment.

My own journey with imitation learning started with a project led by Pete Florence and Lucas Manuelli on using a particular form of self-supervised learning for 3D geometries to train the policies Florence20. For me, this was the first time I got to experience directly how the model was trained, and to see the incredibly impressive robustness that was possible in the roll-outs, even with a relatively modest amount of robot training data.

Writing feedback controllers which operate directly on the RGB camera images was something entirely new for me. We had been using RGB-D cameras to do some amount of visual state estimation / pose estimation before that, but were leaning pretty heavily on the depth channel and very explicit 3D reasoning/matching. But I remember, around the time of our first imitation learning project, asking the students, "if you could only choose one, RGB or depth, which would you choose?" They chose RGB. One could easily underestimate just how valuable the RGB channel is: there are many situations where a task is unclear or even ambiguous when looking only at a depth image but becomes clear in RGB, and there are many depth cues in RGB that allow us and our visuomotor policies to be successful at 3D tasks even without an explicit depth sensor.

In my experience, control theorists have very satisfying answers to almost any dynamics and control problem. But (with a few rare exceptions) they didn't do computer vision. This was something entirely new. The sensor model -- the mapping from, e.g., the state and parameters of a MultibodyPlant to the output image $\by$ -- is potentially a full game-engine-quality renderer. Even though there are lots of projects now on making differentiable renderers, these can only do so much because the pixelation process is inherently very local/non-smooth. Going from the image back into a manageable intermediate (latent) state representation, $z$, simply did not work nearly as well until we got deep networks for perception. Data/learning does feel fundamental here -- mapping from RGB into a meaningful representation for control is more about the statistics of natural scenes than about the model-based physics of propagating light.

Supervised learning in a feedback loop: dealing with distribution shift

One of the famous challenges in imitation learning is the problem of distribution shift. Imagine that you are training a policy for driving a car...

DAgger Ross11
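
As a sketch of the standard DAgger loop from Ross11 (the helper functions expert, rollout, and fit are hypothetical stand-ins for an expert oracle, an environment roll-out, and the supervised regression step sketched earlier):

def dagger(expert, rollout, fit, n_iters=10):
    """Sketch of DAgger (Ross11): iteratively roll out the learned policy,
    relabel the visited states with the expert's actions, and retrain."""
    dataset = []          # aggregated (state, expert action) pairs
    policy = expert       # iteration 0: collect data by rolling out the expert itself
    for _ in range(n_iters):
        states = rollout(policy)                        # states visited under the *current* policy
        dataset += [(x, expert(x)) for x in states]     # query the expert on those states
        policy = fit(dataset)                           # supervised learning on the aggregated data
    return policy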

Teacher/Student.

Dealing with suboptimal and multimodal demonstrations

Discrete maze example w/ symmetries. Push-T example.

Architectures for visuomotor policies

(Multi-modal) input encoders

ImageNet, Dense Object Nets, CLIP-pretrained ViT, VLMs, Spatial VLMs, ...

Output/action decoders

RT-2. Diffusion policies Chi24. Action-chunking transformers Zhao23.

Diffusion Policy

One particularly successful form of behavior cloning for visuomotor policies with continuous action spaces is the Diffusion Policy Chi23. The dexterous manipulation team at TRI had been working on behavior cloning for some time, but the Diffusion Policy architecture (which started as a summer internship project!) has allowed us to very reliably train policies for incredibly dexterous tasks and really start to scale up our ambitions for manipulation.

Denoising Diffusion models

"Denoising Diffusion" models are an approach to generative AI, made famous by their ability to generate high-resolution photorealistic images. Inspired by the "manifold hypothesis" (e.g. the idea that realistic images live on a low-dimensional manifold in pixel space), the intuition behind denoising diffusion is that we train a model by adding noise to samples drawn from the data distribution, then learn to predict the noise from the noisy images, in order to "denoise" random images back on to the manifold. While image generation made these models famous, they have proven to be highly capable in generating samples from a wide variety of high-dimension continuous distributions, even distributions that are conditioned on high-dimensional inputs.

Let's consider samples $\bu \in \Re^m$ drawn from a training dataset $\mathcal{D}.$ Diffusion models are trained to estimate a noise vector ${\bf \epsilon} \in \Re^m$ to minimize the loss function $$\ell(\theta) = \mathbb{E}_{\bu, {\bf \epsilon}, \sigma} || {\bf f}_\theta(\bu + \sigma {\bf \epsilon}, \sigma ) - {\bf \epsilon} ||^2,$$ where $\theta$ is the parameter vector, and $f_\theta$ is typically some high-capacity neural network. In practice, training is done by randomly sampling $\bu$ from $\mathcal{D}$, ${\bf \epsilon}$ from $\mathcal{N}({\bf 0}_m, {\bf I}_{m \times m})$, and $\sigma$ from a uniform distribution over a positive set of numbers denoted as $\{\sigma_k\}_{k=0}^K,$ where we have $\sigma_k > \sigma_{k-1}.$
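
The training procedure described above fits in a few lines of code. In the sketch below, f_theta is assumed to be any network taking the noisy sample and the noise level, and sigmas is a 1-D tensor holding the schedule $\{\sigma_k\}$; both names are hypothetical.

import torch

def diffusion_training_loss(f_theta, u_batch, sigmas):
    # One stochastic estimate of l(theta) = E || f_theta(u + sigma*eps, sigma) - eps ||^2.
    eps = torch.randn_like(u_batch)                       # eps ~ N(0, I)
    k = torch.randint(len(sigmas), (u_batch.shape[0],))   # sample a noise level uniformly
    sigma = sigmas[k].unsqueeze(-1)                       # sigma_k, one per sample in the batch
    pred = f_theta(u_batch + sigma * eps, sigma)          # predict the noise from the noisy sample
    return ((pred - eps) ** 2).sum(dim=-1).mean()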

To sample a new output from the model, the denoising diffusion implicit models (DDIM) sampler Song20 takes multiple steps: $$\bu_{k-1} = \bu_k + (\sigma_{k-1} - \sigma_k)f_\theta(\bu_k, \sigma_k).$$ This specific parameterization of the update (and my preferred notation more generally) comes from Permenter23. I've presented the deterministic denoiser here, but some analysis suggests that Langevin sampling can improve performance if one takes many denoising steps Xu23.
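
The corresponding DDIM sampling loop is equally short; again, f_theta and sigmas are the hypothetical names from the training sketch, with sigmas increasing in k and $\sigma_0$ close to zero.

def ddim_sample(f_theta, sigmas, shape):
    # sigmas: 1-D tensor with sigma_0 < sigma_1 < ... < sigma_K (sigma_0 ~ 0).
    u = sigmas[-1] * torch.randn(shape)        # initialize from noise at the largest noise level
    for k in range(len(sigmas) - 1, 0, -1):
        # u_{k-1} = u_k + (sigma_{k-1} - sigma_k) f_theta(u_k, sigma_k)
        u = u + (sigmas[k - 1] - sigmas[k]) * f_theta(u, sigmas[k])
    return u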

Diffusion models have a slightly convoluted history. The term "diffusion" came from a paper Sohl-Dickstein15 which used an analogy from thermodynamics: a prescribed diffusion process slowly transforms data into random noise, and one learns to reverse this procedure by training an inverse diffusion. Well before that, a series of work starting with Hyvarinen05 studied the problem of learning the score function (the gradient of the log probability of a distribution) of a data distribution, and Vincent11 made a connection to denoising autoencoders. Song19 put all of this together beautifully and combined it with deep learning to propose denoising diffusion as a generative modeling technique; they learned a single network that was conditioned on the noise level. This was followed quickly by Ho20, which introduced denoising diffusion probabilistic models (DDPM) using an even simpler update and showed results competitive with other leading generative modeling techniques, leading to Song20 giving us the DDIM update above. Permenter23 gives a deterministic interpretation as learning the distance function from the data manifold, and sampling as performing approximate gradient descent on this function.

Examples! Something like Figure 1 from Sohl-Dickstein15 would be good.

It is straightforward to condition the generative model on an exogenous input, by simply adding an additional signal, $\by$, to the denoiser: $f_\theta(\bu, \sigma, \by).$

Diffusion Policy

Behavior cloning is perhaps the simplest form of imitation learning -- it simply attempts to learn a policy using supervised learning to match expert demonstrations. While it is tempting to learn deterministic output-feedback policies (maps from a history of observations to actions), one quickly finds that human demonstrations are typically not unique. Perhaps this is not surprising, as we know that optimal feedback policies in general are not unique! To address this non-uniqueness / multi-modality in the human demonstrations, it's well understood that behavior cloning benefits from learning a conditional distribution over actions.

Diffusion Policy is the natural application of (conditional) denoising diffusion models to learning these policies. It was inspired, in particular, by the modeling choices in Diffuser Janner22. Rather than generating a single action, the denoiser in Diffusion Policy outputs a sequence of actions with horizon $H_u$; like Zhao23, we found experimentally that this leads to more stable roll-outs. Block23b provides some possible theoretical justification for this choice. We condition the input on a history of observations of length $H_y.$
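
To make the shapes concrete, here is a sketch of the interface (only) of such a conditional denoiser; the MLP internals are a placeholder for whatever architecture is actually used, and all of the dimensions and class/argument names are illustrative.

import torch
import torch.nn as nn

class DiffusionPolicyDenoiser(nn.Module):
    # f_theta(noisy action sequence, sigma, observation history) -> predicted noise
    # (same shape as the action sequence).
    def __init__(self, action_dim=2, obs_dim=10, H_u=16, H_y=2, hidden=256):
        super().__init__()
        self.H_u, self.action_dim = H_u, action_dim
        in_dim = H_u * action_dim + 1 + H_y * obs_dim   # actions + sigma + observation history
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, H_u * action_dim))

    def forward(self, noisy_actions, sigma, obs_history):
        # noisy_actions: (B, H_u, action_dim), sigma: (B, 1), obs_history: (B, H_y, obs_dim)
        B = noisy_actions.shape[0]
        z = torch.cat([noisy_actions.reshape(B, -1), sigma,
                       obs_history.reshape(B, -1)], dim=-1)
        return self.net(z).reshape(B, self.H_u, self.action_dim)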

Diffusion Policy for LQG

Let me be clear, it almost certainly does not make sense to use a diffusion policy to implement LQG control. But because we understand LQG so well at this point, it can be helpful to understand what the Diffusion Policy looks like in this extremely simplified case.

Consider the case where we have the standard linear-Gaussian dynamical system: \begin{gather*} \bx[n+1] = \bA\bx[n] + \bB\bu[n] + \bw[n], \\ \by[n] = \bC\bx[n] + \bD\bu[n] + \bv[n], \\ \bw[n] \sim \mathcal{N}({\bf 0}, {\bf \Sigma}_w), \quad \bv[n] \sim \mathcal{N}({\bf 0}, {\bf \Sigma}_v). \end{gather*} Imagine that we create a dataset by rolling out trajectory demonstrations using the optimal LQG policy. The question is: what (exactly) does the diffusion policy learn?

Let's start with the $\mathcal{H}_2$ problem (e.g. LQR with Gaussian noise), where the observation and prediction horizons are limited to a single step, $H_y = H_u = 1$, and the denoiser is conditioned directly on state observations. We will generate roll-outs using the optimal policy, $\bu = - \bK\bx$, given a Gaussian distribution of initial conditions and Gaussian process noise. In this case, the training loss function reduces to $$\ell(\theta) = \mathbb{E}_{\bx, {\bf \epsilon}, \sigma} || {\bf f}_\theta(-\bK\bx + \sigma {\bf \epsilon}, \sigma, \bx) - {\bf \epsilon} ||^2,$$ where the expectation in $\bx$ is over the stationary distribution of the optimal policy. Here we don't need a neural network; we can take $f_\theta$ to be a simple function. In particular, the optimal denoiser is given by $${\bf f}_\theta(\bu, \sigma, \bx) = \frac{1}{\sigma}\left[\bu + \bK\bx\right].$$ At evaluation time, the sampling iterations, $$\bu_{k-1} = \bu_k + \frac{\sigma_{k-1} - \sigma_k}{\sigma_k}\left[\bu_k + \bK\bx\right],$$ will converge on $\bu_0 = -\bK\bx.$ (Clearly $\bu_k = -\bK\bx$ is a fixed point of the iteration, and the $\frac{\sigma_{k-1} - \sigma_k}{\sigma_k}$ term is like the step-size of gradient descent.)
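
A quick numerical check of this claim is below (a sketch only; the gain, the query state, and the noise schedule are arbitrary illustrative choices): iterating the DDIM update with the optimal denoiser drives the sample to $-\bK\bx$.

import numpy as np

K = np.array([[1.0, 0.5]])             # stand-in for the LQR gain (1 input, 2 states)
x = np.array([0.3, -1.2])              # a query state
sigmas = np.geomspace(1.0, 1e-3, 30)   # decreasing noise schedule sigma_K, ..., sigma_0

u = sigmas[0] * np.random.randn(1)     # start from noise at the largest noise level
for k in range(len(sigmas) - 1):
    f = (u + K @ x) / sigmas[k]        # the optimal denoiser, f_theta(u, sigma, x) = (u + K x)/sigma
    u = u + (sigmas[k + 1] - sigmas[k]) * f
print(u, -K @ x)                       # the iterate has (approximately) converged to -K x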

Returning to LQG, the diffusion policy architecture (with $H_u=1$) will be learning a denoiser conditioned on a finite history of actions and observations, \begin{gather*}f_\theta(\bu[n], \sigma, \bar{\by}_{H_y}, \bar{\bu}_{H_y}), \\ \bar{\by}_{H_y} = \left[\by[n-1],... ,\by[n-H_y]\right], \\ \bar{\bu}_{H_y} = \left[\bu[n-1],... ,\bu[n-H_y]\right].\end{gather*} We know that for LQG, the optimal actor that we will use for generating training data takes the form of a Kalman filter followed by LQR feedback on the estimated state. We can "unroll" the (truncated) Kalman filter into a linear function of the history of actions and observations; for observable and stabilizable linear systems we know that this truncation error will converge to zero as we increase $H_y$. Let's call this unrolled policy $\bu[n] = \hat{\pi}_{LQG}(\bar{\by}_{H_y}, \bar{\bu}_{H_y}).$ With some care, it can be shown that the optimal denoiser is given by $${\bf f}_\theta(\bu, \sigma, \bar{\by}_{H_y}, \bar{\bu}_{H_y}) = \frac{1}{\sigma}\left[\bu + \hat{\pi}_{LQG}(\bar{\by}_{H_y}, \bar{\bu}_{H_y})\right],$$ which will converge onto the truncated Kalman filter.

Notebook example

Predicting actions multiple steps into the future is a fundamentally important aspect of the Diffusion Policy architecture Block23b. For LQG, ...

Reduced-order LQG

Inverse reinforcement learning

Some good notes here: https://web.stanford.edu/class/cs237b/pdfs/lecture/lecture_10111213.pdf

Vistas

Multitask / foundation models for control

If control directly from pixels was the first capability unlocked by imitation learning, I would say that large-scale multitask decision making is the second. Multitask learning has a long history in the machine learning community Zhang21, and multitask learning for control first became popular as "multitask RL" citation?. But the rise of foundation models Bommasani21, first in language and now in multimodal models, has completely changed our capabilities and our expectations. Suddenly, one can imagine a single policy, represented in a high-capacity neural network model, whose inputs are the robot sensors (including RGB cameras) plus a natural language command ("Robot, please make me a pizza"), and for the first time ever it seems like something like this could plausibly work.

In my view, this has potentially profound implications for how we think about control. Our basic control definitions start with, e.g., a state $\bx$, inputs $\bu$, and outputs $\by.$ The discussion on output feedback got us thinking a little about state representations for control -- for instance, a belief state is a sufficient (but not necessary) state because it is a sufficient statistic of the history of actions and observations. But the multitask setting in imitation learning changes things. In the simple case, we'll say that our inputs $\bu$ and outputs $\by$ are the same across all of the tasks. But it may well be that the underlying state space is not. (I admit that philosophically there is a state of the entire universe which is the same across tasks, but I mean the more tractable representations of state that we've been using throughout the notes.) What does it mean to learn a state representation for control across tasks where even the dimension/cardinality of the state space can be different? Even our catch-all definition of belief state breaks down in this case.

Are there tractable ways to describe distributions over tasks that are amenable to our strongest theoretical tools, but still relevant for the complexity and diversity of the real world? When we talked about stochastic optimal control, we gave examples where taking an average over many possible rollouts can actually simplify the loss landscape, avoiding some local minima and making optimization easier. Can multitask control formulations have a similar effect?

Going further, how exactly is it that solving/learning one task can potentially help us in solving/learning another? This brings up basic questions about designing a curriculum for our control systems. Is it possible for us to soar to higher and higher heights if we sequence our control problem instances correctly?

Distributed decentralized learning (aka "fleet learning")

When I start using phrases like "learning" and "curriculum", it becomes very natural to think in terms of our natural intelligence. How did we learn to walk? To play tennis? But let's remember that these analogies only go so far. For me, the GPT series of models are clearly unlike any single natural intelligence; they are more like a collective intelligence of the entire species (though still certainly deficient in some metrics). In the age of foundation models, it may not be the case that every robot needs to learn to use a toaster; the dream of "fleet learning" is that one robot will learn how to use a toaster and then they will all have learned.

This brings up fundamental questions about the learning algorithms, about data efficiency (and privacy). But it also challenges our theories of dynamics and control. For instance, there are open questions about how to balance being a generalist and using only shared data vs being a specialist. Certainly if a particular robot is solving problems in a particular warehouse, then while the statistics of tasks across the world may help form robust representations, this robot can almost certainly perform better if it narrows and specializes the policies (and world models) to exploit the distribution of tasks in the warehouse.

A particular version of this question appears in the context of "cross-embodiment" data and models. Right now, robot data with action labels is scarce (compared with online data for text and images). This, in part, has motivated the use of datasets which combine data from many robots/platforms Shah23+Padalkar23, and architectures that can share representations and transfer learning even when the number/types of sensors and actuators changes across the fleet.

Be rigorous

The fact that many of these fundamental questions are now being asked makes this a simply amazing time to be a roboticist. However, the pace of new innovations is so fast that researchers often feel pressure to race to publication before having done proper rigorous theoretical or empirical work. We are building tall towers on somewhat shaky foundations. I firmly believe that the tenets of dynamics and control (amongst other rigorous technical tools) have a lot to contribute to understanding and continuing to push the field forward, and that some of the maturity with which we can understand these simpler (but not very simple!) problems can serve as a model for what we should expect of our understanding of the even more complex ones.

References

  1. Brenna D Argall and Sonia Chernova and Manuela Veloso and Brett Browning, "A survey of robot learning from demonstration", Robotics and autonomous systems, vol. 57, no. 5, pp. 469--483, 2009.

  2. J.A. Bagnell, "An Invitation to Imitation", Tech. Report, CMU-RI-TR-15-08, March, 2015.

  3. Takayuki Osa and Joni Pajarinen and Gerhard Neumann and J Andrew Bagnell and Pieter Abbeel and Jan Peters and others, "An algorithmic perspective on imitation learning", Foundations and Trends in Robotics, vol. 7, no. 1-2, pp. 1--179, 2018.

  4. Dean A Pomerleau, "Alvinn: An autonomous land vehicle in a neural network", Advances in neural information processing systems, vol. 1, 1988.

  5. Michael Bain and Claude Sammut, "A Framework for Behavioural Cloning.", Machine Intelligence 15 , pp. 103--129, 1995.

  6. Daniel M Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B Brown and Alec Radford and Dario Amodei and Paul Christiano and Geoffrey Irving, "Fine-tuning language models from human preferences", arXiv preprint arXiv:1909.08593, 2019.

  7. Alec Radford and Karthik Narasimhan and Tim Salimans and Ilya Sutskever and others, "Improving language understanding by generative pre-training", 2018.

  8. Alec Radford and Jeffrey Wu and Rewon Child and David Luan and Dario Amodei and Ilya Sutskever and others, "Language models are unsupervised multitask learners", OpenAI blog, vol. 1, no. 8, pp. 9, 2019.

  9. Tom Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared D Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and others, "Language models are few-shot learners", Advances in neural information processing systems, vol. 33, pp. 1877--1901, 2020.

  10. Sébastien Bubeck and Varun Chandrasekaran and Ronen Eldan and Johannes Gehrke and Eric Horvitz and Ece Kamar and Peter Lee and Yin Tat Lee and Yuanzhi Li and Scott Lundberg and others, "Sparks of artificial general intelligence: Early experiments with gpt-4", arXiv preprint arXiv:2303.12712, 2023.

  11. Haotian Liu and Chunyuan Li and Qingyang Wu and Yong Jae Lee, "Visual Instruction Tuning", 2023.

  12. Siddharth Karamcheti and Suraj Nair and Ashwin Balakrishna and Percy Liang and Thomas Kollar and Dorsa Sadigh, "Prismatic vlms: Investigating the design space of visually-conditioned language models", arXiv preprint arXiv:2402.07865, 2024.

  13. Hugo Laurençon and Léo Tronchon and Matthieu Cord and Victor Sanh, "What matters when building vision-language models?", 2024.

  14. Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song, "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion", 2024.

  15. Tony Z Zhao and Vikash Kumar and Sergey Levine and Chelsea Finn, "Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware", arXiv preprint arXiv:2304.13705, 2023.

  16. Sergey Levine and Chelsea Finn and Trevor Darrell and Pieter Abbeel, "End-to-end training of deep visuomotor policies", The Journal of Machine Learning Research, vol. 17, no. 1, pp. 1334--1373, 2016.

  17. Peter Florence and Lucas Manuelli and Russ Tedrake, "Self-Supervised Correspondence in Visuomotor Policy Learning", IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 492-499, April, 2020. [ link ]

  18. Pete Florence and Corey Lynch and Andy Zeng and Oscar A Ramirez and Ayzaan Wahid and Laura Downs and Adrian Wong and Johnny Lee and Igor Mordatch and Jonathan Tompson, "Implicit behavioral cloning", Conference on Robot Learning , pp. 158--168, 2022.

  19. David Silver and Aja Huang and Chris J Maddison and Arthur Guez and Laurent Sifre and George Van Den Driessche and Julian Schrittwieser and Ioannis Antonoglou and Veda Panneershelvam and Marc Lanctot and others, "Mastering the game of Go with deep neural networks and tree search", nature, vol. 529, no. 7587, pp. 484--489, 2016.

  20. Stéphane Ross and Geoffrey Gordon and Drew Bagnell, "A reduction of imitation learning and structured prediction to no-regret online learning", Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627--635, 2011.

  21. Cheng Chi and Siyuan Feng and Yilun Du and Zhenjia Xu and Eric Cousineau and Benjamin Burchfiel and Shuran Song, "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion", Proceedings of Robotics: Science and Systems , 2023.

  22. Jiaming Song and Chenlin Meng and Stefano Ermon, "Denoising Diffusion Implicit Models", International Conference on Learning Representations , 2020.

  23. Frank Permenter and Chenyang Yuan, "Interpreting and Improving Diffusion Models Using the Euclidean Distance Function", arXiv preprint arXiv:2306.04848, 2023.

  24. Yilun Xu and Mingyang Deng and Xiang Cheng and Yonglong Tian and Ziming Liu and Tommi Jaakkola, "Restart Sampling for Improving Generative Processes", Advances in Neural Information Processing Systems , vol. 36, pp. 76806--76838, 2023.

  25. Jascha Sohl-Dickstein and Eric Weiss and Niru Maheswaranathan and Surya Ganguli, "Deep unsupervised learning using nonequilibrium thermodynamics", International conference on machine learning , pp. 2256--2265, 2015.

  26. Aapo Hyvarinen, "Estimation of Non-Normalized Statistical Models by Score Matching", Journal of Machine Learning Research, vol. 6, pp. 695–708, 2005.

  27. Pascal Vincent, "A connection between score matching and denoising autoencoders", Neural computation, vol. 23, no. 7, pp. 1661--1674, 2011.

  28. Yang Song and Stefano Ermon, "Generative Modeling by Estimating Gradients of the Data Distribution", Advances in Neural Information Processing Systems , vol. 32, 2019.

  29. Jonathan Ho and Ajay Jain and Pieter Abbeel, "Denoising diffusion probabilistic models", Advances in neural information processing systems, vol. 33, pp. 6840--6851, 2020.

  30. Michael Janner and Yilun Du and Joshua Tenenbaum and Sergey Levine, "Planning with Diffusion for Flexible Behavior Synthesis", International Conference on Machine Learning , pp. 9902--9915, 2022.

  31. Adam Block and Ali Jadbabaie and Daniel Pfrommer and Max Simchowitz and Russ Tedrake, "Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior", Thirty-seventh Conference on Neural Information Processing Systems , 2023.

  32. Yu Zhang and Qiang Yang, "A survey on multi-task learning", IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 12, pp. 5586--5609, 2021.

  33. Rishi Bommasani and Drew A Hudson and Ehsan Adeli and Russ Altman and Simran Arora and Sydney von Arx and Michael S Bernstein and Jeannette Bohg and Antoine Bosselut and Emma Brunskill and others, "On the opportunities and risks of foundation models", arXiv preprint arXiv:2108.07258, 2021.

  34. Dhruv Shah and Ajay Sridhar and Nitish Dashora and Kyle Stachowicz and Kevin Black and Noriaki Hirose and Sergey Levine, "ViNT: A Foundation Model for Visual Navigation", Conference on Robot Learning , pp. 711--733, 2023.

  35. Abhishek Padalkar and Acorn Pooley and Ajinkya Jain and Alex Bewley and Alex Herzog and Alex Irpan and Alexander Khazatsky and Anant Rai and Anikait Singh and Anthony Brohan and others, "Open x-embodiment: Robotic learning datasets and rt-x models", arXiv preprint arXiv:2310.08864, 2023.
