Posts

Reimplementing GCS (Shortest Paths in Graphs of Convex Sets)

In a previous blog post, I discussed my experiences reimplementing Iterative Regional Inflation by Semidefinite programming (IRIS), an algorithm for finding large convex regions of collision-free space (C-Free). In this blog post, I'll discuss my experiences reimplementing shortest paths in Graphs of Convex Sets (GCS), an algorithm for motion planning when the configuration space is described as the union of convex sets. Disclaimer: the intent is not to build a codebase for future use, since the Drake toolbox has mature implementations of both IRIS and GCS. Before reading this post, you should read the previous blog post about reimplementing IRIS, as it provides background and context that I will not repeat here. I'll give a brief overview of the GCS paper, and then discuss my experiences reimplementing it. I have also ma…

Reimplementing IRIS (Computing Large Convex Regions of Obstacle-Free Space Through Semidefinite Programming)

I've just begun studying towards my PhD in computer science at MIT, and am working in the Robot Locomotion Group. This group has developed a series of projects that use formal mathematical optimization for tasks in motion planning. Because I'm interested in this area of research, I've been reading several of their papers; to better understand these papers, I've endeavored to reimplement some of their work. The two specific algorithms I'm interested in are close companions: Iterative Regional Inflation by Semidefinite programming (IRIS) can be used to find large convex regions of collision-free space, and shortest paths in Graphs of Convex Sets (GCS) can be used for motion planning when the configuration space is described as the union of convex sets. I've attempted to reimplement these algorithms for simple, toy problems. Disclaimer: the intent is not to build a codebase for future use, since the Drake toolbox has mature implementations of both IRIS and…

Paper Breakdown: Parallel Transport Unfolding (Max Budninskiy et al.)

Parallel Transport Unfolding: A Connection-based Manifold Learning Approach (Max Budninskiy, Gloria Yin, Leman Feng, Yiying Tong, and Mathieu Desbrun) is a recent manifold learning paper from the Applied Geometry Lab at Caltech. The paper presents the Parallel Transport Unfolding (PTU) algorithm, an extension of the well-known ISOMAP manifold learning algorithm. ISOMAP estimates the pairwise geodesic distances (distances along the manifold) between all of the input points in the high-dimensional ambient space, and produces an embedding in a low-dimensional space that best preserves these distances. Utilizing tools from differential geometry, PTU improves the estimation of these geodesic distances, in turn improving the resulting embedding of the manifold. Additionally, PTU no longer requires that the source data be geodesically convex, which is an important requirement of the ISOMAP algorithm. In this blog post, I'll be summarizing the contents of the…
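The ISOMAP pipeline described above can be sketched in a few lines of numpy. This is a toy illustration, not the implementation from the paper or from scikit-learn: it builds a k-nearest-neighbor graph, computes geodesic distances with Floyd-Warshall (fine for small point counts), and embeds them with classical MDS. The function name and parameters are my own, chosen for illustration.

```python
import numpy as np

def isomap_sketch(X, n_neighbors=6, n_components=2):
    """Toy ISOMAP: kNN graph -> graph geodesics -> classical MDS.

    A simplified sketch for small datasets; not a reference implementation.
    """
    n = X.shape[0]
    # Pairwise Euclidean distances in the ambient space.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Keep only edges to each point's k nearest neighbors.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, idx[i]] = D[i, idx[i]]
        G[idx[i], i] = D[i, idx[i]]  # symmetrize the graph

    # Floyd-Warshall: shortest path lengths approximate geodesic distances.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])

    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Points on a 1-D helix embedded in 3-D; ISOMAP should "unroll" it.
t = np.linspace(0, 3 * np.pi, 60)
X = np.stack([np.cos(t), np.sin(t), t], axis=1)
Y = isomap_sketch(X, n_neighbors=5, n_components=2)
print(Y.shape)
```

Because the helix is a one-dimensional manifold, the first embedding coordinate should vary nearly monotonically with the curve parameter t, which is exactly the "preserve geodesic distances" behavior the paper improves upon.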

Manifold Learning Primer

Almost all of my research experience up to the present has been related to manifold learning, as are many of the papers I'm looking forward to breaking down on this blog. So this post is a quick, self-contained primer on manifold learning, including the theory, the practice, and why you would care about it in the robotics world. An example of the manifold learning process, on the classic "Swiss Roll" dataset, using ISOMAP. The Curse of Dimensionality Broadly speaking, there is no precise definition of the "Curse of Dimensionality". It's a catchall term used by machine learning practitioners for the variety of issues that can be caused by high-dimensional data. In this section, I'll cover several of these issues, and discuss why high dimensionality specifically causes them. One of the clearest examples of the curse of dimensionality is the intractability of many sampling-based probabilistic algorithms in high dimensions…

ICRA 2022 Recap / First Blog Post

Last week, I attended the 2022 International Conference on Robotics and Automation (ICRA). This was my first in-person conference, and it was a wonderful opportunity to meet new people in the robotics world and catch up with Laboratory for Progress alumni. Of course, the main reason I attended ICRA was to present the accepted paper Topologically-Informed Atlas Learning. I had the opportunity to give a 4-minute technical talk in the Representation Learning session and discuss this work further at the subsequent poster session. While at the conference I attended a variety of interesting technical talks. From our lab, Anthony Opipari gave a workshop talk on his work Differentiable Nonparametric Belief Propagation, and Emily Sheetz gave a technical talk on her publication Composable Causality in Semantic Robot Programming (to appear in the conference proceedings). My advisor Chad Jenkins also gave an invited workshop talk on how best to combine probabilistic reasoning with data-dri…