
Implementing Unsupervised Learning of Visual 3D Keypoints.
This was the final group project for the EECS 545: Machine Learning class, taught by Prof. Clayton Scott and Prof. Alfred Hero. It was done in collaboration with Wami Ognubi and Andrea Sipos.
Summary.
The goal of this project was to use the 3D keypoints algorithm from [1] to enable the bipedal robot Cassie to dodge balls thrown at it. We built the simulation environment in PyBullet.
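To give a rough idea of the simulation setup, here is a minimal PyBullet sketch that spawns a randomly colored ball and hurls it at a robot. This is an illustrative reconstruction, not the project's actual code: the r2d2.urdf stand-in ships with pybullet_data (the project loaded a Cassie model instead), and all masses, positions, and velocities are made-up values.

```python
import random

import pybullet as p
import pybullet_data

# Connect headless; use p.GUI instead to visualize.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")

# Stand-in robot bundled with pybullet_data; the project used a Cassie model.
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

def throw_ball(radius=0.1):
    """Spawn a randomly colored ball and throw it toward the robot."""
    color = [random.random(), random.random(), random.random(), 1.0]
    col = p.createCollisionShape(p.GEOM_SPHERE, radius=radius)
    vis = p.createVisualShape(p.GEOM_SPHERE, radius=radius, rgbaColor=color)
    ball = p.createMultiBody(baseMass=0.2,
                             baseCollisionShapeIndex=col,
                             baseVisualShapeIndex=vis,
                             basePosition=[3.0, random.uniform(-0.5, 0.5), 1.0])
    # Aim roughly at the robot's torso (illustrative velocity).
    p.resetBaseVelocity(ball, linearVelocity=[-6.0, 0.0, 2.0])
    return ball

throw_ball()
for _ in range(240):  # one simulated second at the default 240 Hz timestep
    p.stepSimulation()
```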
Excerpt from the abstract of the final report: “In a fast-moving world with environments full of objects, a growing challenge for roboticists is to determine meaningful ways for robots to act based on useful representations of the world. This work uses particle-based representations (or keypoints) to assign semantic meaning to points in 3D space. We apply this method in a simulation environment where randomly colored balls are thrown at a Cassie robot.”
Conclusion from the final report: “In this project, we implemented KeypointNet from Chen et al. [2021] on a new environment and task. We implemented a controller for a bipedal robot and generated a dataset for our problem in simulation. We were able to implement our own training pipeline and separate KeypointNet from PPO for our specific task. We found promising preliminary results, with ideas on how to improve those results in the future, and gained valuable practical experience in defining and implementing a learning solution for a specific problem relevant to our research.”
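The heart of the method in [1] is an encoder that compresses images into a small set of 3D keypoints via a spatial softmax over learned heatmaps, which can then be trained jointly with (or, as we did, separately from) a PPO policy. Below is a minimal PyTorch sketch of such a keypoint encoder. It is a sketch in the spirit of Chen et al. [2021] only: the KeypointEncoder name, layer sizes, and single-view depth head are my illustrative assumptions, not the report's actual implementation (the paper uses multi-view unprojection).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointEncoder(nn.Module):
    """Maps an RGB image to K (x, y, d) keypoints with a spatial softmax.

    Illustrative architecture; not the project's actual network.
    """
    def __init__(self, num_keypoints=8):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One attention heatmap plus one depth map per keypoint.
        self.heads = nn.Conv2d(64, 2 * num_keypoints, 1)

    def forward(self, img):
        b = img.shape[0]
        feat = self.heads(self.backbone(img))
        heat, depth = feat.chunk(2, dim=1)            # (B, K, H, W) each
        h, w = heat.shape[-2:]
        attn = F.softmax(heat.flatten(2), dim=-1).view(b, self.num_keypoints, h, w)
        # Expected pixel coordinates under each attention map, in [-1, 1].
        ys = torch.linspace(-1, 1, h, device=img.device)
        xs = torch.linspace(-1, 1, w, device=img.device)
        x = (attn.sum(dim=2) * xs).sum(-1)            # (B, K)
        y = (attn.sum(dim=3) * ys).sum(-1)            # (B, K)
        d = (attn * depth).sum(dim=(2, 3))            # attention-weighted depth
        return torch.stack([x, y, d], dim=-1)         # (B, K, 3)

keypoints = KeypointEncoder()(torch.randn(2, 3, 64, 64))
print(keypoints.shape)  # torch.Size([2, 8, 3])
```

The keypoint tensor is a compact, low-dimensional state that can be fed to a downstream controller such as PPO in place of raw pixels.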
References.
[1] Boyuan Chen, Pieter Abbeel, and Deepak Pathak. Unsupervised Learning of Visual 3D Keypoints for Control, 2021.