Creative Machines Lab Projects
Columbia University
Eligibility
All Students
Accepts Applications Until
Dec 20, 2025
Project Duration
Flexible
Description
A host of ML projects:
Smart Buildings:
Apply reinforcement learning (RL) to optimize the HVAC systems of commercial buildings, in partnership with Google.
See: https://arxiv.org/pdf/2410.03756
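As a rough illustration of an RL formulation for HVAC control, here is a minimal sketch using the Gymnasium API; the single-zone thermal model, the HVACEnv name, and the reward weights are toy placeholders, not the lab's or Google's actual setup.

# Minimal sketch of an RL environment for HVAC control (illustrative toy model).
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class HVACEnv(gym.Env):
    """Single zone; action is HVAC power in [-1, 1] (cooling vs. heating)."""
    def __init__(self, setpoint=22.0, outdoor=30.0):
        super().__init__()
        self.setpoint, self.outdoor = setpoint, outdoor
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.temp = float(self.np_random.uniform(18.0, 28.0))
        return np.array([self.temp, self.outdoor], dtype=np.float32), {}

    def step(self, action):
        power = float(np.clip(action[0], -1.0, 1.0))
        # Toy dynamics: heat exchange with outdoors plus HVAC input.
        self.temp += 0.1 * (self.outdoor - self.temp) + 2.0 * power
        reward = -(abs(power) + 0.1 * (self.temp - self.setpoint) ** 2)  # energy + discomfort
        return np.array([self.temp, self.outdoor], dtype=np.float32), reward, False, False, {}

env = HVACEnv()
obs, _ = env.reset(seed=0)
obs, reward, *_ = env.step(env.action_space.sample())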
Meta Learning:
Develop algorithms that “learn how to learn”, where various parts of the learning process are themselves learned through bi-level optimization.
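To make the bi-level structure concrete, here is a minimal MAML-style sketch in PyTorch; the toy regression tasks, hyperparameters, and the adapted_params helper are illustrative assumptions, not the project's method.

# Minimal MAML-style bi-level optimization sketch (toy linear regression tasks).
import torch

def adapted_params(params, x_s, y_s, inner_lr=0.01):
    """Inner loop: one gradient step on the support set."""
    loss = torch.nn.functional.mse_loss(x_s @ params["w"] + params["b"], y_s)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

# Meta-parameters shared across tasks.
params = {"w": torch.randn(1, 1, requires_grad=True), "b": torch.zeros(1, requires_grad=True)}
meta_opt = torch.optim.Adam(params.values(), lr=1e-3)

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # a batch of tasks: y = a * x with a random slope a
        a = torch.randn(1)
        x_s, x_q = torch.randn(10, 1), torch.randn(10, 1)
        y_s, y_q = a * x_s, a * x_q
        fast = adapted_params(params, x_s, y_s)          # inner loop (task adaptation)
        outer_loss = torch.nn.functional.mse_loss(x_q @ fast["w"] + fast["b"], y_q)
        outer_loss.backward()                            # outer loop: grads w.r.t. meta-params
    meta_opt.step()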
AI For Biometrics:
Develop AI algorithms for the next generation of biometric analysis.
See https://www.science.org/doi/pdf/10.1126/sciadv.adi0329
AutoURDF:
Unsupervised Robot Modeling from Point Cloud Videos
AutoURDF is a pipeline that automatically generates URDF (Unified Robot Description Format) files from time-series 3D point cloud data. It segments moving parts, infers the robot’s kinematic topology, and estimates joint parameters, without any ground-truth annotations or manual intervention. This makes AutoURDF a scalable and fully visual solution for automated robot modeling.
The first paper can be found here: https://openaccess.thecvf.com/content/CVPR2025/papers/Lin_AutoURDF_Unsupervised_Robot_Modeling_from_Point_Cloud_Frames_Using_Cluster_CVPR_2025_paper.pdf
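For orientation only, here is a minimal sketch of the first stage of such a pipeline: grouping points into rigid-part candidates by clustering their motion between frames. It assumes known point correspondences across frames, which is a simplification of the paper's cluster-based registration, and segment_moving_parts is an illustrative helper, not the paper's code.

# Sketch: group points into rigid-part candidates by clustering their motion
# between two point-cloud frames (assumes per-point correspondences are known).
import numpy as np
from sklearn.cluster import KMeans

def segment_moving_parts(frame_t, frame_t1, n_parts=3):
    """frame_t, frame_t1: (N, 3) arrays of corresponding points at two timesteps."""
    displacement = frame_t1 - frame_t               # per-point motion vectors
    features = np.hstack([frame_t, displacement])   # position + motion
    labels = KMeans(n_clusters=n_parts, n_init=10).fit_predict(features)
    return labels                                   # one rigid-part label per point

# Toy example: two "links" translating in different directions.
rng = np.random.default_rng(0)
link_a = rng.uniform(0, 1, size=(100, 3))
link_b = rng.uniform(2, 3, size=(100, 3))
frame_t = np.vstack([link_a, link_b])
frame_t1 = np.vstack([link_a + [0.1, 0, 0], link_b + [0, 0.1, 0]])
print(segment_moving_parts(frame_t, frame_t1, n_parts=2))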
Generative Kinematics Synthesis:
Image-based Kinematics Synthesis with Generative Models
This project develops an image-based representation of the Planar Linkages dataset, covering mechanisms from simple four-bar linkages to complex structures such as the Jansen mechanism. Compared to previous studies, the dataset also includes crank-slider mechanisms, enabling broader exploration. A joint latent-space VAE model is used to investigate the potential of image generative models for simulating unseen kinematics and synthesizing novel motion trajectories. The same architecture also supports kinematic synthesis conditioned on both trajectory shape and velocity profile. Preliminary results highlight the flexibility of image-based representations in generative mechanical design, showing that linkages, revolute and prismatic joints, and, in future work, cams and gears can be unified under the same pixel-based encoding.
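As a rough sketch of the joint latent-space idea, here is a minimal conditional VAE in PyTorch; the image resolution, trajectory-feature dimensionality, CondVAE architecture, and loss weighting are placeholders, not the project's actual model.

# Sketch of a conditional VAE that pairs a linkage image with trajectory features
# in one latent space. Dimensions and architecture are placeholders.
import torch
import torch.nn as nn

class CondVAE(nn.Module):
    def __init__(self, img_dim=64 * 64, traj_dim=32, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim + traj_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent_dim), nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + traj_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, img, traj):
        h = self.enc(torch.cat([img, traj], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        recon = self.dec(torch.cat([z, traj], dim=-1))            # decode conditioned on trajectory
        return recon, mu, logvar

def vae_loss(recon, img, mu, logvar):
    rec = nn.functional.binary_cross_entropy(recon, img, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

model = CondVAE()
img, traj = torch.rand(8, 64 * 64), torch.randn(8, 32)
recon, mu, logvar = model(img, traj)
print(vae_loss(recon, img, mu, logvar))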
Knolling:
Design a robot that can look at a cluttered pile of Lego bricks and sort them neatly by color, shape, and size, using end-to-end ML.
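For intuition, here is a hand-coded (non-learned) sketch of the target "knolling" layout, grouping bricks by attributes and placing them on a grid; in the project this mapping from cluttered scene to tidy arrangement would be learned end-to-end rather than scripted, and the Brick fields and spacing are illustrative.

# Sketch of a scripted knolling layout: sort detected bricks by color, shape,
# and size, then assign tidy grid positions. The project would learn this
# mapping end-to-end instead of hand-coding it.
from dataclasses import dataclass

@dataclass
class Brick:
    color: str
    shape: str
    size: int

def knoll(bricks, spacing=0.05, per_row=4):
    """Return a target (x, y) position for each brick, grouped by attributes."""
    order = sorted(range(len(bricks)),
                   key=lambda i: (bricks[i].color, bricks[i].shape, bricks[i].size))
    targets = [None] * len(bricks)
    for rank, i in enumerate(order):
        row, col = divmod(rank, per_row)
        targets[i] = (col * spacing, row * spacing)
    return targets

pile = [Brick("red", "2x4", 8), Brick("blue", "2x2", 4), Brick("red", "1x2", 2)]
print(knoll(pile))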
Foosball:
Use reinforcement learning or other methods to train a robotic system to play on a foosball table. Start in simulation and continue on the real table (the physical system is in development).
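A minimal sketch of the simulation-first workflow, assuming Stable-Baselines3 and Gymnasium; Pendulum-v1 is only a stand-in for the foosball simulator, which is still in development.

# Simulation-first RL workflow sketch, with Pendulum-v1 standing in for a
# foosball environment that does not exist yet.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")                     # placeholder for a foosball env
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)               # train in simulation
model.save("foosball_policy_sim")                 # later: fine-tune / deploy on hardware

obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()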
2D Shape Vectorization:
Go from a raster 2D shape (e.g., a polygon) to a CSG tree of primitives (i.e., Boolean operations on simple shapes). If successful, extend the approach to 3D.
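As a starting-point sketch, here is one way to represent a small CSG tree of primitives and score it against a target raster with IoU; the primitive set, rasterization grid, and scoring choice are assumptions, and the actual search or learning over trees is the open part of the project.

# Sketch of a CSG-tree representation and a raster-matching score (IoU).
import numpy as np

def circle(cx, cy, r):
    return lambda X, Y: (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2

def rect(x0, y0, x1, y1):
    return lambda X, Y: (X >= x0) & (X <= x1) & (Y >= y0) & (Y <= y1)

def union(a, b):     return lambda X, Y: a(X, Y) | b(X, Y)
def intersect(a, b): return lambda X, Y: a(X, Y) & b(X, Y)
def subtract(a, b):  return lambda X, Y: a(X, Y) & ~b(X, Y)

def rasterize(shape, size=128):
    Y, X = np.mgrid[0:1:size * 1j, 0:1:size * 1j]
    return shape(X, Y)

def iou(pred, target):
    return (pred & target).sum() / max((pred | target).sum(), 1)

# Example: a plate with a hole, compared against itself (IoU = 1.0).
tree = subtract(rect(0.2, 0.2, 0.8, 0.8), circle(0.5, 0.5, 0.15))
target = rasterize(tree)
print(iou(rasterize(tree), target))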
Supervisory ML:
Explore whether a neural network (NN) can learn to determine whether another, observed neural network is confident in its answers, simply by looking at some of the observed NN's internal states.
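A minimal sketch of one possible setup, assuming PyTorch and toy data: a small "supervisor" MLP reads a hidden layer of an observed classifier and predicts whether that classifier's answer will be correct. The architectures and data are illustrative, not a prescribed design.

# Supervisor network trained on the observed network's hidden activations.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 10)
y = (X[:, 0] + 0.5 * torch.randn(2000) > 0).long()     # noisy toy labels

observed = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(observed.parameters(), lr=1e-2)
for _ in range(200):                                   # train the observed network
    opt.zero_grad()
    nn.functional.cross_entropy(observed(X), y).backward()
    opt.step()

with torch.no_grad():
    hidden = observed[1](observed[0](X))               # internal states (post-ReLU)
    correct = (observed(X).argmax(dim=1) == y).float().unsqueeze(1)

supervisor = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
sopt = torch.optim.Adam(supervisor.parameters(), lr=1e-2)
for _ in range(200):                                   # train the supervisor on hidden states
    sopt.zero_grad()
    nn.functional.binary_cross_entropy_with_logits(supervisor(hidden), correct).backward()
    sopt.step()

pred = (supervisor(hidden) > 0).float()
print("supervisor accuracy:", (pred == correct).float().mean().item())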
Self-Replicating NN:
See if a NN can learn to output the values of its own weights (all of them) while simultaneously learning to perform some other task.
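For illustration, a minimal sketch loosely following the neural-network-quine idea: the network predicts its own weight values from per-weight coordinates while also solving a toy auxiliary task. The coordinate encoding, network sizes, and auxiliary task are assumptions, not the project's specification.

# Self-replication sketch: one output head predicts the network's own weights,
# the other head solves a toy auxiliary task.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 2))
n_weights = sum(p.numel() for p in net.parameters())
coords = torch.linspace(-1, 1, n_weights).unsqueeze(1)          # one coordinate per weight

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    # Head 0: predict own weights from their coordinates (inputs padded to 3 dims).
    inp = torch.cat([coords, torch.zeros(n_weights, 2)], dim=1)
    pred_w = net(inp)[:, 0]
    true_w = torch.cat([p.detach().flatten() for p in net.parameters()])
    replication_loss = nn.functional.mse_loss(pred_w, true_w)
    # Head 1: a toy auxiliary task (predict the sum of a random 3-vector).
    x = torch.randn(64, 3)
    task_loss = nn.functional.mse_loss(net(x)[:, 1], x.sum(dim=1))
    (replication_loss + task_loss).backward()
    opt.step()
print(replication_loss.item(), task_loss.item())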
Required Skills
Experience with deep learning and an interest in research
Additional Information
All projects are described here:
https://docs.google.com/document/d/1iJyA0qXoGKP1t8I6IIfcxn3LcmMmilT_BMWxTcWk4q4/edit?usp=sharing
Students will have weekly meetings with the lab and be expected to present at the end of the semester. All students must register for credit.
