There are a lot of impressive uses for machine learning these days, like detecting objects in images, helping to diagnose diseases, and even enabling cars to drive themselves. But AI can also be used in more playful ways.
That’s why we made Move Mirror—an AI Experiment that lets you explore pictures in a fun new way, just by moving around. Move in front of your webcam, and Move Mirror will match your real-time movements to hundreds of images of people around the world striking similar poses. It feels like a magical mirror that reflects your moves with images of all kinds of human activity—from sports and dance to martial arts, acting and beyond. You can even capture the experience as a GIF and share it with your friends.
With Move Mirror, we’re showing how computer vision techniques like pose estimation can be made available to anyone with a computer and a webcam. We also wanted to make machine learning more accessible to coders and makers by bringing pose estimation into the browser—hopefully inspiring them to experiment with this technology.
To build this experiment, we used PoseNet, a model that can detect human figures in images and videos by identifying where key body joints are. Move Mirror takes the input from your camera feed and matches it against a database of more than 80,000 images to find the closest pose. It’s powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are never stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.
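If you want to try pose estimation yourself, PoseNet is available as a TensorFlow.js package. Below is a minimal sketch of estimating a pose from a webcam feed in the browser; it assumes a video element (here with id "webcam") that is already streaming from getUserMedia, and depending on the package version, estimateSinglePose may take positional arguments instead of a config object:

```javascript
import * as posenet from '@tensorflow-models/posenet';

async function estimatePose() {
  // Load the PoseNet model; the weights are downloaded into the browser.
  const net = await posenet.load();

  // Assumes a <video> element with id="webcam" already streaming from
  // navigator.mediaDevices.getUserMedia({ video: true }).
  const video = document.getElementById('webcam');

  // Estimate a single pose for the current frame. The result contains 17
  // keypoints (nose, eyes, shoulders, elbows, wrists, hips, knees, ankles),
  // each with an (x, y) position and a confidence score.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  for (const { part, position, score } of pose.keypoints) {
    console.log(`${part}: (${position.x.toFixed(0)}, ${position.y.toFixed(0)}), score ${score.toFixed(2)}`);
  }
  return pose;
}
```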
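Finding the best match then comes down to measuring how similar two poses are. The full pipeline is covered in the Medium post; one natural way to score similarity, sketched below with hypothetical names like `database`, is to flatten each pose’s keypoints into a vector, L2-normalize it, and rank candidates by cosine similarity:

```javascript
// Illustrative sketch of nearest-pose matching. `database` is assumed to be
// an array of { imageUrl, vector } entries with precomputed, normalized
// pose vectors.

function toVector(pose) {
  // Flatten the 17 keypoints into a 34-element vector and L2-normalize it.
  const v = pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
  const norm = Math.hypot(...v);
  return v.map(x => x / norm);
}

function cosineSimilarity(a, b) {
  // Both vectors are unit length, so the dot product is the cosine.
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function bestMatch(pose, database) {
  const query = toVector(pose);
  let best = null;
  let bestScore = -Infinity;
  for (const entry of database) {
    const score = cosineSimilarity(query, entry.vector);
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}
```

A real system would likely also translate and scale each pose into a common reference frame (and weight keypoints by their confidence scores) before comparing, and would use a search structure rather than this linear scan to keep matching fast across 80,000 images.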
We hope you’ll play around with Move Mirror and share your experience by making a GIF. Try it out now at g.co/movemirror.