By Joshua Koopferstock & Christian Laforte
For you technology lovers who are still kids at heart, Disneyland has recently opened the Innoventions Dream Home, showcasing cool high-tech integration in a futuristic home. What caught my attention was one invention called the Magic Mirror. Named after the mirror in Snow White, this Magic Mirror does not tell you “who is the fairest of them all” (for that you’ll still need HotOrNot.com). What it does do is let you virtually try on the clothes in your wardrobe. In fact, the Magic Mirror is not a mirror at all; it is a large display monitor with a video camera next to it.
Trying on a dress in the Magic Mirror. Photo: cepro.com
While the concept is neat, and would probably be even more useful in a department-store dressing room than in the bedroom, judging by the video below it is still far from the realism a technology like this needs to take off. A few years back, it was thought that virtual clothes shopping would be mainstream by now, and companies like My Virtual Model had signed contracts with major apparel retailers to integrate their technology into online stores. It turns out the technology wasn’t ready, and by the looks of this Magic Mirror, it still has a long way to go.
Here’s how I think they do it:
The dress moves roughly according to the orientation of the head, so
they are most likely using a simple real-time head tracker and applying
the pose of the head to the top of the dress. The bottom seems to be
animated randomly, or maybe through secondary animation.
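For the curious, here is a minimal sketch of that kind of overlay in Python with OpenCV, assuming nothing fancier than a webcam, OpenCV’s stock frontal-face detector standing in for whatever head tracker they actually use, and a pre-rendered dress image with an alpha channel (the file name and the scaling and anchoring constants are made up):

    # Hypothetical Magic Mirror overlay: anchor a dress image to a detected face.
    import cv2
    import numpy as np

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    dress = cv2.imread("dress_rgba.png", cv2.IMREAD_UNCHANGED)  # 4-channel BGRA

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.2, 5)
        if len(faces):
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
            # Scale the dress relative to the head and hang it just below the chin.
            scale = (w * 3.0) / dress.shape[1]
            dw, dh = int(dress.shape[1] * scale), int(dress.shape[0] * scale)
            overlay = cv2.resize(dress, (dw, dh))
            top = y + h                   # neckline sits under the face box
            left = x + w // 2 - dw // 2   # centred on the face
            # Clip to the frame and alpha-blend the dress onto the video.
            y0, y1 = max(top, 0), min(top + dh, frame.shape[0])
            x0, x1 = max(left, 0), min(left + dw, frame.shape[1])
            patch = overlay[y0 - top:y1 - top, x0 - left:x1 - left]
            alpha = patch[:, :, 3:4] / 255.0
            frame[y0:y1, x0:x1] = (alpha * patch[:, :, :3]
                                   + (1 - alpha) * frame[y0:y1, x0:x1]).astype(np.uint8)
        cv2.imshow("magic mirror sketch", frame)
        if cv2.waitKey(1) == 27:          # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

Anchoring everything to a single face box is exactly why a dress rendered this way would only follow the head, which matches what the video shows.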
Later this week I’ll post about a face-tracking algorithm that could make this part relatively easy.
How could we do it better?
One imperfection is very noticeable: the dress doesn’t follow the shoulders and hips properly. Part of it may be anatomical (this is a guy, after all), but I think this problem could be largely solved by tracking the silhouette (using background subtraction) and identifying the shoulders and hips with simple heuristics, e.g. areas of low curvature and roughly horizontal or vertical slopes. That alone would noticeably improve the realism.
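Here is roughly what I have in mind, assuming a fixed camera and a background frame grabbed while nobody is standing in front of the mirror; the threshold and the row fractions used to look for shoulders and hips are guesses on my part, not anything the Disney system is known to use:

    # Hypothetical silhouette-based shoulder/hip finder via background subtraction.
    import cv2
    import numpy as np

    def body_landmarks(background_bgr, frame_bgr, thresh=30):
        """Return the silhouette mask plus crude shoulder/hip x-extents."""
        diff = cv2.absdiff(frame_bgr, background_bgr)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            return mask, None, None
        top, bottom = ys.min(), ys.max()
        height = bottom - top

        def width_at(frac):
            # Width of the silhouette on the scanline a given fraction down the body.
            row = mask[top + int(frac * height)]
            cols = np.nonzero(row)[0]
            return (cols.min(), cols.max()) if len(cols) else None

        shoulders = width_at(0.20)   # ~20% down from the top of the silhouette
        hips      = width_at(0.55)   # ~55% down; both fractions are guesses
        return mask, shoulders, hips

The x-extents returned at the shoulder and hip scanlines could then drive the width and the attachment points of the rendered dress.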
Another improvement would be to track features on the user’s T-shirt, giving a better estimate of the body’s pose, its size and maybe even the person’s sex. I’d start my search with Automatic Non-Rigid 3D Modeling from Video (Torresani and Hertzmann, 2004), since I remember being impressed with the results way back then: it handles occlusions and variations in illumination very nicely. In the picture below, one of the researchers moves his hands in front of his T-shirt, and the algorithm still captures a 3D representation of the deforming shirt. Doing this in real time may be challenging, but fast GPUs and multi-core systems should make it possible.
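Reimplementing Torresani and Hertzmann’s non-rigid 3D modeling is well beyond a blog post, but the 2D front end it (and most alternatives) would need is easy enough to sketch: pick trackable features inside a torso region and follow them from frame to frame with pyramidal Lucas-Kanade optical flow. The torso region of interest and the tracker parameters below are placeholders, not values from the paper:

    # Hypothetical 2D feature tracking on the shirt (grayscale uint8 frames).
    import cv2
    import numpy as np

    def track_torso_features(prev_gray, next_gray, torso_roi):
        """torso_roi = (x, y, w, h) in prev_gray; returns matched point pairs."""
        x, y, w, h = torso_roi
        roi_mask = np.zeros_like(prev_gray)
        roi_mask[y:y + h, x:x + w] = 255   # only look for features on the shirt

        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7,
                                      mask=roi_mask)
        if pts is None:
            return np.empty((0, 2)), np.empty((0, 2))

        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      pts, None,
                                                      winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1
        return pts[good].reshape(-1, 2), new_pts[good].reshape(-1, 2)

Factoring the 3D shape and deformation out of those point trajectories is where the real heavy lifting in the paper happens.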
Still a Long Way to Go
With the method we have suggested, there is one major sticking point we have not addressed: content creation. Even if you can accurately track a person and render the image in real time, you still need a way to create the clothes in 3D. This is not a simple task: garments vary widely in elasticity, reflectance and so on, which makes automating the modeling process complex.
Before we see these Magic Mirrors in department stores like Sears or Macy’s, which carry hundreds of thousands of different apparel items each year, a method for automatically creating clothing content will have to be developed. And while automatic 3D content creation will take great strides over the next couple of years, the quality of 3D reconstruction needed for clothing is still a long way off.