With machine learning, you can finally make politicians say what you want them to

December 10, 2015
Anyone who's watched a political debate has probably wished they could influence the words coming out of a candidate's mouth. Now, machine learning is making that possible -- at least to some extent.

Researchers at the University of Washington have found a way to create fully interactive, 3D digital personas from photo albums and videos of famous people such as Tom Hanks, Barack Obama, Hillary Clinton and George W. Bush. Equipped with those 3D models, they could then impose another person's voice, expressions and sentiments on them, essentially rendering the models as 3D digital puppets.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person -- LeBron James, Barack Obama, Charlie Chaplin -- and interact with them,” said Steve Seitz, a UW professor of computer science and engineering.

To construct such personas, the team used machine learning algorithms to mine 200 or so Internet images taken over time of a particular person in various scenarios and poses. They then developed techniques to capture expression-dependent textures -- small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.
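To give a rough sense of what "expression-dependent textures" could mean in code, here is a minimal Python sketch. It is an illustration only, not the UW team's method: the function name, the array shapes, the assumption that photos are already warped to a common face template, and the choice of a per-pixel median are all assumptions made for the example.

```python
# Illustrative sketch only (not the UW pipeline): build one texture
# per expression by pooling aligned face photos within each cluster.
import numpy as np

def expression_textures(textures, expression_labels):
    """textures: (N, H, W, 3) float array of face photos assumed to be
    already warped onto a common template; expression_labels: (N,)
    integer cluster ids (e.g. smile, puzzled, open mouth)."""
    models = {}
    for label in np.unique(expression_labels):
        members = textures[expression_labels == label]
        # A per-pixel median is robust to occlusions (hands, glasses,
        # hair) that vary across the ~200 source photographs.
        models[label] = np.median(members, axis=0)
    return models
```

In practice the expression labels would come from clustering facial landmarks or pose estimates, and the hard part is the dense alignment that makes per-pixel pooling meaningful in the first place.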

By manipulating the lighting conditions across different photographs, they developed a new approach to densely map one person's features and expressions onto another person's face, making it possible to "control" the digital model with a video of another person.
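As a loose illustration of the underlying idea (again, not the paper's actual technique), one simple way to borrow a driver's expression while keeping the target's identity is to apply the driver's displacement from its own neutral pose to the target's neutral landmarks. The landmark indices and scale normalization below are assumptions for the sketch.

```python
# Hypothetical expression-transfer sketch: move the target's landmarks
# by the driver's deviation from neutral, so the target keeps its own
# identity (its neutral geometry) but adopts the driver's expression.
import numpy as np

def transfer_expression(driver_frame, driver_neutral, target_neutral):
    """Each argument is a (K, 2) array of 2D facial landmarks."""
    # Interocular distance as a crude scale normalizer; indices 36 and
    # 45 are the outer eye corners in the common 68-landmark convention.
    scale = (np.linalg.norm(target_neutral[36] - target_neutral[45]) /
             np.linalg.norm(driver_neutral[36] - driver_neutral[45]))
    displacement = (driver_frame - driver_neutral) * scale
    return target_neutral + displacement
```

Run per video frame, this keeps the target's neutral shape fixed and varies only the expression, which is the intuition behind "controlling" one face with another.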


“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The results will be presented next week in a paper at the International Conference on Computer Vision in Chile.

The research was funded by Samsung, Google, Intel and the University of Washington.

Katherine Noyes
