Hussein Mehanna is showing off a new incarnation of the Facebook smartphone app. It can transform a photo of your backyard barbecue into a Picasso. Or a Van Gogh. Or a Warhol.
The app includes a particularly extravagant photo filter. You select a work of art—something akin to, say, a 1907 Picasso—and it creates a Cubist incarnation of your backyard barbecue. It’s fun, and it works even with live video. Turn the camera on yourself, and you too can be a Picasso. But that’s not half as interesting as the technology that underpins Facebook’s new app and its extravagant photo filter. Mehanna is one of the Facebook engineers working to push artificial intelligence across the company, and as he explains, the app includes several deep neural networks, a form of artificial intelligence that’s rapidly reinventing the tech world.
‘We perceive the world in real-time. Why wouldn’t you want the same thing from your AI?’
Based loosely on the web of neurons in the human brain, neural networks can learn discrete tasks by analyzing vast amounts of data. This is the technology that identifies faces in the photos you post to Facebook, recognizes the commands you speak into your Android phone, and helps translate your Skype calls into foreign languages. Now, using various works of art, Facebook is training neural networks to inject a new look into your personal pics.
Typically, neural networks run on large numbers of computer servers packed into data centers on the other side of the Internet—they don’t work unless your phone is online—but with its new app, Facebook takes a different approach. The Picasso filter is driven by a neural network efficient enough to run on the phone itself. “We perceive the world in real-time,” Mehanna says. “Why wouldn’t you want the same thing from your AI?”
Already available in Ireland and due soon here in the States, this new Facebook app is another sign that deep neural networks will push beyond the data center and onto phones, cameras, and various other devices spread across the so-called Internet of Things. Last summer, Google squeezed a neural network into its Google Translate app, which can identify words in photos and translate them into other languages. And many other organizations, including the Allen Institute for Artificial Intelligence, are developing similarly svelte neural networks.
Yes, these tools can operate without an Internet connection. And that points to a future where our smartphone apps can perform a much wider range of tasks while offline. But it also shows we’re moving towards technology that can handle more complex AI tasks with less delay. Ultimately, if you can complete a task without sending a bunch of data across the wire, it will happen quicker.
Imagine apps that can instantly recognize faces or objects when you point your phone at them. Think what this could do for people who are blind or otherwise visually impaired. “Doing this on the phone changes the nature of the game,” says Allen Institute CEO Oren Etzioni, pointing out that this can even help drive augmented reality headsets like the Microsoft HoloLens. If a device can more accurately recognize the world around it, it can more accurately augment that reality.
Training Versus Execution
A neural network operates in two stages. First, a company like Facebook or Google trains it for a particular task, like image recognition or machine translation. Facebook might teach a neural network to recognize goats, for instance, by feeding it millions of goat photos. Then someone like you or me executes the neural network. We give it a photo, and it tells us whether it contains a goat.
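The two stages can be sketched in code. Here's a minimal illustration, assuming a toy single-neuron "network" trained with gradient descent in pure Python; the function names and the tiny dataset are purely illustrative, not anything from Facebook's or Google's actual systems.

```python
import math

def train(examples, epochs=2000, lr=0.5):
    """Stage 1: training. Adjust weights to fit labeled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Sigmoid "neuron": squash the weighted sum into (0, 1).
            pred = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = pred - label
            # Nudge each weight against the error gradient.
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def execute(model, x1, x2):
    """Stage 2: execution. Apply the frozen weights to a new input."""
    w, b = model
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

# Hypothetical "goat detector": label is 1 only when both features are high.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train(data)            # expensive, done once by the company
score = execute(model, 1, 1)   # cheap, done per photo by the user
```

The asymmetry is the point: training loops over the whole dataset many times and is done in the data center, while execution is a single forward pass over fixed weights, which is what makes it feasible to ship on a phone.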