Why is Deep Dream turning the world into a doggy monster hellscape?


Raphaël Bastide, Handmade Deep Dream (2015). If this were a real Deep Dream image, these would probably be dogs.

Social media users will by now be well aware of the artistic renaissance that has been underway since the release of Google's Deep Dream visualization tool last week. Antony Antonellis' A-ha Deep Dream captures well the experience of encountering these unsettling images on the internet:

Antony Antonellis, A-ha Deep Dream (2015).

By way of recap: Deep Dream takes a machine vision system of the kind normally used to classify images and tweaks it to over-analyze an image, amplifying the patterns it detects until it sees objects that aren't "really there." The project was developed by researchers at Google who were interested in the question: how do machines see? Thanks to Deep Dream, we now know that machines see things through a kind of fractal prism that puts doggy faces everywhere.
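For the technically curious, the core of the trick is a feedback loop: show the network an image, ask which patterns a chosen layer responds to, then nudge the image's pixels to make those responses stronger, and repeat. Below is a minimal sketch of that loop in PyTorch; Google's released notebook used Caffe, and the layer choice, step size, and iteration count here are illustrative assumptions rather than Google's settings.

```python
# A minimal sketch of the Deep Dream loop in PyTorch. The layer choice,
# step size, and iteration count are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = Image.open("input.jpg").convert("RGB")
x = T.Compose([T.Resize(224), T.ToTensor()])(img).unsqueeze(0)
x.requires_grad_(True)

for _ in range(20):
    model(x)
    # Gradient ascent: nudge the pixels so that whatever the layer
    # already half-sees gets amplified; that is the whole trick.
    loss = acts["out"].norm()
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.grad.zero_()

out = x.detach().squeeze(0).clamp(0, 1)
T.ToPILImage()(out).save("dream.jpg")
```

And since ImageNet, the dataset these networks are typically trained on, contains over a hundred dog breeds, amplifying what the network half-sees tends to pull dog faces out of everything.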

It seems strange that Google researchers would even need to ask this question, but that's the nature of image classification systems, which generally "learn" through a process of trial and error. As the researchers described it,

we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network's representation of a fork.
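That last suggestion, visualizing the network's representation, can be done by running the same gradient ascent from pure noise: start with static and adjust the pixels until the network's score for a chosen category goes up. The researchers' example was a fork, but an off-the-shelf ImageNet network has no "fork" category, so this hedged sketch ascends a dog class instead; the class index and hyperparameters are placeholders for illustration.

```python
# A hedged sketch of "visualizing the network's representation": start
# from noise and push the pixels uphill on one class score.
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.googlenet(weights="DEFAULT").eval()
CLASS_IDX = 207  # assumed ImageNet index for "golden retriever"

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # pure static
for _ in range(200):
    score = model(x)[0, CLASS_IDX]  # how dog-like does the net think x is?
    score.backward()
    with torch.no_grad():
        x += 0.05 * x.grad / (x.grad.abs().mean() + 1e-8)  # gradient ascent
        x.grad.zero_()
        x.clamp_(0, 1)  # keep pixels in displayable range

T.ToPILImage()(x.detach().squeeze(0)).save("what_the_net_sees.png")
```

Run long enough, this produces the network's own hallucination of its target class, which is exactly the sanity check the researchers describe: if the picture doesn't look like the thing, the network hasn't learned the thing.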
