Yes. Let me skip all of the expository material and find something along those lines. This is just a fun picture that may actually come across on the camera. This is a picture of Frank Rosenblatt, a very early pioneer of neural networks, who actually did attempt to implement the kind of neuron-based computational model that von Neumann and Turing had talked about back in the ’40s and ’50s.

This device is called the Perceptron. It’s a physical instantiation of a brain, built out of wires, and of course it also shows you why that approach could not have worked with the technology of the time. He died in 1971.

This is maybe just interesting for a bit of historical color. One of the earliest data sets that was really used in a systematic way for testing various kinds of machine learning and machine intelligence is called MNIST. It was derived from data collected by the US National Institute of Standards and Technology (NIST) for a very, very simple problem: reading the handwritten digits in the ZIP codes on postal addresses.

It’s really just designed for testing different approaches to reading handwritten digits. They had a lot of high school students and Census Bureau employees write digits again and again and again, in order to have enough data to train all of these kinds of systems and also to test them and see how good they were. This was the benchmark for many years for all kinds of machine learning approaches.

Performance got steadily a little better, but it remained fairly poor for many, many years, up until the point when we returned to deep networks, networks similar in structure to the kinds of things that Rosenblatt had built, but with many more neurons and with the full power of all this training data. Then, suddenly, this problem was essentially solved.
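To make that concrete, here is a minimal sketch of what training a digit classifier on MNIST looks like today. This is my own illustration, not anything shown in the talk; it assumes PyTorch and torchvision, which bundle an MNIST loader.

```python
# Minimal sketch (an illustration, not the speaker's code): training a small
# network on the MNIST digits with PyTorch and torchvision.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST images are 28x28 grayscale digits labeled 0-9.
train_data = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),        # 28x28 image -> 784-dimensional vector
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # one score per digit class
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:  # one pass over the training set
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```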

The way the solution looks can be visualized in diagrams like these. I don’t want to get too technical, but essentially what it amounts to is models of neurons arranged in layers, each layer processing a patch of the image and feeding the output of that analysis forward to the next layer. These models proceed in layers just like cortical layers, just like the layers of cortex in the brain.
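The talk doesn’t name a specific architecture, but the layered, patch-by-patch processing described here is what a convolutional network does. A rough sketch, again assuming PyTorch:

```python
# Sketch of the layered, patch-based processing described above (an
# illustration, not the network from the talk). Each Conv2d layer slides a
# small window over its input and feeds the result forward to the next layer.
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5),   # layer 1: 16 filters, each looking at 5x5 patches
    nn.ReLU(),
    nn.MaxPool2d(2),                   # downsample so later layers see larger regions
    nn.Conv2d(16, 32, kernel_size=5),  # layer 2: combines layer-1 features over 5x5 patches
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 10),         # readout: one score per digit class
)

x = torch.randn(1, 1, 28, 28)          # a dummy 28x28 grayscale image
print(cnn(x).shape)                    # torch.Size([1, 10])
```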

What I find really most compelling about these things is not only that they solve these kinds of simple problems better than any previous technology did, but also that they learned to solve them in ways that look very much like what you actually see when you put electrodes into the brains of rats or macaque monkeys or other animals and observe what happens in real brains.

These are obviously experiments with some ethical implications, but they’re also very important experiments if we want to understand how these things work. What you’re seeing here are the learned patterns for these artificial neural networks trained to recognize simple images.

What you see on the right are the so-called receptive fields of neurons early in the visual cortex of a real animal. You see that the patterns are essentially the same. This is a kind of convergent evolution, if you like: we design a system that has a brain-like architecture but is otherwise unconstrained with respect to how it solves the problem.

We look at a real system that has solved the problem with a brain architecture, and we see that they’ve learned how to do this in the same way. If you look at the responses of the neurons higher up in these artificial neural networks, you see sensitivity to more and more sophisticated forms of patterns and shapes and so on.
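For the curious, the “receptive fields” of the artificial neurons are just the learned weights of the first layer, and they are easy to inspect. A sketch, assuming the small convolutional network from the earlier example has been trained on images; with enough training, the filters tend to come out looking like the oriented edge detectors recorded in early visual cortex.

```python
# Sketch (an illustration): inspecting the first-layer filters of the trained
# network "cnn" from the earlier sketch. Each filter is that artificial
# neuron's learned receptive field.
import matplotlib.pyplot as plt

first_conv = cnn[0]                            # the first Conv2d layer
filters = first_conv.weight.detach().numpy()   # shape: (16, 1, 5, 5)

fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f[0], cmap="gray")               # one 5x5 filter per panel
    ax.axis("off")
plt.show()
```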

I know I’m providing a lot of very visual examples. We obviously do more than just analyze pictures, but this is easy to show in slides. At least, it would be easy to show in slides. Let me show you what happens if you now take one of those kinds of networks and reverse it.

What I mean by reversing it is this: you train a network to recognize what’s in a picture, but then instead of using it forward, you use it in reverse. You take a picture that is known... Let me skip this style transfer example for the moment and go to something else. You take a picture that is known, like this one. This is not a trick image. It’s just a picture of some clouds in the sky.

You feed it to a neural network that is looking for meaning in this picture. What is meant by meaning here is one of roughly 1,000 category labels, including various breeds of dogs, cars, and so on. Then you say, "Instead of just telling me what you see, why don’t you modify the image in order to enhance the things that you see? Show us what you see in the clouds."
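The procedure described here matches what became publicly known as DeepDream: gradient ascent on the input image so that whatever the network already detects gets amplified. A rough sketch of that idea, assuming a recent torchvision with a pretrained ImageNet model; the choice of VGG16, the layer to amplify, and the step size are my own arbitrary choices.

```python
# Sketch of the "enhance what you see" step (my reconstruction, not the
# speaker's code): gradient ascent on the input image so that the activations
# an ImageNet-trained network already produces get amplified.
import torch
from torchvision import models

# Any ImageNet classifier works; VGG16 and the layer cutoff are arbitrary here.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
features = model.features[:20]            # activations from an intermediate layer

def dream_step(image, lr=0.02):
    """One step: nudge the image so the chosen layer's activations get stronger."""
    image = image.clone().requires_grad_(True)
    features(image).norm().backward()     # "enhance the things that you see"
    with torch.no_grad():
        # normalize the gradient so the step size stays stable, then ascend
        return image + lr * image.grad / (image.grad.abs().mean() + 1e-8)

image = torch.rand(1, 3, 224, 224)        # stand-in for the cloud photograph
for _ in range(50):
    image = dream_step(image)
```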

If you do that, you begin to see some patterns emerge in the picture. Progressively, what emerges is something that looks, to my eye, a little bit like a sort of Buddhist fantasia with all kinds of crazy structures appearing in the clouds. Are you able to see this on the screen in enough detail to make anything out? These were fascinating and surrealistic images.

When we first saw them at the beginning of the summer, I really was blown away. One of the researchers who did this work, Mike Tyka, realized that what this procedure does is add detail to images, and that you could let the deep neural network hallucinate, or free-associate, by alternating this process with zooming in on the image.

This generates something like a semantic fractal, which looks like this. I was going to show you one other crazy hallucination that began with a surfer and ended with something very strange happening in the scene. Here’s the zooming-in video. We start with the clouds with all kinds of things hallucinated onto them.
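Continuing the earlier sketch, the alternation being described is simple to write down: one dream step, then a slight center zoom, repeated so that new detail keeps being hallucinated at every scale. The zoom factor and frame count below are arbitrary.

```python
# Sketch of the zoom-and-hallucinate loop (an illustration, building on
# dream_step and image from the previous sketch).
import torch.nn.functional as F

def zoom(image, factor=1.05):
    """Crop the center of the image and scale it back up to the original size."""
    _, _, h, w = image.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[:, :, top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)

frames = []
for _ in range(200):           # each iteration becomes one frame of the video
    image = dream_step(image)  # hallucinate a little more detail
    image = zoom(image)        # then zoom in and repeat
    frames.append(image.detach())
```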
