The ability to recognize people from an image of their face, and to read off the same kinds of characteristics that you or I would see when we look at a person’s face, is in itself a neutral technology.
I say that it’s neutral because it really is just about an algorithm perceiving things from an image that we perceive ourselves with our own eyes and brains. What’s not neutral is where that technology is put and what it’s used for.
For example, suppose it’s running locally on your own device. Let’s imagine for the moment that you have a retinal implant that runs the FaceNet algorithm, which, by the way, our team also developed. FaceNet takes a picture of a face and represents it as a small set of numbers that are unique to that face.
If the same face is seen from another point of view or lit differently, it’ll resolve to the same numbers. This could be very, very useful if it were implanted in your retina and you meet many people and forget their names, the way I do, because I could hear or perhaps see the name as a reminder, and that would make me much less socially awkward in a lot of situations.
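The core idea described here, that two images of the same face resolve to nearby numbers, can be sketched roughly as follows. The vectors and the distance threshold below are made up for illustration (real FaceNet embeddings are 128-dimensional and come out of a trained network); only the comparison logic is the point:

```python
import math

def euclidean_distance(a, b):
    """L2 distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=1.0):
    """Treat two embeddings closer than the threshold as the same face.
    The 1.0 threshold is a hypothetical value for this sketch."""
    return euclidean_distance(emb_a, emb_b) < threshold

# Two captures of one face, differently posed and lit, land close together...
frontal  = [0.11, -0.42, 0.33, 0.07]
side_lit = [0.12, -0.40, 0.35, 0.05]
# ...while a different face lands far away in embedding space.
stranger = [-0.61, 0.20, -0.15, 0.48]
```

With these toy vectors, `same_person(frontal, side_lit)` comes out true while `same_person(frontal, stranger)` comes out false, which is the invariance property being described: the representation, not the pixels, is what gets matched.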
If, on the other hand, we take the same exact technology and we attach it to all of the cameras that are surveilling the street corners in London, then we have a massive surveillance technology for tracking everybody, wherever they go in the city. That’s not OK.
One of these applications is, I think, quite sinister and quite invasive of people’s privacy and agency. The other is, I think, purely empowering, at least for people who meet a lot of others and aren’t very good with names, and it has very few downsides.
In this case, I think a lot has to do not with what the brain does, because it’s the same thing our own brains do, but with where that brain runs, who owns it, and where its output goes. Those are exactly the same kinds of questions that arise with cameras.