-
We’re recording our conversation.
-
OK.
-
We’ll provide a transcription for you to read before we publish it.
-
Sure.
-
(laughter)
-
I was asking...
-
She was asking, "What kind of profile resists working in VR?" I’m like, "No, [laughs] because it really doesn’t take any learning, especially with the hand controllers, this generation of motion controllers. We were born learning this, so it’s actually easier to teach than the keyboard."
-
It’s interesting, because I was showing a 360 video yesterday on an HTC Vive, and someone had left the controller out, so the ghost of the controller was living inside the 360 video. I kept turning and I would see the controller in my field of view.
-
Yeah, yeah.
-
Which made me think about how the medium is asking to be more and more interactive and more and more immersive. Then, in a way, when I started out doing 360 video, I realized there’s a time coming when that’s just going to be gone, when videogrammetry replaces it. Everything’s going that way.
-
Yeah.
-
There’s a real investment and involvement with this virtual space, probably social virtual space.
-
Yeah, I mean, last year was mostly about solo experiences, but I think as of last October or so, we have pretty mature social platforms. There’s not a lot of mass adoption, but technologically they’re mature.
-
I talked to this guy who was a VR journalist about his time in social VR when he was interviewing me and my wife. We asked him how long he had stayed inside the platform. I can’t remember which one it was. He was like, "I was in there for..."
-
What was the platform again?
-
I can’t remember.
-
OK.
-
But he was like, "I was in there for 21 hours."
-
The only open‑source one is High Fidelity at the moment, or Hi‑Fi for short.
-
I didn’t know this one.
-
There’s a lot of commercial ones as well.
-
Trying to explain to people that that’s the reason behind Facebook’s acquisition of Oculus...
-
Uh‑huh, sure.
-
...that’s been a bit of a stumbling block because I think at this point, we’re still on the rollercoaster.
-
Right, right, right. This is...
-
That’s you?
-
Yeah, this is me, 3‑D scan and everything, lip‑sync also. There’s a whole social face‑tracking thing. As you can see, this is done by the guy who did Second Life. You see plenty of places with zero people at the moment.
-
That’s so interesting. One of my mentors is Nonny de la Peña, who’s a VR journalist, basically, one of the first adopters of the Vive and also one of the first people to bring, or I think the first person to bring, virtual reality to film festivals. One of the projects she did is called "Hunger in Los Angeles." Her initial work, before she got into VR, was on Second Life.
-
It was an academic analysis of Second Life, so it makes a lot of sense...
-
What in Los Angeles?
-
It’s called "Hunger in Los Angeles."
-
"Hunger in Los Angeles."
-
Yeah, so if you look up Nonny de la Peña...
-
Like this?
-
No, "s."
-
OK, "Hungry in Los Angeles."
-
Then Nonny de la Peña is...Yeah, there you go. I think she got rid of Immersive Journalism. She has a slightly more updated website. By "slightly more updated," I mean a Squarespace website. [laughs]
-
This piece was in many ways the beginning of virtual reality as we know it, because she had to figure out how to show this at Sundance. Originally, she was using some like military‑grade virtual reality helmet. This is outside a...it’s modeled after a...How do I put this? Sorry. I just had a moment of realizing that I was being recorded.
-
(laughter)
-
We can edit the transcript afterward.
-
This is modeled after an incident that one of her researchers went out and recorded at a soup kitchen in Los Angeles where someone went into diabetic shock while waiting for their food. Because hunger in Los Angeles is a real problem, she wanted to figure out a way to actually get people to understand it.
-
She essentially created this walk‑around volumetric virtual reality system that could put people into this situation. Obviously it’s made in Unity. Those pixels are quite blocky at this point, but it was very effective at the time. People were bursting into tears when they took off the headsets, or what have you.
-
Her intern was a guy named Palmer Luckey [laughs] who was just somebody who was really into playing with phones in his garage. He helped to build the first system that was not using military technology to make this piece available to people at Sundance.
-
Is that a glove, or was there a hand‑capture device?
-
There was no hand‑capture device. The hand is in the pocket. What it’s actually doing...See the cross on the head is what’s...I’m sure...Right? Get it?
-
Right, right, right.
-
The cross on the head is the sensor that’s placing you within the virtual context.
-
Ooh, that’s nice.
-
Yeah, so she’s done a lot with this as well, this ghosting effect which is also in her piece, "Project Syria."
-
I see.
-
When my partner Eline and I won an award called the Tim Hetherington Visionary Award that allowed us to basically choose a mentor for the first VR project that we ever made, we chose her, because she held her breath for so long and stuck with it while nobody believed in the technology.
-
Yeah, it’s been pretty amazing to see her life change in the past three years as this has become more and more of a mainstream technology.
-
This is great, this is great. This is recreating an actual event a few minutes long, and you can just immersively participate in it.
-
Exactly. She has a very specific methodology with a very specific journalistic ethic behind it. She’ll do recordings, field recordings, or use existing news footage to recreate events. The piece of hers that’s the most powerful for me is one called "Kiya," which is about a domestic violence situation. Actually, about a murder.
-
What’s the keyword?
-
Actually, you know what? I would say, instead of going to Immersivejournalism.com, because she’s in the process of closing this one out...
-
I was just going on here.
-
...her company now is called Emblematic.
-
Emblematic.
-
Which is very up your alley, I feel like after reading that piece. It’s Emblematic Media, I believe. I don’t even know what it is now.
-
That’s it.
-
You should be able to find it. Emblematic Group, yeah. [pointing at a Formula 1 VR experience on the screen] That’s a commercial experience. "Use of Force" is a project that was at Tribeca. She produces a lot. "Kiya" is the one on the right.
-
"Two sisters’ efforts to rescue their third sister, Kiya..."
-
I think that that may actually be available on Steam. I’m not sure.
-
OK. It’s loading. I’m not sure what it’s doing. Oops.
-
Do you want to log into Squarespace?
-
(laughter)
-
I think there’s just one screenshot there.
-
Yeah. I honestly think that Steam may be the best way to get it, if they’ve put it up there yet. I think that they must have, because that was an Al Jazeera commission, and I would assume that they want to get it out there in any way that they can. Yep. You can get it.
-
There we go. Yeah, yeah. It’s interesting. Thank you. That’s great. I wish we got a video.
-
But even this. You look at this. It’s puppets. It’s motion capture, and then there’s a thin wrap of pixels on top of it. All of this would now be done volumetrically. Sorry, not with a volumetric scanner but with videogrammetry. Same thing, basically.
-
Basically, you pick one from this menu?
-
That’s an interesting thing and this leads to a conversation that I’m really interested to have with you, which is about perspective in VR. Because it’s something that has become a pretty recent obsession of mine and I feel like you have interesting thoughts on it, knowing a little bit about what you said about your experiences.
-
This is my piece...
-
Yeah, which I’ve read.
-
Yeah. The relevant part here is that I worked with some students to recreate their classrooms and connect those classrooms together, and did interviews with some high school students and finally with a news anchor. This is the High Fidelity space that I mentioned. It’s mo‑cap, or whatever the Vive can offer as a mo‑cap substitute.
-
(laughter)
-
I like how they’re all just sitting on nothing. That’s wonderful.
-
Yeah, perspectives. What would you like to talk about?
-
It’s just that I think there’s a capacity in the technology to address perspective and motion in different ways. You can address it from an objective perspective, so you can be...what do you call it? The Swayze effect; you can be a ghost, which is more common in 360 video. There’s also this idea of inhabiting yourself in a virtual space. There’s the idea of becoming someone else in virtual space.
-
Then there’s something that I felt after reading your piece that if you don’t know about it, you should, because, I think, it would be very up your alley and that’s the ability to merge with someone else’s identity through VR. Do you know the piece, "The Machine to be Another"?
-
I think so. Yeah.
-
"Gender Swap," have you ever heard of that? Have you done it?
-
Sure, yeah.
-
Where did you do it?
-
I started in Second Life. It was very long ago, actually, 2006, ’07, something. At that time, I was working on a Second Life search engine. The search engine was supposed to send invisible robots to everybody’s spaces in Second Life and look around, just like a real robot would do.
-
Visually, because the Second Life engine only feeds you the textures if they’re in your line of sight. It learned to crawl the spaces, literally, and collect all the visual information so that we could build a search engine. You could ask it, "Where are all the red clothes, or white necklaces?" across the whole world, and look in and search around.
-
But one part of that led me to record the movement of one real avatar, because we wanted to learn how a person actually crosses the space, how to tell which alley is interesting and which is not, and so on, to save some CPU time.
-
As part of that work, I was just recording people’s movements and the ways they looked. Of course, it’s not VR, it’s not mo‑cap, but it’s a predecessor of that. As part of that, I could just replay the path and the head position. That’s pretty much it. There’s nothing else in Second Life anyway. [laughs]
-
But that also means that I get to immerse myself, at least to this day, in my co‑developers’ viewpoints and lifestyles and everything, because we did capture all the visual information that they got. It’s like the Wayback Machine, the Web archive, but for Second Life.
-
But there’s also a tagging possibility, right, where you could actually...? Like could you track metadata on the individuals and also track...? You could be like, "All brown‑haired people are in these spaces and..."
-
Exactly.
-
That’s so interesting. I get it.
-
Basically, it’s a recorder, but a recorder with depth perception, because that’s what Second Life gives us. You can recreate just the part of the experience that’s relevant. You can take all the other people and their accessories out and just focus on the interaction between two people in a very large party, so the other people are just transparent. That’s the technology, but...
-
But it’s not in real time, or whatever, right?
-
Exactly. You can say, "Before entering this discussion, attend this replay of the meeting from the other side’s perspective," but only for five minutes or something. It’s like a briefing session before a deliberation, but it’s actually working on a factual level. That’s the main idea.
-
Machine to be Another, their piece, "Gender Swap," is a real‑time meet‑in‑a‑world version of that using virtual reality as the interface.
-
I saw the media.
-
That piece, I feel like it’s one of the most powerful applications of virtual reality that I’ve ever seen, because it opens up this idea of real empathy. Because people always talk about empathy being [inaudible 18:42] machine and all that rhetoric.
-
Which I don’t really believe with VR, because I think that there’s an aspect of alienation that’s inherent to the medium. That’s not true for augmented reality. Right?
-
Right. I understand the issue.
-
How come people don’t acknowledge that? Marketing?
-
I don’t think they’ve had sufficient firsthand experience in both media to tell one from the other. Because, as you said, it’s clunky. With the WiFi appendage, I think, we’re getting there. Just to let you guys know what we’re talking about.
-
(laughter)
-
I wrote an essay on it as well that I’ll send to you. It’s about who the people are behind the project. One of the interesting things about it is that a lot of them are what you would call "third culture kids." They’re from multiple cultures and therefore have a unique understanding of culture and identity.
-
I feel like since we’re all going in that direction anyway, that was one of the most profound experiences that I’ve ever had in the medium.
-
"Library of Ourselves."
-
Did "Library of Ourselves" actually happen? I feel like "Library of Ourselves" is basically Snapchat Spectacles, in a way. See? Oh no, this is a different one.
-
(pause)
-
Oh no. I’ve never seen this one. That’s amazing.
-
Awesome. Awesome.
-
That’s amazing.
-
(pause)
-
Fantastic. I don’t know what they’re using now, but when I had that experience in 2014, which was the gender version of the one you just saw, it was using a Chinese Oculus knock‑off with a low‑quality streaming camera.
-
Yeah, I’m aware of it, yes. Just a second. Which was this one, right?
-
That was, yeah, "Gender Swap."
-
This was the one you did?
-
Yeah. I also had [laughs] another experience of it at Tribeca in 2015, where I did it with my friend and designer Clint who’s my collaborator. It was really funny because we have very similar perspectives on the world, so it didn’t really feel like anything. It just felt like dancing with myself.
-
Sure. [laughs] They’re orchestrated to basically perform the same movements just for the sake of recording.
-
Actually, what’s interesting about it is there is some level of orchestration, but there’s also a Ouija board effect where nobody’s pushing and nobody’s pulling, but obviously, that’s not what’s actually happening. You fall into a subtle...
-
So that you dance in resonance.
-
Exactly. Yeah. A lot of what makes that happen is that they emphasize that you slow down. As you work through the experience, there’s a meditative aspect to it as well.
-
(pause)
-
In interviewing them, I brought up this idea of the third entity. That’s the term that I used, which was something that’s borrowed from a theater teacher who talked about, "When person A meets person B, they create person C."
-
There’s this sharing of me and you, and then there’s you distinctly, me distinctly, and they...The funny thing was one of the people who was part of the group that does this, he was like, "’The third entity,’ that’s the term that we use in our workshop." It’s something that’s there in all of our brains that we can reach for the third entity like we all understand the concept.
-
I work in psychoanalysis, so it’s like...
-
Got it.
-
...part of the setting. This is, of course, a very powerful demo. But I do think the scalable way of doing this is some blend between synthetic and the real, like in‑the‑flesh. Like this, I think, is very convincing.
-
I agree.
-
The hand‑touching part.
-
Here I’m sending you the essay that I wrote about it. There you go. That also opens up this question of, "How are you going to get people to buy these devices when they’re most effective in a performance setting?" I think that there’s been an unhealthy rush to sell the devices to an unwilling public and it’s going to take some time.
-
The defense is, "Probably, people are going to find them anyway."
-
That’s true. But how far off do you think we are from room‑scale experiences with mobile VR?
-
I think Daydream is already pretty much there.
-
But there are limitations to Daydream in terms of the Z axis, so you’re still talking about an X‑Y flat experience on some level.
-
Oh yeah, you have to be sitting there. But the most powerful experiences I’ve seen so far involve very little mobility, as far as the Vive and Oculus experiences go.
-
That’s interesting. Do you view Tilt Brush as a powerful experience or do you view it just as an amusement?
-
[laughs] A lot of my experience is just sitting in a rotating chair. I think the rotating chair is part of the mobile VR experience.
-
Yeah, it is.
-
It doesn’t work if you take that out.
-
That’s one thing that we played with a lot in our first 360‑piece because it’s a piece where a lot of it is split screen, so you have an opening of two worlds touching each other.
-
There’s something about watching the way that people rotate around, facing front and seeing Kenya, facing back and seeing San Diego. There’s something about that choreography that they fall into when they’re doing that; it really feels magical. It feels like breaking new ground in the medium.
-
I think that that’s one of the things that’s really interesting is that it feels, sometimes, working on this specific technology, like, "I am the ant, and all of my friends are ants who work in the field and we’re all just working on something that we don’t fully understand every aspect of."
-
Yeah, part of the swarm.
-
Part of the swarm.
-
Yeah.
-
Which you would think would be a disconcerting experience.
-
Why? We’re social animals. We’re part of the swarm.
-
But...
-
Better than part of the herd.
-
(laughter)
-
But it’s also the identity thread of it, of being part of something larger than yourself. At least from the perspective of an American, where individualism is supposedly what it’s all about all the time, you feel a little bit like you’re not supposed to be a finger or a fingernail; you’re supposed to be the whole body.
-
I guess, the endpoint of that is terrible people [laughs] who are...I’m trying very hard not to say Donald Trump’s name right now.
-
I think there are symptoms, there are causes.
-
(laughter)
-
About the psychoanalysis thing, because you’re clearly very good at listening. Do you actually practice psychoanalysis?
-
Yeah, sure.
-
On both sides of the couch?
-
A little bit. I study group therapy. I don’t actually analyze people on the couch, but I’ve been doing personality analysis for six years or more. Anyway, it’s a very long time. But I do facilitate groups: group therapy and group dynamics and things like that.
-
It’s the thing, because anyone who works in social work, care work, or psychotherapy knows that it’s not really the therapist doing the healing. We’re just channelers, and a lot of experience doing that humbles us a little bit.
-
Interesting.
-
It’s not an individualist‑collectivist thing. It’s just knowing that personalities, identities, are artifacts of communication. If you take the social part out of it, there are no identities. You can cling to memories and replay them and try to confine yourself to an identity, but it doesn’t have to be like that. That’s the main perspective I’m coming from.
-
Did you ever read that Neil Gaiman, "Miracleman" reboot?
-
No.
-
There’s a race of aliens in it whose form of creativity is changing bodies so they can swap their consciousness into different bodies depending on how they want to express themselves artistically.
-
Awesome.
-
Yeah, amazing. But once that’s available to us through this technology or some other technology, what does that mean to a group versus an individual? What does that mean to identity overall?
-
They’re just false...I do think they’re ends of a spectrum that you can fluctuate between over the course of the day, from being very meditative to very social. I think it’s healthy to explain the phenomenon like this. But there’s no fixed point in time where you can say, "This is me, and the others are not; the others are just performance." No, it’s all us.
-
It’s interesting, because if you talk about it that way, you have the question of, "What limits, in terms of what you can put out into the world and how you can improve the world, are there actually, and what are just the limits that you’ve created for yourself?"
-
There is a certain amount of achievement or, I would almost say like, "help" that you can offer to other people that you may not be aware you can offer to other people.
-
If you say, "Time‑box five minutes, 15 minutes, or something to help other people, even though they’re complete strangers," then you discover a part of yourself that you didn’t already know.
-
Yeah, but there’s also something, and this gets back to the empathy machine idea, that how you define help, and what actually is help, is sometimes hard to really know. Because the idea that these technologies are going to actually help anyone and not just lock us into some strange techno‑dystopia, that’s a leap of faith in and of itself. Are these things going to improve society?
-
Because I’ve been doing VR classes and everything for 16 years, so for me it’s always about what we ask of the technology, not the other way around.
-
But with AR, I feel that’s right on, because that’s a technology that feels very human, and I feel much more in the driver’s seat of an AR experience. If you look at the enterprise applications and the social applications, it can definitely benefit people and expose layers of reality that we’re unaware of.
-
The idea of augmenting reality is a little bit off. I feel like it’s more about exposing stuff that already exists.
-
It’s surfacing.
-
Surfacing.
-
Yeah, surfacing different levels of reality.
-
Exactly. But VR, the isolation, the sealing yourself off from the world, what is the benefit of that? Do you think that there’s a benefit to that socially, culturally?
-
To me it’s also a spectrum. There are earphones. There are earphones with noise cancellation, which is the audio kind of VR, and there are earphones with pass‑through. I have a pair of earphones that have very good noise cancellation, but also a microphone that takes everything from the outside and plays it into your ear...
-
(laughter)
-
...if you enable the pass‑through mode. That is actually a lot of where VR is going. I gave classes to students in Kaohsiung and Hangzhou, and I taught them to use the Tron mode of the Vive, which uses the camera on the front of the Vive. You can enable it, and then you’re in a VR space but you see this blue outline of the world. It’s AR.
-
I told them to use SketchUp or whatever to draw their classrooms, which are very regular, to the point where if they put the headset on and switch to Tron mode, they see what they built but also see the surroundings.
-
That’s interesting. That’s very interesting.
-
This is also AR or mixed reality but coming from that isolation part of it.
-
On a technology level, that spectrum is clearly being acknowledged by the major hardware and software manufacturers. In the case of Microsoft, they’re probably not going to be making many HoloLenses compared to the other device manufacturers who make augmented reality glasses that run Windows Holographic.
-
The head of Microsoft’s HoloLens program is a woman named Lorraine Bardeen, who talks about that spectrum: there’s VR on one side and augmented reality on the other, and a lot of the technology that was developed for HoloLens is now being fed over to the virtual reality side.
-
Which is what Acer and all the other companies that are now manufacturing, or have contracts to manufacture, the Microsoft VR devices are using.
-
What I’m saying is that complete isolation, other than looking at the Milky Way for meditative purposes...
-
(laughter)
-
...that’s probably the one very good use of it, but it’s very limited, and I think everybody will switch to the social side.
-
Slowly move down the spectrum.
-
Probably within a year or two.
-
I agree with that. I would say, "Google agrees with that as well," because Tango and Daydream are on a collision course it seems with each other.
-
We have a product concept now. [laughs]
-
Right, and you have the camera pass‑through. You have the Daydream, the soft utopian Google Daydream headset, and if you have pass‑through going through the camera with Tango, you basically have another box...
-
They just rolled out a phone that has both so...
-
Amazing.
-
... just need a cell phone.
-
Wait. What is the phone that they rolled out?
-
I think it’s an Asus phone. ZenFone is a start. ZenFone AR.
-
I think my friend, Andy, told me that this was going to happen. When did this come out?
-
A couple of weeks ago.
-
Because it used to only...Tango used to be only on like a giant Phablet.
-
Yeah, this is Tango.
-
(laughter)
-
This is, actually, pretty well built. It’s not heavy. It’s not a giant phablet. This has the usual three cameras. The idea is that you can use Tango to do real‑time modeling and then put on Daydream to preview. It has a weird amount of RAM.
-
(laughter)
-
They don’t trust it yet to have the systems fully integrated. They don’t have...
-
It’s all software at this point. The hardware is ready, and the software part is, at the moment, two virtual machines, so you can’t be in Daydream mode and Tango mode at the same time. But it’s a software problem. I’m sure it could be solved if they came together and promoted this jointly.
-
Didn’t they move Daydream over to the VR department?
-
To what?
-
Did they move Daydream, I’m sorry, did they move Tango over to the VR department at a certain point? I feel like they did.
-
I think it’s in a supergroup, but it’s not really the same department. Anyway, this is just the first. I’m pretty sure that there will be plenty of builds like this, because it’s proven to work in a phone.
-
It’s funny. Look at all those things. [pointing at the Tango and Daydream logos] Do you think the graphic designers are just waiting to push those together?
-
(laughter)
-
Yeah, probably.
-
That’s hilarious. Have you played with Tango?
-
In its very, very early days, but not extensively. I had those three different cameras [laughs] separately, in the software stages.
-
Hilarious. You’ve developed for Daydream as well?
-
I did the SDK but I don’t have the Daydream device yet.
-
I’ve heard really good things from developers, like my friends who are developing for it. But the thing that’s funny about all this is that I’m so not technological. I’ve disengaged from all social media, which Jonna thinks is hilarious. I’m not a developer at all. It’s just...there’s something about all this stuff that really calls to me. I’m a filmmaker and an artist.
-
All this is just so that people can make art.
-
(laughter)
-
This is literally what the cameras do.
-
Absolutely. What are you going to do?
-
I’ve been spending a lot of time with those AR devices.
-
This is the audio?
-
AR glasses.
-
OK. Those are the new ones.
-
But this is really AR, because after I’d been wearing these for hours, I didn’t really feel that they were there.
-
That’s interesting. Do you know Roundware?
-
No.
-
Roundware is interesting. You’d have to have the iPhone in your pocket, but check it out. It’s an audio AR platform. It’s called Roundware. It’s open‑source. It’s really brilliant.
-
Roundware?
-
Roundware, yeah.
-
Roundware.
-
Halsey Burgund is the creator. [Audrey pulls up a Roundware link from Google] When did this come out? Did they build this on Roundware?
-
I have no idea.
-
Yeah. I saw Halsey’s name in that. Man, you’re quick with keyboards.
-
(laughter)
-
It says, "Costs, game." I see. It’s a video game that’s built on this platform.
-
He’s an artist in residence at the MIT Open Doc Lab. Do you know when this was published? Because I know he wanted to start working with the (MIT) Game Lab. Oh, no. This is old.
-
Yeah, this is old.
-
That must’ve been something that he was just playing around with. Basically, you can embed audio using Google Maps, using the Maps API. You can see it behind there. You can tell stories through it. He’s a pretty interesting guy. He’s got a good spirit.
-
It’s informed by everything that happens in your vicinity and just maps it to the auditory spectrum.
-
You could do something as basic as having multiple drone‑like audio pieces playing in different zones, so that as you pass between the zones, you transition. Or you could tell a complex story using the real world as your canvas.
-
The thing that he’s actually added into it that’s really fun is the ability for users to tag the spaces with their own comments as they’re going through the pieces. As a user, you can encounter the...
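Mechanically, the zone idea is simple: each piece has a location and a radius, the listener’s gain for each piece falls off with distance, and walking between zones crossfades them. Here is a minimal sketch of that logic, assuming a flat local projection of GPS coordinates; this is illustrative, not Roundware’s actual API:

```python
from dataclasses import dataclass

@dataclass
class AudioZone:
    name: str
    x: float       # zone centre, metres in a flat local projection
    y: float
    radius: float  # full-volume radius
    fade: float    # extra distance over which the piece fades to silence

def gain(zone, lx, ly):
    """1.0 inside the zone, linear fade to 0.0 across the fade band."""
    d = ((lx - zone.x) ** 2 + (ly - zone.y) ** 2) ** 0.5
    if d <= zone.radius:
        return 1.0
    if d >= zone.radius + zone.fade:
        return 0.0
    return 1.0 - (d - zone.radius) / zone.fade

def mix_levels(zones, lx, ly):
    """Per-piece playback gains for a listener at (lx, ly)."""
    return {z.name: gain(z, lx, ly) for z in zones}
```

Halfway between two zones, both pieces play at partial volume, which is the transition you hear as you pass through.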
-
It’s a participatory form of sound. In High Fidelity, because it’s built as an open‑source platform, there’s a really good mixer, and they wrote their own audio codec for it. It’s been used for things like this, but also to mix real‑time jamming events.
-
For example, you have a soloist and then...You always start with the drummer, actually. There’s the drummer, and then the drummer pipelines to the soloist and to the guitar and whatever. Everybody adds a layer to the jam with very minimal latency, and the people in the High Fidelity world just listen to the final mixed sound as they walk towards the different players.
-
Spatialized?
-
Right. Exactly. Although they’re physically in different places, they can still jam using this kind of time‑delayed technology.
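The spatialization the listeners experience can be modeled as distance attenuation plus left/right panning per player, summed for each listener. A toy sketch of that idea, not High Fidelity’s actual mixer or codec:

```python
import math

def spatial_gains(listener, source, ref=1.0):
    """(left, right) gains for one source as heard by one listener.
    listener is (x, y, facing_radians); source is (x, y)."""
    lx, ly, facing = listener
    sx, sy = source
    d = math.hypot(sx - lx, sy - ly)
    atten = 1.0 if d <= ref else ref / d           # inverse-distance falloff
    angle = math.atan2(sy - ly, sx - lx) - facing  # bearing relative to facing
    pan = math.sin(angle)                          # +1 means fully to the left
    left = atten * math.sqrt((1 + pan) / 2)        # equal-power pan law
    right = atten * math.sqrt((1 - pan) / 2)
    return left, right

def mix(listener, players):
    """Sum every player's spatialized contribution for one listener, so
    walking toward the drummer makes the drums louder in your mix."""
    left = right = 0.0
    for p in players:
        l, r = spatial_gains(listener, p)
        left, right = left + l, right + r
    return left, right
```

Each listener gets their own mix from the same set of player streams, which is what makes walking around the virtual stage change what you hear.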
-
I teach...There’s a State Department program I’m involved with where filmmakers from the Middle East come to the US and learn about American production methods. I do the immersive and interactive mentoring for them.
-
I’ve been trying to explain what social VR is to them, and the best way that I can explain it is that you’re always the person sitting in the turning chair, even in a solitary VR experience, even if it’s very convincing how it shifts your perspective. But with social VR, I can be in Moscow and you can be in Taipei, and you can be a lion and I can be a snake, and we believe each other.
-
I think this has tremendous potential for post‑conflict work.
-
(laughter)
-
As evidence.
-
As evidenced by this. Seriously, I do think the bigger the conflict, the more traumatic it is, the more social the healing process needs to be.
-
I see that.
-
In the clinical realm, group therapy or psychoanalysis, we just fix one endpoint of this large traumatic conflict. Of course, you hope that this person can carry some of the help, some of the resilience, back to their home network. But wouldn’t it be even more effective if the home network could use such a VR to work on their...?
-
Let me tell you something else. Do you know who Skip Rizzo is?
-
Nope.
-
Skip Rizzo is another person who works at USC with Nonny de la Peña, in a separate area. Skip does PTSD therapy through VR for veterans of Iraq and Afghanistan, and now the Vietnam War as well. He was one of the first people to adopt VR as a therapeutic tool. It’s interesting because of the experiences he creates...[Finding a link about Skip Rizzo for Audrey] Let me just get this...To my knowledge, he’s not working on anything that’s social.
-
But I find what you’re saying very interesting. The stuff he’s doing is AI‑based or immersion therapy‑based. Let me find...I have a video that I’ll share with you.
-
That’s great. So far, the proven literature on immersive therapy is mostly around desensitization, which is, of course, an important part, but a very small part of it.
-
Desensitization literally means "numbing," so the question is, "Are you actually getting to the root of the trauma when you do that?"
-
Exactly. It’s just scratching the surface so to speak.
-
But it can do so much more.
-
Sure. Desensitization has its place. An example that worked: before having stage fright, anticipate the stage fright and use VR to simulate a huge room. [laughs]
-
I’ve heard about that being very effective, actually.
-
It’s a very quick fix. Basically, you engage the stage fright two hours or two days before the speech itself.
-
(laughter)
-
Have you seen any of the footage of the initial immersion therapy VR programs? "Fear of Heights," and all that stuff?
-
Yeah.
-
There’s a certain camp appeal to pretty much all of the ’90s VR stuff, especially the older media reports on ’90s VR. It’s really fun to watch.
-
What are your current projects?
-
That’s so interesting. I am here, in a way, because I feel like I need to press the reset button, look a little bit inward, and figure out what comes next. Because I feel like I was barreling along for two or three years, and I want to take some time to rethink a lot of stuff, a lot of the work that lies ahead.
-
There’s a slate of virtual reality projects that my wife and I are working on right now that are just starting to take shape. In addition to all of the work and meetings and everything happening here on the surface, underneath the surface all that work is starting to build up, because I never have any time.
-
This conversation packs into an hour all of the sitting back and thinking I get to do in two weeks or a month at home, because I’m always just working, working, working. We’ll see. I can’t give you any definitive answers. But I can say that there are three or four things on the horizon that are just starting to bloom.
-
That’s great. We’re in a similar place. [laughs]
-
But Kel is going to have video shot during this trip, right?
-
I think so. Yeah. I think so. The best thing I could say about that video that we’re intending on shooting, "It’s very amorphous right now." But essentially, we want to find a place in Taiwan that no one will recognize outside of Taiwan, so there’s a feeling of complete displacement in the environment.
-
We want to find a place that has certain cultural signifiers attached to it that send you off in false directions to give viewers a sense that the world is a much bigger place than you can ever possibly imagine.
-
Expand their minds.
-
What?
-
Expand their minds.
-
Yeah. Basically. But I don’t know what that means, and I don’t know whether it’s going to have documentary or fiction aspects. You know, I’m pretty obsessed right now with one‑shot virtual reality experiences: 360‑degree experiences that don’t have traditional editing. I think that comes from, it’s a reaction to, the last couple of pieces we worked on, where I just don’t...I don’t know.
-
Every time we make something, I want to go in exactly the opposite direction. But the odd thing about that is that once you start building up enough work, it all starts to look alike; no matter how much you think you’re pulling the wheel in another direction, it’s still coming out of you. That speaks to the limits of identity that you were talking about before.
-
This spectrum of identity does actually exist. That you are who you are no matter what you’re creating, it’s always going to look like you. It’s always going to come out as you.
-
There’s going to be continuity, of course, because we don’t get every day as a blank slate. But I do think that the forms or the experiments or the creations, they are probably just the projections of your dimension through other people’s dimensions. You see what I mean?
-
Yeah, totally.
-
The thing is, it makes sense to say that while you’re not entirely open to everybody else’s dimensions, it does make sense to grow your own dimension out of them, and of course you’re going to be continuous along that dimension of a life’s trajectory.
-
If you’re talking about projecting your own dimensionality, there’s a certain specificity that is the only key to universality. If I were to go out tomorrow and say, "I’m going to make a universal piece of work that everybody is going to understand and it’s going to sum up the world," it would be the most nonspecific piece of shit ever made.
-
But if I say, "I’m going to burrow into my own perspective and create something that’s very true to me," that sometimes feels like the only pathway to...
-
Universality.
-
...universality. Yeah.
-
My favorite novel, "Finnegans Wake."
-
Wow, look at you.
-
(laughter)
-
You’re like the only person that got through it.
-
I translated part of it, but it’s very time‑consuming.
-
(laughter)
-
In any case, what I’m saying is that the vocabulary of experience is private, entirely specific to that particular person, but because the execution is not limited to other people’s grammar, not limited to other people’s language, it’s at the same time entirely universal. So I don’t think those are two different things.
-
It’s just being authentic to your experiences, which of course involves interactions with people who are your contemporaries, who carry with them cultures and everything like that.
-
It’s not possible to say, "I create something," without all the embeddings [laughs] in the local culture. But it’s important at the same time to say, "But I don’t adhere to that culture," or, "I’m not charged with furthering that culture." I think it’s the same as saying that we’re not limited by identity. Of course, every day we’re limited by the limits imposed by the previous days.
-
Yeah.
-
Though I couldn’t fully follow your conversation, it has been so interesting. There are a few things coming to mind. I have a question for Audrey. You just mentioned that you taught in Kaohsiung and Hangzhou.
-
Yeah, yeah.
-
What was that occasion? Are you still teaching now as a minister?
-
The class finished a while ago. That was before my visit to Paris, but after I had become a minister, so around September and October of last year.
-
Is it possible for you to teach again?
-
Sure. Why not? The thing is that when I said, "I teach in Kaohsiung and Hangzhou," I mean that I took a half‑day off in my dormitory...
-
(laughter)
-
Makes sense.
-
...and sent an avatar of me, photorealistic, to Kaohsiung and Hangzhou [laughs] asking them to connect to this virtual classroom. Then I asked all my students to give a VR model of themselves and recreate their classrooms and we staged a classroom together. I’m synchronously there [laughs] but we’re all in this VR space.
-
Is it possible...? After our workshop next week, I’m thinking of extending similar meaningful programs to the young students here, so if there’s a possibility, I might come and listen, to follow along, and try to make the most of it.
-
I do tour around the world in robotic doubles...
-
(laughter)
-
...giving lectures and everything. Double Robotics is really pretty good for that purpose. Have you ever tried it?
-
No. It’s fascinating to me.
-
Because we are trying to set up a virtual college on digital arts in Chengchi University.
-
Oh yeah, OK. I thought you meant...
-
Snowden wrote it. He’s probably the most famous user.
-
(laughter)
-
There’s a really good "Community" episode about it.
-
They have this too.
-
Amazing. That’s hilarious.
-
They designed it so that the stitching algorithm cancels the entire thing out, so you get a floating perspective.
-
(laughter)
-
I’m talking about work that I made a year and a half ago, and it’s like, "In my day stitching was terrible." Now it’s like it doesn’t matter, everything’s...
-
It doesn’t matter anymore. It is a solved problem.
-
(laughter)
-
Solved problem. Literally everything that was published before like April, 2016, has these irreversible stitching errors.
-
Exactly.
-
Will we accept that in the future?
-
Why not?
-
Yeah, I guess. I hope so.
-
I’m sure some AI will look at those old films and completely patch over them...
-
(laughter)
-
... and fix all the artifacts.
-
Or just be like, "This is a terrible film. Make the human slaves fix them."
-
(laughter)
-
Seriously, the generative model is...
-
Yeah.
-
You know what the generative model is.
-
(laughter)
-
Basically, what this does is look at a huge amount of bedroom images, and [inaudible 56:11] are really just the initial few rows, and the machine autocompletes based on...
-
That’s amazing.
-
It’s very sub‑descriptive. [laughs] All of this is synthetic. As of a few months ago, it’s a solved problem. You can very easily teach it to complete images and to remove stitching artifacts in a very convincing way. It also generates album covers out of nowhere, out of noise. These are entirely dreamed up but still pretty convincing. Yeah.
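[Editor’s note: the "give it the first few rows and the machine autocompletes the rest" idea can be made concrete with a toy sketch. This is not the learned generative model being discussed, which is trained on huge image corpora; it fakes the completion step with nearest‑neighbour retrieval over a tiny hand‑made "dataset" of 4×4 images, purely to illustrate the shape of the task.]

```python
# Toy illustration of image autocompletion from the initial few rows.
# A real generative model learns a distribution over images; here the
# "model" simply retrieves the training image whose opening rows best
# match, then copies its remaining rows.

def row_distance(a, b):
    """Sum of absolute pixel differences between two rows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def autocomplete(partial_rows, dataset):
    """Complete an image: find the dataset image whose first len(partial_rows)
    rows are closest to partial_rows, and append its remaining rows."""
    k = len(partial_rows)
    best = min(
        dataset,
        key=lambda img: sum(row_distance(r, s)
                            for r, s in zip(partial_rows, img[:k])),
    )
    return partial_rows + best[k:]

# A tiny "dataset" of three 4x4 grayscale images (lists of pixel rows).
dataset = [
    [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]],  # a box
    [[9, 9, 9, 9], [9, 0, 0, 9], [9, 0, 0, 9], [9, 9, 9, 9]],  # inverted box
    [[0, 9, 0, 9], [9, 0, 9, 0], [0, 9, 0, 9], [9, 0, 9, 0]],  # checkerboard
]

# Hand over the first two rows of a slightly noisy box image; the closest
# match is the box, so its bottom rows [0,9,9,0] and [0,0,0,0] are filled in.
completed = autocomplete([[0, 0, 0, 0], [0, 9, 8, 0]], dataset)
print(completed)
```

The same retrieval scaffold makes clear why a learned model is needed in practice: with real photos no training image matches exactly, so the model must synthesize plausible pixels rather than copy them, which is what makes inpainting over stitching artifacts possible.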
-
Isn’t it nice?
-
On some days, yeah. If you’re wearing the right pair of glasses when you look at it, it’s very nice. I don’t know. You seem to have a very healthy attitude toward collaboration with technology. Does it ever feel to you like things are moving too quickly?
-
No. Not quick enough.
-
(laughter)
-
What’s your ideal thing that will happen next month with technology?
-
It’s probably already happened. It’s just not evenly distributed. Take the Google Translate team: last October, they solved the Chinese‑English translation problem.
-
I haven’t downloaded the app, the magic app, yet.
-
Then after they solved it, it only took three or four months for a comparable performance, open‑source technology, OpenNMT, to appear on GitHub courtesy of Harvard University.
-
When I said, "Not quick enough," I mean, we had to wait three or four months but it’s, actually, quick enough for most of business uses because people would, of course, take even more time to evaluate and fit into their daily flows. But knowing that Chinese‑English translation is a solved problem, actually changes pretty much everything.
-
Not to turn this into...not to take us down a very dark alley, but I remember being in Accra, in Ghana, and seeing all the smoke rising from one neighborhood, and being told later by our camera assistant that that was where all of the computer waste was sent and lit on fire.
-
It is definitely a problem.
-
Those problems, though, can they be solved as quickly...? Can a problem involving real environmental degradation be solved as quickly as a problem like communication? I guess you could say that a problem like communication, once it’s solved, can lead to solutions to the problems of the environment and the problems of politics, if we just understand each other well.
-
Yeah, because then the entire systemic fault, the systemic shortcoming, is before everybody’s eyes and people can’t really go on ignoring it. But just to go back to your question.
-
(laughter)
-
I’m happy to appear like this anywhere in the world.
-
He’ll send a robot to you.
-
My second thought is that I really enjoyed this relaxing conversation. Is it possible for you to have a similar occasion, like the Google event you did just a few weeks ago, talking about VR, to try to open the eyes of those in the audience who are not very familiar with...?
-
You’re talking about my speech at Mix Taiwan about artificial intelligence and Daydream?
-
Yeah. That’s right.
-
Sure. I’m happy to share this because I did give a talk in Paris which is what you read.
-
The two of you, sitting together, having a very relaxing conversation.
-
That’s what we’re doing now, right? [laughs]
-
Yeah, but try to rebuild it online.
-
Online?
-
Yeah.
-
If we start filming then.
-
(laughter)
-
We’re running out of time. I need more coffee.
-
You need more coffee. I’m sure. Drop by anytime and we’ll get a video recording.
-
Is it possible...? [non‑English speech]
-
I think I’m good for now.
-
[non‑English speech]
-
[non‑English speech]
-
[non‑English speech]
-
[non‑English speech]
-
You’re welcome to drop by any time. And just to say that maybe next time, we actually do film ourselves so that it becomes an educational clip of some sort.
-
Can you then set us up as Second Life avatars?
-
Sure.
-
Then it can be broadcast out as our virtual selves talking to each other?
-
Yeah. That’s what I did, actually, for a lot of interviews.
-
Nice.
-
I just send my audio stream and then my avatar autocompletes the movement and everything.
-
How are you doing your photogrammetry scans of yourself? Just with 123D Catch, or are you doing...?
-
It’s just a regular rig, a few cameras in a cylindrical rim.
-
Where did you do yours?
-
In Paris, somewhere else in Paris. But in Taiwan there’s a studio, I think, with a very similar arrangement, and they deliver pretty good scans. That’s where I got Chen Ya-Lin to scan her image for the interview I showed briefly, of us hugging each other. It’s pretty convincing.
-
We could do that.
-
We can do that.
-
Oh, OK.
-
Yeah. Or if you have an Adobe Creative Cloud account, there’s Adobe Fuse, where you can very easily model yourself using a Sims‑like interface. It will be more cartoonish, but the details will actually be even more detailed. There’s a tradeoff. [laughs]
-
See, that’s what I’m waiting for in terms of VR capture: somebody to crack videogrammetry in a way that doesn’t require this 30‑camera, green‑screen setup. I actually think what you...
-
Tango’s going to. Supposedly, Tango is going to solve that.
-
You know all of the open‑source stuff that James George and Alexander Porter and all the Scatter people did?
-
Mm‑hmm.
-
It seems to me like they’re so closely aligned that all of the stuff that they did with the Kinect scans and the 5D mount and all that, it’s basically the same, right?
-
Yeah. It is.
-
But with Tango you’re still talking about a single perspective, so if you have three Tango phones, you could do...?
-
Or just one Tango phone on a rail.
-
But with movement of the subject. If you’re talking about movement of the subject, you’re actually talking about live capture of the subject and then blowing that out into a room‑scale VR experience.
-
Yes.
-
How would you do that? You just...
-
You just switch two types of phone systems.
-
That’s a funny thing about VR, as well, is that we always thought that the more cameras, the better and what we’re slowly discovering is that two cameras are better than anything.
-
That’s why Vive is designed this way, firmware update issues notwithstanding.
-
(laughter)
-
But that’s all it takes, really.
-
That’s going to be a moment when we actually have the ability to put volumetric scans into space with relative ease and simple authoring tools. That’s when the masters of the medium are going to arise.
-
Right. I have this sticker here, but it could be a masquerade mask or something. Suppose that I wear this. Suppose this is a Tango phone. Suppose it has a projection, a synthetic rendering of the foveation of where my eyes are looking, so that people see a pair of eyes. Not real eyes, but where my eyes are looking.
-
Suppose that we both wear this, and both of our phones have this Tango‑mode scan on. Then, actually, we are each other’s sensors.
-
That’s very interesting.
-
Then we don’t have to install anything; as long as we stay in each other’s field of view, we’ve got all the information we need.
-
I think if you throw area learning into that, specifically the kind of thing HoloLens has been playing with, then you have the ability to create this forest style, like world on top of world, on top of world, on top of world, that’s psychedelic, to say the least.
-
Yeah.
-
Cool.
-
Kel has been talking with me about his idea to have a VR festival here.
-
Sure.
-
I’m going to work that through the pathway of the university. I think that was a good idea. I’m trying to invite Kel and his wife back to set some fires everywhere.
-
That’s great.
-
I’m looking forward to working with you.
-
Of course, anytime. Drop by. Or I’ll send my digital double.
-
(laughter)
-
OK.
-
It’s really, really great to meet you.
-
Sure.