-
Excellent. I’m glad you were able to join in. I know that the meeting schedule moved around a little at the last moment there. Time is a flat circle.
-
It’s raining outside, and I was just like, yeah, it’s crazy.
-
That’s good, good.
-
Great. There’s two things…
-
Hey, here’s your headset. It doesn’t rain in XR.
-
(laughter)
-
In XR, it never rains. Well, when it rains, it doesn’t get wet. [laughs]
-
So it’s been all of no wind.
-
Wow, that’s an art form in itself. A collage, a collage. We call it a collage.
-
This is a sculpture. This is a sculpture.
-
That’s right, Yngve sculpture. [laughs]
-
That’s amazing. Let’s go through the state of play about a few things before we get into VR or anything. The state of play is that we had originally thought that we could work towards a recorded presentation today, which at one point could even have been live.
-
As the conversation progressed, it became clear we needed more time. We are still in a planning conversation today, of course. We’ve been recording the planning conversations throughout, so we can hopefully use some clips that are interesting or relevant when we have a final edit to make.
-
The conversation so far has been revolving around a few different avenues. ETOPS was a starting point: Yngve’s conversation-based magazine, where conversations around a topic are printed without the authors being identified.
-
The conversation began with the idea of extending that magazine into a format facilitated by machine learning, databases, or other things. That seemed to take it too far away from ETOPS as a very conversational, interpersonal project. That was one part.
-
Through that conversation, we did come onto just wanting to experiment with machine learning in general. The update on that is that I have given a number of Minister Tang’s speeches to machine learning artist Ross Goodwin. Ross is available to jump on the call with us today to talk about some ideas for experiments around that, just to see if there’s anything of interest.
-
I want to say that Ross has a very fast-moving mind. He has a lot to say, so maybe that should be the last part of the call. I’m not sure; maybe we need to make an excuse to sign off at a certain point. I’m sorry, this is being recorded. I shouldn’t be so defensive.
-
(laughter)
-
I’m fine as long as Yngve is fine.
-
(laughter)
-
Ross really does have a good, thorough knowledge of using text corpora as creative material for AI projects. He will be able to give us some insight into what the possibilities are.
-
The other part of the conversation that was quite fun was the XRSpace part, which we all know about – of course, Yngve’s had quite a process getting his headset – the idea being some sort of encounter in XRSpace.
-
As part of that, we came upon the idea of the background of the 360-degree cinema being a possibility for a format that Yngve could explore that might align with some aspects of his work.
-
Those are all of the things that are on the table. It’s probably fair to say that none of them feels exactly like a final project at this point.
-
Yngve and I had a conversation this week, Minister Tang, where we were talking about what our presentation will be like, given that we’re coming to the close of our time together.
-
[laughs]
-
It seemed like it could be of interest to have the presentation happen in XRSpace as an experience, rather than necessarily a finished project. Within XRSpace, there could be a conversation around the topics that we have discussed, the two of you talking about your own practices.
-
Perhaps we can create some backdrops within that, which we could use as prompts or just as [inaudible 5:10] material. That does leave open the possibility that, in the spring, the Kunsthall Stavanger show is still on the table.
-
There is still the possibility that either the machine learning or 360-degree carpet aspects could be further developed by everyone collectively or by individuals within the collaboration into things that would be exhibited.
-
There’s still the possibility of developing something further from this point, but it’s maybe good to start thinking in terms of just having an interesting presentation and less a final product. That said, we can see how today goes. Maybe we’ll get there.
-
Is that a fair summary of everything?
-
Yeah. I want to add two things I thought about. I don’t even know what it looks like in XRSpace, because I was on the outside. We have to find a way to record the inside in some way that looks…Maybe it’s also OK to be on the outside and look at it if we have a conversation on the inside and we have carpets and stuff, but I couldn’t see…
-
Oh, right. That’s great.
-
I was outside. Now it’s like…
-
That is a crucial problem in that. Anyway, keep going.
-
Maybe we could see what’s possible there. That’s why I’m excited to get into XRSpace, because I don’t know how it feels.
-
Second thing I’d like to…
-
[inaudible 6:49] go through the motion of setting up your avatar and stuff?
-
I haven’t. I just got it [inaudible 6:56] .
-
That will take a while though. That will take 10 minutes, 20.
-
To make an avatar?
-
Yeah, to make an avatar, choose your likeness, and get acquainted with the controller or hand gestures. That would easily take you 15 to 20 minutes, just saying.
-
Probably two hours, if I know myself. [laughs]
-
No, it’s fine. It’s really easier than other VR pieces, but still there’s some learning curve.
-
I’m excited to see that. That’s why I wanted to ask. As for the AI, the machine learning part: I was thinking yesterday, the AI is now fed with a set of data from Audrey Tang. With the GPT…I always forget it.
-
Yeah, GPT, that’s right.
-
I didn’t know how to ask a question. It’s a hard thing to imagine how you can move with it or play with it. I was thinking maybe I also need a set of me in there, and we can have a conversation where they just bounce back and forth like a conversation forever. There could be a conversation from Kunsthall Stavanger, basically.
-
Yeah, that’s right. It takes forever to prepare for the show, and the show is just this forever preparation. I like the idea.
-
(laughter)
-
A dress rehearsal, a hash mark day.
-
It also solves the problem of all of the time zone coordination that we’ve had to do. If you have synthetic beings that are able to carry out your conversation for you, it makes scheduling easier.
-
Yes, that’s right, [laughs] exactly. Before you joined, Yngve, I was sharing with Michael a few digitally scanned assets from the Taiwan Digital Asset Library. For example, this is a famous temple, and there’s many temples like that in the Taiwan Digital Asset Library.
-
I’m showing you this temple only because it makes sense in the context of a very Taiwanese thing called lottery poetry. I don’t know whether you’ve heard of it. It’s basically a set of randomly generated fortune cookies writ large, larger than average fortune cookies.
-
You will come to the temple with a topic in mind, but you don’t necessarily have to say it out loud. Then you draw one of those lottery poetry sticks, and it contains an auspicious message or something. It’s always in a cryptic form, like a sphinx.
-
Then you pay for the service for the temple scholar to interpret that lottery poetry for you. It’s a very Taiwanese thing.
-
(laughter)
-
The best thing about this is, of course, that it’s a pre-generated set of randomly generated poems, but it could be interpreted to mean anything. This interpretational part is like the assistive intelligence part that Michael was referring to, basically, treating me as a generator of lottery poetry.
-
That’s a very apt metaphor that totally works in Taiwan, but probably doesn’t work in other places that don’t have this culture. [laughs]
-
We have star signs, right? There’s horoscopes every day. It’s not so far away. When my partner wants to read me my future, [laughs] I’m like, “Pardon me, I’m a Taurus. I’m the star sign that’s least capable of believing in star signs.” Whenever I say, “It’s not my thing,” it’s that thing.
-
This is very interesting. I also like the format of those papers. I also think it’s a nice way of…They almost look like bigger labels or something like that.
-
That’s exactly right. It’s in the form of haiku poems. It’s interesting, and it’s very conversational, interpersonal. That’s the part where maybe having two avatars will work. One will basically generate a poem, and the other will then interpret.
-
Then we will keep talking about the message generated by the prompt from the individual seeking, I don’t know, fortunetelling. Now we become fortunetellers. Anyway, that was just an idea I was introducing to Michael as he joined and just for context.
-
Also very printable. I could just imagine the printers somewhere just spitting out the…
-
That’s right.
-
[laughs]
-
This is really interesting. I’ve heard of other randomness-based practices connected with Taoism and Buddhism, but I hadn’t heard of this before.
-
The most famous one is the national fortune lottery poetry. At the beginning of every year, this grand temple draws what will happen to the country in the next year. It’s a thing in Taiwan.
-
(laughter)
-
Lottery poetry, it’s beautiful. It’s a nice…
-
It even rhymes.
-
I’ve got a last subject: just collecting a bunch of interviews that I have done in the past. I think we can turn them into something, Michael. I don’t know how that works, but maybe that’s something that can generate these too.
-
It’s definitely possible. We should get Ross on to talk through that in some more detail. There’s a lot of nuance to it but, yes, Yngve, if you create a corpus or we can create a corpus of your text, it’s absolutely possible for you to have an AI that replicates your voice in some strange way.
-
If somebody needs to ask the questions or direct the poems in a way, I wonder if it would also be done in that way. If I post…
-
I wonder if we should make a decision because Kurt is available to record our XRSpace interaction. Are we deciding that I should tell him we’re going to postpone that? Or should we try and…
-
It depends whether Yngve already has an avatar now. From what I have heard, it’s not even set up yet, right?
-
Should I try to do it on XR?
-
You’ll have to, for example, install an app called XRSpace, pair your phone with the XRSpace headset and all that. It takes time is what I’m saying.
-
I think we should say that’s not going to happen today. Maybe this is a good moment, before we get further into the lottery poetry/bot idea with Ross, to finalize our last schedule. We have it scheduled for Sunday at 6:00, I believe.
-
That’s right.
-
Which is only a few days from now. Yngve has some conflicts coming up on his side, but I think there are a couple of other windows if we wanted to try and move that meeting a bit later to give, for example, Ross more time to prepare material.
-
I think you had another window or two, Yngve, that could work. Minister Tang, can we check with your schedule now, or do we need to have Zack on-hand for that?
-
I can speak for my schedule. If it’s not a Sunday, of course, it’s even better.
-
It’s better? OK.
-
It’s better. I never schedule Sunday. I made an exception for this.
-
Thank you. Yngve, you had some other possibilities?
-
I have to be in Berlin on the first of October. I have a plan with meetings then. After that, it’s hard. I don’t know how we can do it, but I’m down in Europe, in different countries, trying to do all my meetings by car. It’s like COVID times.
-
I will be in Berlin…It depends on how I drive down from Oslo to Berlin. I could be in Berlin the 29th, the 28th, and have full free days to do that recording. If we would do it on the 30th, that’s twice as much time as we have now through Sunday. That’s two more working days, which are weekend days. I don’t know. Some people don’t have weekends, some people do.
-
The 30th your time is the first of October my time.
-
It would either be the 1st in the morning or the 30th in the evening, so whichever…
-
The first, in the morning, is easier for me because it’s a vacation day, and I don’t have anything. If we schedule for the 1st and in the morning, I can be up as early as 7:00, if that works. I can also do afternoons.
-
If you’re in Berlin, that would be 7:00 PM my time and noon your time. That will also work if it’s in the 1st of October.
-
That would be the 1st of October, that would be the 30th of September for me?
-
No, it will either be 30th of September for you late in the night, or it could be also the 1st of October for you around noon.
-
Around noon. OK, let’s see. Better for me late at night, yeah.
-
OK.
-
Yngve, I think if it was 7:00 AM for Minister Tang, it would be 1:00 AM for you, which is quite late.
-
Yeah, it’s actually quite late.
-
OK. Could we do…I have a meeting at 12 o’clock in Berlin. It’s a physical meeting, so I look at the architecture. I think I’m finished at two o’clock. Is that 7:00 in the morning to you, 2:00? That doesn’t work that way, right? Two is…
-
If it’s in the afternoon…
-
That would be 8:00 PM.
-
…then it would be 8:00 PM for me. If it’s on the 1st, then it will work, but if it’s on the 30th, it will not. I have the entire day of 30th booked.
-
I could do on the 1st, I could do…OK, that’s a stretch. I think eight o’clock in the evening, right? That would be eight o’clock in the evening for you, OK.
-
That’s fine.
-
That would be 2:00 PM for you, which is a little close to your meeting. You can maybe have a slightly early start or so.
-
I can stay longer into the night too. I’m fine. If it’s on the 1st, I’m fine the entire day.
-
What about 3:00 PM for you, Yngve, on October 1st?
-
And 9:00 for me.
-
3:00 PM is OK. I could do 2:00 PM. 3:00 PM would be perfect, yeah.
-
OK. Basically we meet at the same hour, like today, or an hour earlier?
-
Same as today, or earlier if you want.
-
Same as today. Maybe half an hour earlier if that works for you.
-
Yeah.
-
OK, so 8:30 PM for me.
-
Yeah.
-
OK, I can do that.
-
Yeah. Do you want me, if I then could do even half an hour earlier to let you know, or should we just…It’s easier just to say 4:30. Then it’s safe.
-
Yeah.
-
[laughs]
-
Are we talking about 3:30, Berlin time?
-
2:30.
-
2:30.
-
I thought Yngve was saying that 3:30 would be easier for him, which is like today but half an hour earlier…
-
Oh, it’s so complicated. I never get this right. [laughs]
-
If you can do 2:30, that’s actually preferable, and still that would be 8:30 for me.
-
Yeah, I can do 2:30, no problem.
-
OK.
-
Very good.
-
OK. I am sending you a calendar invite just to make sure that we are all on the same page. That’s probably a good thing to have done at the beginning of this, but now that we are toward the end, it doesn’t hurt. I have sent both of you the invite.
-
Thank you for doing that. That would have probably helped all along, for sure.
-
That’s right.
-
Norway time. Great.
-
(child speaks)
-
Nico, I’m on a phone call. [laughs]
-
(off-mic speech)
-
…where were we? I think I’m going to tell Kurt now actually that we’re not going to make it into XRSpace today.
-
(background sounds only)
-
Oh, I just realized I muted myself. I’m interested in the question of whether things in XRSpace can be scriptable, because that might bring these two ideas together, if it’s possible to create a bot in XRSpace.
-
I have a feeling that bots create this potential for abuse and for bad language. That might not be ideal for XRSpace, because they can somehow absorb things in their training that might not be good for content purposes.
-
Mm-hmm.
-
I’ll ask the question anyway.
-
(background sounds only)
-
Here’s just something completely different. There’s this one sculpture in Taipei, which is kind of like my favorite in the world.
-
Oh, yeah, the Meat-shaped Stone. We have a pretty good 3D scan of that.
-
You have pretty good 3D scan of that. Maybe that’s the one with me.
-
That’s right.
-
Maybe we can look at that too.
-
Yeah.
-
I don’t know why. [laughs] It’s a cool object, huh?
-
That’s right. It’s even in the Nintendo game, the “Animal Crossing.” They offer an Animal Crossing version of that.
-
Oh.
-
Yeah. I’m trying to find a good-resolution version, but there is none. I’ll just paste a random photo here. Basically, the Animal Crossing community in Taiwan gets a free license from the National Palace Museum to carry that open data from the Palace Museum.
-
(laughter)
-
If you’re interested in the 3D scans that are Animal Crossing resolution, here’s the open data part, right? [laughs]
-
[laughs] OK, wow. It’s in the back there, right?
-
Yeah, that’s right.
-
(laughter)
-
Oh, yeah.
-
I saw the one next to it, but I didn’t see the meat one when I was in Taipei. I’m so disappointed in myself. How could I have missed that?
-
Now you can import it to XR and look at it all day.
-
(laughter)
-
I don’t know what you do with it all day though, but it’s there for you to look at. [laughs]
-
Very cool. [laughs]
-
Cool. Should I see if we can get Ross on the line now?
-
Yeah. I think it’s a good idea.
-
I think it’s time. OK, give me just a moment. Let me give him a quick call.
-
Mm-hmm.
-
(background sounds only)
-
[laughs] He is muted there.
-
Yeah.
-
Kind of cool.
-
I think he’ll join us through Skype.
-
Yeah. Michael muted himself. That’s good.
-
Yeah.
-
Nice.
-
(background sounds only)
-
You know more about how these things work than me. I’m really new to this. We understood from your mail that one part of the set was something that you actually used for what you call the format, or structure, or how it’s going to look, right?
-
That’s right. There’s the content and there’s the style. It’s technically called style transfer.
-
Yeah. That’s like four lines, five lines, or like…
-
Exactly.
-
I see. Two columns, one column. That’s graphic design. It’s just column A…
-
That’s right. It’s the same thing. It’s the same thing.
-
Yeah.
-
We get to practice our lip reading, how I try to read Michael’s lips. [laughs]
-
How are you?
-
(laughter)
-
Ross will be on in just a moment.
-
OK.
-
Yngve, you had sent a number of parameters, different topics you wanted the Virtual Minister to talk about, I think. I’d shared those with him. We had a chat last night about where he’s gotten up to. I think he was speccing out different approaches to how to train something.
-
Maybe one of our big conversations is about whether it’s something that can respond live, or if it’s something that can generate texts in a more synthetic way or something. I guess he’ll give us more information when he is on in a moment.
-
Yeah.
-
I like the idea of the two bots being in conversation as continuation of…I wonder if actually we could use the corpus of this conversation.
-
Yeah. I actually did that in my previous job, by the way. Like, literally. There’s an Easter egg that I put in at my previous job, and for all the customers that installed that piece of software, there’s a special endpoint, not advertised anywhere, which recreates it. If you refresh, it generates randomly again.
-
It’s based on the real conversations that we had when making the product together, in the memory of Ken Pier. When he passed away, my way to mourn him was just to get all the IRC chat logs and generate this virtual conversation. If you refresh a sufficient number of times, you’ll probably see my name, which is Au in the chat room, but you will also see our project manager, Adina Levin, our fellow developer, Luke Closs, our fellow QA person, Matt Heusser, and so on.
-
It’s almost like we’re making the product over again. It’s in all the Socialtext instances all around the world… This is the first time I’ve said this on record.
-
Wow.
-
Was that repeating the exact scripted conversation or was it generated from…
-
Well, if you keep refreshing. That was actually my first deep learning project, using just a character-level RNN when it came out.
-
It’s actual stuff that people wrote to each other. It’s not generated. When you refresh, it takes you to different pages in the project. 2007 jumps to 2008.
-
That’s right. That’s not our actual conversation, but it could have been.
-
Oh, it’s generated.
-
It’s machine generated. It’s our avatars talking to one another.
-
I see. It’s based on what you all talked about, but then it’s just randomized like that. I see.
-
That’s right. There’s no trade secret being divulged.
-
Exactly. It’s just a feeling or the…
-
The atmosphere of working.
-
That’s nice. [laughs] You can read on for a long time. It’s cool. What is an Easter egg? I don’t know anything about programming. Is it…
-
An Easter egg in software is a function that’s not part of its specification. When our customers bought the software from Socialtext, they were buying something like Slack. It’s productivity software. Nowhere in its menu or in its product description does it say you can actually generate some lottery poetry.
-
It’s off menu. That’s what an Easter egg means. It’s also harmless, because it doesn’t do anything bad to the product. It’s just unadvertised. One of the more interesting Easter eggs used to be in the 1997 version of Microsoft Excel: you could actually open a flight simulator.
-
In Microsoft Word, you could open a pinball game, and so on. There are games embedded in those productivity programs that you can trigger if you know the right commands.
-
I see. It’s beautiful. I like it, the Socialtext Easter egg. Ken was the person who passed away?
-
Ken Pier was the person who passed away. It’s my way to remember him.
-
(pause)
-
Minister Tang, I know that GPT-3, the OpenAI model, is considered a gold standard for machine learning right now. When I spoke with Ross, one of the main topics of our conversation was that he had a number of other possible reference…
-
That’s right. It doesn’t have to be GPT-3. That was just to say that if you want photorealism, something that is passable, like passing-the-Turing-test stuff, GPT-3 is where it’s at. For the chat log that you’re looking at right now, that’s just elementary software that anybody can run on a personal computer.
-
A more simple approach could also yield good results for something like this.
-
That’s why lottery poetry, to me, makes more sense than, say, a five-page essay, because for an essay to work, it has to be self-consistent. If it’s a short poem, the consistency is in the reader’s mind.
-
Hello. Can you hear me?
-
Yes. Hello. Greetings. Good local time.
-
I really can’t…Great. Hello.
-
Hi. [laughs]
-
Sorry, it’s early here. It just got light. Can everybody hear me? I couldn’t hear you for a second there.
-
I can hear you just fine.
-
I can hear you, too.
-
Great.
-
Mic is muted. Oh, no.
-
I’m so sorry. I just woke up. How late am I? I hope you weren’t waiting for a long time, but I apologize if you were.
-
We were just talking about these matters, Ross.
-
Great. Nice to meet you all on a screen as we do everything in America these days, wherever you are.
-
So glad you could join us, Ross. Let me give you a quick recap of where we’re up to with things, OK?
-
Please, yeah.
-
Yngve and Minister Tang are in the midst of their Seven on Seven collaboration, where we pair artists and technologists to make something new, as you know. Conversation has been a big theme of their collaboration, and also poetry. Let’s say that those are two things that have been talked about a lot, because they relate to both of their practices in different ways.
-
Along those lines, just to give you a bit of background, we have talked about a couple of interesting concepts that might relate to a project, ultimately. One is the idea of lottery poetry, which is a randomized poem that tells a fortune in Taiwan, or in a kind of Buddhist or Taoist tradition. That’s something that was maybe one point of inspiration.
-
We’ve also been talking about trying to meet in virtual space through the XRSpace headset.
-
That’s great.
-
There’s been some idea of virtualizing characters. It’s been a theme of the conversation so far.
-
In relation to both those things, the idea of trying to synthesize one or more of us through text corpora has become an idea we want to explore more. That’s where I reached out to you and sent you some of Minister Tang’s texts to start with.
-
What we want to do with that is something we should discuss. Maybe at this point, we should leave to the side questions about the technical process that we would use, and just talk a little about what we want to achieve with this and what you think some possibilities might be. You only had yesterday, really, to get on board with understanding the corpora.
-
I have a pretty good sense of it now. Sorry, go ahead.
-
I guess we want to explore either generating texts in the style of this corpus or even potentially having conversational entities that you could interact with.
-
There’s a lot of interesting possibilities in that arena. I guess now that I have a better idea of the media that the output’s going to exist in, can you be more specific about the XRSpace headset thing that you mentioned?
-
That was mostly because of the corpus. Is it that you’re using that as an example of something, a place this could go, or…?
-
It’s the example at this point, but it’s something that…
-
Got it.
-
We all have the XRSpace headset now.
-
What’s the venue and the intended audience for this? Is it just worldwide on the Internet? Is it a specific location?
-
We’re going to be recording a conversation amongst the three of us. In the conversation, we will demonstrate some sketches or ideas potentially towards a larger project.
-
This wouldn’t exist as a live piece of software for people to interact with online at this point. It might develop into that in the future as we talked about.
-
What we’re interested in is something that we can use as a sketch or demo of an idea, potentially. Even if it’s just to say we tried this and it didn’t work out, we would want to give an example through it.
-
You mentioned different entities. Is that the desired direction here, that it’s like we’re showing…I don’t know. I really like the idea of working with someone who’s alive in order to represent different voices that they might write with or different types of deconstructions of a person’s writing.
-
When you’re dealing with an individual, it’s quite different than dealing with, let’s say, a group of people who are all writing together or who are all writing separate work. That’s how documents are studied in the field I work in, I suppose, to the extent that it’s a unified field.
-
We don’t think about the fact that Wikipedia, which is what GPT-3 is mostly trained on, or the Internet as a whole, is not an individually authored thing. When you reduce the number of authors, quantitatively, you end up with an interesting opportunity to…A lot is said about the way that crowds can speak, and a lot of work focuses on crowds in different ways.
-
There’s a lot of visual paradigms and conversational text design paradigms that have to do with speaking to an archive. Flipping that on its head in this case is probably the best way to go, because you have relatively…Even Noam Chomsky, who’s the most published author in English: you really can’t train a neural net with just his work.
-
A neural net as big as GPT-3 requires the work of millions of human beings. I guess the thing is that I would love to think about how to elevate.
-
Have you seen the movie “Inside Out,” or is that a popular movie where you are, with different voices inside their heads?
-
Of course. I refer to that movie all the time when I explain the humor over rumor strategy of disinformation fighting.
-
This is just an example. I haven’t actually seen it. I’m just familiar with the overall concept of the film. I guess if the idea is to associate the individual with influences, that could be really interesting.
-
I’ve been thinking a lot lately about the cognitive science parallels of the type of deep learning stuff that’s happening right now. It’s very interesting to consider the ways in which we, as individuals, are certainly more complicated than GPT-3.
-
The question that I asked the folks I know at OpenAI is, what about GPT-300? What about GPT-3000? At what point does it become an entity that you’re talking with on an equal footing?
-
The design that reaches toward that reverence of the model or the thing that’s speaking can be a little bit treacherous when we’re dealing with one person’s voice. What I would say is that there’s some interesting routes in terms of visualizing data that have to do with isolating influences and bodies of work.
-
Think about it as if GPT-3, or a model in that style, is like a tree and, at the top of the tree, you have all the work on Wikipedia and the Internet that filters down into GPT-3 or whatever. I’m using GPT-3 as an example. Then you have one person’s work, or this corpus of influences, for example.
-
I would like to do something like clustering on it, to identify labels within the subject, within the corpus, prior to doing topic-based generativity in a particular mold that’s in the spirit of it.
-
I guess the question is whether we could condense or expand this work. Are we trying to remix? Are we trying to create an alternate version of it? Or, are we trying to expand it?
-
The goal, obviously, isn’t just to create a walking, talking humanoid robot of any kind. It’s like what’s the design vision beyond…I’m sorry, it’s very early and I’m still brewing coffee.
-
Go ahead. Interject. Jump in.
-
What you’re saying is synthesizing a textual entity through machine learning is limited because the entity is really being driven more by Wikipedia than by the fine-tuned data of, for example, Minister Tang. Is that what you’re saying?
-
Yeah.
-
Is that different when you do a…
-
I apologize for going too deep about it. I was trying to stay on the surface, but it’s hard.
-
Is it an advantage in that case?
-
Do you all mind I brew coffee while we’re talking? I really need some coffee.
-
Please. Go for it. We all want you to have coffee.
-
I hope you don’t mind. It’s going to leave me a lot more coherent.
-
Go ahead and brew while I ask questions. The problem is that when you use machine learning or GPT-3 or whatever, you’re bringing in all these other voices. They might be voices that are actually really inappropriate.
-
Exactly. It’s like if you’re using GPT-3, then you’re putting a big Elon Musk stamp on the project, essentially.
-
Is that problem less if you use something more simple? You mentioned Markov chains in our previous conversation.
-
It matters much less, empirically. I would say the same of LSTMs; BERT is associated with Google. The truth is that there are versions of these on GitHub that are not…GPT-3 is the brand-name one. It’s all marketing.
-
What you’re really looking for…Part of what I was telling Michael is that part of the marketing of these big models being sold to business interests is like, “Oh, we’re not really going to give you a named technology; we just hope that our name becomes like Kleenex.” It’s really annoying, actually. That’s what’s happening.
-
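As an aside on the simpler route raised just above: a word-level Markov chain learns only from the text you feed it, so no outside voices leak in. A minimal sketch, where the file name and chain order are illustrative assumptions, not anything from this session:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each tuple of `order` consecutive words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=40):
    """Random-walk the chain, restarting whenever we hit a dead end."""
    state = random.choice(list(model))
    out = list(state)
    for _ in range(length):
        followers = model.get(state)
        if not followers:
            state = random.choice(list(model))
            followers = model[state]
        out.append(random.choice(followers))
        state = tuple(out[-order:])
    return " ".join(out)

# corpus.txt is a placeholder for whatever conversation corpus is used.
with open("corpus.txt", encoding="utf-8") as f:
    model = build_model(f.read())
print(generate(model))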
Just before you joined, Ross, I was sharing my first project with a character-level RNN, from about five years ago, to mourn my coworker at my previous job, Socialtext. All I did was put in all the IRC logs of us working together.
-
The second link is an Easter egg that I put in the product. If you keep refreshing that endpoint, you will see an endless stream of IRC conversations modeled loosely after the actual IRC conversations we had with Ken.
-
I took all the IRC logs and then focused the character-level RNN only on the parts where Ken was speaking and the surroundings, the contexts that prompted Ken Pier to speak.
-
You occasionally see my handle and the other co-workers’, but the focus is on Ken, and also Mac Ken, which is the nickname Ken used when he was on a Mac. It doesn’t have any knowledge of Wikipedia or anything like that, because it’s a small corpus and it only has character-level linkage.
-
Still, the timestamps and everything generated are quite convincingly IRC. That’s the kind of self-sufficient model, with what is now five-year-old technology, that we can reliably get just by staying within the corpus.
-
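For readers curious what that character-level RNN approach looks like with current tooling, here is a minimal Keras sketch. The file name, window size, and hyperparameters are all illustrative assumptions, not the original Socialtext code:

```python
import numpy as np
from tensorflow import keras

# irc_logs.txt is a placeholder for the chat-log corpus; the model sees only this text.
text = open("irc_logs.txt", encoding="utf-8").read()
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Cut the corpus into overlapping windows: each sample is seq_len characters,
# and the target is the single character that follows.
seq_len, step = 40, 3
samples = [(text[i:i + seq_len], text[i + seq_len])
           for i in range(0, len(text) - seq_len, step)]

x = np.zeros((len(samples), seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(samples), len(chars)), dtype=np.float32)
for i, (window, target) in enumerate(samples):
    for t, c in enumerate(window):
        x[i, t, char_to_idx[c]] = 1.0
    y[i, char_to_idx[target]] = 1.0

model = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(seq_len, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(x, y, batch_size=128, epochs=20)

# Sample a new "conversation" one character at a time from a seed window.
seed = text[:seq_len]
generated = seed
for _ in range(400):
    x_pred = np.zeros((1, seq_len, len(chars)), dtype=np.float32)
    for t, c in enumerate(seed):
        x_pred[0, t, char_to_idx[c]] = 1.0
    probs = model.predict(x_pred, verbose=0)[0]
    probs = probs / probs.sum()  # guard against floating-point drift
    next_char = chars[np.random.choice(len(chars), p=probs)]
    generated += next_char
    seed = seed[1:] + next_char
print(generated)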
I was just telling Michael that if you don’t want to bring in GPT-3 or BERT, that’s entirely fine. We can use even our preparatory conversations as a small corpus and still generate quite coherent stuff.
-
That’s great. I love AI and working with AI interests in these tech companies. I have a lot of friends in tech now. It’s not that I have an antagonistic relationship with them in any sense. It’s more that I like working under the umbrella of computational creative writing more than I like working under the umbrella of artificial intelligence.
-
When I can just call it computational creative writing and not AI, the audience doesn’t have expectations that are unrealistic for like, “It’s a walking, talking person and it serves you drinks” or whatever. That’s like the Boston Dynamics vision. Go ahead.
-
Go ahead, Yngve.
-
The beginning of this conversation was that we were talking about the magazine I was doing, which is called ETOPS, which is like a Q&A anonymized magazine on one topic. Audrey said something about how we could just generate the magazine and very quickly fill out those pages.
-
My interest was in her job description. It’s so accurate and so distilled, and not cumulative…which somehow is also what ETOPS is. It’s actually more about going to a place, meeting the person, having the conversation and just…
-
Got it.
-
…distilling it down so you get the essence, anonymizing only because certain interview subjects that you will have will have been talking about…The last one was about neuroscience, and then you talk to a famous person at the Max Planck Institute in Germany. The interview you got would be the same interview that he would have given 20 times already.
-
That’s an interesting idea for an XR thing we could try. Now I have a better sense of this being a conversation-driven thing, primarily, as the concept, because that’s sort of what I was looking for: “OK, we’re trying to represent a conversation.”
-
I think what’s required here is more deconstruction, perhaps. That’s just an idea I have, sort of in the way you might present…I’m digressing too much, but what I guess I was going to say is that this…Do you watch the show “Rick and Morty”?
-
They do a scene that could be represented interestingly…You know the Unity episode, where Rick is walking down the street talking to everyone, and it’s all the same person?
-
What if it was a group augmented reality experience where…And this is another idea. I really don’t like headsets when you don’t have to use them. I think that we’re overly reliant on screens, and especially during COVID. If the plan is to make this on the Internet, then obviously, it has to be on a screen.
-
If the plan is to make this post-COVID, in order for people to gather…I went to this beautiful skeleton of a performance once in Amsterdam. It was just more music-based and dance-based than conversation-based. Everyone had these headphones that were all synced up.
-
I’ve always wanted to do something with that medium, and I think it could be done remotely or in person, where everyone wears headphones and there’s a synced set of conversations where they’re paired by a spread algorithm.
-
There’s a third person in each conversation that is the individual we’re working with, his writing or his conversation. Basically, there could be a third voice in the conversation. Then I think how that voice works can be sort of opaque.
-
Really, the approach that I would use, which I talked about with Michael if you didn’t hear about it, is something called Tf-IDF summarization scoring. I would literally use sentences verbatim and insert them in smart ways into the conversations, but make the conversation pairings the AI part of it.
-
We can do some generativity on top of that if there’s a desire to have poetry or something original that is said. But grounding it in quotes is going to be important to represent the individual well in this case.
-
What I don’t want is for the AI to be kind of an expressive system that doesn’t represent the seriousness of the person behind the words initially. It’s not about necessarily that they’re serious words all the time. It’s more about the AI can misrepresent tone very easily.
-
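A rough sketch of the tf-idf scoring Ross is describing, using scikit-learn: score every corpus sentence against a conversational prompt and splice the best matches in verbatim. The file name and prompt here are assumptions for illustration only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# sentences.txt: one corpus sentence per line (placeholder file).
with open("sentences.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

vectorizer = TfidfVectorizer(stop_words="english")
sentence_vectors = vectorizer.fit_transform(sentences)

def best_quotes(prompt, k=3):
    """Return the k corpus sentences, verbatim, that best match the prompt."""
    prompt_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(prompt_vector, sentence_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [sentences[i] for i in ranked]

for quote in best_quotes("what is it like to fly economy class"):
    print(quote)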
I think what you said just now, Ross, is that it’s possible to do a slightly hybrid model where it’s largely remixed. But then maybe…
-
(child cries)
-
Oh, are you OK? Come here.
-
Is this your child? Oh, my gosh.
-
(background noises)
-
Yeah, so I think that a hybrid approach could be good. Maybe we have to make some decisions about the format that this would occupy, on our side.
-
Maybe that’s a good point to let you go and make coffee, and get ready for your call with Germany. We can pick up the conversation with just a quick…
-
There’s 10 minutes. We already had a prep for it yesterday, so I can keep talking for another five, just so I have a little more direction here.
-
Do we want to delve into this more, Ross, or do we want to quickly…
-
Yeah, I could also drop off if you want to settle the question of what the media is. The venue and the audience, I think, is a really important question. Physical versus virtual is totally different.
-
I’m sure that XRSpace will be happy to just make this a permanent exhibit, because they will get more users. Of course, few people in the world currently own an XRSpace headset, so it would be a really privileged exhibition. We’re looking to present the primary exhibition as something that could be easily projected in places, that is to say, probably as a film-ish thing.
-
That’s amazing.
-
I’m telling what I heard from Yngve last time.
-
One idea I’ve been throwing around a lot is the idea of a movie that’s very long or possibly never-ending that anyone can be in if they just log onto a website and wait in line. It would be funny if you could do a version of that as a conversation with the minister.
-
Sorry, what were you going to say?
-
A 10-year conversation, like a movie that’s exactly 10 years long.
-
Yeah, some magical realist amount of time. I’ve wanted to do something like that for a long time. That’d be very cool.
-
I did like the idea that someone had a few minutes ago of it being this conversation replicated endlessly.
-
(laughter)
-
That’s right. We just take 10 years to prepare for 7x7.
-
That’s a nice way of describing it. I like that. I also like the format of the lottery poetry because it’s like in blocks, right? You have these lines, it’s ready, the next one is ready, the next one is ready. It continues, it’s cumulative, but not in the…just open version it’s like…
-
It goes more into what you talked about, minting something in the digital realm. How do you fix something? Or, like a building block that you can’t take away. That’s what I like about the lottery ticket format, the poetry.
-
It’s stochastic.
-
Yeah. The way I talked about doing this before, the tech is pretty simple. There was a question we reached. The first question is, “Is it in real time?” That adds complexity. The simplest way to do it is to have a system by which people are uploading clips and then constructing it, but that’s less interesting. You lose some glitz by losing the real-time aspect. “Is it real time?” is one question.
-
The next question is how much context each speaker gets. That was where I got stuck on this before. In the Web interface that everyone sees, which maybe is different for every individual person, which is pretty easy to do, there’s a bunch of unique URLs. That’s the simplest way to do it. You could have an actual Web app of some kind as well, with a framework. That takes a little longer.
-
The main thing is that everyone gets their own version of the site, essentially, or everyone gets the same version. When Oscar Sharp and I discussed it, we realized it sort of changes the dynamic, because then everyone can see the script in real time as it goes by.
-
It’s just different if you only see your lines versus you see everyone’s lines, that’s what I’m saying, or something in between. Then the question also is how far into the future do you get? Are you getting the whole script in a window, are you getting the next page, are you getting the next few lines, the next few words?
-
It’s a question of how much text to give to the person. It’s a pretty simple UX kind of…
-
Which person do you mean?
-
…thing. The user, the person who’s going to be experiencing this installation, or whomever, the audience.
-
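Mechanically, the "everyone gets their own version of the site" idea is simple; here is a hedged sketch as a minimal Flask app. The script source, pacing, role count, and route are all invented for illustration; it only shows how one unique URL per viewer can yield a per-role window into a rolling script:

```python
import hashlib
import time
from flask import Flask

app = Flask(__name__)

# script.txt: one line of the generated script per line (placeholder file).
with open("script.txt", encoding="utf-8") as f:
    SCRIPT = [line.strip() for line in f if line.strip()]

START = time.time()
SECONDS_PER_LINE = 5   # pacing: how fast the "movie" advances
LOOKAHEAD = 3          # how far into the future each viewer may see
N_ROLES = 2            # alternating speakers in the script

@app.route("/viewer/<viewer_id>")
def viewer(viewer_id):
    """Each unique URL is pinned to one role and sees only that role's lines,
    from the current moment up to a few lines ahead."""
    role = int(hashlib.sha1(viewer_id.encode()).hexdigest(), 16) % N_ROLES
    now = int((time.time() - START) / SECONDS_PER_LINE) % len(SCRIPT)
    window = [line for i, line in enumerate(SCRIPT)
              if now <= i <= now + LOOKAHEAD and i % N_ROLES == role]
    return "<br>".join(window)

if __name__ == "__main__":
    app.run()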
Right, but it might be that the script is recited by some sort of entity. It might not be…
-
It’s a performance?
-
Maybe a bot performance.
-
If it’s a performance, then it’s a performance. [laughs]
-
Back in the…
-
Or text-to-speech.
-
…Easter egg, I generated 20 to 30 lines of chat log at a time precisely because that seems like the right amount of closure for each random lottery. It’s around the right number of lines to fit something on the page too.
-
I would be happy with a window in time, just like with the timestamps. That’s actually quite easy, because it implicitly takes you back. We can also use the same technique, except it takes us forward to the future. You get to see a future version of Audrey and Yngve talking endlessly about possible topics to explore on 7x7 for the next 49 years or something like that.
-
Each time you roll the dice, you see a snippet through that window. You can, of course, also prime it with your own theme and so on. I would suggest keeping it very limited: you just contribute one sentence to the conversation between me and Yngve, which we will treat as if you have introduced a conversation topic. Then we have some conversation around that, slash poetry.
-
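One "roll of the dice" as described here is just a fixed-size window of generated lines under synthetic future timestamps, primed by the visitor's single sentence. A sketch under those assumptions, where generate_line stands in for whatever model is actually used:

```python
import datetime
import random

def future_timestamp(years_ahead=49):
    """Pick a random moment within the next `years_ahead` years."""
    seconds = random.random() * years_ahead * 365 * 24 * 3600
    return datetime.datetime.now() + datetime.timedelta(seconds=seconds)

def snippet(generate_line, seed_sentence, n_lines=25):
    """One lottery draw: a timestamped window of generated conversation,
    primed with a single topic sentence contributed by the visitor."""
    t = future_timestamp()
    lines = [f"[{t:%Y-%m-%d %H:%M}] <visitor> {seed_sentence}"]
    speakers = ["audrey", "yngve"]
    context = seed_sentence
    for i in range(n_lines):
        t += datetime.timedelta(seconds=random.randint(20, 180))
        text = generate_line(context)  # model call is assumed, not specified here
        lines.append(f"[{t:%Y-%m-%d %H:%M}] <{speakers[i % 2]}> {text}")
        context = text
    return "\n".join(lines)

# Trivial stand-in generator, just to show the shape of the output:
print(snippet(lambda ctx: f"(generated reply to: {ctx[:40]})", "possible topics for 7x7"))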
That makes a lot of sense. If it’s performance-based, the vision is yours, essentially. I’m here to assist in whatever capacity. It’s hard to say.
-
I’m running into roadblocks in my own work, I have to be honest, at certain points these days, because the technology is moving so fast now with respect to language modeling that, as soon as one thing arrives, it’s almost like the next thing is already being talked about, well in advance of the first thing being available. That happens over and over again.
-
We’re past peak AI in a way this time. That’s really kind of startling. I don’t know. These are interesting things to think about with respect to interface. I’m sorry if I’m getting too abstract here.
-
Yngve has something to say.
-
I’ll stay calm. [laughs]
-
I think we should get more abstract, actually. I think we’re almost too much on the format at the moment. I come from a very classical perspective. I work with a very, very slow medium. I work with glass and materials, high-tech stuff.
-
Yeah, very slow.
-
I don’t know if we’re even going to be on the panel, me and Audrey, talking about the project. I think it’s almost going to be a prerecorded thing. I also don’t think it needs to be a finished project. Just going back to the poem: what interests me about the poem that was the starting point is that it’s basically a job description as a poem.
-
It’s like a misuse of format in order to say something very precise but also keep it open, which is very special. Printed, it could exist…basically the alphabet together is the materiality. Whether it’s a performance or, in one year, a printer prints it out or somebody recites it, that’s up to time.
-
What I understand as format now, talking to Audrey, is that in order to achieve a result, we have to decide a format in terms of how many lines or how big the thing is going to be. Is it an A4 page? Is it three lines or four lines? The structure of the thing. Sorry if I use the wrong words here.
-
For me, at this point, it’s a question of where the boundaries are on this knowledge graph. The way to think about working with a language model is that you’re never just working with a language model. It’s always embedded in a system of some kind, even if it’s really simple.
-
A chatbot, for example: that’s not just a language model. It’s also a UI. It’s got a visual element, probably, or literally just audio or whatever. If the deliverable is a document, that’s very simple if it’s a job description. My question as an artist is, what’s the presentation for the job description that you imagine in that case?
-
I don’t know yet. [laughs]
-
You don’t know yet. That’s what we’re discussing.
-
Do you have a vision for that? Can you speak to that at all?
-
Actually, I announced my job description as a tweet. It’s set in this Front Desk sans font. That’s the canonical presentation form for my job description.
-
OK. [laughs]
-
Yngve, go ahead. What were you going to say?
-
I understand more and more now that different corporations are creating AI, language tools, whatever you want to call them. What I don’t want is for us to create something which makes Audrey Tang just a scan of Wikipedia, which is super boring. It needs to be the most pure, crystal form of Audrey Tang. We need to have a conversation which is real, like the poem is real. It needs to come from a real place.
-
What I would say is, if you wanted to be really faithful, then it’s a question of whether the ideas represent…Work involving artificial intelligence can say a lot of different things. By default, it always says something about artificial intelligence.
-
My question is, is your vision here AI scary or is it AI good? I’m not making a value judgment. I’m asking you, by default, what’s the attitude about AI? Is it positive and playful or is it…
-
My default attitude is assistive intelligence.
-
Assisted intelligence, OK. That makes sense.
-
In the sense that we’re all blind and deaf in some regards when it comes to understanding a holistic topic. AIs are assistive just as assistive technologies are. Basically, it restores us to a shared reality for collaborative learning about human experience, through a plurality, on the Internet of beings. That’s what I mean by assistive intelligence. It’s also in my job description.
-
I really like that ontology. That’s a nice way of encapsulating it that’s refreshing compared to what I hear usually. Engineers have a hard time encapsulating what they’re working on sometimes, even the smartest ones. I would agree with that.
-
The other way I’ve heard it said, or the buttress to that I’ve heard from people more inside the industry, is that every augmentation is also an amputation. That’s the caveat to that worldview. You’re going, “OK, if we’re going to go trans-human, then where do we stop cutting things off?” If it’s true that…
-
I see where you’re coming from. To me, assistive brings, with a default, like when you’re caring for someone with different disabilities. The idea is to restore human dignity, not to take away their agency from them.
-
The two important things about AI, which are value alignment and accountability, go with this assistive objective. That’s what you would expect of a person assisting a disabled person. They need to be value-aligned to them, to their agency. Also, there needs to be accountability for the decisions made on behalf of that person.
-
I think assistive captures the twin aspects of accountability and value alignment, for me, when it comes to AI.
-
Absolutely. I always point to the example that the typewriter was originally invented as an assistive device. A lot of my work, camera stuff, has been called assistive in many ways because it’s assistive devices that are for everyone in a way.
-
That’s definitely a really cool worldview on technology. I think a lot of people are waking up to that worldview, at least most of the reality, if not the whole reality. Humans have always used tools. It’s a really old story.
-
Tool-making is what separates us from monkeys in a way, or whatever our primate ancestors were. It’s a blurry line if you look at the science. My dad’s a paleontologist, so he knows a lot more about it than me. I think that technology has always been embedded in our species.
-
The more woke we are to that, at least, the less likely we are to be harming each other with our technology and our pursuit of it. If we’re aware of what it is, a societal addiction to doing things better and better and better, certain contexts like AI represent very dark, old urges in the aggregate.
-
It’s like this idea that, “I want to be lazy. I want to have a robot do all my work for me.” Really, we have to be more collaborators with our tool systems in the end, no matter what.
-
Ross, can I say something here?
-
Yeah, go ahead. I have to go in a second. I’ll let you all talk. If you have one more, if you have anything else, tell me. I’ve got to go in five minutes.
-
The question of condensation of the corpus seemed like a crucial issue. I know that in Yngve’s work, the accumulation is less important than the distilling of something. I think a lot of machine learning, because of this training aspect…I’m just drawing out this dynamic, but I can’t finish my thought for some reason.
-
That’s all right. My coffee still is not finished. It took me five minutes to press the button.
-
We have Taipei… I know you’re in Taipei, but where are you again, Minister Tang, in Taiwan, I mean?
-
I’m in Taoyuan at the moment, at a social innovation summit, and, interestingly, using a 5G connection, because I don’t have the password for the hotel WiFi, but that’s fine. 5G is as good as fiber optics.
-
I was going to say, if you’re on a data connection, that’s really crazy. Yeah, wow.
-
In Taiwan, we have unlimited data plans. Anywhere in Taiwan, even on the top of Taiwan, almost 4,000 meters high, you have 10 megabits per second guaranteed as a human right, with unlimited data, for like US$16 a month. Otherwise, it’s my fault, personally.
-
I’ve never been. My close friend from MIT, his mother is Taiwanese, and he goes a couple times a year to visit her family and his family there. I’ve never been. I’ve always wanted to go with him. One of our friends who has a little more money than me does go with him to Asia sometimes.
-
I’ve only been able to go a few times, and the trips I’ve gone on with him so far were just to Korea. That’s it recently. I went to the Philippines a very long time ago. I’ve wanted to come back to somewhere on the continent. I’ve heard a lot of amazing things about Taiwan from my friend Dan, who is a close friend from college.
-
Ross, we’ll say good-bye and thank you. We’ll send some further notes. I’ll send a recap after we finish. Good day.
-
Thank you.
-
Good morning.
-
Bye. Live long and prosper.
-
Quickly to round up, [laughs] Yngve, why don’t you give us a debrief on where your head is at right now. We had an interesting concept with just continuing the Seven on Seven conversation as a synthetic entity. It seemed like we pulled back from that concept. Where are we now?
-
Where are we now?
-
You’ve got too much to think about, but…
-
(pause)
-
It sounds difficult to use those. It sounds easy and it sounds difficult. It has to be very clear.
-
I’m very afraid when I hear Wikipedia, and it’s just searching the Web to randomly take content. It doesn’t sound like the right thing.
-
Of course, I always get very alert when people start talking about formats, performances, and equipment in the exhibition spaces because it’s a very little…
-
To be clear, we’re looking for a format for some experiments, or one experiment, that we can discuss and assess, in a sense. It’s like a public experiment, not a finished work at this stage. I wanted to clarify that. Ross is obviously obsessed with the really deep structure of artificial intelligence.
-
If you create a character using GPT-3, it’s not that it says things directly from Wikipedia. It’s more like a style that comes across.
-
(child yelling)
-
Nico, I don’t have sausage, sweetie. I have cucumber. I have sweet potato. I have smoothie.
-
(child yelling)
-
She wants sausage.
-
I would love to be able to give Ross some kind of instruction to do some experiment. Then we can see what it’s like.
-
Yeah, we can do that. If I could add something, I really like the format of the lottery poetry. I just don’t know. I like it. I like even how it looks.
-
It’s very self-contained. I really like how portable it is.
-
I like that you could also potentially print it out and give it to somebody. I even like the fact that you have to pay for the interpretation.
-
[laughs] A consultation, right. The first one is free.
-
[laughs] I could go in that direction. When I looked at it, I also liked that some had two lines, some had four lines, some had five lines. That’s just a graphical thing which I liked: they’re bigger and smaller, and say more or less, but there’s a max or something like that.
-
I think that’s something. I don’t know what it should be, if it’s you speaking, or us asking a question and there being a question-answer type thing, or something like that. The question-answer doesn’t make sense, and maybe that’s the poetry in it.
-
It’s hard, but I would like the sort of thing that writes these to be trained on Audrey’s writing, and not to be an intelligence that gets a question and just swipes through Wikipedia to tell you what time it is or something like that.
-
We can limit the answers to my current transcript corpus, which already has more than 50k, 45k-ish, speech segments. That’s more than we need. If you read through all that I sent Ross, there’s more than enough. We don’t have to consult any other external sources.
-
In some way, it could be like, “Oh, there’s so much. I don’t even know how to read through it.” That was my first feeling when I got that link. It’s so much, I just jump in somewhere. I remember I spoke to my studio manager, and I said, “It’s just so confusing. It’s so much information.”
-
I tried to send her something that she should read, but then she was reading something completely different. We talked for about half an hour, reading different stuff, talking about different stuff. Maybe there’s a way for me to ask you questions.
-
Then again, I wouldn’t want my questions, or the simple questions that I wrote in May, which were just to kick-start something, to go somewhere. At the moment, I wouldn’t want that to be defining the whole programming. Then it’s programmed, and it’s a bit like, “Oh, I don’t know enough, if that’s the right way to…”
-
I have a lot of topics, not so many, 10, 20, something like that. I could feed them into something.
-
That will work. For example, if you have an input, through very simple math like the Tf-IDF Ross was just describing, we somehow decide “lottery” is the main topic of your input. Then it can very simply just run a search. If you click the link I just pasted to you, it will list exactly the many times that I mention lottery in my public speeches.
-
Then each of them can be seen as a kind of lottery poetry. If the machine draws one by chance, it may or may not have anything to do with the actual topic that you’re talking about, but at least it solves the voice problem, because it actually came from one of my conversations.
-
But it would be a copy-pasted thing. It wouldn’t be a new combination of things.
-
What I’m trying to say is that all the different times I talk about lottery are not, of course, about the same topic. In the link, I first talk about a receipt lottery, which is electronic. I talk about a sandbox application, which is like a lottery in reverse. If you run just a search and combine the results into a poem-ish thing, then it actually takes on a life of its own.
-
The challenge is how to make sure that it actually fits the lottery poetry format. The idea is that each word or each short sentence, if you click on it, takes you back to a word that I actually said. The link would be at word level, not at copy-paste level.
-
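A sketch of the pipeline being described here: keep the content words of the visitor's input, pull one matching transcript sentence per word at random, and lay the result out as a short lottery poem with a pointer back to its source. The corpus file format and stop-word list below are assumptions for illustration:

```python
import random
import re

STOP_WORDS = {"the", "a", "an", "to", "of", "is", "it", "in", "and", "how"}

# transcripts.txt: one sentence of the public transcript corpus per line,
# optionally followed by a tab and a source URL (placeholder format).
corpus = []
with open("transcripts.txt", encoding="utf-8") as f:
    for line in f:
        sentence, _, url = line.strip().partition("\t")
        if sentence:
            corpus.append((sentence, url))

def lottery_poem(prompt):
    """One line of the poem per content word in the prompt, each line a
    verbatim corpus sentence drawn at random from those containing the word."""
    words = [w for w in re.findall(r"[a-z']+", prompt.lower())
             if w not in STOP_WORDS]
    poem = []
    for word in words:
        hits = [(s, u) for s, u in corpus if word in s.lower()]
        if hits:
            sentence, url = random.choice(hits)
            poem.append(f"{sentence}  ({url or 'source unknown'})")
    return "\n".join(poem)

print(lottery_poem("how it is to fly economy class"))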
I say this with some experience, because when I was 20 years old or so, I would log into the Perl chat room on IRC, and I wrote a bot. Whenever it saw me type “river run,” it would just pull a certain paragraph randomly from Finnegans Wake and insert it into the conversation.
-
It’s almost like lottery poetry, because Joyce always has something relevant to say. It just pulls a random sentence out of Finnegans Wake. The corpus that you see here is roughly the length of Finnegans Wake, actually longer. If I can do that to Joyce, I’m sure that other people can do that to me.
-
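That “river run” bot is simple enough to reconstruct in spirit. The original was Perl on IRC; this is a generic chat-loop sketch with a placeholder file and trigger:

```python
import random

# finnegans_wake.txt is a placeholder for any long text split into paragraphs.
with open("finnegans_wake.txt", encoding="utf-8") as f:
    paragraphs = [p.strip() for p in f.read().split("\n\n") if p.strip()]

TRIGGER = "river run"

def on_message(message):
    """When the trigger phrase appears, interject a random paragraph."""
    if TRIGGER in message.lower():
        return random.choice(paragraphs)
    return None

# Minimal stand-in for a chat loop:
for msg in ["hello everyone", "where the river run meets the sea"]:
    reply = on_message(msg)
    if reply:
        print(reply)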
Sometimes in my work, topics are really banal. One topic is about the experience of flying economy class. If I could ask, could I get something out of it…
-
Of course. That’s the whole idea.
-
That would be super cool. [laughs] That would be very fun, actually.
-
How do we…
-
(background sounds only)
-
Guys, do you still hear me?
-
I still hear you. I’m just…
-
With no formatting whatsoever, if you just said “flying economic class,” and I just do, say, a search for the three words, because we were saying word-level links, the raw result will be like this, which I can read out: “I fly out Saturday early morning, something like that,” “in the free economic pilot zone,” “Exactly. Now, it’s just 15 or 20 per class, and that’s much easier.”
-
It’s not quite an answer to your question about flying economy class, but it’s really random sentences that somehow combine, just in a freestyle way, to talk about the flying, the economic, and the class. This kind of software, I’m sure Ross can very easily write.
-
I guess the first one, “I fly out Saturday early morning, something like that,” is copy-pasted as a whole, or is it just put together…
-
Currently, it’s copy-pasted as a whole. We can also decompose it so that it’s not a whole copy-paste, but rather a sentence level thing.
-
In some way, the project could be asking for topics and using all of your writing to then generate a certain poem.
-
Just for the record, here are the three links where the three lines came from.
-
I see. Now I understand.
-
That’s just for the record.
-
It’s good just for me to understand it. I’ve never spent time programming. I understand structure in different ways. [laughs] This is interesting. Cool.
-
Could we link this in some way to the XRSpace? Is this something that…
-
Yeah, we can read it out in XRSpace.
-
If it’s in the cinema, we could just watch…
-
That’s right. We can watch with a backdrop of pretty much anything. I can perform some of my answers, I guess, with my digital avatar. You can also ask, say through your avatar, “What’s it like to fly economy class,” and that will feel quite natural, like me and you having a conversation.
-
That could be cool. That’d be nice to have, because we are pre-recorded, and then we have a conversation in space. It’s already visual because we recorded it in XRSpace. That takes care of how it looks. We don’t have to think so much about that, except how we look as avatars, right?
-
That’s exactly right.
-
Mike Connor will present the project as a project in creation. We just have to see how we tweak it now so it feels right, feels good, and performs what we want it to. Maybe it would look like the lottery poetry on that screen. Maybe it’s pass-along.
-
We can definitely model that.
-
It doesn’t need to have an engine font like that.
-
(laughter)
-
It could be simpler. Cool. Is Michael still here, or did he leave?
-
Michael is still here.
-
Hi.
-
Hi.
-
Hi.
-
Are you good?
-
Yeah. I caught just the last couple of minutes. I was carrying around my laptop, but it wasn’t plugged in, so the battery died. It sounds like you have a way forward.
-
I guess at the moment, it’s just this one idea: because there’s so much writing, the poetry, the way of interweaving it, needs to happen by some artificial intelligence. The way that needs to be something…You can probably explain it better than me.
-
We just went through something where, if I would ask something, like how it is to fly economy class, then there are probably ways to use a word…What do you call it? The word prompt, the word…?
-
The prompt. The three words as individual prompts.
-
That goes through the database. Putting it together, we’d get three sentences out of it. We probably have to work on it so it looks different, because there are just three copy-pastes happening here, taking the three words together. Every word makes a sentence, I guess.
-
Is this going to be us doing something or no?
-
Is it going to give us something? I don’t know. I was thinking maybe it’s possible to be in XRSpace and just look at random poetry.
-
Of course, because there’s a browser in XRSpace where people can browse Web pages together. If we put it together as a Web page or as a video, we can watch it together.
-
I’m just searching for a way where we could access something like this online and use XRSpace as the place where we are with our avatars. That’s the visual thing that’s pre-recorded. Whether it’s a project yet, I don’t know, but it’s a conversation that happens. It has somehow…
-
Some parameters to get somewhere at that’s…
-
I’m just trying to make sure I understand what I need to do, but I can probably watch back your conversation up to this point and get my instructions from it.
-
That’s right. That’s the beauty of recording.
-
Maybe we’ll let you finally get to sleep, Minister Tang.
-
Sleep sounds like a good idea. [laughs] We can put it like that. We really do have something here that’s very doable and also fun, which is always the most important part.
-
It can be a very simple random number of pictures that we just watch together in XR, like those three lines formatted like a lottery poetry paper. That will be something. Generating that sort of thing from the corpus is well within Ross’ capabilities.
-
If we were, for some reason, to find backdrops or images to illustrate it, then I would have no problem searching the Internet for AI images, but the writing should be pure.
-
Should be pure.
-
Cool.
-
Are we good?
-
Do you want to meet, Michael, in that space at some point? I know you’re busy right now, but we could do it like…
-
Yeah. I’m available most of the day today. Just a little bit distracted right now, but then I’ll be OK again for a while.
-
I will go to sleep. We have something red — that’s not herrings — for once. Thank you for the collaboration.
-
(laughter)
-
I’ll sign out here. Cheers. Bye.
-
Take care. Bye.
-
Bye.