-
So, we got this graph from some people who work at a couple of the companies, and one option now, on the left, if you scroll over, is all the cruxes and assumptions about this… If you zoom in, there are all these beliefs that I wanted to explore with you, because these are, you know, key points of consensus.
-
You realize that there's actually a lot of consensus, like: eventually it will be very cheap to build very powerful AI. There can be a lot of people on many sides of the fence who agree with that statement. Or: a Butlerian Jihad against AI, where we just sort of declare, doom-style, that we're not going to build certain things, won't work, because it only gets the nice people to stand down and the bad people will keep going. Maybe people disagree with that because they believe there's some enforcement.
-
So anyway, you get the idea that behind this, we thought a lot of these sub-statements… these are the cruxes, the beliefs that people hold that enable them to believe in different endgame scenarios. And what we're trying to do right now is interviewing… In fact, we're hosting a workshop on August 19th and 20th with a bunch of people who work in the space, to sort of imagine what their endgames are, and also what beliefs sit inside their cruxes, why they think that's possible. I've gone kind of fast here, I don't know if this makes sense.
-
This is like where we were at this March, so we’re just recapping.
-
Yeah, well, it seemed like the beliefs that you were looking for consensus around in March were slightly different than these ones, but not that it has to be, because it’s obviously generalized.
-
Yeah, I think the consensus we got in March was similar to the open-source/free-software non-debate, right? I think it was Richard Stallman who said that even though the endpoints of open source and free software are completely different, because they're different ideologies, in the short term they do exactly the same things, right? So, that was the thing that we looked at in March.
-
While the "race to safety", let's call it that, the left column, and "shut it all down", the right column, may differ in their five-year or ten-year projections, it was quite clear, and that was very clear in March already, that in the short term there are things they both really agree on, right? So, nobody wants democracies to depend on black boxes. This is an obvious thing. So, interference with elections and democratic principles, that's a big no. And if actual harm happens on that front, whether biosafety or democratic safety or whatever, everybody wants it to surface as quickly as possible and to hold whoever caused it liable for it.
-
So, in the short term, in March, I believe everybody agreed about three things. First is that our current liberal democratic order is a good thing, and its processes should not be interfered with by those black boxes. Everybody's got to understand the harm it causes here and now; that's the first thing.
-
Second, when actual harm, or likely escalation of harm, is caused, people want the AI companies providing such services to be liable, and not just in a financial-damages sense, but also in an early forecasting, warning, and mitigation sense. So, this is the second thing.
-
And the third thing is that people don't want governments to disappear. They want sufficient regulation so that, in the short term, players who don't prove that they can race to safety can be escalated, over a time window, to being marked as bad actors maybe one or two years from now.
-
So, although in the midterm, like three or five years from now, things vary wildly, I think around March what we were seeing is wide agreement on what to do in the year leading up to elections, which is next year.
-
What was the third one again? People don’t want governance; they want special regulations…?
-
They don't want governments to disappear. So basically, there should be very clear guardrails that say these are the players that play the race-to-safety game, and maybe there's some peer pressure, maybe there's some naming and shaming. But at the end of the day, anyone who doesn't play the "race to safety" game can be clearly marked by governments as such, and sanctions or whatever measures you have in mind can be applied very swiftly after a certain threshold.
-
Got it. And then of course, the question becomes what constitutes being a player that is racing to safety? Because the letter of the law of safety will lag the spirit of safety as it needs to be defined, right?
-
Yes. So personally, I think during the pandemic, what distinguished cooperating players from non-cooperating players was simply how quickly they responded with measures that redress the harm, in a publicly transparent and accountable way.
-
So, you measure the time from when the harm surfaced to when the harm is mitigated. And even when an actor is slower to respond initially, you can see a good-faith effort. This is exactly like bug bounties and responsible disclosure and so on in cybersecurity. After a while, you very quickly see which companies respond in good faith to the white hat hackers and which ones do not.
-
So, once there is a culture of keeping a scoreboard of sorts, it is not difficult to tell good-faith actors from non-good-faith actors.
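A toy sketch of that scoreboard idea, in Python; the incidents and the response-time threshold here are made up for illustration, not an actual metric anyone uses:

```python
from datetime import datetime

# Hypothetical incident log: when a harm surfaced and when the actor mitigated it.
incidents = [
    {"actor": "Vendor A", "surfaced": datetime(2023, 7, 1), "mitigated": datetime(2023, 7, 2)},
    {"actor": "Vendor B", "surfaced": datetime(2023, 7, 1), "mitigated": datetime(2023, 7, 20)},
    {"actor": "Vendor A", "surfaced": datetime(2023, 7, 10), "mitigated": datetime(2023, 7, 12)},
]

GOOD_FAITH_DAYS = 7  # illustrative threshold, not an official standard

def scoreboard(log):
    """Rank actors by average time from harm surfaced to harm mitigated."""
    totals = {}
    for item in log:
        days = (item["mitigated"] - item["surfaced"]).days
        totals.setdefault(item["actor"], []).append(days)
    return {actor: sum(d) / len(d) for actor, d in totals.items()}

for actor, avg_days in sorted(scoreboard(incidents).items(), key=lambda kv: kv[1]):
    label = "good-faith" if avg_days <= GOOD_FAITH_DAYS else "needs scrutiny"
    print(f"{actor}: {avg_days:.1f} days average response ({label})")
```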
-
Got it. Interesting. So, but what happens when the harm is more structural? So, it’s not like I found this specific jailbreak method and there’s gonna be literally an infinite set of…
-
Oh yeah, yeah. I mean, that’s the same in pandemic, right? The virus mutates and there’s a new way of aerosol spray or whatever, right? So, the idea is not that you measure one specific harm to one specific representative person. This is more about collective intelligence mechanisms with continuous integration of listening.
-
As you know, in Taiwan, anyone who witnessed a new virus mutation or whatever, or even just suspected one, could call a toll-free line. And then we addressed these things, at most 24 hours later, in daily press conferences. Almost like a daily forecast, where you measure those harms in a collective intelligence way, and then say these are the companies that are willing to investigate together and so on, contact tracing or whatever.
-
Right, so I remember from our conversation, you were saying like, basically you’re creating… cause you have this number that people can report things to, and then you have the daily broadcast. So, it’s in our language, it’s like, she’s creating this little like Truman Show in which there’s a daily broadcast, a daily shared reality. Not the Truman Show, it’s a daily shared reality for everybody that they can optionally tune into that’s created by the government where they are going to basically do continuous…
-
Right, it’s like a weather forecast, basically.
-
Yeah, so it’s both what we integrated, the harms that we integrated, the bug bounties that we’re acting upon, and also the forecast of the things that we’re seeing.
-
Right, right, right, exactly. And to do that, and that's a dynamic that we didn't go too deep on, but I believe for Hugging Face and Meta, we need to paint them at this stage not as black sheep, but rather as important vehicles for people to be able to identify and reproduce those harms on their MacBooks.
-
Because at this current level, they’re not yet causing widespread societal harm, but if they don’t see a positive role they can enter into, then we have this internal division that is never good for spreading public messages.
-
Wait, I didn’t understand the last part. If they don’t see the…
-
If Meta, Hugging Face and so on don't see a way to contribute to the race to safety, then they might keep making public statements that decimate the legitimacy of racing to safety.
-
Then they will not be… Like their big letter that lots and lots of people signed, even some of whom we thought were our allies, saying, oh, actually open source is safe. This is an example of them decimating the race to safety.
-
Right, exactly, exactly. And of Meta not signing these statements, actually.
-
Well, no, I think there might have been a miscommunication. Aza was talking about when Meta released LLaMa 2, they actually gathered signatures of people to endorse the fact that these models and releasing them in an open way is a good thing.
-
Yeah, exactly, exactly. The bridge builds both ways, right? So, Llama 2 did a lot of innovation, especially for enterprise environments, on low-cost alignment and fine-tuning for alignment. And so, on this side of the bridge, people who signed that extinction-risk statement, including me, would go publicly and say it's a good thing.
-
And then in exchange, or reciprocally, the Meta team could say that this is part of the "race to safety", instead of "risks are overblown" or things like that.
-
I feel like they, I maybe didn't read this statement closely, but it does feel like Facebook is generally of the opinion that risks are overblown. Maybe I'm captured by their previous rhetoric on social media, where their whole business model is to deny the risks and be the Exxon of loneliness, denying that they're driving up loneliness rather than, yeah. Aza, where are you at in this conversation?
-
I was just thinking… The kinds of things you can make leaderboards for are generally acute, attributable harms. And then there's this whole other class, which of course is where all the harms will get pushed: chronic, long-term, diffuse, hard-to-attribute harm. So, I'm just curious about your thoughts on institutions that are good at spotting that kind of harm. And I still don't know…
-
You're saying from AI, so the equivalent of DuPont saying "better living through chemistry," and then basically we end up with PFAS that are literally invisible. Aza just got a full-body test, and I have done full-body tests, and you realize that you actually have this stuff in you even though you can't visibly sense it. And so how do we have institutions for that?
-
It got very, very personal when I was like, oh, I have very high mercury levels, arsenic levels, and glyphosate found in my body. And I'm like, oh, those externalities that I think of as just out there and abstract, all the things we talk about, are actually inside of me. It's very humbling. Anyway, sorry, I didn't mean to interrupt, Tristan.
-
No, no. And just to close out the thought, so you're getting the full picture, Audrey: in the Ken Wilber model of first person and second person, and then singular and plural, in the quadrant of the first-person subjective experience of a singular person exists phenomenology and human experience. In the first-person objective quadrant, we get neuroscience and fMRI readings from the outside. And so, you get the picture.
-
When we have externalities that are showing up in these other quadrants, like, for example, Facebook, or Instagram and its doom scrolling, doesn't make people feel very good, but that isn't measured in the systems; it's not internalized. So, as we're thinking about 21st-century institutions, with AI producing chronic, diffuse, long-term, cumulative, and generally invisible, piece-by-piece, death-by-a-thousand-cuts kinds of harms that you literally can't even measure on your own, or smell or taste or touch, but that are there, as AI threatens to create more externalities in many more quadrants, we need institutions that are also forecasting harms in those areas.
-
Right. I think here we probably need to make a distinction between harming the status quo, business as usual, the institutions and so on, that is to say, harm as in disruptive technologies, versus harm to some cherished, phenomenologically speaking, subjectively desirable, immersive experiences that people care about. Because these are sometimes confused.
-
Like, people say that synthetic media will threaten people’s trust in political expressions, and so nobody will be able to tell a campaign speech from a synthetic campaign speech, and things like that.
-
And it looks like that is threatening the status quo when it comes to campaigning, but what people are actually saying is that they previously enjoyed this capability of building personal connections around a social object, that is political speech around political figures, and now that feeling is being decimated, is being taken out.
-
So, I think if we focus on the latter, it is possible to surface that sort of harm, usually through ethnography, interactive ethnography and so on, and the question then becomes how we scale that sort of ethnography so that everybody can do it, and also how the results are meaningfully blended or aggregated.
-
But if we focus on the former, then it becomes just protecting the bureaucratic processes, the existing institutions and so on. And there are a lot of ways to do that, but I don't think that is where we should focus most of our energy.
-
I think I didn’t catch all of that. Aza, are you tracking?
-
So, let me simplify the argument. There are financial institutions, there are election institutions, there are nuclear nonproliferation institutions, and so on. These institutions are all very interested in protecting their existing processes. And so, the national security people already have ways to track generative AI harms within their purview, and ways to redress them. For example, if everybody can synthesize lethal chemical or viral agents, they know exactly which choke points, the synthesis labs and so on, to defend, right? So, they are experts in this.
-
In that case, there are choke points in a bunch of the fields, so like the number of people who have access to the DNA synthesizers or whatever in that hardware. So, you’re just talking about, so in domains where there’s an existing institution, and there’s a concern about an application of AI that might be dangerous, there are choke points that can be identified. Of course, the challenge is what happens when it moves to the realm where there’s less of a physical choke point, and it’s more in the realm of bits.
-
Yes, that’s it.
-
And revenge porn, and automating a bank run by saying, here are fake photos of people standing in front of banks with Wells Fargo and Citibank logos, completely decoupled from material reality.
-
Right. So, my point is that, no, we’re not saying that we don’t forecast those harms. We’re saying that we forecast it with the principle of subsidiarity. The idea is that these institutions are closer to the harms, so they should be part of this subsidiarity scheme where they’re empowered fully to address those harms.
-
But the more insidious unknown unknowns are the kind that you mentioned: the subjective, phenomenological, plural-subject harms, the harms that change the fabric of trust, change the fabric of society, and that don't currently have an institution in charge of them. And that's, I think, where we should focus our energy, through collective intelligence.
-
Yeah, exactly. Well, and so we use both philosophical concepts and institutions whose mandates did not include the increased dimensionality of potential harm. So for example, free speech versus censorship is a two-dimensional answer to a much-higher-dimensional problem of engagement-based ranking AI plus virality.
-
Exactly, yes.
-
And an example there that we take from Daniel Schmachtenberger… By the way, have you ever met Daniel?
-
No, I don’t think so.
-
Oh my God.
-
I know of Daniel, but not personally.
-
Okay, can I make that introduction?
-
Of course.
-
I know that you have such limited time, but that is a conversation that I think needs to happen. So, just to say, he'll use the framing that for speech it's not just about fact-checking; we need to ask whether it's true, truthful, and representative, because that deals with decontextualization, cherry-picking, stripping things of context, warping the context to make a fact land in a human nervous system and an epistemology in the way that's desired.
-
And we would care about whether the implications of this speech will lead someone to believe a true, truthful, and representatively true thing epistemologically, so we can reason about the consequentialism of speech even when people are trying to be truthful. And care about speech that uses one example of a person, decontextualized from a broader trend, these kinds of things, as one example.
-
But then the question is, we don't have an institution that is effectively a kind of ten-dimensional speech police. And not really speech police: they're the epistemology police, and they're less police and more like safety standards. It's more like an EPA, but for epistemology.
-
And we can care about making different distinctions about the kind of speech that includes a complexity and synthesizes multiple worldviews versus the kind of speech which does ad hominem attacks and denies other worldviews.
-
I don't know, I'm thinking about this in real time in the Ouija board of this conversation, but is there an institution in your mind that could be like an EPA for the epistemic commons, that makes distinctions about the kinds of speech that have better, cleaner, safer, purer epistemic practices, or more representative or synthesis-oriented epistemic practices? As an example, I'm playing around here. There's a lot to talk about with AIs too, but this is kind of our previous work.
-
So, in the document that I pasted to you, which is a draft of the generative AI guidelines for our public sector, there are two pillars in the Taiwan strategy, the Taiwan model. One is that the science ministry actually makes its own foundation model, named TAIDE. And the second pillar is the digital ministry, which works on assurance and on democratizing not just alignment, but also the forecasting of the impact of generative models.
-
And these two pillars are important because if a sovereign government does not offer its own foundation models in a democratically accessible way, it is in a less informed position vis-a-vis the largest AI labs when it comes to industrial use, because there is currently no easy way for the API-based large AI labs to share telemetry, or to share any privacy-preserving signals that can surface large-scale harm.
-
And this is natural, because there's currently no widely adopted privacy-preserving way of sharing such harms. So even if all the GPT-4 users are suffering some epistemic harm, or, phenomenologically speaking, harm in their everyday lives, there's no easy way to surface that at the moment at the API level vis-a-vis GPT-4, because OpenAI will say it happens downstream in the applications, not in the APIs themselves.
-
On the other hand, the great thing about making or tuning our own model is that we can include good practices by default, especially around governmental agencies that do follow the cybersecurity rules. So then exactly the same threat indicators, reporting, red teaming, and white hats, I mean, all these terms came from the cybersecurity world anyway, can be repurposed to surface harm, as we did for foreign information meddling and manipulation and so on, or scamming or whatever. We reuse the same threat-indicator world of cybersecurity, even the same cross-national agreements, to surface that sort of harm. So, as long as data comes in that format, we have the capability of ringing the world's alarm bells.
-
So, long story short, I believe there are existing cybersecurity multi-stakeholder conversations that are capable of surfacing these kinds of unseen harms that lurk for a very long time, like those advanced persistent threats, and that if governments start issuing guidelines on how to use things responsibly with accredited models, the UK people call it an assurance framework, then they're in a better position to measure those harms.
-
So, part of what you’re saying though is that there are existing cybersecurity laws that you’re leveraging to enable this?
-
When we talk to the National Security Council in the US, they're very interested in what existing national security laws we could… Because everyone's trying to find ways to leverage the law as it is already written to be able to act on generative AI issues, but people don't know exactly where to look. So, I'm just curious how that worked, because it's something we might be able to send to friends in the US government.
-
Yeah. I mean, there are a lot of laws in the US. Cybersecurity information sharing is not a new thing, right? It already exists. The thing was just that it addressed vulnerabilities and computer viruses in computer systems. And just recently, I think CISA, the US equivalent of our institute, expanded that to also include foreign information interference and things like that.
-
So, even though that is not a computer virus, it is coordinated inauthentic behavior on social networks and such that aims to achieve effects similar to cybersecurity attacks: stealth, lateral movement, and so on. There's a kill chain, right, in terms of trying to sway people this way or that leading up to election day. And so, they finally adopted the same thing, it's called the MITRE ATT&CK matrix, but for information manipulation warfare. So, that's one extra dimension folded into the cybersecurity reporting system.
-
What I'm trying to say is just that as the harms are gradually assembled and aggregated, as long as they conform to the MITRE ATT&CK format, the cybersecurity institutions in the private and public sectors know how to address them and coordinate responses.
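A minimal sketch of what "conforming to the format" could look like, assuming a surfaced information-manipulation harm is wrapped as a STIX 2.1-style indicator object, the kind of threat-sharing format referenced here; the field values are illustrative, not Taiwan's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

def harm_report_to_indicator(description: str, pattern: str) -> dict:
    """Wrap a surfaced harm (e.g. a coordinated inauthentic-behavior campaign)
    in a STIX 2.1-style indicator dict so existing threat-sharing pipelines
    could ingest it alongside ordinary cyber indicators."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": "Coordinated inauthentic behavior",  # illustrative label
        "description": description,
        "pattern": pattern,                # e.g. a seed domain used by the campaign
        "pattern_type": "stix",
        "valid_from": now,
        "labels": ["information-manipulation"],
    }

# Example: a disinformation campaign expressed with the same vocabulary a
# cybersecurity team already shares across borders.
report = harm_report_to_indicator(
    description="Synthetic-media campaign amplifying a fabricated bank-run rumor",
    pattern="[domain-name:value = 'example-rumor-site.invalid']",
)
print(json.dumps(report, indent=2))
```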
-
Yeah, you're sort of saying you already have a Lego block, and you can make this new thing fit the Lego block, put it into that format, and then it's reduced to a previously solved case. That's sort of what you're saying.
-
Right, exactly. And yes, that's the matrix, and there's STIX, the structured threat information expression. And the good thing about these frameworks is that they're designed to counter threat actors that are already assumed to be AI in some way, right? Because as a black hat, you would plant those stealth agents, and when you want to complete the kill chain, you will not manually type those commands; you'll press enter and your botnet does the work for you.
-
So, in a sense, that threat is very much like what we envision those more highly AGI-level threat actors to be. And unlike pretty much any other domain, this domain already understands automated threats.
-
Part of it is I’m just not aware of how skilled we are at dealing with cyber threats, and obviously you had to deal with this a lot because of your role.
-
Yeah, it’s my job.
-
It's your whole job, yes, you're an expert in it. One of the open questions that I've been holding is how much worse it gets. GPT-4 can find cybersecurity vulnerabilities in code, GPT-3 could not do that, but GPT-4 is limited in the vulnerabilities that it can identify.
-
If GPT-5 or 6 had the ability to find zero-day vulnerabilities in lots of things, what would we do? What's the right response? Like, if that's one of the ARC Evals type capabilities being looked at, do we need it to pause?
-
And obviously ideally not release that to the world, but then if the model weights leaked, or someone tried to steal the model and it had the ability to do that, there are a lot of things that could be baked into that scenario. So, I'm curious, do you have any kind of… can you help educate us about what you think should happen there, in advance of that?
-
There are specialized companies, like Pentera, that already have this capability. I was in Israel; they did a demo for me. They're basically the top red teamers from the Israeli Defense Forces, battle-tested. And they made an automated system that is capable of discovering and synthesizing zero-days in the target environments, living entirely off the land. That is to say, using the CPUs and GPUs the target network has, it can deploy malware on the fly, tailored to that particular environment, which is very difficult to detect because there's no human from the outside directing anything. So, it's an entirely automated routine.
-
And so, their recommended use is just to put it in a carbon copy of your actual production environment, in your staging area, and let it do its thing, so that you can upgrade to defense in depth, to better defenses and so on. And they compare this to daily checkups of your health, basically. It's a red team that hacks you 24/7, and every day it just sends you the new things that your network is vulnerable to.
-
So, this capability already exists and is used already in this way, which is why defense in depth is so important, because we have to always assume any single door or any single vendor is already breached by this sort of technology.
-
Right. So, the good news is that there are things capable of identifying a lot of these vulnerabilities right now, which is crazy, because it sounds like we have narrow AI systems that find how to exploit any system, and we don't need to wait for GPT-5 or 6 to have that capability, because there are obviously specialized actors that have this capability now. And those actors are run by currently Western-friendly Israeli companies that I guess are currently okay. Maybe they're trustworthy. Are they secretly, you know, giving themselves back doors into every system?
-
I’m willing to trust them in a staging environment.
-
Yeah. So, the vision of the future of an endgame here is, God, I wish Jeffrey was on this call. He’s at DEFCON actually right now, but he is on our team and studies the intersection of AI and cybersecurity risks. He also worked on nuclear risks and things like that. And I wish that he could be here, but we can delay that for another time.
-
But basically, a vision here for the future is that with GPT-5 and GPT-6, if we were to roll out something like this Pentera thing, it would have to be rolled out to, like, you know… The problem is that the offense-defense balance is often… we're not gonna be able to patch every hospital and every water system.
-
Every legacy system just became like a giant surface area.
-
Exactly.
-
But Audrey's saying that it's already true, because things like Pentera already exist, right? So, it's not… this is actually a big update for us, because it means that, obviously, the things we're concerned about with AI are legitimate concerns; it's just that those capabilities already exist with enough of these very military-grade, you know, vulnerability finders.
-
And then the question is, so we're already living in the same vulnerable world that we're going to be living in, just with more systems that are vulnerable. Hospital systems around the world are already compromised, and industrial systems and water treatment systems and nuclear power plants, because things like Pentera exist, probably produced both by China and by Israel and sold between the Eastern and Western kind of worlds.
-
Yes. So, I think the idea of "automated continuous validation" really is the unifying meme here. Such models exist; instead of giving them autonomy, we narrow them, so for each possible domain they become continuous validation tools. And then we redesign the infrastructure, because we have to, assuming that breaches already happen, but there is a coordinated effort to also do continuous mitigation and continuous forecasting in light of this continuous validation capability.
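A toy sketch of that continuous validation loop, assuming a Pentera-like probe running against a staging copy of production; the layer names and the red_team_probe stub are hypothetical stand-ins, not any real tool's API:

```python
import datetime
import random

# Hypothetical layered staging environment: each layer comes from a different vendor,
# so a breach of one layer is assumed while the others still hold (defense in depth).
LAYERS = ["edge-firewall", "identity-provider", "internal-segmentation"]

def red_team_probe(layer: str) -> bool:
    """Stand-in for an automated red team attacking a carbon copy of production.
    Returns True if it breached this layer today."""
    return random.random() < 0.2  # placeholder: real tools report concrete findings

def daily_validation_cycle() -> None:
    """One pass of continuous validation -> mitigation -> forecast."""
    today = datetime.date.today().isoformat()
    breached = [layer for layer in LAYERS if red_team_probe(layer)]
    for layer in breached:
        # Continuous mitigation: file a finding and patch that layer within a day,
        # relying on the still-intact layers to contain the simulated attacker.
        print(f"[{today}] finding: {layer} breached in staging -> patch within 24h")
    # Continuous forecasting: publish the day's findings like a weather report.
    print(f"[{today}] forecast: {len(breached)}/{len(LAYERS)} layers need reinforcement")

if __name__ == "__main__":
    daily_validation_cycle()
```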
-
Hmm. And then when you say validation, we’re talking about, which… I understand the continuous testing.
-
Yeah, to validate that your defense-in-depth actually works, so that your system stays anti-fragile even when each and every vendor is assumed to be breached at some point.
-
Right, right. Even when each and every vendor is assumed to be breached, you’d still proceed with the defense.
-
Right, exactly, because they cannot be breached at the same time, right? So, an anti-fragile system sees a vendor, a single layer being breached, automatically updates and so on, and in 24 hours reinforces itself. But it can do so only because the other two layers of different vendors are not yet breached and were able to detect this attack basically.
-
Mm-hmm. No single point of failure and then self-healing when one point has failed.
-
Right, exactly. So, it’s self-healing and also designed to basically expect that at any given time it will probably fail, and that’s the chance of updating our threat model, basically.
-
Fascinating. I have a whole other conversation I'd love to have with you just about how you're all doing with respect to Chinese attacks. I know it's been ramping up, but that's probably not the best use of our time.
-
Oh, I mean, I’m happy to talk about it too, but if you have another meeting to run, too, that’s good.
-
No, no, it’s not actually that. I mean, I actually would love to use all of our time together because I think there’s so much to learn here. I mean, getting back to the main thing, which is we’re mapping what are the endgames with generative AI continuing to scale according to scaling laws with competitive pressures and Moloch running the show with it being integrated into more and more systems faster than we know what’s safe.
-
And I think I told you when we did our podcast episode, I've been worried about the example I saw with Facebook Pages. Facebook Pages looked like a totally innocuous, friendly, positive, fun feature until October 2020, when you find out that pages reaching 140 million Americans a month, including the top 15 Christian American groups on Facebook, are all run by Eastern European troll farms. And so, Facebook Pages has been weaponized and is a reason why you have a more radicalized set of political tribes across the US.
-
And so that's an example where, once we embedded and entangled Facebook Pages, people built whole businesses; those pages are worth money. There's property there, right? People will literally have a page with 10 million followers for cancer survivors and sell it to someone else. And so, all this is to say, I'm worried that Moloch is governing how fast we entangle and embed generative AI, which has both zero brakes and vulnerabilities and other things, into every piece of infrastructure, into governments and startups.
-
And we don't know the capabilities that are there. We may find out later that GPT-4 right now has more capabilities than we know, other dangerous capabilities we'll discover down the road that right now are available to everyone, but we won't know that until later, because someone will do that test later or something like that.
-
Yes. So for that particular over-dependence threat, the generative AI guideline that I shared with you, the very first link, section four, is what we're looking at right now. Section four basically says government agencies and the people they pay may not use gen AI to collect or process personal data. Now we're working on an exception that says, if you deploy it locally, so it's not dependent on the internet, then you can do that.
-
So, we understand that it is impossible to completely ban what's called shadow IT, right? People just bring their own devices to work. On the other hand, the kind of Facebook Pages harm you mentioned only happens when there's dependence on choke points. But with an offline, edge-deployed model, there's no choke point. People naturally deploy their own fine-tuned, re-narrowed, re-contextualized version of Llama 2 or whatever other open-source AI.
-
And so, if it causes harm, first, there's diversity in the narrowing, in the fine-tuning, so they're unlikely to be subject to the same harm at the same time. But even if they are, it's in their best interest to switch to an updated model, exactly as you would upgrade your operating system when a new vulnerability is discovered. And there's no Moloch in this sense, because there is no single point that can be convinced not to upgrade because of some other financial concern.
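A minimal sketch of that offline, edge-deployed pattern, assuming an agency's own fine-tuned checkpoint already sits on local disk; the path is illustrative, and local_files_only=True keeps the Hugging Face transformers loader from touching any network choke point:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative path to an agency's own re-narrowed, fine-tuned checkpoint.
MODEL_DIR = "/opt/models/agency-llama2-finetune"

# local_files_only=True means no API, no telemetry, no network dependency:
# the model runs entirely on the edge device.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize this meeting note without sending it anywhere:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```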
-
It's in everyone's interest to upgrade their integrated models: if a model has vulnerabilities, it gets a software update, and everyone will want to update to the latest thing once it's patched.
-
Exactly.
-
So, zooming out, how worried… what's your view of the default trajectory of what happens with the competitive pressures between Anthropic, Google, OpenAI, et cetera, with scaling laws, and with Dario from Anthropic saying two days ago in a podcast that he expects to reach AGI in the next two years, and that with existing models Anthropic has problems like synthetic bioweapons, where you can creatively get the model to answer questions if you know how to ask it correctly, maybe doing private demos of that kind of thing. So, what's your sense of how this goes, just so I understand what your default view of the future is?
-
Yeah, so the threat actors currently are not relying on GPT-4, because for each narrow threat domain there already exist specialized models that do it better. And it's the same for synthesizing fraud or scamming or political manipulation and things like that, simply because it takes time to learn these general models, whereas for the narrow models they already have decades of knowledge anyway.
-
So, I think it's the new surfaces, new attacks, the unknown unknowns that we should be worried about, not the proliferation of capability in existing domains. Not because these are not important, but rather because, by the principle of subsidiarity, anything that we can name probably already has institutions that are aware of those narrow AI threats. But for completely new attack models, there are no corresponding institutions.
-
So, for example, just to make sure I'm understanding: with cybersecurity, we shouldn't be worrying about GPT-5 or 6 because we already have Pentera, and we already have things that are defending against the next-generation exploits.
-
And while it's maybe alarming that there is a general intelligence that's able to integrate knowledge across, you know, cyber and info and whatever, and how to do chemistry and bio at the same time, its narrow capabilities in any one of those domains do not exceed current specialized knowledge. Except maybe, that's not totally true, right? Like with code generation or… and also the paper that we have.
-
Or even chemistry synthesis, it was performing better than specialized models.
-
The effect that a generalized generative AI model has is that it lowers the barrier to access. Previously, you would have to be a top expert in order to synthesize your deepfake clone. But generative AI makes the ladder of expertise very smooth, in that everybody with just a little bit of script-kiddie knowledge can now, in a Khan Academy way, learn with generative AI until they reach that specialized capability.
-
So, people who are dedicated to doing harm were previously limited by their institutional access to such top-level tools. But now, with generative AI, they can get halfway or 80% of the way there just by interacting with it. So, people who are dedicated to doing harm are also democratizing their education, basically.
-
It doesn't change the ceiling, but it does change the mass. And this is my second point: when there is a sheer-numbers argument, a mass of people dedicated to doing harm whose capabilities can be amplified through partial automation, that creates a new threat model, unlike the lone-bioterrorist model.
-
And there are no existing institutions that defend against it, because this is more exponential, right? When a criminal organization acts, there is a smoother buildup and you can detect the activity. But if it is just a large number of individuals deploying these agents, which are nowhere near AGI in capability but which in sheer numbers create this denial of service and so on, that is something society is not ready for.
-
This is the main threat model that I’m worried about. So, in cybersecurity, I mean, DDoS is not the most impactful threat. And yet during the Pelosi visit, the thing that harmed us the most was DDoS.
-
Interesting, super numerous. Because the thing I heard in a podcast about AI from Holden, he’s been funding AI safety for like the last decade, is he said, people are worried about superhuman AI, we should be worrying about super numerous AI. Just like literally having, not whether GPT-5 is dangerous, but if I just spin up like a thousand GPT-4s doing things, it’s equivalent to like…
-
Yeah, there’s a lot of things you can do with just having… And like you’re saying, I mean, I think of like institutions have kind of a, it’s sort of like the hospital beds COVID thing, of like how many…
-
Exactly, yes.
-
… How much base load can you take before you overwhelm your emergency room beds? It's DDoSing existing institutional capacities, which we talk about in our work: the meta-crisis is the complexity gap, the increasing complexity and numerosity of issues and threats. Like FEMA, which deals with environmental disasters in the US; as climate change increases and there are more environmental disasters to respond to at a tighter and tighter clip, it's just not equipped to deal with all that.
-
I just got a call today from my insurance company saying they're pulling out of covering homes in California, because in a way insurance as an institution was built, metaphorically, for a smaller number of hospital beds, and you're increasing the numerosity of the issues. So, that threat model, is that the biggest category that you're looking at?
-
Yes, so as I mentioned, like last August when Pelosi visited, the attackers never attack our strong suits, right? So, they don’t just break into critical infrastructure or hack nuclear plants or things like that, because these are very well defended, especially in Taiwan. Instead, through DDoS, they’re essentially attacking our coordination ability.
-
And that is the weak link. There was no digital ministry yet, so we had to rely on humans in the loop to coordinate the counter-DDoS mitigation, and people need to sleep. So just continually DDoSing the weak points of coordination proved to be very effective if you're willing to throw unlimited resources at the problem.
-
And we only changed our defense posture after we realized that we could not defend in the existing way. We had to work with the Cloudflares, Microsofts, and so on of the world. We had to essentially make our websites static. We banned foreign POST requests, that is to say, form submissions, and just serialized the static content to distribute over Web3, IPFS, and so on.
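A rough sketch of that static-distribution step, assuming the ipfs CLI is installed and a static snapshot of the site already sits in a local directory; the path is illustrative. Once content is addressed by hash, any gateway or peer can serve it even while the origin is under DDoS:

```python
import subprocess

SNAPSHOT_DIR = "/var/www/static-snapshot"  # illustrative path to the serialized static site

def publish_snapshot(directory: str) -> str:
    """Add the static snapshot to IPFS recursively and return the root content hash.

    Once pinned, the content is addressable by hash, so mirrors and gateways can
    serve it; there is no single origin server left to DDoS."""
    result = subprocess.run(
        ["ipfs", "add", "-r", "-Q", directory],  # -Q (quieter) prints only the final root hash
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    cid = publish_snapshot(SNAPSHOT_DIR)
    print(f"Static site snapshot published at /ipfs/{cid}")
```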
-
I get into this because there are existing anti-fragile, resilient networks that counter DDoS, but they're not currently used by existing institutions. Not many governments simply say, oh, let's just publish our website on IPFS, because they exist in different worlds. But the Web3 world has already seen so many scams and DDoS attacks that they're already resilient against that sort of threat actor, because they specialize in coordination, for good or bad, but that's another thing.
-
We’re talking about technical coordination, resilience of systems that automatically…
-
Right, exactly.
-
… Not human coordination level. But you said earlier, though, that humans have to sleep, so I thought I was hearing you say… It’s about your human level coordination rather than your technical…
-
Right, so there are two levels. One is that Ethereum and others design systems in such a way that they require very little human coordination to mitigate these sorts of all-hazards scenarios, simply because such scenarios have happened many times in their history, right, with a financial incentive attached.
-
And the second layer is that these technologies are general purpose and open source, so that people who want to coordinate can reuse substrates such as Ethereum, without depending on the goodwill of, say, Vitalik Buterin.
-
So, I think those are the main two points I'm making: we need to reuse existing technical coordination structures, and we also need to deploy them in a way that enhances human coordination, collective intelligence.
-
So, you work on both the human layer and on decentralizing as much as possible. I mean, you're working on both layers, because you're both implementing the technical infrastructure, the resilient Ethereum, Web3, IPFS stuff, and also doing the human coordination stuff: shared-reality broadcasts, 24/7 continuous integration, threat monitoring, red teaming, bug bounties, 24/7 hotlines where citizens can report things, and consensus finding and all of that.
-
Yes.
-
There is no one solution I’m hearing from this, there’s just an ecology of like reinforcements and solutions.
-
But it is interesting, because essentially you're painting a vision, Audrey, of, look, this is a 21st-century post-AI democracy, meaning a democracy that can live in the presence of 21st-century, large-language-model-level threats, at least the ones that exist now. I mean, we haven't seen the full picture.
-
Yeah. Because one of the other institutions that I don't think exists, and that we haven't talked about, is: of course there exist right now lots and lots of bots, and they have real influence on elections and outcomes, but we don't have any institutions that deal with the transformative effect of being in relationship. And we're about to see deployed, all over, at massive scale, counterfeit humans that people are going to form long-term dependent relationships with. And I don't even think we have the philosophy, or the philosophical basis, for how we would distinguish what is a harmful relationship and what is a good relationship.
-
Yeah, so that seems, yeah, I’ll just stop there, because I’m like, how? How do we think about creating institutions that go beyond good speech, bad speech into good relationship, bad relationship? It seems very tenuous.
-
Yeah, I mean, there's the easy part, right, which is about providing meaningful human-centered relationships. But this is a lot like that addictive gaming stuff, right? Around the turn of the century, whether shooter games actually increase shooting… I mean, every generation has that sort of addiction, and we probably already know how to put it into words. So epistemically, we can say that this kind of addiction, as shown in the movies and Black Mirror and whatever, has a name, and it is simply addiction. And that is one part.
-
But the other part, which is more insidious to me, is a human, but with a sort of augmentation, like a superstimulus, right, making more copies of ourselves. The human really is still in the loop, but there is a one-to-many relationship between one person and many, many people. And that person is not entirely inauthentic, not entirely synthetic, yet that person has more ability to form person-to-person relationships at scale.
-
So again, not superintelligence, but rather sheer numbers. And for elections, for campaigning, that is the kind of technology that has an incentive to introduce itself.
-
Yeah, sort of like pay for more representation. More like in society, yeah.
-
Yeah, exactly.
-
And do you have any thoughts on, like, is that just laws? You’re like, nope, that’s illegal? Or how do you think about that?
-
Yeah, what are the laws that you need?
-
Yeah, so I think there are two things, right? One is about avoiding over-centralization. So, if there is a way for this kind of… it's actually another way of saying listening at scale, having a conversation at scale, in a way that people understand how it works, that mitigates most of the psychological harm.
-
It's like how deepfake videos only have novelty and can get sensational because people cannot easily synthesize them on their phones. But once they can, the videos lose their sense of novelty and don't actually travel that far, because people will look at one and say, "oh, there's no provenance, there's no blue check mark, there's no digital signature, and therefore it's fake; although it's a nice movie, I'm not going to share it blindly," right?
-
So just putting the capability, maybe clipped or re-narrowed in a way that doesn't cause widespread harm, into everybody's hands mitigates maybe 80% of this sort of threat. And the other 20% is just very quickly naming and shaming, right? So, when that sort of thing happens, the observatory, the forecast person, within 24 hours will say there is huge inauthentic behavior happening right here, and here are the threat indicators from our shared observatories around the world, and be aware there's a typhoon coming, or something like that.
-
Is the piece you were talking about the in-relationship one, the way I can slowly influence a person over time?
-
Yeah, that’s exactly right.
-
Yes.
-
And that it’s not obvious that it’s like, maybe it’s tuned a little bit for retention, but it’s not obviously addictive. And it’s more like the Facebook pages that slowly drift a population over time.
-
When he uses the word drift, he means like drift your values, identity, affiliation, the kinds of things that you feel close to.
-
The GPT-4 liberal bias?
-
Yes, exactly. There are obviously fractal levels of this phenomenon. But the question with that one is, as I've always wondered, for that to actually be a threat that's on the top-five list of major things worth being worried about…
-
Someone would have to create an environment in which it's easy to spin up such a set of things. And I don't know, yeah, I don't know. But it seems like… I mean, the problem is that bots were already a problem. So it comes back down to verification, or the other ways that people spam and spoof these systems. And even there, I'm like, are you…
-
I mean, so here are a couple of examples. The 23-year-old influencer on Snapchat, this girl who made a girlfriend-as-a-service version of herself. She made, excuse me, a digital avatar of herself where she basically sells access to herself as a girlfriend, speaking in her voice, deepfaked, et cetera. I know that you do this in a different way, from what I understand at least, but Glenn told me that when you have a press interview that you can't make, you hand them a digital avatar version of you that will answer, because we can't scale you and you're brilliant.
-
And so, it's great to be able to scale you, but I guess, sorry, I'm just kind of catching up because, just so you know, it's been a long day. We started at like 8 or 9 a.m. So… I'm totally here for this, but I may not be as sharp at tracking everything that you're sharing.
-
Yeah, it’s fine.
-
Can I jump to a different thing for a little bit?
-
Yeah.
-
I'm thinking about ways of upgrading the institutions, all the things you normally think about: how do you make deliberation go at the speed we need to match the OODA loops of all the tech that's coming? How do we have institutions of deliberation scale with the scale of the technology?
-
Because if you don't do that, then your deliberation speed doesn't match the scale or the speed at which the tech is going to move. We originally started thinking about this because we've been sitting with Wojciech from OpenAI, watching all of the democratic governance, the democratic inputs work, go.
-
And one of the things we've been talking a lot about is essentially simulation. Like, can you either do sort of silicon sampling, taking from a general population and simulating deliberation, getting a distribution of outcomes and using that as a way to reflect; or, in the world of people having assistants, if you trust your agent, that agent can act on your behalf, and then I can send my agent and you can send your agent, they can deliberate and come back to us with the best possible outcomes. That thing can actually share hidden information that you have and I have that we don't want to share with each other, but we can trust the thing up there never to reveal that information, so it can find pretty optimal solutions.
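A toy sketch of that mediated-agents idea, purely illustrative: two parties each hold a private reservation value they won't reveal to each other, and a trusted mediator sees both, reports only whether a deal exists and a suggested split, and never discloses the private numbers.

```python
from dataclasses import dataclass

@dataclass
class PrivateAgent:
    name: str
    reservation_value: float  # private: never shared with the other party directly

def mediate(buyer: PrivateAgent, seller: PrivateAgent) -> str:
    """Trusted mediator: sees both private values, reveals only the outcome.

    If the buyer is willing to pay at least what the seller requires, propose the
    midpoint price; otherwise report 'no deal' without leaking either number."""
    if buyer.reservation_value >= seller.reservation_value:
        price = (buyer.reservation_value + seller.reservation_value) / 2
        return f"Deal proposed at {price:.2f}"
    return "No mutually acceptable deal"

# Usage: each party only learns the mediator's conclusion, not the other's value.
print(mediate(PrivateAgent("buyer", 120.0), PrivateAgent("seller", 100.0)))
print(mediate(PrivateAgent("buyer", 80.0), PrivateAgent("seller", 100.0)))
```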
-
And I find it interesting because every other method has to reduce the complexity of the problem of how we represent our values, and this is the one way that doesn't really have to reduce the complexity of the values that we hold in order to reach decisions. And the whole thing becomes autopoietic in the sense that, you know, Margaret Mead talks about the power of a small group of people to change the world: map the small groups that make good decisions, have other groups look at those decisions as a deliberation log in the future, figure out who and how and what processes made those decisions effective, and then the whole system upgrades itself.
-
Sorry, I’m not sure if I’m articulating well because end of the day, but like that general direction seems like the thing we must do.
-
Yes.
-
AlphaDeliberate, AlphaSynthesize: find consensus and simulate the deliberations at a faster pace than they could have happened otherwise, and also learn from what worked.
-
The challenge that I’ve always struggled with is how do you know that a policy works because you’d have to wait 10 years and we’re not gonna have 10 years for a lot of these things.
-
Right, exactly.
-
The things that you won’t know after the fact, when the complexity is there, how does that actually work on the upgrade that you’re talking about? But the rest of it, I’d love to hear Audrey’s.
-
Yes. My conversation with Wojciech on that topic is on the public record. So, I think we're on the same page. And one thing that I mentioned to Wojciech in our conversation is talk.polis.tw, which is a daily snapshot of people in Taiwan who participate in our ministry's ideas, and it asks a very simple Polis question: "What do you think about generative AI?" It's very open-ended, and people upvote and downvote, and there are bridging statements and so on.
-
And the main thing here is that you can click into any cluster and have a real-time conversation with an avatar of that cluster, basically. And so, it's a language model that is informed by the opinion matrix. And we worked a lot on translation with the AI Objectives Institute.
-
And so, the next thing that we're going to do at the end of this month is to let people play with this in-silico version of their opinions. And then, with their help, we create a very information-rich deliberative workshop that lasts a whole day. At the end of the day, we capture everything in a very large context and use that to tune a model that is attuned to all the concerns in that all-day conversation.
-
So, the interactive Polis polling is an agenda setter for a face-to-face deliberation. It involves maybe 40 people. And then the long conversation, in a long context, is used in a constitutional way to realign the AI to be responsive. Now, if we can get that interactive loop going at a less-than-24-hour cadence, then you have a continuously integrating deliberation that can be part of our institutions, that can actually write both the alignment code and also tune the legal code that I just pasted you in the first link, so they evolve in tandem.
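A highly simplified sketch of that loop; every function here is a hypothetical placeholder for the real Polis clustering, the facilitated workshop, and the constitutional-style fine-tuning step, just to make the sub-24-hour cadence concrete.

```python
from typing import List

# All names below are hypothetical placeholders, not a real API.

def cluster_opinions(statements: List[str]) -> List[List[str]]:
    """Placeholder: group Polis statements into opinion clusters."""
    return [statements]  # real version: dimensionality reduction plus clustering

def run_deliberative_workshop(agenda: List[List[str]]) -> str:
    """Placeholder: a day-long facilitated deliberation, captured as a transcript."""
    return "transcript: participants converged on bridging statements about " + str(agenda)

def constitutional_finetune(model: str, constitution: str) -> str:
    """Placeholder: retune the model so it conforms to the workshop's synthesis."""
    return f"{model} + constitution({len(constitution)} chars)"

def alignment_assembly_cycle(model: str, statements: List[str]) -> str:
    """One sub-24-hour pass: listen, deliberate, realign, redeploy."""
    agenda = cluster_opinions(statements)           # agenda set by the online Polis
    transcript = run_deliberative_workshop(agenda)  # ~40 people, whole-day conversation
    return constitutional_finetune(model, transcript)

print(alignment_assembly_cycle("base-model-v1",
                               ["Label AI-generated campaign ads",
                                "Require provenance for official statements"]))
```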
-
And this is not the endgame. This is the game that we must deliver before the end of the year, so that these things work in tandem. Because if the law changes but the implementation fails to continuously integrate, it causes new harms. Jennifer Pahlka has a new book that outlines all sorts of harms that well-intentioned policies without implementation and delivery capability can cause. "Recoding America", that's the book.
-
But wait, if I'm tracking correctly, this is really fascinating to me. You're saying you do the deliberation; you do the online thing, which sets the agenda; the face-to-face thing then debates that agenda. They come to agreement and synthesis. You find the bridging statements. Those bridging statements go into an Anthropic-style constitution, so that becomes a new constitutional AI. So now, the AI is actually aligning with the deliberation of the people. So, you literally have a closed loop that operates at a super-fast…
-
Yes, and we use that model for the next generation of this online talk-to-the-cluster thing. So, it's something that transparently tunes itself based on people's responses to it.
-
Yeah, this is fascinating. It’s like, yeah…
-
Yeah.
-
Yeah, I just want to do the recapitulation to make sure I'm understanding. And is this on something specific, like, let's choose some topic? I'm assuming the first topic you're doing here is something like what an AI can say that's good versus not nice, or are you doing this for something more general?
-
More general, basically anything within the scope of the public sector use of generative AI guideline that I pasted you, any of those 10 points are good agenda.
-
Got it. So, then this becomes: people enter free-form text, you get the clusters, the clusters then become the agenda setting for the conversation. People then have a full conversation in real time with appropriate facilitators, and that all gets recorded, transcribed, and placed back in.
-
Into a context. And then we re-tune the language model, constitutional-AI style, to be maximally, but not overfittingly, conforming to the worldview captured in this conversation.
-
Got it. Interesting. And then now that you have the output for that, that becomes the new basis for whatever gen AI is used in government. And then…
-
Yeah, we deploy that and then we find new harms. But because this can be done within 24 hours, we just assemble another alignment assembly and then realign. So, it’s a symbiosis kind of thing. Yeah.
-
I'm sure it's going through your own lens, Aza, when you hear that. I'm tracking most of it in my mind. I'm both interested in visions of how the US can follow in Taiwan's footsteps of creating a 21st-century democracy and painting that vision, and I'm, not selfishly, but thinking ahead to, okay, could we capture some of this to speak to? Because we want to ask: what are positive endgames and solutions in which democracy can survive into the 21st century, and articulate that in a very accessible way, so people could see a comprehensive vision for what 21st-century constitutional upgrades look like, what the 21st-century FBI and police look like, what 21st-century medicine looks like.
-
And like using these things, but being resilient to the attacks that we brought up in the AI Dilemma. So resilient to cyber-attacks, resilient to bio and info without surveillance, resilient to… Yeah, but it’s a big agenda of things. So, I’m both interested in that.
-
I mean, another way of saying it, sorry, I'm just slowly letting my brain catch up with what Audrey is saying: essentially what you're doing is aligning an AI to the deliberation process of a specific set of people, which you can then apply to any process. And what you end up with is essentially an agent that represents the collective intelligence, the collective will… this blended…
-
Blended volition.
-
Yes, exactly. And so then you can have those agents, these meta-agents, talk to each other to simulate larger-scale deliberations. And you don't have to apply it just to what AI does. Now that you have a sample of what this subpopulation and that subpopulation and that one and that one would do, we can simulate what the synthesis view of all of them might be, and apply that not just to what an AI says, but to any specific kind of law. This now just becomes a general-purpose tool.
-
Essentially, I had been thinking that you need to start with individual agents that then build up, but you're saying we don't have to start there. We can start from a different place, where the base unit is the collective deliberation.
-
Yes, because that lowers the cognitive requirement. Staying the entire day in a deliberative setting is a luxury; most people can't make that kind of commitment. But a few yes-or-nos, a few conversations, a few thumbs up and down with reasons, everybody can do, so we can scale it.
-
Yeah. And then you invite a smaller subset in to do the longer deliberations, and that thing becomes the grounding in the future. Because you're just sampling, it's like taking a blood sample, not draining all the blood from the organism. Super interesting. That is a much better distribution, a much better way to start. It solves the cold-start problem in a really nice way.
-
Because you’re not specifically consenting, you’re starting with the hand…
-
You're not starting with individual agents, where you have an agent, I have an agent, Audrey has an agent, and all of them have to model us before they can create a deliberation for the three of us, so it takes a while to build up to a full deliberation. Audrey's solution is to just start at the level of the multicellular organism: start with the deliberation, model the deliberation, and then use that as your base unit. Because that's the thing that Audrey can control.
-
Yes.
-
Yeah.
-
Got it. That’s a super cool unlock.
-
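A small sketch of the contrast being drawn here, assuming hypothetical types of my own naming: individual-agent-first modeling versus taking the recorded collective deliberation itself as the base unit, which can then be composed to simulate a larger-scale synthesis.

```typescript
// Hypothetical types only; nothing here corresponds to a real library or product.

// Individual-first: each person must be modeled before any deliberation
// can be simulated among their agents.
interface PersonalAgent {
  person: string;
  preferences: string[];
}

// Deliberation-first: the base unit is the recorded collective deliberation
// itself, a sampled group's blended volition on one question.
interface DeliberationAgent {
  participants: string[];  // the sampled subpopulation
  question: string;
  blendedVolition: string; // the synthesis the group converged on
}

// Simulating a larger-scale deliberation then means putting deliberation-level
// agents in conversation and synthesizing across them (placeholder logic).
function synthesize(agents: DeliberationAgent[], question: string): DeliberationAgent {
  return {
    participants: agents.flatMap(a => a.participants),
    question,
    blendedVolition: agents.map(a => a.blendedVolition).join(" / "),
  };
}
```
-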
Where do you want to go from here, Aza, with the 10 minutes that we have left? Because this is really inspiring to hear, these totally novel ways of potentially applying this in a way that is really deeply, deeply hopeful. I still wonder about, you know, the unease that I feel when I listen to Dario and to Sam Altman, and the scaling laws and the pressures to race, and the "don't worry, we would stop and pause; don't worry, chat.openai.com wouldn't have GPT-5 at that web address even if we had it and it had dangerous capabilities, we would stop."
-
But then I just, I don't feel good about where this is going, and I worry that there is a very strong reason why I feel that way; we've articulated a bunch of it. Some of the things we might be worried about, like cyber-attacks... maybe, again, you can use the tools like you're talking about, but how fast can you patch things?
-
I don't know. I guess I'm just trying to get to, at the end of the day, I care about the world not turning into Mad Max or catastrophes or dystopias, which are the two bad outcomes. And I'm wondering… that seems to be the center of that conversation: the international agreements around how AI needs to be governed. Those consensus rules need to be developed very, very quickly, and then the enforcement that can actually make those rules followed has to happen.
-
So that's, I think, where my heart is placed daily. So, yeah.
-
Yeah, and I mean, the principles or the rules… the generative AI guidelines that I pasted you are not that different from the UNESCO rules, and the UNESCO rules have been around for, I don't know how many years now. So the difficult part is not coming up with sensible rules. The difficulty is in the enforcement and implementation.
-
And so, what is your answer to that? For this to go well in your mind, when you're looking at the race between these massive powers, growing 10X bigger every year, superhuman giants living among us, what is your view about how we would get to governing that so we don't get the bad outcomes?
-
Yeah, I think we did address that in the podcast. Basically, things like the Social Dilemma and AI Dilemma movies and clips serve as a focal point that makes it harder for existing players to say "we're waiting for someone else to coordinate us." When the harm is so widely understood, people can say, let's not repeat the social media mistake we made a decade or two ago. Let's put in forecasting, notification, and mitigation before anything really bad happens to us and our kids.
-
So, I think this progress-safety trade-off, because of your work, is changing. In the US, I think a vast majority now prefer safety, a race to safety, over a race driven by individualistic competition. And this "race to safety" meme, I think, is good enough that we can then say the AI companies are liable if they don't race to safety. And that's all we need to do at this point.
-
Wait, say more about that. How would you actually do that? What form of liability, if they don't race to safety?
-
Yeah. So, in Taiwan, we passed a law a couple of months ago: if a social media company allows deepfake scammers to fake me (actually nobody fakes me, but the premier or someone else) in scams and cons, they must implement notification mechanisms. If they don't, or if they ignore the notifications that people send to them, those advertisements can still be posted, but then the company is also liable for anyone who gets scammed. So if we cannot find whoever runs those bots, because maybe they're self-running on bitcoin, right? Then Facebook has to pay for all the damage. That's the law; our parliament already passed it.
-
And since then, there have been zero cases where Facebook is held liable, because the scams are all gone. They simply remove everything that is notified to them as a scam, an advance-fee financial thing. A legislator actually asked me, right before the law was passed, whether I could guarantee that within seven days of reporting, everything would be removed.
-
And I was like, of course, we'll measure that, and I'm optimistic. And it turns out my optimism was well-founded. When Facebook finds that it is liable for unlimited damages, with millions and millions of our people being scammed, they actually do implement the notification, takedown, and mitigation.
-
And the notification, is this just labeling that it’s a…
-
Right, exactly. They have a transparent advertisement library that anyone can search by keyword. And their civic integrity team, which reviews it, can see investment scams featuring Premier Chen Chien-jen, who of course totally didn't authorize this. And we are also working on digital signatures, so that they must collect a signature from that person or that investment company.
-
And anyway, if they don't do their due diligence, and people notify them and they don't take it down, then they are simply liable for the damage.
-
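A toy sketch of the notify-and-takedown liability rule as described in this exchange; the seven-day window and all field names are illustrative assumptions drawn from the conversation, not the statute's actual text.

```typescript
// Toy model of a notify-and-takedown liability rule, illustrative only.

interface ScamReport {
  adId: string;
  reportedAt: Date;
  takenDownAt?: Date;      // set when the platform removes the ad
  victimLossesUSD: number; // damages attributable to the ad
}

const REMOVAL_WINDOW_DAYS = 7; // assumed window, per the conversation

// The platform owes damages for a report if it ignored the notification,
// or removed the ad only after the window elapsed.
function platformLiability(report: ScamReport, now: Date): number {
  const deadline = new Date(report.reportedAt);
  deadline.setDate(deadline.getDate() + REMOVAL_WINDOW_DAYS);

  const removedInTime =
    report.takenDownAt !== undefined && report.takenDownAt <= deadline;

  if (removedInTime) return 0;                           // acted on the notification
  if (report.takenDownAt === undefined && now <= deadline) return 0; // clock still running
  return report.victimLossesUSD;                         // liable for the full damage
}
```

The incentive in this sketch is the same one described above: once the cheapest way to avoid unlimited liability is prompt removal, platforms simply remove what is reported.
-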
I'm thinking, Aza, of the FTC report in 2021, which found around $500 million in romance scams, basically from Tinder, right?
-
Yeah, people going on Tinder or Bumble or any of those things forming a relationship and then scamming people for money.
-
So, in there, the solution, according to what we're seeing here, is: unless people label that they're not real, that they're not who they say they are, which of course they're not going to do, then you'd make Tinder or Bumble liable for…
-
Yeah. Anyone who provides the reach has to be liable.
-
Right, I mean, we have this meme that we came up with, that freedom of speech is not freedom of reach. I think we need to change the meme to connect reach to liability, because it's the volume and the scale and the amplification that drive up the responsibility. "Reach is responsibility," or something like that, but…
-
Yeah, exactly. That's how we counter the DDoS of the world, because DDoS is nothing but sheer numbers and the damage that reach can do to existing fragile institutions.
-
But then Elon bought Twitter with the express purpose of wanting to stop all the scammers and the bots. Do you think there is an obvious, easier set of extreme measures he could be taking but isn't? Like, if Twitter were liable for all the fakes and scams and the money that people lost on the service, what aggressive thing could Elon do that he's not doing already to mitigate, to remove that liability? What's the extreme thing that he would do?
-
Require zero-knowledge proofs of each and every tweet.
-
For identity, right? You're saying for identity, just to make sure I'm not…
-
Yeah, for identity. Yeah, zero-knowledge proofs for an adult citizen identity, a streamlined form of CAPTCHA, through passkeys.
-
Passkeys, uh-huh.
-
And that would be tied to, like a government ID or some other…
-
Or a decentralized ID on web3. There are also people who make the Orb… so it depends on the jurisdictional norm.
-
Do you have a vision for the actual way you see Western democracies doing the zero-knowledge-proof identity thing? Like, again, which thing is it? Is it Worldcoin, or the Orb? Is it some crypto thing that I don't know about that is the best? Because in Taiwan, you have phone numbers, right? You do verification of any…
-
We have SMS and we have FIDO, right? I think FIDO is catching up quite well. There are people already switching now, especially Apple users, to passkeys over passwords. Nobody likes passwords anyway.
-
I don’t. What is FIDO again? I’m sorry.
-
So, it's a way to sign in to GitHub and so on without typing any password. Your phone just authenticates you, basically.
-
And that FIDO is built into something or is this…
-
Yeah, it's built into Safari, the web browser, and I think most browsers now support it. In our ministry, we don't have passwords anymore. It's all passkeys.
-
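For readers unfamiliar with FIDO, here is a minimal browser-side sketch of a WebAuthn passkey sign-in; the /webauthn/* endpoints are hypothetical, and a real relying party would generate the challenge and verify the signed assertion on the server.

```typescript
// Minimal browser-side sketch of a FIDO2/WebAuthn passkey sign-in.
// The /webauthn/* endpoints are hypothetical placeholders.

async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a one-time challenge from the relying party's server (base64-encoded here).
  const { challenge } = await fetch("/webauthn/challenge").then(r => r.json());

  // 2. Ask the authenticator (phone, platform, or security key) to sign it.
  //    No password is typed; the device verifies the user locally.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), c => c.charCodeAt(0)),
      userVerification: "required", // e.g. Face ID, fingerprint, PIN
      timeout: 60_000,
    },
  })) as PublicKeyCredential;

  // 3. Send the result back for server-side verification. A real flow would also
  //    serialize authenticatorData, clientDataJSON, and signature from assertion.response.
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: assertion.id, type: assertion.type }),
  });
}
```

The point of the flow is that the private key never leaves the device: the user verifies locally and the server only ever sees a signed challenge, never a password.
-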
I know that we’re at time. This is a… Great.
-
Glad to be of service. And feel free to arrange more conversations like this. I’m enjoying this a lot too.
-
Yeah, this is remarkably helpful. I feel like the thinking just progresses on many frontiers at once, which is a rare feeling. So, thank you.
-
Yeah, thank you, Audrey, we're truly grateful for your insights. And I think there are actually some potential pathways here. It's really, really inspiring. Aza and I have talked to a lot of people about endgames, and people do not have good ideas about how we get to a safer world.
-
Of course, there are different stages in that diagram that we drew. Like it’s an obstacle course and we’ve got to make it through first contact, second contact, and then when we get to recursive self-improvement, there’s a whole other set of questions.
-
But yeah, if you’re willing to do another one of these in a little bit, we can space it out and really prepare for the questions that we want. We’ll record the transcript. We’ll review it and develop the questions and then it’d be really great to do that again.
-
Okay, great. Till next time then.