And so, in addition to finding vulnerabilities and sharing joint defense strategies, I was also in the US to push for this idea of a race to safety, meaning that the frontier AI labs, in particular Meta, but also, of course, Google, OpenAI, Anthropic and so on, need to adopt a cybersecurity mindset and open themselves to open red teaming. Instead of each lab doing only internal testing before release, there needs to be joint testing between the labs, just as with coordinated cyber-attack exercises, and even preparedness mechanisms that invite the general public, like a bug bounty, to think of novel ways to abuse those AIs for harm, so that the societal risk can be evaluated by society itself. And this idea of a race to safety was also quite well received by the people that I talked to in the US.