-
How are you?
-
Pretty good. So, before we begin — we will make a transcript of this interview, including your words, but we will embargo it until you publish and you get to co-edit, if that’s okay with you.
-
Wonderful. That’s fantastic. Yeah, I know this is part of your radical transparency.
-
Excellent.
-
Yeah, I read up on it. That’s completely fine. I’m going to actually flip on my recorder now as well.
-
Okay.
-
So, I’ll just do that very quickly. Thank you so much for taking the time and congratulations on your inclusion in the list. I’m very excited to be working on it as well and really looking forward to this conversation. I think I’m also on the recording front. I might try recording as well just to… you know what, it never works, so I’ll just leave it as that. Anyway…
-
(laughter)
-
I can send you my sound file, if it happens to not work on your end.
-
That’s perfect. Thank you so much. I appreciate that.
-
Well, I guess maybe just to start, I’d love if you could just talk a little bit about sort of how you came into politics and also sort of how your role as Taiwan’s digital affairs minister really came about and how you kind of found yourself in that role.
-
Sure. So, I first entered the cabinet as a reverse mentor, as a young advisor to the then Minister without Portfolio, Jaclyn Tsai, around the end of 2014. And that happened because in March 2014, the g0v movement, which I’m a participant, contributed to a massive deliberation around, for example, whether to allow the PRC-made components in our 4G infrastructure, as well as many issues related to the cross-strait trade agreement. It’s called the Sunflower Movement. And with half a million people on the street…
-
Minister Tang, I’ve been cut off for some reason. Okay, sorry. I apologize for interrupting you. The audio went out for some reason. I’ve just taken off my headphones because it was causing issues…
-
Sure thing.
-
Please continue.
-
It’s fine. So, it’s called the Sunflower Movement and with half a million people on the street and many more online, we helped the facilitators, the NGOs and so on to be not just thoroughly non-violent around the parliament, but also make a coherent demand out of the people on the street and more online.
-
After three weeks of peaceful occupy, the demands were met and the competent authorities in Taiwan agreed from that point onward, open government, open data, civic participation is going to be a national direction because we have shown that collective intelligence actually can produce coherent demands.
-
And afterwards during 2014-15, as an advisor to the cabinet, I helped launch the first e-rulemaking platform in Taiwan, the vTaiwan project, which successfully resolved issues around Uber, sharing economy startups, and many other topics, before becoming a full-time minister in 2016.
-
Wow, thank you for that overview. That’s really helpful. And I guess for a kind of a lay person, someone who certainly isn’t perhaps as into tech as perhaps someone like you or many experts, the AI’s emergence would almost feel sudden, even though in a lot of ways it feels like it’s part of our daily lives. I mean, you have Siri and all these things that I think everyday people are very used to.
-
At what point did AI kind of figure into your job? And what role, I mean, I guess in terms of if you had a pie chart of kind of how much attention certain issues take, what percentage do you think AI is kind of occupying your attention and focus these days?
-
Before joining the cabinet in 2016, for six years, I worked with the Siri team in Apple on natural language technologies, and specifically with a team called “cloud service localization,” meaning that we work with people who speak different languages so that Siri can custom itself to fit the local way of speaking things instead of forcing everyone to learn English or learn a handful of languages.
-
I also worked at the time with the Oxford University Press to bring the dictionary making work, lexicography, to the lower-resource languages, languages that do not have an official body to make language dictionaries and therefore not part of the cloud service localization, so that the people on the ground can assemble themselves and then like Urban Dictionary, collaboratively make their own dictionary together. So, my background was in language technologies and computer language design, that’s kind of my domain.
-
And nowadays, I’m the Minister of Digital Affairs. The ministry is in charge of digital participation, universal broadband access, and public infrastructures that enable digital identity and many other services. That’s our ministry. And under our ministry, there are two administrations, one for industrial digital transformation and another administration for cybersecurity.
-
So, participation, progress, and safety, this triangle forms our ministerial mandate. This is quite unusual. In most other jurisdictions, the digital ministry either doesn’t exist, these are three different ministries, or they exist but it’s just two of the three, whereas the other one, like safety or the progress, belongs to maybe the interior or to economy ministries.
-
I say this because AI touches upon all three parts. Like AI can actually, through deepfakes and so on, interfere with democratic process, so the participation arm needs to counter against this kind of synthesized political speech. But the progress arm needs to work with the least empowered people and also help them to digitally transform themselves, even though they may not be able to afford the kind of chips required for fine-tuning AI. Whereas the cybersecurity arm needs to work with the generative AI because it has new classes of security vulnerabilities.
-
Previously, to be a hacker, to hack into a system, you have to learn programming language. But now with generative AI, you just need to learn English and then you can also hack into a generative AI system, right? So, it opens new cybersecurity threats and the cybersecurity arm also need to deal with it.
-
So, I would say I spend around half of my time, but in three different roles, all working with the emerging challenges presented by generative AI.
-
Wow. Yeah, no, thank you for that. That’s a really clear way of explaining it. And I think even the point you just made about sort of what it takes to be a hacker these days, it is really incredible, even just talking to colleagues, talking to friends, the role that even something like ChatGPT has started to play in people’s everyday lives.
-
And something that you touched upon very briefly there with regard to deepfakes is something that I’m really interested in as a foreign affairs writer for TIME. I have a specific focus on democracy and authoritarianism, and I’m really interested in better understanding both the possibilities and the threats that come from AI when it comes to our democracies. And I know that this is something that you’ve been thinking a lot about, and you’ve talked about the Taiwan model with this.
-
Could you talk a bit about what you think are the risks, but also opportunities that come with embracing AI and how you think democracies like Taiwan should be thinking about it?
-
Yes, the Taiwan model is two pillars. We call it democratic alignment. One pillar is to ensure everyone in Taiwan can have the kind of access to not just use AI, but also tune AI to their liking. So, this is the first one, democratizing access.
-
And then the second pillar is that if the local community, any community, feel that the current path of AI somehow causes harm, causes what we call epistemic injustice, like writing off the knowledge and wisdom that they have. Maybe they’re indigenous nations, maybe they have ways that are different from the current mainstream AI’s training assumptions, maybe have different social norms. Then there needs to be a way, what we call Alignment Assemblies, for them to, like what we did around Uber, to come up with the rough consensus, the general direction where they expect AI to behave.
-
And then we can retrain AI models, or to work with the AI service providers to tailor-made the kind of AI that fits the local people’s needs, instead of asking local people to adapt to whatever mainstream AI is doing. And this is important because many of those harms cannot be directly observed by the makers of the service.
-
We talk about 2014, in Sunflower Movement. Around the same time, the PRC regime in Beijing decided that they don’t want retweet or share or anything to spread a kind of chaotic messages. So, they decided to impose a top-down way to censor, or at least strongly moderate, their social media in anticipation of the harms.
-
But at the same time, Taiwan decided to democratize the ways for each and every student to contribute to fact-checking, for example, as part of their junior high school education, maybe they fact-check the air pollution measurement by measuring air quality themselves, or fact-check the three presidential candidates as they’re having a debate in real time, and so on.
-
So that when people go through fact-checking by themselves, and also with the community, they might become inoculated against the polarization, the hate, and so on. So, this illustration of us being the democratized response, and the PRC being a more top-down authoritarian response, points to two very different ways to think about emerging technologies.
-
But of course, you would ask, why don’t the social media companies work on civic integrity and counter-polarization and so on? Well, I think that partly is because such harms are not easily observable when it’s just a few university alum networks connecting to each other. Some of the harms only start to appear when there’s a large-scale deployment, and at that time, there’s no systemic way for those harms to be pointed out and redressed.
-
And now, learning from the Social Dilemma, we need to work with AI this time, setting up such reporting, forecasting, democratic alignment and guardrails before our economy becomes overly dependent on it. Because in the social media case, many other jurisdictions say, but our agency’s webpage is now our Facebook page, we cannot meaningfully reform because we’re too dependent on it. We don’t want that to happen again.
-
Yeah, absolutely. And it sounds like, I would imagine that this is also very much has to be an international conversation, because Taiwan, as a relatively small island nation, can do so much. Britain can only do so much, where I’m based, the US, even big as it is, I imagine can only do so much on its own.
-
But as you say, you’re dealing with, say for example, China, if it were a case of disinformation coming out and AI being used in that regard, I feel like that’s a big concern that democracies have is, to what extent AI be a tool for authoritarian regimes, or those who would seek to subvert democracy?
-
Yeah, for this kind of foreign information manipulation and interference, or FIMI, there are already existing STIX, or structured threat information expression.
-
The STIX format was originally used for cybersecurity threat indicator sharing, like when our computer gets hacked in, we’re using some zero-day vulnerability, even though it’s stopped by some other measures, we will use STIX to notify our democratic allies, so that when the attacker goes to their machine, that’s already patched or updated.
-
And so, because we expect the AI-generated deepfake manipulations to be done in such a short time frame, it is important that our countermeasures are equally quick. So, for example, the Taiwanese Cofacts, for collaborative fact checking, Cofacts project, by the civil society, not the government, the g0v or GovZero community, they train a model to provide contextualized clarifications to what’s likely to be reported as FIMI or disinformation, but this model is trained by the Cofacts community.
-
So, this solves a problem whereas our adversaries are paid professionals working nine to six on information manipulation, but the community fact-checkers were not, and so they only got to clarify it when they have time. But now, with language models automatically provide a clarifying context, the community can focus on training that model, not manually replying to each and every viral disinformation. So, I think language models can also be used for the defense for the blue team, but the prerequisite is that we share the information and threat indicators with everyone around the world.
-
And I know that you’ve already listed some examples, but I was wondering if you could talk a bit more about sort of the ways in which AI can strengthen democracy and also the ways in which Taiwan specifically is already incorporating AI.
-
I mean, you’ve talked about kind of radical transparency and even these fact checks, like these sort of civil society things, but are there ways that you think the government can or is already incorporating AI to enhance and strengthen democracy?
-
Yes, definitely. If you go to this website, it’s called Talk to the City, you can see our recent alignment assembly, which is gathered by the polis tool. If you click on it, you can very quickly see a constellation, and it’s bilingual. So regardless of whether the person commented in English or in Mandarin, it can be displayed as the same cluster.
-
And then if you click into a cluster on the right, you will see the main arguments, which are also summarized by the reports on the top right. But if you click into a statement in the cluster, you should be able to have a conversation with that cluster. You can ask, why do you believe this? Can you elaborate? I disagree, and so on.
-
So, this is what I mean by a democratic alignment. We’re just asking the citizens - how would you like generative AIs to behave? And then you can, through this, what we call a wiki survey, a survey with the survey questions written by the citizens, participants themselves, easily see the holistic, blended volition of the people.
-
We can then use the detailed context as the agenda for face-to-face multi-stakeholder conversations, so that everybody can more holistically… it’s like a multi-dimensional view of an elephant, instead of each stakeholder feeling the elephant, different parts of the elephant in the room, we now have a multi-dimensional view that let people see how the elephant is like, and if you want to talk just to the part that is the trunk or something, you can actually have a language model having an in-depth exploration with you.
-
Instead of a professional facilitator or moderator staying online all the time to answer people’s questions, this can easily scale up the deliberation process, making deliberative results accessible interactively to everyone who wants to participate in a conversation.
-
And now, in the face-to-face workshop, which lasts an entire day, we will make a long transcript of what people eventually agreed on as the norms, and this long context can then be used to train a constitutional AI model, so that this AI model behaves exactly the way the workshop’s participants agreed that it should behave. It’s like collaboratively making a constitution for a language model, so that language model can behave thoroughly according to these people’s wishes, and we can continuously integrate new feedbacks so that those models increasingly align with what people want.
-
So, I think this is good for democracy, because it makes deliberative democracy more scalable, and also it makes AI development democratic, in that it’s not just a few engineers in the top labs deciding how it should behave, but rather the people themselves.
-
And I could imagine this would also be applicable to issues that are not just AI, right? Like this is almost like a citizens assembly in a way, but just digitized.
-
Yes. So, this is what I call a level-two assistive intelligence. It’s not level-five autonomous in-silico conversation. We’re not there, and neither should we rush to get there. We should instead just take the best facilitators, the best citizen assemblies, run a really good face-to-face and online hybrid workshop, but then the wisdom captured in that day can be shared like language models in a way that you just saw.
-
And then I guess as a last question, because I know I’ve kept you a while. How do you envisage this sort of feedback from the public influencing kind of broader international sort of debate and discussion around AI? Because as I’m sure you’re aware, the EU has its own sort of AI regulation bill, the US is mulling its own things.
-
I mean, I guess where do you see Taiwan’s role in this sort of international collaboration and discussion? And how do you think that this sort of thing could be applied, not just for Taiwan and sort of Taiwan sort of framing its own views on this, but even potentially more globally?
-
Yeah, as you mentioned, the imaginations differ a little bit when you ask people what happens in 2030, for example, and that leads to different schools of thoughts in EU, UK, Japan and US and so on, for next year, which many jurisdictions will have elections, for the very short term, everybody agrees that these are the course of action we need to take. Sometimes it’s called “race to safety.”
-
Instead of racing to optimize this or that feature, we need to race to safety, so that there is international coordination to tackle immediate harms to democratic processes, for example, to counter scams and convincing voice clones. These are whatever jurisdiction you ask, whether they’re in the progress camp or in the safety camp or the participation camp, everyone agrees that these issues need to be resolved by 2024.
-
So, our idea, very simply said, is that no matter the imaginations are for a decade away, let us just quickly get the consensus around what the next 12 months we absolutely need to build a democratic assessment, assurance framework, threat indicators, and things like that. These are just common-sense stuff.
-
And if there are law amendments that need to be made, like in Taiwan, we already amended the law against generative AI in fraudulent election meddling, in revenge pornography, and also in scams, in financial scams. And these are the immediate harms that are raised and then immediately amended. So, I think this kind of agile governance way is the best way, and we, after a lot of conversation with our counterparts in other jurisdictions, this “race to safety” seems to be what people can broadly agree on.
-
Yeah, yeah. I’m glad you bring up next year, because I think that’s going to be really interesting to see how AI debates. I think we have so many elections. I think it may be a record-breaking year for a number of elections, I heard that. I need to fact check that.
-
Ours is January, so we will be the first pilot.
-
Wow. Oh, I look forward to it. I look forward to following along. Minister Tang, thank you so much. Is there anything that I didn’t ask you that I should have, or anything else that you think is really kind of central that hasn’t come up?
-
I would like to add that our ministry is formally a partner of the Alignment Assemblies, which are convened by the Collective Intelligence Project, along with OpenAI and Anthropic, among others. I would encourage you to maybe read a little bit in the CIP blog.
-
I personally participate in many of the works of the CIP team, and I think when I say OpenAI and Anthropic seems to be broadly aligned, and that’s because we’re working to raise safety in those very short-term issues. So, I would encourage you to read a little bit on those CIP essays or policy briefs.
-
I certainly will. Yeah, this is all really fascinating stuff. Thank you so much, Minister Tang. I really appreciate it. I’ll be sure to send you an email to let you know. I believe early September is when this project is going to publish, but I’ll let you know once I have a better idea so that you can prepare to publish whatever you’d like on that end.
-
And yeah, it was really a pleasure speaking with you. I know that the UK is planning to host an AI conference. Hopefully, if you come back to the UK, please do feel free to keep in touch. It’d be great to kind of get your thoughts on what goes on there as well.
-
Looking forward to meet in even higher resolution.
-
Yeah, wonderful. Well, take care and thank you again.
-
You too. Live long and prosper.
-
Bye-bye.