-
So welcome to our startup. It's just a year and a half since we started. A couple of weeks before we started, Speaker Pelosi visited Taiwan. We had missiles over our heads, we had the largest cyberattack ever, and so on. So it was a very exciting time. And our main idea is digital resilience for all, that is to say, not just to recover from those attacks, but to grow stronger thanks to them. One of the most recent examples: leading up to our general election, we suffered more than 33 times as many DDoS attacks as in the same quarter the year before, but we succeeded in defending against them, using novel techniques like pre-bunking and so on. We also made sure that after the election, people like each other more. There is less polarization after the election results came in, which is a marked difference compared to our previous elections. So I'm happy to talk about these issues as well, and welcome; I look forward to the exchange.
-
Fantastic. Very good. So thank you very much, Minister Tang, for inviting us. It’s a great pleasure to be here to learn more about the work that you are doing and the overall situation in Taiwan. We just arrived, most of us just arrived from Beijing, so we spent a few days there talking with Chinese colleagues and Chinese students. We had a wonderful setup, which I think you’re probably going to hear more about from the Yale students, where we had a very open discussion about AI issues and cyber issues with students mainly from Renmin University, which is one of the universities, as you know, that specialize in these kinds of things in the PRC. So it was very fruitful, and I must say to me, and I think to many of the people who are here in our delegation, then coming to Taiwan right after that is exactly the right kind of thing to do because we can experience a very different kind of society and a very different kind of approach to most of these matters than what you find now in the mainland. So let me talk just very briefly about who we are.
-
So this is a joint group of students, mainly sitting on that side, a little bit down this side as well, and Yale faculty. The organizational framework is an institution at Yale called International Security Studies, which does different kinds of things. One of our most active programs is the one that is represented here on AI and cybersecurity and related issues, headed by my colleague Ted Wittenstein, who I'm going to hand over to in a second. But it also does all kinds of other things, from African security issues across the board to a number of issues that have to do with teaching. We run the Grand Strategy program at Yale, which is primarily an undergraduate teaching program. I think those who are here in GS would admit that it's a relatively unconventional way of looking at issues of strategy, in a general sense, in a more theoretical sense, but also in a practical sense, looking at historical cases where things have gone terribly wrong and also a few cases where things have gone right. So that's what ISS does. Among the programs that we have, the Schmidt Program on AI and cybersecurity has become one of our key programs.
-
And the reasons for that are quite obvious. It is because of the significance of these issues and how these have developed over the past few years. More and more students and faculty at Yale who have an interest or expertise in these areas are drawn to that program. And that’s also the reason why we set up these meetings that we would have here in Taiwan and elsewhere when we are traveling. So let me just turn to the faculty members of our delegation. I think the best thing for you to do is just briefly introduce yourselves, going down to who is sitting down there, just very briefly to let the minister and others who are here know who we are. And after that, I’m going to turn it over to you.
-
Well, if I'm the one being turned over to, I'll just briefly say.
-
Ted Wittenstein. I teach at the Jackson School at Yale: cybersecurity, AI, tech and security. And it's been a pleasure to develop what we call our Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power. So thank you so much, Minister.
-
I'm Andrew Makridis. I'm a fellow at the Jackson School, and I teach an intelligence class. I just retired from the US government after 38 years there.
-
David Rank. I was an American diplomat for about 30 years, six assignments between Taiwan and the mainland, twice at AIT. But that was before you were in government, or AI was a thing.
-
You spent over six years in Taiwan.
-
Karman Lucero, I'm a fellow at Yale Law School's Paul Tsai China Center, where I study the governance of a lot of the technology that these students are interested in, as well as, of course, me. I also study how China engages in information control within its borders, and I would love to hear more about how Taiwan preserves its open society.
-
We're delighted to be here, Minister. Many of us have admired all the work that you've done over the years in championing democracy, digital resiliency, and openness. The technology development in Taiwan is really a case study, I think, for how a nation can build, or struggle to grapple with, the challenges of technology and democracy. In our Schmidt Program, we're really focused on trying to develop technical fluency among some of our law- and policy-oriented students. And then in reverse, we have a number of terrific STEM students who want greater fluency in law, policy, business, and ethics. So we bring them together to focus on strategic technologies and how they're impacting international affairs. We have a number of students interested in emerging technologies and democracy: how you're building digital resilience here in Taiwan in particular, maybe lessons from the recent election, the work that you've done both on the technical end of cybersecurity but also on disinformation, the influence operations from China, how you identify, how you message, how you've gotten through the election to where we are. And as you reflect on that, I think that's a particularly important lesson for America, as we are now in the midst of an election year with plenty of influence operations and disinformation around our election.
-
It’s a big election year.
-
We can have, I think, a freewheeling conversation to the extent that you're open. But on this question of disinformation and building resiliency, that might be an area to start where I think a lot of us would be interested. If you want to share your thoughts on how you approach this challenge from your position in the ministry, and as you look back now on getting through the elections, what are some things that we should think about, particularly in terms of China's own influence and cyber operations? So thank you so much for having us.
-
Let us begin with a rant. Yeah, I've been working on information manipulation defense since 2017. Back at the time, we were struggling because the main English word at the time was fake news, which is an affront to journalists, and my parents are both journalists. Out of filial piety, I cannot use that word. So it took some time for us to find the Mandarin word for it, which is 爭議訊息, literally disputed information. And this is important because it speaks not about the content layer of things in the actor/behavior/content model, but rather about how it incites. 爭議, polarizing, points to conflict-causing behavior by the actors, right? So the idea is to shift toward the actor and behavior layers. You can talk about foreign interference, where foreign is the actor and interference is the behavior, without speaking to whether the content is true or false, because in that particular place reside the journalists, the fact checkers, the professional media institutions, and so on. And it's really not a government minister's business to talk in the content layer. And so I think that is quite strategic, and it resulted in us adopting a very different model to defend against foreign information manipulation and interference.
-
And for example, we always ensured that the government itself does not directly play "disinformation oversight" roles. Instead, we have a broad ecosystem of mostly open source civic tech communities. The Cofacts project, started around 2017, is a crowdsourced platform, like Wikipedia, where everybody can flag anything as a scam or a rumor. The label itself doesn't really matter; it goes to a kind of clearinghouse, like Spamhaus, so that people can offer recontextualization in real time. And the Cofacts community, which is open source, has been adopting generative AI. So if you go to the Cofacts website, you will see generative-AI-produced real-time clarifications as soon as those things are flagged, and it gives us a really comprehensive scoreboard of what is going viral. And the main government work here is about pre-bunking, which is teaching people, showing people, how we approach such information manipulation, calling out the tactics instead of specific payloads. Two examples. A few years ago, three years ago, I had someone deepfake me, and we made it into a viral video showing how to do that on your MacBook and things like that. It went reasonably viral, so people understand what it looks like.
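A minimal sketch of the flag-and-clarify flow described above, assuming a simple in-memory clearinghouse; the function names and the AI-drafting placeholder are hypothetical and are not the actual Cofacts implementation:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Flag:
    message: str              # the forwarded text a user flagged
    label: str                # "scam", "rumor", etc.; the label itself matters less
    clarification: str = ""   # crowd- or AI-drafted recontextualization

flags: dict[str, Flag] = {}   # clearinghouse keyed by message text
virality: Counter = Counter() # how many times each message has been flagged

def draft_clarification(message: str) -> str:
    """Placeholder for a generative-AI call that drafts a real-time clarification."""
    return f"Context check: no official source confirms: {message[:60]}"

def flag_message(message: str, label: str) -> Flag:
    """Record a crowd flag and return the shared clarification for that message."""
    virality[message] += 1
    if message not in flags:  # first report: draft a clarification once, reuse it after
        flags[message] = Flag(message, label, draft_clarification(message))
    return flags[message]

# Usage: every flag updates the virality scoreboard and returns the same clarification.
flag_message("Counting stations were rigged", "rumor")
flag_message("Counting stations were rigged", "rumor")
print(virality.most_common(3))  # what is going viral right now
```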
-
And especially during the pandemic, we used a lot of those pre-bunking things. We knew that there was going to be vaccine manipulation approaching, but somewhat fortunately, well, not really fortunately, Taiwan received the vaccines later than most other countries. So it gave us about two months to prepare the pre-bunking. And so we prepared quite a few scoreboards showing people's preferences. The younger people, some of them prefer AstraZeneca, but the older people prefer Moderna or BNT, or, later on, Taiwan's own one called Medigen. And so using this scoreboard, which is transparent open data, and a nationwide pre-registration platform, we turned this vax versus anti-vax struggle into a my-vaccine-is-better-than-your-vaccine struggle. So I got four different brands of vaccines just to make a point. Yeah, the result is that we don't have an anti-vax conspiracy faction in our politics, and everybody got vaccinated. And so time and time again, we saw that if the government plays the role of a debunker, then it alienates more and more people.
-
But if we just keep on pre-bunking, then people can feel it: oh, there's a rumor on TikTok that the counting stations were rigged or whatever. Then, for that particular counting station, you can always count on the YouTubers belonging to the three major parties to have recordings of the actual counting, which is always paper only. So if I have one piece of advice, it's to use paper-only ballots and then use advanced high-definition broadcasting technology just to live-stream the counting from the three different parties. Which is to say, use emerging technologies only defensively and not offensively, because if they're used offensively, it's always brutal and oversight becomes polarizing. Whereas if it's defense, like, if people just focus on producing masks, there's no offensive use of masks. The same goes for pre-bunking, and for live-streaming and radical transparency for counting issues. So the end result is that, as I mentioned, William Lai did beat the polls, but the two other opposition parties also feel that they have won somewhat. And the viral video that was on TikTok a couple of days after the election about the miscounting and so on just died down.
-
It didn't go viral at all. So just like the anti-vax narrative, we knew it was coming, we pre-bunked it, and it went nowhere, and people started liking each other more. Affective polarization actually decreased significantly after the election, which I count as a win. So that's my rant.
-
Thank you so much.
-
I might invite my faculty colleagues here for reactions and questions, and then I’ll open it up to our group here because our students are not shy and I know that they’re very interested in many of these topics. So thank you.
-
Minister Tang
-
Thanks for that example of pre-bunking. Can you just talk for a minute or two about what other tools are available to protect an open society? I mean, if you look at some of the PRC writings, they talk about cognitive warfare and the manipulation of public opinion. And one of the concerns, certainly in the US, is manipulation of public opinion about Taiwan. Notably, the Chinese often use the word reunification, as if they're fixing something.
-
What other things do you think about when you’re thinking about inoculating the public against those kind of approaches?
-
Yeah, that's a great question. So research has shown that we Taiwanese people have our own ideological differences: ethnic, linguistic, whatever, 20 national languages. So there is no lack of diversity or of fights among ourselves. But as long as people understand that there's FIMI going on, that this information manipulation is foreign in nature, it backfires. People understand that, oh, this is actually targeting our democratic institutions. It's not targeting any particular party. It's not one party versus another party. This is autocracy trying to sabotage democracy. Then when people see these issues, no matter their party affiliation, the feeling of solidarity, let's call it plurality, collaborative diversity, increases in our people. So I think the main thing that we've been seeing in the US, like the recent TikTok thing in Congress, is that it phrases things in exactly the same way as we do in our cybersecurity act and regulations, with this idea of a harmful product, which is defined as any product that is controlled by any entity that is effectively controlled by a foreign adversary. So indirect control, like through TikTok Singapore or whatever, the Cayman Islands, and so on.
-
So this definition is the same as the one adopted in the latest congressional draft about TikTok. And so the main idea in people's minds is that this is not about whether it's made in the PRC. I mean, many appliances or whatever that are not connected to the Internet are probably made in the PRC. It is about whether there is de facto control. Instead of just tracing down some brands in some company setup and things like that, it is about constant alerting and threat-indicator sharing between people about signs of de facto control. And so because of that, while slightly less than one quarter of the Taiwanese population has TikTok installed on their phones, fewer than 4% report it as their main social media platform, which is a very low number compared to most developed economies. So my main suggestion worldwide is just to let people understand that there really is foreign adversarial de facto control. This is not about partisanship. This is about the idea of a shared cognition, a shared reality, a fabric of trust that these emerging technologies, employed by authoritarians, threaten.
-
I guess my question is, how much is Taiwan a laboratory, and how much are you adopting ideas from other places and applying them here? If you’re a laboratory, how do you test things before the big day, before election day or that sort of thing?
-
Right. Well, I mean, every day is a big day for us. There are millions of cyberattack attempts literally every day, and they evolve very quickly as soon as they find something that doesn't work. Booters used to work in 2022; they don't work now, because we have quite advanced, resilient CDN or even IPFS networks, and so they shift to something else and something else. So, two answers to that question. One is that in non-FIMI-spike times, most of the defensive technologies we build, we build under the branding of counter-fraud. And by counter-fraud I mean, just recently, we've seen many voice clones, and now there are video clones already. So people get video calls from people they trust, asking them to invest in a new stock or things like that, because the Taiwan Stock Exchange is doing really well. So there's this new breed of scam where famous celebrities appear to start video-calling people, which is one of the most socially detrimental uses of generative AI.
-
And so we very quickly said that we're going to require digital signatures for all investment-related advertisements on YouTube, on Facebook, online, and so on. And we're quite fortunate because we have a comprehensive, free-of-charge PKI, public key infrastructure, already rolled out, not just as a plastic card but also in the form of an app, so that I do all my document signing in a zero-trust, passwordless manner. So because the infrastructure is here, all it takes is for this emerging threat to come and for us to say, now it's time to require digital signatures for investment ads and so on. And our legislature, all three parties, has been very willing to pass amendments like that. So for each vulnerability we identify, they pass one law amendment specifically for that harm. For example, if a generative-AI investment scam is posted on Facebook, we flag it, we report it to Facebook, and it doesn't take it down, there is no fixed fine; but if somebody then gets scammed for $1 million, Facebook is now liable for that $1 million. So by re-internalizing the negative externalities, investment scams, non-consensual intimate images, deepfaking of election candidates leading up to the election, and so on, we make sure that it's not seen as us censoring the big platforms top-down, but rather as responding in real time to such harms.
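A minimal sketch of what a platform-side signature check on investment ads could look like, assuming the Python cryptography package and an Ed25519 keypair standing in for a PKI-issued credential; the registry, sponsor name, and message format are hypothetical, not the actual moda or platform implementation:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

advertiser_key = Ed25519PrivateKey.generate()        # stands in for a PKI-issued credential
registry = {"Acme Securities": advertiser_key.public_key()}  # hypothetical trust registry

ad = b"Invest now: Acme Securities fund, 8% guaranteed"
signature = advertiser_key.sign(ad)                  # advertiser signs the ad payload

def ad_is_acceptable(sponsor: str, payload: bytes, sig: bytes) -> bool:
    """Platform-side check: unsigned or unverifiable investment ads are rejected."""
    public_key = registry.get(sponsor)
    if public_key is None:
        return False                                 # unknown sponsor: reject
    try:
        public_key.verify(sig, payload)              # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False

print(ad_is_acceptable("Acme Securities", ad, signature))          # True
print(ad_is_acceptable("Acme Securities", ad + b"!", signature))   # False (tampered payload)
```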
-
And so when FIMI time comes, like before elections, all the defensive setup deployed for anti-fraud is already there, and it is conveniently repurposed for countering FIMI.
-
Thank you.
-
Thank you.
-
I personally think Taiwan is a pioneer in a lot of these approaches. And so, a two-part question on very related issues. The first is: you've mentioned multiple times the role that you have as a government minister, and you mentioned that when the government plays the role of debunker, that actually increases distrust rather than trust. So if you wouldn't mind elaborating a little bit more on what you think the role of government is when it comes to truth and freedom of speech. And then, related to that, you mentioned the phrase fabric of trust, and in the US it feels like that fabric is maybe even nonexistent at this point. So do you have any recommendations for how to rebuild that fabric, as opposed to just making it resilient?
-
Yeah, I've read research that shows the US is at peak polarization, which means it doesn't have anywhere to go but better.
-
Yeah, I think the strategy that we adopted in the g0v (gov-zero) community, way before I joined the government, is to consciously align with nexuses in society that are seen as credibly neutral. And so the g0v hackathons were always held in the national academy, the so-called Academia Sinica, because it reports only to the president and not to the ministries of education, science, or digital affairs, and it's widely seen as beyond party politics. And so in the national academy, we hold those hackathons, and politicians across all the different political persuasions are free to attend without being seen as selling out or being compromised or things like that. And we also work with social media institutions that are with-purpose or for-purpose rather than for-profit. The earliest g0v hackathons worked closely with PTT, which is a local Reddit-like platform that has no advertisers or shareholders. It's a pet project of National Taiwan University students, open source, for six years now. And so by consciously aligning only with these institutions, we won support from the career public service. And those career public servants are actually the key to this way of doing things, because they also want radical transparency.
-
They also don't want whatever project they have worked on for four years to be canceled just because a new party comes to power. They also want to co-create with the opposition instead of just getting blamed for things they do wrong. With radical transparency, the opposition can be turned into co-creators, because they also have all the data there, right? So it's their job also to produce new suggestions. And so, long story short, I think we need to find those credibly neutral nexuses in any society. It could be, I don't know, public radio, it could be a national institute of standards, like NIST, it could be the public library, it could be the teachers union. I really don't know about the US. But the main idea is that you start with those small nexuses and start doing very practical things that let people feel that the bandwidth of democracy is increasing, meaning that there is more bitrate: not just five bits when you vote in a referendum, or three bits when you vote on a presidential ticket, or really just one bit, but rather a high number of bits, when you can do participatory budgeting, when you can do a presidential hackathon, when you can do alignment assemblies to Q&A together, and so on.
-
So people can feel that there's something they can participate in in a high-bandwidth way, and also that the result gets back to them in 60 days, not in four years. So the latency also improves. And as we increase the bandwidth, it can start with very practical, very local things and go all the way up to the presidential hackathon, whose award is not monetary but rather a promise: whatever the local social innovators, the five winners each year picked by quadratic voting, have built gets its protocols and standards adopted nationwide in the next fiscal year, because this solves the supermodular economic-good allocation problem.
-
Right.
-
These ideas don't become impactful until the entire nation adopts them. And so the main presidential promise is not money; it's just that we will adopt this already-functioning protocol or whatever. And this standardization, as promised to social innovators, is only possible if the initial proposals are seen as credibly neutral in your political climate.
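A minimal sketch of the quadratic voting mentioned above for picking the five presidential-hackathon winners: each participant spends voice credits, and casting n votes for one project costs n squared. The 99-credit budget and project names are hypothetical; the real platform differs in detail:

```python
BUDGET = 99  # voice credits per participant; casting n votes for one project costs n**2

def cast(ballot: dict[str, int]) -> dict[str, int]:
    """Validate one participant's ballot against the quadratic cost rule."""
    cost = sum(v * v for v in ballot.values())
    if cost > BUDGET:
        raise ValueError(f"ballot costs {cost} credits, budget is {BUDGET}")
    return ballot

def tally(ballots: list[dict[str, int]], winners: int = 5) -> list[tuple[str, int]]:
    """Sum votes per project and return the top projects."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        for project, votes in ballot.items():
            totals[project] = totals.get(project, 0) + votes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:winners]

ballots = [
    cast({"air-quality sensors": 7, "open parliament tools": 5}),  # 49 + 25 = 74 credits
    cast({"open parliament tools": 9}),                            # 81 credits
]
print(tally(ballots, winners=2))
```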
-
Could I jump in with a question there in terms of mainland trends? So what do you see as the main trends in terms of disinformation coming from the mainland? I mean, obviously, you just had an election, so I’m sure much of it is connected to that. But do you see any broader trends in terms of where it’s headed?
-
Yes. So when we pre-bunk, we pre-bunk not specific operations but rather the underlying narrative. For example, in 2020, the underlying narrative was always that only autocracies can counter the virus, because only lockdowns work and democracy only leads to chaos and so on. Of course, we countered that very effectively in 2021, and some variation of that has always been the case coming from the PRC. It's basically saying democracy only leads to chaos, and here is the most recent proof, right? And so to pre-bunk that is to involve people in everyday democracy and to let them see that whatever their involvement was, even just a very small bit, it actually had an effect and did not lead to chaos. And so, in election terms, I think the main FIMI that we received leading up to the election was exactly that. It said the counting was rigged, the election was rigged, the CIA printed invisible ink, which has always been one of the main FIMI narratives, because people believe in the capabilities of the CIA, you see? So all of these are about the election system. It is not about any particular candidate.
-
Well, I have many questions. Minister, I’m going to restrain myself for a moment so we can get some thoughts and questions from our students. So I’m just going to ask that you briefly introduce yourself, tell us your name and program at Yale, and we’ll start with you Halli.
-
Hi, Minister Tang, my name is Halli. I'm a senior at Yale College studying global affairs and statistics. I'm currently in a class on disinformation and misinformation. Just before embarking on this trip, we talked about generative AI, and really the question posed in the class was: is this an incremental step in the way the technology was already evolving, or is this something categorically different? And I think you've already been talking about how online scams, and the increase in the frequency of these things, are being driven by generative AI. So I'm curious what you think about that question, and how moda is thinking about fighting generative AI in this space. So, yeah, thank you.
-
Yes, I of course think that it's qualitatively different. Everybody gets to be a shapeshifter now. It used to take someone really fluent in one language or one culture, and we have 20 national languages, to scam people in that culture. But now that restriction is gone: anyone can scam anyone in any culture without understanding that culture in the first place. And the latest frontier models are really, really good at bridging those cultural differences, and they're very tunable, very alignable, right? The latest Claude 3 Opus was partly aligned by the alignment assembly effort that I mentioned. People co-created, through online deliberation, a set of constitutional documents. So instead of just the researchers instilling whatever they take universal values to be, the target community gets to co-create a document to align the model to. But the problem is that this reward model, if you just flip the sign, becomes equally capable but very good at polarization, at isolating people, at bullying and things like that, because that's what a constitutional document tells it not to do. So as it's becoming easy to align frontier models, it's becoming equally easy to align them for abuse.
-
So that's the first point. And the second point is that none of this requires cutting-edge hardware. I train my own email-replying models on this MacBook, and it means that just capping GPUs or whatever doesn't work, because scammers have an advantage in that their output doesn't have to be accurate. For the real-time clarifications that we offer, we need accuracy of 95 or 96 percent for them to be useful. But for the scammers, it really doesn't matter. It just needs to create 4,000 seemingly real people, and if some of them don't appear accurate, it's actually better, because it lowers the epistemic expectations of online interaction. And so this pollution of the fabric of trust is wide-ranging. It is not one operation. It's that each operation decimates, just like Freon decimates the ozone, right? It decimates the epistemic commons. So that's my second point. So, to counter that, I think we really need to go back to the actor, to the source. All the governmental utility bills, electricity, water, whatever, are now sent by SMS from a single number, 111, and all three telecoms' commercial SMS services have adopted short codes.
-
So the idea is just like a blue tick. If you don't see a short code, that's a scam. Unless, of course, we've met face to face and exchanged contacts on WhatsApp or whatever. But otherwise, it's always assumed to be spam. And this is a marked reversal, because previously we assumed something was a human until it showed bot-like behavior, but now bots have human-like behavior, so that doesn't work anymore. So we have to go back to the blue tick, to digital signatures, to short codes, to verified codes, and so on. And once we do that, then we can rebuild this provenance and all the related technologies, so that it doesn't matter how lifelike content becomes. Because then we have two broad classes: one is the trusted institutions and numbers, the other is the friends and family we have met, and everybody else is a bot.
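A minimal sketch of that default-deny triage, where trusted short codes and previously exchanged contacts pass and everything else is presumed to be spam or a bot; apart from the government sender number 111 mentioned above, the entries are hypothetical:

```python
TRUSTED_SHORT_CODES = {"111"}          # government notifications (utility bills, etc.)
KNOWN_CONTACTS = {"+886912345678"}     # contacts exchanged face to face (hypothetical)

def classify_sender(sender: str) -> str:
    """Default-deny triage: human-ness must be proven, not assumed."""
    if sender in TRUSTED_SHORT_CODES:
        return "trusted institution"
    if sender in KNOWN_CONTACTS:
        return "friend or family"
    return "presumed bot/spam"

for s in ("111", "+886912345678", "+14155550123"):
    print(s, "->", classify_sender(s))
```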
-
Thank you so much, Minister Tang. My name is Naomi. I’m a first year master’s student at the Jackson School of Global affairs. I’m interested in talent. Hearing you talk about some of the developments and threats to democracy here is pretty amazing. Coming from the US, a lot of folks in government certainly struggle to keep up with some of these trends. And so I’m wondering how you think about having folks come work for you. Is that a challenge? And what is it like for the government more broadly, recruiting people who are interested in these issues?
-
Okay, great question. So you're all invited as Gold Card holders. We have this quite innovative program, the Gold Card, which was very popular during the pandemic. A lot of my friends in Silicon Valley, whether they work in startups or science or research, prove that they're experts in their area, and then we give three years of residency, universal healthcare for you and your family, an open work permit, tax incentives, you name it. And then after you renew once, in your fifth year, you can also naturalize without giving up your passport. So, dual citizenship. And there are people who like that. And so after moda formed, we looked at the science ministry, the culture ministry, and thought: are there particular people the Gold Card program was not yet serving? And we found, oh, it's actually open source developers, because many of them don't have the kind of full-time job salaries or credentials. I mean, high school dropouts, right, without the educational credentials to prove that you're a top PhD from a top-500 school, shall we say. If you can prove, with a distributed ledger or with GitHub or with Wikipedia, really, anything, that you've been working to improve the Internet commons for eight years or more, you're also eligible as a Gold Card holder.
-
And it's very popular, not just with open source contributors like Vitalik Buterin, who has contributed for eight years to a small project called Ethereum, but also with his fellow Ethereum hackers, many of them from Argentina and so on, who don't normally have the credentials required for a Gold Card application. So I think it's the combination of a very friendly dual citizenship, really an award for your contribution to the Internet commons, and also very flexible salaries in the national institutions. I'm also the chair of the National Institute of Cyber Security. And in the NICS, we track cybersecurity researcher and engineer pay in Taiwan: telecom and, I think, healthcare, the highly regulated sectors, and banks. So when their salaries go up, so do our salaries; we track it on a year-by-year basis. So people only have to take, say, a 10% cut in their salary to join NICS, and they enjoy much better working hours, benefits, and work-life balance. We have many people joining for one year, for two years, then going back to their original startup or whatever, and then introducing even more people as fellows to our national institute. So it used to be a challenge.
-
But recently, thanks to the forming of moda, we've had a lot of very good exchanges with the audit office and the HR auditors and so on. And because we can prove that during the three years of the pandemic, the people who met every week to design the response measures, regardless of which ministry they hailed from, have now become a new ministry, and that these professionals working as fellows, the g0v civic hackers, the Gold Card holders and so on, literally saved everyone during the pandemic, it was high time to change the HR rules for them. So we did get these changes done over the past year.
-
Wonderful. I can see a lot of eyes light up. You have at least one student from Argentina here.
-
Let’s get some work done. Go ahead.
-
Hi, Minister, a pleasure to speak. My name is Neil Sachdeva. I'm a current undergrad studying computer science at Yale College. I've actually worked on a team that builds on open source deep learning compilers, so I'm particularly interested in that subject. My question is: in the realm of deep learning research, we've seen a lot of contributions from the open source community, particularly in the past 15 years. But more recently, as it's become privatized, a lot of the cutting-edge research and work has come from the private sector. And while we've seen a lot of those contributions added back to the open source sphere, that's clearly going down with time. So I wanted to ask how you see this evolving, and more specifically how you facilitate the open source community in regard to deep learning here in Taiwan?
-
That's a great question. I think a lot of the frontier model labs, of course, take from open source but don't contribute back, mostly because they are afraid of how those frontier models could be abused if released in a primarily open source manner. Well, maybe Taiwan is ready, with our digital signatures and everything, but I don't think many other democracies are ready. I think that is the main thing keeping them gating their releases. On the other hand, there are many other parts of deep learning that don't have this offensive nature. I have in mind things like reward models, constitutional alignment, these kinds of things. They don't increase the base capability of the foundation model, and so open sourcing them only gives more communities more ways to align the frontier model to their needs; it doesn't fundamentally increase capabilities. And so I think when looking at research, we need a way to distill it to the essence of whether it is defense oriented or offense oriented, and then we need to double down on investment in the defense-oriented things and make sure they're open source.
-
In Taiwan, we have a program called public code. So it's not just open source; it is open source that is used first, or initially, in the government to make algorithmic decisions and things like that. And so it's transparent because of governmental accountability, not just to save some development cost, as was the original idea of open source. And so we published guidelines for public code. We made sure that our investment in digital public infrastructure, meaning things that are going to be reused by different levels of government four years down the road, always prefers public code when we're doing the work. And this is possible only because, starting last year, we got budget for public infrastructure, like building railroads and highways and so on, steered toward open source code. Previously, open source code and development was always thought of as R&D, and therefore part of the science budget, which is not just smaller than the public infrastructure budget but also somewhat competitive, so the ministries would not open source their work for other agencies or local governments to use. At most, they opened up the API or the data, but not the raw code.
-
But because we've got more infrastructure budget, and we say that as long as something is going to be used twice or more across different agencies it qualifies as infrastructure, we can afford to pay the upfront cost of red teaming, of collaborative red teaming, even of collective evaluation, all the penetration testing, all the things that you have to do, like building a software bill of materials, to prepare something for open source release, which is actually quite expensive. But if you think of it as safety engineering, as part of the bridges or highways, then it's actually not that much. So I think it all depends on whether you frame open source as just a public good that you share with the private sector, or as public infrastructure that the government should provide in the first place, like digital signatures and so on. Again, because its value is supermodular, it only makes sense when the entire society adopts it. So for the ones where we got funding, there's a lot of funding for the free software and open source communities to work with. I hope I answered your question.
-
Thank you. We’ll go there.
-
Hi, Minister, thank you so much for speaking to us.
-
My name is Pranav. I'm a junior studying global affairs here. You spoke about digital signatures, right? So I'm curious, more broadly, how concerned are you about PRC-created deepfakes? How do you think they'll evolve? And then, more broadly, do you think open societies like Taiwan and the US are ready for them?
-
I don't think humanity is ready for those precision persuasions. I read a paper by ByteDance researchers. It's called Linear Alignment; that's the title of the paper. It is basically a very interesting way to take what's called alignment fine-tuning, which usually happens during the training phase, which is expensive and which you do once to serve many users, and change it so that a reward model for each user (well, ByteDance has some users) can be applied at inference time. So I film myself doing a TikTok video or whatever, and then for the 5 million viewers, each viewer has a reward model, and the system tunes the video toward persuasiveness for that viewer. Maybe it's just some subtle color balance, maybe it is a different caption, maybe it's a different intonation, maybe a different procedure; it doesn't matter. And so if that program knows the reward model of each user's preferences, it can use their own phone to do this alignment, so they don't have to spend GPUs or anything; it just automatically plays in a way that is what we call a superstimulus, right?
-
And it's something that is much more persuasive than any normal human has the right to be. They published it publicly, right? I think it's good for science that they publish, but I don't think human civilization is ready for this kind of capability yet. And so part of the pre-bunking is also to let people know that these things are coming. And part of the work that we have on regulating harmful products is that, well, there's always a kill switch, but in addition to a kill switch, people need to be educated that capabilities like this exist. It's just that the PRC is not using it yet. But when the situation arises, we need to sound the alarm and tell people they have activated this particular apparatus, and this is why we're taking some quite drastic measures, because humanity is not ready for this.
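A conceptual sketch of the kind of inference-time, per-viewer optimization being warned about here: candidate variants of the same content are scored by a per-user reward model and the highest-scoring one is served. This is a simplified best-of-n illustration with made-up feature names, not the actual method from the Linear Alignment paper:

```python
import random

def per_user_reward(user_profile: dict[str, float], variant: dict[str, float]) -> float:
    """Hypothetical reward model: dot product of viewer preferences and variant features."""
    return sum(user_profile.get(k, 0.0) * v for k, v in variant.items())

def serve(user_profile: dict[str, float], variants: list[dict[str, float]]) -> dict[str, float]:
    """Pick, at inference time and per viewer, the variant the reward model scores highest."""
    return max(variants, key=lambda v: per_user_reward(user_profile, v))

# Variants of one video differing only in framing features (caption urgency, color warmth).
variants = [{"caption_urgency": random.random(), "color_warmth": random.random()}
            for _ in range(8)]
viewer = {"caption_urgency": 0.9, "color_warmth": 0.2}  # this viewer responds to urgency
print(serve(viewer, variants))
```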
-
Hi. Thank you for your time. My name is Claudia Wilson, and I'm a master's in public policy student at Yale. Back in the 90s, when asked about Chinese censorship of the Internet, President Clinton said that it would be like trying to nail Jell-O to the wall, and obviously, since then, that's proved to be somewhat false. I'm curious to get your thoughts on the way the Chinese government will look to regulate AI and some of these tools. I know you mentioned previously that it may be possible to send in the truth in the form of these LLM models, and I was wondering whether you could expand upon your thoughts on how AI will impact Chinese society, optimism versus pessimism.
-
Great. Yeah, I think I was interviewed by VOA, and I basically said that you can package the whole of VOA into a few hundred gigabytes and then send it on a USB stick or whatever, and then people have this interactive VOA that they can interact with. So I don't call it the truth. I think it's just VOA, which is, of course, journalistic, but it represents a certain viewpoint; let's just say that. And so, yes, it is possible, and much more possible than previously, to circumvent the Great Firewall with this kind of information package. We can even imagine that it, with coding capabilities, can also evolve and make better communication tools, not just truth individually but truth socially, within the PRC firewall. But I think the main challenge for the PRC censors is not to clamp down on those things. It is how they can afford to continue the narrative that only autocracies, or really only the autocracy with Chinese characteristics, can safeguard people's lives and so on, given the actual performance of their society at this particular point. So I don't think this particular way of smuggling in VOA and so on is going to be the main thing that causes the next A4 white-paper revolution or something like that.
-
I think it is this larger environmental context, which makes their grand narrative really not work, that is going to provide the backdrop for the next A4, or A3, white-paper revolution. And these tools may be somewhat useful when people want to communicate across censorship, but they are not going to be the cause of such social movements. That's my reasoning. Thank you.
-
Thank you, Minister, for having us. My name is Juan. I'm a first-year MPP student at Yale. I'm from Argentina. And I have one particular question. We've seen in this last decade new leaders emerge all over the world, from the US to Argentina, who actually use polarization as an asset. So while for civil society polarization is a problem, for some politicians it's an asset, and politicians reap great rewards by polarizing society. So what I was wondering is, first, whether your ministry is doing anything to prevent political parties within Taiwan from polarizing society. And if so, how do you think we could persuade politicians, how could we create incentives for politicians, to actually get out of the polarization trap? Because while for the rest of society polarization is bad, for some of them it's pretty good.
-
I know, yeah. We had our own share of peak polarization and populism around 2018, so I totally hear you. And I've just co-written a book on this. It's at plurality.net. We are finalizing the English edition this Sunday, but most of it is online. It's free of copyright. You can take it and say it's your work; we won't mind. This is entirely copyright free. Okay. Anyway, just to explain the idea of plurality a little bit: the idea of plurality, which goes back to Hannah Arendt, is that any action done by us is done as people, not as individuals. And people have their differences, and those differences cause conflicts, which may evolve into polarization. But if we tap into technology to harness this energy, then we can have a kind of controlled explosion: continuously having conflicts without blowing up the infrastructure, the institutions of democracy, and without making polarization something that carries over into the next conversation. So some limited polarization is useful, as long as for each incoming topic you polarize across a different axis, instead of the same people, the left eye not seeing the right eye.
-
I think that's literally true. Instead of polarizing more and more across the same two groups of people, you polarize across a different axis when a new social issue comes. And so in Taiwan, we've been very intentional in phrasing our social conversations so that it's always possible for people who are deeply opposed to each other to find common ground. During our marriage equality debate, which took three referendums, one constitutional court ruling, and many citizen petitions and initiatives, the crux, the bridging statement, we found was that both sides care about family. One side doesn't want the kinship relationships in the Confucian tradition to be polluted by homosexuals, and the other side wants individual liberties and rights and the same state protection and everything, but doesn't actually care much about kinship relations. So the solution, once we found the bridging statement, was to design a specific act for same-sex unions, marriages, actually, that applies everything from the civil code except the chapters about kinship. And this made both sides very happy. And so this idea of finding not just continuously different polarizing axes, but also systematically, sometimes using AI language models, designing the bridging statement that people can all live with, and making that the topic of pre-bunking and of government-led conversations and so on, I think is a boon to politicians, because then their base keeps increasing; they become everybody's politician.
-
It's like the good side of populism without the bad side of polarization. And so when politicians start adopting this, and I would say President Tsai Ing-wen is a master of adopting this particular kind of politics, taking all the sides, building bridges, then it has a leadership effect on other politicians. And so in the recent presidential election, you could see all three candidates take something like this bridging stance when it comes to the Taiwan-PRC relationship, which is very refreshing, because every other presidential election was about two polarizing candidates. But for this particular one, if you mask their names, all three were essentially saying the same thing when it comes to bridging the Taiwan and PRC differences. I hope that answers your question. Thank you.
-
Hello. My name is Valentina Simon, and I'm a junior studying at Yale. It's incredibly inspiring to hear about all your work pre-bunking narratives from foreign actors, and it seems to work quite well. But what I'm wondering is, first of all, how do you predict what narrative you're going to need to pre-bunk? And second of all, if you don't happen to pre-bunk a narrative and it ends up taking hold of society, how would you deal with that? Because we're not perfect; we aren't able to predict everything.
-
Indeed, that's a great question. So pre-bunking is not, again, about anticipatorily debunking particular operations. It is about the overarching narratives, and about letting people know that manipulations using those narratives are coming, though there's no telling which particular form they will take or which social issue they will use. So, of course, we're not going to be perfect, but it's a pretty good bet that the PRC later this year will still continue to say that democracy only leads to chaos and the China model is better, or whatever. That particular part is very predictable. But if there are particular issues outside of the pre-bunking, we usually see them as our domestic differences amplified by FIMI operators. That is, for these, they don't come up with the content themselves. It's not information manipulation in the traditional sense. It's mostly about paying the actors at the two most polarized edges a lot of money, really hands-off, not telling them what to say, but just disproportionately amplifying the most polarized voices on any particular issue, which is usually domestic. And only when both sides are amplified does it take hold, and you get a polarizing spiral that stretches to pull the society apart.
-
And so for this particular thing, it's always easier for us to simply respond instantly. There's this idea of 2-2-2, which means that for each polarizing narrative, we respond with two modalities, like a short video and a picture, or a press release, or whatever, a meme, a hashtag, it doesn't matter, within two hours, and each payload is fewer than 200 characters. And so the idea is that within the same news cycle, when we detect something that is about to get out of control, we put out our clarification. And it still counts as pre-bunking, because most people receive the clarification before they receive the polarizing account, if it's done in the same news cycle. If it's done in the next news cycle, then maybe it's 50-50; but if it's one day later, then it's too late. I think it's important that we have a lot of ready-to-go memes and pictures and things like that. If you go to my Flickr, you can see my staff having me take pictures for all the trending memes, like me doing this or whatever, right? Everything.
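A minimal sketch of checking a response set against the 2-2-2 rule as stated above (two modalities, within two hours, payloads under 200 characters); the field names and example responses are hypothetical:

```python
from datetime import datetime, timedelta

def meets_222(detected_at: datetime, responses: list[dict]) -> bool:
    """True if the responses satisfy the 2-2-2 rule: 2 modalities, 2 hours, 200 characters."""
    within_two_hours = all(
        r["published_at"] - detected_at <= timedelta(hours=2) for r in responses
    )
    two_modalities = len({r["modality"] for r in responses}) >= 2
    short_payloads = all(len(r["text"]) <= 200 for r in responses)
    return within_two_hours and two_modalities and short_payloads

t0 = datetime(2024, 1, 10, 9, 0)  # hypothetical detection time of a polarizing narrative
responses = [
    {"modality": "short video", "published_at": t0 + timedelta(minutes=50),
     "text": "Here is the full paper-ballot count, live-streamed by all three parties."},
    {"modality": "meme image", "published_at": t0 + timedelta(minutes=70),
     "text": "Paper ballots, three cameras, zero mystery."},
]
print(meets_222(t0, responses))  # True
```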
-
So when a new situation arises, they can just slap on some clarifications and memes and so on, and instantly make something very funny for the general public. And this kind of rapid response, we see it just like cybersecurity: time is of the essence. You share your threat indicators, you have defense in depth, you have your antivirus booster shots ready every quarter or so. All in all, people come to expect that the government is going to do this kind of thing. And when things are really spiraling, then I go out with an AMA, ask me anything, a live stream or things like that. And so people know that if they attack me on Twitter or on Threads, they tag the digital minister, and I will just use my phone and record a short sound snippet that is somewhat funny, so that it can go viral as a real-time clarification. So there's a lot of work that's done here. But the pre-bunking creates this non-adversarial sentiment, so that people from all three different parties can still share and partake in this meme, I'll call it a viral inoculation, a viral vaccine.
-
But if we had instead opposed those people and said that these people, these parties, always say the wrong things and so on, then they would not become carriers and fellow inoculators in the pre-bunking.
-
Thank you, Minister. I might just ask a question of my own here and then see if any of my faculty colleagues want to chime in with any thoughts. Since it's your time, we're grateful for how much you've covered in just a short amount of it. I wanted to ask you a question about strategy and leadership. One of the real challenges, I think, for cybersecurity professionals is that the current inbox is overwhelming all the time, and Taiwan is under constant cyber threat bombardment every day. Yet in this session, you've demonstrated that you are still able to carve out time for forward thinking, trying to always monitor the evolving threat landscape and empowering your team to be flexible and creative in problem solving. Just give us a little bit of a sense of how you approach this challenge of both countering what is a daily string of ongoing crises that you're defending the island against, as well as trying to look beyond the inbox, not just a week or a month out, but to some of the trends that you might envision over the next few years.
-
Yeah, so training an email-replying model really helps. I still read everything before I send it, but the model does most of the drafting, so there's that. And there's also the design of our ministry. Our ministry really is three very different parts that would belong to at least two different ministries in almost all other jurisdictions. In the ministry proper, we concern ourselves with universal broadband access, satellite connectivity, OneWeb and things like that, universal service for government services, data, open APIs, public code, and everything that is universal. So this is the infrastructure layer that the ministry proper covers. Within the ministry, there are teams to coordinate with two underlying administrations. One is the Administration for Digital Industries, the ADI, which is the competent authority for e-commerce vendors and also advertisement platforms such as YouTube and Facebook and TikTok, and also for cutting-edge AI evaluation, the kind of alignment assemblies we're doing to counter information integrity threats and so on. So the ADI is for applications, all the private sector applications of those universal digital technologies. Under moda there is also the ACS, the Administration for Cyber Security, which protects, of course, the government and the critical infrastructures, but also, through the National Institute of Cyber Security, all the publicly listed companies and private sectors and things like that.
-
And so the ideas of participation, progress, and safety are usually what three ministers, or two ministers, fight over. But in our case, it's all in my head. So anything we do is meant to increase the overlap between participation, progress, and safety. The inbox chase only becomes a problem if you adopt solutions that create problems on the other two sides of the triangle. If all the solutions are designed to increase the overlap, then it actually goes into a self-reinforcing mode over time. So, for example, how we use digital signatures to counter fraudulent advertisements also becomes a powerful counter-FIMI tool; you design safety and participation into progress. So by keeping this triangle in mind all the time, and also designing moda with no KPI for scoring points on any of those three corners, but rather with architecture, and how quickly we can design overlapping solutions, agility, as the main KPI, it frees my mind to think about more useful solutions, so that I can sleep eight hours every day, which is really the only way to think creatively the next day.
-
I have a question about how much of what you do has to be a central government function. How much could be a local government function, or how much could be a non-government function?
-
That's a three-hour seminar. I think most of my work could be done with the private sector taking care of the infrastructure, if their governance structure is democratic, that is to say, if they're subject to the same oversight and accountability. And there are some people trying that, right? The idea of data coalitions, the idea of for-purpose, with-profit companies, and so on. Because before this job, I was the minister in charge of social entrepreneurship. So there are some co-op-like entities that are striving for this kind of democratic governance. But we have to admit that democratic governance technology itself is a very emerging field. The technology to listen carefully and respond carefully to millions of people simultaneously is research at this point. So once that becomes development instead of just research in the lab, and we're investing heavily in that, then I would say that most of the infrastructure functions that we have now can be safely handed to these people-public-private partnerships. We've already seen this in the fact-checking ecosystem.
-
Most of the fact-checking ecosystem here in Taiwan is powered by the antivirus ecosystem: Trend Micro, Gogolook, and so on. They take the Cofacts database and participate in its governance, but it's still comprehensive packages that protect against unsolicited calls and messages. There's an app called Message Checker that just looks at all the pop-up notifications on your phone, so it doesn't matter which application you use, right? It's all part of the protection and so on. And these things are better done in the private sector than the public sector, of course, because otherwise it would be the government watching your phone. So for these very privacy-oriented issues, our private sector players are already trusted partners, and they are, in a sense, taking over, with some public oversight. And with time, I think, as long as the social sector feels that it can exercise the same sort of control and auditing over the private sector, we will see more and more of this role shifting into the private sector.
-
Just building on that: how can you internationalize it more? A lot of folks here are going to go into government and be leaders, or are leaders. If you could send a wish list to, say, the US government and US society about how we can expand this, what would you say?
-
Yeah, there's a policy chapter in my new book. But the main idea is: don't see Taiwan as somewhere exceptional, because, as I mentioned, back in 2014 the administration was enjoying less than a 10% approval rating, political apathy was at an all-time high, and as recently as 2008 people disputed election results for three years, right? So we've been there, but we overcame it, not because of some miraculous technology, but because of a commitment to plurality, meaning collaborative diversity, seeing diversity as fuel for co-creation. You know, the Pacific tectonic plate and the Eurasian one bump into one another all the time, and Jade Mountain, the highest point in Taiwan, rises seven centimeters every year. We have three or so earthquakes somewhere in Taiwan every day. So see those earthquakes as invitations for a more resilient building code or whatever, and see these conflicts as invitations for citizens to rise higher.
-
So, Minister Tang, we do not want to take the whole afternoon and beginning of the evening. I have one question, moving us towards the end of this conversation, which I've been thinking about while listening to this deeply impressive approach that you have to dealing with these issues. Are you sometimes worried that the antidote, the messaging that you are trying to create, can become as much of a problem as the disinformation itself? That by presenting what you see as the necessary antidote to disinformation and misinformation, you yourself could come to narrow down what people see?
-
Yeah, so the official hashtag for our ministry is #FreeTheFuture. And so, yes, I think it's always a tempting vice, really, to design anything to be perfect. I see myself as just a good enough ancestor, someone good enough to open up possibilities, so that the next generation, who are going to be much smarter than I am, can design their own responses to whatever situation they face. So I always make sure there is that overlap between participation, progress, and safety. It is the idea of widening the narrow corridor, so to speak. Every step we take, as long as it widens the corridor a little bit, pushing out the Pareto front, is going to be good enough for the future generations. That is why we've resisted quite vehemently, really, any top-down, lockdown, takedown, shutdown solution to disinformation, which tempted pretty much all the nearby jurisdictions, and I'm not talking about the PRC, I'm talking about Singapore and everywhere else. And so I think it is not an accident that you see Taiwan at the top of the democracy index in Asia, according to pretty much every outlet.
-
And that is not because we became much more democratic. It is because we kept at our democracy while other democracies backslid, especially during the three years of the pandemic. So, again, this is not about chasing some democratic ideal to the detriment of future choices. This is about responding to each and every threat in a way that makes participation easier, for progress and safety.
-
Thank you, minister. We are reaching the end of our time for which we’re very grateful, and I hope this can be a continuing engagement as we think about the leadership lessons that you’ve demonstrated in digital affairs and their relevance both in Taiwan and the United States and beyond.
-
So please join me in thanking Minister Tang.
-
We have a very small token of appreciation. As you might be aware, the Yale University mascot is a bulldog, and this one might just help keep the paper in your office uncluttered.
-
Thank you.