• Hi, I’m Audrey Tang. I’m Taiwan’s Digital Minister, in charge of open government, social innovation, and youth engagement.

  • Cool. Today’s interview is about disinformation. I might sit here, actually. Sorry for not being too formal, but it’s easier on my arms.

  • There have been a number of legislative attempts to fix the problem of disinformation. I understand that you are also pioneering some technologically oriented initiatives?

  • Or rather, social innovations, really, like changing how people perceive the issue, because in Taiwan, disinformation is not the first issue where we’ve had this social-sector-oriented way of building the models that hold the solution.

  • We have an infrastructure for countering spam, but we don’t have any laws pertaining to spamming specifically, other than a draft currently in the parliament. Spam, you don’t hear about this much in Taiwan anymore, because through the global Spamhaus project we work with all the email providers to make sure people can flag something as spam.

  • Then people are essentially donating what’s not really private communication to begin with, because a spammer sends the same thing to everybody, into this social-sector network that analyses the senders’ patterns, so that new emails from the same senders that fit these patterns go into the junk mailbox instead of the inbox. That is a classical case of people just solving things together through the social sector’s innovations.
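
  [A minimal sketch, in Python, of the flag-pooling model described above; this is illustrative, not the actual Spamhaus mechanism, and the threshold and names are invented:]

```python
import hashlib
from collections import defaultdict

FLAG_THRESHOLD = 100  # illustrative: patterns flagged this often are treated as spam

# fingerprint -> number of independent user flags, pooled across providers
flag_counts = defaultdict(int)


def fingerprint(sender: str, body: str) -> str:
    """Hash the sender plus a normalized body. Spam campaigns send
    near-identical content to everybody, so their hashes cluster."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(f"{sender}|{normalized}".encode()).hexdigest()


def report_spam(sender: str, body: str) -> None:
    """Called when a user at any participating provider flags a message."""
    flag_counts[fingerprint(sender, body)] += 1


def route(sender: str, body: str) -> str:
    """New mail matching a widely flagged pattern goes to junk, not the inbox."""
    if flag_counts[fingerprint(sender, body)] >= FLAG_THRESHOLD:
        return "junk"
    return "inbox"
```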

  • We also had a very similar configuration called iWin, which, again, is a social-sector organization that handles reports of, for example, Internet harms relating to minors, exploitation, and intimate images shared against a person’s consent, and things like that.

  • Disinformation, while relatively new, is not the first issue that has been tackled by this configuration. For disinformation, we also introduced ways to enable the social sector and the private sector to work together.

  • For example, for the social sector to flag something as disinformation, to collaborate with institutional media on fact-checking, and for these results to be fed back into the private sector’s algorithms. These are all within this idea of self-regulation or co-regulation, rather than a top-down application of law, which handles only the most serious cases.

  • You’ve been reaching out. Are you like the bridge between social groups, civil society and…?

  • One of the bridges, one of the many bridges.

  • Helping a company, say LINE or Facebook, tweak its algorithms to identify disinformation?

  • That’s right, that’s right. Many governments now have this role of what I call a digital ambassador, essentially realizing that we’re dealing with overlapping jurisdictions here, and it’s not to be solved in a purely domestic way. Rather, we must first observe what’s happening and share the same facts among all the stakeholders, and the stakeholders reflect on it and maybe design some norms.

  • Instead of just implementing these norms, which can’t all be the same around the world because every jurisdiction has its own culture, we just share those norms for the local implementers to implement. When they implement the norms, of course, people will have further feedback, which then feeds back into this loop.

  • This norm-building, or what we call norm-first design, I think is a good first step toward collaborative governance, rather than a top-down way of the government imposing its laws, the private sector imposing its algorithms, and the social sector imposing its social sanctions and boycotts [laughs], which doesn’t usually lead to a fruitful discussion. Norm-first, I think, is better than the other directions.

  • One of the challenges that people have talked about with disinformation here is that so much of it is social-media-based: you can’t regulate a private conversation, and it’s very hard to regulate a post that’s shared by an exponential number of people. What kind of norms can be designed around that?

  • For spam, generally the norm is that if you’re sending something to five million people claiming to be royalty with a trillion dollars in the bank, available if they only pay $10,000 in transaction fees, and so on, it’s genuinely not considered “private communication.” It’s rather an abuse of this private communication channel that is email, to further a purpose that is harmful to the public.

  • It’s not that a single email could be construed as spam; it’s rather the behavior, the activity itself. It’s coordinated, inauthentic behavior, and we look at these CIBs instead of individual lines. The same lines would be fine if it were real royalty [laughs] sending you a real request, right?

  • It’s this inauthenticity. It’s not about the content; it’s rather about the authenticity of the sender, and how much coordination there is. If you’re only sending it as a prank to a hundred friends, that’s one matter, but sending it to five million people is another matter altogether.
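
  [To make the scale distinction concrete, here is a toy sketch in Python; it is not any platform’s real detector, and the thresholds are invented:]

```python
from collections import defaultdict


def find_cib(events):
    """events: iterable of (account, recipient, text) send events.

    The text itself is never judged for truth; only the coordination
    (how many accounts push it) and the reach (how many recipients)
    matter, per the norm described above."""
    accounts = defaultdict(set)  # text -> accounts sending it
    reach = defaultdict(set)     # text -> recipients receiving it
    for account, recipient, text in events:
        accounts[text].add(account)
        reach[text].add(recipient)
    return [
        text
        for text in accounts
        # a prank to a hundred friends passes; a coordinated push from
        # many accounts to millions of people does not
        if len(accounts[text]) >= 50 and len(reach[text]) >= 1_000_000
    ]
```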

  • For something like sharing a news post about some kind of political event, but one that’s misinformation, how are…

  • The norms around it.

  • Yeah, how are you looking at the norms around that?

  • What we’re looking at now is this: if you’re paying the platform to reach a specific audience that has been determined beforehand through a mixture of social engineering and analytics, so that you can precisely target their cognitive biases with hyper-targeted messages, those messages wouldn’t make much sense if the public got a chance to scrutinize them, because with precision targeting, only the people who would fall for the narrative receive the messages.

  • For these hyper-targeted, precision-advertisement-based messages, by this year, I think, the norm is that this is not an appropriate use of the advertisement system; it is rather an abuse of the advertisement system.

  • The norm now, I think, across the globe, is that it’s OK to do precision targeting, but once it’s political, once it’s about something that could cause social harm and there is a chance of abuse, then someone should just reveal it.

  • This is the same as campaign donations. There really is nothing wrong with donating to your favorite councilor, candidate, or even presidential candidate, and there’s nothing wrong with donating to a party, but if you donate under somebody else’s name, that is another thing altogether.

  • I think there is transparency on the donation or advertisement side, and accountability, where people get a public view of the precision-targeting criteria and so on, and have the ability to go back and forth to hold the advertisers accountable. These two, I think, are the new norms as of this year around precision-targeted messages.
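
  [A sketch of what a permanently archived disclosure record could hold under these two norms; the schema is hypothetical, not any platform’s actual ad-archive format:]

```python
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass(frozen=True)
class PoliticalAdRecord:
    """One entry in a hypothetical public, append-only ad archive."""
    sponsor: str             # who paid, under their own name, as with donations
    platform: str
    spend_usd: float
    first_shown: date
    targeting_criteria: dict = field(default_factory=dict)  # e.g. {"age": "45-65", "region": "Taipei"}
    creative_text: str = ""  # the message itself, kept in perpetuity

    def disclosure(self) -> dict:
        """The full record is public by design; nothing is redacted."""
        return asdict(self)
```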

  • When you say they’re new norms as of this year, these are conversations that you’ve had with advertising sites?

  • The advertising side, of the advertisers, I think you mean, and the platforms that accept those advertisements, and also the users’ activities when they are determined to be something like precision targeting. There is this kind of misclassification, where the algorithm just classifies somebody as inauthentic behavior whereas they’re really just passionate about it, and so on.

  • There are different stakeholders here, but I think people generally agree that once you pay a lot of money to hyper-target a certain number of people, and these people’s profiles are really kind of vulnerable to this message…

  • At least this behavior, including the targeting criteria, should be public, and it should be kept in perpetuity for people to analyze independently. This behavior itself must become a social object around which people can have a back-and-forth conversation and hold each other accountable.

  • Are there any other concrete areas where you think norms should be established next year, or the year after?

  • Currently, the norms around synthetic images and videos are still being debated: whether it is OK to portray someone as saying words they never said, or performing actions they never performed. Previously it was very costly; it required the equivalent of the post-production of “Lord of the Rings” to synthesize Gollum, right? [laughs]

  • Because it was so heavy an investment, you could usually find the institution that did it and hold it accountable in the traditional way that we hold institutions accountable. Nowadays, this iPad can do it. It has been democratized so much that it’s virtually impossible to trace back who the original maker of these synthetics is, because everybody can do it.

  • Also, because of the loss of traceability, the whole attribution, the why, what motivates them to make this synthetic video, is also lost. Because of that, I think the new norms around these kinds of synthetic videos and audios are still being hotly debated, but by sometime next year, after a few major elections touched by these technologies, we will probably see the norms emerge.

  • When you have these conversations about norms, what are some of the biggest challenges that you encounter? Misunderstandings, maybe.

  • First of all, norms are not by definition a global thing. Take the social norms in Taiwan: when we rolled out the self-driving-vehicle sandbox – the first testing plate was just issued last week, I think – people had a lot of conversations around what the proper norms are for these self-driving vehicles when they interact with people.

  • In Taiwan, the MIT Media Lab folks, the team behind these self-driving vehicles, ran a test about which kinds of people such a vehicle should yield to, which people it should prefer and give more freedom to walk, and so on.

  • In Taiwan, people overwhelmingly said that you should yield to the elderly, and then maybe to handicapped people in wheelchairs or people with blindness, and then maybe to pregnant women, and then to children.

  • In Boston, which is where the team originates, everybody said you should care for the children, and people cared far less about the elderly. This small anecdote shows that there is no such thing as a global norm. There is only common understanding: I understand that things are like that in Boston; people in Boston understand that in Taiwan, things work differently.

  • People agreeing to disagree on certain parts but coming to a common understanding about the local social norms, that is most likely the hardest thing, because it’s cultural translation. That’s something machine translation has yet to catch up to.

  • Even this idea of common understanding, which is called 共識 in Mandarin, is usually translated as consensus, which means something that people can sign their names on; that is too strong a consensus for the term 共識 in Mandarin, which is more like rough consensus. Small things like that are actually the most challenging.

  • Is there a concrete example that illustrates that cultural relativity, so to speak, when it comes to disinformation and norms?

  • Yes. One concrete example is around campaign donations. In Taiwan, we hold campaign-donation transparency to an extremely high standard, because we have a separate branch of government doing that. The Control Yuan audits everybody, including the legislature, the courts, and the administration.

  • The Control Yuan, which is difficult to translate – how do I translate that? – does all the campaign-donation record analysis by itself. It has a data-scientist team and so on.

  • However, the popular demand is for them to publish the raw data, including the individual records of campaign donations, for independent analysis, so investigative journalists can do a systemic comparison and say that among all these candidates, only this candidate is not declaring this kind of expense or campaign contribution. That is very difficult to do unless you have access to the raw data.

  • Because of that popular demand, starting from the previous election, the Control Yuan just publishes everything – unstructured data – for independent analysis, which is by far not the norm in the other countries that I have conversations with.
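
  [A sketch of the kind of systemic comparison described above, assuming a hypothetical CSV export with candidate, category, and amount columns; the Control Yuan’s real published format differs:]

```python
import csv
from collections import defaultdict


def undeclared_categories(path):
    """Flag categories that every candidate but one declares,
    mirroring the journalists' comparison described above."""
    declared = defaultdict(set)  # category -> candidates declaring it
    candidates = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            candidates.add(row["candidate"])
            declared[row["category"]].add(row["candidate"])
    return {
        category: candidates - names  # the lone non-declaring candidate
        for category, names in declared.items()
        if len(candidates - names) == 1
    }
```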

  • Because of that, in Taiwan, most of the money that would have gone to campaign donations in other jurisdictions went instead to precision targeting, because advertisement doesn’t need to be declared with the same level of accountability, the radical transparency at the Control Yuan level.

  • This is something that’s difficult to get across unless you know the post-Sunflower political climate in Taiwan and the high standard that people hold political practitioners to. That’s just one example.

  • What’s my next question that I wanted to ask?

  • When I talk to non-technical people, the way they frame this approach to disinformation is as trying to balance the contradictory interests of free expression and controlling disinformation. From a technical perspective, is that the right framework within which we should be thinking about disinformation?

  • Did you talk to Taiwanese people or just people in general?

  • Think-tanky people.

  • Oh, think-tanky people. I think it’s generally true that when we are talking about disinformation, we want to balance it against freedom of information, freedom of expression, and freedom of assembly, the general civic freedoms.

  • Because of the difficulty of defining how much freedom of speech is helpful, and whether something is an abuse of the freedom of speech, we generally say that we hold freedom of speech, but also of assembly and so on, as a core value. That’s an easier framework. It’s more like: this is something we’re not willing to cave in on, to give up. Then we innovate with that as the premise.

  • This is in sharp contrast with most of our nearby jurisdictions, which frame it, as you said, in a more instrumental way: how much freedom of speech and freedom of journalism are we willing to sacrifice in the service of a more harmonious society? That is their usual framework. In Taiwan, it’s not like that.

  • When people still remember martial law, as I do, we just don’t want to go back there. Not even partway there; we don’t want to go in that direction. Because of that, all the innovations that we do, be it the rapid response, be it the social sector’s collaborative fact-checking, and things like that, are all based on the simple idea that a minister’s words should not be worth more than a journalist’s.

  • That is what I think sets Taiwan apart from our nearby jurisdictions.

  • What do you remember from martial law that informs your work now?

  • Both of my parents are journalists, and they worked before martial law was lifted. They worked when freedom of speech was by far not the norm here in Taiwan. All their work needed to go through official censors.

  • When I was very young, I remember my parents having conversations about how much to write, how much to omit, whether their publisher, Mr. 余紀忠, would defend their journalism to the KMT party, or whether a piece went too far and would in fact create a liability for the entire publishing apparatus.

  • These are conversations that some of our nearby jurisdictions are forcing people to have right now: self-censorship, journalistic standards, and balancing against the so-called party-harmony thing.

  • After martial law was lifted, we saw an absolute blossoming of different kinds of media and different kinds of speech, which informed Taiwan’s democratization process and, I would argue, is the key reason that our democratization has been pretty much nonviolent so far, because the violence was in the market of ideas. [laughs]

  • Once things are fought out there [laughs] and people agree on a common understanding, we don’t have to fight it as much on the street, as other jurisdictions currently are.

  • Because of that, I think this core value is held as a treasure by all parties in Taiwan.

  • Is it possible to build a transparent process into the technological vetting of, say, targeted advertisements or political contributions? That’s been one worry with the legislative amendments that have passed: that there are no clear standards for how disinformation is identified and then dealt with.

  • There is a legal definition, which builds on existing laws. It must be intentional. It must be untrue. It must be harmful to the public, not to the minister’s image; that is just good journalism. In any case, because we have plenty of existing laws and regulations pertaining to these definitions, we’re not building any new legal concepts.

  • What we’re saying is that there is an existing law that says that when there’s an epidemic like SARS, and you, in the analog world, spread untruths about SARS and mislead people into going into places that put their health in hazard, and people get harmed or die because of it, there is criminal liability for the person who propagated that message.

  • This is from way before the digital age of social media. What we’re saying is that if you do the same in cyberspace, the penalty should be the same. That’s the extent of what we do.

  • We’re digitizing the cases pertaining to the laws, which are often written in a pre-social-media language that excludes digital media, because sometimes when they enumerate the publishing media, they say on paper, on CD, on radio, [laughs] and so on, and just forget about the Internet.

  • Most of Minister Lo’s work on this is just making sure that these concepts held in the analog world are applied in exactly the same, appropriate way in cyberspace. We are not building new legal concepts.

  • What kind of researchers would you need, though, to apply these concepts to everything that goes on the Internet from a news outlet?

  • It’s not about news outlets. It’s more about, as I said, intentional, harmful untruths. People who have a premeditated intent to cause harm, spreading disinformation, that is just illegal. It could also be spreading not disinformation but misleading information, what’s called malinformation, or information operations and things like that.

  • The idea is that, if we can prove in a court process the intent and harm, disinformation is just one of the many kinds of behaviors that could fulfill the intent to cause harm. Of course, this process is just like any other court-system cases, subject to public scrutiny.

  • It’s relying on the public reporting potential cases, and then either a judicial or a ministry of…

  • This is saying that the jurisdiction over public harm still resides with the court system, in the judicial branch. This is the administrative branch saying it’s not the case that the ministries are pillars of truth. Rather, the ministries give out our own assessments as quickly as possible to the fact-checkers and the journalists, who can then compare the different sources of information and draw their own conclusions.

  • If people could assume that a journalist’s words are worth less than a minister’s words, then we wouldn’t have this kind of healthy relationship.

  • Cool. Thank you very much. Do you have any questions for me or…

  • We’re not out of time?

  • Have you seen concrete results from your targeted…

  • The paid-advertisement thing was quite interesting.

  • There are quite a few political campaigns already declaring themselves in the Facebook targeted-advertisement archive. Twitter put a twist on it, because they are not accepting political [laughs] advertisements anymore. Even for Twitter, we’re also seeing proactive disclosure whenever they detect a block of IP addresses originating from the PRC that doesn’t need a VPN, doesn’t need a proxy, to attack Twitter.

  • They not only closed these hundreds of thousands of accounts, they actually published the fact that these are dedicated computers within PRC territory that don’t need to bypass the Great Firewall, because they’re designed for offense, to sow discord around the Hong Kong case on Twitter.

  • This revelation, which was corroborated by Facebook and Google, also resulted in Twitter publishing, exactly as I said, the individual raw data on the identities of these accounts. People in Taiwan, especially investigative journalists working with data scientists, did a pretty good analysis of those public data sets provided by Twitter.

  • This is a new norm surfacing, which says: if the private-sector platform doesn’t have the local context to make the analysis, and they don’t want to jump to conclusions, then instead of saying, “We just don’t know,” they can say, “Here is some data. Please help us complete the analysis in another context.”

  • That’s what I was wondering: the methodology by which Facebook and Twitter had determined these 10,000 accounts. Was it because of the IP addresses, and then they had to get access?

  • The origin. They don’t even need to find overseas proxies, because they have their access to the outer Internet. That means it’s not necessarily state-sponsored, but it must be state-blessed for that to happen. [laughs]

  • You are in conversations, then, with social-media groups, making sure that you’re on top of this activity?

  • That is my job, yes.

  • Cool, cool, cool. Do you have any questions for me?

  • Thank you very much.

  • If I may, a question about the NCC cases. There have been some cases of the NCC punishing CTI TV?

  • That’s cable TV, no?

  • What we were talking about was social media.

  • I’m just citing it as one of the examples, because they were fined for publishing misinformation. How is that harmful to the public?

  • I don’t think they’re being fined for publishing misinformation. They’re being fined for not adhering to the fact-checking process that they declared they would follow as an institutional media. It’s defaulting on a promised media workflow. That’s the official reason.

  • It’s completely different from what we just talked about, reporting inauthentic behavior from fake social-media accounts, because those accounts never promised to be institutional media to begin with. These are completely different cases.

  • I’m also interested, regarding the NCC again: what’s the due process for deciding whether something is misinformation or not? Who gets to decide that?

  • The NCC only insists that institutional media, when they’re using public spectrum, or cable TV in this case, adhere to the journalistic fact-checking process that they themselves designed and declared when they first applied for a license. Those are the criteria the NCC is using. They’re certainly not overriding editorial judgments.

  • What they’re saying is, suppose you say you would follow this process, but in reality you just took a picture from social media. Because it was posted at this hour and five minutes later it became news, obviously the process [laughs] that you declared you would follow as an institutional media was not gone through.

  • Unless, of course, there is some super-innovative technology that can complete source-checking and fact-checking in five minutes; then I would want to know about that as well.

  • (laughter)

  • That is, generally speaking, not saying that truth must be arbitrated by the NCC. It’s generally saying that institutional media are, after all, different from a person commenting on the Internet.

  • If an institutional media just used this as a single source, then it defeats the purpose of asking for a license to operate as institutional media in the first place. I am not supervising the NCC. The NCC’s requirements here are completely different from those for the social-media companies.

  • Although the social-media companies are now also doing some governing of their own content, they’ve not used public spectrum or laid any cables. Indeed, they didn’t ask for any license to operate. For them, this is the norm-first part, whereas a license, I think, brings its own bylaws.

  • Is there any way you can quantify just how much inauthentic behavior you [laughs] see on Taiwanese social media?

  • That’s actually the question of the year. What we are now asking of the social-media people is that, because Facebook now has the capability to answer these questions, they quantify that through the tools they provide to advertisers, like CrowdTangle and so on.

  • They actually have a workbench where, if you put in some criteria that you, as a social scientist or investigative journalist, not Facebook, would define as inauthentic, and you enter those queries into this workbench, they can actually give you back the answer without compromising users’ privacy, using a technology called differential privacy.
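
  [A toy illustration of the differential-privacy idea mentioned here: a counting query answered with calibrated Laplace noise, so the aggregate is usable while no single user’s presence can be inferred. Facebook’s actual tooling is far more involved:]

```python
import random


def dp_count(records, predicate, epsilon=0.5):
    """Return a noisy count of records matching the analyst's criteria.

    A counting query has sensitivity 1 (adding or removing one user
    changes the true count by at most 1), so Laplace noise of scale
    1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    # the difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```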

  • The understanding is that they provide the use of that to partners who pay them for advertisements. They also provide it to selected social scientists and not-for-profit organizations. I don’t know how many of those think-tanky people you talked to had access.

  • What we are looking at is whether it’s possible for everybody to repeat such an inquiry, because that’s how you get academic rigor. It’s only useful when the social scientist can say, “There’s X percent of CIBs, and I give in my paper the query exactly as I entered it into the workbench,” and FB can say, “Here is the public workbench where you can validate it yourself by entering those queries.”

  • That’s the only way that I know of to build public accountability. If I give you a number, even if I have quantitative information to back this number, unless you can independently reproduce that number, it’s actually qualitative. It’s not really [laughs] a number. It’s an assessment of feeling. That is the question of the year.

  • We’re working intently with the platforms to make this configuration possible.

  • How possible do you think it’ll be?

  • Mathematically, it’s very easy.

  • (laughter)

  • The hard problem has been solved by brilliant mathematicians a decade or so ago.

  • First, we need to translate these mathematical concepts into legal concepts that regulators are happy with. Otherwise, it could easily be portrayed as, “Everybody can invade everybody else’s privacy.” That would be a nightmare for everybody involved.

  • It’s more of a big cultural-translation thing. That’s the first thing. The second thing is that we also need to make sure that the data sets people can query from can be appealed. If you find systemic omissions in those data sets, there must be a very good reason.

  • If the social-platform company cannot come forward with a good reason for systemic omissions, then there needs to be a mechanism to look at those omissions and say, “That’s a really good reason,” or “that’s just a business reason.” That’s what, after Cambridge Analytica, people are asking about.

  • As far as I understand, FB is working on its oversight board with the goal of solving this very problem. It’s in its early days, and we don’t know yet whether this private-sector governance with its own judicial branch is a good solution to the problem. We will see.

  • We’re also seeing different innovations across different multinational platforms. There is also some competition in the innovation landscape around here. It’s too early to say, but within a year or so, I think the picture will be much clearer.

  • Are you able to disclose which platforms you are in talks with?

  • Sure. It’s public information: anybody who signed on to this self-regulation pact — the norm package — in Taiwan is willing to be talked with. In Taiwan, that’s Facebook, Google, LINE, Yahoo, and PTT to begin with.

  • Any more questions? I think that we’re pushing up against our allotted time.

  • It’s good. Thank you.