• … I am delighted that you are able to take the time to speak with me, minister. My colleagues and I at NewsGuard admire your leadership in this area and admire in particular your success in countering misinformation in Taiwan. You probably don’t know that much about NewsGuard so…

  • I have tried it out.

  • Good. Thank you so much. Perhaps I can tell you a little about NewsGuard, and, of course, one of the reasons I am glad to be in touch is that we are considering expanding beyond North America and Europe. We’re considering whether Taiwan might be the most likely first country in Asia for us to operate in, and I wanted to get your guidance and thoughts.

  • We started NewsGuard about three years ago. The idea was to try to help people understand the difference between generally trustworthy sources of news and information and sources that are generally not trustworthy. I’d spent my career in journalism. I ran “The Wall Street Journal” and before that was based for almost a decade in Hong Kong running the “Far Eastern Economic Review” and other news operations for Dow Jones.

  • It had become clear to us a few years ago when we founded NewsGuard that it had become difficult for normal people to be able, as they’re reading news on their phone through Facebook, or through Twitter, or through search, to know the difference between a generally reliable source and one that is not reliable.

  • We did a very simple thing, which was to identify nine basic, apolitical criteria of journalistic practice relating to credibility and transparency. “Does a website make corrections? Does it disclose its ownership?” In the countries in which we operate, we’ve rated all of the news and information sources that account for 95 percent of engagement.

  • Each website gets a point score from zero to one hundred and a rating shown as a red or green icon, plus a Nutrition Label explaining in detail the nature of the site.

  • So far, we operate in the US, the United Kingdom, Germany, France, and Italy. We’ll launch later this year in Canada, Australia, and New Zealand. We’re now looking to other potential launches.

  • Our core business model is to work with third parties interested in providing a safer internet. For example, Microsoft has licensed our ratings and labels for everyone who uses its Edge browser. Microsoft also uses our ratings to help improve the quality of Bing search results and a number of its other products.

  • In some countries like the United Kingdom, Internet service providers have licensed our ratings for their subscribers. These internet providers tell their customer households, “We didn’t create misinformation on the Internet, but we’re bringing broadband into your homes, so here’s a tool that you can use to protect your family from misinformation and hoaxes.”

  • Almost one thousand public libraries have downloaded our browser extension onto the computers that library patrons use. We’re used in many universities and schools as a news literacy tool. Likewise, hospital systems license us for their patients and staff.

  • We do have a browser extension that consumers can sign up for directly, for which we charge the equivalent of US $3 a month. That’s a small part of our business. Our core business is finding technology companies, Internet service providers and dozens of other partners that use our ratings and labels as a safety tool.

  • Eventually we hope that the leading digital platforms such as Facebook, Twitter, YouTube, and others, will give their users the choice to have our ratings embedded in their products, as a middleware solution. So far, the only technology company that has done that is Microsoft. We remain optimistic that eventually others will.

  • Minister, maybe I should stop there and just ask if you have any questions about our rating system, how we go about our process, or if you have any other questions based on what I’ve said so far?

  • My preference is always to add contextual information instead of taking anything down. I believe that our values are very much aligned.

  • When it comes to disinformation, we found that if you take anything down, people get enraged and they share it even more. If you classify certain words as hate speech, as some people in Germany have legally obliged their service providers to do, people just change to using some other innuendo, and the toxicity of the conversation goes down for a while, then goes up again.

  • I do believe that participatory journalism, that is, people understanding how journalism works and devoting their spare time to become part-time journalists, is the real root-cause solution.

  • I believe we’re very much aligned. From what I understand, though, Facebook and friends have already incorporated the international fact checking networks’ mandatory labeling, at first to flag material as disinformation, but more and more for all materials where the rating comes from independent journalists working with the fact checking organizations.

  • What is your relationship with the Poynter Institute and its coalition of people adding mandatory or advisory labels?

  • Thank you for that question. To your earlier comments, we strongly agree with you that it’s much better to give people more information rather than censoring something or taking down accounts. There’s persuasive research that says if untrusted digital platforms take down information, many people will think there must be something true in what is being censored.

  • Our relationship with the fact checkers is that we act as an amplifier of their work. When we do our ratings of news sites, one of the criteria we use is: does this website repeatedly publish false content? We often cite the work of the fact checkers. On the other hand, from the consumer’s point of view, the user’s point of view, we’re quite different from the fact checkers. Fact checkers have two big frustrations. One is that it’s very difficult for them to fact check a very high percentage of the false information being published. There’s so much of it. The other is that fact checkers are by definition checking the facts only after the fact, after stories have gone viral.

  • From a user’s point of view, Facebook and others may include fact checks, but again, only after the fact. The difference is that we rate at the domain level, the source level, rather than at the article or claim level. By seeing a red icon or a green icon and a point score, a user will know right away that if the story is from a red-rated source, they should proceed with caution.

  • In effect, what we do is we pre-bunk false claims. An extraordinarily high percentage of the most popular misinformation is spread by news or information sites–or sites claiming to be news or information authorities. Individual social media accounts then pick up misinformation and spread it, but they almost always cite back to what looks like an authoritative source.

  • But when false information and hoaxes come with a red icon in a person’s newsfeed, they’ll know if the claim is from an unreliable source. They’ll know to proceed with caution.

  • One example we give is, before COVID-19, there were a number of websites, mostly in Europe, that claimed that 5G technology caused cancer. We had rated several of these red because they published many hoaxes. When COVID-19 began, they began to publish the false claim that 5G caused COVID-19. Because we had already rated those sites red for being unreliable, people who had our rating would have had the chance to be warned and conclude, “Maybe this is not true,” whereas it took fact checkers days or weeks to see some COVID-19 false claims and then to report on them.

  • We collaborate with all the fact checkers. We actually have a separate product for defense ministries and intelligence agencies, which we call our Misinformation Fingerprints. It was created at the request of a unit of the US Pentagon and of In-Q-Tel, the US intelligence venture firm. This is a constantly updated catalog of all of the top current hoaxes and misinformation in a machine-readable format. Cyber Command in the US, for example, has used the Misinformation Fingerprints to trace Russian and Chinese misinformation and to identify all examples of this misinformation anywhere it exists on the Internet. This allows analysts to understand the nature of the misinformation, including its provenance, the origin of the misinformation, and who is sharing it to give it wide reach, such as through social media.

  • Fact checking alone is not enough. It doesn’t scale well. The digital platforms, while they work with the fact checkers, do not do a very good job of integrating fact checks into their products in such a way that users are truly empowered to know trustworthy sources versus untrustworthy sources.

  • If you go onto Facebook, you’ll still see COVID 19 hoaxes with no warning label, made by what look like authoritative sources. I don’t know the situation in Taiwan, in…

  • In Taiwan, if any post contains the word “vaccine” or “email,” there’s mandatory labeling. I’m aware of the dynamic that you’re describing, where the fact checkers, although they fact check at the individual-piece level, need to focus their energy on the truly viral ones.

  • It’s not really useful without a dashboard that can show which two or three disinformation items are trending at the moment. That, by the way, is the basis of the Taiwan model: identifying which disinformation is going viral.

  • We are facing a rather different configuration because I don’t think people on their mobile phones are eligible — let’s use this word, “eligible” — for a browser extension based solution.

  • We become essentially reliant on the individual app developers to integrate such capabilities, or on third-party developers, as with the chatbots from Trend Micro, our leading antivirus company, to add that functionality after the fact as a chatbot.

  • Overwhelmingly, these are the places where the disinformation first goes viral: the end-to-end encrypted chat rooms and end-to-end encrypted messages. You’re referring to a different configuration, where people have the capacity to install extensions into their own browsers, in which case it’s probably on the desktop anyway.

  • For people operating in a desktop environment, say in a library, I can certainly see the value in that, but I do not think they are the key influences in propagating and disseminating disinformation.

  • You’re absolutely right, of course. Most people will get their information from mobile devices and usually, of course, within one walled-garden app or another. We’re working as hard as we can to try to persuade the platforms to integrate us as a middleware solution, to be opt in or opt out, whatever they want.

  • On Mobile Edge, the Microsoft mobile app, there is an integration. It’s not as user friendly as the browser extension that we built, but it does show that it can be done. We’re working with a new social media company based in the UK and a new search company based in Silicon Valley, run by a former top Google executive, Sridhar Ramaswamy. His search engine, called Neeva, will integrate NewsGuard ratings and labels into search results so that people get instant guidance on the trustworthiness of sources in the search results.

  • Yes, I’m aware of that.

  • This integration enables people getting their news on a mobile device to also have access to the ratings and labels.

  • OK. It sounds like by partnering with Neeva and Edge Mobile, you already have the localization and display issues, such as right-to-left writing or ideographic layouts, figured out by their and your user experience teams, right?

  • If you were to, say, go mainstream and integrate into the mobile Twitter app and so on, you already have pretty solid examples to point the Twitter designers to. That’s my understanding.

  • Yes, you’re absolutely correct. There are different design choices that platforms can use, but the general idea is the same. There should be a red or green icon, an easy way for a user to get more information. The challenge is not really a technology challenge. The challenge is persuading the biggest digital platforms to make a middleware solution available as an option for their users.

  • So far, Facebook, Twitter, YouTube have been reluctant — maybe that’s the best word — reluctant to open their platforms to middleware integrations.

  • I hope that will change. As you’re probably aware, in the United Kingdom, the Online Safety legislation is designed to create a new duty of care by the largest digital platforms. The British Government has published a list of about 80 middleware solutions in different areas — bullying, harassment, sexual abuse, and misinformation — and they expect the digital platforms to make these middleware solutions available to their users to show that they have taken reasonable steps to protect their users from the harms they currently cause.

  • We hope that the pressure from the UK government will encourage the platforms to be more open. You’ve put your finger on the main issue, which is that the biggest platforms, at least so far, have been reluctant to open up to safety tools like ours.

  • When it’s about intimate images, they actually work quite quickly.

  • That each and every image and movie uploaded is automatically scanned and so on.

  • They did it with even more, let’s say, resilience than with the previous copyright violations, where enforcement was quite lax, maybe because there was no popular demand, just industrial demand. For sexually explicit images and videos, there’s both industry and public demand.

  • I don’t want to appear cynical at all, but I think the advertisers were very firm. They never wanted their ads to appear against sexual images.

  • That’s right. Against sexual abuse. That’s exactly right. I think we need to raise this abuse against journalism, crime against journalism, to the same degree: more than copyright abuse and approaching sexual abuse. I think that’s the winning ticket.

  • Somewhere between would be a big advance.

  • That’s right, because we’re currently below copyright abuse.

  • We do have a product for advertisers to enable them to make sure their programmatic advertising appears only on appropriate websites. We track very closely whose advertising is appearing on which websites. One of my favorite examples is the US Centers for Disease Control: the CDC has advertised on “Global Times” and other Beijing-controlled disinformation sites, alongside articles claiming that COVID-19 was developed at a US military lab.

  • The advertisers are beginning to see that they have a problem with the platforms and with programmatic advertising. That realization is putting pressure on the platforms and ad-tech companies, the same way pornography led the ad-tech companies to take new steps to ensure that programmatic ads didn’t run on pornography sites.

  • OK. Yeah. In Taiwan we’ve intentionally classified disinformation not as, say, fake news, but in the same category as spam or junk, as intentional falsehood that does public harm, mostly because people, and the advertisers especially, already have a distaste for scams and spam.

  • By classifying disinformation together with spammers’ schemes, they get the mental image that countering it is part of the integrity of their browsing or customer experience, which makes this easier to convince them to do. The integrity of their brand, essentially.

  • It’s almost positioned as if it were spam or virus software.

  • It’s spam that spreads at virus speed, but it’s the same thing.

  • Right. That’s a great way of thinking about it. Could I ask this?

  • We think about Taiwan not because we think it has a big misinformation problem. We think about Taiwan because it has a robust and free press, where we could operate freely and where there is, of course, the problem of disinformation targeting Taiwan by Beijing.

  • Is your sense that if we were to operate in Taiwan, we would be able to add another meaningful layer of protection for people there?

  • As I mentioned, I do think that the existing ecosystem, with its chatbot delivery mechanisms, is already, I wouldn’t say mature, but reasonably resilient. My suggestion, if you are to make an impact in Taiwan, is first, of course, to continue your work with Mobile Edge and others.

  • Taiwan has a very thriving Mozilla/Firefox developer advocacy base. Many of Firefox’s popular functions, extensions, or even its operating system used to be developed out of this Taiwan community.

  • What I would suggest is to engage with the Mozilla Taiwan community, who can probably improve a lot on the extension-based experience. They have a lot of experience in building alternate browsers.

  • In Taiwan, we have a lot of start-ups and so on tackling this anti-scam, anti-spam browsing experience, as I mentioned. For example, the Puffin Web Browser, which is actually Taiwan-based: it’s a US company, but all its employees are in Taiwan.

  • This kind of isolated browsing experience shields against computer viruses because it runs on another managed machine, but it also makes introducing far more sweeping changes to the user experience much easier, because in a sense they’re just transforming these middlewares into the final presentation layer.

  • The public library computers are just displaying the resulting transformation layer. This is much easier than installing individual pieces of applets or JavaScript to run on a public library computer, which is always an uphill battle, as you probably know. Your natural allies are these already established middleware solutions.

  • By offering your service as part of the toolkit against scam and spam, you don’t have to do the business development yourself, is my suggestion.

  • Thank you. That’s a wonderful suggestion. I appreciate that. You mentioned Trend Micro earlier, and we have worked with them on their messaging apps. I think it’s only in the English-speaking markets: if you have a question about a source, you can send the name of the source, and it will come back with our rating and Nutrition Label. This is a form of a middleware solution.

  • Yeah, and they have a QR code scanner as well. In Taiwan, when you check in at a venue, you scan a QR code that redirects to SMS, and because of that, people gain a habit of scanning QR codes, which can be dangerous for their phones.

  • Trend Micro rolled out its own QR code scanner and integrates it with, as I mentioned, those anti-scam, anti-spam tools, so if people scan a QR code and see a red label saying this is a disinformation domain, that works too as part of their experience.

  • In that sense, they’re like the Puffin browser. They’re one step between any customer and the website; they’re a semi-browser.

  • It’s quite a change for me to have a conversation with a digital minister, giving me technology tips.

  • (laughter)

  • Yes, right. I think what we’ve been discussing are just proof of concepts.

  • In order to make a real impact, what we did when we introduced the SMS-based QR code was to work with LINE itself, so that its main QR code scanner, the scanner that’s used to add contacts, would support it. We convinced LINE’s Japan headquarters to add an SMS-specific module, so that people know this is the SMS-sending kind of QR code, instead of blindly accepting each and every website.

  • A dedicated, tailor made middleware for counter pandemic, contact tracing purposes.

  • If some sort of labeling can be done at a domain level, at an embedded browser, within the LINE ecosystem, just like the embedded browser within Skype, or embedded browser within the Messenger, then we are talking. That’s like 90 percent of people’s actual dedicated time. What we’ve been discussing is 4 percent.

  • Correct. Minister, do people in Taiwan use some of the SMS systems in order to learn whether a particular claim is true or not true?

  • They use a very popular LINE bot for that, and there’s also a website that’s part of the g0v community’s efforts, called CoFacts, for collaborative fact checking.

  • CoFacts is upstream, identifying trending disinformation. Downstream there are the Taiwan FactCheck Center, Trend Micro, Whoscall, and other folks. CoFacts is the main community that works at the upstream level.

  • Got it. That’s fascinating. I think being better able to work with the development community there would add to the reasons to have a presence in Taiwan.

  • You can also serve as a platform through which these newer innovations out of Taiwan, which have had reasonable success in our jurisdiction, could be introduced to libraries in the UK or something.

  • Yeah, why not. I think Trend Micro, in particular, is also quite active in North America and Europe.

  • Whoscall is actually getting some adoption in Japan as well, which is the other Indo-Pacific jurisdiction that’s very friendly to the press.

  • We’re considering Japan as well. As you know, Japan is a great and large market, but on the other hand it’s a difficult market. I think Taiwan might be a little bit more open.

  • (laughter)

  • Whoscall seems to be enjoying reasonable success there. I believe that’s because, in Japan, journalistic integrity is held on a different stratum from commentaries and information on the web.

  • In Taiwan these two sometimes sit beside each other, but in Japan they rarely do. Whatever you prototype in Taiwan, if it works to a certain degree, then serves as a very convincing example for Japan to say, “We can then do better, because we have an even more established journalistic integrity sector.”

  • Minister, I think you’re right about Japan. The Wall Street Journal did a joint venture with Nihon Keizai Shimbun, Nikkei. I was amazed at the difference in standards. We thought we had high standards; they had truly high standards.

  • Very impressive. When you think about misinformation problems in Taiwan, at this point are you more concerned about COVID and health topics, or more concerned about disinformation from Beijing or some other area of concern?

  • Certainly election integrity. There’s a very special relationship between disinformation and a partisan dynamic during elections.

  • In Taiwan, we’ve seen that the resilience framework of the delivery mechanisms we just talked about basically works really well in day-to-day operations. Even during COVID, I believe it has only strengthened itself.

  • Only in the days leading up to a major election or referendum does the volume of disinformation actually exceed the capacity of both the voluntary and professional fact checkers, as well as the other automated mechanisms, so much so that everyone needs to adjust their heuristics on the fly. That’s because, of course, the payoff is high and it’s really just these 72 hours.

  • Mostly, the integrity of the counting process itself, which is counting paper ballots, live-streamed and filmed by YouTubers of different party affiliations, works quite well, so people don’t question the result of the election itself. Still, there’s the polarization, the name calling, the divisiveness, and the toxic behavior that the US, as an advanced democracy, probably doesn’t have, but in our kind of democracy we have plenty of those.

  • (laughter)

  • I’m mostly worried about that. We’ll see in our December referendum, which has four national referendum topics, whether these dynamics come into play and whether the capacity of the social sector organizations, including CoFacts and everything we just mentioned, is up to the task of handling the incoming torrents. If you ask me what’s on my mind, this is on my mind.

  • Even before the Internet, politicians would spread disinformation, it’s always been part of the trade. We did some reports on the German elections recently. It wasn’t as bad as the US election, but it surprised the Germans.

  • That’s right. We need to leverage the election times to highlight the public health harm of hurting the… call it the public mental image. If the mental image of the public is filled with scams and spam, then people do not have a good democratic experience.

  • Maybe we’ll, over time, think authoritarianism isn’t that bad after all. This integrity of the mental image is certainly, at this moment, not ranked as high as the integrity of body image by the social media providers.

  • The election times really are when we see Cambridge Analytica and many other incidents. These are the places where the public consciousness recognizes that this is something worth protecting.

  • Exactly. Minister, I want to thank you very much for taking the time. I also want to thank you for protecting free and open journalism in Taiwan. It is depressing what has happened to the once-vibrant news media in Hong Kong. Such a sad situation.

  • When I was a child it was the other way around: we needed to smuggle our human rights violations to the correspondents in Hong Kong so the international community would learn about the Taiwanese martial law experience. Let’s say that we’re returning the favor, but not in a very happy mood.

  • I’m just delighted that Taiwan is in the position that it’s in. Thank you very much for your time.

  • Before we decide one way or the other, I might try to get another little bit of your time just to update you.

  • Thank you especially for your suggestions about working with some of the middleware providers. That’s an excellent suggestion. I appreciate it.

  • The anti-scam/spam community is quite powerful in Internet governance. By labeling yourselves as an anti-scam/spam solution, I believe that is, in the short term, the fastest way to get the social media companies to come to the table.

  • That’s a great way of putting it. I’m going to start using that phrase.

  • Thank you so much. Frances will provide you with the transcript. We co-edit for 10 days and then we publish.