… I am delighted that you are able to take the time to speak with me, minister. My colleagues and I at NewsGuard admire your leadership in this area and admire in particular your success in countering misinformation in Taiwan. You probably don’t know that much about NewsGuard so…
Good. Thank you so much. Perhaps I can tell you a little about NewsGuard. Of course, one of the reasons I am glad to be in touch is that we are considering expanding beyond North America and Europe. We're considering whether Taiwan might be the most likely first country for us to operate in within Asia, and I wanted to get your guidance and thoughts.
We started NewsGuard about three years ago. The idea was to try to help people understand the difference between generally trustworthy sources of news and information and sources that are generally not trustworthy. I’d spent my career in journalism. I ran “The Wall Street Journal” and before that was based for almost a decade in Hong Kong running the “Far Eastern Economic Review” and other news operations for Dow Jones.
It had become clear to us a few years ago, when we founded NewsGuard, that it was difficult for normal people, as they're reading news on their phones through Facebook, Twitter, or search, to know the difference between a generally reliable source and one that is not.
We did a very simple thing, which was to identify nine basic, apolitical criteria of journalistic practice relating to credibility and transparency. “Does a website make corrections? Does it disclose its ownership?” In the countries in which we operate, we’ve rated all of the news and information sources that account for 95 percent of engagement.
Our core business model is to work with third parties interested in providing a safer internet. For example, Microsoft has licensed our ratings and labels for everyone that uses its Edge browser. Microsoft also uses our ratings to help to improve the quality of Bing search results and a number of their other products.
In some countries like the United Kingdom, Internet service providers have licensed our ratings for their subscribers. These internet providers tell their customer households, “We didn’t create misinformation on the Internet, but we’re bringing broadband into your homes, so here’s a tool that you can use to protect your family from misinformation and hoaxes.”
Almost one thousand public libraries have downloaded our browser extension onto the computers that library patrons use. We’re used in many universities and schools as a news literacy tool. Likewise, hospital systems license us for their patients and staff.
We do have a browser extension that consumers can sign up for directly, for which we charge the equivalent of US $3 a month. That’s a small part of our business. Our core business is finding technology companies, Internet service providers and dozens of other partners that use our ratings and labels as a safety tool.
Eventually we hope that the leading digital platforms such as Facebook, Twitter, YouTube, and others, will give their users the choice to have our ratings embedded in their products, as a middleware solution. So far, the only technology company that has done that is Microsoft. We remain optimistic that eventually others will.
When it comes to disinformation, we found that if you take anything down, people get enraged and they share it even more. If you classify certain words as hate speech, as Germany has legally obliged service providers to do, people just switch to some other innuendo; the toxicity of the conversation goes down for a while, then goes up again.
I believe we're very much aligned. From what I understand, though, Facebook and friends have already incorporated mandatory labeling from their international fact-checking networks: at first only to flag material as disinformation, but more and more on all materials, with the ranking coming from independent journalists working with the fact-checking organizations.
Thank you for that question. To your earlier comments, we strongly agree with you that it’s much better to give people more information rather than censoring something or taking down accounts. There’s persuasive research that says if untrusted digital platforms take down information, many people will think there must be something true in what is being censored.
Our relationship with the fact checkers is that we act as an amplifier of their work. When we do our ratings of news sites, one of the criteria that we use is: does this website repeatedly publish false content? We often cite the work of the fact checkers. On the other hand, from the consumer's point of view, the user's point of view, we're quite different from the fact checkers. Fact checkers have two big frustrations. One is that it's very difficult for them to fact check a very high percentage of the false information being published; there's so much of it. The other is that fact checkers are by definition checking the facts only after the fact, after stories have gone viral.
From a user’s point of view, Facebook and others may include fact checks, but again, only after the fact. The difference is what we do is we rate at the domain level, the source level, rather than the article claim. By seeing a red icon or a green icon and a point score, a user will know right away that if the story is from a red-rated source, they should proceed with caution.
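The source-level approach described here can be sketched in a few lines. This is a minimal illustration, not NewsGuard's actual implementation: the domains, scores, and the green/red cutoff below are all assumptions.

```python
from urllib.parse import urlparse

# Hypothetical ratings table; real source ratings are licensed data.
# Assumed scale: 0-100, with an assumed cutoff separating green from red.
RATINGS = {
    "example-reliable.com": 92,
    "example-hoax-site.com": 17,
}
GREEN_THRESHOLD = 60  # illustrative cutoff, an assumption

def label_for_url(url: str) -> str:
    """Label an article URL by its source domain, not by the article itself."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    score = RATINGS.get(domain)
    if score is None:
        return "unrated"
    return "green" if score >= GREEN_THRESHOLD else "red"
```

Because the lookup keys on the domain, a brand-new article from a red-rated source is flagged immediately, with no per-article fact check needed.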
In effect, what we do is we pre-bunk false claims. An extraordinarily high percentage of the most popular misinformation is spread by news or information sites–or sites claiming to be news or information authorities. Individual social media accounts then pick up misinformation and spread it, but they almost always cite back to what looks like an authoritative source.
One example we give is that before COVID-19, there were a number of websites, mostly in Europe, that claimed 5G technology caused cancer. We had rated several of these red because they published many hoaxes. When COVID-19 began, they began to publish the false claim that 5G caused COVID-19. Because those sites were already rated red for being unreliable, people who had our rating would have been warned and could conclude, "Maybe this is not true," whereas it took fact checkers days or weeks to see some COVID-19 false claims and then to report on them.
We collaborate with all the fact checkers. We also have a separate product for defense ministries and intelligence agencies, which we call our Misinformation Fingerprints. It was created at the request of a unit of the US Pentagon and a request by In-Q-Tel, the US intelligence venture firm. This is a constantly updated catalog of all of the top current hoaxes and misinformation in a machine-readable format. Cyber Command in the US, for example, has used the Misinformation Fingerprints to trace Russian and Chinese misinformation and to identify all examples of it anywhere on the Internet. This allows analysts to understand the nature of the misinformation, including its provenance, the origin of the misinformation, and who is sharing it to give it wide reach, such as through social media.
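As a rough sketch of what a machine-readable hoax catalog enables, here is a toy matcher. The entry ids, field names, and keyword-matching rule are all assumptions for illustration; they are not the actual Misinformation Fingerprints format.

```python
# Toy catalog of known false claims in a machine-readable form.
# Entry ids, fields, and the matching rule are illustrative assumptions.
FINGERPRINTS = [
    {"id": "fp-5g-covid",
     "claim": "5G transmissions cause or spread COVID-19",
     "keywords": {"5g", "covid"}},
    {"id": "fp-lab-origin",
     "claim": "COVID-19 was developed at a US military lab",
     "keywords": {"covid", "military", "lab"}},
]

def match_fingerprints(text: str) -> list[str]:
    """Return ids of catalog entries whose keywords all occur in the text."""
    tokens = set(text.lower().split())
    return [fp["id"] for fp in FINGERPRINTS if fp["keywords"] <= tokens]
```

An analyst pipeline could run such a matcher over collected posts to find every place a cataloged claim resurfaces, which is the kind of tracing described above.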
Fact checking alone is not enough. It doesn’t scale well. The digital platforms, while they work with the fact checkers, do not do a very good job of integrating fact checks into their products in such a way that users are truly empowered to know trustworthy sources versus untrustworthy sources.
In Taiwan, if any post contains the word "vaccine" or "email," there's mandatory labeling. I'm aware of the dynamic that you're describing: although the fact checkers work at the level of individual pieces, they need to focus their energy on the truly viral ones.
It's not really useful without a dashboard that can show which two or three pieces of disinformation are trending at the moment. That, by the way, is the basis of the Taiwan model: identifying which disinformation is going viral.
We become essentially reliant on the individual app developers to integrate such capabilities, or on third-party developers, as with the chatbots from Trend Micro, our leading antivirus company, to add that functionality after the fact as a chatbot.
Overwhelmingly, these are the places where the disinformation first goes viral: the end-to-end encrypted chat rooms and end-to-end encrypted messages. You're referring to a different configuration, where people have the capacity to install extensions into their own browsers, in which case it's probably on the desktop anyway.
You’re absolutely right, of course. Most people will get their information from mobile devices and usually, of course, within a walled garden app or another. We’re working as hard as we can to try to persuade the platforms to integrate us as a middleware solution, to be opt in or opt out, whatever they want.
On Mobile Edge, the Microsoft mobile app, there is an integration. It's not as user friendly as the browser extension that we built, but it does show that it can be done. We're working with a new social media company based in the UK and a new search company based in Silicon Valley, run by a former top Google executive, Sridhar Ramaswamy. His search engine, called Neeva, will integrate NewsGuard ratings and labels into search results so that people get instant guidance on the trustworthiness of sources in the search results.
OK. It sounds like by partnering with Neeva and Edge Mobile, you already have the localization and display issues, such as right-to-left writing or ideographic layouts, figured out by their and your user experience teams, right?
Yes, you’re absolutely correct. There are different design choices that platforms can use, but the general idea is the same. There should be a red or green icon, an easy way for a user to get more information. The challenge is not really a technology challenge. The challenge is persuading the biggest digital platforms to make a middleware solution available as an option for their users.
I hope that will change. As you’re probably aware, in the United Kingdom, the Online Safety legislation is designed to create a new duty of care by the largest digital platforms. The British Government has published a list of about 80 middleware solutions in different areas — bullying, harassment, sexual abuse, and misinformation — and they expect the digital platforms to make these middleware solutions available to their users to show that they have taken reasonable steps to protect their users from the harms they currently cause.
We hope that the pressure from the UK government will encourage the platforms to be more open. You've put your finger on the main issue, which is that the biggest platforms, at least so far, have been reluctant to open up to safety tools like ours.
They've resisted this with even more, let's say, resilience than the earlier issue of copyright violations, which was handled quite laxly, maybe because there were no popular demands, just industry demand. For sexually explicit images and videos, there's both industry and public demand.
That's right. Against sexual abuse. That's exactly right. I think we need to raise this abuse against journalism, crime against journalism, to the same level: more than copyright abuse, and approaching sexual abuse. I think that's the winning ticket.
We do have a product for advertisers to enable them to make sure their programmatic advertising appears only on appropriate websites. We track very closely whose advertising is appearing on which websites. One of my favorite examples is the US Centers for Disease Control: the CDC has advertised on "Global Times" and other Beijing-controlled disinformation sites, alongside articles claiming that COVID-19 was developed at a US military lab.
The advertisers are beginning to see that they have a problem with the platforms and with programmatic advertising. That realization is putting pressure on the platforms and ad-tech companies, the same way pornography led the ad-tech companies to take new steps to ensure that programmatic ads didn't run on pornography sites.
OK. Yeah. In Taiwan we've intentionally classified disinformation not as, say, fake news, but in the same category as spam or junk: as intentional falsehood that does public harm. That's mostly because people, and advertisers especially, already have a distaste for scams and spam.
By classifying disinformation alongside spammers' schemes, they form the mental image that countering it is part of the integrity of their browsing or customer experience, which makes them easier to convince. The integrity of their brand, essentially.
We think about Taiwan not because we think it has a big misinformation problem. We think about Taiwan because it has a robust and free press, where we could operate freely and where there is, of course, the problem of disinformation targeting Taiwan by Beijing.
As I mentioned, I do think that the existing ecosystem, with its delivery mechanisms such as chatbots, is already, I wouldn't say mature, but reasonably resilient. My suggestion, if you are to make an impact in Taiwan, is first, of course, to continue your work with Mobile Edge and others.
In Taiwan we have a lot of startups and so on tackling, as I mentioned, this anti-scam, anti-spam browsing experience. For example, there's the Puffin Web Browser, which is actually Taiwan-based; it's a US company, but all its employees are in Taiwan.
The idea of this kind of isolated browsing experience is that it shields against computer viruses because the browser runs on another, managed machine. It also makes introducing far more sweeping changes to the user experience much easier, because in a sense they're just transforming these middlewares into the final presentation layer.
Thank you. That’s a wonderful suggestion. I appreciate that. You mentioned Trend Micro earlier and we have worked with them on their messaging apps. I think it’s only in the English speaking markets, where if you have a question about a source, you can send the name of the source. It will come back with our rating and Nutritional Label. This is a form of a middleware solution.
Yeah, and they have a QR code scanner as well. In Taiwan, when you check in at a venue, you scan a QR code that redirects to an SMS. Because of that, people have gained a habit of scanning QR codes, which can be dangerous for their phones.
Trend Micro rolled out its own QR code scanner and integrated it with, as I mentioned, those anti-scam and anti-spam tools, so if people scan a QR code and see a red label saying this is a disinformation domain, that works too as part of their experience.
In order to make a real impact, what we did when we introduced the SMS-based QR code was to work with LINE itself, so that its main QR code scanner, the scanner that's used to add contacts, would handle it. We convinced its Japan headquarters to add an SMS-specific module, so that people know this is the SMS-sending kind of QR code, instead of blindly accepting each and every website.
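A scanner module like the one described distinguishes SMS-composing QR payloads from ordinary web links before acting on them. The sketch below uses the common `SMSTO:`/`sms:` QR payload conventions; the classification policy itself is an assumption for illustration, not LINE's actual code.

```python
# Classify a scanned QR payload before acting on it, so an SMS-composing
# code is surfaced as such rather than opened blindly. Policy is illustrative.
def classify_qr(payload: str) -> str:
    lower = payload.strip().lower()
    if lower.startswith(("smsto:", "sms:")):
        return "sms"   # prompt "this will compose an SMS" instead of opening a site
    if lower.startswith(("http://", "https://")):
        return "web"   # candidate for domain-level labeling before opening
    return "unknown"
```

A payload classified as "web" could then be passed through a domain-level rating lookup before the embedded browser opens it.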
If some sort of labeling can be done at the domain level, in the embedded browser within the LINE ecosystem, just like the embedded browser within Skype or within Messenger, then we are talking. That's like 90 percent of people's actual dedicated time. What we've been discussing is 4 percent.
CoFacts works upstream, identifying trending disinformation. Downstream, there are the Taiwan FactCheck Center, Trend Micro, Whoscall, and other folks. CoFacts is the main community that works at the upstream level.
In Taiwan these two sometimes work side by side, but in Japan they rarely do. Whatever you prototype in Taiwan, if it works to a certain degree, then it serves as a very convincing example for Japan to say, "We can then do better, because we have an even more established journalistic-integrity sector."
Minister, I think you're right about Japan. The Wall Street Journal did a joint venture with Nihon Keizai Shimbun, Nikkei. I was amazed at the difference in standards. We thought we had high standards; they had truly high standards.
Very impressive. When you think about misinformation problems in Taiwan, at this point are you more concerned about COVID and health topics, or more concerned about disinformation from Beijing or some other area of concern?
In Taiwan, the resilience framework of the delivery mechanisms we just talked about basically works really well in day-to-day operations. Even during COVID, I believe it has only strengthened itself.
Only in the days leading up to a major election or referendum does the volume of disinformation actually exceed the capacity of both the voluntary and the professional fact checkers, as well as the other automated mechanisms, so much so that everyone needs to adjust their heuristics on the fly. That's because, of course, the payoff is high, and it's really just these 72 hours.
Mostly, the integrity of the counting process itself, which involves counting paper ballots while being live-streamed and filmed by YouTubers of different party affiliations, works quite well, so people don't question the result of the election itself. Still, there's the polarization, the name-calling, the divisiveness, and the toxic behavior that the US, as an advanced democracy, probably doesn't have; in our kind of democracy, we have plenty of those.
I’m mostly worried about that. We’ll see if in our December referendum, which has four national referendum topics, whether these dynamics enter into play and whether the capacity of the social sector organizations, including CoFacts and everything we just mentioned, is up to the task of handling the incoming torrents. If you ask me what’s on my mind, this is on my mind.
Even before the Internet, politicians would spread disinformation; it's always been part of the trade. We did some reports on the German elections recently. It wasn't as bad as the US election, but it surprised the Germans.
That's right. We need to leverage election times to highlight the public-health harm of hurting the… call it the public's mental image. If the mental image of the public is filled with scams and spam, then people do not have a good democratic experience.
Maybe, over time, people will think authoritarianism isn't that bad after all. This integrity of the mental image certainly, at this moment, doesn't rank as high as the integrity of body image with the social media providers.
Election times really are when we see Cambridge Analytica and many other incidents. These are the moments when the public consciousness recognizes that this is something worth protecting.
Exactly. Minister, I want to thank you very much for taking the time. I also want to thank you for protecting free and open journalism in Taiwan. It is depressing what has happened to the once-vibrant news media in Hong Kong. Such a sad situation.
When I was a child, it was the other way around: we needed to smuggle evidence of our human rights violations to the correspondents in Hong Kong so the international community would learn about the Taiwanese martial-law experience. Let's say that we're returning the favor, but not in a very happy mood.
The anti-scam/spam community is quite powerful in Internet governance. By labeling ourselves as an anti-scam/spam solution, I believe that is, in the short term, the fastest way to get the social media companies to come to the table.