-
Thank you for visiting. Now that we have proper coffee and that we’re actually sitting instead of standing waiting for each other…
-
(laughter)
-
Thank you so much for taking the time to meet with us amid a really busy schedule. Just by way of reintroduction, Jun and I are members of Twitter’s public policy and philanthropy team. We work across the Asia Pacific.
-
We’re based in Singapore. Jun covers the Greater China region and so spends…
-
You must be really busy now.
-
(laughter)
-
Yeah, there’s a lot going on. There’s a lot going on. We work more widely than that. We have a presence that spans from Australia to Japan. At Twitter we like to say that, such as it is, we’re a growing company.
-
We may not have the ubiquity of some other platforms, but we have a lot of engagement and a lot of impact in different places and countries around the world, so we’re more popular in some than in others.
-
Our service is becoming more popular in Taiwan, and it’s growing overall in popularity across the APAC region. We like to say amongst ourselves that APAC is one of the growth engines of Twitter as a company, so we have three considerations for our work on the public policy team.
-
The first is our commitment to public policy work. The second is what we call @Gov, since 97 percent of the world’s governments in some fashion or another have a strong presence on Twitter.
-
The third is that we carry out the company’s commitments to corporate social responsibility. We’re responsible for the philanthropic commitments of the company.
-
We have a framework that we broadly refer to as Twitter for Good, and five pillars within that framework. They span digital safety and Internet education, digital literacy, equality and equal rights, and freedom of expression and human rights.
-
More recently, we’ve added the climate crisis and environment to the portfolio, and there are a number of reasons for that. One is that Twitter, as a very public platform with more open APIs and more open global conversations, is a place that welcomes, and is more popular for, discussing really complex problem sets.
-
We often talk about how Twitter is quick, how Twitter is sharp and fast, how it rewards short messaging. The truth is also that because of the way we’ve expanded and built out the platform inclusive of threading and tweet storms, the ability to post more video and to do more media-rich work and conversations there…
-
Micro-blogging to mini-blogging. [laughs]
-
A combination thereof, exactly. For real wicked problems that we face in the world, like climate change, it’s an enormous connector for important conversations and debates. We’ve noticed that conversation is powerful and moving on our platform.
-
We’ve also noticed that many of our employees, many of our partners all over the world were asking us to double down on our commitments to this, particularly as a large international company, as well. We’ve added that more recently to the portfolio of work we do with partners all across the region.
-
As I said, Jun and I are based in Singapore, which is also the APAC regional headquarters of Twitter. We’re here in Taiwan this week to attend and support the Oslo Freedom Forum in Taipei.
-
At which I will deliver a remark.
-
Yes. We’re looking forward to that, amazing. We understand that the Taiwanese government is supporting and helping to sponsor that effort.
-
We’ve in fact been working with OFF for five years already. It’s our work here in the Asia Pacific that’s more recent, as we’ve grown. We’ve been working directly in Oslo or, more recently, in the regional locations such as Mexico. I, in fact, attended the OFF last year here in Taiwan for the first time. We’re excited to be back this year as a co-sponsor of the event.
-
We’ll have a booth and we’ll be available to answer questions, and to speak to attendees who want to learn more about Twitter or get updates from us. We’re looking at providing workshops, rules and tools, and updates on how to run successful campaigns, and how to stay safe on our platform, for some of the activists who are traveling in from around the region to attend.
-
Yeah, we’ve been providing some sponsorship support so that the Human Rights Foundation and the Forum can have a strong voice and easy conversations on our platform. We’re looking forward to joining them. They do tremendous work. They’re an amazing convener.
-
We think that the recognition that they provide to some of the most vulnerable and important voices, if not some of the most exposed and brave voices, is critically important. It is part of a network of efforts, visible and less visible, by human rights activists all over the region, all over the world, really.
-
It’s a big part of our commitment as a company to help create the spaces for those groups to thrive.
-
It’s a big commitment of our company to support and expand the spaces for marginalized and less-represented voices, and vulnerable and at-risk communities that would otherwise be ignored or paid less attention by mainstream media and information ecosystems in their own respective countries or locations.
-
That’s something we really treasure as part of what we do, as a company. It’s one of the powerful reasons why people like to work at the company too, because we’ve made that commitment. Our CEO has made that commitment.
-
That’s part of what we’re here to represent this week. In addition to that, I would just add that we’ve been working together as a company to double down on a commitment that our CEO first spoke about on March 1st, 2018, about the growth of healthy public conversations online.
-
It’s the number one priority of our CEO, and the number one priority of our company. There are a number of efforts and steps that we’ve taken since then, to expand that commitment, but also to show the results.
-
We’ve been trying very hard to address and mitigate abuse and harassment, to reduce the burden on people who use our platform of having to report abuse themselves, and to make it a safer, more welcoming place for people to have the conversations that they want.
-
Do you have anything to add?
-
I am under Kathleen’s guidance.
-
She’s with me!
-
Yeah. [laughs]
-
I’m working with the government also, not for the government, just to be clear.
-
(laughter)
-
Actually, we’re all together. I’m actually from Hong Kong. Before joining Twitter, I lived in Washington DC. I moved to Singapore for this job, because it’s such a great opportunity. My colleagues are really great. In DC you met some of my colleagues.
-
I did.
-
I joined about a year ago. Like Kathleen said, this part of the world is one of the beneficiaries of the company’s growth. Some of the work that I do is exactly what Kathleen said, along the three pillars of our work: public policy, Twitter for government, and Twitter for Good in this part of the world.
-
In particular, I’ve noticed that in Taiwan, whether it’s political parties or government agencies, the government is very active and very open-minded in terms of Twitter adoption. That makes my job slightly easier.
-
Including yourself, Minister Tang, you are really, really good at your Twitter game.
-
It’s about as recent as your employment.
-
(laughter)
-
I like to think that maybe because I helped a little bit.
-
(laughter)
-
That’s right. [laughs]
-
We’ve had success in terms of onboarding the Premier, Su Tseng-chang, the Vice Premier, Chen Chi-mai, and also government agencies like the Ministry of Culture and the Ministry of Foreign Affairs, as well as President Tsai. Her Twitter game is on fire.
-
In addition to government agencies, we also do a lot of Twitter for Good work in this part of the world. Some of the partners that we have in Taiwan are really doing exemplary work. We’ve been working with 展翅協會 in Taiwan.
-
The ECPAT, yes.
-
I’m sure they also work with other platforms as well, to combat CSE, child sexual exploitation. We recently started working with Garden of Hope, which is another great organization. It has been around for a very long time.
-
This time around, we are here for the Oslo Freedom Forum to support human rights and free expression, and we are very proud to be working with the Taiwan Association for Human Rights, 臺權會…
-
…they just got a Presidential Culture Award.
-
Exactly.
-
They were wondering whether to accept it or not, for the mid-October ceremony.
-
Also, the Open Culture Foundation, the OCF, they also attended the OFF last year. I know you are also very involved in the Open Government, Open Data community.
-
That’s right.
-
All these are great partners. I’m very grateful that people in Taiwan…I love the Taiwanese people, very open-minded and very receptive to new ideas and new platforms. We’re very grateful.
-
We’ve heard about you and how wonderful and supportive the Taiwanese government is. We just want to come here and introduce ourselves in person.
-
Excellent. Joel, would you be my timekeeper and remind me of the cabinet meeting? OK.
-
(laughter)
-
We don’t want you to miss a cabinet meeting for Twitter.
-
That’s right.
-
As happy as we are to see you, we don’t want to make you late for a cabinet meeting.
-
(laughter)
-
All right. A few things. First of all, when I talked to your DC counterparts, they mentioned that, especially around election seasons and large social movements, they’re actively considering revising the traditional privacy protections for robots, that is, for coordinated fake account activities.
-
Privacy, of course, belongs to natural persons. If you can conclusively prove that these are automated bots, operated in a very coordinated fashion, that no human being can control 200,000 accounts at once – like “Lucy” in the movie – then it is OK or considered a new cyber norm for Twitter to publish not only the tweets those accounts have posted before taking them down, but also the metadata, meaning the user accounts, their online behavior patterns, and things like that.
-
We did just see the data sets published around Hong Kong.
-
I think you’re referring to the…
-
The Twitter datasets from the portion of the PRC that did not require VPN to access Twitter.
-
Those are the most recent data disclosures in relation to the Hong Kong protests. That’s right. To be clear, we act on these as soon as they come to our attention as potentially violating our policies.
-
That includes attempts to manipulate conversations, or malicious, coordinated behavior. In that regard, our commitment to openness and transparency about these kinds of actions when they occur is part of a broader commitment we have to be a more transparent company.
-
In our view, I think what we would want to say here is that there will be more of these disclosures going forward. They are likely to become more regular and more frequent. This is the work of today and tomorrow at the moment for many platforms, including ours.
-
Our view is that, en route to making sure that our platform is healthy, open, and safe, that we need to address those kinds of accounts. I think one of the things you would have seen in the report is that while some accounts can be operated by people, that doesn’t mean that they’re not being manipulated or that the holders of those accounts aren’t…
-
Operated with Assistive Intelligence.
-
Sorry?
-
Assistive Intelligence. That’s my usual expansion of the term AI. It doesn’t mean that there’s no human working in collaboration with AI, but they are working at such a speed and precision that it couldn’t just be a human alone.
-
That’s right. In this particular case, it looks like it may have been a combination, though. At that particular scale, obviously, it has to have been, as you say, a combination of both. That’s why you’ve seen some coverage and some responses from individuals, as well as statements and commentary around the accounts.
-
There’s been some very interesting investigations since we’ve made that data public. That’s one of the reasons why we do that. A number of credible researchers and academics who work in this space have already been digging into that data and have made analyses.
-
I went through the CSV files myself.
-
You went through it yourself?
-
Yeah.
-
What’s your take on what you saw?
-
I think, first, your approach of releasing the hashed identities strikes a good compromise.
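To make the idea concrete, here is a minimal sketch, in Python, of how account identifiers in such a release could be pseudonymized with a salted hash. It is purely illustrative, not a description of Twitter’s actual pipeline, and the field names are invented for the example.

```python
import hashlib
import secrets

# Illustrative only: one dataset-wide secret salt, so the same account maps to the
# same pseudonym within a release, while the raw identifier cannot be recovered by
# simply hashing guesses without the salt.
SALT = secrets.token_bytes(32)

def pseudonymize(account_id: str) -> str:
    """Return a stable, non-reversible pseudonym for an account identifier."""
    return hashlib.sha256(SALT + account_id.encode("utf-8")).hexdigest()

rows = [
    {"account_id": "12345", "tweet": "example text A"},
    {"account_id": "12345", "tweet": "example text B"},
    {"account_id": "67890", "tweet": "example text C"},
]

# Researchers can still group activity by account_hash, but cannot look up handles.
released = [{"account_hash": pseudonymize(r["account_id"]), "tweet": r["tweet"]}
            for r in rows]
```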
-
We think so.
-
We also communicated with Facebook, which responded, I think with a tip from you, by acting similarly, but without releasing the hashed metadata, or indeed, any metadata. They just published statistics on how many accounts, and not even for which period or anything like that.
-
I believe Google also followed Twitter’s tips, but disclosed a similar amount to Facebook, which is to say, not very quantitatively investigable. My take is that your action is showing a new direction for a cyber norm that clearly delineates activities such as these from activities that are organically viral.
-
This distinction is not yet made by the other larger platforms, perhaps because they have different legal views on this matter. I think I would strongly encourage their legal teams to move toward your direction, which is, I think, much more accountable and allows for the social and academic sectors to have the power of interpretation.
-
The usual trade-off of, for example, the Manila Principles was between a government overstepping with ministerial correction orders versus a private sector company over-censoring, as in the case of NetzDG.
-
These two, I’m not saying that they are not working well. They’re working somewhat well. However, they take liberties away from the public and centralize them in either the first or the second – that is to say, the public or private – sector.
-
Your approach empowers the social sector and academia. I think that is to be commended.
-
Thank you.
-
Thank you. We hope so. Our view is we are not an intelligence agency. We’re not a law enforcement agency. We don’t have those investigatory powers, nor is the mission and the purpose of our company to spend time working on and solving those issues.
-
Our commitment is to contributing where we can and sharing what we learn and find that happens on our platform. The people who use our platform and come to it every day are everything. It’s their resource, and we share it with them, too.
-
Our commitment is to those customers, those citizens who want reassurances and want to know that it is a safer place to be. Our view is that it’s going to take a lot of collaborative work in order to continue to address these growing and expanding challenges.
-
It’s getting harder. Bad actors are getting smarter and more sophisticated. Our view is that it’s only with partnerships that we can keep up going forward. Some of the smartest people doing the best work in these areas are not necessarily inside the companies. They’re also outside.
-
That’s in civil society. It’s in academia. It’s in other parts of government. It’s in other parts of the private sector. What we’re seeking to do is to start sharing on a more consistent basis, but particularly, where we see pronounced efforts and impacts.
-
Honestly, we’re taking down these kinds of accounts every day as a routine part of our work all over the world to ensure that fake accounts aren’t thriving on our platform. Now, bots can actually be quite good and quite healthy. People use travel…
-
I know. I wrote quite a few of those. Also, I was working with the Siri team. [laughs]
-
Exactly. I think you have a clearer and more sophisticated understanding than perhaps some people do about the importance of certain kinds of AI and why machine learning can actually be better for all of us.
-
We need to acknowledge those risks, but also not bury the good and the important, innovative work being done here. In this regard, we’re proud of the process, but actually, we think we have a lot more work to do.
-
Our view is that by starting to share more of what we can and more often, we’re hoping to partner with more people who can give us feedback and tell us what we can learn from what’s happening. Also, how we can continue to partner in innovative ways that yield stronger results going forward.
-
We also think that this is part of our journey towards protecting citizens and customers who use Twitter as a service. That goes to your earlier point about privacy. Twitter has expanded its commitments to privacy over the last couple of years. Very, very deeply, in terms of new policy changes we’ve made, new commitments we’ve made. We navigate that trade-off, and the stress of complying with an expanding range of local laws coming onto the books around the world while we try to protect our privacy values, because that trust is everything to us.
-
If we lose that trust, then Twitter doesn’t really exist very well in the world. It’s also a real value point for us and to us as a company. A lot of people come to work at Twitter because, one, most of its surfaces are so public, and it is so open.
-
It does have a long history of open APIs and a commitment to working more openly. Also, because we do value and treasure the rights of people who use our service – their human rights, but also their privacy.
-
We think that, by sharing and disclosing data like this, that it’s actually a conduit to stronger privacy commitments to our users. If governments can see, if the public can see, if experts can see what’s happening in this regard, then we’re not only sharing the problem.
-
We’re also getting very clear about the information that we don’t want to share that’s in violation of the rights of the people who use our service. We want to protect their data. We don’t want to over-disclose. In fact, we continue to expand our work in this particular area.
-
You may be familiar with the Twitter Transparency Report. That is also something that’s been around for about 12 years. Each new report that we produce approximately every six months is a stronger iteration of the one before it.
-
We, of course, also work with Lumen for posting those government requests. We’re expanding our commitment, so that people can see what requests we get from governments, and they can see what’s happening inside the company with regards to that content.
-
There’s more work to do there, too, we think. You’ll see more in upcoming reports that reflect that, but it is an important commitment for us. Those data disclosures, I think, also reflect… It’s not something one would pick up at first glance, but it is part of our commitment to privacy as well, which is that we’re very clear about communicating what bad actors are doing.
-
Honestly, I think one of the real challenges for us as a company is that yes, there can be individual bad actors. Yes, there can be criminals and criminal syndicates. There can be bad faith disruptors, but some of the biggest challenges we’re facing are actually from state-sponsored or state-based actors. The work to combat that is very difficult. It’s very challenging, and we believe that we need to share it.
-
We also want the world to know that this is happening. We could, in another scenario, take all of that action and not talk about it, but we don’t think that that’s in the public interest. We don’t think that that’s part of the social contract that we have with the people who use our service.
-
I worked with one of the earlier cypherpunk communities. I worked on the translation of the Freenet project, as well as being part of a group that translated the word “blog” to Mandarin.
-
That’s amazing.
-
When we did that work, which was in the days of Indymedia and friends, our main philosophy was not cyberspace independence, but rather cyberspace interdependence. I’m very happy that it’s now an official UN policy, as of this year.
-
Interdependence means that we have to share reliable data with everybody else so that the checks and balances are not only between state and their citizens, but across the international norms and different practices.
-
So that we can reliably say, by the end of this month, I think, when the Singapore model comes online (the Taiwan model came online last month), we can quantitatively show which model works better on which parts of state-sponsored disruptions, disinformation campaigns, and things like that.
-
Without such comparable quantitative data, it’s all anecdotes. It’s very difficult for you, actually, because you have to change with every revision of jurisdiction and of jurisdictional administrative orders, for lack of a better term.
-
I think a norm-first architecture is much better than a law-mandated market that determines the architecture, to use Lawrence Lessig’s theory. It starts with a social norm, but the data powers that norm. I think that is a much better direction.
-
On a more concrete level, I’ve been talking with telecommunication company representatives, such as, I think, Orange, Vodafone, and a team of researchers. For example, Alex Pentland, “Sandy,” from MIT, as well as the broader UN research system.
-
They have a design that’s called Open Algorithms. It’s a little bit technical, so I apologize for speaking Greek.
-
We welcome Greek. Go for it.
-
Well, we will have a transcript.
-
The idea of open data, as you succinctly put it, stops with privacy. Anything that you consider a disclosure of privacy is not to be published as open data. Hashed identities may be your delineation, beyond which there is no access. I applaud this point.
-
It is true that with only these data, and with a different format for each jurisdiction because of jurisdictional orders, it is impossible for academics to do any kind of comparable quantitative analysis on that, really.
-
What the open algorithms movement is advocating is the flip of open data. It is the researchers writing code using mocked or synthesized data, which does not reflect any raw data at all. It could be just random dice rolls that you publish to establish your data storage schema, which is the structure of your data, but none of the content.
-
The academics would write code that does the analysis and produces statistics, aggregates, or some other output that is impossible to reverse into identities or any private material. In the end, what the academics care about is the result.
-
It’s not in their interest to peek into their peers’ personal, private tweets. It’s in their interest to show that there is a trend growing and what that trend constitutes. They publish this code in ordinary academic peer-review fashion.
-
We invite the cybersecurity community and the mathematics community to inspect the algorithm and show conclusively that it cannot compromise privacy. Then the operators, such as the telecom operators, take the same code packaged in a container, run it in their data centers, and publish only the end result.
-
This framework is called open algorithms because no personal data is disclosed. What is published is the statistical algorithm itself, which may be very advanced.
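As a rough illustration of that flow, here is a minimal Python sketch. The schema, the synthetic data generator, and the minimum-group-size threshold are all assumptions invented for the example; the actual open algorithms framework is considerably more elaborate.

```python
import random
from collections import Counter

# The operator publishes only the schema (field names and types), never the content.
SCHEMA = {"account_hash": str, "hour_of_day": int, "hashtag": str}

def synthetic_rows(n):
    """Dice-roll data matching the schema, for researchers to develop against."""
    tags = ["#a", "#b", "#c"]
    return [
        {
            "account_hash": f"h{random.randint(0, 99):02d}",
            "hour_of_day": random.randint(0, 23),
            "hashtag": random.choice(tags),
        }
        for _ in range(n)
    ]

def analysis(rows, k_min=10):
    """Vetted analysis: returns only aggregates, suppressing small groups so the
    output cannot be reversed into individual identities."""
    counts = Counter((r["hour_of_day"], r["hashtag"]) for r in rows)
    return {key: c for key, c in counts.items() if c >= k_min}

# Researchers test and peer-review the code against synthetic data...
print(analysis(synthetic_rows(5000)))

# ...then the operator runs the same, unmodified function inside its own data
# center on the real rows and publishes only the returned aggregates, e.g.:
# published = analysis(load_real_rows())   # hypothetical loader on the operator side
```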
-
May I ask your perspective, then? With that in mind, what are your thoughts on the evolving work around blockchain, and on the fact that, at the same time as we’re discussing this, we’re also entering a world where information can potentially stay on the web in perpetuity and not be removable?
-
That’s right. As you said, the distributed ledger, which is the part of the blockchain that concerns our discussion, because we’re definitely not talking about Telegram ICOs.
-
No, not right now.
-
(laughter)
-
Not right now. The ledger part of the blockchain. Because storage gets cheaper as data is produced, perpetuity is actually maybe a very accurate term to describe that. We wondered if copyright is also for perpetuity, but we are glad to have been proved wrong recently. [laughs]
-
I’m going to make you late for your cabinet meeting, I think.
-
(laughter)
-
That’s right. My take is twofold. First, if the ledger is built by the citizens, by the social sector, as in Taiwan, where people measure air pollution levels and water pollution levels by themselves.
-
If they are individual citizen accounts, there’s no accountability that a private citizen can offer for anything like that. They just upload their real-time measurements through NB-IoT, LoRa, or any zero-G network protocols into a distributed ledger.
-
That has two effects. The first one is that they collectively ensure the accountability of the data they measure. Basically, they cannot go back and change it. The second is that it forces, in Taiwan, the government to adopt a, “We cannot beat them, so we must join them,” approach.
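The “cannot go back and change it” property can be illustrated with a minimal hash-chain sketch in Python. This is a deliberately simplified, single-copy chain rather than a real distributed ledger, which would replicate entries across many independent parties, and the field names are invented for the example.

```python
import hashlib
import json
import time

def entry_hash(entry):
    """Deterministic hash of an entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_measurement(ledger, station, pm25):
    """Append a reading, chaining it to the hash of the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"station": station, "pm25": pm25, "ts": time.time(), "prev": prev}
    entry["hash"] = entry_hash(entry)
    ledger.append(entry)

def verify(ledger):
    """Anyone holding a copy can detect retroactive edits."""
    for i, e in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "genesis"
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != expected_prev or e["hash"] != entry_hash(body):
            return False
    return True

ledger = []
append_measurement(ledger, "station-001", 12.5)
append_measurement(ledger, "station-002", 35.1)
assert verify(ledger)

ledger[0]["pm25"] = 1.0      # an attempt to rewrite history...
assert not verify(ledger)    # ...is immediately detectable
```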
-
Even with our national high-speed computing center, we cannot modify the numbers in that ledger. We have to commit and say, “Oh, you want to measure industrial parks?” Those are private property; they cannot enter them.
-
“Well, we own the lamps, so we will hang your devices on the lamps within the industrial parks and join your ledger, rather than building our own.” That has always been our idea of social innovation, which is initiated by the social sector and always for the public good.
-
When used this way, this is essentially a trust machine that enables bootstrapping from lower-resource communities, which set the governance of that ledger, rather than only large private sector companies, such as the Libra Association, setting the rules of that sector.
-
It’s not to say that their work is not useful. It’s just, as a user, it’s very difficult to participate in that governance system. In our case, it starts with a citizen governance system. To me, it’s again, whether it’s about a norm-first architecture or whether it’s about a market-led one.
-
Thank you for sharing that. Do you have a timeline or plans that you’re rolling out?
-
The open algorithm work? We have a Presidential Hackathon, which is an annual event. It runs for three months, and anyone from any sector can propose. Every year there are 100 or so cases. We coach 20 of them, as selected by popular vote.
-
We use a new voting method called Quadratic Voting that ensures a fair and balanced representation of people’s true will, instead of being divisive online, as you put it. Then each of the 20, which are very diverse, has to correspond to a specific sustainable development target, one of the 169.
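For readers unfamiliar with the mechanism: under quadratic voting, casting n votes for one proposal costs n squared credits, so spreading support across cases is cheap while piling votes onto a single case becomes expensive quickly. A minimal Python sketch follows; the 99-credit budget is a hypothetical figure for illustration, not something stated in this conversation.

```python
import math

CREDIT_BUDGET = 99  # hypothetical per-voter budget, for illustration only

def cost(votes):
    """Under quadratic voting, casting n votes for one proposal costs n * n credits."""
    return votes * votes

def max_affordable_votes(remaining_credits):
    """The most votes a voter can still place on a single proposal."""
    return math.isqrt(remaining_credits)

# A voter who feels strongly about one case can spend 81 credits on 9 votes there,
# leaving 18 credits: enough for 4 votes (16 credits) on a second case, and so on.
ballot = {"case_a": 9, "case_b": 4, "case_c": 1}
assert sum(cost(v) for v in ballot.values()) <= CREDIT_BUDGET
```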
-
Then those 20 get coached to be trilingual, meaning that whichever sector they start from, they must have a regulatory expert from the public sector, a technological expert from the private sector, and a domain coordinator and organizer from the social sector.
-
They all grow to be trisectoral in the last two months of the competition. On the last day, we have a meeting in the presidential office, choose five teams, and the president herself gives them trophies. When she hands out the trophies, because each team now has public servants in it, we cannot hand them cash. It’s prohibited.
-
We hand them instead a beautiful trophy with a micro projector as the stand. When you turn it on, it projects the image of the president handing the trophy to you. This is very useful internally, because if their director general says, “Oh, we don’t have the budget to implement open algorithms,” the winner just turns on the projector, and the DG cannot refuse.
-
(laughter)
-
It’s right there. It’s right there.
-
Right. It’s the presidential promise that we must roll out something like that within the year. This year, there is a team using the open algorithms framework. Their social good is beneficial ownership information, so that people can use machine learning to learn which corporate entities, as in the Panama Papers, engage in international illicit financial flows.
-
I think everybody agrees that, for a publicly-listed corporation, that is just something you don’t do. If you do that, you may actually get… You can flee to a friendly jurisdiction, but the fact that your company is about to do that is actually not private at all.
-
This, I think, is a great thing to apply open algorithms to. It would require collaboration with financial entities worldwide; we are part of the anti-money laundering and beneficial ownership movement.
-
That doesn’t concern Twitter. [laughs] I’m just saying that we’re starting to try out this system.
-
We look forward to hearing more about this going forward. Just to clarify, the five winners, will they be supported to expand their work going forward?
-
The trophy represents the presidential promise that, whatever it takes – budget, regulation, personnel – their idea, which they co-developed with society over three months, will become our national policy within the next 12 months.
-
Thank you.
-
Thank you.
-
In light of this discussion, I wanted to ask how you’re looking at the upcoming election, and about any concerns or projects you have related to it.
-
I am nonpartisan. I care about the fairness of the election and keeping the deliberative nature of the referenda, which is why we moved them to alternating years. [laughs] Hallelujah.
-
(laughter)
-
No matter how long you deliberate, as in the Swiss model, if it’s on the same day as the election, people enter a polarized mindset. There is just nothing we can do about it. To have a representative, deliberative, representative, deliberative tempo is, I think, a much better tempo, personally speaking.
-
For the upcoming representative election, I think our work is twofold. First is that, on Twitter, there could be sponsored advertisements. Your DC counterparts told me that they work with the social sector to identify the accounts of upcoming candidates, even before they are formally registered.
-
That is very important, because a lot of work is actually already done before they become official candidates on the roster. I wonder if something like that is at all possible here, because we do have civil society organizations – OCF and TAHR know all of them – that identify the candidate hopefuls.
-
They, as you witnessed, have Twitter accounts now. They used to only have Plurk accounts, but now they mostly have Twitter accounts. If you can do something like that, that would be very helpful.
-
Facebook has already rolled out the feature where those potential candidates can declare their own pages as running political or socially active advertisements, in light of the transparency act now in our parliament, which will treat these as campaign donations subject to the same disclosure.
-
I look forward to collaborating with you on that. That’s one thing. The second thing is that, if people can show, or any independent investigative journalist or academic can show, that the sponsorship ultimately comes from an extrajurisdictional source, then, just like a campaign donation, it carries up to a, I think, NT$30 million fine.
-
Helping to track that down is also something that Facebook has collaborated very closely with us on. They actually, I think, learned from the previous mayoral election that, because our campaign donation law had just been changed at that time to be radically open, you can see individual records of political donations.
-
The usual money instead came through precision targeting, because this channel is too open for them. It’s too easy to track the ultimate sources. We need to make that just as transparent. The legal draft is already in the parliament.
-
Again, this is to enable academia and the social sector to tell the narrative, instead of relying on a closed-door, public-private relationship. It’s the so-called Taiwan model.
-
That makes sense and aligns with our commitments. You may have heard and seen our announcements around ads transparency centers. It takes the same principle and the same model and seeks to provide similar levels of information that can be accessed by anyone.
-
Also, your state-controlled news agency policy, which was a surprise for me, actually. [laughs]
-
That’s a more recent policy, and it’s not exactly the same thing.
-
I know.
-
The state-controlled media ads policy would potentially, by virtue of its existence, impact other forms of advertising, such as during an election. But that policy is with regard to, in particular, state-controlled media themselves.
-
Indeed…
-
It’s a broader…
-
I’m just saying that my account is not a state-controlled media. [laughs]
-
I think we’re aware of that, Audrey. We can tell. It’s far too entertaining, anyway.
-
(laughter)
-
Those definitions are there. Actually, we rely on some important resource groups from around the world. Unfortunately, there aren’t really any rankings developed by academic institutions or larger-scale social groups in the Asia region that produce regularized, periodic reporting around these kinds of issues.
-
We turned, instead, to those which were internationally credible and trusted sources, used by other credible institutions all over the world. A combination of multilateral UN inputs, as well as some of the most trusted nonprofits in the world that address or think about these issues every day, such as Reporters Sans Frontières and…
-
Which has their headquarters here, too.
-
Right, exactly. It’s a lot of work. It’s a work in progress, but we think it’s important, again, for transparency reasons. Again, because we want to give customers, consumers, and citizens more choice over what they see in their timelines.
-
You’ve seen the progress that we’ve made with the timeline. Years ago, when one joined Twitter, it was just chronological, or rather, reverse chronological. As we continue to evolve, we want to take people on the journey with us and widen that commitment about what people see.
-
We know that people increasingly expect to understand, or to be able to authenticate, what it is that they’re seeing. We want to achieve that while honoring a very closely-held commitment we have, which is that we believe in pseudonymity.
-
We believe in the right of people to have anonymous accounts on Twitter. We believe that it’s critically important for human rights activists and social sector organizations. Also, I think one of the things we’ve learned since we joined Twitter was that an overwhelming majority of people who do really critical work don’t self-identify as activists, actually.
-
They do rely on their ability to be able to safely communicate what they’re doing pseudonymously.
-
I was totally on that side during the Nymwars on Google+, when it existed. [laughs]
-
Back in the day, back in the day.
-
Yeah, back in the day.
-
We think that’s a really important balance there, too, to add to that.
-
Pseudonymous communication, especially direct one-to-one chat, is now actually much more visible in your product. It’s always been there, but it’s now much more visible.
-
I wonder, what’s your take on Facebook’s, Skype’s, and pretty much everybody else’s position that there should at least be an end-to-end encrypted option, for people who are worried about their pseudonymous identity being re-identified by large intelligence agencies around the world, which you must obey, to some degree, as a jurisdictional obligation?
-
If you don’t hold the conversations yourselves, you can safely say, “Well, it’s your chat.” What is your position on that?
-
Our position is we want our customers and our users preferably to use two-factor authentication and to avail themselves of everything that we can offer to stay safe.
-
That’s for account takeover. That’s not for the communication itself.
-
One of the things that’s most interesting to me is that, when we look at how front line defenders and activists are using our service, I think one of the things that’s most interesting about it is that we fit within an ecosystem of tools and resources…
-
They use Telegram for that, or Wire, or Signal.
-
They might use Signal, or they might use something else. What we’re seeing is a diversification of, and a specialization in, what are the best tools, redundancies in those, and diversity in those.
-
At Twitter, if you’re a front line defender, and you’re speaking openly on our platform, that’s one thing. If you’re…
-
…using a protected account, or direct messaging somebody…
-
If you’re seeking to have really complex, high-risk conversations, we would hope that you would use the very best services in the world. Many of them are free. We’ve worked with a lot of those groups over time.
-
Our view is we want people to do what they need to do to stay safe. I think there’s been a lot of discussion over the years about end-to-end encryption. It’s something that we think about a lot.
-
I also think that, given the work that we have to do in front of us with healthy conversations, and the open and public services of the platform, we’re concentrating on those, and directing people, really… What we say is use the best tools for what you need.
-
I totally understand. I will repeat my advice to Facebook from when they were seeking a public consultation on this matter, which was my formal input, and I will repeat it here. On social media platforms, many people see the 🔒 lock icon, which looks exactly the same as the browser’s 🔒 lock icon, but they represent different things.
-
On the browser, the 🔒 lock icon means that no intermediary can eavesdrop on or store your conversations. It means end-to-end encryption. On many social media platforms, the lock icon has come to mean a conversation with a private or limited audience, but with no end-to-end encryption.
-
I think it makes the cyber norm conversation very confusing. Also, it makes people expect more service from you than you can technically provide. I would strongly advocate for a non-lock icon, that is to say, an icon indicating a limited audience, but not secret – as in end-to-end encrypted – conversations.
-
It could be something like a 👥 icon for a one-to-one conversation. For example, Facebook recently renamed “Secret” groups to “Hidden and Private,” indicating that messages in the group are not really “secret” to Facebook.
-
It could be the ⚙️ gear icon or the 🛡️ shield icon. In any case, I would like to reserve the 🔒 lock icon to mean end-to-end encryption.
-
Thank you for that feedback. We can take that back. That’s very helpful for us. Thank you.
-
Thank you. I have a cabinet meeting now.
-
Yes, I don’t want to make you late for the cabinet meeting. Thank you so much, Minister Tang.