• I’m here with Audrey Tang. She is the Digital Minister for Taiwan, a minister without portfolio, in charge of helping government agencies communicate policy goals and managing information published by the government, both via digital means. She is the youngest minister without portfolio in Taiwanese history.

  • We first met over 15 years ago when she was an incredible coder in the open source movement. She was writing not just phenomenal code, but entirely new ecosystems of phenomenal code. Her IQ has been measured at way past the official definition of genius.

  • Audrey, many people want to attempt a midlife career change, but going from hacker to national government minister is, shall we say, rare. Why was it important to you to take this job?

  • When we occupied the parliament in 2014 in protest of the Cross-Strait Service and Trade Agreement, the CSSTA, the theory was that because the MPs refused to deliberate the agreement substantially, the people had to take their place and do their job for them.

  • Because of that, there is definitely a demo kind of spirit in it. When we say demo, as in demonstration, we don’t mean protest alone. We mean building something new, a new ecosystem, as you just said.

  • With half a million people on the street and many more online, back in 2014 we successfully agreed on four demands, and not one less, and delivered them to the MPs. Interestingly, the head of the parliament did take them as binding, and it was a successful Occupy.

  • Right afterward, the Taiwanese cabinet saw that with the power of Internet technologies, not only can we broadcast to millions of people, it is actually possible for millions of people to listen to one another at scale.

  • There was great interest from the cabinet in learning about this new deliberative technology, and so I joined the cabinet as a reverse mentor to the then minister in charge of cyberspace regulations.

  • Because of that two-year internship, I believe it is important for civic hackers to work with, but not for, the government: to modernize its idea of listening to people, and to make sure that people have a direct say in policymaking in the here and now.

  • I’m still a civic hacker. I’m just working with the cabinet.

  • I believe you’re even still writing code. I think I saw something about an app.

  • You wrote an app for tracking masks?

  • Several things. I wrote a progressive Web app, a simple portal that links to more than 140 applications showing the real-time availability of medical masks in all the pharmacies – there are 6,000 of them around Taiwan – and also vending machines if you’re in Taipei City.

  • I also contributed to a card-based small server in the beginning that uses the National Health Insurance card and app, so that you can pre-order those masks and have them delivered to your nearby convenience store. If you work late hours and cannot go to a physical pharmacy, you can at least pick them up from one of the 12,000 convenience stores.

  • I also helped a little bit on the scalability of the initial app-based prototype of the pre-ordering system. I think more than 90 percent of Taiwanese citizens are using these two applications now, and everybody has an adequate supply of medical masks.
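
A minimal sketch of the kind of open-data client those 140-plus applications are built on: poll a published feed of per-pharmacy mask stock and surface the counts. The endpoint URL and column names here are placeholders, not the real feed.

```python
# Sketch of a mask-availability client. The feed URL and field names are
# hypothetical stand-ins for the open-data feed the portal aggregates.
import csv
import io
import urllib.request

FEED_URL = "https://example.gov.tw/open-data/mask-stock.csv"  # placeholder

def fetch_stock(feed_url: str = FEED_URL) -> list[dict]:
    """Download the CSV feed and return one record per pharmacy."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def low_stock(rows: list[dict], threshold: int = 50) -> list[dict]:
    """Pharmacies whose adult-mask stock has dropped below a threshold."""
    return [r for r in rows if int(r.get("adult_masks", 0)) < threshold]

if __name__ == "__main__":
    rows = fetch_stock()
    print(f"{len(rows)} pharmacies reporting")
    for r in low_stock(rows)[:10]:
        print(r["name"], r["adult_masks"])
```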

  • That’s fantastic. Congratulations. What sort of skills has this job demanded of you that you didn’t need before?

  • I think one of the main differences, as opposed to our previous work in the open source or civic tech community, is that we need to adopt an idea of designing with people, and that we are actually like civil engineers, not just civic technologists.

  • The difference is that, as a civic technologist, you primarily code for people who understand what you’re doing. As open source people, we primarily code for other open source developers and people who at least have an idea of what our purpose is.

  • As government officials and public servants, we need to first consider people who have no idea what this digital technology is about.

  • We have to first consider vulnerable populations, people who don’t have the digital competence to understand the full extent of their rights to deny or grant access to their personal data, and things like that.

  • We need to work with and empower the people who are closest to the field, instead of just the people who happen to have digital capabilities. With a user base that’s more than half of your population, you have to consider a lot more things than if you only work for maybe 10,000 people who share your values.

  • It sounds very much as though you’ve brought the ideals of open source to government, and you’re debugging democracy. Either way, it sounds as though you’re empowering people to an extraordinary extent.

  • Can you describe digital democracy and the role that technology plays in it?

  • Certainly. The main idea is that previously, using pen and paper, we could only upload very few bits of information. Voting for president or voting for MPs is essentially maybe four bits of information every four years or every two years.

  • Even with referenda, it’s like five bits of information every four or two years. The total information input from civil society to the democratic institutions cannot accurately reflect the real-time needs of the people.
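
As a back-of-the-envelope reading of that claim: choosing one of k options carries about log2(k) bits, so a whole election cycle uploads only a handful of bits per citizen. The sketch below just does that arithmetic, with illustrative numbers rather than any official counts.

```python
# Rough arithmetic behind "a few bits every few years": picking one of
# k choices carries about log2(k) bits of information.
import math

def ballot_bits(options: int) -> float:
    """Information content, in bits, of choosing one of `options`."""
    return math.log2(options)

# A four-candidate presidential ballot is about 2 bits; a 16-way choice
# would be 4 bits. Even adding a few referendum questions, each citizen
# uploads only a handful of bits every two to four years.
print(ballot_bits(4))   # 2.0
print(ballot_bits(16))  # 4.0
```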

  • Compare that to the mask map, for example; it’s a good example. Everybody who shows up at a pharmacy with their National Health Insurance card and swipes it can immediately purchase, for just under two US dollars, nine medical masks every two weeks if they’re an adult, or ten if they’re a child.

  • They can see within a couple of minutes that the stock level decreases on the map. If it increases instead of decreasing, they will probably call the hotline, 1922, to report a bug in the system.

  • This is the idea of a distributed ledger. It’s the idea that everybody has a copy of what’s going on with the mask rationing and supply, so that not only can they feed back their ideas about dashboards and about monitoring for over- or under-supply.

  • They can also call 1922 and say, “Hey, this district only has pink medical masks, and my boy doesn’t want to go to school, because all he has is a pink medical mask.”

  • Instead of changing the supply line to feature more colors, the very next day, at the press conference of the Central Epidemic Command Center, everybody wore pink medical masks, regardless of gender, and taught the nation about gender mainstreaming.

  • This kind of rapid response system, where everybody’s idea can be amplified through a radically transparent Ask Me Anything live press conference every day at 2:00 PM, is, again, a sign of what digital technology – in particular, live streaming and listening-at-scale technologies, such as Slido, Polis, and the e-petition platform – can do for a democracy.

  • That lets us in the cabinet work with people much more directly, and gives us many more bits of information to work with.

  • Giving all this power to the citizens, there’s a saying in the West, “You’ll never keep them on the farm once they’ve seen Paree.” Are you in some sense inoculating the people against the possibility of some future government wanting to take some of that power back? Would they just not stand for that?

  • That’s exactly right. What we’re building is a social norm that makes sure that the government, the state, needs to be transparent to the people, and the people can choose to trust the government or not.

  • This is as opposed to the more authoritarian regimes – some of them quite nearby – that make the citizens transparent to the state and ask for essentially blind trust, without accountability or freedom of the press.

  • I will say that this is norm-building, and the norm is exactly the same as the core Internet norms. That is to say, end-to-end innovation. That is to say, rough consensus and running code, except this time it is not just algorithmic code. It’s also the code of regulations and the code of other norms.

  • You’d been building this digital democracy before the pandemic, and then that happened. You achieved a response that is the envy of most of the world in its success, especially because you achieved it without the sort of totalitarian police state tactics that many people thought were going to be necessary.

  • Does that validate the concept of digital democracy?

  • I would say so. In open source development, there’s a saying that, “Many eyes make any bug shallow.”

  • Meaning that if you make sure that everybody understands the underlying science – in this case, epidemiology – of what we are doing, then you do not have to do a general lockdown or an imperial – sorry, imperative – style social program that locks down schools and businesses, which we’ve never had to do.

  • Nor have we needed any strict criminal penalties, because people understand the underlying science, and they can contribute their own innovations to further those scientific principles. We’ve never had to rely on any of those totalitarian measures that you just mentioned.

  • We basically relied on civil society to understand the importance of crucial technologies, such as soap – a very crucial technology – and the incentive design of medical masks. For example, billing the mask as something that protects yourself, because it reminds you to wash your hands properly and not to touch your face.

  • That enabled, in turn, the small portion of a large crowd who wear masks to remind the other people to wear masks, appealing to their selfish interest in protecting their own health and caring for each other, instead of a purely altruistic incentive design, which would not work unless a majority of people in a room already wore masks.

  • There’s many small things like this that, taken together, form a more resilient pandemic response.

  • As we like to say, the testing proves it, and your numbers have proved that.

  • This pandemic has disrupted the status quo in just about every walk of life for just about everyone on the planet. We’ve had many patterns within our societies and institutions that were not optimal but had become ossified, as it were – almost impossible to change – and yet COVID-19 has disrupted them along with everything else.

  • Maybe we have the opportunity to improve those patterns while they’re still fluid and volatile, before they gel again. Do you see places where we can emerge from this better than we went into it?

  • Definitely. I think this is a great opportunity for the world to see and compare each other’s governance models. In Taiwan, for example, we amplify a lot the idea of humor over rumor.

  • We had counter-disinformation plans before that worked more or less OK, but it’s the COVID-19 pandemic that really launched the spokesdog, a very cute doggo, a real dog, that lives with the participation officer, the PO, of the Ministry of Health and Welfare, who is in charge of the open government work.

  • What they have done is that, after each daily press conference, they take photos of the dog and translate the scientific language into simple doggo memes and language that remind people, for example, that if you are indoors, you have to keep three doggos apart from each other.

  • If you’re outdoors, keep two doggos apart, as a way to explain social distancing, as it were. This has gone viral, literally, in cyberspace, and has just dwarfed the conspiracy theories and pseudoscientific health advice.

  • Everybody adored the spokesdog of the CECC, Zongchai. That amplified Taiwan’s message, our way of countering disinformation, and the power of humor to a lot of nearby jurisdictions, including Japan and South Korea.

  • We also see in other places that if you’re authoritarian to begin with – if you make citizens transparent to the state – then that tendency also gets amplified. If you rely on multinational tech companies as more capable fiduciaries of people’s data, then we see that get amplified in those places as well.

  • Whether it’s the social sector, the state, or the private sector, whoever demonstrates a more resilient governance model will get noticed and replicated. I think our idea here is just to remain firmly within the idea of constitutional democracy.

  • I think Taiwan is uniquely helpful in that, because we’ve never declared a state of emergency. We still operate within the confines of constitutional democracy, so everything we do can be translated into everyday regulations and everyday norms once the pandemic is over. It doesn’t require a special authorization of power.

  • Wow. That’s a lot to take on. You mentioned disinformation in various ways. Fake news, and what used to be called propaganda or psy-ops, is a serious problem in at least some democracies at the moment.

  • The coronavirus has led to us isolating behind Internet conversations, which is exactly the kind of environment in which that virus of the mind breeds. At a time when we were trying to decrease the amount of screen time people had and increase their social interactions, we’ve been forced to do exactly the opposite.

  • How could AI or other technology combat the problem of disinformation?

  • As I said, what we’ve found is that to ensure our press freedom and freedom of speech while countering disinformation, it’s essential to take this humor-over-rumor mantra and make sure that everybody understands it, based not on the epidemiology of memes, but really on the psychology underlying them.

  • Just like our Vice President has been recording online crash courses, MOOCs, on Epidemiology 101, I’ve been recording short videos that explain the psychology behind disinformation.

  • The idea is that emotions spread on social media, and you can see that they mutate, too. The ones with a higher R0 – a higher basic transmission number – go viral, while the ones with an R0 under one do not go viral and remain in private or personal communications.

  • The main factor in determining whether a message goes viral is whether it can provoke a sense of outrage. If it provokes a sense of outrage that channels somebody’s unease or anger into the action of clicking share before fact-checking, before going to search engines, and so on, then that meme is a toxic, viral meme.

  • A virus of the mind, if you will, that will go viral. We’ve found that if people get an inoculation – if people see a message that is funny, that connects to the same symbols, the same video, or the same image – then the psychology ensures that their upset can be turned into a sense of humor.

  • They will share the funny message instead. Once they see the other message that wants to provoke outrage, they will no longer feel outrage. Therefore, the basic transmission rate of that outrage-based meme will decrease.
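
A toy illustration of that R0 framing, with made-up numbers: a branching process in which each sharer passes the message on to a random number of others with mean R0. Above one, the cascade explodes; below one, it fizzles out.

```python
# Toy branching-process model of meme spread. Each sharer passes the
# message to a Poisson(R0) number of others; this illustrates the R0
# analogy, not a fitted model of any real platform.
import math
import random

def poisson(lam: float) -> int:
    """Sample a Poisson-distributed integer (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= limit:
            return k - 1

def cascade_size(r0: float, generations: int = 10, cap: int = 1_000_000) -> int:
    """Total number of shares after a fixed number of generations."""
    total = current = 1
    for _ in range(generations):
        current = sum(poisson(r0) for _ in range(current))
        total += current
        if current == 0 or total > cap:
            break
    return total

random.seed(0)
print("outrage-driven meme (R0 = 1.5):", cascade_size(1.5))
print("after inoculation (R0 = 0.7):", cascade_size(0.7))
```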

  • For example, there was panic buying of tissue paper for a couple of days. There was a rumor that said, because we’re ramping up medical mask production from 1.8 million to 18 million a day, it will consume the same material as tissue paper.

  • People went to panic buy. Within two hours, our premier published a meme showing his wiggling bottom, with a large title saying, “We each have only one pair of buttocks” – there is no need to panic buy.

  • Yeah, you laugh, but that means the meme worked. Below it was a table showing that tissue paper is made from materials from South America, while the medical masks are made from domestic materials.

  • Everybody who laughed at that has that two-by-two table sort of remembered in their mind. They will not then share the panic-buying conspiracy theory, because they know it’s a different material.

  • The conspiracy theory died down in just a couple of days, and we eventually found that the person who spread that rumor was a tissue paper reseller, go figure. The idea is that we need to respond within a couple of hours to each trending rumor.

  • The response needs to be short and succinct, like 20 characters or fewer in its title. Also, it needs to have a memetic payload that evokes a sense of humor. Once we do that, we successfully inoculate people against the disinformation that’s rampant on social media.

  • I think there are many people on this side of the world that need to understand that as well as you do, because manifestly, it is a much bigger problem over here.

  • In one aspect, AI is accelerating that problem because of the number of bots that are populating Twitter with messages that are aimed at causing disruption and dissent among the population, probably in many cases, at the behest of some foreign power.

  • Do you see this where you are, and how do you see this evolving as perhaps a kind of information warfare as AI evolves?

  • I would not even call it disinformation. It’s essentially information warfare, or information operations. For example, during our presidential election campaign period, which was late last year, there was a surprising amount of forged captions attached to real photos.

  • For example, there was one that said, “The rioters in Hong Kong are just teenagers, and they get paid 20,000 – or 200,000, depending on the rumor – US dollars for murdering police.”

  • That’s essentially a cyber op designed to make Taiwanese people not sympathize with the people who were in Hong Kong at that moment, during the anti-ELAB protests. What we have done, instead of just shutting down those accounts or taking down the content…

  • Which doesn’t work, because, as you pointed out, it is a bot network putting synthetic information out there. Even though the platforms may do some takedowns and so on, what we’ve found reliably works is to do attribution, to do notice and public notice.

  • Some people in Canada may know that their copyright enforcement, unlike the DMCA, is not notice and takedown, but rather notice and notice. Meaning that people who share allegedly copyright-infringing works do not see their works, maybe remixes, taken down.

  • Rather, they receive an automated notice that says, “There is a copyright claim against this, and you probably need to sit down and talk with the claimant.”

  • The state doesn’t arbitrarily take down any content, but it makes sure that, for all the allegedly infringing content, the publishers and the people who share it receive the copyright notices sent by the original copyright holder.

  • We apply the same idea to social media posts, so that people sharing that rumor about the so-called Hong Kong rioters see a very clear fact-checking link after each of those pieces of content, showing that this is disinformation and that we can trace it.

  • This “we” is independent journalists working on third-party fact-checking, who can trace it back to the Weibo or other social media account of the Zhongyang Zhengfawei, the central political and legal affairs unit of the Chinese Communist Party in Beijing.

  • What this has done, essentially, is put a face to all the messages in this vein. That lets people see and judge for themselves whether they want to believe a Weibo account that is run by the Beijing communist government.

  • An account that takes a Reuters photo – whose original caption says nothing about anyone being paid – and adds its own caption. Once people understand the path of this transmission, they can make their own decisions, and they can inoculate themselves against future variations of this sort.

  • They see where the information, the propaganda as it were, comes from, and where it’s designed to go.
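
A minimal sketch of that notice-and-public-notice pattern: the flagged post stays up, and a fact-check notice with an attribution trail is attached alongside it. The data structure and field names are hypothetical, not any platform's actual API.

```python
# Sketch of "notice and public notice": content is never removed; a
# fact-check notice with attribution is attached so readers can judge
# the source themselves. All field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Notice:
    summary: str        # short fact-check finding
    factcheck_url: str  # link to the third-party fact-check report
    traced_origin: str  # attributed source of the claim, when known

@dataclass
class Post:
    post_id: str
    text: str
    notices: list[Notice] = field(default_factory=list)

    def attach_notice(self, notice: Notice) -> None:
        """Attach a public notice; the original content stays visible."""
        self.notices.append(notice)

    def render(self) -> str:
        lines = [self.text]
        for n in self.notices:
            lines.append(f"[Fact check] {n.summary} {n.factcheck_url} "
                         f"(traced to: {n.traced_origin})")
        return "\n".join(lines)

post = Post("p1", "Forged caption attached to a real news photo.")
post.attach_notice(Notice("Caption does not match the original photo.",
                          "https://example.org/factcheck/123",
                          "state-run social media account"))
print(post.render())
```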

  • That’s very interesting. I’m thinking of some cultural differences and wondering how successful that approach might be in the United States, where it seems that pointing out the provenance of information hasn’t always helped in changing its effect.

  • I’m reminded of the saying that, “A lie can be halfway around the world before the truth has got its boots on.” I think that you’re demonstrating how the truth can get its boots on faster.

  • That’s right. The truth can be very funny. It could be filled with Doggo memes, yes.

  • As an open source developer, your customers were mostly a narrow base of developers, but government is responsible for the welfare of all of its citizens, including, as you point out, those who neither care nor know about technology.

  • What should governments be doing today in respect of the impact of technology on those people? Does that role, for instance, tilt mainly towards providing education, regulation, or incentives on the tech sector, or something else?

  • In Taiwan, broadband is a human right. Anywhere in Taiwan – for example, on the peak of Taiwan, Yushan, at almost 4,000 meters – you still have a 10-megabit-per-second, unlimited 4G connection for just 16 US dollars per month.

  • Probably faster, because fewer people share it, and it’s very high up. What I’m trying to say is that we not only give equal opportunity to people, we also make the places that are more remote – places people don’t usually think of as digitally included – actually the first places to get 5G deployment.

  • They are the first places to get tele-healthcare, telemedicine, and self-driving vehicles to address their needs in healthcare, communication, education, and so on.

  • We make a firm commitment to the people, saying, “No matter where you are in Taiwan – indigenous nations, outlying islands, and so on – you actually have better opportunity when it comes to the digital, in your local digital opportunity center and so on, as compared to people in the municipal areas.”

  • I personally tour around Taiwan, probably every three days or so, to a place less connected to the main transportation, the Taiwan High Speed Rail. I usually stay there for a couple of days. I’m going to a more remote part of Kaohsiung tonight.

  • Then I spend time with the local elders and with the local social innovation organizations, including co-ops and the nonprofit sector, to hear what they have to say. Then I use broadband as a human right to connect them into a real-time video conference with the central government in Taipei and in other municipalities.

  • This is essentially like a fishbowl, a binding conversation held locally that I personally host, but it’s just me who travels. Using the Internet, we can get the central government to listen to what people have to say.

  • Instead of just handling their ideas like abstract A4 papers that get passed between organizational silos, which gets nowhere – or even worse, maybe gets somewhere, and they think they have solved the problem while causing more problems.

  • We make sure that all the responsible ministries and agencies are in the same room, so they don’t have to copy documents to one another. They can just brainstorm together and solve the local issues in the here and now.

  • I think that the digital again serves as an amplifier. Instead of working on smart cities, we need to work with smart citizens. Instead of working with just machine learning, we need to work on collaborative learning experiences.

  • There’s concern in the West about the impact of artificial intelligence on jobs. Technology development can provide benefits, and at the same time, disruption that has unanticipated or negative effects on some people.

  • Do you see this happening there? Can you forecast that happening? What do you do about that?

  • Here, we say assistive intelligence. This is something that many open source practitioners – and, before that, free software movement practitioners – do. They just redefine the acronym to fit their idea. That’s what I do, too. I always just say assistive intelligence.

  • This redefinition of AI basically tells us that we need to treat AI as we would a human assistant. That is to say, they need to respect our privacy and agency, and act in our best interest, while providing wise counsel.

  • And also accountability. That is to say, if they make a decision on our behalf, we need to be able to ask for a full explanation of why it is in our best interest. That is essential with human assistants. Why should we relax our standards when it’s AI, when it’s machine learning, powering the same assistive technologies?

  • For example, we have a Presidential Hackathon where every year we ask for the best ideas for AI that can transform the public service. Every time, there are hundreds of teams. This year, there are over 200 teams.

  • We choose five at the end, and there is no prize money, but the president gives a trophy, which is a micro projector that, if you turn it on, projects the president handing you the trophy.

  • It’s a meta trophy that describes itself. It represents the presidential promise that whatever idea you have prototyped in the previous three months, we commit ourselves to making it national policy within the next year.

  • One of the best ideas that came out of it is using assistive intelligence to automatically detect leaks in the systems of water pipes and other essential utility supplies. Previously, people were employed to listen to the pipes using a stethoscope.

  • Most of the time, they are listening to pipes that are not leaking. They only get creative and become solution providers when they listen to a part that has been leaking. On average, it took two months between a leak happening and its being discovered by those rotating listeners.

  • Worse, they had problems recruiting young people, because young people did not think this job was very fulfilling, even though it may be relatively well paid. By working with AI researchers, they built a chatbot that can tell each worker the most likely leaking places near them, with a high degree of accuracy, like 70 percent.

  • So every time they travel in a day, they are almost guaranteed to work on something interesting instead of something routine. It did not replace the jobs of those professional repair people, but it ensured that the trivial part of their work gets automated.

  • And the interesting and creative part of their work can then be taught to the young people, who are much more willing to join the workforce of the Taiwan Water Company if they know their work is primarily problem-solving, instead of just doing redundant or trivial work.
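
A rough sketch of the dispatch idea behind that assistant: score pipe segments from sensor readings and send crews to the likeliest leaks first. The scoring rule and data shapes are made up for illustration; they are not the utility's actual model.

```python
# Illustrative leak-prioritization: rank pipe segments so crews spend
# their listening time where a leak is most likely. Weights and fields
# are assumptions for this sketch, not real operational parameters.
from dataclasses import dataclass

@dataclass
class Segment:
    segment_id: str
    night_flow: float     # litres/min during minimum-use hours
    pressure_drop: float  # bar lost across the segment

def leak_score(seg: Segment) -> float:
    """Higher night flow and larger pressure drops suggest a likely leak."""
    return 0.6 * seg.night_flow + 4.0 * seg.pressure_drop

def daily_route(segments: list[Segment], k: int = 5) -> list[Segment]:
    """The top-k segments a crew should inspect today."""
    return sorted(segments, key=leak_score, reverse=True)[:k]

segments = [
    Segment("A-12", night_flow=3.1, pressure_drop=0.05),
    Segment("B-07", night_flow=9.8, pressure_drop=0.40),
    Segment("C-03", night_flow=1.2, pressure_drop=0.02),
]
for seg in daily_route(segments, k=2):
    print(seg.segment_id, round(leak_score(seg), 2))
```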

  • Wow. Just a note to the audience here, I’m going to have to go back over this and listen to it several times to get all of this, so you might want to do that as well.

  • Audrey, you were quoted in the Taipei Times as saying that AI should be developed transparently. In a world where it looks as though most of that cutting-edge AI is coming out of Google, Apple, IBM, how do you see that happening, and what would that look like in a practical sense?

  • I think AI is essentially something that a society brings in to assist its societal values. For example, when MIT developed the self-driving vehicle, the PEV, the Persuasive Electric Vehicle, instead of blindly deploying it to the streets of Taiwan, we first built a sandbox.

  • It happens that my office is in a park, the Social Innovation Lab, in the heart of Taipei. Everybody can just walk in – there are no walls – and have 40 minutes of my time to chat with me. I hold open office hours every Wednesday, so it just happened that those AI researchers came to my office hours.

  • They brought with them those self-driving tricycles and said that they wanted to deploy them. I asked them to provide their source code as open source and, as open hardware, how to build it, and to make the training of it transparent to the nearby institutions, like Taipei Tech and other colleges and universities.

  • They ran quite a few mobility hackathons together to figure out, with the nearby markets, such as the Jianguo Flower Market, how to make the most societally useful use of the self-driving technology.

  • Interestingly, the feedback came from the market, literally: people who had purchased some orchid flowers, holding them, would just walk to the park, see me, and ask, “Minister, what are you doing with those shopping baskets, those trolleys with baskets?”

  • I’m like, “This is not a shopping cart. This is a self-driving vehicle. It gets you places.” They are like, “No, I only see a basket there. This looks just like a shopping cart. Let’s see whether this flower pot will fit.”

  • It will, of course, fit. They say, “Why don’t you reprogram it, so that instead of taking us places, it follows us around in the Jianguo Flower Market, so we can shop in a hands-free fashion? Just pick the flowers and put them into the basket.”

  • We also saw on television that there’s this platooning technology: when a self-driving vehicle is full, when its basket is full, it can step back and summon another one to form a fleet. That way they can help us as a shopping-cart fleet, so that we can do hands-free shopping.

  • They don’t want a fast-running self-driving vehicle. They want a safe one that can help them carry their baggage, as it were. The MIT folks did not design the PEV for that, so we needed to work a lot with the local civic hackers to realize that goal.

  • We replaced the one blinking light with two eyes that show where its attention is. That’s not strictly necessary, because it actually uses LIDAR and other ambient sensing technologies, but if you’re navigating a busy market, you need to show the people around you where you’re going.

  • It needs to read emotions and so on. It needs to understand that in Taiwan we yield first to elders, not to children as people in Boston do, and so on. All these social norms are worked out in a co-creation fashion, so that the collaborative learning of societal norms can co-domesticate the assistive intelligence – in this case, the PEVs.

  • The learning that we did in the past year or so during the sandbox has now transformed the supply of buses, so that after the Taipei Metro closes at midnight – I think starting next week or so – they’re trying out small self-driving buses that hold maybe 30 people.

  • They run in a dedicated bus lane to fill the lack of public transportation after the Metro closes. This is done in a way that maximizes societal input into the norms.

  • Even though the underlying technology may be developed by multinational companies and research facilities, it is essential, before a society decides to incorporate it into the market, to first settle the norms through sandbox experiments and co-created regulatory fields, to make sure that people understand the norms around it, and only then build for those norms.

  • That is to say, smart citizens before building smart city infrastructure.

  • Wow. I’m going to have to process a lot of that offline. I’m looking at the clock here and aware of our time.

  • I want to move onto an international area here. The Obama Administration produced a report from the National Science and Technology Council on the future of AI. One of their recommendations was the US government should develop a government-wide strategy on international engagement related to AI and develop a list of AI topical areas that need international engagement and monitoring.

  • Are you seeing any developments or developing anything in that respect yourself?

  • Yeah, definitely. If you just search for AI Taiwan, you will probably see our national AI strategy website, ai.taiwan.gov.tw. We designed the domain name so that it’s very difficult to beat us on the SEO.

  • It’s probably guaranteed to show up in first place if you search for AI Taiwan, because the domain name is AI Taiwan and the title is AI Taiwan.

  • In it, we make sure that we work with like-minded economies and countries to provide not only the sensors, the optics, and the electronics, but also the edge computing, which is an old forte of Taiwan, with TSMC and other companies – MediaTek and so on – working on this.

  • We also work with other jurisdictions with similar philosophies to transform our learnings into workable policies. We work with Europe on one side to build privacy-preserving and privacy-enhancing technologies.

  • For example, there is one nonprofit in Taiwan called AI Labs, founded by a previous director of Cortana, the interactive AI technology at Microsoft, who went back to Taiwan to start their own nonprofit. It’s a little bit like OpenAI.

  • They’ve developed a contact-tracing app based on Bluetooth and related technologies, with an open blueprint and in consultation with many international counterparts, to tackle the issue of how to make sure that people keep their whereabouts as their own private information.

  • They share only sufficient bits of information so that people get notified when there’s a high risk of getting infected, and so on. We’ve never needed that technology in Taiwan, because we’ve never had community spread.
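
A minimal sketch of the general decentralized exposure-notification pattern being described: phones broadcast short-lived identifiers, remember what they heard, and check locally against keys voluntarily published by people who test positive. This is the generic idea, not AI Labs' actual protocol; all names are illustrative.

```python
# Generic decentralized exposure-notification sketch: rotating Bluetooth
# identifiers derived from an on-device daily key, with matching done
# entirely on the user's own phone. Illustrative only.
import hashlib
import os

def daily_key() -> bytes:
    """Fresh random key generated on-device each day; only published
    if the owner tests positive and consents."""
    return os.urandom(16)

def ephemeral_id(key: bytes, interval: int) -> bytes:
    """Rotating identifier broadcast during one short time interval."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:8]

def exposure_check(heard: set[bytes], published_keys: list[bytes],
                   intervals_per_day: int = 144) -> bool:
    """Runs locally: did this phone hear any ID derivable from a key
    published by an infected user?"""
    return any(ephemeral_id(key, i) in heard
               for key in published_keys
               for i in range(intervals_per_day))

alice_key = daily_key()
heard_by_bob = {ephemeral_id(alice_key, 42)}      # the two phones were nearby
print(exposure_check(heard_by_bob, [alice_key]))  # True, computed locally
```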

  • It’s not people in Taiwan, but people in the UK, for example, who are very interested and have participated in this collaborative development. When it comes to the US, we held the CoHack, the coronavirus, or collaborative, hackathon, at C-O-H-A-C-K dot T-W, cohack.tw.

  • There you can also see many US and international teams tackling the coronavirus, and also how to migrate to a post-corona world with their AI-based technologies. The top five winners, I’m very glad to say, all fall under this idea of autonomy.

  • One team is literally called Autonomy. They respect the individual’s choices and agency, minimizing privacy harm, designing with privacy in mind, but also empowering public decisions.

  • So that we share, in our own best interest, only the part that we’re comfortable sharing, while contributing to pandemic resilience. Yeah, I would encourage you to check out cohack.tw, and all the winners need to provide their source code under the MIT license.

  • Which is great, because some of them are chatbots using the Microsoft Teams platform. If they didn’t provide the source code, we couldn’t so easily port them to other open source chat systems.

  • Is this the time to develop international treaties on the ethical use of AI, and how about autonomous weapons?

  • I think international norms are as important as international treaties. Treaties are honored, of course, by governments and states, and they are important, such as nonproliferation. However, if there is no international norm, then any government can follow only the letter of those treaties while trying to work around them.

  • Much as people work around patents and develop something that is de facto the same, or even worse, because there’s no public oversight. It is only by signaling very clearly to politicians that, if they work against the societal norms, the people will actually hold them accountable.

  • Only then can we actually enforce any kind of reasonable AI ethics guidelines. I think the basic ideas of privacy and accountability are intuitive to many people, but many people do not understand that privacy is not just a negative freedom, like freedom from peeking or freedom from surveillance.

  • It can also be an active freedom: a freedom to form data collaboratives, data trusts, data coalitions that share intersectional social data with each other while collectively determining where that data should go and how to empower the world for good, based on participatory governance of the data.

  • With a credit union, people intuitively understand how money can be managed in a way that is collaborative and socially responsible. When it comes to data, we still need to set up the societal norms that ensure collaborative governance of data.

  • Again, that’s something Taiwan can help with. For example, our air pollution measurement network, the AirBox, was built by thousands of high school and primary school teachers who just measure the air quality on the balconies of their schools and so on.

  • Many people also purchase a very cheap kit, less than $100, for their home. They all contribute to the air quality measurements on a distributed ledger, so much so that they can pressure the public sector, because the environment minister doesn’t have as accurate a picture.

  • They pressure the environment minister to install micro sensors in the industrial areas, so that they have a more complete picture of where the air pollution comes from. The same idea has also been applied to the WaterBox, to arable land, and so on.
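
A small sketch of the "everybody holds a copy" idea behind this kind of citizen-sensor ledger: readings are appended to a hash-chained log that any participant can replicate and verify, so no single party can quietly rewrite the record. The record fields are illustrative.

```python
# Hash-chained, replicable log of sensor readings, illustrating the
# distributed-ledger idea behind citizen air-quality data. Illustrative
# field names; not the AirBox project's actual data format.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_reading(ledger: list[dict], sensor_id: str, pm25: float) -> dict:
    """Append a PM2.5 reading, chained to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"sensor": sensor_id, "pm25": pm25,
             "ts": int(time.time()), "prev": prev}
    entry["hash"] = entry_hash(entry)
    ledger.append(entry)
    return entry

def verify(ledger: list[dict]) -> bool:
    """Any holder of a copy can check that nothing was altered or dropped."""
    prev = "genesis"
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev"] != prev or entry_hash(body) != e["hash"]:
            return False
        prev = e["hash"]
    return True

ledger: list[dict] = []
append_reading(ledger, "school-balcony-17", 12.4)
append_reading(ledger, "school-balcony-17", 35.1)
print(verify(ledger))  # True
```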

  • What I’m trying to get at is that when people collaboratively, jointly control the data production and see themselves not as data consumers and passively observed objects, but rather as subjects, active people who curate the data for the public good, then the whole idea changes.

  • It’s just like encountering disinformation: if you just teach media literacy, which is about consuming information, then there’s a limit to what you can do.

  • If you instead learn media competence, which is treating everybody as producers of information and responsible citizen journalists at that, then you can collaboratively create a much more robust media ecosystem and landscape.

  • Just as we have discussed on the disinformation-countering campaign, we need to build a similar one when it comes to social data sharing.

  • Wow. One last question. What do you think we will face in the next 10 years in terms of progress in artificial intelligence?

  • I think there are two things. First, the idea of a digital twin, which previously required very costly infrastructure to set up, will probably become pervasive with 5G technology. So that, first of all, we won’t have to look at two-dimensional representations of one another, as we do now. [laughs]

  • We will probably just scan ourselves into extended and augmented reality so that we can feel we’re in the same room, with much lower latency and much higher fidelity. That essentially brings the online social norms into direct overlap with the offline social norms.

  • It is essential that this kind of digital twin – not only of our public infrastructure, our cities, and so on, but also of ourselves, our online personas – all be accurately built in a way that respects human dignity and the societal norms that make people comfortable sharing with their friends.

  • Instead of just being in a matrix, literally, like in that movie. That is the main thing I think we need to work on in this post-5G era. That’s the first thing: pervasive, immersive computing and ambient computing. That’s one.

  • The second thing I would like to highlight is data collaboratives. We already see data collaboratives, because of the pandemic’s requirements. In Taiwan, we built them, of course, with controllership firmly in the social sector.

  • We also see that in other places, because of the pandemic, they’re justifying much more state surveillance, state control, and so on. In still other places, it’s in the private sector, with very fragmented governance relationships.

  • I think this kind of governance, as outlined in a book called “Surveillance Capitalism,” needs to be more widely read and understood. In the next 10 years, we will probably see all three of those models of data governance amplified even more by the pervasive AIoT technology that I just alluded to.

  • These governance models will probably go back and re-inform the ideas of constitutional democracy and redefine what democracy really is. When people simply accept governance by algorithm or by data, it essentially narrows legal protection and access to justice to only the people who understand the source code.

  • Those people become like lawyers in this new era. If we don’t democratize that competence – the way civic rights education is given to people when they’re just primary schoolers – if we don’t do that for algorithmic governance, then even the best-designed liberal democracies risk becoming authoritarian or even totalitarian in the next 10 years.

  • Thank you. Audrey, I remember from the conferences where you would be presenting about software to developers that you would be going at a rate that would cause steam to come off their heads.

  • I feel the same way right now. I’ve got so much to think about here. Thank you so much for sharing that with everyone who’s listening here. I wish you all of the best in all of the challenges and continued success with the programs that you have been carrying out. They have so much to teach us in the West, and I hope more people listen. I’m doing my part to help with that, so thank you.

  • Thank you. Thank you for going through lightning talks rounds with me. [laughs] If you want to learn more, there’s more at taiwancanhelp.us.

  • Fantastic. Thank you. Audrey, that was great.