-
Shall we do… Would it be helpful to do a really quick round of introductions? I can start. So, my name is Rohan, I’m a consultant. I work with a company called TPXimpact, and I’m part of the project team working on this piece of work, which you’ll hear more about in a second. I’ll hand over to my colleague Conor to introduce himself.
-
Hi, I’m Conor, I’m working with Rohan. So, I’ve been working on this project for a few weeks, and we’re basically trying to build a framework to evaluate productivity. So, we’re speaking to a whole bunch of people who’ve worked on this project, or worked with this project in the past, so really looking forward to speaking with you guys.
-
Great, and then we have Hannah and Prateek.
-
Hello, so I’m part of Rohan and Conor’s team at TPXimpact. I’m a consultant doing strategy and policy engagement work, and I’m part of the team working on this project, doing a lot of the research element of the work. And yeah, really excited to be working with you.
-
Great, and Prateek?
-
Hi folks, Prateek Buch. I guess I’m also a consultant within the UK civil service. I’m part of a team called Policy Lab, which acts as an internal consultancy on innovative, people-centred digital things, and is heavily, heavily inspired by Taiwan’s leadership on the use of Polis and other engagement approaches. So, we’ve been using Polis in the UK to help make policy better, and we’ve started asking the question: how, if at all, are we helping to make policy better by using these approaches? Which is why we brought TPX in, and we’re really looking forward to hearing more about Taiwan’s approach.
-
Great, and on our side we have Wendy, maybe?
-
Hello everyone, can you hear me? Okay. Hi, my name is Wendy, and I’m currently working in the Minister’s Office in the Ministry of Digital Affairs, and I’m responsible for citizen assemblies. We just completed a deliberation event in March that was statistically representative of the entire population of Taiwan. My colleague will provide detailed information about the relevant content shortly. Thank you.
-
We have prepared a short presentation. And now we have Wei-zhong Huang and his digital double; we see two of you, Wei-zhong Huang.
-
Oh, yeah. Yeah, because we are in the same room as other colleagues, so you can see two of me. I’m Wei-zhong, I’m from ITRI, the Industrial Technology Research Institute. My major duty in ITRI is artificial intelligence and cybersecurity. Right now, I’m working for the Digital Minister, working for Audrey on cybersecurity and artificial intelligence. We’re also working on electronic signatures in Taiwan, and we hope to have some kind of mutual recognition or interoperability with global electronic signature technologies, so that we have a signature ecosystem. Thank you.
-
Great. And we heard from the President’s office that next Monday, our new Electronic Signatures Act will take effect. So, it’s really good, a major step forward. And we have Timmy, right?
-
Can you hear me? Okay. I’m Timmy Lin, and I work as a section chief for moda, responsible for AI application and development. We established the AI Evaluation Center recently, and we held a deliberative assembly on March 23. We also held a meeting with major AI companies, like Google, Meta, and OpenAI, to discuss AI reliability and responsibility. So, we have made lots of progress on AI applications and reliable AI.
-
Great. So, in the interest of time, I know that Timmy has some slides available, but we can also just send them to you after the meeting. So, if it helps, maybe we would very quickly delve into the slides for five or seven minutes, just so that you have a general idea of our latest citizen deliberation, if that’s okay with you.
-
Yeah, that would be great. Thanks so much. I guess you’re comfortable… Would it be helpful to have any more context about the project, or do you want to just jump straight in?
-
I think there’s somebody else joining?
-
Okay, he is my colleague from NICS.
-
Okay, great. From the National Institute of Cyber Security. So, Timmy, is the idea for you to do a slide sharing?
-
Yes, okay.
-
While we’re setting up the slides, maybe Rohan or someone from your side can provide a little bit of context?
-
Yeah, sure thing. So, Prateek gave a sneak peek into what we’re doing. Like we said, we’re on a project trying to look at how we can develop a repeatable methodology for Policy Lab to measure the productivity of the methods it uses. And as a way to start, we wanted to use a case study: a specific method that they use, collective intelligence, as a sort of prototype, to see how we can do this and then expand from there. So, while we’re developing this methodology, we’re also doing a bit of a point-in-time evaluation: how has the collective intelligence capability been used to date, what lessons can be learned, and what might the options be to optimize its use in the future?
-
So, I appreciate that’s still just a very high-level summary, but hopefully, that gives you a little bit more of an insight into what we’re trying to do and why we want to hear all about the things that you’ve been doing.
-
Okay. Great. Yeah. Actually, PDIS, the Public Digital Innovation Space, started as a kind of offshoot of Policy Lab, with Fang-Rui Zhang joining from Policy Lab UK to Taiwan in 2016. So, we were a little bit of a sibling at the time.
-
So, yes. Are the slides ready? Can we have the short presentation?
-
Okay. I will make a short presentation and share my screen. Can you see the screen with the slides?
-
Yes. You can click in… that’s a hyperlink on the bottom of your screen.
-
Okay. And as you know, Taiwan is building an AI ecosystem. You can see the talent cultivation, the technology, and the investments, and all of these are part of the work of the Ministry of Digital Affairs, as well as the Administration for Digital Industries. But underneath all of this is the AI evaluation system, so that we can establish standards and application guidelines to ensure that AI is trustworthy.
-
And the next page: building on Taiwan’s strength in AI chips and hardware, and coordinating with powerful international generative AI platforms like ChatGPT and Copilot, we are also promoting more startups and small and medium-sized companies to develop AI tools and software for various industries, which can help those industries improve their efficiency.
-
Now, as you may also know, we have more than 300 AI startups with more than 10,000 employees, and that is only counting the ones working on AI applications directly, not the many downstream companies already powered by them. We see this as the foundation of our AI evaluation and certification system, covering both systems and products as listed there, so that we can ensure the startups create systems and products that fit people’s expectations.
-
The structure works like this: a steering committee guides the AI Evaluation Center, making sure we specify the evaluation systems for the evaluation institution, which in turn guides the development of tools based on those assessment items at the AI evaluation tool development level. AI testing laboratories then apply those tools and provide testing reports for the systems being tested in those labs.
-
Now, as for the items to be assessed, we basically took a union of the NICS, ISO, EU, and also some of the MITRE ATLAS items. The ones pictured on the left in bright green and bright blue are the ones that can technically be evaluated more or less automatically; the others are the ones that need societal input or additional review by experts.
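As a minimal sketch of that union-and-partition idea, with hypothetical item names standing in for the real catalogues:

```python
# Hypothetical item names; the real NICS / ISO / EU / MITRE ATLAS catalogues differ.
nics = {"security", "robustness", "privacy"}
iso = {"robustness", "transparency"}
eu_ai_act = {"accuracy", "transparency", "privacy"}
mitre_atlas = {"adversarial robustness", "security"}

all_items = nics | iso | eu_ai_act | mitre_atlas   # union of assessment items

# Items that tooling can check automatically (the bright green/blue ones);
# the rest need societal input or expert review.
automatable = {"accuracy", "robustness", "adversarial robustness", "security"}
needs_review = all_items - automatable
print(sorted(needs_review))   # ['privacy', 'transparency']
```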
-
So, this is just one very short example. All of you, of course, have your AI models, and some of them have multi-turn conversation capabilities. By offering us either API endpoints or access for assessment before deployment, the models can be evaluated through the testing tool’s web UI: you select the category, the risk level, and the application, and then you get the results.
-
At the AI Evaluation Center, we are progressively developing different kinds of evaluation for object detection models, audio classification models, image generation models, and multi-modal models. Additionally, we plan to exchange AI evaluation techniques, standards, and laws with international organizations through seminars, expert exchanges, and similar activities, in order to establish mechanisms that align with international certification systems. Furthermore, the AI Evaluation Center also plans to assist the private sector in establishing AI testing laboratories, promoting domestic AI evaluation technology and the development of Taiwan’s AI industry, thereby enhancing the trustworthy AI environment in Taiwan.
-
And the next topic is the alignment assembly. We are dedicated to promoting trustworthy AI. Last year, we became a partner of the international non-governmental organization called the Collective Intelligence Project (CIP) to collaborate with global organizations in providing the public with more convenient and reliable AI systems. Additionally, from August to September of last year, we organized deliberative workshops on the democratization of AI futures, inviting public participation and discussion on the direction of AI development.
-
So, there is a global declaration on information integrity online. Integrity refers to ensuring that information maintains accuracy and consistency throughout its lifecycle, including storage, processing, and transmission, to guarantee that information remains reliable and accurate at all times. In the context of generative AI, information integrity involves creating an information ecosystem that generates accurate, transparent, and reliable information. This ensures that individuals interacting with various AI systems can rely on the information they receive, have access to healthy sources of information, and are protected from rumors and manipulated data.
-
So, this is the detailed process of the collaborative assembly. We sent 200,000 SMS messages with surveys to citizens and selected a sample of at least 400 participants from those willing to respond. This selection was based on stratified random sampling, considering variables such as place of residence, gender, and occupation, to ensure the sample aligns with the overall demographic ratios in Taiwan. The online citizen deliberative assembly was held on March 23.
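As a rough sketch of what stratified sampling against population ratios can look like, with hypothetical strata and field names (the transcript does not specify moda’s exact procedure):

```python
import random
from collections import defaultdict

def stratified_sample(respondents, strata_keys, target_size, population_ratios):
    """Draw a sample whose strata match known population ratios.

    respondents: list of dicts, e.g. {"id": 7, "region": "north", "gender": "f", "occupation": "service"}
    strata_keys: variables to stratify on, e.g. ("region", "gender", "occupation")
    population_ratios: {stratum_tuple: share_of_population}, summing to 1.0
    """
    by_stratum = defaultdict(list)
    for person in respondents:
        by_stratum[tuple(person[k] for k in strata_keys)].append(person)

    sample = []
    for stratum, share in population_ratios.items():
        quota = round(target_size * share)   # seats this stratum should fill
        pool = by_stratum.get(stratum, [])
        sample.extend(random.sample(pool, min(quota, len(pool))))
    return sample
```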
-
So, this is our discussion topic: utilizing AI to enhance information integrity. The online deliberative meeting was divided into two sessions, discussing four topics.
-
Session 1 topics included how AI can analyze and identify deceptive content, and whether regulation should be strengthened by industry or government. Session 2 topics included how social platforms can use AI to maintain information integrity, and how AI can better maintain information integrity standards. We wanted citizens to think about and discuss how to use AI to improve information integrity, avoiding widespread misinformation and disinformation.
-
So, this slide shows the online deliberative platform. We used Stanford’s online platform and hosted group discussions with 10 people in each group. Each group was led by an AI facilitator serving as the table head to facilitate the topic discussions.
-
Following the exchanges and discussions on the four main areas, 10 interdisciplinary academics and experts in information science, AI law, humanities, social sciences, communications technology, and computational linguistics shared ideas and opinions about the issues concerning the public. After a half-day discussion, three consensus outcomes were determined: monitoring and regulation of AI harms, strengthening the ability to analyze and identify deceptive content, and evaluation standards for large platforms.
-
When it comes to monitoring and regulating AI harms, citizens hope to have more involvement in driving AI-related policies and regulations. They also expect major social media platforms like Facebook or Instagram to be able to analyze and identify deceptive content, and they expect the establishment of an evaluation mechanism for the management and discernment of information on these large platforms. The viewpoints gathered from the online citizens’ deliberative assembly provided important references for inviting the major international AI companies to discuss these topics at a follow-up conference.
-
So, we held that meeting by inviting Google, Meta, Microsoft, and OpenAI; we set two topics to ask them about, and we have made some progress. This is the conference held on April 7th. There were two main concerns regarding the analysis and identification of AI-generated content on large platforms. The international companies agreed to establish the following practices: AI content detection technologies like SynthID, tracing labels, and signatures; and user reporting mechanisms and alerts on AI-generated content. For example, Google has already labeled generative AI content on YouTube videos, Meta will label and verify generated information, and OpenAI will continue to implement tracing mechanisms like C2PA to promote information integrity.
-
All the international companies at the meeting agreed that AI needs to be trustworthy, accurate, and safe. They mentioned that their teams are constantly testing and adjusting their models to meet the expectations of reliable AI systems. When moda invited them to provide their language models to the AI Evaluation Center established by NICS for testing, everyone reacted positively, and Meta and OpenAI have tentatively agreed.
-
Okay, that is the end of my short brief and slides.
-
Well, thank you. I just have two points to add.
-
First, I think the mini-public, the 450 citizens who did participate, is statistically representative, meaning that legislators and fellow ministers have a certain confidence that the deliberative result reflects the composition of the entire population. And so, when 85% of people agree, we know that people are behind these suggestions.
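For intuition on why a 450-person sample supports claims like this, here is the standard survey margin-of-error arithmetic (a back-of-envelope check, not a figure from the meeting):

```python
import math

n = 450   # mini-public size
p = 0.85  # observed agreement
z = 1.96  # 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"{p:.0%} +/- {margin:.1%}")  # roughly 85% +/- 3.3 points
```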
-
And second, as you just saw, those 10-person groups were not led by human facilitators, but rather by an AI interface. In the past, we were often limited by factors such as the venue or the number of facilitators. But this time it is truly broad listening, not broadcasting, so that we can more easily drive consensus through such assisted collective intelligence, or ACI, as we call it.
-
Oh, and finally, as I mentioned this morning, the three major suggestions made it into the Anti-Fraud Act draft. So, it’s in the parliament now. That’s also one of the shortest turnaround times. When we did Uber 10 years ago, it took many months before the assembly result actually made it to the legislature floor. But this time, it only took a month or so. So that’s my supplementary remark.
-
I would love to pick that exact point up and just unpack it a little more, because that’s something we have heard about the potential of this tool: exactly that, how it can help speed up the policymaking process, reach consensus quicker, and give ministers confidence in the results, which then ultimately leads to what you’ve described, it’s already in the draft legislation.
-
So, I guess if we could just think about how this tool has developed over time… You mentioned the ability to get statistical representation, but are there other things, beyond what you’ve just said, that have helped to build confidence and speed up the policymaking process?
-
Yes. So, using a taxonomy from our good friends at DemNext, the before, the during, and the after are equally important, and we have improved over the past decade on all three parts. The before part is helped by… there’s a slide that talks about the trusted SMS number, right? The 111. This is a new invention this year that lets us send out SMS messages to random people, two hundred thousand at this time, and it went to 170,000 numbers successfully. So, when people fill in the survey attached to the SMS, first, they know it comes from the government, so they can trust it; and also we get a much wider sample than traditional approaches like rolling waves or things like that. So, when we say it’s stratified random sampling, we really mean that it’s as rigorous, if not more rigorous, than a poll, basically the most rigorous of polls. So, this is the before part.
-
And the during part: we have tried all sorts of deliberation methods, but this time, I think it’s the first time that it is really scale-free, in the sense of an AI facilitating a room of 10 people at a time. And the beauty of it is that it’s transcribed in real time. When people see that the transcription or the auto-summarization doesn’t work, they can just interrupt and contribute a few seconds of corrections, and raise their own topics, their own initiatives and so on, which is like a real-time Polis, right? They also do voting; on the right-hand side, you can see that they’re doing a kind of real-time voting on what to surface. And so that’s the Stanford tool.
-
And we’re also working with the AI Objectives Institute on a tool called Talk to the City that can then take this entire transcript and use large language models to automatically identify clusters of opinions, division points, as well as potential bridging statements. And again, entirely automatic. It used to take senior researchers a month or so to get to that point, but now assistive intelligence can do it all. So, that is the during part, by the Stanford tool, and the after part, by Talk to the City. This all contributes to a more scale-free and risk-free, I would say, setting.
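Talk to the City’s internals aren’t spelled out here, but the general embed-and-cluster pattern behind such tools can be sketched; TF-IDF stands in below for the LLM embeddings so the sketch runs standalone:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_opinions(utterances, n_clusters=5):
    """Group deliberation utterances into rough opinion clusters.

    Tools like Talk to the City use LLM embeddings and summaries;
    TF-IDF stands in here so the sketch runs without external services.
    """
    vectors = TfidfVectorizer().fit_transform(utterances)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    clusters = {}
    for text, label in zip(utterances, labels):
        clusters.setdefault(int(label), []).append(text)
    # Each cluster can then be summarized, and statements rated highly across
    # clusters become candidate bridging statements.
    return clusters
```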
-
I promised that I would be in listening mode, but I’m going to abuse that. I think there’s something really interesting in the speed, compressing your process compared to, as you mentioned, the kind of vTaiwan model: Polis and the sort of online-offline hybrid model.
-
Is there a trade-off between speed and… I don’t know what the other dimension is: quality, integrity, trust, depth, richness. Not from your perspective of running the process, but perhaps from the legislators, the people who are receiving the recommendations: do they feel they can place the same amount of confidence in this as they did in the previous approach?
-
Yeah, that’s a great question. So, we had a more open-ended process, more like the Discover part of the Double Diamond, in 2023 when we first started Alignment Assemblies: a more traditional vTaiwan process of a Polis for agenda setting and two workshops, one in Tainan and one in Taipei, for face-to-face deliberation.
-
And of course, it did lead to much more networking, hallway tracks and so on, which is typical of face-to-face deliberation. And the depth of the recommendations really helped, because we used them to co-create a constitutional document to help align our sovereign model, taide.tw. So, we basically taught AI to align to the people in Taipei and Tainan, building reward models that help to surface particular Taiwanese cultural concerns.
-
And simultaneously, Anthropic was doing the same thing in the US, and the Claude 3 Opus training showed that, for disability rights, I bet that none of the original drafters of the Anthropic constitution were differently abled. So, that wasn’t found in the original constitution, but it’s one of the things that was highlighted by the people’s contribution. So, there’s a lot more nuance that you can get from good face-to-face facilitated conversation.
-
Now, for this particular one, the info integrity one, I would argue that the legislators already know all the nuances about being impersonated for fraud, scams and everything, because they’re also celebrities. So, for this particular threat model, that is to say deepfake fraud and misleading advertisements and things like that, I do not think that more nuance would add to the legitimacy of the process, because people already feel it as a harm.
-
Indeed, it was one of the main harms surfaced by the previous round, the Ideathon round. So, for this one, I think it’s a good idea to have a little bit of a look at the process. For this round, we’re much more in the Define stage, not the Discover stage. And so, a quick convergence process helps us to reach the how-might-we questions, the three with the most consensus, and also get buy-in on the commitments, synchronously, from the major labs.
-
So, in a sense, we still retain the vTaiwan process spirit of getting a multistakeholder conversation. It’s just that it’s much more of a checkmark exercise. Like, Meta, can you do it? Microsoft, can you do it? Microsoft says no. OpenAI says, well, OK, we can do it, right, because it’s multistakeholder. So, what used to take days and months in vTaiwan now just takes an afternoon, basically. But we did not skip the Develop part of the diamond.
-
Hope that answers your question.
-
Yeah, no, that was really interesting, which leads me to my next question, actually. You talked about how you can use Polis in different stages of policymaking, and I’m just curious if you have any reflections on whether it works better in particular instances, or maybe not, and it’s actually just about the way that you use the tool that makes the difference?
-
Yeah. I think Polis works if there is a need for clarity from all stakeholders, that is to say, the dimensionality is so large that we cannot pre-map it all. So, it works best at the beginning of the Discover stage of the double diamond. And that was indeed the case last March at the beginning of alignment assemblies: nobody really knew what everybody thought about AI safety and openness, and Polis really shone in that.
-
And in fact, working with CIP, we held the zeroth alignment assembly just on the margins of the Summit for Democracy. What’s really interesting is that the leaders, the delegates and so on to the Summit for Democracy, by and large agreed that we need to focus on information integrity first. Because no matter if you’re a doomer or a boomer, if we lose the ability to coordinate, we lose the ability to tackle any further challenges, so we need to fix this first before moving on to others. And for this kind of bridge-making statement, Polis really shines, but I bet All Our Ideas or Remesh or whatever also works for the same purpose at this particular stage.
-
That makes total sense. I guess, yeah, we’ve been thinking about it in two dimensions, if you think about how Polis can help improve the productivity of policymaking. One is what you just talked about: quality, being able to understand what people think and get really rich insights.
-
And then there is an opportunity further downstream; we talked about being able to turn what used to take months into an afternoon-type process. It’s clearly a multi-purpose tool, but it’s interesting to hear you say that you personally think the value probably is at that discovery stage of the double diamond.
-
Um, I’m wondering, have you… I’m sure you have. How have you thought about evaluating the effectiveness of using Polis? You talked a bit about evaluating AI, but specifically for the tool: how do you monitor and track whether it’s doing what you want it to do? And, um, there are these categories here, so those are almost the things that you would be measuring and collecting evidence against. Is that right?
-
I’m sorry, I didn’t get the last part.
-
Oh, so I’m just looking at the slide here that’s on… I can see… Are these the types of metrics or measures…
-
Oh, yeah. Yes, correct. So, the idea here is societal evaluation, right? What’s accurate for the researchers may not be what’s accurate for the people. And so, we really want to hear from the people: what do they consider accurate, what do these words even mean for people. And for societal harms, we want to do this regularly, at least twice a year, maybe more often, so that we can also surface more harms: over-reliance harms, addiction harms, emotional manipulation harms, you name it.
-
And so, I think it serves two purposes. One is that for the existing guidelines, which are very abstract, like the EU AI Act, it’s quite abstract actually, when they mention things that can be operationalized, like integrity or accuracy or harm or privacy or whatever, this is like an instant jury that can help us adjudicate what these words really mean in the here and now. And second, for the kinds of harms that we don’t have a good clue about yet, that’s another discovery process right there, so we can use this as a horizon-scanning tool.
-
I see your hand up, Prateek.
-
I think… building on that, I think Rohan and Hannah and colleagues, we are also interested in how you evaluate the engagement process itself. So, the use of the Stanford platform or Polis: how do you go about evaluating how that has been run as well? Because the public input is really, really crucial for evaluating the development of AI tools, but how do you develop an evaluation for that public input itself?
-
Okay, so we have run literally hundreds of citizen participations, and there are really two metrics that we care about, right? First, does the career public service, the people who are actually in the government as a job, feel that it is virtually risk-free, and are they willing to do this again for the same topic? This is the sustainability of the system. And the second is how quickly some measurable result coming from the citizens’ suggestions can be announced, in a way that makes the original participants feel that it was worth their time. So, I think these two are the main metrics that we measure.
-
For this particular one, because there was a lot of legislative inquiry, interpellation and so on, about, you know, whether people actually trust moda to hold Facebook or TikTok accountable, whether we’re effective against fraud, and whether these measures are backed by citizens and so on, I guess there’s another legitimacy measurement. So we did do pre and post testing for all the 450 participants about their trust level in moda and in other related stakeholders, so that if we implement such expectations in law, we know whether they are happy with the state now holding slightly more power than before. So, that was the third thing. And we’re happy that we got more than 85% support for that, and legislators are also quite happy with that number.
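A minimal sketch of that pre/post legitimacy measurement, assuming simple Likert-style trust scores (the actual survey instrument isn’t described here, and the numbers below are hypothetical):

```python
def support_rate(scores, threshold=4):
    """Share of participants at or above a trust threshold (e.g. 4 on a 1-5 Likert scale)."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical paired ratings for the same participants, before and after the assembly.
pre = [3, 4, 2, 5, 4]    # ...450 entries in practice
post = [4, 5, 3, 5, 4]
print(f"pre: {support_rate(pre):.0%} -> post: {support_rate(post):.0%}")
```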
-
I see a hand from Conor.
-
Sorry, I’ve put my hand up five times. Rohan, I hope I’m not skipping ahead of questions that you’re going to ask. Was there a learning curve internally to educate policymakers and other senior officials around the results of Polis, and how did you communicate the advantages of Polis versus, say, traditional polling? And how has the brand of this research product, Polis, been developed over time?
-
Yeah. Well, not many can answer that, because that’s a 2015 or 2016 question. I mean, now people just see, oh, moda is doing an assembly; of course, that’s like a poll, right? It’s like a Google form, right? It’s just so much part of the culture now. We don’t need to explain this. It’s just, you know, everyday stuff.
-
But it wasn’t the case in 2014, 2015. I think we did two things right back then. First, we let civil society choose which topics were important to them, a little bit like the Ideathon this time: we explicitly asked the people, what kinds of AI uses or harms do you want us to run such consultations for? And that really helped the legitimacy, because it’s, in a sense, citizen-led. In the following years after 2016, we often used Polis in conjunction with the JOIN platform, again answering questions where the initiative is set by at least 5,000 countersignatures on our e-petition platform.
-
So, because there is a ministerial duty to answer these petitions anyway, Polis is then set up as a kind of signal-from-noise tool. I wouldn’t say noise cancellation, but it’s at least a kind of catalyst tool, because nobody has the time to read through comments from 5,000 petitioners. It’s impossible. But if, instead of a normal petition board, you have a pro column and a con column, and if you seed the Polis statements with the top three or four ideas from the pro and con columns, this is something we learned from the Icelandic Better Reykjavik system, then you very quickly get the rough consensus, as well as the points of division and the bridges, and then the minister’s team only has to process like three A4 pages instead of 300, right?
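Polis’s actual statistics are richer than this, but the rough-consensus versus division idea can be sketched over a vote matrix, assuming opinion groups have already been identified (for example by clustering voters):

```python
import numpy as np

def consensus_and_division(votes, groups):
    """votes: (participants x statements) array of +1 agree / -1 disagree / 0 pass.
    groups: one opinion-group label per participant.

    A statement is rough consensus when every group leans agree; it is
    divisive when the groups lean in opposite directions.
    """
    group_means = np.array([votes[groups == g].mean(axis=0)
                            for g in np.unique(groups)])
    consensus = group_means.min(axis=0)  # high only if all groups agree
    division = group_means.max(axis=0) - group_means.min(axis=0)
    return consensus, division
```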
-
So, it was always built as a time-saving tool, which also scores points with the career public service. So, this is the first answer: it’s a time-saving tool to address something that has urgency anyway, and that is set by civil society or the people in general. First answer.
-
And the second answer is that we made sure we tackled the issues in a way that doesn’t result in explosion. At the very same time in Taiwan, we had a lot of very explosive topics, like marriage equality and so on, that went all the way to the constitutional court, the referenda and things like that. But we chose the Polis questions very, very carefully to focus only on the overlapping consensus, that is to say, things that specifically cut across ideological lines. So instead of, for example, in the Uber case, asking what do you think about extractive economy versus sharing economy, which is guaranteed to lead nowhere, we asked instead: how do you feel about someone with a driver’s license, but not a professional driver’s license, picking up strangers they meet through an app on their way to work and charging them for it? So, a very down-to-earth, very specific case.
-
And so, because we kept at it, the ministries eventually, after five or six years, discovered that, oh, there’s no way it will explode when you set topics such as this. Hope that answers your question.
-
Really nice. Yeah, thank you. That was very helpful. Rohan, I’ll hand back to you.
-
Well, yeah, I was just going to say, I’m conscious I’ve been hogging the mic. So, I just wanted to ask either you, Conor or Hannah or Prateek, if you had any follow-up questions.
-
I have one follow-up question. Have you ever faced any skepticism about AI analysis versus human analysis?
-
Talk to the City remains experimental, right? It’s an addition to the methodology. In fact, we did not use Talk to the City this time when drafting the Anti-Fraud Act. We plan to use it as a kind of supplemental information, so that people can go back to the deliberation and feel that they’re part of the deliberation again, in a way that increases post-event informed engagement, which is always the hardest part with a mini-public, right? They came here, they got a consensus; it’s hard to carry that mood back to their communities and so on. So, in a sense, it’s a communication or educational tool, but we did not rely on it when we drafted the Anti-Fraud Act. We did that just by old-fashioned reading of the transcripts out of the Stanford deliberation platform.
-
And going forward, with the latest crop of generative models, we can do some side-by-side testing and so on versus human summarizers. I think they’re generally at a good point, and if we don’t like how they’re summarizing, we can also fine-tune them, specifically the larger LLaMA models, which the Taide team can fine-tune very quickly, like four or five days, if you have the constitution.
-
Nice. Brilliant. Thank you.
-
I’ll jump in, if that’s alright. I’m really interested in the point you made around it often being used in addition to other methods. And I know you mentioned as well that kind of hybrid approach of having some in-person engagements, so I think it’d be really valuable to hear how often it is the case that you do something in-person alongside something digital, and how those things work together. And also, what measures might you use to make sure that the process is inclusive to as many people as possible?
-
Yeah, so when the issue is new, when the scoping is more on the discovery side and a little bit on the define side, we tend to use face-to-face, and not necessarily with mini-publics, because at some point it may actually help to have a rolling stakeholder discovery process instead, just because so few people actually have an idea of what it is, right? But when there is already widespread agreement about, for example, info integrity harm, I mean, if you ask a random person on the street whether they have been scammed or have seen misleading information online, they’re going to say yes.
-
So, at this stage, the mini-public has two good things going for it. First, it’s really inexpensive; with the streamlined process, it’s now really quick to set up. And second, it’s far harder to have explosions when you do things online. But of course, accessibility is an issue, and we need to augment it with, for example, sign language interpretation and so on. These are all needed.
-
And fortunately, the ADI, the Administration for Digital Industries under moda, also has some projects working on this automated, or at least remote-controlled, sign language interpretation. So, it’s really, really important, and I think that’s the key. With sign language interpretation and multilingual interpretation, I think within a year or so this will not be a problem: everybody will just see and hear the people at the online round table speaking the language, or even within the culture, that they’re most comfortable with.
-
That was so interesting. I was also thinking about digital exclusion, as in people who might not be comfortable with or might not have access to devices, and how those people might usefully be involved in the conversation as well?
-
I think for this particular one, info integrity, the relevant public are people who are online. And because we have an internet penetration rate easily in the high 90s for people in this age bracket, I mean, if you’re consulting on the rights of very senior people, those 80 years or older, maybe that’s a different conversation. But for this particular conversation, we’re not losing much by choosing an online-only format for the synchronous video deliberation.
-
And also, I would argue that for many people, receiving an SMS from the state and using the same phone to participate in this video conference is easy enough. They don’t need helpers; they don’t need people to help set it up. In fact, we do have elderly people and so on who feel quite liberated that they don’t have to travel all the way to Taipei City or Tainan City in order to participate in this conversation.
-
So, I would say, of course, there’s some digital gap to be considered. But because Taiwan is, I think, the country with the fastest internet speed of any country over 10 million population, and we have broadband as a human right, for a flat fee you can have unlimited broadband for just 15 euros a month or something, we’re not excluding people who are economically disadvantaged much by running this process. And we gain inclusion for people who have mobility requirements.
-
Oh, I feel like we’ve got a lot to learn. But yeah, it’s a good point, because we’ve been looking at inclusion while recognizing that this kind of process will be more inclusive for some and less for others. So, it’s that trade-off, isn’t it?
-
Yeah. Prateek?
-
In the last few minutes, I was going to ask you about the future of this sort of space. I should preface this by saying that every time I hear from you, Audrey, or from your colleagues, I feel like I’m seeing the future. Because for us, we are in this space where you were in 2014, 2015: we still have to persuade policymakers to use the computer, not just an artificially intelligent system, but frankly, not to use paper. I’m just being very open.
-
In some cases, we have to persuade people that a non-paper-based or non-in-person, online process is valid at all, let alone then think about some of the more modern tools. However, that’s a UK problem.
-
And I also think that when we point some of these tools not so much towards conversations about AI and information integrity, but start using them for policies about other things, those trade-offs around inclusion and the way the technology gets used do become more relevant. I’m sure even in…
-
However, it would be great for you to give us a few minutes on where you think the future of this kind of thing lies. Like I said, my mind is blown at the speed with which you’re integrating things I’ve only just recently heard about. But where do you think the future might lie? What are the potential barriers you think you might need to overcome?
-
I just wrote a book about it: the Plurality book. If you check it out, you can begin reading at part five, Collaborative Technology and Democracy. It outlines the future as we see it. And I think Amazon will start selling it on May 20.
-
Anyway, but to answer really, really briefly, I think there’s a frontier here, like the Pareto frontier, or the frontier for productivity: the degree of collaboration and the breadth of diversity, which used to be framed as trade-offs, may no longer be as much of a trade-off as we think. Right? So, I have in my mind, for example, this picture. I will just paste the link so that you can open it in your own browser.
-
If you can click it, it’s from section 5-0 of the book. So, if you open that link, it shows the idea of plurality, or interoperable coexistence, not as a trade-off between the two sides, more depth or more scale, but rather as something you can plot in two dimensions. And then you can picture the conversations we’re now having. For example, the idea of moving from just voting, which is a vector from a lot of people, to bureaucracy with computers, which is structured data.
-
And now, we’re using augmented deliberation, which is like a conversation, natural language and so on, as a frontier. And each technological improvement that we make, language models, immersive shared reality, real-time translation across cultures and so on, moves these points up and to the right a little bit, so that it moves like a zigzag. Every time we use the Stanford deliberation tool, for example, we keep the same structure but reach slightly more people. And each time we use something like shared reality, Holopolis, talking to a river, or whatever, we retain the number of people, but the conversation is now more in-depth, bringing more bandwidth into it.
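One way to read that picture, each method as a point on a breadth/depth plane, with purely illustrative, made-up coordinates:

```python
# Purely illustrative coordinates: breadth = people reached, depth = bandwidth per person.
methods = [
    ("voting",                         1_000_000, 1),
    ("e-petitions / structured data",     50_000, 3),
    ("Polis vote matrix",                 10_000, 5),
    ("AI-facilitated rooms of 10",           450, 8),
    ("face-to-face deliberation",             50, 10),
]

# The zigzag: each new tool improves one axis at a time, more breadth at the
# same depth, or more depth at the same breadth, so each step feels incremental.
for name, breadth, depth in methods:
    print(f"{name:32} breadth={breadth:>9,}  depth={depth}")
```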
-
Usually, when you improve both at the same time, it becomes brittle. It becomes fragile. People don’t have a reference model of it, and so you probably run into this ministerial blank stare, like what you were talking about. But if you phrase your improvements as improving on one axis at a time, so you push the scale to the right, and then to the top, and then to the right, and then to the top, you actually get to much more plural conversations much quicker, because every time it just feels like an incremental change. That’s what we have done in the past 10 years.
-
If you go back to 2014, 15, 16, 17, to our collaborative meetings, our participation officers, you will see that every consultation looks like the one a month before it, except with one slight methodological change. And so, 10 years later, we reach this point.
-
It’s fascinating, because I don’t think we do that zigzag. Every consultation just looks like the last one, with exceptions. But that’s the thing: I don’t think we do the incremental increase… and I think that that image is incredible.
-
What’s amazing, for TPX colleagues, the challenge I’m hearing here is trying to express that journey towards a different quadrant in that image in terms of productivity, of policy influence and learning. Because you mentioned the career civil servants, right? Although I kind of am one now, I’m supporting the Policy Lab.
-
I think the challenge is to get that zigzag journey viewed as an increase in productivity; it isn’t always seen as that. It’s not even necessarily negative, but it’s always: that’s going to take more money, that’s going to take more time, that’s a bit of a risk. And I’m interested in how we frame, perhaps it’s a UK culture thing, I don’t know, how do we frame that increased breadth as a more productive enterprise?
-
Yeah, I think it’s two things, right? It’s a time saver and it’s a risk reducer. And these two may not be true at the same time, right? During the pandemic, we saw lots of trade-offs between these two.
-
But in general, you can say that if you move from a scalar to a vector to structured input, you generally reduce risk, because then what you get is more nuance. And when you have more nuance, it’s far harder for things to blow up after the fact, when people find something that really didn’t work to their liking just because they expressed things in a low-fidelity way.
-
And in general, you can frame the breadth of diversity as a time saver, because frankly, it takes time to talk to that many people. And if you have technology that lets you reach an order of magnitude more people at the same cost, then the career public service is all for it. I think we chose the Stanford deliberation this time because ADI found out, oh, you can just spend a couple of thousand US dollars, and then you save all the time of having to organize a face-to-face meeting, inviting people to travel, reimbursing them for high-speed rail, dealing with no-shows and things; you don’t have to worry about that anymore. And you can reach a lot more people just by sending SMS. What a time saver. And really, that expense is negligible if you compare it to high-speed rail tickets.
-
You look speechless, Prateek. That was incredible. I’m conscious we’re at time. I think we’re all leaving feeling great and inspired, and with lots of food for thought to take away. So, thank you so much for taking the time and so thoughtfully answering our questions. It’s been really, really helpful.
-
I guess I want to open the floor. We’ll look forward to reading the book. Any final questions from your side, or shall we wrap up?
-
Yeah, I think we can adjourn. Timmy will follow up with the slides. And if you have further follow-up questions, you can send them to Timmy, and Wei-zhong here will also help.
-
And since we’re handing over to the new minister on May 20th, I may actually have much more time to play a consultative role. If you have additional thoughts, or if you read the book and want to expand on a paragraph, feel free to just send me an email, my personal email, or catch me in the Plurality Discord channel.
-
Direct access to the author. I love it.
-
Yeah, exactly.
-
Thank you so much. Have a great day.
-
You too. Live long and prosper.