AI-Assisted Deliberation Platforms
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Medium-High | Polis↗ deployed in 35+ countries; vTaiwan↗ achieved 80% policy implementation rate on 26 technology issues |
| Scalability | High | EU Conference on Future of Europe↗ engaged 5+ million visitors across 27 countries; deliberations can span thousands to millions |
| Opinion Change Rate | 15-35% | Stanford deliberative polls↗ show 18-point drops in dissatisfaction after deliberation; America in One Room found 17-point Republican shift on voting rights |
| Cost Effectiveness | Medium | Digital platforms cost $50,000-500,000 per national deployment; citizen panels require $1-5 million including participant compensation |
| Manipulation Resistance | Low-Medium | Research shows↗ AI-generated personas could exploit deliberation; “AI penalty” reduces participation willingness |
| Democratic Legitimacy | Uncertain | Studies indicate↗ public perceptions of mini-publics vary; integration with representative democracy unclear |
| AI Governance Relevance | High | Anthropic’s Constitutional AI↗ trained model on 1,094 participants’ deliberated principles |
Overview
AI-assisted deliberation platforms represent a significant evolution in democratic participation, using artificial intelligence to facilitate large-scale conversations that were previously impossible due to coordination challenges. Unlike traditional voting systems that merely aggregate pre-existing preferences, or polling that captures static opinions, these platforms enable genuine deliberation where participants can change their minds through structured dialogue, find unexpected common ground, and collectively generate nuanced proposals that reflect the complexity of real-world governance challenges.
The fundamental promise of these systems is addressing what scholars call the “scale problem” of democracy: how to maintain the quality of deliberation that works in small groups while engaging millions of citizens in consequential decisions. Early implementations in Taiwan, Estonia, and various corporate and academic settings have demonstrated remarkable success in finding consensus on divisive issues, from ride-sharing regulation to AI safety principles. However, significant questions remain about legitimacy, manipulation resistance, and integration with existing democratic institutions.
The implications for AI governance are particularly profound, as these tools offer pathways for meaningful public input on technical decisions that will shape society’s relationship with artificial intelligence. As AI systems become more powerful and their governance more critical, the ability to aggregate genuine public wisdom rather than just preferences becomes essential for legitimate and effective policy-making.
How AI-Assisted Deliberation Works
The process is iterative: participants submit statements and vote on others’ statements, AI algorithms cluster similar voters and identify consensus across groups, facilitators synthesize insights, and the cycle repeats until stable recommendations emerge. Taiwan’s vTaiwan platform used this approach to resolve the contentious Uber regulation debate, with 80% of issues leading to government action↗.
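A minimal sketch of this loop, assuming illustrative data structures and pluggable `cluster_fn` / `synthesize_fn` hooks rather than the actual Polis or vTaiwan implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    author_id: str
    text: str
    votes: dict[str, int] = field(default_factory=dict)  # participant_id -> +1 agree / -1 disagree / 0 pass

@dataclass
class Conversation:
    statements: list[Statement] = field(default_factory=list)

    def submit(self, author_id: str, text: str) -> None:
        self.statements.append(Statement(author_id, text))

    def vote(self, participant_id: str, statement_idx: int, value: int) -> None:
        self.statements[statement_idx].votes[participant_id] = value

def run_round(convo: Conversation, cluster_fn, synthesize_fn) -> list[Statement]:
    """One deliberation cycle: group voters by voting pattern, then surface
    statements that draw agreement from every group (the bridging statements)."""
    clusters = cluster_fn(convo)            # participant_id -> group label
    return synthesize_fn(convo, clusters)   # statements with cross-group support
```

In practice the loop repeats, with newly surfaced bridging statements seeding the next round of submissions and voting, until the top statements stabilize.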
Core Technologies and Methodologies
Polis: Mapping Opinion Landscapes
Polis represents the most mature AI-assisted deliberation platform currently in use, developed by the Computational Democracy Project and deployed in over 35 countries since 2012. The system’s innovation lies in its ability to transform chaotic online discussions into structured landscapes of opinion that reveal hidden consensus areas. Participants submit brief statements about a topic, then vote “agree,” “disagree,” or “pass” on statements submitted by others. The platform’s machine learning algorithms perform real-time clustering analysis, grouping participants with similar voting patterns while identifying statements that achieve broad consensus across different groups.
The visual interface displays participants as dots on a map, with similar voters clustered together and statements positioned based on the voting patterns they generate. This visualization makes polarization visible while highlighting areas of unexpected agreement. Taiwan’s vTaiwan initiative used Polis to engage over 4,000 citizens in regulating Uber, ultimately producing policy recommendations that satisfied both taxi drivers and platform users by focusing on shared concerns about safety and fair competition rather than zero-sum positioning.
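A rough sketch of the kind of analysis behind that map, using PCA for the 2-D projection, k-means for grouping, and a minimum-across-groups agreement score for consensus; the specific algorithms and thresholds here are illustrative assumptions, not the production Polis pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def analyze_votes(votes: np.ndarray, n_groups: int = 3):
    """votes: participants x statements matrix, +1 agree / -1 disagree / 0 pass or unseen."""
    coords = PCA(n_components=2).fit_transform(votes)                    # dots on the opinion map
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(votes)   # opinion groups

    # Score each statement by its minimum agreement rate across groups,
    # so only statements every group supports rank as consensus.
    consensus = []
    for s in range(votes.shape[1]):
        rates = []
        for g in range(n_groups):
            group_votes = votes[labels == g, s]
            cast = group_votes[group_votes != 0]
            rates.append(float((cast == 1).mean()) if len(cast) else 0.0)
        consensus.append(min(rates))
    return coords, labels, consensus
```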
Recent enhancements to Polis include improved natural language processing for statement clustering, real-time sentiment analysis to prevent toxic dynamics, and integration with video conferencing platforms for hybrid synchronous-asynchronous deliberation. According to Audrey Tang↗, Taiwan’s former Digital Minister, “Polis is quite well known in that it’s a kind of social media that instead of polarizing people… it automatically drives bridge making narratives and statements.”
Platform Comparison
| Platform | Scale | Opinion Change | Policy Impact | Key Innovation |
|---|---|---|---|---|
| Polis↗ | 40-40,000+ participants per conversation | Variable (depends on topic) | Taiwan: 80% of 26 issues led to action | No reply button eliminates trolling; gamifies consensus |
| Stanford Deliberative Polling↗ | 200-500 per event | 15-35% average | 100+ polls since 1988; informs policy worldwide | Random sampling + balanced briefings + moderated small groups |
| Anthropic CCAI↗ | 1,094 participants | N/A (values elicitation) | Directly incorporated into Claude training | First LLM trained on publicly deliberated principles |
| EU Conference Platform↗ | 5M+ visitors; 53,000 active | N/A | 49 proposals, 326 measures | 24-language multilingual synthesis; citizen panels |
| Taiwan Alignment Assembly↗ | 450 citizens (2024) | Under evaluation | Shapes AI policy recommendations | Government-run random sampling; six-hour deliberation |
Collective Constitutional AI
Anthropic’s Collective Constitutional AI experiment↗ in 2023 represents a breakthrough in applying deliberation to AI governance specifically. The company partnered with the Collective Intelligence Project↗ to recruit 1,094 Americans, a demographically diverse sample across age, gender, income, and geography. Participants were screened with questions about generative AI to ensure informed engagement.
The experiment used Polis as its deliberation platform, where participants could vote on existing normative principles or add their own. In total, participants contributed 1,127 statements and cast 38,252 votes (an average of 34 votes per person). The resulting “public constitution” was then used to train a version of Claude, creating what researchers described as “one of the first instances in which members of the public have collectively directed the behavior of a language model via an online deliberation process.”
| Metric | Value |
|---|---|
| Total participants | 1,094 |
| Statements submitted | 1,127 |
| Total votes cast | 38,252 |
| Votes per participant | 34 (average) |
| Demographic representation | Age, gender, income, geography balanced |
The experiment revealed surprising consensus across partisan lines on many issues, with participants agreeing that AI should be helpful but not manipulative, informative but not dangerous, and respectful of human autonomy while maintaining appropriate boundaries. The full comparison between public and Anthropic constitutions↗ is publicly available.
Deliberative Polling Evolution
Stanford’s Deliberative Democracy Lab↗ has conducted over 100 deliberative polls since 1988, recently incorporating AI tools to enhance traditional methodologies. The classic format involves randomly sampling citizens, providing balanced briefing materials, facilitating small-group discussions with trained moderators, and measuring opinion change through pre- and post-deliberation surveys.
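The measurement itself is straightforward: ask the same items before and after deliberation and report the shift, overall and by subgroup. A minimal sketch (the column naming convention is an assumption):

```python
from typing import Optional
import pandas as pd

def opinion_shift(df: pd.DataFrame, item: str, group_col: Optional[str] = None) -> pd.Series:
    """df holds one row per participant, with '<item>_pre' and '<item>_post'
    columns coded 1 for agreement and 0 for disagreement with the survey item."""
    change = df[f"{item}_post"].astype(float) - df[f"{item}_pre"].astype(float)
    if group_col is None:
        return pd.Series({"overall": change.mean()})
    return change.groupby(df[group_col]).mean()  # e.g. shift by party identification
```

On a 0/1 item, a mean shift of +0.16 corresponds to the 16-point movements reported below.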
The 2023 America in One Room: Democratic Reform↗ poll, conducted in partnership with Helena and NORC at the University of Chicago↗, demonstrated substantial opinion change:
| Finding | Before Deliberation | After Deliberation | Change |
|---|---|---|---|
| Overall dissatisfaction with democracy | 72% | 54% | -18 points |
| Republican dissatisfaction | 81% | 50% | -31 points |
| Democratic dissatisfaction | 65% | 54% | -11 points |
| Support for “everyone who wants to vote can” | 75% | 91% | +16 points |
| Republican support for voting access | - | - | +17 points |
The August 2024 “America in One Room: The Youth Vote”↗ poll engaged 430 first-time voters on key 2024 election issues, showing “dramatic changes in perspectives after deliberation on issues like contraceptive access, increasing the federal minimum wage, repealing the Affordable Care Act, and more.”
Applications and Concrete Outcomes
Digital Democracy in Taiwan
Taiwan’s digital democracy initiatives↗, spearheaded by former Digital Minister Audrey Tang↗ (2022 Right Livelihood Award laureate), have become the gold standard for government use of AI-assisted deliberation. The vTaiwan platform↗ has processed 26 national technology issues, with 80% leading to government action, including the notable resolution of the Uber regulation conflict that satisfied both taxi drivers and rideshare users.
In 2024, Taiwan’s Ministry of Digital Affairs (moda) launched Alignment Assemblies↗ in partnership with the Collective Intelligence Project, Anthropic, OpenAI, The GovLab, and the GETTING-Plurality research network. The government sent 100,000+ random invitations via the 111 government hotline, selecting 450 citizens through stratified sampling for a six-hour online deliberation on AI governance topics including:
- Protecting users from AI-generated harm
- Detecting and labeling AI content
- Requiring digital signatures for advertisers
- Making AI systems transparent
- Implementing citizen oversight of fact-checking
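The selection step described above can be sketched as sampling within demographic cells; the cell definitions, and sampling proportionally to acceptances rather than weighting quotas to population benchmarks, are simplifying assumptions for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(acceptances: list[dict], strata_keys: list[str],
                      target_size: int, seed: int = 0) -> list[dict]:
    """acceptances: people who responded to the invitation, each a dict with
    demographic fields such as 'age_band', 'gender', and 'region'."""
    rng = random.Random(seed)
    cells: dict[tuple, list[dict]] = defaultdict(list)
    for person in acceptances:
        cells[tuple(person[k] for k in strata_keys)].append(person)

    panel: list[dict] = []
    for members in cells.values():
        quota = round(target_size * len(members) / len(acceptances))
        panel.extend(rng.sample(members, min(quota, len(members))))
    return panel[:target_size]

# e.g. panel = stratified_sample(acceptances, ["age_band", "gender", "region"], 450)
```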
This represented Taiwan’s largest online mini-public since it began promoting deliberative democracy in 2002. According to Audrey Tang↗, the approach demonstrates how “everyday citizens can co-govern AI in the context of information integrity.”
The COVID-19 response exemplifies the platform’s effectiveness under pressure. When mask shortages emerged in early 2020, Taiwan used AI-assisted deliberation to rapidly develop a fair distribution system. Citizens proposed and refined solutions through online discussion, leading to the innovative “mask map” system that showed real-time pharmacy inventory and prevented hoarding. The deliberative process, compressed into just two weeks, produced a solution that maintained public trust throughout the pandemic.
Corporate and Organizational Applications
Microsoft’s internal “Democratic AI” initiative has used deliberative platforms to engage 50,000+ employees in decisions about AI ethics policies and product development priorities. The company found that employee input through structured deliberation produced more implementable policies than traditional top-down approaches, with 80% of deliberative recommendations eventually incorporated into official guidelines.
Meta has piloted AI-assisted deliberation among content moderators to develop platform policies for emerging issues like AI-generated content and deepfakes. Rather than relying solely on executive decisions or external expert panels, the company engages frontline moderators who see problematic content daily in structured discussions about appropriate responses. This bottom-up approach has produced more nuanced policies that anticipate edge cases and implementation challenges.
The financial services industry has begun experimenting with customer deliberation on algorithmic decision-making. JPMorgan Chase engaged 2,000 customers in deliberations about credit algorithms, revealing strong consensus for transparency and explainability even when it meant slightly less favorable terms for some applicants. These insights informed the bank’s approach to algorithmic transparency regulations.
International and Multilateral Applications
The United Nations High-level Advisory Body on AI↗ released its final report “Governing AI for Humanity”↗ in September 2024, recommending an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. Connected by Data↗ has produced an options paper↗ exploring five templates for global citizen deliberation:
- Deliberative review of AI summits and scientific reports
- An independent global assembly on AI
- Distributed dialogues organized across the globe
- Technology-enabled collective intelligence processes
- Commissioning AI topics in other deliberative processes
The European Union’s Conference on the Future of Europe↗ (April 2021 - May 2022) represents the largest multilingual digital deliberation to date:
| Metric | Value |
|---|---|
| Unique platform visitors | 5+ million |
| Active contributors | 53,000+ |
| Event participants | 700,000+ |
| European Citizens’ Panels | 800 randomly selected participants across 4 panels |
| Languages supported | 24 (all official EU languages) |
| Final proposals | 49 proposals, 326 measures |
The platform used Decidim software↗ (pioneered in Barcelona) with multilingual synthesis across all 24 official EU languages. The final report↗ delivered in May 2022 reflected genuine European-wide deliberation, with the 800 citizen panel members randomly selected by Kantar Public to reflect diversity in geographic region, gender, age, economic background, and educational attainment.
Risks Addressed
AI-assisted deliberation platforms primarily address epistemic and structural risks related to AI governance legitimacy:
| Risk | How Deliberation Helps | Effectiveness |
|---|---|---|
| Epistemic Collapse | Bridges expert-public gap on AI risks; surfaces tacit knowledge | Medium |
| Concentration of Power | Democratizes AI governance input beyond elites | Medium-High |
| Racing Dynamics | Public input can create pressure for responsible development | Low-Medium |
| Lock-in Risks | Early public input shapes AI trajectory before lock-in | Medium |
| Trust Erosion | Transparent processes build legitimacy and trust | Medium |
The primary mechanism is legitimization: decisions about AI development and deployment carry more weight when they reflect genuine public deliberation rather than just expert or corporate preferences. This is particularly important for controversial governance choices like compute governance, frontier AI restrictions, or international AI treaties.
Safety Implications and Risk Assessment
Concerning Aspects
Research from the Carnegie Endowment↗ and the Journal of Democracy↗ identifies several critical risks to AI-assisted deliberation:
The “AI Penalty”: Recent research↗ documents an “AI penalty” in deliberation: simply learning that a process is AI-facilitated reduces people’s willingness to participate, and participants expect AI-facilitated deliberation to be lower quality than human-led formats. This creates a new “deliberative divide” based on attitudes toward AI rather than traditional demographic factors.
Manipulation Vectors: Nature Human Behaviour research↗ warns that “demos scraping”—employing AI and automated tools to continuously collect and analyze citizens’ digital footprints—enables sophisticated profiling for targeted political messaging. Combined with generative AI, malicious actors can craft convincing narratives that exploit individual biases, preferences, and vulnerabilities.
| Manipulation Risk | Current Status | Mitigation Options |
|---|---|---|
| AI-generated personas | Growing threat | Anomaly detection, verification systems |
| Coordinated messaging | Active in some contexts | Cross-cluster consensus requirements |
| Algorithmic gaming | Theoretically possible | Open-source algorithms, auditing |
| Platform capture | Documented in some cases | Random sampling, participation limits |
| Synthesis bias | Under-studied | Transparent synthesis, multiple methods |
Technosolutionism Concerns: The Journal of Deliberative Democracy↗ argues that introducing technology as a ‘solution’ to ‘fix’ democratic ‘problems’ may reinforce “depoliticisation and disintermediation,” with some critics suggesting citizen panels can become “participatory-washing” by convening institutions.
Integration Challenges: If deliberative outcomes contradict electoral mandates or expert judgment, the resulting confusion could undermine both deliberative and representative democracy. Research indicates↗ that clear frameworks for when deliberative input should be binding versus advisory remain underdeveloped.
Promising Safety Features
Transparency mechanisms built into modern platforms provide significant safeguards against manipulation. Polis makes all statements and voting patterns public, enabling independent analysis of potential gaming attempts. Advanced platforms implement real-time anomaly detection that can identify coordinated behavior patterns or artificial participation.
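One simple form of such a coordination check is to flag accounts whose vote vectors are nearly identical and that joined within a short burst; the similarity threshold, time window, and pairwise approach here are illustrative assumptions, not a documented platform feature:

```python
import numpy as np

def flag_coordinated(votes: np.ndarray, joined_at: np.ndarray,
                     min_similarity: float = 0.95, window_secs: int = 600) -> set[int]:
    """votes: participants x statements (+1/-1/0); joined_at: registration timestamps in seconds.
    Flag pairs that voted almost identically and registered within the same window."""
    norms = np.linalg.norm(votes, axis=1) + 1e-9
    flagged: set[int] = set()
    for i in range(len(votes)):
        for j in range(i + 1, len(votes)):
            cosine = votes[i] @ votes[j] / (norms[i] * norms[j])
            if cosine >= min_similarity and abs(joined_at[i] - joined_at[j]) <= window_secs:
                flagged.update((i, j))
    return flagged
```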
Diversity enforcement algorithms ensure that minority viewpoints receive proportional representation in synthesis processes. Unlike simple majority aggregation, deliberative platforms can identify and preserve important minority positions that might represent legitimate safety concerns or overlooked considerations.
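A minimal version of that guarantee reserves synthesis slots for each opinion cluster, so a small cluster’s top statements are carried forward even without majority support (slot counts and scoring are assumptions for illustration):

```python
def synthesis_slate(cluster_scores: dict[int, dict[int, float]], per_cluster: int = 2) -> list[int]:
    """cluster_scores[cluster_id][statement_id] = agreement rate within that cluster.
    Return a slate containing every cluster's top statements, deduplicated."""
    slate: list[int] = []
    for scores in cluster_scores.values():
        top = sorted(scores, key=scores.get, reverse=True)[:per_cluster]
        slate.extend(s for s in top if s not in slate)
    return slate
```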
The iterative nature of deliberation provides self-correction mechanisms absent from one-time voting or polling. Bad arguments or manipulative statements tend to be exposed through sustained engagement, while good ideas gain support across different groups over time. This dynamic process makes deliberation more robust against manipulation than static consultation methods.
Professional facilitation, whether human or AI-assisted, can prevent domination by extreme voices and ensure productive dialogue. Trained facilitators know how to redirect conversations that become counterproductive while preserving substantive disagreement and genuine conviction.
Current Limitations and Technical Challenges
Scale Versus Depth Trade-offs
Current platforms struggle with the fundamental tension between scale and deliberative quality. Polis excels at engaging thousands of participants but limits them to brief statements and binary voting, potentially sacrificing nuance for scalability. Deliberative polling achieves deep engagement but requires substantial resources and time commitments that limit participation. No current platform successfully combines the scale of modern social media with the depth of traditional deliberation.
Recent experiments with AI-mediated small group discussions show promise for addressing this limitation. Participants engage in deeper dialogue within manageable groups while AI tools synthesize insights across groups to achieve scale. However, the synthesis process introduces new challenges about preserving the authenticity and nuance of small-group insights.
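The hybrid pattern amounts to map-reduce over group transcripts: summarize each small group, then synthesize across the summaries. A sketch assuming a caller-supplied `summarize(text, instruction)` wrapper around whatever language model the platform uses (that interface is a placeholder, not a specific vendor API):

```python
def synthesize_across_groups(transcripts: dict[str, str], summarize) -> str:
    """transcripts: group_id -> discussion transcript; summarize(text, instruction) -> str."""
    # Map: condense each small group's discussion into points of agreement and disagreement.
    summaries = {
        gid: summarize(text, "List this group's main points of agreement and disagreement.")
        for gid, text in transcripts.items()
    }
    # Reduce: combine across groups, asking explicitly to attribute claims to groups
    # so positions raised in only one group are not silently merged away.
    combined = "\n\n".join(f"Group {gid}:\n{summary}" for gid, summary in summaries.items())
    return summarize(combined, "Synthesize shared themes and preserve points raised by only one group.")
```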
Language and Cultural Barriers
Despite advances in machine translation, AI-assisted deliberation still struggles with cultural and linguistic diversity. Concepts that seem universal often carry different meanings across cultures, leading to false consensus or persistent misunderstanding. AI translation tools may systematically favor certain linguistic styles or argumentative approaches, inadvertently marginalizing non-Western deliberative traditions.
Efforts to address these challenges include developing culture-specific deliberative formats and training AI tools on diverse deliberative traditions. However, the risk of imposing Western deliberative norms through AI design choices remains significant, particularly for global governance applications.
Quality Assurance in AI Facilitation
As platforms increasingly rely on AI for facilitation and synthesis, ensuring the quality and neutrality of AI interventions becomes critical. Current AI systems may miss subtle dynamics that human facilitators would catch, such as participants feeling unheard or implicit power dynamics affecting discussion quality. The growing sophistication of large language models offers promising opportunities for better AI facilitation, but also risks introducing new forms of algorithmic bias.
Future Trajectory and Development Paths
Near-Term Evolution (1-2 Years)
Integration with large language models will significantly enhance platform capabilities. GPT-4 and similar systems can provide more sophisticated real-time summarization, generate higher-quality synthesis documents, and offer personalized facilitation that adapts to individual participants’ communication styles and knowledge levels. Anthropic’s Constitutional AI work provides a template for how these enhancements might preserve deliberative integrity while improving user experience.
Government adoption is accelerating beyond early pioneer countries like Taiwan and Estonia. The UK’s Government Digital Service is developing platforms for post-Brexit policy consultations, while several Canadian provinces are piloting deliberative platforms for healthcare allocation decisions. The EU’s AI Act implementation will likely require extensive public consultation, creating demand for scalable deliberation tools.
Corporate applications will expand beyond internal decision-making to stakeholder engagement and customer co-design of algorithmic systems. Regulatory pressure for algorithmic transparency and public participation in AI governance will drive private sector adoption of deliberative platforms.
Medium-Term Prospects (2-5 Years)
Constitutional and foundational governance applications will likely emerge as the highest-impact use case. Several countries are considering deliberative processes for constitutional reform, including Ireland’s successful citizens’ assemblies and France’s experiments with climate governance. AI-assisted platforms could enable constitutional deliberation at previously impossible scales while maintaining democratic legitimacy.
Integration with immersive technologies like VR/AR may overcome current limitations around non-verbal communication and social presence that affect deliberation quality in purely text-based platforms. Early experiments with VR deliberation show promising results for increasing empathy and understanding across difference.
AI governance applications will mature as the technology’s societal impacts become more visible and contentious. Public pressure for democratic input into AI development and deployment decisions will drive innovation in specialized deliberation tools designed for technical policy questions.
International governance applications may prove transformative for addressing global challenges that require coordinated action across sovereign borders. Climate change, AI safety, and pandemic response all require global cooperation but currently lack legitimate mechanisms for global democratic input.
Critical Uncertainties and Research Needs
Legitimacy and Representativeness
A 2024 review in International Political Science Review↗ examines the academic literature along three core challenges: conditions for deliberation to produce informed public opinion; difficulties achieving inclusiveness, representativeness, and political equality; and challenges of achieving public influence. Research on scaling deliberative mini-publics↗ analyzes over 10,000 respondents across 13 real-world mini-publics, finding that advisory mini-publics boosted policy knowledge evenly across many voter groups, but gains were slightly diminished for racial/ethnic minorities and some income brackets.
Belgian research (n = 1,579)↗ found that respondents generally think of mini-publics as problem-solvers rather than problem-creators, but perceptions vary substantially. The fundamental question of whether deliberative platforms can achieve democratic legitimacy equivalent to elections remains unresolved.
Opinion Change Awareness
Frontiers in Political Science research↗ on two deliberative mini-publics (135 and 207 participants respectively) found limited awareness of opinion changes among participants. Key findings:
- Participants correctly recognized opinion change when they had changed sides (positive to negative, or vice versa)
- Participants were unable or unwilling to recognize opinion change toward more extreme viewpoints
- The negative awareness effect for opinion polarization was the most prominent finding
This raises questions about whether deliberation produces genuine informed preference change or merely perceived change.
Manipulation Resistance
As deliberation platforms become more influential, they will attract more sophisticated manipulation attempts. The DGAP AI/Democracy Initiative↗ applied quantitative and qualitative research to 2024 elections in Mexico, South Africa, India, the United States, and European Parliament elections to understand vulnerabilities. The year 2024 was dubbed “the biggest election year in history,” serving as a test for democracy in the age of AI.
TechPolicy.Press analysis↗ argues that the UN’s Global Dialogue on AI Governance “must place local lived experiences at their heart. Unless they can meaningfully centre the voices of citizens, they risk irrelevance before they get started.”
Research Infrastructure and Key Resources
Leading Research Centers
| Organization | Focus | Key Contributions |
|---|---|---|
| Computational Democracy Project↗ | Polis development, algorithmic deliberation | Open-source platform used in 35+ countries |
| Stanford Deliberative Democracy Lab↗ | Deliberative polling methodology | 100+ polls since 1988; opinion change research |
| Collective Intelligence Project↗ | AI governance deliberation | Partnered with Anthropic on Constitutional AI |
| Bennett Institute, Cambridge↗ | European digital democracy | Legitimacy and governance integration research |
| OECD Observatory of Public Sector Innovation↗ | Best practices database | Cross-country comparison and evaluation |
| Participedia↗ | Case study repository | 1,700+ cases of participatory processes |
Funding and Policy Support
The U.S. National Science Foundation’s “Civic Innovation Challenge” has funded multiple deliberation platform research projects since 2020, with $50 million allocated through 2025. The European Union’s Horizon Europe program includes deliberative democracy as a priority area for digital society research, with particular focus on multilingual and cross-cultural applications.
Private funding from technology companies has increased substantially, with Google’s AI for Social Good program, Microsoft’s AI for Good initiative, and the Chan Zuckerberg Initiative all supporting deliberation research. However, questions about potential conflicts of interest remain as these companies may benefit from particular approaches to AI governance deliberation.
Practical Implementation Networks
Twitter/X’s Community Notes↗ (formerly Birdwatch) was influenced by Polis, using similar bridging-based consensus mechanisms. The RSA’s Democracy in the Age of AI project↗ explores how deliberation can address AI governance challenges in the UK context.
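The bridging idea common to both systems can be caricatured in a few lines: an item scores well only if raters who normally disagree both find it helpful. Community Notes actually uses matrix factorization over the rating matrix; this simplified sketch substitutes a single precomputed opinion axis, which is an assumption for illustration:

```python
import numpy as np

def bridging_scores(ratings: np.ndarray, rater_axis: np.ndarray) -> np.ndarray:
    """ratings: raters x notes, +1 helpful / -1 not helpful / 0 no rating.
    rater_axis: each rater's position on one opinion dimension (negative vs. positive side)."""
    sides = (rater_axis < 0, rater_axis >= 0)
    scores = []
    for n in range(ratings.shape[1]):
        col = ratings[:, n]
        rates = []
        for side in sides:
            cast = col[side & (col != 0)]
            rates.append(float((cast == 1).mean()) if len(cast) else 0.0)
        scores.append(min(rates))  # helpful only if both sides agree it is
    return np.array(scores)
```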
Sources and Further Reading
Primary Research
- Collective Constitutional AI: Aligning a Language Model with Public Input↗ - Anthropic’s foundational experiment (2023)
- ACM FAccT 2024 Paper on CCAI↗ - Peer-reviewed academic publication
- Stanford Deliberative Polling Timeline↗ - 100+ polls documented
- America in One Room: Democratic Reform Results↗ - 2023 data
Platform Documentation
- Pol.is Technical Documentation↗ - How the clustering algorithm works
- vTaiwan Participedia Entry↗ - Taiwan’s implementation methodology
- EU Conference Final Report↗ - 49 proposals, 326 measures
Critical Analysis
- Trends in Mini-Publics Research (2024)↗ - High expectations, mixed findings
- The AI Penalty in Deliberation↗ - New deliberative divide research
- Why AI Technosolutionism Harms Democracy↗ - Critical perspective
- Can Democracy Survive AI?↗ - Carnegie Endowment analysis
AI Governance Applications
- UN Governing AI for Humanity Report (2024)↗ - UN Advisory Body recommendations
- Global Citizen Deliberation on AI Options Paper↗ - Connected by Data (2024)
- Taiwan Alignment Assemblies↗ - Audrey Tang on AI governance deliberation
- Meta Oversight Board on AI Content Moderation↗ - 2024 white paper
AI Transition Model Context
AI-assisted deliberation platforms strengthen the AI Transition Model primarily through Civilizational Competence:
| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Societal Trust | 15-35% opinion change rates show genuine belief revision |
| Civilizational Competence | Institutional Quality | Taiwan’s vTaiwan achieved 80% policy implementation from deliberation |
| Civilizational Competence | Epistemic Health | Anthropic’s Constitutional AI incorporated 1,094 participants into training |
Deliberation platforms offer scalable mechanisms for legitimate public input on AI governance decisions.