AI and Collective Governance: Points of Intervention
People often speak about AI as if it were one thing. It can seem that way when we use the most popular interfaces: a single product, packaged by a big company. But that view is both misleading and disempowering. It implies that only the big companies could possibly create and control this technology, because only they can manage the complexity. But another orientation is possible.
As we come to understand AI systems in more detail, we can recognize them as a sequence of smaller operations, each of which is more approachable for community-scale governance. Feasible interventions come into view. We don’t need to be a trillion-dollar tech company to make a dent in shaping this technology according to our communities’ needs and knowledge.
This is a collective document meant to do two things. First, it identifies distinct layers of the AI stack that can be isolated and reimagined. Second, for each layer, it points to both potential strategies and existing projects that could steer that layer toward meaningful collective governance.
Please share your knowledge and ideas by contributing below.
Model design
How are foundational models designed, and who does the designing?
- Establish worker governance and ownership of AI labs
- Develop smaller, purpose-specific models, which require less costly and environmentally destructive training and can be less error-prone
- Organize design through institutions oriented around the common good, such as democratic governments and nonprofit organizations, as in the Swiss Apertus model
Data
What data is used to train models? Where does it come from? What permission and reciprocity is involved?
- Ensure that all training data is transparent and retrievable, as in the Apertus and Pythia models
- Establish and rely on data cooperatives and trusts to provide ethical data sourcing and compensate data providers
- Leverage existing data under cooperative control, as in cases such as agricultural co-ops and credit unions
- Require robust data provenance techniques to avoid unintentional appropriation (see the sketch after this list)
- Promote adoption of the OSI Open Source AI Definition
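To make the provenance idea concrete, here is a minimal Python sketch of a per-item provenance record, assuming an append-only log kept alongside a training corpus. The field names and the example cooperative URL are hypothetical illustrations, not any published standard.

```python
# A minimal sketch of a per-item provenance record for training data.
# The field names here are illustrative, not a published standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    content_sha256: str   # fingerprint of the training item itself
    source: str           # where the item came from (URL, archive, co-op)
    license: str          # terms under which the item may be used
    consent: bool         # whether the contributor agreed to AI training
    recorded_at: str      # when the record was created (ISO 8601, UTC)

def record_item(text: str, source: str, license: str, consent: bool) -> ProvenanceRecord:
    """Hash a training item and bundle it with its sourcing metadata."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return ProvenanceRecord(
        content_sha256=digest,
        source=source,
        license=license,
        consent=consent,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = record_item(
        text="Example passage contributed through a data cooperative.",
        source="https://example-data-coop.org/items/42",  # hypothetical
        license="CC-BY-4.0",
        consent=True,
    )
    # An append-only log of such records makes training data retrievable
    # and auditable, and flags items that lack consent.
    print(json.dumps(asdict(record), indent=2))
```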
Training
How are foundational models trained? What infrastructures and natural resources do they rely on?
- Organize training processes through accountability-oriented institutions such as democratic governments or nonprofit consortia
- Provide robust benefit-sharing arrangements for communities that host data centers
Tuning
What fine-tuning do models receive before deployment? What human intervention is involved?
- Utilize collective intelligence processes such as alignment assemblies to set standards for AI behavior
- Implement co-design practices that include alignment workers fully in the process of ethical oversight, rather than confining them to the dehumanizing roles they are often expected to assume
- Design tuning processes around an ethics of care, ensuring that all workers in the process experience respect and dignity in their work
Context
How do AIs obtain contextual information? What kinds of actions are agents able to carry out?
- Enable privacy-sensitive tools for connecting local models with community data, such as RooLLM and KOI Pond (see the sketch after this list)
- Organize human-in-the-loop, AI-assisted work through worker-owned cooperatives such as READ-COOP
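As a generic illustration of the pattern behind tools like RooLLM and KOI Pond (whose actual interfaces differ), here is a minimal Python sketch of privacy-sensitive retrieval: the community documents, the scoring, and the assembled prompt all stay on local infrastructure. The documents are hypothetical, and the keyword-overlap scoring stands in for whatever retrieval method a real tool would use.

```python
# A minimal sketch of privacy-sensitive retrieval: community documents are
# scored locally, and only the retrieved excerpts are passed to a locally
# hosted model. This shows the general pattern, not the actual RooLLM or
# KOI Pond interfaces.
import re

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by token overlap with the query; no data leaves the machine."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt for a local model from the retrieved community context."""
    context = "\n---\n".join(retrieve(query, documents))
    return f"Use only this community context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    community_docs = [  # hypothetical community records
        "Meeting notes: the co-op voted to host its model on member-owned servers.",
        "Bylaws: member data may not be shared with third parties.",
        "Garden schedule: watering rotates weekly among members.",
    ]
    prompt = build_prompt("What did the co-op decide about hosting?", community_docs)
    print(prompt)  # in practice, send this to a local model (see the Deployment sketch)
```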
Deployment
Where are AIs running while they are interacting with users? How do they treat user data?
- Deploy AI systems at data centers powered by renewable energy
- Host AI services on cooperatively owned servers, as with Cosy AI
- Ensure worker control over the deployment of AI systems in their workplaces
- Establish sectoral agreements over AI use, as in the outcome of the 2023 Hollywood strikes
- Run local models on personal computers with tools like Ollama and Jan
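Running a model locally is already practical. As one concrete example, here is a short Python sketch that queries a model served by Ollama on your own machine through its local HTTP API (it listens on port 11434 by default); the model name is just an example of one you might have pulled.

```python
# A minimal sketch of querying a model running entirely on your own machine
# through Ollama's local HTTP API. Assumes Ollama is installed and a model
# has been pulled, e.g.:  ollama pull llama3.2
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to the local Ollama server; nothing leaves the machine."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete JSON response
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("In one sentence, why run AI models locally?"))
```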
User experience
What kinds of interfaces and expectations are users presented with? What options do users have? How do interfaces nudge user behavior?
- Create interfaces that enable user choice among different models, such as Duck.ai
- Provide privacy-protecting mechanisms, including user-data mixers and data-protection compliance
Public policy
How does public policy shape the design, development, and deployment of AI systems?
- Ensure that workforce-displacing AI adoption is accompanied by universal benefits such as shorter working hours and guaranteed income
- Develop policy with AI-augmented citizen assemblies
- Prohibit surveillance of users’ AI interactions
Culture
What cultural norms form around expectations for AI providers and users? How do these norms shape behavior?
- Establish clear, context-sensitive agreements on AI use at sites such as classrooms, workplaces, and communities
- Hold AI companies responsible for the behavior of models that they monopolistically control
- Encourage use of community-controlled systems wherever possible
Coda: Feedback loops
The idea of “points of intervention” here comes from the systems thinker Donella Meadows, especially her essay “Leverage Points: Places to Intervene in a System.” One idea she stresses there is the power of feedback loops, in which change in one part of a system produces change in another, which in turn creates further change in the first, and so on.
What feedback loops can we imagine across these layers of the stack? How could change in one area lead to greater change through its effects at other layers?
Now, time to intervene!
Initiated by Nathan Schneider, with contributions from [put your name here!].