AI and Collective Governance: Points of Intervention

People often speak about AI as if it is one thing. It can seem like that when we use the most popular interfaces: a single product, packaged by a big company. But that view is both misleading and disempowering. It implies that only the big companies could possibly create and control this technology, because only they can manage the complexity. But another orientation is possible.

As we understand AI systems in more detail, we can recognize them as a sequence of smaller operations, each more approachable for community-scale governance. Feasible interventions come into view. We don’t need to be a trillion-dollar tech company to make a dent in shaping this technology through our communities’ needs and knowledge.

This is a collective document meant to do two things. First, it identifies distinct layers of the AI stack that can be isolated and reimagined. Second, for each layer, it points to both potential strategies and existing projects that could steer that layer toward meaningful collective governance.

Please share your knowledge and ideas by contributing below.

Model design

How are foundational models designed, and who does the designing?

Data

What data is used to train models? Where does it come from? What permission and reciprocity is involved?
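
One way to operationalize permission and reciprocity is to require a provenance manifest before any source enters a training corpus. The sketch below is hypothetical in every particular, including the field names and the example source; it just shows what a community-auditable record might look like.

```python
# A minimal sketch (all field names hypothetical) of a dataset manifest
# that records provenance, permission, and reciprocity terms for each
# source before it enters a training corpus.
from dataclasses import dataclass, asdict
import json

@dataclass
class SourceRecord:
    name: str              # human-readable name of the source
    origin: str            # where the data came from (URL, archive, community)
    license: str           # license or consent terms under which it was shared
    consent_obtained: bool # did contributors agree to training use?
    reciprocity: str       # what the community receives in return

manifest = [
    SourceRecord(
        name="neighborhood-oral-histories",
        origin="local archive (hypothetical)",
        license="CC BY-NC-SA 4.0",
        consent_obtained=True,
        reciprocity="model access and revenue sharing for contributors",
    ),
]

# Refuse to include any source lacking documented consent.
approved = [s for s in manifest if s.consent_obtained]
print(json.dumps([asdict(s) for s in approved], indent=2))
```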

Training

How are foundational models trained? What infrastructures and natural resources do they rely on?
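
To make the infrastructure question concrete, a rough estimate is possible using the common approximation that training a transformer takes about 6 × parameters × tokens floating-point operations. The hardware throughput and power figures in the sketch below are invented but plausible; the point is the order of magnitude, not the exact numbers.

```python
# Back-of-the-envelope training cost, using the common ~6 * N * D FLOPs
# approximation for transformer training. Hardware figures are rough
# assumptions, not measurements of any particular system.
params = 70e9        # a 70B-parameter model
tokens = 1.4e12      # 1.4T training tokens
flops = 6 * params * tokens

gpu_flops_per_s = 3e14   # assumed ~300 TFLOP/s sustained per accelerator
gpu_power_kw = 0.7       # assumed ~700 W per accelerator

gpu_hours = flops / gpu_flops_per_s / 3600
energy_mwh = gpu_hours * gpu_power_kw / 1000

print(f"~{flops:.2e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, ~{energy_mwh:,.0f} MWh")
```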

Tuning

What fine-tuning do models receive before deployment? What human intervention is involved?

  • Utilize collective intelligence processes such as alignment assemblies to set standards for AI behavior (a sketch of how such standards might feed tuning follows this list)
  • Implement co-design practices that include alignment workers fully in the process of ethical oversight, rather than relegating them to the dehumanizing roles they are often expected to assume
  • Design tuning processes around an ethics of care, ensuring that all workers involved experience respect and dignity in their work
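
As a rough illustration of the first strategy above, assembly-ratified standards could be attached to human preference judgments in the chosen/rejected format that many preference-tuning pipelines consume. Everything in this sketch, including the field names and the reviewer ID, is hypothetical.

```python
# A sketch (structure hypothetical) of turning assembly-ratified standards
# into preference data that a fine-tuning pipeline could consume. Each
# comparison records which response better satisfies the standards, and
# credits the human reviewer who made the judgment.

assembly_standards = [
    "Responses must not claim authority the model does not have",
    "Responses must respect community privacy norms",
]

def record_comparison(prompt, chosen, rejected, reviewer):
    """One preference pair, in the chosen/rejected shape common to
    preference-tuning datasets (e.g., for DPO-style training)."""
    return {
        "prompt": prompt,
        "chosen": chosen,
        "rejected": rejected,
        "reviewer": reviewer,  # reviewers are named, not anonymized away
        "standards": assembly_standards,
    }

pair = record_comparison(
    prompt="Should our co-op adopt this bylaw?",
    chosen="Here are considerations; the decision belongs to your members.",
    rejected="Yes, you should adopt it.",
    reviewer="worker-042",
)
print(pair["chosen"])
```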

Context

How do AIs obtain contextual information? What kinds of actions are agents able to carry out?

  • Enable privacy-sensitive tools for connecting local models with community data, such as RooLLM and KOI Pond (a generic sketch follows this list)
  • Utilize cooperative worker ownership, like READ-COOP, for human-in-the-loop, AI-assisted activities
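
By way of illustration, the sketch below grounds a locally hosted model in community documents using naive keyword retrieval. This is not the actual interface of RooLLM or KOI Pond; it assumes a local Ollama server, so that community data never leaves community infrastructure.

```python
# A minimal retrieval sketch for grounding a locally hosted model in
# community documents. Illustration only, not the API of RooLLM or
# KOI Pond; assumes an Ollama server running on localhost.
import requests

community_docs = {
    "meeting-notes": "The co-op voted to pilot a tool library in March.",
    "bylaws": "Decisions require a two-thirds vote of active members.",
}

def retrieve(question):
    """Naive keyword retrieval: return docs sharing words with the question."""
    words = set(question.lower().split())
    return [text for text in community_docs.values()
            if words & set(text.lower().split())]

def ask(question):
    context = "\n".join(retrieve(question))
    resp = requests.post(
        "http://localhost:11434/api/generate",  # local model; data stays on-site
        json={
            "model": "llama3",  # any locally installed model
            "prompt": f"Context:\n{context}\n\nQuestion: {question}",
            "stream": False,
        },
    )
    return resp.json()["response"]

print(ask("What did the co-op vote on?"))
```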

Deployment

Where are AIs running while they are interacting with users? How do they treat user data?
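
One concrete data-handling practice at this layer is scrubbing identifiers from interaction logs before they are stored. The sketch below is illustrative only; real redaction requires far more than two regular expressions.

```python
# A minimal sketch of one deployment-side data practice: scrubbing
# obvious identifiers from interaction logs before storage. The
# patterns here are illustrative, not a complete PII solution.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[phone]"),
]

def scrub(text):
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

log_line = "User jane@example.org asked about 303-555-0123"
print(scrub(log_line))  # -> "User [email] asked about [phone]"
```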

User experience

What kinds of interfaces and expectations are users presented with? What options do users have? How do interfaces nudge user behavior?

Public policy

How does public policy shape the design, development, and deployment of AI systems?

Culture

What cultural norms form around expectations for AI providers and users? How do these norms shape behavior?

  • Establish clear, context-sensitive agreements on AI use in settings such as classrooms, workplaces, and community spaces
  • Hold AI companies responsible for the behavior of models that they monopolistically control
  • Encourage use of community-controlled systems wherever possible

Coda: Feedback loops

The idea of “points of intervention” here comes from the systems thinker Donella Meadows, especially her essay “Leverage Points: Places to Intervene in a System.” One idea she stresses there is the power of feedback loops: change in one part of a system produces change in another, which in turn creates further change in the first, and so on.

What feedback loops can we imagine across these layers of the stack? How could change in one area lead to greater change through its effects at other layers?
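
As a toy illustration (all numbers invented), here is a reinforcing loop between two of the layers above: community-controlled deployments shift cultural norms, and shifting norms drive further adoption.

```python
# A toy model (all numbers invented) of a reinforcing feedback loop:
# community-controlled deployments shift cultural norms, and shifting
# norms drive further community adoption.
adoption, norms = 0.05, 0.05   # fractions between 0 and 1

for year in range(1, 6):
    norms += 0.4 * adoption * (1 - norms)      # deployments shift norms
    adoption += 0.4 * norms * (1 - adoption)   # norms drive adoption
    print(f"year {year}: adoption={adoption:.2f}, norms={norms:.2f}")
```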

Now, time to intervene!

Initiated by Nathan Schneider, with contributions from [put your name here!].