Beyond Federal vs. State: The Missing Voices in America's AI Policy Interventions
How jurisdictional battles obscure the real question in AI governance
Quick personal note: It's been about two months since my last post, and I've been immersed in the labor of dissertation research — organizing interviews, collecting documentation, writing and rewriting chapters, and preparing myself for the summer push toward completion. I've also been playing with several ideas for Open Wafers, letting the doctoral work lead me where it will. As I complete different parts of my dissertation over the next few months, I will likely wander between themes here, but that wandering will allow for the kind of deep dives you'll find in the piece below.
Today’s post represents a(nother) discussion on artificial intelligence, though readers will notice it connects to topics I've explored in previous posts about community ownership, democratic technology, and alternatives to extractive digital relations.
The United States Congress is once again debating whether AI regulation should take place at the federal or state level, with proposals for multi-year moratoriums on state AI laws moving between the House and Senate in various forms.
For context, this past June, the House approved a 10-year moratorium on state AI regulation (and numerous Representatives then claimed they didn't know it was included in the very bill they were working to pass). In fact, just yesterday, observers noted that Senators had nixed almost all language concerning the moratorium in their proposed "One Big Beautiful Bill" legislation.
But while legislators argue over jurisdiction, millions of Americans face algorithmic decisions daily that shape their employment prospects, healthcare access, and interactions with law enforcement, with no meaningful involvement in the discussions shaping proposed regulation. This is not merely a question of policy oversight (or incompetence, as the case may be); it reflects a misunderstanding of what democratic technology governance requires.
As someone who studies how technology's design, governance, and ownership encode particular values, priorities, and possibilities, I see this federal-state conflict as symptomatic of a deeper failure of imagination. Instead of debating who should regulate AI, we should be asking how AI governance can reflect the knowledge, needs, and values of communities1. Through my research, I interact with digital cooperatives and peer production networks to build theory around transforming extractive, tech-mediated social relations into sources of community strength. Those interactions surface unique analytical tools for understanding how concepts like 'intelligence' and 'resilience' are rooted in engaged communities. Genuine accountability emerges from below, not from Capitol Hill.
The Battle Lines: Innovation vs. Protection, Uniformity vs. Experimentation
The current AI policy landscape reveals key tensions about how democratic societies should govern transformative technologies. Federal efforts in the US emphasize unified standards to promote innovation and prevent regulatory fragmentation, with the current administration directing agencies to prioritize "systems free from ideological bias" while removing "unnecessary bureaucratic restrictions."2 Meanwhile, states are trying to assert their role as laboratories for policy experimentation, rushing to pass legislation for their constituents in the wake of AI's mainstream adoption and growing concerns about algorithmic harms3. These are competing visions for AI governance, and because individual state approaches themselves vary in scope and philosophy, the dynamic creates a regulatory vacuum that AI companies are well positioned to exploit.
Here are a few examples of state-led regulation. Colorado's AI Act targets systems making "consequential decisions" in employment, housing, and healthcare, requiring risk assessments and consumer notifications. The law creates a framework where developers and deployers share accountability, enforced exclusively through the Attorney General's office4. California has enacted 18 AI-specific laws addressing everything from deepfakes to healthcare algorithms, with dozens more under consideration5. Texas chose categorical prohibitions, banning AI designed for manipulation or discrimination while creating regulatory sandboxes for innovation6.
The distinctions between these policy approaches matter. A healthcare AI system legal in Texas might violate Colorado's impact assessment requirements. An employment algorithm compliant with California's transparency rules could fail Texas's anti-discrimination standards7. For businesses, this means navigating a labyrinth of conflicting requirements; for citizens, it means radically different protections depending on their zip code.
Federal lawmakers argue this "patchwork" stifles innovation and weakens America's competitive position, as the fragmentation extends beyond state borders. The global nature of AI development means American companies compete with Chinese firms unconstrained by similar regulatory debates. European AI governance through the EU AI Act creates another model entirely — comprehensive, risk-based regulation that American companies with global operations must navigate as well8. This international dimension complicates arguments about innovation versus protection, as regulatory approaches shape competitive advantages across interconnected digital markets.
State officials across party lines have united against federal pre-emption attempts, with over 260 state legislators from nearly all 50 states calling it "irresponsible" and "wholly destructive."9 Their opposition stems from practical experience: states often encounter AI's impacts first, from biased criminal justice algorithms to discriminatory lending systems. They argue that the "patchwork" allows faster responses to emerging harms while enabling policy learning across jurisdictions.
Yet this entire debate rests on flawed premises. It assumes AI governance is primarily a technical and legal challenge, requiring either uniform federal standards or flexible state experimentation. It frames the policy choices as binaries: innovation or protection, efficiency or accountability, centralized or distributed control. Most critically, it treats AI as something that happens to communities. The 300+ million Americans whose job prospects, healthcare access, and civil liberties increasingly depend on algorithmic decisions appear in these debates only as abstract beneficiaries, never as participants.
The Democratic Deficit in AI Governance
The jurisdictional battle I described obscures a more essential question: how can AI governance ever be democratic without participation? The current framework — whether federal or state — maintains a structural distance between those who govern AI and those who live with its consequences. Consider the following impacts:
When predictive policing algorithms direct patrol routes, they often rely on historical arrest data that encodes decades of discriminatory enforcement patterns. Studies show these systems can create feedback loops — sending more officers to historically over-policed neighborhoods, generating more arrests, which then justify continued intensive policing10. Residents have no input into the data selection, algorithmic design, or deployment decisions that intensify surveillance of their communities.
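To make that dynamic concrete, here is a minimal simulation sketch. It is my own toy model in the spirit of the cited study, not code from it, and the district names, rates, and starting counts are invented. Two districts share an identical underlying incident rate, yet because patrols follow past arrest counts, the recorded data can never correct an inherited imbalance:

```python
def simulate(days=50, greedy=False):
    """Toy model of the feedback dynamic: two districts share one TRUE
    incident rate, so any gap in recorded arrests is an artifact of
    patrol allocation alone."""
    rate = 0.3                          # identical underlying incident rate
    patrols_per_day = 100
    history = {"A": 60.0, "B": 40.0}    # district A starts over-policed

    for _ in range(days):
        total = sum(history.values())
        if greedy:
            # Hot-spot mode: every patrol goes to the district with the
            # most recorded arrests so far.
            hot = max(history, key=history.get)
            allocation = {d: (patrols_per_day if d == hot else 0)
                          for d in history}
        else:
            # Proportional mode: patrols follow past arrest counts.
            allocation = {d: patrols_per_day * h / total
                          for d, h in history.items()}
        for d in history:
            # Recorded arrests scale with patrol presence, not with crime.
            history[d] += allocation[d] * rate

    return history

print(simulate(greedy=False))  # the inherited 60/40 split reproduces itself
print(simulate(greedy=True))   # runaway: district B's record freezes at 40
```

In proportional mode the inherited imbalance persists indefinitely; in hot-spot mode it runs away entirely. Either way, the equal underlying crime rates never surface in the recorded data.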
In employment, AI screening tools now process most initial job applications, often eliminating candidates before human review11. These systems frequently penalize gaps in employment history, non-traditional career paths, or even specific zip codes, patterns that disproportionately affect women returning from caregiving, formerly incarcerated individuals, and residents of economically disadvantaged areas. Workers cannot interrogate why their applications never reach human reviewers or challenge the criteria determining their economic opportunities.
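Part of what community oversight could demand here is already simple arithmetic. The standard screen for disparate impact, the EEOC's "four-fifths rule," needs only per-group selection rates. The sketch below runs it on invented numbers for a hypothetical resume filter, using an employment gap as the proxy feature in question:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the automated screen."""
    return selected / applicants

# Hypothetical outcomes for one AI resume filter (invented numbers).
rates = {
    "no_employment_gap": selection_rate(selected=300, applicants=1000),
    "employment_gap":    selection_rate(selected=120, applicants=1000),
}

# Four-fifths rule: a selection rate below 80% of the highest group's
# rate is commonly treated as evidence of adverse impact.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

The hard part is not the math but access: without disclosure of selection counts by group, no one outside the vendor can compute even this.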
Healthcare algorithms exhibit similar opacity with life-altering consequences. One widely-used algorithm affecting millions of patients systematically assigned lower risk scores to Black patients than to white patients with identical health conditions, resulting in reduced access to care programs12. Patients remained unaware of the calculations directing their care decisions.
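The cited study traced the bias to a proxy choice: the score was trained to predict healthcare costs as a stand-in for health needs, and because access barriers suppress spending, equally sick patients generate unequal costs. The sketch below is a synthetic illustration of that mechanism, with invented populations rather than the study's data or model:

```python
import random

random.seed(1)

def patient(access_barrier: float):
    """Return (true_need, observed_cost); barriers suppress utilization,
    so equally sick patients can generate very different costs."""
    need = random.uniform(0, 10)               # underlying illness burden
    cost = need * (1 - access_barrier) * 1000  # dollars actually billed
    return need, cost

# Two groups with IDENTICAL distributions of health need, but group B
# faces barriers (coverage, distance, distrust) that reduce spending.
pool = [(*patient(0.0), "A") for _ in range(10_000)] + \
       [(*patient(0.4), "B") for _ in range(10_000)]

# A cost-trained "risk score" ranks patients by spending and routes the
# top 20% into extra-care programs.
pool.sort(key=lambda p: p[1], reverse=True)
top = pool[: len(pool) // 5]
share_b = sum(p[2] == "B" for p in top) / len(top)
print(f"Group B share of extra-care slots: {share_b:.1%} (equal need implies 50%)")
```

By construction both groups are equally sick, yet ranking on cost nearly shuts group B out of the program; the proxy, not the patients, carries the bias.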
The federal-state debate perpetuates what can be termed "ethics without accountability" — governance structures that respond to corporate interests and bureaucratic imperatives rather than community needs. Federal pre-emption promises uniformity but concentrates power13. State regulation offers flexibility but fragments oversight and delivers uneven protection. Neither approach addresses the core democratic deficit: those most affected by AI systems have the least influence over their development and deployment.
Community Intelligence as Governance Innovation
High-stakes, dynamic technological changes create conditions where policymakers become disoriented and fail to coordinate on necessary actions14. This coordination failure is both symptom and cause of a deeper structural problem: the concentration of AI development around purposes that systematically disincentivize participation. AI systems may optimize for efficiency, but they achieve it by undermining collective agency.
I believe one strategic response lies in what I call "community intelligence"15: AI systems designed, owned, and governed by the communities they serve, emerging from community needs rather than top-down control. When AI development incorporates local knowledge systems and participatory approaches, it creates resilient coordination mechanisms that can navigate transformative technological transitions. My research examines over 50 global cases of similar "democratic experiments" with technology, and I've found recurring patterns in their implementation and outcomes.
Consider the Workers' Algorithm Observatory, which enables gig workers to collectively document and analyze how algorithmic management affects their working conditions. Using tools like FairFare and Gigbox, drivers pool data to uncover patterns in algorithmic wage calculations and job assignments that individual workers could never detect alone16. This is collective sense-making at scale that transforms isolated workers into communities capable of negotiating with platform companies.
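Here is a stylized sketch of that pooling logic, using invented trip records (the real tools work from donated rider receipts and driver payout data). Any single driver sees only a noisy slice of the platform's pricing, while the pooled records expose the distribution of the platform's take rate:

```python
from statistics import mean

# Hypothetical pooled records: what the rider paid vs. what the driver
# received for the same trip. (Invented numbers for illustration.)
trips = [
    {"driver": "d1", "rider_paid": 24.00, "driver_received": 14.10},
    {"driver": "d1", "rider_paid": 11.50, "driver_received": 7.90},
    {"driver": "d2", "rider_paid": 32.75, "driver_received": 17.20},
    {"driver": "d2", "rider_paid": 18.00, "driver_received": 11.60},
    {"driver": "d3", "rider_paid": 45.30, "driver_received": 22.00},
    {"driver": "d3", "rider_paid": 9.25,  "driver_received": 6.40},
]

def take_rate(trip) -> float:
    """Platform's cut of the rider's fare on a single trip."""
    return 1 - trip["driver_received"] / trip["rider_paid"]

# Alone, each driver sees only their own (noisy) average.
for d in ("d1", "d2", "d3"):
    own = [take_rate(t) for t in trips if t["driver"] == d]
    print(f"{d} alone: mean take rate {mean(own):.0%} over {len(own)} trips")

# Pooled, the workforce sees the full picture: the average cut and how
# widely it varies from trip to trip.
pooled = [take_rate(t) for t in trips]
print(f"pooled: mean {mean(pooled):.0%}, range {min(pooled):.0%}-{max(pooled):.0%}")
```

The design point is that the unit of analysis shifts from the individual worker to the workforce: variability that looks like personal bad luck becomes a documented, contestable pattern.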
The Detroit Community Technology Project has developed community-owned digital infrastructure that includes open-source mesh networks for internet access17. Their Equitable Internet Initiative has trained over 300 "Digital Stewards": residents who build, maintain, and govern these systems based on community-defined principles18. When facial recognition was proposed for Detroit's Project Green Light, these Digital Stewards conducted research and public education campaigns, marshaling evidence and local data on the technology's extremely high error rates, especially for Black individuals. Community expertise combined with collective action did not end the program outright, but it won strict limits on facial recognition use: barring real-time surveillance and restricting the technology to investigations of violent felonies under oversight.
Organizations across the US are developing accountability frameworks that give communities real power. In Seattle, the Algorithmic Equity Toolkit enables community evaluation of municipal AI systems, from housing allocation algorithms to automated permitting systems19. This shifts algorithmic assessment from closed-door technical reviews to public processes where communities define acceptable uses and impacts.
It is easy to distinguish these examples of community stewardship from the federal and state approaches described above. They harness the power of collective knowledge to produce technical capabilities accountable to community needs. They emphasize capacity building — helping people understand and evaluate AI systems — rather than compliance. They create mechanisms for ongoing participation rather than one-time consultation. Most importantly, they demonstrate that meaningful innovation at societal scale emerges from distributing governance authority.
Toward Participatory AI Governance
The path forward requires reimagining AI governance as fundamentally participatory. This doesn't mean abandoning federal coordination or state experimentation but rather embedding both within frameworks that center community knowledge and control. My research on democratic digital experiments suggests conceivable pathways for achieving this transformation.
Community AI Boards could function like civilian review boards for police, with real authority to evaluate AI systems affecting local populations. Unlike top-down regulation, these boards would bring together residents, technical experts, and advocacy organizations to assess everything from predictive policing algorithms to automated benefit determinations. Oakland's Privacy Advisory Commission demonstrates this model in practice: residents directly shape city technology policies and have successfully blocked deployment of facial recognition and predictive policing systems through community-led technical assessments20.
Participatory Design Requirements could mandate that high-impact AI systems undergo community consultation from initial design through deployment. This moves beyond token engagement to shared decision-making about system objectives, data use, and evaluation metrics. The Montreal Declaration for Responsible AI offers principles for such participation, though implementation remains nascent21. Pittsburgh's recent ordinance requiring community impact assessments for AI deployments shows how cities can operationalize these principles, giving neighborhoods veto power over surveillance technologies22. If police or city agencies want to deploy such systems, they must first publicly justify the need and secure buy-in from the community's representatives.
Public AI Infrastructure could provide communities with computational resources and technical support to develop their own AI systems or audit existing ones. Canada's Sovereign AI Compute Strategy allocates $2 billion for domestic AI capacity, though it currently focuses on research rather than community empowerment23. To truly democratize the "means of AI production," some envision more public-interest approaches, such as equipping public libraries to serve as AI hubs, with the hardware, software, and training programs needed for ordinary people to learn AI skills and even develop their own algorithms. This vision is already being piloted: the Indianapolis Public Library system has offered programs providing both computing resources and training for community groups to build local AI applications24. These early pilots suggest that libraries and other civic institutions can play a key role in democratizing technical capacity, ensuring that everyone has the opportunity to shape and leverage AI.
These approaches would complement other governance mechanisms. Federal standards could establish baseline requirements for community participation, while state regulations could create frameworks for local AI boards. Technical standards could then mandate algorithmic transparency in forms accessible to communities. The goal isn't to choose between levels of government but to ensure all governance levels serve community-focused ends.
Navigating Structural Tensions
The tension between community-centered AI governance and existing power structures runs deeper than implementation challenges. We're confronting a system where technical complexity has become a gatekeeping mechanism, where "innovation" often means freedom from democratic oversight, and where the pace of technological change is weaponized against deliberative governance25.
Corporate AI developers argue that community participation would fragment standards and slow innovation in a globally competitive landscape. In response, communities should be afforded the agency to ask: does speed matter more than justice? Is efficiency more important than participatory processes? Does global competitiveness mean we have to sacrifice local agency? In effect, the questions from previous rounds of globalization are becoming relevant once more.
The coordination challenges I referred to earlier in this piece are real but not insurmountable. Environmental justice movements have successfully navigated technical complexity for decades, with communities mastering everything from toxicology to atmospheric chemistry to challenge corporate polluters26. The civil rights movement transformed statistical analysis into a tool for proving discrimination27. Technical knowledge, when coupled with lived experience and collective action, becomes a form of community power rather than a domain of expert domination.
Resource constraints shape these possibilities, but they reflect political choices rather than natural limits. We spend billions subsidizing AI development through tax breaks, research grants, and procurement contracts that primarily benefit large corporations. Redirecting even a fraction of these resources toward community capacity building would transform the landscape of AI governance.
The Stakes of Democratic AI Governance
Democratic innovation in AI is already taking place at scale, from Detroit to Oakland to Seattle, and many other places around the world. Communities aren't waiting for permission. They're building alternative infrastructures, developing new frameworks for accountability, and demonstrating that technical complexity need not preclude participation. Their impact comes from restructuring power relations in technology.
This restructuring connects to broader transformations I've traced in previous pieces on Open Wafers. Just as communities are reclaiming data as a form of social relations instead of property, and building digital commons to counter platform economy architectures, community intelligence represents another frontier in democratizing technology. These are interconnected movements sharing common principles: collective ownership, participatory governance, and the transformation of extractive systems into sources of community strength.
The coordination failure at the heart of current AI governance reflects deeper contradictions in how we organize technological power. Federal and state governments struggle to regulate systems they neither understand nor control, while the communities bearing AI's impacts remain excluded from decisions shaping their futures. This gap between governance and consequences creates the space where democratic alternatives become possible and necessary.
The path forward requires recognizing that genuine AI accountability cannot be legislated from above or negotiated between existing power centers. It must be built from below, through processes that distribute authority. As AI systems increasingly embed themselves in economic, social, and political processes, their governance becomes inseparable from democracy itself. The missing voices, the overlooked stakeholders in America's AI policy apparatus, can instead become the architects of a more open form of governance.
Those affected by, at risk from, or benefitting from the deployment of AI and its associated infrastructure.
Brookings Institution. "A Technical AI Government Agency Plays a Vital Role in Advancing AI Innovation and Trustworthiness." 2025. https://www.brookings.edu/articles/a-technical-ai-government-agency-plays-a-vital-role-in-advancing-ai-innovation-and-trustworthiness/
White & Case. "AI Watch Global Regulatory Tracker: United States." 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
Skadden, Arps, Slate, Meagher & Flom LLP. "Colorado Enacts Artificial Intelligence Act." May 2024. https://www.skadden.com/insights/publications/2024/05/colorado-enacts-artificial-intelligence-act
CalMatters. "California AI Bills: Tracking 2025 Legislation." 2025. https://calmatters.org/politics/2025/01/california-ai-bills/
The Texas Tribune. "Texas Responsible AI Governance Act." 2025. https://www.texastribune.org/2025/05/23/texas-ai-bill-legislation-regulation/
Carnegie Endowment for International Peace. "Technology Federalism: U.S. States at the Vanguard of AI Governance." February 2025. https://carnegieendowment.org/research/2025/02/technology-federalism-us-states-at-the-vanguard-of-ai-governance?lang=en
European Commission. "EU Artificial Intelligence Act." 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
StateScoop. "State Lawmakers Push Back on Federal Proposal to Limit AI Regulation." 2025. https://statescoop.com/state-lawmakers-push-back-federal-proposal-limit-ai-regulation/
Ensign, D. et al. "Runaway Feedback Loops in Predictive Policing." Proceedings of Machine Learning Research. 2018. http://proceedings.mlr.press/v81/ensign18a.html
The Washington Post. "Virtual Recruiters and the AI Transformation of Job Hiring." June 30, 2025. https://www.washingtonpost.com/business/2025/06/30/virtual-recruiters-ai-jobs/
Obermeyer, Z. et al. "Dissecting racial bias in an algorithm used to manage the health of populations." Science. 2019. https://www.science.org/doi/10.1126/science.aax2342
In agencies lacking both technical expertise and meaningful connections to affected communities.
This concept draws from my analysis of technological transitions and institutional adaptation in complex systems.
Workers' Algorithm Observatory, Princeton University. 2024. https://wao.cs.princeton.edu
Detroit Community Technology Project. "Equitable Internet Initiative Report." 2024. https://detroitcommunitytech.org/
Focused on consent in network design, community governance, and privacy.
"Algorithmic Equity Toolkit." https://www.aclu-wa.org/AEKit
Electronic Frontier Foundation. "Oakland's Progressive Fight to Protect Residents from Government Surveillance." January 2021. https://www.eff.org/deeplinks/2021/01/oaklands-progressive-fight-protect-residents-government-surveillance
Montreal Declaration for Responsible AI. "Participation Principles." 2023.
City of Pittsburgh. "Department of Public Safety to Regulate the Use of Facial Recognition and Predictive Policing Technology." 2024. https://www.pghcitypaper.com/media/pdf/_final_draft__select_surveillance_technology_ordinance.pdf
Government of Canada. "Sovereign AI Compute Strategy." 2024. https://www.canada.ca/en/innovation-science-economic-development.html
Noble, S. "Algorithms of Oppression." 2018. https://nyupress.org/9781479837243/algorithms-of-oppression/