Intelligence as a Collective Project
Why innovations in artificial intelligence should be built by and for communities
Note: This past month, I’ve been busy collecting data for my dissertation, doing analysis work, and writing. I’m also heading to Bucharest, Romania (later today) to present and participate in the “Business perspectives on corporate accountability for human and environmental rights violations” workshop at the National University of Political Science and Public Administration. As a result, the Substack has taken a bit of a back seat recently. I had originally planned to write a series of posts on AI in February, so I’m picking up that thread now, weaving in some snippets from my recently collected dissertation data.
I’ve recently observed a pattern at conferences, in the media, and in other public settings (including, frustratingly, my LinkedIn feed) that's become all too familiar: executives from major tech corporations presenting AI as an inevitable force that will transform our lives — for the better, they assure us — if only we'd step aside and let their proprietary systems take control. When it comes to artificial intelligence, the narrative of who gets to create that future remains remarkably narrow. What's missing is clear: any meaningful discussion of who designs these systems, who benefits from them, and who bears their costs.
This reflects a fundamental tension in how AI development is unfolding. The transformative potential is undeniable, but AI’s current trajectory concentrates unprecedented decision-making power in the hands of tech companies, often far removed from the communities their systems will affect most profoundly. As someone who is researching community-led digital initiatives across transnational contexts, I'm convinced that AI's most promising future lies not in corporate labs but in collectively-driven approaches.
Communities — whether defined geographically, culturally, by economic sector, or through common purpose — offer unique perspectives that can transform how AI systems are conceived, developed, and deployed. The community lens isn't just about representation; it's about prioritizing collaboration, contextual knowledge, and collective benefit. It's about reclaiming the democratic potential of technology lost in corporate-driven narratives of innovation.
The Corporate AI Problem
Today's AI landscape resembles less an open scientific endeavor and more a corporate arms race. A handful of players — Meta, Google, OpenAI, and a few others — control not just the most capable models but also the massive computing infrastructure necessary to train them. This concentration creates troubling asymmetries of power (intellectual, financial, and technical) and of knowledge.
When AI development happens primarily behind closed doors, accountability suffers. As anthropologist Mary L. Gray observed in her research on hidden labor in AI: "The myth of automation obscures the ways in which human labor remains essential to automated systems." Behind seemingly objective algorithms are human choices, values, and power relations that determine who benefits and who bears the costs of AI development. When AI systems increasingly make or influence decisions about healthcare access, employment opportunities, and resource allocation, their governance becomes a matter of public concern, not merely technical optimization.
The consequences of this closed-doors approach manifest in alarming ways. Predictive policing algorithms reinforce patterns of racial profiling by learning from historically biased police data. Healthcare AI systems trained primarily on data from affluent populations fail to serve marginalized communities effectively. Language models struggle with local dialects and cultural contexts outside dominant Western frameworks. These outcomes, often predictable, happen because development processes are disconnected from community knowledge and priorities.
When technical complexity combines with proprietary systems, meaningful public oversight becomes nearly impossible. Communities affected by algorithmic decisions often have neither the transparency to understand how these systems work nor the authority to demand accountability when they cause harm.
Community-Led AI in Action
The urgency of developing alternatives to corporate AI models becomes clear when we consider what's at stake beyond technical performance: equity, self-determination, and collective well-being. Community-led AI initiatives are more than experimental curiosities: they represent crucial counterpoints demonstrating that different development paths are possible and necessary.
Masakhane demonstrates this alternative vision. This pan-African research community develops natural language processing models for African languages chronically underserved by commercial AI. Its collaborative approach ensures that cultural nuances and contextual knowledge shape the technology from the ground up. By bringing together linguists, technologists, and cultural knowledge-keepers across the continent, Masakhane has created models that preserve linguistic diversity and serve local needs that commercial systems ignore. This grassroots work protects cultural heritage that could otherwise be erased in increasingly AI-mediated communication.
Data for Black Lives represents a community-led approach focused specifically on AI's racial impacts. The organization builds coalitions between data scientists, community organizers, and legal advocates to challenge discriminatory uses of data and algorithms across systems, work that ranges from crafting community data trusts to establishing accountability mechanisms for companies that use AI.
Equally intriguing are the Monlam AI and AI Pirika projects:
Monlam AI, a project of the Monlam Tibetan IT Research Centre, develops software, fonts, and digital tools to preserve and revitalize the Tibetan language. The team is tackling the global digital divide facing minority languages and advocating for linguistic justice in AI development. Their mission is to prevent linguistic and cultural extinction, ensure that Tibetan linguistic data remains under Tibetan control (data sovereignty), and democratize access to Tibetan knowledge.
AI Pirika aims to revive the critically endangered Ainu language, spoken by the Ainu people indigenous to northern Japan and a string of islands to its north. The project is managed by the Society for Academic Research of Ainu Culture (SARAC) together with AI experts and other technical and Ainu collaborators. “Pirika” means “pretty girl” in Ainu; the application is meant to be a hybrid between a virtual chatbot and a speech recognition engine. With native Ainu speakers nearly gone, the team hopes AI Pirika will survive as an Ainu speaker, contributing to Ainu language education programs and to activism for preserving minority languages worldwide.
The stakes are equally high in workplace contexts, where algorithmic management systems increasingly dictate working conditions with minimal transparency or accountability. The Workers' Algorithm Observatory addresses this power imbalance by enabling workers to document, analyze, and challenge algorithmic impacts. They produce tools like FairFare and Gigbox, which empower gig workers by pooling their data to uncover helpful trends and patterns, such as intelligence on how to optimize their wages; the sketch below illustrates the basic idea.
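To make that pooling idea concrete, here is a minimal, hypothetical sketch in Python. It is not the actual FairFare or Gigbox implementation; the trip fields, numbers, and function names are invented for illustration. The point is simply that statistics computed over many drivers' contributed records, such as the platform's take rate or effective hourly pay, reveal patterns no single driver could see in their own data.

```python
# Hypothetical sketch of worker data pooling. All field names and
# figures are illustrative, not taken from FairFare or Gigbox.
from dataclasses import dataclass
from statistics import median

@dataclass
class Trip:
    driver_id: str
    rider_fare: float   # what the rider paid, in dollars
    driver_pay: float   # what the driver received, in dollars
    minutes: float      # time spent on the trip

def take_rate(trip: Trip) -> float:
    """Share of the rider's fare kept by the platform."""
    return 1 - trip.driver_pay / trip.rider_fare

def pooled_stats(trips: list[Trip]) -> dict[str, float]:
    """Aggregate statistics across all contributing drivers."""
    return {
        "median_take_rate": median(take_rate(t) for t in trips),
        "median_hourly_pay": median(t.driver_pay / (t.minutes / 60) for t in trips),
    }

# Example: three drivers pool a handful of trip records.
trips = [
    Trip("a", rider_fare=20.0, driver_pay=11.0, minutes=18),
    Trip("b", rider_fare=35.0, driver_pay=17.5, minutes=30),
    Trip("c", rider_fare=12.0, driver_pay=8.0, minutes=10),
]
print(pooled_stats(trips))  # reveals take rates and pay invisible to any one driver
```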
These initiatives share a common approach: they begin with community needs rather than technical possibilities, center the knowledge of those most affected by technologies, and build governance structures that distribute rather than concentrate power. Their urgency stems from the recognition that waiting for corporate AI to reform itself means accepting the continued entrenchment of technological inequalities.
Pathways to Democratic AI
Building AI systems that serve community interests requires reimagining both how these technologies are developed and how they're governed. Several promising pathways have emerged from communities already engaged in this work.
Public computing infrastructure offers one essential foundation. The Sovereign AI Compute Strategy launched by Canada demonstrates how governments can invest in computational resources that democratize access to the building blocks of AI development. By committing $2 billion to grow domestic AI capacity while ensuring affordability for small enterprises and researchers, such initiatives can counterbalance corporate dominance while supporting community-centered innovation.
Data cooperatives provide another pathway, addressing inequalities with alternative frameworks for managing the resource that fuels AI: our collective data. The Driver's Cooperative in New York exemplifies this approach, creating a driver-owned alternative to corporate ride-sharing platforms. Through cooperative ownership, drivers ensure that algorithmic systems serve their needs rather than extracting value at their expense. Similar models are emerging in healthcare, agriculture, and other domains where community data governance can transform power relations. When communities govern their own data, they can negotiate terms of use that ensure benefits flow back to them.
Educational initiatives like TechTonic Justice build the civic capacity necessary for communities to engage critically with AI systems. Their work with legal aid organizations and community groups creates "algorithmic literacy from below," enabling effective challenges to harmful systems like those that wrongfully terminate public benefits. By demystifying AI and building coalitions between technical experts and community advocates, they enable meaningful participation in AI governance.
The transnational character of community resistance to corporate AI is perhaps most visible in the Indigenous data sovereignty movement, where organizations like Te Mana Raraunga (Māori Data Sovereignty Network) and the US Indigenous Data Sovereignty Network collaborate across borders. These networks articulate principles for AI development rooted in Indigenous relationships to knowledge and community, demonstrating how alternative governance approaches can address contemporary technological challenges while building collective power.
Community Intelligence as the Future of AI
To be sure, community-led approaches face significant obstacles. Resource constraints limit their immediate scale, corporate resistance undermines their adoption, and policy frameworks often favor established actors. Technical challenges also remain in developing methods that effectively incorporate diverse community knowledge into AI systems.
Yet these barriers are not intrinsic to AI technology itself; they reflect the political and economic contexts in which it develops. The alternative models highlighted here offer something essential: proof that different AI trajectories are possible. They demonstrate that technologies embedding democratic values from the beginning, rather than applying “ethical” or “responsible” fixes after problems emerge, can produce more equitable outcomes and more innovative, contextually appropriate solutions.
What makes community approaches essential for AI's future is their ability to redefine what we consider "intelligence" in the first place. While corporate AI often reduces intelligence to pattern recognition optimized for predictive accuracy, community approaches expand this definition to include cultural understanding, diverse knowledge systems, and collective decision-making.
This expanded conception of intelligence offers a profound shift in how we imagine AI's potential. Rather than autonomous systems that replace human judgment, community-centered AI can amplify our collective capacity to address shared challenges — from climate adaptation to public health — while respecting the variety of human contexts and values.
The question isn't whether we'll continue developing increasingly capable AI systems (we almost certainly will) but whether these systems will reflect the priorities, knowledge, and governance of the communities they affect. By reclaiming AI as a community project rather than surrendering it as a corporate product, we open possibilities for it to enhance our capacities: to relate to one another, to make technology accountable to us (and not the other way around), to collectively produce vital knowledge, and to advance collective economic rights.
The future of AI should be a shared endeavor of collective intelligence and community governance, not because it's technically expedient, but because it's the only approach that aligns with deeper aspirations for more inclusive futures.