The first time I attended a corporate AI ethics panel, I was struck by a peculiar inconsistency. The room buzzed with earnest discussion of "responsible AI" and "ethical frameworks," yet when audience members asked pointed questions about specific algorithmic harms or the absence of oversight, the conversation quickly retreated to comfortable abstractions. That contradiction stayed with me: a space where ethics was everywhere and nowhere at once.
As technologies like generative AI rapidly transform our social landscape, "ethics" and "responsibility" have emerged as the go-to buzzwords for addressing concerns about their impacts. As someone who studies community-led stewardship of digital resources, I've come to see this reliance on buzzwords not as an oversight but as a feature of our current AI moment. "Ethics" is invoked to soothe public anxiety, but rarely to shift power. And haven't we seen this all before, in earlier debates about algorithms, social media, surveillance, and privacy?
In this post, I want to explore how we can move from ethical rhetoric to real accountability in AI: accountability grounded in democratic values, structural transparency, and community control.
The Ethics Problem in AI
We've entered what Alexander Campolo and Kate Crawford call an era of "enchanted determinism" around AI, in which these systems are presented as simultaneously magical and inevitable: beyond ordinary understanding, and therefore beyond reproach, yet destined to reshape our world. We are told the systems are neutral, the consequences unintentional, and the future inevitable. But what if that's all part of the design? This framing serves a strategic purpose, positioning AI development as a technical domain beyond democratic governance, even as these systems are crafted by very specific institutions, people, and politics.
Corporate AI ethics has become a sophisticated public relations exercise. Major tech companies announce ethics boards and publish principles while simultaneously fighting transparency requirements and meaningful regulation. This contradiction reflects the fundamental tension between ethical AI and the business models driving its development.
The reality is that many AI systems deployed today carry significant risks of bias and harm. Hiring algorithms penalize women and people of color. Predictive policing tools replicate and intensify existing racial biases in law enforcement data. Language models perpetuate stereotypes and exclude underrepresented dialects and identities.
Consider facial recognition technology, where companies published ethics principles while selling surveillance systems to law enforcement agencies with documented histories of discrimination. Or content moderation systems that disproportionately flag non-Western languages as "harmful," reflecting the biases of their predominantly Western developers. In both cases, ethics statements function more as liability shields than substantive protections.
These patterns of harm aren't random. The Algorithmic Justice League has documented how AI systems consistently show higher error rates for marginalized communities: misgendering or misclassifying people, and reinforcing discriminatory practices in hiring, lending, and criminal risk assessment. That consistency reveals AI bias as a structural issue, rooted in who designs systems and whose needs they prioritize.
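To make that structural point concrete, here is a minimal sketch of the kind of disaggregated error analysis such audits rely on: rather than reporting a single aggregate accuracy figure, error rates are computed separately for each group and the gap between them is reported. The records, group labels, and outputs below are hypothetical placeholders, not data from any actual audit.

```python
# A minimal, hypothetical sketch of a disaggregated error audit:
# compare false positive rates across groups instead of reporting
# one aggregate accuracy number.
from collections import defaultdict

# Hypothetical audit records: (group, true_label, predicted_label)
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

def false_positive_rates(records):
    """Return per-group false positive rate: P(pred = 1 | true = 0)."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, true_label, pred in records:
        if true_label == 0:
            counts[group]["negatives"] += 1
            if pred == 1:
                counts[group]["fp"] += 1
    return {
        group: c["fp"] / c["negatives"]
        for group, c in counts.items()
        if c["negatives"] > 0
    }

rates = false_positive_rates(records)
print(rates)                                   # {'group_a': 0.5, 'group_b': 1.0}
print("FPR gap:", max(rates.values()) - min(rates.values()))
```

The point of disaggregation is that an "average" metric can look acceptable while one group quietly absorbs most of the errors; reporting the gap makes that visible.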
Corporate ethics frameworks typically position these harms as unintended consequences to be mitigated through technical adjustments rather than as predictable outcomes of development processes. This framing conveniently avoids addressing the fundamental power imbalances that determine who benefits from AI and who bears its costs.
Yet time and again, companies respond to such critiques by pointing to ethics boards, mission statements, or "responsible AI" guidelines—without meaningful transparency, enforceability, or community input. Ethics, in this context, becomes less a mechanism for accountability and more a tool for reputational management.
What Real Accountability Looks Like
True AI ethics cannot be separated from power. It requires redistributing control over how technologies are developed, governed, and evaluated. Accountability starts with a simple but often overlooked premise: the people most affected by AI should have a say in how it is designed and deployed.
Community oversight is the first pillar. This could take the form of citizen panels reviewing algorithmic decisions, as seen in some urban planning contexts, or labor unions negotiating the role of automation in the workplace. Participatory AI design, in which community members co-create systems alongside technologists, offers another model. These approaches move beyond consultation to shared authority, ensuring AI reflects a broader range of values, needs, and lived experiences. For instance, the Algorithmic Equity Toolkit, developed by the ACLU of Washington with University of Washington researchers, offers a practical guide for communities to evaluate the risks of automated systems in municipal governance. Rather than relying solely on internal reviews, the toolkit provides questions and criteria residents can use to interrogate how algorithms affect housing, policing, and public services, flipping the script on who gets to define impact.
Second, accountability requires transparency that moves beyond voluntary disclosure to structural requirement. The AI Incident Database, launched by the Partnership on AI, offers an open, continually updated catalog of documented AI failures across domains, from biased recruitment software to flawed medical diagnostics. This public record turns abstract discussions of AI risk into concrete documentation that journalists, researchers, and policymakers can use to establish patterns, apply pressure, and advocate for change.
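Part of what makes such a record useful is structure: once failures are logged in a consistent schema, isolated anecdotes become countable patterns. The sketch below illustrates that idea with invented fields and records; it is not the Incident Database's actual schema or API.

```python
# Hypothetical sketch: structured incident records make harms countable.
# Field names and records are invented, not the AI Incident Database schema.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Incident:
    system: str          # e.g. "resume screening tool"
    domain: str          # e.g. "hiring", "healthcare", "policing"
    harm: str            # short description of the documented harm
    affected_group: str  # who bore the cost

incidents = [
    Incident("resume screener", "hiring", "downranked women's resumes", "women"),
    Incident("triage model", "healthcare", "underestimated care needs", "Black patients"),
    Incident("risk score", "policing", "over-flagged neighborhoods", "Black residents"),
]

# Aggregating by domain turns scattered anecdotes into a documented pattern
# that journalists, researchers, and policymakers can point to.
print(Counter(i.domain for i in incidents))
```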
Opening up AI models, training data, and development processes to public scrutiny is essential. So is creating mechanisms for redress when harm occurs.
Third, genuine accountability establishes consequences for harmful systems. Google's AI ethics team (before its controversial restructuring) demonstrated this by blocking the deployment of AI systems that failed to meet ethical standards, even when doing so created internal conflict. That public opposition to problematic applications carried real consequences for corporate leadership and showed that ethics requires more than guidelines: it requires enforcement power. And in the Netherlands, a court halted the government's use of the SyRI system for welfare fraud detection after finding that it violated human rights protections.
These examples shift power from developers to the communities affected by their systems. That shift is necessary because technical expertise alone cannot determine what constitutes ethical AI across diverse, often contested, contexts.
Ethical AI in Action
Across the world, grassroots organizations and coalitions are already building models for ethical AI that go far beyond corporate codes of conduct. These initiatives point to a different future—one grounded in accountability and collaboration.
Participatory approaches to AI development reveal promising alternatives to corporate-dominated models. The Distributed AI Research Institute (DAIR) centers its research agenda on communities historically marginalized by mainstream AI development. Its work on documentation practices for large language models demonstrates how technical tools can enhance accountability when they are designed with community input.
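As a rough illustration of why documentation matters, the sketch below records the kinds of questions that datasheet-style documentation tends to surface about a language model's training data: whose language is included, how it was collected, and who was consulted. The fields and values are invented for illustration; this is not DAIR's actual tooling or template.

```python
# Illustrative sketch of datasheet-style documentation for a language
# model's training data. Fields are hypothetical, not an official template.
from dataclasses import dataclass, field

@dataclass
class DataDocumentation:
    name: str
    languages: list = field(default_factory=list)        # whose language is represented?
    collection_method: str = ""                           # scraped, donated, licensed?
    consent_process: str = ""                             # did contributors agree?
    known_gaps: list = field(default_factory=list)        # who is missing or misrepresented?
    community_reviewers: list = field(default_factory=list)

doc = DataDocumentation(
    name="example-web-corpus",
    languages=["English", "isiZulu", "Yoruba"],
    collection_method="web crawl plus community-contributed text",
    consent_process="opt-in for contributed text; crawl consent unresolved",
    known_gaps=["low coverage of spoken dialects", "few non-Latin scripts"],
    community_reviewers=["language community representatives"],
)
print(doc)
```

Even a simple schema like this forces developers to answer questions they might otherwise skip, and gives outside reviewers something concrete to contest.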
Masakhane, a pan-African research community, develops natural language processing models for African languages. Its collaborative processes challenge the notion that impactful AI must emerge from elite institutions, instead advancing a model where linguistic equity and capacity-building go hand in hand.
And at the policy level, initiatives like the EU AI Act and Canada’s Sovereign AI Compute Strategy begin to sketch out what enforceable, public-interest AI governance might look like—offering a counterweight to the private sector's dominance.
Pathways Forward
Ethical AI is not a checklist. It is a collective project that demands broad participation and constant vigilance. For this reason, education and coalition-building are just as important as technical safeguards. Moreover, community-centered approaches must be complemented by legal, infrastructural, and cultural interventions.
We need to build spaces—in schools, unions, libraries, and civic organizations—where people can critically engage with AI. We need interdisciplinary collaborations that connect computer scientists with social scientists, artists, lawyers, and communities. And we need activism that keeps pressure on institutions to prioritize equity over expediency.
We also need new institutions. That includes public computing infrastructure that democratizes access to large-scale model training, and independent audit bodies empowered to evaluate AI impacts in critical sectors. Interoperability and data portability rights would give users more control over how their data is used, converting ethical principles into technical affordances.
Educational and civic spaces matter here as well: programs like the Algorithmic Rights and Protections project in Chicago train residents to identify and respond to harmful algorithmic decisions affecting housing and public services. These efforts build algorithmic literacy from below, linking technological understanding to political empowerment.
We also need policies with teeth. The EU AI Act introduces a tiered risk framework that places stricter obligations on high-impact systems. While not perfect, it reflects an evolving understanding that voluntary ethics are not enough. Laws must follow where risks are greatest.
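To give a sense of how a tiered framework operates, here is a toy sketch that maps example use cases to simplified obligation tiers. The categories loosely mirror the Act's risk levels, but the wording and mappings are paraphrases for illustration, not the legal text.

```python
# Toy illustration of a tiered risk framework in the spirit of the EU AI Act.
# Tiers, examples, and obligations are simplified paraphrases, not legal text.

RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring by public authorities"],
                     "obligation": "prohibited"},
    "high": {"examples": ["hiring", "credit scoring", "biometric identification"],
             "obligation": "conformity assessment, documentation, human oversight"},
    "limited": {"examples": ["chatbots"],
                "obligation": "transparency: disclose that users face an AI system"},
    "minimal": {"examples": ["spam filters"],
                "obligation": "no additional requirements"},
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) tier and obligation for a given use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unlisted: assess case by case"

print(obligations_for("hiring"))       # high: conformity assessment, ...
print(obligations_for("spam filters")) # minimal: no additional requirements
```

The design choice worth noting is that obligations attach to the use case rather than to the technology in the abstract, which is exactly the shift from voluntary ethics to enforceable rules that this post argues for.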
Ethics with Teeth
To be sure, these approaches face real challenges. Community-led models often struggle with funding and visibility. Policymaking is slow and vulnerable to capture. And powerful actors continue to frame ethics as a matter of internal values rather than public negotiation.
Still, ethical principles matter. They help articulate values and aspirations that can guide development. Many technologists genuinely seek to minimize harm. But ethics without accountability mechanisms ultimately preserves the status quo of power relations in tech development.
The challenge is not to abandon ethics because it has been co-opted, but to ensure it has teeth: concrete mechanisms for transparency, oversight, and redress when systems cause harm. As long as AI systems shape how we live, work, and relate to one another, their governance must be a democratic concern. By centering community expertise, establishing meaningful transparency, and creating consequences for harmful systems, we can move beyond ethics as a buzzword toward technologies that genuinely serve the public good.