It’s time for equity consultants and community organizers to ditch AI

Tools are never neutral, and artificial intelligence has no place in social justice work


This is not a column about all the things that are terrible about AI. 

You’re probably familiar with at least some of them: the mind-bogglingly unsustainable energy and water demands of AI data centres, for instance. AI infrastructure is projected to consume six times more water than the entire country of Denmark by 2027, at a time when a quarter of the world’s population still lacks access to clean drinking water. According to one report, a single medium-sized data centre uses over 1 million litres of water in a single day — the same daily consumption as 5,000 Canadians. 

The Canadian data centres whose applications are currently under review for approval would collectively require as much energy as 70 per cent of all the households in Canada consume. And the destruction caused by AI data centres is growing at an exponential rate: a decade ago there were about half a million data centres, while today there are over 8 million.

Even the Conference Board of Canada recently warned that Canada’s existing water and energy resources are insufficient to meet the projected demands of AI development projects: “the trilemma of energy, water, and environmental impacts is a circuit breaker,” the board warns. (Its solution, of course, is to massively expand energy development projects, with all the environmental, social and human rights impacts that accompany them.)


You can also read up on how AI’s environmental impact is disproportionately hitting poor and marginalized communities, particularly Black communities, and exacerbating environmental racism for Indigenous communities already struggling to protect their land and sovereignty from pipelines and other invasive energy projects. That’s not even getting into AI’s destructive effects on how we think and act as humans: how it’s gutting the cognitive capacities of young people (who are growing up relying on AI rather than learning how to read, write and think for themselves) and undermining the critical thinking capacity of adults. All this for technology that can’t even deliver accurate answers most of the time. 

Despite the fact that its negative impacts overwhelmingly outnumber the positive, governments and corporations continue pouring money, land and resources into AI development with dreams of some eventual payback — or simply out of fear someone else will get this elusive payback first. AI investment has the aura of the biggest FOMO pyramid scheme in world history.

Then there are the security risks. Almost one-third of Canada’s data centres are owned by American companies, and under US law the American government can access any of our data that’s in them. 

My goal here is not to rehash all this—go do your own doomscrolling—but to pose the question: Why are community groups and people working in the fields of equity and social justice continuing to use AI?

I work with a lot of community organizations and EDI consultants, and I can’t think of a single meeting I’ve attended in the past year where someone working in this field hasn’t brought along an AI notetaker to a Zoom meeting, or run data and document drafts through an AI program for analysis. There’s a profound cognitive dissonance involved in having a meeting to discuss how to improve things for a low-income community or population while AI notetakers are running in the background, recording and transcribing and summarizing, and starving other low-income communities in some other part of the world of water, electricity, or employment. In a field where intersectionality has become such a buzzword—acknowledging how all our struggles are interrelated—how are social justice organizers able to shut off any consideration of the impacts of their technology use on low-income, racialized communities elsewhere?

I’ve raised this issue a few times; in more familiar group settings I’ve actually asked folks to turn off the AI recording. It usually elicits a guilty, hapless shrug, along with a response which is some variant of, “it keeps me organized,” or, “I’d never be able to keep track of everything without it.” I’m sure it makes life easier. But in our field, we’re supposed to be suspicious of products and providers that promise to make our lives easier. The question we’re always supposed to ask is: at what cost? I’m astonished that folks who have made decisions to not shop at Amazon, to ditch their Spotify accounts, or to boycott unethically produced sweatshop goods continue to use AI, whose impact is more destructive than all those other things combined.

Then there’s the ‘AI is here, it’s not going away, might as well use it’ argument. Well, colonialism and climate change are here too, but that doesn’t keep us from fighting to reverse and end them. Where would we be today if the ‘it’s here, might as well give in to it’ argument was used around slavery, child labour, apartheid, or any of the other countless ills our predecessors fought to end? If you work in the equity or social justice field and your argument is, “it’s here, might as well use it and benefit from it,” you’re not in the right field.

I can also hear the tech-bro counterarguments: “You use an iPhone, don’t you? A laptop? With blood minerals mined by child slaves in Rwanda or Congo?” Certainly much of our modern work and lifestyle in the west is imbricated in these horrific injustices. And that’s precisely why the last thing we ought to be doing is adding a new evil to the ones we already partake in. We ought to be finding ways to divest our work of its existing reliance on unethical tools, not shrugging and adding a new one to the mix. Let’s stop this practice before we become reliant on it too.


‘AI helps me stay organized’

I want to unpack the justification used by a lot of social justice workers that “AI keeps me organized and helps my work.” Much of the work we do as progressive and community organizations is predicated on the principle: don’t do the lazy, faster, cheaper, easier thing. This isn’t just some high-minded aspiration — it’s a core driving principle behind much justice- and rights-based organizing. Almost all of our work boils down to the ideal: do what’s right, not what’s easy.

It’s easy to ignore the unhoused, to hire cops to sweep away encampments and imprison or disappear unhoused folks, rather than come up with policies and strategies to provide housing for everyone. It’s easy to construct a building without consideration of accessibility needs for the relatively small minority of people who might use it with wheelchairs or who live with other disabilities. It’s easy to implement development plans or expand universities and other local infrastructure without giving a thought to the Indigenous Peoples whose lands were stolen to create this space. It’s easy to have a non-unionized workforce you can shrink (lay off people as needed) or grow (hire people at lowest cost as needed) as you wish, without consideration of things like seniority or health benefits or pay equity or job security. 

Nothing good that we strive to do in this world is easy. So the justification that AI makes our work easier, faster, more efficient, or cheaper should never be used, especially in community advocacy and social justice work. When considering whether to use a tool, we should first consider whether the tool itself matches the values of our organization or movement.

Another underlying principle of progressive organizing is intersectionality — considering the interrelatedness of struggles. AI data centres, with all their destructive impact on local natural resources and infrastructure, are being built predominantly in poor, Black and Indigenous neighbourhoods. The climate change they produce is also wreaking much of its worst havoc on poor communities, racialized communities, and the colonized global south. Being part of the AI user base means you are contributing to that. How does a progressive organization or individual who claims to be dedicated to decolonizing work, anti-racist work, or climate justice work use a tool that very specifically and directly undermines those aims? 

So allow me to repeat: When considering whether to use a tool, we should first consider whether the tool itself matches the values of our organization. Tools are never neutral. 

Let’s not forget that other core principle of intersectional, community-based organizing so eloquently stated by Audre Lorde in 1979: The master’s tools will never dismantle the master’s house. 

I can hear the awkward groans from a hundred EDI consultants, from the leadership teams of community organizations, from overworked and under-resourced activists: Do I have to give up my AI notetakers as well? Do I have to add to my workload by doing this work by hand?

As with anything, the answer to that is up to you. Do you really value the work that you do? Are you truly committed to sustainable, meaningful outcomes that align with your sense of integrity and commitment to intersectionality and justice? Or is your primary concern your paycheque at the end of the day, and keeping an extensive and happy client base?

This is specifically directed to those of us who work in progressive community organizing, because in many ways we have a higher ethical standard to follow. A low- or middle-level clerk in an institutional or bureaucratic workplace setting, or a factory or shop floor worker, doesn’t have the autonomy and control over their work to decide what tools to use or not to use — they’re told by their boss, whose motivation probably involves some variation on making or saving money. 

But our role is different. We go into EDI consulting, equity work, advocacy work, community and worker organizing because we’re committed to building a better world for those around us. That’s why we do the hard things: take on governments and corporations, incorporate intersectionality into our work, reflect on ways to decolonize and implement anti-racist principles. We’re modeling the type of society, actions and accountability that we want to inspire others to adopt. When we cut corners for the sake of cost and efficiency, that sends an implicit message to those watching us set the standard: it’s okay to cut corners. 

Is that the message we want to send?

If we work in the consulting field, our clients ought to be concerned, too. What does it mean to hire a consultant to develop an anti-racist action plan for an organization, when the AI notetaking and transcribing and summarizing that consultant is going to do will draw on the resources of a data centre directly harming Black communities in Alabama or Georgia?

But it’s always a matter of trade-offs, right? The time we save by using an AI note-taker will help us improve lives for our union members, for that Indigenous community, for the unhoused in our city, right? Just the same way our investments in that weapons manufacturer will generate important revenue to support our work, right? We could divest from that weapons manufacturer, but the revenue helps the poor people in the community we serve, right? And that balances out the little children being killed in another country by the weapons manufacturer’s bombs, right? 

(The preceding paragraph is a sarcastic take, for those of you using AI to summarize this article. AI is pretty terrible at understanding tone, among other things.)

Turning ideas into action

Where do we start in seeking to divest our work from AI entanglements? There are things we can do on both the client and the organizational side to begin addressing these problems. 

As clients, we can insist that consultants or agencies we hire for work not use AI in the work they do. Those demands or expectations can be made up-front as part of negotiating a contract. It can even be a question we ask when we reach out to a potential consultant: “Do you use AI in your work?” Put them on the spot. Make them squirm, and make them reflect on or justify their operational practices. Ask them how their use of AI reflects the integrity of the principles they espouse, and how the harms AI inflicts—from resource-greedy data centres to global warming to job loss—can be justified by people or groups that espouse harm reduction principles.

If we’re funders, we can introduce stipulations that grant applications not be produced using AI, and penalize or reject applications that have been. But the onus is on funders, too, to make sure the workload required for a grant application is reasonable; it’s the pages and pages of repetitive and often unnecessary questions that tempt applicants to rely on AI in the first place. 

If we’re consultants, we can advertise our commitment to providing an AI-free workplace or relationship. That ought to become a new standard, something that’s valued and promoted, something that adds to a consultant’s appeal. 

If we’re an organization or community group, we can develop AI policies. Those can be simple and sweeping. If you’re an arts organization, does your organization have a ‘No AI art’ policy to ensure no one is using AI tools in Canva to produce posters and ads, for instance? If you’re a union, do you have a policy prohibiting AI recording or note-taking in meetings? Developing such a policy could also help members make the request that AI note-taking not be used in other coalition or multi-group meetings. Your policy could stipulate that organizational documents—like policies, press releases, correspondence—will not be drafted using AI. Provide your members or employees an opportunity to do the work and cultivate the skills—or better yet, get paid for it.

We often don’t realize the spin-off benefits of even the most mundane skills until we’re in a situation where we find ourselves using them. People who use AI for a task aren’t cultivating that skill. Even note-taking—the ability to quickly process information, to identify what’s important and what isn’t, to jot it down on paper or type it out on a computer—is vital in so many areas of human endeavour. The more we rely on AI to do that, the more our own capacity to process and prioritize atrophies.

It is often in the process of transcribing something, or summarizing or rewriting my notes, that I come to new understandings of the content I’m dealing with. I might realize additional questions or issues that ought to be clarified, stumble into a deeper understanding of why someone said what they did, notice a difference in tone or a hesitation in their voice, become newly aware of what they left out.

These critical-thinking skills are essential to our work in equity and justice fields. Turning the raw tasks over to AI reduces the quality of our work and our ability to deliver on the issue or project we’re working on. 

I’ll be blunt. If you’re already using AI, giving it up will be hard: from the added hours of transcribing notes and summarizing meetings to drafting policy wording based on those notes and writing emails asking people to clarify something. But the work you do, and the relationships you forge, will be better for it in the long run. Most importantly, you’ll know your work was produced with integrity and reflects your authentic care and commitment to the values we all like to say we hold.

Author
Rhea Rollmann is an award-winning journalist, writer and audio producer based in St. John’s and is the author of A Queer History of Newfoundland (Engen Books, 2023). She’s a founding editor of TheIndependent.ca, and a contributing editor with PopMatters.com. Her writing has appeared in a range of popular and academic publications, including Briarpatch, Xtra Magazine, CBC, Chatelaine, Canadian Theatre Review, Journal of Gender Studies, and more. Her work has garnered three Atlantic Journalism Awards, multiple CAJ award nominations, the Andrea Walker Memorial Prize for Feminist Health Journalism, and she was shortlisted for the NL Human Rights Award in 2024. She also has a background in labour organizing and queer and trans activism. She is presently Station Manager at CHMR-FM, a community radio station in St. John’s.