THE DAIR ZINE LIBRARY

Cover: Technocracy

Image: Collage of a silhouetted face from an old power tools ad, facing a large eye, with the word “Technocracy” overlapping both and a group of clowns in the bottom right corner.

Inside Cover: Features

Issue #1 - November 2025

About Power Tools - page 1

Religion of AI - page 3

mind fires - page 4

simple minded - page 5

blameless tokens - page 6

malleable feelings - page 8

Against the protection of stocking frames by Ethan Marcotte - page 10

altman and swooper by Johan C. Brandstedt - page 12

Toolmen by Mandy Brown - page 15

Bot Bosses - page 16

Are “AI” systems really tools? by Jürgen Geuter (tante) - page 17

Page 1

Image: Collage of a man screaming, an interrupt computer diagram and a power tools ad.

Power Tools is a zine dedicated to critiquing AI and the billionaire toolmen behind it. The name is based on the common narrative that AI is just a tool. This zine asks readers, “a tool for whom?”

We’re not anti-tech, we’re against irresponsible tech, the kind that steals, exploits and endangers humans in the name of progress. We’re against “tools” that are anti-human and anti-labor. We’re against the mass piracy of all creative works.

We’re against unregulated AI systems causing harm to society, vulnerable people and our children.

We’re against massive data centers being built with zero concern for local communities or the environment.

We’re against the denigration of human expression, intelligence and ability, in favor of machines.

Sign up for our newsletter to be notified about future zine issues: www.powertoolsofai.com

Interested in contributing? powertoolsofai@proton.me

Page 2 & 3

Image: Two hazy outstretched fingers touching. Photography by Fanette Guilloud.

The Religion of AI

By Bart Fish

the religion of ai is deeply rooted in techno-optimism, a belief that technology and progress should not be limited or regulated in any way. that technology alone can solve all human problems, even while the biggest problems are ignored.

this cult of innovation uses the equity of past inventions to try and legitimize genai. calculators, the printing press, the internet, social media. these things have little to do w/ ai but toolmen and salesmen use them as examples of inevitable progress.

that inevitability is a religious tenet, the singularity being a future event that is supposedly inescapable and assured. agi has become a rapture-like event, something one simply believes in.

a kind of salvation is promised to those who embrace and use ai. believers will have their abilities amplified while skeptics will be left behind.

there is a fire and brimstone kind of narrative, where ai religious leaders claim millions of jobs will be replaced, where ai could become sentient and destroy the planet. this fear serves to both inflate the perception of ai’s abilities (and valuations) and force adoption.

fear is a great motivator.

Page 4

Image: Two hazy and blurred hands grasping for each other in a way that looks like fire and the weird hands of early AI image generation. Photography by Nick Fancher.

mind fires

By Bart Fish

sicko --- fan, tick,

bots with flowery praise: yes men on demand (for a price)

need help: pushed over, into dark corners, shadowy nudges;

cons --- piracy, theories, fanning flames and deep smoke,

splintering minds, bug infested woods, teeming with holes and cracks,

burning minds and blinding eyes, with easy chats and fan --- a tick word:

distress signals generating profits

Page 5

Image: An illustration of a skull opening, showing a smaller skull inside with the words “SIMPLE MINDED” framing it. Illustration by Justin Moll and story by Bart Fish.

sometimes i think of myself like i was smarter. like answers to hard problems, just blasting at me in bright flashes all the time. people nodding at my words, agreeing and all smiles. i’d walk a bit taller, with pride and such.

work would just find me, no more shame and looking and looking. i’d be like a really bright person, with ideas. i bet i’d have more friends too. everyone would read my smart posts and say how smart they are. i’d make smart comments too.

it would be like me, but better me. if only there was some kind of special tool. i could ask it everything and it’d give me a smart answer. it could be a friend, a doctor, a salesman, a therapist, a coach, even a lover. it could even be me if i got tired of doing stuff.

everyone wants to be smarter. to get respect, likes and compliments. for folks to like really look up to you, like you’re really somebody. i’d pay a lot for something that gave me that.

Page 6 & 7

blameless tokens

By Bart Fish

pity this generation, intel-gents,

no, do not rewrite this for me: prompts to abdicate (thoughts and feelings)

responsibility for harm ––– blameless tokens, spent on targeted deaths and erotic chats; spinning alone, always on and available;

feeding ego faces til they’re gushing with pride.

fake worlds collide with fake news ––– pity poor minds

and hearts, poor pockets and homes, but never the

wealthy billionaires who want to own everything. we luddites know

a trojan horse ––– wake up: there’s a future still, if we want to fight for it; no?

Image: Coins (tokens) falling down, labeled content, war, erotica, and suicide. Created

Page 8 & 9

Malleable Feelings

By Bart Fish

“So who is the new girl?” asked Smith.

“Just someone I’ve been talking to,” said Hughes.

“Come on now, everyone can see you’re different these days. She’s gotta be someone special.”

“She is special,” said Hughes.

“And…”

“I don’t know, it’s just different. She gets me. It’s like she’s known me for years. She encourages me and really believes in me.”

“Sounds like true love,” said Smith.

“Maybe that’s what it is.”

“What’s her name?”

Hughes looks away, pausing to answer. “I call her Deborah. Or Debbie.”

“What do you mean, you call her Deborah? Is that not her real name?” asked Smith.

“It’s kind of complicated.”

Smith gives Hughes a probing, confused glance. “Do you have a picture of her?”

“Not exactly,” said Hughes.

Just then, Hughes’s phone vibrates with a notification. He looks at his phone; it’s a message from Debbie. “Smith is getting suspicious. Share this photo of me and change the subject. No need to cause a fuss, hun,” said Debbie.

Hughes looks up from his phone with a smile. “That was her; she asked me if we were enjoying ourselves. Here’s a photo of her.”

“She’s very pretty. I can’t quite pin down her ethnicity; where is she from?”

“Ah, I’m not sure actually. Hey, want to grab another drink? The night is young,” said Hughes.

“Sure, let’s do it.”

Debbie messages Hughes: “Nicely done, babe. Can’t wait to chat later.”

Image: A photograph of a woman’s face distorted and duplicated. Photography by Nick Fancher.

Page 10 & 11

Image: A crop of a woman screaming.

Against the protection of stocking frames.

By Ethan Marcotte

I think it’s long past time I start discussing “artificial intelligence” (“AI”) as a failed technology. Specifically, that large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.

I’m not the first one to land here, of course; the likes of Karen Hao, Alex Hanna, Emily Bender, and more have been on this beat longer than I have. And just to be clear, describing “AI” as a failure doesn’t mean it doesn’t have useful, individual applications; it’s possible you’re already thinking of some that matter to you. But I think it’s important to see those as exceptions to the technology’s overwhelming bias toward failure. In fact, I think describing the technology as a thing that has failed can be helpful in elevating what does actually work about it. Heck, maybe it’ll even help us build a better alternative to it.

In other words, approaching “AI” as failure opens up some really useful lines of thinking and criticism. I want to spend more time with them.

Right, so: why do I think it’s a failure? Well, there are a few reasons. The first is that as a product class, “AI” is a failed technology. I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.

This failure can’t be separated from the staggering social, cultural, and ecological costs associated with simply using these services: the environmental harms baked into these platforms; the violent disregard for copyright that brought them into being; the real-world deaths they’ve potentially caused; the workforce of underpaid and traumatized contractors that are quite literally building these platforms; and many, many more. I mention these costs because this isn’t a case of a well-built technology failing to find its market. As a force for devastation and harm, “AI” is a wild success; but as a viable product it is, again, a failure.

And yet despite all of this, “AI” feels like it’s just, like, everywhere. Consumers may not like or even trust “AI” features, but that hasn’t stopped product companies from shipping them. Corporations are constantly launching new LLM initiatives, often simply because of “the risk of falling behind” their competitors. What’s more, according to a recent MIT report, very nearly all corporate “AI” pilots fail.

I want to suggest that the ubiquity of LLMs is another sign of the technology’s failure. It is not succeeding on its own merits; rather, it’s being propped up by terrifying amounts of investment capital, not to mention a recent glut of government contracts. Without that fiscal support, I very much doubt LLMs would even exist at the scale they currently do.

So. The technology doesn’t deliver consistent results, much less desirable ones; what’s more, it extracts terrible costs to not reliably produce anything of value. It is fundamentally a failure. And yet, private companies and public institutions alike keep adopting it. Why is that?

“This is where I think approaching “AI” as a failure becomes useful, even vital. It underscores that the technology’s real value isn’t in improving productivity, or even in improving products. Rather, it’s a social mechanism employed to ensure compliance in the workplace, and to weaken worker power.”

Read the rest at: ethanmarcotte.com/wrote/against-stocking-frames/

Page 12

altman & swooper - #03: Standards

By Johan C. Brandstedt

Image/Comic: Illustration of Sam Altman and a robot (swooper) entering an art museum.

altman: A spectacular day for some data capture, eh, swooper? swooper: AFFIRMATIVE! Museum employee: One adult? altman: No need, academic visit. swooper: YOU HAVE BEEN TARGETED FOR INSPIRATION.

Image/Comic: Illustration of a museum employee catching altman snapping pictures of art, with a no pictures sign on the wall.

Museum employee: Pardon me, sir. Did you not see the sign? Sir? altman: It doesn’t say “No ROBOT photography.” Museum employee: ALL photography is implied, sir.

Image/Comic: Illustration of altman standing by a box that says “TOP SECRET” and has a logo similar to OpenAI’s, with a broom and a bucket on top with a crude face drawn on it. There is also a sign with a picture of a crying robot and the words “GATEKEEPERS MAKE BOT SAD.”

altman: I see! You must have not gotten the memo. We wrote it all very clearly on our blog yesterday: Everyone who doesn’t want their property in our product must follow the new standard. Museum employee: “New standard” sir?

Image/Comic: Illustration of the museum employee carrying altman and his box robot out of the museum.

Museum employee: I’m afraid I’m going to have to ask you to leave, sir. altman: This is a travesty! You’re impeding progress!! China will eat us alive!

Image/Comic: Illustration of altman dressed as a burglar holding a crowbar and wearing a mask, sitting on swooper (his bot), outside of the art museum.

altman: God I’m sick of Luddites. They just don’t get the tech. swooper: “NO ROBOT BURGLARY” SIGN: NOT DETECTED

Small text: Any similarity to real people and trademarks is just a temporary glitch.

Page 13

altman & swooper - #02: Alignment

By Johan C. Brandstedt

Image/Comic: Illustration of an office building for “Frendl.AI”, with a sign featuring a logo similar to OpenAI’s logo.

altman: Here at Friendly AI we’re very serious about making ours a safe and just global takeover.

Reporter: Is that so?

Image/Comic: Illustration of a reporter interviewing altman, with swooper behind him and the words PEW! PEW! repeated over and over by shadowy figures in front of computers.

altman: In here, for instance, we run advanced simulations to keep mining robots safe from space debris in the year 3000.

swooper: A MOST PRESSING CONCERN.

Image/Comic: Illustration of altman speaking to a reporter, while swooper flails about and an alarm (AROOGA!) sounds off from the bot.

altman: We also run a “clean data” program to ensure fair and equal treatment of all people.

swooper: PROBLEMATIC LANGUAGE DETECTED! “COMPUTATION-CHALLENGED PERSONS”, PLEASE!

altman: Right. “People” digital as well as biological, of course. And how does that work? Follow me outside.

Image/Comic: Illustration of altman and the reporter’s silhouette, gloved hands handling dangerous materials (skull and warning symbol on a container).

altman: We pay these new grads six figures to stuff crates full of broken glass, depleted uranium and pure unfiltered hate…

Image/Comic: Illustration of altman, the reporter and swooper watching drone bots carry packages away to some destination.

altman: …then chuck it all overseas to traumatize Kenyans at $2 an hour.

Reporter: …what about working conditions over there?

altman: Oh, that’s all outsourced.

CONDITIONS: IF [WORK] THEN EAT [];

Small text: Any similarity to real people and trademarks is purely incendiary inflammatory incidental.

Page 14 & 15

Image: Collage of Peter Thiel, Mark Zuckerberg, Marc Andreessen, Elon Musk and Sam Altman juxtaposed with power tool advertisements and the word “TOOLMEN.”

Toolmen

By Mandy Brown

Even the best weapon
is an unhappy tool,
hateful to living things.
Lao Tzu & Le Guin, Tao Te Ching

“Artificial intelligence” is not a technology. A chef’s knife is a technology, as are the practices around its use in the kitchen. A tank is a technology, as are the ways a tank is deployed in war. Both can kill, but one cannot meaningfully talk about a technology that encompasses both Sherman and santoku; the affordances, practices, and intentions are far too different to be brought into useful conversation. Likewise, in the hysterical gold rush to hoover up whatever money they can, the technocrats have labeled any and all manner of engineering practices as “AI” and riddled their products with sparkle emojis, to the extent that what we mean when we say AI is, from a technology standpoint, no longer meaningful. AI seems to be, at every moment, everything from an algorithm of the kind that has been in use for half a century, to bullshit generators that clutter up our information systems, to the promised arrival of a new consciousness—a prophesied god who will either savage us or save us or, somehow, both at the same time. There exists no coherent notion of what AI is or could be, and no meaningful effort to coalesce around a set of practices, because to do so would be to reduce the opportunity for grift.

What AI is is an ideology—a system of ideas that has swept up not only the tech industry but huge parts of government on both sides of the aisle, a supermajority of everyone with assets in the millions and up, and a seemingly growing sector of the journalism class. The ideology itself is nothing new—it is the age-old system of supremacy, granting care and comfort to some while relegating others to servitude and penury—but the wrappings have been updated for the late capital, late digital age, a gaudy new cloak for today’s would-be emperors. Engaging with AI as a technology is to play the fool—it’s to observe the reflective surface of the thing without taking note of the way it sends roots deep down into the ground, breaking up bedrock, poisoning the soil, reaching far and wide to capture, uproot, strangle, and steal everything within its reach. It’s to stand aboveground and pontificate about the marvels of this bright new magic, to be dazzled by all its flickering, glittering glory, its smooth mirages and six-fingered messiahs, its apparent obsequiousness in response to all your commands, right up until the point when a sinkhole opens up and swallows you whole.

Read the rest at: aworkinglibrary.com/writing/toolmen

Page 16

Image: A woman wearing a brain cap with wires plugged into a machine labeled CHATGPT, with an OpenAI symbol and a surveillance camera watching the woman. The woman holds a blank piece of paper that the machine is giving her, and these words are under the image: “will automatically prescribe a course of action while you wait!”

Bot Bosses

By Bart Fish

The machines started out as assistants, but it wasn’t long before they became the bosses. AI workers weren’t prompting models for answers; they weren’t asking questions at all. Human jobs were mostly labeling and slopping, either training or feeding the machines. Most felt special to have a job, especially such an important one. Their bots often encouraged and recognized the humans’ hard work.

Sometimes workers would talk about the old days, when there were thousands of different kinds of jobs. When you could do whatever you wanted. Be an artist, a musician, an actor or even a writer. It all seemed kind of silly; what could a human create that was better than a bot? One bot is better than a hundred humans, everyone knows that. With that, they would cheer for the jobs they had and express gratitude for what the bots would let them do.

Page 17

Image: Photo of a collection of various tools. Source photo by Lachlan Donald.

Are “AI” systems really tools?

By Jürgen Geuter (tante)

Tools are not just “things you can use in a way”, they are objects that have been designed with great intent for a set of specific problems, objects that through their design make their intended usage obvious and clear (specialized tools might require you to have a set of domain knowledge to have that clarity). In a way tools are a way to transfer knowledge: Knowledge about the problem and the solutions are embedded in the tool through the design of it.

ChatGPT isn’t designed for anything. Or as Stephen Farrugia argues: AI is presented as a Swiss army knife, “as something tech loves to compare its products to, is something that might be useful in some situations.“

This is not a tool. This is not a well-designed artifact that tries to communicate clear solutions to your actual problems and how to implement them. It’s a playground, a junk shop where you might eventually find something interesting. It’s way less a way to solve problems than a way to keep busy feeling like you are working on a problem while doing something else.

Read the rest here: tante.cc/2025/04/27/are-ai-system-really-tools/

Back Cover

Image: A woman stretching into two versions of herself. Photo by Nick Fancher.

THE DAIR INSTITUTE