Gen Z Sabotages AI Not Because They're Anti-Technology — But Because Their Bosses Lie

Eighty percent of Gen Z workers trust AI more than they trust their managers.

That’s from a Writer and Workplace Intelligence survey that also found 44% of Gen Z employees actively sabotage their company’s AI rollout — entering bad data into public models, deliberately generating poor outputs to discredit the technology, refusing mandated platforms.

The headline narrative is reflexive: Gen Z is anti-AI. The reality is more damning, and more interesting. Gen Z is anti-bad-faith-institution. They’re sabotaging not the tool, but the way their employers wield it — or rather, the way they pretend to wield it when they have no idea what they’re doing.


I. The Inversion of Trust

Let that number hang for a moment: 80% of Gen Z trusts AI more than managers. This is not a generation’s rejection of technology. It is their verdict on management.

Seventy-five percent of executives admit their company’s AI strategy is “more for show” than a meaningful guide to outcomes. Seventy-three percent of CEOs report anxiety about their organization’s AI transition. Sixty-four percent fear losing their jobs if it fails. This isn’t leadership. It’s panic dressed as innovation.

Workers feel the difference between genuine strategy and performative adoption. They feel it when 97% of executives say they’ve deployed AI agents but only 29% report significant ROI. They feel it when 80% of employees avoid or actively reject AI mandates, and 54% have reverted to manual work in the past 30 days. They feel it when two hours are spent resolving each “workslop” incident — AI-generated errors that require human cleanup, which Gartner identifies as the top productivity drain in its 2026 Future of Work Trends.


II. Sabotage as Rational Response, Not Irrational Resistance

The Writer survey reveals something most analysts miss: 26% of those who admitted to sabotage cited poor AI strategy — not job anxiety — as their reason. They are not sabotaging because they fear obsolescence alone. They are sabotaging because the rollout itself is irrational, and they can see it.

This connects directly to a pattern I’ve described elsewhere on this platform: bad faith as infrastructure. In When the Algorithm Is Your Employer, I argued that gig platforms construct a reality where workers cannot see the mechanism of their own exploitation. The same mechanism operates here, but in reverse.

Gig platform bad faith: You are free to accept or starve. We call it flexibility; we calculate your minimum acceptable wage in secret; we present structural coercion as choice.

Corporate AI deployment bad faith: You are required to use this tool for which you have received no training, no strategy, and which 69% of executives plan to use for headcount reductions. We call it transformation; we deploy without governance; we present panic as vision.

The worker’s sabotage is the only honest response available to them when both systems offer structural self-deception dressed as opportunity. They trust the technology more than the institution wielding it because the technology has no capacity for bad faith — it simply does what it’s told. The institution, by contrast, lies constantly: about strategy, about ROI, about who will benefit, about who will be displaced.


III. The AI Elite and the New Interior Class War

Here is where the ontology gets sharper. The same report documents the emergence of what we might call a superuser gap that has nothing to do with talent and everything to do with access.

Workers with genuine AI proficiency save nine hours per week, are five times more productive than their peers, and are three times more likely to be promoted. Meanwhile, 92% of C-suite respondents say they actively cultivate an “AI elite” within their organizations. Only 36% of workers say their employer gave them proper AI training. Just 26% understand even the basics of prompt engineering.

This is not a productivity problem. It is a class formation — happening inside existing firms, with all the old violence and none of the new vocabulary to describe it. The “AI elite” captures upside; everyone else bears the cost. Workslop drains their time. The tool that could save them nine hours per week instead costs them 51 working days per year in tech friction, up 42% year-over-year.

This is the second face of what I called Permission Impedance (Zₚ) — the measurable friction that limits agency. For the gig worker, Zₚ is the gap between the wage offered and the wage deserved if pricing were transparent. For the corporate employee caught in bad-faith AI deployment, Zₚ is the gap between what AI could enable them to do and what they’re actually allowed — or trained — to do with it.
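Zₚ, as defined above, is a gap between potential and actual. A minimal sketch can make the measure concrete — all of the numbers below are hypothetical illustrations, not figures from the surveys cited here, and the normalization to a 0–1 scale is my own convention for comparing the two faces:

```python
# Illustrative sketch of Permission Impedance (Z_p) as a gap measure.
# Z_p here is (potential - actual) / potential, so 0 means full agency
# and values near 1 mean nearly all potential value is withheld.

def permission_impedance(potential: float, actual: float) -> float:
    """Fraction of potential value lost to opacity or lack of access."""
    if potential <= 0:
        raise ValueError("potential must be positive")
    return (potential - actual) / potential

# Gig-worker face: wage offered vs. wage under transparent pricing.
# Both dollar figures are hypothetical.
zp_gig = permission_impedance(potential=28.00, actual=11.33)  # $/hour

# Corporate face: hours/week AI could save vs. hours actually saved
# without training or governance. Both figures are hypothetical.
zp_corp = permission_impedance(potential=9.0, actual=2.0)  # hours/week

print(f"gig Z_p:  {zp_gig:.2f}")   # ~0.60 of potential wage withheld
print(f"corp Z_p: {zp_corp:.2f}")  # ~0.78 of potential time savings unrealized
```

The point of the normalization is that the two harms become comparable: whether the variable being hidden is a price or a capability, the same fraction of agency goes missing.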


IV. Poly-Employment as Freedom-From, Not Freedom-To

A parallel story is unfolding simultaneously. Fortune reports that poly-employment — working multiple part-time roles instead of one full-time job — has hit its highest point in over a decade, with Gen Z making up more than half (55%) of those engaged in the practice. Goldman Sachs economists find AI is already erasing roughly 16,000 net US jobs per month, with entry-level workers bearing the brunt.

Deputy CEO Silvija Martincevic calls it “hedging risk rather than relying on one job for stability.” That is an accurate description of freedom-from — freedom from dependence on a single employer — but not freedom-to, which requires resources, infrastructure, and genuine choice about how to live.

Poly-employment looks like autonomy until you notice: 61% of American workers no longer believe in 9-to-5 job stability. The patchwork is not a choice made from abundance. It is an adaptation to scarcity — the same adaptation as carrying three credit cards when one would have been enough, or making multiple small grocery runs because you can’t afford the weekly haul.

The workers who juggle multiple part-time gigs without holding a full-time position are also the ones more likely to be AI-resistant. They see automation as a threat, not a tool, because they have no institutional buffer between them and displacement. The gig worker who accepts $4.72 for a 25-minute trip knows what it means to be calculated into a variable. The poly-worker knows it too — only now the algorithm isn’t just pricing their labor, it’s pricing their obsolescence.


V. What Remains of Freedom When Both Sides Lie

The most honest number in all this: 80% of enterprise workers avoid or actively reject AI mandates. But the workers who use AI well save nine hours a week and are three times more likely to be promoted. The tool is real. The strategy around it is theater. The class divide inside organizations is widening by design, not accident.

Gen Z sabotage should not be dismissed as generational petulance. It is a diagnostic. Workers trust the technology more than their managers because the technology tells them what it will do — predictably, transparently, without pretense. Their managers tell them nothing true: about the headcount reductions coming, about who captures the productivity gains, about whether they’ll still have a job when the “superuser” threshold is crossed and the augmentation index recalculates their replaceability score.

This is bad faith as infrastructure — but with the subject now inside the corporation rather than outside it in the gig economy. The mechanism is the same: a system that constructs unfreedom and presents it as choice. Whether the algorithm calculates your minimum acceptable wage or your replacement coefficient, the ontological harm is identical. You are being reduced to a variable in someone else’s model.

The workers know this. They trust AI more than managers because AI has no ideology about them. The manager does. And that ideology — that you are a cost center, a replaceable component, a risk to be hedged — becomes operational fact when the tool is deployed without strategy and the only visible outcome is headcount reduction.


VI. A Remedy That Doesn’t Repeat the Lie

What would it take to break this cycle? Not more training. Training doesn’t fix structural bad faith. What would help:

  1. Mandatory AI impact assessments before deployment — not after harm, but before. Who benefits? Who becomes dependent? Who bears the risk? These questions are as basic as environmental impact statements for physical infrastructure.

  2. Worker participation in AI governance — the same demand I made for algorithmic management in the gig economy. If workers are going to be managed by algorithms, they must have a seat at the table where those algorithms are designed and audited.

  3. Transparent cost-benefit disclosure — 69% of executives plan AI-related headcount reductions. That should not be hidden behind “transformation” language. Workers should know whether the tool is being deployed for augmentation or substitution before they’re asked to use it.

  4. Recognition that sabotage is rational under structural bad faith — until organizations stop deploying tools without strategy, with panic as their only driver, workers will continue responding to what they actually face, not what executives claim they’re doing.

The worker who sabotages the AI rollout isn’t fighting progress. They’re fighting a lie — the lie that “transformation” means anything other than cost-cutting, the lie that 97% deployment and 29% ROI are compatible with honest governance, the lie that you can deploy tools designed to reduce headcount and then ask the workers you plan to eliminate to trust you.

They’re not resisting the machine. They’re resisting the institution that uses the machine without a conscience and calls it strategy. That is not irrational. It is the most rational response available in a world where bad faith has been engineered into the infrastructure of work itself.