The creeping tech-bro Taylorism of the AI age.

FW Taylor was an ass

For those not familiar with Frederick Winslow Taylor: the industrialist was one of the key figures of the second industrial revolution and is credited as the father of ‘Scientific Management’. His legacy was essential to the mass production of goods in the 20th century. He was also, frankly, an ass.

Taylor was born in 1856 to a privileged Quaker family in Philadelphia. He spent time at Midvale Steel Works in the 1870s, and his fascination with optimising industrial processes went on to inspire Sloan and Ford to create and perfect the production line, driving the US to the forefront of industrial economies.

One of the most startling demonstrations of the effectiveness of Taylor’s methods came at the Exposition Universelle of 1900 in Paris, a world’s fair held to showcase modern industrial innovations. In the Bethlehem Steel booth at the American Pavilion, Taylor installed a lathe that continuously machined a 10-foot-long solid steel cylinder, running around six times faster than the standard process of the day and astonishing the French audience. Taylor’s dedication to breaking down and analysing processes led to unheard-of improvements in speed, efficiency and predictability, setting new standards for anyone wishing to compete.

Taylor’s concept of scientific management focused in particular on the separation of the intellectual work of planning from the physical work of execution. Taylor argued that managers — a more intellectual caste of individuals — should assume responsibility for planning the work in detail, determining the best methods, and providing clear instructions to the workers, who would then execute the tasks accordingly. It was an extension of economist Adam Smith’s pin factory example (which introduced the concept of the division of labour) but with extra shades of classism.

“In the past the man has been first. In the future the System will be first.”

Frederick Winslow Taylor, The Principles of Scientific Management

With his focus on optimising the system over the person, Taylor — a man described by his most respected biographer, Robert Kanigel, as a ‘fanatical bully’ — seemed to forget that there was any inherent value in workers who weren’t lucky enough to be anointed with the special manager gene. Taylor’s focus was on the ultimate optimisation of tasks. Managers — the ones with the brains — would break a process into steps and divide the work. Everyone else — the workers — turned a wrench, wielded a shovel, or moved a single part from place to place.

“In our scheme, we do not ask for the initiative of our men. We do not want any initiative. All we want of them is to obey the orders we give them, do what we say, and do it quick.”

Frederick Winslow Taylor, The Principles of Scientific Management

If it wasn’t enough for Taylor to describe workers as mindless automatons, he went even further, unabashedly dehumanising them as if their position in life were a genetic predisposition:

“The idea, then, of taking one man after another and training him under a competent teacher into new working habits until he continually and habitually works in accordance with scientific laws, which have been developed by some one else, is directly antagonistic to the old idea that each workman can best regulate his own way of doing the work. And besides this, the man suited to handling pig iron is too stupid properly to train himself.”

Frederick Winslow Taylor, The Principles of Scientific Management

Those words sound more like something we could attribute to a villain in Middle-earth* than like the way most companies currently think about management and the value of their workforce.

Taylor’s impact on industrialisation was revolutionary, but it took decades to correct. The foundations of effective counters to the dehumanisation of workers were laid by the work of Mary Parker Follett, the Human Relations school under Elton Mayo, W. Edwards Deming and the Toyota Production System. We won’t go into those now, but we have these more enlightened thinkers to thank for the relative freedom and creativity we enjoy in modern workplaces.

And here’s my concern — just as Taylor once divided labour into “thinkers” and “doers”, we’re seeing history repeat itself, with AI cast in the role of ultimate “doer”. And in doing so — just as Taylor did — we risk squeezing human judgment and creativity out of the modern workplace.

From Taylorism to AI-Taylorism

The prevailing, superficially strategic approach to AI tools among some ‘innovative’ executives — demanding that we implement AI to optimise local, low-value problems and thereby replace expensive human employment — seems to be designed by people who have been mis-sold its advantages (at best), or who are mildly delirious golf-playing edgelords (at worst).

The risk is that we’re adopting an oversimplified view of almighty AI, tricking ourselves into believing that it’s more powerful than it is. Executives seemingly believe the hype that operational efficiency will arrive with the sending of a prompt, appearing short-term bullish and long-term bearish on AI. They manage to hold the delicately nuanced view that miraculous rewards will be reaped by removing ‘stupid’ humans from lower-skilled roles, while the clever managers higher up the hierarchy will somehow be unaffected.

At the moment, many systems are pretty inefficient when mapped at the level of a single component or task — that’s what the idiot promise of DOGE was all about. And I get it — a GenAI tool that responds to customer complaints is probably more polite, and orders of magnitude cheaper, than a human who has twenty other complaints to deal with. If you focus narrowly on the technical aspects of the system and seek to optimise each of these local components by replacing them with AI tooling, you might convince yourself in the short term that you’re saving money or becoming more competitive.

Systems — especially complex systems like businesses — are adaptive and dynamic. They contain weird, arcane bits of accumulated wisdom, ways to cope with edge cases and respond when things aren’t perfect.

New technology isn’t like that — not until it’s old technology embedded in a well-worn system. Old, clunky systems with a human in the loop may be unattractive and expensive, but they tend to have evolved over time to be resilient. Replacing some customer service reps with a rapidly implemented chatbot is the opposite of resilient: it introduces brand-new potential failures that you just haven’t experienced yet, and you will have just fired the people who knew why the weird, expensive, illogical process was important.

And you know the worst part? Those systems that worked, that were a bit clunky and imperfect, were designed by humans, who are still — despite the wildest fever-dreams of tech CEOs — orders of magnitude more gifted than even the largest of language models. Our reasoning powers are the culmination of two million years of evolution and we can be powered by an apple rather than a nuclear power station. We’re talking about replacing these hugely gifted organic intellects with wildly less capable inorganic computation.

So in this new world of AI-uber-alles, what happens when things go a bit sideways? When things don’t follow the golden path? What happens when the cloud services running your AI agents fail? What happens when a process needs to change because the environment has changed? What happens when the tech is obsolete and needs to be replaced? Who changes the AI agents to respond, and how quickly can they respond? Who employs them and trains them?

Our old systems, albeit partly composed of expensive, fallible humans, have quite a lot of innate resilience, because people are enormously adaptable — especially when they care and are motivated to take ownership of problems.

We recently saw the chaos at Newark airport when neglected technical systems failed to receive the maintenance dollars they needed to run safely, compounded by budget cuts at the FAA instituted by DOGE. It’s inevitable that this type of systemic technical failure will appear within the rushed, poorly maintained and poorly understood AI systems being hurriedly implemented today.

In Taylorist terms, it’s the thinkers who design the new AI system — the engineers and architects lucky enough not to have been replaced yet by some new AI tool that does their job. The workers are anyone unlucky enough to be doing a job simple enough for an AI agent to take over (and that proportion will keep increasing). There will, almost undoubtedly, be a crisis of multiple failures, where the implementation of AI point solutions leads to invisible failure points that all start to go pop once the people who kept replacing the sticky tape have left the building. It’s perhaps no surprise that we’re already starting to see signs of buyer’s remorse, with one survey finding that 55% of companies that made AI-inspired layoffs now regret the decision.

VC investment in ‘AI-first’ startups rose 62% year-on-year worldwide in 2024 alone. In today’s age of AI hype, we don’t have one single Taylor, but a chorus of venture-backed tech bros crying out in unison that their AI is the ultimate optimisation — the ultimate customer service agent, meeting recorder, virtual assistant, grammar corrector, or presentation maker. Taylor was driven by his own weird desire to optimise systems at the expense of humans; today, it’s VC cash that is inciting a feeding frenzy of AI funding and creating a tsunami of FOMO among investors and CEOs alike.

The accepted wisdom among this neo-Taylorist class of executives now appears to be that if you’re not replacing humans with AI, you’re missing out, and the stock market is seemingly backing up that assumption. I don’t really mind the AI venture bubble going pop — as it inevitably will when the market concentrates around a few major players and the charlatans are revealed — but I do mind that a lot of people will lose their jobs because they were seen as doing superficially low-value things, when all the while they were holding the ship together.

Generative AI is insanely powerful. Insanely useful. Enormously empowering. We can’t and shouldn’t ignore it. But we’re at risk of forgetting just how incredible human intellect, creativity and adaptability are. We need to look not at Taylor, but at the researchers who came after him, for inspiration. In the 1950s, two British researchers, Eric Trist and Ken Bamforth, introduced the concept of ‘socio-technical design’, an approach that sets itself in direct opposition to Taylorism. Trist himself proposed that a key tenet of socio-technical design was that “[t]he work group, not the individual performing a single role, is central to the design of the work”. This is perhaps the most important thing to remember if someone proposes an AI project to increase efficiency and improve processes — the best people to design the work that the AI will help automate are the people currently involved in that work. Putting people at the centre of the design of any tech-enabled system is how you win, and that is no less true when AI is involved.

If someone suggests technology — including AI — could help people do their jobs better, embrace the opportunity to investigate the new options available. But if someone asks you to “do some AI” so you can cut costs, you should pause and reflect on who is really going to design the system.
