The generative AI con
Mar 23, 2025
Good non-fiction writing will alter your perspective. Great writing will go further – revealing some deep truth that was always there, but for whatever reason hadn’t been acknowledged. Something akin to a deep breath on a turbulent day.
This week I had the pleasure of reading such a piece: The Generative AI Con by Ed Zitron. You should read it. It’s an opinionated bear case for generative AI. This piece, along with my rapidly declining tech portfolio, awakened the AI bear within me. I’m now going to do the same for you.
Ed’s approach to enlightening his readers to the con of AI is that of the Japanese Zen master: direct and harsh. He systematically refutes any real benefit to generative AI, and his critical prose is opinionated, aggressive and full of righteous zeal. I think I agree with exactly half of it.
In this article, we’re going to take a more nuanced approach. We’re also going to explore the arguments against AI where I think Ed’s piece doesn’t go far enough. By the end of this article I think you’re going to believe generative AI is a con, but in a very different way than Ed’s original piece suggests.
Here’s what we’ll cover:
Whether the industry is economically sustainable.
Whether AI products deliver value that warrants the industry's valuation.
How the press and other actors have influenced our thinking on Gen-AI.
The (very) questionable sustainability of LLMs
Around a third of Ed’s argument for why generative AI is a con concerns the questionable sustainability of the business models of OpenAI, Anthropic, and the other businesses behind what we call ‘foundational models’.
Putting aside the hype, bluster and ungodly amounts of money, I can find no evidence that any of these apps are making anyone any real money. Microsoft claims to have hit "$13 billion in annual run rate in revenue from its artificial intelligence products and services," which amounts to just over a billion a month, or $3.25 billion a quarter… $3.25 billion a quarter is absolutely pathetic. In its most recent quarter, Microsoft made $69.63 billion, with its Intelligent Cloud segment…
Ed gives us a litany of similar stats that make a compelling case as to why these large language model (LLM) companies are effectively Ponzi schemes: Ponzis that rely on the broken VC incentive structure, a structure that survives only on the consistent marking-up of rounds, regardless of any clear path to profitability. It's difficult to disagree here.
To pull out a few such statistics: Anthropic is on track to lose $2.7 billion this year—child's play compared to OpenAI's forecast $5bn loss. The rhetoric delivered by the charismatic-in-an-autistic-way CEOs of these companies is that compute costs will drop dramatically while usage surges, eventually leading to profitability. Ed counters the argument that usage will surge in several ways; the most obvious critique focuses on API usage (a key driver of usage forecasts):
The Information — who I do generally, and genuinely, respect — ran an astonishingly optimistic piece about Anthropic estimating that it'd make $34.5 billion in revenue in 2027 (there's that year again!), the very same year it’d stop burning cash. Its estimates are based on the premise that "leaders expected API revenue to hit $20 billion in 2027," meaning people plugging Anthropic's models into their own products. This is laughable on many levels, chief of which is that OpenAI, which made around twice as much revenue as Anthropic did in 2024, [barely made a billion dollars from API calls in the same year](https://www.wheresyoured.at/oai-business/).
But Ed doesn't go far enough. He misses a key point about the falling cost of inference that renders the whole conversation moot. The premise that cheaper inference will improve profitability rests on the idea that LLMs are – to at least a degree – price inelastic. For those who didn't partake in Econ 1, a short lesson: price elasticity refers to how demand for a product changes with its price. Prescription drugs are inelastic. People will pay a lot for insulin because they value healthy organs over a healthy bank balance. A McDonald's cheeseburger, however, is highly elastic; if the price goes up, consumers will turn to a competitor, or god forbid, cook themselves a nutritious meal. Generally, when there are easy substitutes for a product, demand for that product is highly elastic.
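To make the Econ 1 lesson concrete, here's a minimal sketch of the calculation. The numbers are made up purely for illustration, not real LLM pricing or usage data:

```python
def price_elasticity(q_old: float, q_new: float, p_old: float, p_new: float) -> float:
    """Price elasticity of demand: % change in quantity demanded / % change in price."""
    pct_change_quantity = (q_new - q_old) / q_old
    pct_change_price = (p_new - p_old) / p_old
    return pct_change_quantity / pct_change_price

# Hypothetical numbers only: a provider halves its price per million tokens
# from $10 to $5, and monthly usage jumps from 1bn to 3bn tokens.
e = price_elasticity(q_old=1e9, q_new=3e9, p_old=10.0, p_new=5.0)
print(f"elasticity = {e:.1f}")  # -4.0

# |e| > 1 -> elastic demand (cheeseburgers, interchangeable LLMs):
#            price cuts get competed away rather than fattening margins.
# |e| < 1 -> inelastic demand (insulin): costs can fall while prices hold.
```

The only point of the sketch is the classification: if LLMs sit in the elastic bucket, cheaper inference gets passed straight through to price rather than to profit.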
As I write, there are at least five large language models that I would consider, for the most part, interchangeable. Sure, I prefer Claude slightly over ChatGPT and Gemini, but if the cost of ChatGPT halved, I would likely move to that model. There’s an unwritten law in capitalism: ‘your margin is my opportunity’. This means that any company with a sufficiently high margin is walking around with a target on its back. Any disruptor that can enter the market and offer a similar product at a lower price point should, cronyism aside, win.
For example, if OpenAI sees its cost of inference cut in half and doesn't pass those savings to customers, a competitor will inevitably offer a similar product at half the price. This new entrant wouldn't even need to match OpenAI's quality—a service that's 90% as good but significantly cheaper would still siphon away OpenAI's customers. OpenAI understands this dynamic, which explains its aggressive push for AI regulation (read: cronyism). The equation is simple: regulation creates barriers to entry, reducing competition and protecting margins. The uncomfortable truth is that falling inference costs won’t improve the profitability of AI model providers. Instead, a drop in costs will attract more competitors, increasing price elasticity and compressing margins. Only the consumer will benefit.
So, barring some massive change, LLMs won’t ever be profitable. Fine. But what does this really tell us about the sustainability of generative AI as a whole? Maybe not a whole lot.
Ed's implicit argument here is that because the foundational models don't seem to be accruing value, generative AI is a con. But this would be akin to saying that because nobody made any money off fibre-optic cables, the internet is a con. History is littered with examples where a fundamental layer of a new technology accrued no value. In fact, it's not the exception; it's the rule.
For your consideration, an extract from Warren Buffett's 1985 shareholder letter on the topic of shutting down Berkshire Hathaway's textile business:
Over the years, we had the option of making large capital expenditures on the textile operation that would have allowed us to somewhat reduce variable costs.
Each proposal to do so looked like an immediate winner. Measured by standard return on investment tests, in fact, these proposals usually promised greater economic benefits than would have resulted from comparable expenditures in our highly profitable candy and newspaper businesses … But the promised benefits from these textile investments were illusory.
Warren then goes on to explain how better and better mills resulted in a textile-industrial arms race between him and competitors. You had a devil's bargain: either outlay the capital cost for the new equipment and reduce your margins due to the amortisation of said cost, or keep your old equipment and reduce your margins due to worse efficiency compared to competitors with their shiny new mills. Either way, the outcome was the same—lowering margins for all involved. You can see the effect of this in the numbers; the textile industry, since the 1900s, has been in a steady margin decline despite dramatic technological improvements.
Contrast this to Brooks Running, another Berkshire company. Brooks benefits from shuttleless looms operating at 600 picks per minute, high-speed winders, digital monitoring systems that reduce waste, and all the other incremental and step-change improvements to the textile mill. It benefits from all this without directly owning any mills. Brooks Running focuses on design and marketing, creating beautiful products that sell at a premium price point. The fact that its operating costs fall with every milling breakthrough only benefits it; unlike the mill owners caught in a price war, Brooks can maintain its premium pricing while enjoying improved margins from these manufacturing efficiencies.
The pattern repeats with refrigeration. The lasting, highly profitable companies built on this technology aren't the ones manufacturing the refrigerators, but the brands that benefit from the existence of the numbingly-cold-beverages-whenever-you-want-them supply chain. Consider the contrast: GE's industrial-food segment (which included refrigeration) operated at around 4% gross margins in 2013 before they sold the division—presumably at least partly for this reason of embarrassingly small margins—while Coca-Cola, a company whose entire business model depends on refrigeration, consistently delivers 50%+ gross margins. A margin built on brand and distribution, not technology.
The point here is not to argue that because the value of generative AI isn't accruing to the foundational layer, it will eventually accrue somewhere else. I'm simply stating that just because the foundational models aren't profitable doesn't mean we can rule out the entire industry as a con. We can't throw the baby out with the bathwater.
For that determination, we need to look at what matters most – whether end users are actually getting real value from these technologies.
Are LLMs providing end-user value?
It’s hard not to get lost in the hype. We have Nvidia's near-mesospheric market cap and the eye-watering VC rounds for foundational model companies; market froth is everywhere. But beneath the private (and increasingly public) market ponzinomics, this excitement hinges on a simple premise: that real humans, at the end of the chain, are using AI applications and deriving meaningful value from them. According to Ed, that’s precisely what isn’t happening—at least not in any significant way.
"I get that there are people that use LLM-powered software, and I must be clear that anecdotal examples of some people using some software that they kind-of like is not evidence that generative AI is a sustainable or real industry at the trillion-dollar scale that many claim it is."
This is a thorny topic. First, because without any kind of Lindy effect—no long-lived, time-tested AI companies—we don’t have much to go on. That’s just how the speculation phase works: people bet there will be a payoff, extrapolating from what they see in the technology right now. We lack hard data.
I’m about to make an incredibly lazy argument. And I hate to do it. But it’s useful for illustrating the point. During the dot-com bubble, there were very few great use cases for the internet—but that didn’t mean the bubble itself was wrong, just premature. If anything, you could argue that the dot-com bubble represented peak clarity—where the market, in its frenzy, correctly saw just how much the internet was going to reshape the world.
This argument is lazy because it’s the exact same one crypto enthusiasts (and indeed any new speculative crowd) use to dismiss any suggestion that crypto was (or is) a bubble: “We’re just early”, “The use case will arrive”, “Look at this Bill Gates clip where everyone laughs at the idea of the internet, crypto is the same!” The question isn’t whether AI is in a bubble but whether it’s more like the internet or more like, well, crypto. Let’s see if we can find out.
In product building, we have a quick and dirty way to gauge whether a product is actually useful or if—far more likely—we’ve built something no one really wants. The test is simple: ask users, “How disappointed would you be if you could no longer use this product?”
It’s a clever question. A Mungerism, an inversion. Instead of asking the usual “How much do you like this product?”—which invites lukewarm enthusiasm and polite dishonesty—it flips the frame. By considering the absence of the product rather than its presence, we get a much sharper read on its true value.
You’ve probably seen this question before—popping up in some annoying survey: “We want your feedback!” As a product builder, what you want is for a portion of your users to answer “very disappointed.” That means, at least for some people, you’ve built something that truly matters. It’s not a perfect science, of course, but its simplicity leaves nowhere to hide. When framed this way, the question cuts straight to the core of whether your product is genuinely needed or just another nice-to-have.
I put this question to my friends via an Instagram poll. I got about 20 responses, with around half the people who answered saying they would be very disappointed. I then followed up the poll with the same question, but replacing AI with crypto. One person answered very disappointed. If anything, this poll tells us people find AI meaningfully more useful than crypto, if we needed any proof of that.
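For what it's worth, here's a minimal sketch of how that kind of poll gets scored. The responses below are illustrative stand-ins roughly matching my totals, not the raw Instagram data:

```python
from collections import Counter

# Illustrative stand-in responses to "How disappointed would you be if you
# could no longer use <product>?" -- not the raw poll data.
ai_responses = ["very"] * 10 + ["somewhat"] * 6 + ["not"] * 4
crypto_responses = ["very"] * 1 + ["somewhat"] * 3 + ["not"] * 16

def very_disappointed_share(responses: list[str]) -> float:
    """Share of respondents answering 'very disappointed'."""
    return Counter(responses)["very"] / len(responses)

print(f"AI:     {very_disappointed_share(ai_responses):.0%}")      # 50%
print(f"Crypto: {very_disappointed_share(crypto_responses):.0%}")  # 5%
```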
Not exactly a revelation, but still—a data point. Another notable thing about generative AI is that it isn’t hard to get answers from people as to what specifically they find useful about the tool. What is hard is making a judgement as to whether the use cases unlocked by AI are on the same scale as what AI claims to be. To quote Ed:
"I get it. I get that there are people that use LLM-powered software, and I must be clear that anecdotal examples of some people using some software that they kind-of like is not evidence that generative AI is a sustainable or real industry at the trillion-dollar scale that many claim it is."
Let’s dig into this a bit. First, we have to confront the wording. Do people ‘kind-of like’ AI, or do people love it – do people need it? The very-disappointed crowd have at least answered to the effect that, for them, AI is a step above ‘kind-of like’. I myself answered very disappointed, and I can easily give you a recent use case that will show you why.
Last month, I was shocked by a light switch in my flat. The jolt knocked me unconscious for two minutes. I was rightly scared. My fiancée is about the size of a Samsonite check-in case and would surely have taken considerably more damage from the current’s discharge than I did. I got an emergency electrician in, and after about a month of work (and yours truly getting knocked unconscious one more time for good measure), they found the issue—live wires in our new-build flat. Gross negligence from the builder. I was billed £1,500 in total for all the work done.
I am a person who is allergic to admin tasks. Just the thought of having to fill out a five-minute form fills me with some kind of fear that I can only describe as existential dread. In pre-AI days, there is no way I would have considered going through the rigmarole of finding out who is responsible for this and holding them accountable. My ineptitude for admin is costly for two reasons. First, it costs me real money in potential compensation. But secondly, and more importantly, it prevents the shithouses who built my home from being held accountable.
But this is not the pre-AI days, and I now attack bureaucratic processes with practical zeal. I fed all the docs—including invoices, the scathing electrician’s report, and my insurance documents—into GPT-4. Within 30 seconds, it had drafted a comprehensive approach: contacting the building’s insurer, the exact email to send, and the precise process to follow. This took less than an hour of my time and actually makes admin fun. I get a real sense of fuck-you-wrath every time I send a perfectly constructed argument to some clueless, middle-aged underwriter who probably imagines some amphetamine-fuelled legal nerd on the other end of the line, swiftly dismantling his half-hearted responses with legal-jargon-infused rhetoric. It’s looking like there’s a good chance I’m going to recover both the cost of the repairs and a healthy chunk of compensation.
This example is useful because it highlights, in a very real way, how AI has not only saved me hours of work but also enabled me to take action I would never have even considered before—netting me thousands in the process. But is this kind of use case enough to justify AI’s trillion-dollar valuation? For that, I think we need to look at a very different purported benefit of generative AI.
At the heart of the AI bull case—and the justification for those trillion-dollar valuations—is the belief that AI will make us smarter and unlock extraordinary discoveries for humanity. The media has our minds positively reeling at the prospect of biotech, robotics, or even fundamental physics breakthroughs made possible by sufficiently advanced AI. I think this is a whole lot of hot air. Hot air generated by biased parties (we’ll get to that later).
You can call generative AI a lot of things, but smart isn’t one of them. And the idea that simply throwing more compute at LLMs will make them significantly smarter—or that this path will somehow lead to AGI—is laughable. To understand why, consider the difference between human intelligence, which is knowledge-creating (the ability to understand), and a gorilla’s intelligence, which is purely mimetic (the ability to copy). That gap is only a few Petaflops—far less than what the largest LLMs already operate with. If we are ever to discover AGI, it won’t be through brute-forcing compute; it will be through understanding how human intelligence actually works and finding a way to replicate that in computing.
My fiancée is in the business of knowledge creation—she’s currently writing her PhD. She also happened to answer only somewhat disappointed when asked if she’d miss AI, and maybe not even that. For her, AI is more of a moral support tool, a backup dancer rather than a lead. She’ll feed it her work and ask for critique, but really she’s just looking for reassurance that she’s on the right track, using the validation as motivation to continue. Occasionally, she’ll prompt it when stuck on an argument, but the suggestions AI generates usually lack nuance or real understanding, so she discards them.
The best developers I work with experience the same issue. They use AI for some things—mostly as a glorified Google replacement to look up obscure syntax—but they aren’t letting it handle any real coding. Contrary to popular opinion, doing so actually slows them down. And for my UX designer, AI is useless.
This reveals the key point about AI’s fundamental limitation: it can help us do what we already know how to do, or at least what somebody already knows how to do. But generative AI will never create anything new. And I reject the idea that it makes us more intelligent. There’s more than a semantic difference between increasing throughput and accuracy and actually being more intelligent. If anything, from what I’ve seen, AI can have the opposite effect—people take the lazy option, blindly following AI’s recommendations instead of thinking for themselves. In software development, I’ve seen firsthand the consequences of this. Developers who know better still end up committing egregious security mistakes simply because they let a dumb LLM take the wheel. It’s not intelligence—it’s just automation with a veneer of authority, and the moment you stop questioning it, you invite real risk.
So whether AI can truly unlock the trillion-dollar scale that many claim—as Ed suggests it won’t—is perhaps still up for debate. But you might think that everything I’ve just written at least dismantles Ed’s claim that AI only provides products people kind of like. Maybe. But maybe not.
I think the real genius of Mr. Zitron’s argument in The Generative AI Con lies in his framing of something far deeper: when it comes to AI, we may not even be able to trust our own judgment about how valuable we find it. Ed only flirts with this idea; we are going to take it to its logical conclusion.
The press and the panopticon
The central issue with AI is that it is impossible to analyse its usefulness objectively. In short, everything I’ve written about AI’s usefulness, personal anecdotes included, has been given from the perspective of a highly unreliable narrator. To understand this, take Ed’s point on AI in the media.
In short, most of the coverage you read on artificial intelligence is led by companies that benefit financially from you thinking artificial intelligence is important and by default all of this coverage mentions OpenAI or ChatGPT.
Ed mainly makes this point to illustrate that the user numbers of AI—specifically ChatGPT, as the darling of the media—are inflated because of the insane amount of free press it has received. This is part of the problem but, again, doesn’t go anywhere near far enough. The coverage’s impact on getting people to first interact with these tools is dwarfed by the impact it’s had on shaping how people view their actual usefulness.
Before exploring this, let’s acknowledge the different actors who influence the discourse on AI. The press, of course, but also VCs, who in turn influence startups, as well as the startups themselves, and let’s not forget the new media—content creators who talk about AI, usually with some kind of course to sell you on its benefits. To make this real, consider my own experience as a humble, just-over-the-cusp-of-initial-product-market-fit startup founder who employs a small handful of developers.
I am walking to the office and listening to the All-In podcast. Yes, sorry, these are some of the worst people in the world, but their positions do grant them interesting perspectives. On the show, I hear Jason Calacanis talking about how he is seeing tonnes of startups get to millions in revenue with just a couple of employees because they are using AI (the absurdity of drawing such causal inferences in such a complex system is lost on Jason, and on me at the time—our minds crave simplicity). So what do I do? I tell my developers: right, we should all be using AI, or at least investigating it heavily. I pay for them all to have access to the best AI tools. I log into Twitter, and my feed is practically bursting at the seams with people who claim to have created ‘killer apps’ using AI coding tools like Cursor. I play around with these tools myself, and while I don’t manage to create a killer app, I do get software generated in front of my eyes just by pushing a few buttons—it certainly feels like I’m leveraging some incredibly powerful and useful technology. I then take a board meeting with my VCs, who are now practically only interested in AI startups, so of course, being the wily product manager I am, I come prepared—a roadmap full of features that leverage AI to deliver end-user value. I get a pat on the back for my initiatives.
But this isn't all. Along with this 'AI is magic' carrot, there is the stick. 'You need to learn these AI tools or you are going to be left behind'. This narrative is almost, if not more, pervasive. The argument goes something like this: AI is leverage for your highest value skills. It's going to act as some kind of consequence-free exogenous ingestion of anabolics. Those who utilise it will be stronger and more intelligent; those who don't will be left behind. I should know this as it's a narrative I have personally delivered. It's also one I do, to an extent, believe. But that doesn't make it true – it is a story, a narrative about the future, that is as much based on the rhetoric in our society as it is about my 'own personal beliefs' (if there even is such a thing).
The point I'm trying to make here is that just by being part of our society, you are being told a story about AI which may or may not relate to its underlying value. This story will not only make you more likely to adopt AI, it will also likely make you feel differently about the tools. Narrative can manipulate something that's 'just ok' into 'this is amazing, this is the future'.
I hate to bring in unnecessary abstract post-modernist theory, but as my betrothed is a sociology PhD I can't help it; it's become part of my worldview. Foucault was a French philosopher known for his work on biopolitics. Biopolitics is a theory of how modern power works and how our discourse is created. In the past, kings and lords would tell us what to think. Nowadays, it's more complex. Knowledge is constructed by different powerful actors, from governments to societies to thought-leaders. These groups don't report on a subject like AI – they construct its meaning, and power accrues to those who propagate the discourse. The VCs benefit from a positive AI discourse, and the startups benefit from going along with it and creating their own. What seems on the surface like multiple different industries viewing AI from different perspectives is actually a tightly woven construction of AI designed to benefit those who are bullish on it. It's not about truth; it's about power.
This is important, because it brings into question how many of the people who answered my survey 'very disappointed if I could not use AI' (including myself) are talking about the technology they are actually using today, and not, say, the promise we have been given of where the technology is going, or the fear of 'getting left behind' by not adopting it.
But hasn't this always been the case? Aren't we always influenced to think better of a new technology because the powers that stand to benefit are constructing a positive narrative around it? I don't think so. I was very young when we first got the internet, probably around 10 or so, but I clearly remember the popular narrative being 'this is slow and annoying'. The Bill Gates interview mentioned earlier clearly demonstrates the skepticism of the masses towards that technology. Imagine the same interview today with Sam Altman—it's hard to imagine the interviewer being so consistently skeptical that AI has any use whatsoever. With the internet, the technology had to prove itself against the popular narrative. With AI, it's the exact opposite; being skeptical goes against the normative view.
This shift in narrative dynamics matters because it changes the burden of proof. In the past, a new technology had to earn its place through demonstrated utility. With AI, the assumption of inevitability is so strong that skepticism is viewed as ignorance or Luddism. If the discourse around AI is this tightly controlled, then how much of our perception of its value is real—and how much is just manipulation?
Trusting judgement
We can argue all day about whether the business of AI is sustainable. We can argue all day about whether the use cases that AI provides are worth the trillions of dollars in value. The difficulty in taking either side in these arguments is that the debate can easily fall into 'well it just needs time to prove itself'.
What we can argue about is the rhetoric around AI today. This can be observed directly. And what Ed gets magnificently correct is that beneath the avalanche of AI content and near-religious fervour around its usefulness, there is something deeply unsettling. What is unsettling is how seamlessly the AI narrative has colonised our collective imagination. The real con is not that generative AI doesn't work - it's that, notwithstanding a serious effort to detach from the rhetoric, we've lost the ability to objectively evaluate to what extent it works for us.
We live in a world where the techno-optimists hold the power. They shape our rhetoric. And they are 100% incentivised to have us believe generative AI is the second coming. A combined narrative from the press, the thought leaders, the capital allocators and, of course, the companies. Perhaps the greatest achievement of the 'AI revolution' isn't the technology itself, but the creation of a discourse that makes any level of skepticism feel like heresy.
In the end, the question isn't just whether AI delivers value, but whether we can even trust ourselves to know the answer. That uncertainty might be the most honest conclusion of all.