No Acoustic Guitars in Silicon Valley
- Karl Johansson
Or
Techno-Pangloss, The Next Big Thing, and Silicon Valley’s Loss of Pragmatism
Silicon Valley is good shorthand. The term represents American high technology in all its glory, and in all its folly. It is handy to have a term for enterprises as varied as Meta, Amazon, Netflix, and Microsoft, given how important they have become to corporate America and the impact they have had on Anglophone culture. But there has not been a lot of silicon in the Valley for decades now. It is fully Software Valley these days, and if current trends continue it might become AI Valley, to the delight of people like Elon Musk and Sam Altman.
Artificial Intelligence, or AI, is the next great innovation if you ask Valley techies and financiers, and a largely useless form of software that requires insane amounts of energy to produce mediocre results based on intellectual property theft on the largest scale in human history if you ask detractors. To be clear up front: I am one of the detractors, and this essay is my attempt to explain how Silicon Valley lost its way, going from changing the world with microchips and the internet to incessantly talking about how its next big thing will change the world, no matter how impractical that thing seems to outsiders.
Before we get started, I need to address the title. I know that guitars may seem incidental to AI and Silicon Valley, but as a hobbyist guitarist I think they serve as a useful way to contrast Silicon Valley techies’ view of the world with how normal people see things. And believe me when I say there is some daylight between the two. To illustrate, here are a few extracts from famous Silicon Valley venture capitalist Marc Andreessen’s 2023 ‘Techno-Optimist Manifesto’; note that when he says ‘we’ he means techno-optimists:
"We believe in greatness. We admire the great technologists and industrialists who came before us, and we aspire to make them proud of us today."
"Our civilization was built on technology. Our civilization is built on technology. Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential."
"We believe that advancing technology is one of the most virtuous things that we can do."
Andreessen sounds more like a techno-Pangloss than an optimist to me: inventing the next technology and bringing it to market will manifest the best of all possible worlds. Indeed, his manifesto reads more like a religious text than an ideological one, the way it focuses on what techno-optimists believe rather than on the ways he wants society to change. What it preaches is an incessant need to hustle towards the next technology, and to abandon that which came before.
By Andreessen’s logic I should probably quit the guitar and focus on making music through FL Studio or some kind of AI music model. But in reality technology is not the centre of the universe; it is simply a collection of tools to make life easier. I argue that the reason Silicon Valley is so focused on pushing AI, instead of developing something more practically useful, is that technology has ceased being a means to an end and has become the end itself. This essay explores how and why that happened, first through a whirlwind tour of the Valley’s history, then by explaining how changes in society and culture created the search for the Next Big Thing. We will then explore the Silicon Valley model of history, before I present my case for why AI will not revolutionise the world. We’ll end with a conclusion, and a defence of the acoustic guitar.
***
The name Silicon Valley comes from the 1960s, the era when the legends and myths of the California technology miracle were established, along with revolutionary new modes of thinking and working. The “Silicon” in Silicon Valley refers to the material used to make microchips, pioneered in California by Fairchild Semiconductor. Before 1960 the rudimentary computers which existed were first mechanical and then based on vacuum tube technology, but etching circuits onto silicon made computers smaller, faster, and cheaper, i.e. generally better, a trend which has continued ever since. Fairchild got its production off the ground in no small part due to help from the US government, which wanted to use the chips for Minuteman ballistic missiles.
In the decades since Fairchild made the first silicon chip the industry has grown to global scale and the world has experienced a revolution in computing technology. Things have changed not only in terms of the technology itself but also in the process of making chips: nowadays most chips are designed in California but produced in Taiwan by a company called TSMC. That split is emblematic of how Silicon Valley has become less and less involved in the practical, tangible side of technology and more and more involved in the theoretical and digital; a theme of this essay.
While Silicon Valley is the birthplace of the silicon wafer chips which have radically transformed the world, when we think of Silicon Valley today we mostly think of software. Sure, there are hardware giants like Apple, and to a lesser extent Microsoft and Google, left, and even the firms which have outsourced production of chips still sell physical products. But most of Silicon Valley’s successes nowadays have been in software rather than hardware, a trend which has accelerated as the pace of change in innovative hardware formats such as smartphones has decreased, and new big hardware categories like VR headsets and smart glasses have failed to take off.
To be clear, software is immensely useful. Far be it from me to rag on software when so much of my life is built around it. My job could not be performed without Google’s search engine, and I write my blog posts in Word, often while listening to music on Spotify. Software has changed the world, and has contributed positively to countless lives. But it has also enabled some of the Valley’s worst excesses, and deepest follies. I argue that the move from primarily focusing on hardware to primarily focusing on software is one of the main drivers of technology hype, which in turn has not only created real societal harms, like social media’s detrimental effect on girls’ and young women’s mental health, but also created the warped incentives which spawned frauds like FTX and WeWork.
The microchip is both the beginning of the Valley technology industry as we now know it and the genesis of its infamous partner: the venture capitalist. Venture capital is just as important to Silicon Valley as chips and mobile apps, and has been an active participant in shaping how American high technology evolved. Originally, the concept came from the US government’s ARPA project: the Advanced Research Projects Agency, today known as DARPA. As mentioned earlier, Fairchild may never have gotten off the ground if the US government hadn’t funded it to put silicon chips in nuclear missiles. The agency was set up in 1958, after the Soviets launched Sputnik, with the goal of pushing the frontiers of science and technology. The core idea was to fund private efforts to solve national issues, often relating to defence, through high technology. The list of achievements which can be credited to DARPA is long, including laying the groundwork for mRNA vaccines, the internet, and GPS, to name a few. With such an illustrious track record it is no wonder that the methodology spread to private companies.
In the context of government, as with the original DARPA, venture capital is a useful and productive form of finance. Surveying widely and trying to find small companies with innovative ideas to solve big problems, defined from the top down for the benefit of society, makes a lot of sense. But modern venture capital often lacks a non-monetary goal, which was the crucial element of DARPA. DARPA did not invest in crypto companies or the metaverse, because those technologies would not help defend the US. But private venture capital firms like Sequoia Capital and Andreessen Horowitz did, because their goal is not altruistic; it is to make a return. In a very real sense Silicon Valley venture capital distorts financial markets by redirecting capital from productive uses to merely profitable ones.
To understand how, allow me a detour to Keynes’ beauty contest. John Maynard Keynes’ analogy that the stock market is like a beauty contest is now almost a hundred years old, but it remains a great tool for understanding finance. Keynes’ idea is that valuing companies is like voting in a beauty contest. If you get paid for guessing who will win a beauty contest, the smart move is not to vote for the contestant you find the most attractive but for the contestant you imagine most people find the most attractive. And once you realise this, you can go one better and vote for the contestant you think other people will think other people find most attractive.
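To make that regress concrete, here is a minimal sketch in Python of the beauty contest in its textbook form, the ‘guess two-thirds of the average’ game. The starting guess and the number of levels are illustrative assumptions on my part, not anything from Keynes:

```python
# Keynes' beauty contest in its textbook form: everyone picks a number,
# and whoever is closest to two-thirds of the average wins. A level-0
# player guesses naively; a level-k player best-responds to a crowd of
# level-(k-1) players. The naive starting guess of 50 is an arbitrary
# assumption for the example.

def level_k_guess(k: int, naive_guess: float = 50.0) -> float:
    guess = naive_guess
    for _ in range(k):
        guess *= 2 / 3  # best response to the previous level's guess
    return guess

for k in range(6):
    print(f"level {k}: guess {level_k_guess(k):.1f}")
# Prints 50.0, 33.3, 22.2, 14.8, 9.9, 6.6 -- each extra level of
# "what do others think others think?" moves the answer further
# from anyone's honest first impression.
```

Nothing in that calculation refers to any contestant’s actual merits; every level is a bet on other people’s bets. Swap contestants for startups and you have the VC version.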
Over the decades venture capital started to lean into the reflexive hall of mirrors that is public financial markets. DARPA invested in a startup because it had an innovative idea for solving an engineering problem, whereas Andreessen Horowitz invests in a startup because it thinks the wider financial markets will value that startup at a premium in the future. This creates a business model of style over substance. And given how much clout venture capitalists have as tastemakers and far-sighted futurists, they end up serving as expert judges in Keynes’ beauty contest.
It's not hard to see the adverse incentives at play here. The mere fact that a famous VC has invested in a tech startup is seen as validation of the startup, and VCs know that their involvement in a firm can lend it legitimacy. This is not to make any unfounded accusations that venture capitalists intentionally and deliberately invest in lemons to later sell to public markets; that would be fraud. But the line can become fuzzy.
For example, Uber never was and never will be a revolutionary innovation in urban mobility; it always was and always will be an iterative improvement on taxis. If VCs are as smart as they would like us to believe, it should have been obvious that Uber’s plan of growing the userbase quickly by subsidising both drivers and riders would never result in a profitable business model, and that raising prices once it had a dominant market share would never solve its profitability problem. Entering the taxi market is easy, so if Uber were to raise prices after achieving a dominant market share, other companies would simply start to compete, like Uber did, and the market price would settle close to where it was when Uber entered the market. On the one hand, Uber founder Travis Kalanick had a powerful incentive to hype up his firm, as he got paid when he sold shares to VCs; on the other hand, the VCs had a powerful incentive to continue the hype, as they got paid when they sold shares to the public markets. Does the fact that hundreds or thousands of people have made great profits from investing in Uber since it was publicly listed have any bearing on whether Kalanick’s and the VCs’ actions were ethical?
While the ethics here are fascinating, this is a blog about politics and economics, and the interesting thing about early VC super-successes is how they shifted thinking and strategising. In online video gaming you hear constant talk about the ‘meta’, shorthand for the currently most powerful strategies or tactics. Metas exist outside the realm of video games too, of course, but the rules change a lot more frequently in League of Legends than in football, so the meta is a more relevant concept in League of Legends than in the Premier League. With interest rates at rock bottom following the Great Recession, which for technical accounting reasons made future profits more attractive, the tech hype machine started accelerating, and the business meta fully shifted towards financial valuations over tangible profits. In 2013 the VC Aileen Lee coined the term ‘unicorn’, meaning a startup valued at $1 billion or more. The fact that such startups were becoming common enough to warrant a dedicated term is a good indication of just how ingrained the new meta was.
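For readers wondering about those ‘technical accounting reasons’: the standard tool is discounting, where a profit expected in year t is worth profit/(1+r)^t today. Here is a minimal sketch, with invented numbers, of how low rates flatter far-off profits:

```python
# Illustrative only: how the discount rate changes what a distant,
# speculative profit is worth today. The $1bn-in-ten-years cash flow
# is invented for the example.

def present_value(cash_flow: float, rate: float, years: int) -> float:
    """Discount a single future cash flow back to today."""
    return cash_flow / (1 + rate) ** years

for rate in (0.10, 0.01):
    pv = present_value(1_000_000_000, rate, 10)
    print(f"discount rate {rate:.0%}: worth ${pv / 1e9:.2f}bn today")
# At 10% the promise is worth ~$0.39bn today; at 1% it is worth ~$0.91bn.
# Rock-bottom rates make businesses whose profits lie far in the future
# look far more valuable -- which is exactly where startups live.
```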
With new vocabulary, a new business model, and a mature financial ecosystem to support it, Silicon Valley spawned a digital gold rush where the best and the brightest could go from broke to billionaire on the back of software companies in a few short years. Silicon Valley changed the world by inventing the smartphone while the traditional banks had an acute meltdown. If you were chasing business success in the 2010s, San Francisco was the place to be. With the new face of 21st-century success came a new culture lionizing innovation and revolutionizing things, and a cult of speed developed. No one had time for gradual expansion; things in the Valley should move at the pace of photons travelling through fibre optic cables.
The focus shifted from using technology to make new or better products to the technology itself becoming the product. Your ability to invent new technologies is an obvious metric for how innovative you are, and since innovative new businesses get high valuations even if they don’t make a profit, the direct monetary gains and social prestige of inventing something skyrocket. Even if that something is not necessarily useful now, chances are that you will be rich by the time the business fails, and you might even be bought up by someone, meaning that the viability of the business is no longer your problem.
Again, it is paramount to emphasise the deleterious effects of this culture of speed. If the big payout you angle for by starting a company comes at the earliest twenty years from now, you act in a completely different way than if you expect your business to be bought up or closed down three years from now. Well-laid plans and gradual expansion are out; rushing to be the first to do something, anything, becomes the best way to win. If you can get some tech cred by doing something digitally or through an app, do it, even if it is not optimal for the end user. Bending the truth a bit becomes more justifiable. After all, you will have the resources to fix any issues once you get the VC funding.
But the gold rush becomes hard to sustain as apps are developed for all manner of uses, and soon low-hanging fruit like being the “Uber of X” or the “Amazon of Y” runs out. That is when Silicon Valley started the hunt for the Next Big Thing. In fairness to the Valley, it had managed to produce several NBTs: the silicon chip, the internet, the smartphone, and the app revolution. But as expertise shifted from the physical to the digital, and breakthroughs and revolutions became expected outcomes in the innovation capital of the world, research and development became less focused on the uses of technology and more focused on the technology itself.
A perfect example of this is the Metaverse, the gonzo idea that the future would be all about living in digital 3D environments via headsets. As Dan Olson explains in “The Future is a Dead Mall”, the idea that businesses would migrate into the Metaverse was dead on arrival, because migrating from 2D to 3D drastically decreases information density. It takes far less time to look at a webpage showing 30 cars than to walk around 30 cars at a virtual dealership, and since you can’t test drive the Metaverse car, shopping in the Metaverse is a strictly worse experience than a website or real life. Still, the Metaverse joins the dubious pantheon of technologies Mark Zuckerberg has gone all in on in his quest to regain his status as a far-sighted futurist. Crypto, AR glasses, AI pins, NFTs, and the Metaverse are all technologically pretty neat; real advances in what we can do with computers. Just not very useful for the average person.
Silicon Valley has lost its pragmatism, and that means it is now unable to innovate for real. Many of the leaders at the tech firms and the VCs made their careers in the post-2008 world where software was king, and where high valuations were the scorecard. The generation which built the first silicon microchips has long since retired, and the new generation is so focused on what is technologically advanced that it fails to consider whether what it is building makes pragmatic or economic sense. The way to understand why AI is the NBT when it hasn’t proved its use cases, and why the Metaverse and NFTs filled that role before AI, is to consider the way the Valley conceives of history.
Silicon Valley seems to assume that history is a linear march of progress where new is always and everywhere better; the Sid Meier’s Civilization view of history, if you will. In this view, the fact that the fax displaced the telegraph before itself being displaced by email is a microcosm of how history works, and, self-flatteringly, this view makes those who discover new scientific principles, invent new machines, and bring new technologies to market some of the most important figures in history. Remember Andreessen’s manifesto: “We admire the great technologists and industrialists who came before us, and we aspire to make them proud of us today.” AI, then, is not a technology which may or may not take off, and may or may not make sense for a business to adopt, but the new, more advanced form of computing destined to overtake and render obsolete current modes of computer usage. It is a view of history which makes technology inevitable.
In “The Future is a Dead Mall” Dan Olson describes the people pushing the Metaverse as seeing it as inevitable, summing the discussion up in the line: “the Metaverse cannot fail, you can only fail to build the Metaverse.” That line of thinking is of course not limited to the Metaverse. AI is now just as “inevitable” as the Metaverse was back when “The Future is a Dead Mall” was released. Replace ‘Metaverse’ with Big Data, cryptofinance, AI, or whatever the Next Big Thing currently is and you have a cypher for understanding how the Valley thinks. It is this lens of inevitability which explains how deranged “big bets on AI”, like SoftBank and OpenAI’s Stargate project to invest $500 billion in AI infrastructure, can seem anything approaching logical. Burning hundreds of billions on AI data centres, which will consume so much power that tech firms are starting up nuclear power plants, makes no sense for an unproven technology, but makes perfect sense if it is the inevitable next step.
Again, AI only makes business sense under some very particular assumptions, assumptions which may be taken as given if you subscribe to the Sid Meier’s Civ theory of history. Those assumptions appear to go unchallenged at the summit of American finance and tech, and it is worth asking why. My view is that there is a confluence of incentives, structures, and subculture which, combined with biased media coverage of the technology industry, effectively forms a cultural bubble. I want to start by expanding on my claim that media coverage of the Valley and tech is biased, as it forms a cornerstone of the cultural context that produced the Civ view of history.
In a 2021 episode of the podcast Capitalisn’t the hosts interview Kara Swisher about the problem of access in academia and, more importantly for our purposes, in journalism. Swisher describes how a lot of journalists covering tech firms seemed to want to be working in tech, and says that she received job offers from all the big Silicon Valley firms at one point or another. This may sound innocuous, but think of the incentives it creates. Tech firms tend to pay very well, and there has long been social prestige in working for them. If you know that you might go work for one of the big tech firms after your time in tech journalism, you want to avoid being too critical, in order not to close any doors. It becomes like the revolving door between the Department of Defense and defence contractors, or the Treasury and high finance. The fact that most journalists covering Silicon Valley will never get an offer to work for Google is irrelevant, as long as there are enough journalists getting those jobs to keep the dream alive, and by extension the pulled punches. And even if you aren’t angling for a job, you just can’t write a story if no one in the industry will go on record.
Journalism has a tremendous responsibility as the medium between the masses and the powerful, and I don’t mean to imply that the media has been lying about tech. But it appears to have been biased, as in having an unintentional, slight preference for the industry’s point of view. The first real Silicon Valley scandal I remember was the Cambridge Analytica affair. Maybe there were many before it that I was too young to remember, but if I had to pick a starting date for the long and gradual process of disillusionment with capital-T Tech I’ve seen on the internet, I’d say it was on the eve of Cambridge Analytica. Yet even though journalists have gotten harder on Big Tech as a group, and bolder in investigating Big Tech’s misdeeds, they still often defer to the industry when it comes to describing technology, which is another form of bias.
Dan Olson described this phenomenon in ‘The Future is a Dead Mall’, saying: “It’s not the job of the writers at Women's Wear Daily to bust crypto wide open.” And it is easy to think that technical know-how is of the utmost importance when discussing complicated and jargon-heavy topics like crypto, the metaverse, and AI. But at least part of the reason these topics become so jargon-heavy in the first place is to intimidate outsiders. I have no idea what retrieval augmented generation means, so maybe I should just defer to whatever Sam Altman thinks? The problem with letting the Tech and VC types be trendsetters and oracles is that they always have skin in the game. Sam Altman gets his money and social prestige from OpenAI’s perceived success. If OpenAI fails to make the breakthroughs he has promised, he has the most to lose, so we can never take his opinions, views, or predictions as neutral. He needs AI generally and OpenAI specifically to be useful and successful, and that should colour the way we interpret his statements.
In a great essay in the Economist about the New York Times, James Bennet describes the classically liberal values journalism should aspire to by writing: “It used to bug me that my editors at the Times assumed every word out of the mouth of any person in power was a lie. And the pursuit of objectivity can seem reptilian, even nihilistic, in its abjuration of a fixed position in moral contests. But the basis of that old newsroom approach was idealistic: the notion that power ultimately lies in truth and ideas, and that the citizens of a pluralistic democracy, not leaders of any sort, must be trusted to judge both.” In stark opposition, some technology writers seem to assume that, at least when it comes to how tech works and how it will develop, every word out of the mouth of any person in power in tech is true.
To see this in action, look at the big tech companies’ developer conferences, where they announce new features to developers and other tech types, and their capital markets days, where they announce their plans to bankers and other finance types. There are no Q&A or press sessions at WWDC or Google I/O. No one is allowed to ask Elon Musk why he has wrongly asserted, literally every year since 2016, that full self-driving is coming to all Teslas next year, trust me bro. And don’t even think about asking Mark Zuckerberg how going “all in” on the metaverse has gone. Or, to quote Gary Marcus: “Gary Marcus wishes that just once a prominent person spouting these AGI-is-imminent predictions would agree to public, moderated debate.” The fact that these tech types are virtually never seriously challenged over grandiose claims can give them an aura of infallibility. Credulous reporting makes Tech tycoons seem smarter than the rest of us, and the fact that knowledgeable insiders insist on jargon even when speaking to the masses perpetuates that perception.
At the same time as the media has become less interested in calling out overconfident (or lying) tech tycoons, the tech being developed has become more difficult to fact-check. Going from hardware to software makes independent verification a lot more difficult; fact-checking claims about an AI model is far trickier than claims about, say, a car or a GPU. If the head of Chrysler claims that his newest car can go from 0-100 in 4.8 seconds it is easy to verify or falsify that claim, but how do you independently verify a claim like Dario Amodei saying in early 2025 that “AI will write 90% of the code”? Even if a publication or an independent expert comes to a different conclusion, Amodei has much more room to manoeuvre in making a counterargument than the hypothetical Chrysler boss would have.
This effectively means that hype is easier to manufacture in software than in hardware. And if, as I said earlier, the business meta has moved towards hype, then focusing on software is the obvious play. But genuinely useful software is often either already available or highly specialised. Word processors, email clients, and HTML websites have been around longer than I have, and there are practically no regular office processes which could be mediated through software that aren’t already. At the other end of the spectrum, there are plenty of difficult and time-consuming processes for which no tailored software has been designed, because the effort is too great relative to the potential profits.
Hardware, on the other hand, is difficult, time-consuming, and crucially very expensive to develop. Doubly so when you are trying to build a product for which there is no established market. Consider how it took over a century for electric cars to become mainstream alternatives to internal combustion engine cars. The science had been there almost since Nikola Tesla’s time, but the engineering, product development, and manufacturing were unable to compete until the 2010s. Again, this is how mundane and iterative improvements, like ordering a taxi through an app instead of through a phone call, become framed as a revolution in urban mobility. Actually coming up with a new hardware platform which could honestly be described as a revolution in urban mobility would take years, and involve lots of engineers and possibly city planners. But writing some software with a fancy user interface and spinning a yarn is quicker and cheaper. Getting 100,000 people to buy a car based on new technology is a herculean feat; getting 100,000 people to download a new taxi app is not.
Silicon Valley’s software is often a sort of backwards innovation, where someone discovers something cool, at which point everyone suddenly becomes an expert on it and writes think pieces about how it will be the greatest invention since the wheel, despite the fact that no one has actually figured out what problems the invention will solve, or, in the parlance, its “killer app”. Consider cryptocurrencies. Blockchains are a genuinely sound technical solution to the double-spend problem in a decentralised currency. The thing is, regular old fiat currency does not require a solution to that problem, as it is not decentralised, and the drawbacks inherent to blockchain-based money, like being slow and energy-intensive, are significant enough that it does not stand a serious chance of competing with government-issued fiat currencies. Despite having the solution to the double-spend problem, blockchain-based money is effectively a dead end, practically if not technologically.
Still, for years crypto was the NBT, and there were, and still are, literally thousands of people trying desperately to find a problem for which crypto is the solution. Just imagine how absurd that workflow would be in hardware. I’m sure plenty of scientific discoveries only turn out to be major breakthroughs decades after the fact, when an engineer has figured out how to make use of the discovery. But the discoverer probably did not start a company and try to sell their invention while baselessly claiming that it would be good for something, they just hadn’t figured out what exactly.
Things like Stargate, the massive AI infrastructure programme the Trump administration announced in conjunction with OpenAI and SoftBank, only make sense as an effort to keep the hype going. These tech geniuses are betting big on AI, so while there is no killer app now, the fact that they are so sure a killer app is coming must mean it is true.
The AI hype is driven by faulty logic and linear extrapolation from current developments. The fact that GPT-based generative AI has improved exponentially since 2022 does not imply that it will continue to improve exponentially. Dario Amodei’s prediction that Artificial General Intelligence (AGI), supposed to be as smart as or smarter than a human, will arrive by 2027 can only come true if improvements are destined to continue at the current incredibly fast pace. But in reality technologies tend to have ebbs and flows in their development. Consider human flight. The Wright brothers flew the world’s first plane in 1903 and development continued at an extreme pace, to the point that just 66 years later humans landed on the moon! If in 1969 you had assumed that the pace of change would continue, you might have expected humanity to reach Alpha Centauri by 2025. But you and I both know that human flight has not really progressed meaningfully since 1969, in the sense that we have not explored any new celestial bodies. Indeed, one could make a case that human flight is less advanced today than it was in the 1990s, when a regular person could cross the Atlantic in half the time a current flight takes by flying on Concorde.
Technology adoption is not a simple function of whether the new technology is more advanced; it matters whether the new technology is safe, efficient, economical, or just generally preferable. Electronic locks have been available for years, and yet most people still use a metal key, because we are used to it and because it is cheaper. But Silicon Valley has lost that sense of pragmatism, lost the ability to see that inventions primarily need to be useful, not advanced. Silicon Valley’s view of history has no room for acoustic guitars; they should have died when the first electric guitar was born in 1932. And the ironic thing is that an unshakeable belief in the inherent goodness of technology actively serves to make technology worse.
Marc Andreessen’s version of techno-optimism devolves into techno-fetishism and zealotry. Just look at the frantic debates about AI in the year after OpenAI revealed ChatGPT. Will AI wipe out humanity? Is it even ethical to continue this research when there is, according to Elon Musk, a non-trivial risk that it might doom the world? That AI would not only prove useful but would become the most important technology of the 21st century was taken as given, never in any doubt. Of course it would change the world; the only question was whether it would be the catalyst for abject dystopia or celestial utopia.
Fast forward to today and the only apocalypse brought on by AI is one of spam and search-engine-bait websites. The boring reality is that most technology is an iterative improvement on what came before rather than the start of a new era. That is not to say there are no potentially game-changing hypothetical technologies, though. Gene editing, fusion power, and space elevators could meaningfully change the makeup of human society, in economic, social, and political terms, for good and for ill. If you want to really change the world, I think energy-positive fusion power is a far better bet than AI. But fusion, space elevators, and gene editing are tactile, physical technologies beholden to physics, chemistry, and biology. Why bother, when there are literally billions of dollars to be made in the comfortable domain of software development?
At the heart of the search for the Next Big Thing is a lack of confidence. The people who turned sand into machines capable of solving maths problems did not need external validation. They created something new, and laid the foundation for the world we are living in now. Andreessen’s disciples, like Sam Altman and Vitalik Buterin, think of themselves as heirs to the Silicon Valley legacy of innovation and progress, but their inventions will never have the impact of their imagined predecessors’. Yet Silicon Valley has built an economic system oriented towards monetarily rewarding audacious ideas rather than practical products, a social system which reveres founders and echoes hype through podcasts, newsletters, and conferences, and a media environment which does not tolerate dissent from sceptical journalists or common-sense assertions that the emperor has no clothes.
Ed Zitron over at Where’s Your Ed At has been a go-to source on AI for this essay, and he does an enviable job explaining in detail why generative AI is a dead end in terms of business, if not technology. In a post titled ‘There Is No AI Revolution’ he goes over the unit economics of OpenAI and comes to the conclusion us AI-sceptics would expect: that running very compute-intensive models which don’t seem to help people much in their everyday lives is bad business. What is remarkable about the figures Zitron has compiled is the scale at which these companies are unprofitable. The technological superpower that is OpenAI loses $5 billion on revenues of $4 billion! It spent $9 billion to make less than half that in revenue! And this is supposed to be the future of the tech industry?
The core reason why generative AI will never become big (read: profitable) business is that no one has yet figured out a decent value proposition for users. Zitron compiles the data on how many people use gen AI and how the companies offering such products are doing, and in all cases it seems that gen AI companies are losing money on each user. The path to profitability is then either to dramatically lower input costs, i.e. training costs for future models and compute intensity per query for current models, or to dramatically raise the price of gen AI products. The issue is that the quality of gen AI products cannot seem to support higher prices.
With its free version of ChatGPT, OpenAI manages some 600 million monthly active users across its web and app versions (though there is likely overlap between app and web users), of which only some 15.5 million are paying subscribers. If the other roughly 585 million monthly users do not find it worth paying $20 per month for access to better models, that suggests to me that the price elasticity of demand for gen AI products is very high; or in plain English: raising prices will make a lot of users quit the product.
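As a back-of-the-envelope check on those figures (the user counts are from the reporting above; treating subscriptions as the only revenue is a simplifying assumption on my part, ignoring API and enterprise deals):

```python
# Rough arithmetic on ChatGPT's free-to-paid conversion. User counts are
# from the essay; treating the $20 subscription as the only revenue is
# a deliberate simplification.

monthly_active_users = 600_000_000
paying_subscribers = 15_500_000
price_per_month = 20  # USD, the ChatGPT Plus tier

conversion_rate = paying_subscribers / monthly_active_users
subscription_revenue = paying_subscribers * price_per_month

print(f"conversion rate: {conversion_rate:.1%}")  # ~2.6%
print(f"monthly subscription revenue: ${subscription_revenue / 1e6:.0f}m")  # ~$310m
# If 97%+ of users already balk at $20 per month, raising prices is
# likely to shed users faster than it raises revenue per user.
```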
Tywin Lannister said in Game of Thrones that anyone who has to proclaim himself king is no true king. Fairchild’s engineers did not issue press releases proclaiming the birth of a new era; they had the confidence to believe that their product could speak for itself. Contrast that with how OpenAI has behaved since rising to prominence; Sam Altman and other AI experts make headlines every month with some new bold proclamation. Some six months after OpenAI launched ChatGPT, it published an article on its website called ‘Governance of superintelligence’ which states: “Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there.”
Dario Amodei, CEO of OpenAI’s main rival Anthropic, wrote an essay titled ‘Machines of Loving Grace’ in October 2024 which raises the pitch of AI hype by several octaves, fully venturing into AI millenarianism. Amodei is cocksure that AI will transform the world radically; the question is not if but how fast. He claims, admirably, to want to “avoid perceptions of propaganda”, “avoid grandiosity”, and “avoid ‘sci-fi’ baggage” in his essay, but then goes on to make hatstand assertions like this: “Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years. Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year.” Quite aside from the questionable maths, my immediate reaction is: yeah, sure, that’s neither propagandistic, grandiose, nor sci-fi-sounding.
This man is the CEO of one of the leading AI labs in the world. It is understandable to think that he knows what he is talking about; you would, say, expect the CEO of Fender to be more or less right in his predictions about trends in electric guitar sales. But despite hedges like “Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice”, Amodei is not coming to his conclusions on the basis of data. He is simply speculating about what the future might be, given his assumption that advanced AI will be like a “country of geniuses in a datacenter.” But current AI is nowhere near as intelligent as a single genius; LLMs are probability-based and incapable of critical thinking. The uncritical, assertive, millenarian certainty that this will change everything, which Amodei’s writing exemplifies, is what the relentless hunt for the Next Big Thing leads to.
What is AI when you strip away the hype? An experimental form of business software useful for summarising documents and drafting emails. The rest is hype, expectations, and dreams. AI the technology is less important than AI the vessel for the Next Big Thing: the thing that will prove America’s and Silicon Valley’s superiority, and the inherent, inextricable meaningfulness and moral goodness of working in tech.
Ed Zitron described AI as “the leaded gasoline of tech, where the boost to engine performance didn’t outweigh the horrific health impacts it inflicted.” It is a perfect description of the faulty assumptions at the heart of the supposed AI revolution. Is AI more advanced than previous software, and can it in some ambiguous way be argued to be better because of it? Yes. But is it better in the larger context of the computational power, and by extension energy, it takes to run the models? No. Narrowly viewed, AI, like leaded gasoline, is better; taken holistically, it is not. That failure to consider broader factors is how lost pragmatism manifests in practice.
***
Let’s return to Marc Andreessen’s Techno-Optimist Manifesto from the introduction, here’s another classic line:
"The economist William Nordhaus has shown that creators of technology are only able to capture about 2% of the economic value created by that technology. The other 98% flows through to society in the form of what economists call social surplus. Technological innovation in a market system is inherently philanthropic, by a 50:1 ratio. Who gets more value from a new technology, the single company that makes it, or the millions or billions of people who use it to improve their lives? QED."
I don’t know whether Nordhaus is correct or not, but even assuming he is, it is fallacious to conclude, as Andreessen does, that Nordhaus’ findings mean that “technological innovation in a market system is inherently philanthropic”, as that does not consider the externalities of the technologies, or the fact that not all technologies have much value at all. 98% of zero is still zero. And that’s before we get to contentious and dangerous technologies. Where are the philanthropic gains from the invention of landmines and mustard gas?
This next excerpt from the manifesto is my favourite due to its total and complete lack of self-awareness:
"Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable – playing God with everyone else’s lives, with total insulation from the consequences."
Who is more unaccountable than the tech executives who destroyed countless young people's self-esteem through social media algorithms, or who cynically prey on people's desire for intimate connection through dating apps designed for engagement? Which credentialed expert indulges more in abstract theories and is more disconnected from the real world than the VC thought leader or AI “expert” confidently asserting that 1,000 years’ worth of technological progress is possible through an AI he will develop soon? True techno-optimism is not a belief that technology will deliver humanity from wickedness. It is a belief that humanity's better nature can and does win, using technology for good rather than evil; splitting the atom to boil water rather than to burn cities.
Ed Zitron makes the case that Silicon Valley’s lack of innovativeness is rooted in being out of touch. If you are well paid and spend most of your time writing and reading emails or sitting in meetings, then a Large Language Model (LLM) which can draft and summarise emails, and give you the main takeaways from a recording of a Teams meeting, is quite useful. If you are a scientist, teacher, doctor, or carpenter, an LLM is a lot less useful. Valley-style techno-optimism is not just philosophically dubious, in the sense that it has few responses to the challenges I have outlined here. It also directly makes the scientists and engineers it reveres worse at producing useful new technologies. In a practical job you would see that, but in the beige box towers on Sand Hill Road practical concerns have been replaced by grand philosophising about the glorious, inevitable AI future, or, depending on when you are reading this, whatever has taken its place as the Next Big Thing.
Across the Atlantic, I continue plucking away at my steel-stringed acoustic. I suppose it remains to be seen whether my instrument or generative AI will end up on the scrapheap of history. Common sense suggests my six-string will continue being played for many years; the same common sense suggests that current AI companies will have to change a lot to survive. I’ve made my case as best I can here, so from here on I will endeavour to shut up about AI. The next time someone gives me the ‘AI will change the world’ spiel I will simply answer:
“That is well spoken, but I have to practise my pentatonic scale.”
If you liked this post you can read a previous post about the war in Ukraine here or the rest of my writings here. It'd mean a lot to me if you recommended the blog to a friend or coworker. Come back next Monday for a new post!

I've always been interested in politics, economics, and the interplay between. The blog is a place for me to explore different ideas and concepts relating to economics or politics, be that national or international. The goal for the blog is to make you think; to provide new perspectives.
Written by Karl Johansson
Sources: