Which doomer might this be?

I rarely use Facebook because it’s mostly boomers sharing poorly created memes, people getting enraged about clickbait political news that they have no control over, and not much else.

When I last checked my feed though, I saw that one of my friends had written a post that linked to an article relating the AI boom to the subprime mortgage crisis of 2008 and claiming the entire industry is unprofitable.

Well, if you’re reading this, you might know that I'm running a profitable AI startup that uses generative AI models to identify and block scam messages, and have been for the last few years. Without the advent of large language models, I wouldn’t have a business at all. If ya didn’t know, now ya know!

My instant reaction

Me, Gordon Ramsay, speaking to the author of the blog article

Anyway, upon seeing that post, it might not be surprising that I instantly had a reaction of “is the author out of his goddamn mind, or are the drugs he’s taking just that good? If the latter, I want whatever he’s having!”

In any case, I read the whole article so you don’t have to - you’re welcome, that’ll be $20 - and now I’ll respond to it.

tl;dr: it’s a blog post almost entirely composed of emotional, fear-mongering, over-generalized BS written by a "technology critic" who has never built anything AI-related in his entire life, yet now considers himself an expert on the future of said technology - with some nuggets of truth embedded.

Okay, so he gets some credit

To be fair, there's a lot of hype around all the new AI tools and agents coming out - so much that I get annoyed with it myself, even as an AI business owner. It’s not like I approve of everything going on in AI-land, either (AI-generated LinkedIn posts can die in a fire; that site was already a dumpster fire before AI, and now it’s truly in the gutter). So I sometimes like reading critical articles like this to see what they have to say.

I mean, the people who write doomer-style blog posts sometimes make good points! Sometimes those posts are even written by serious people who have built something relevant themselves instead of just complaining about something they have no experience with, and those articles deserve more weight.

The one from the Facebook post I mentioned above doesn’t get it right, though. There are a number of problems with the author's argument, and I’ll address the five most egregious ones now.

And here they are:

  1. Relating the AI funding boom to the subprime mortgage crisis of 2008 doesn't make any sense. This AI funding boom is no more and no less than the standard formula of "sell something people want below cost to get as many users as possible as fast as possible, operate at a loss until all the competitors die, use VC funding to pay the bills in the meantime - then raise prices" in action, and it happens across many different types of VC-funded businesses. It's not an exclusive strategy only used when AI is involved; it happens every 5 years or so, whenever there’s a new VC trend to bet money on. Last time the bet was NFTs - a laughably bad bet with zero utility, let’s be real here - and now it’s AI. With AI, we’ve got vastly more utility (don’t lie, you, the one person who reads my blog posts - you’ve used ChatGPT as a pretty good replacement for your asshole therapist who charges you $420/hour at least once! Admit it!) with an unknown long-term impact. And if the bet doesn’t pay off? Only the VCs will lose their investments - not the entire population of everyone who ever prompted an AI model. That’s not even close to people losing their primary residences, because taking massive risks with the chance of losing everything is part of the job description when VCs spend VC money. The expectation is different when you get a loan from a bank to buy a house in ‘merica - that’s supposed to be a “safe,” “protected” investment, underwritten by a federally-insured bank and expected to go up and to the right with minimal fuss, which is why the ‘08 crash was a different animal.

  2. I have no idea what he's talking about when he says that Uber has/had "no capex.” Talk about being “wrong, ignorant and potentially a big fucking liar,” to use his own words. Apparently he doesn't understand how to read a balance sheet or how expensive it is to operate a logistics company - or he’s just being intentionally obtuse. Uber reported over $300 million in capex in the last year alone, and has posted similar figures for the last several years. Hell, go back more than 5 years and they were spending nearly double what they spend now - and even that is likely lower than what they were burning on capex pre-IPO while still finding product-market fit - so the claim of “ThEy NeVeR hAd AnY CaPeX!!111” becomes even weaker. You could argue that capex as a percentage of their revenue or net income is low, but that’s a different and more sensible argument than “they don’t have any capex at all.” Come on now.

  3. The fact that the most famous AI companies have gotten more funding than AWS or Uber doesn't mean that they’re somehow doomed to fail. Companies get more or less investor money according to their assumed/potential impact on society and how expensive it is to get the business started and operating. It’s then not hard to understand that more investor money does not equal more risk of failure. It merely implies that VCs are betting more heavily on this specific trend vs. past trends, because reasons. Are they right to do so? Who fucking knows, we’re not in the room when the deals get signed. We’ll just have to see, won’t we?

  4. The entire industry is not based on "unprofitable economics," and I’m the counterpoint to that entire argument. I run a profitable AI business, and I’m here to tell you that you can absolutely make a profit and charge reasonable prices to customers even with GPUs/AI/whatever boogeyman the author is worried about in the mix. There's no need to pay per million tokens and use the most expensive "frontier" models to solve problems - sometimes they don’t even perform as well as open-source, lower-cost models. I’ve written before about exactly how I’ve optimized my AI bills, and what I’ve done can be replicated by any business owner who puts in the effort to benchmark models and run experiments. Certainly, token prices for "frontier" models will go up - and already have - because at some point investors start asking for a return on their capital, so I can't disagree with the author there. But it's straightforward to opt out of paying the standard unprofitable per-token prices by using smaller, targeted models and only using the frontier models as a fallback, and/or purchasing your own power-efficient hardware. Which brings me to my next point…

  5. This article seems to assume that power-efficiency gains will never happen and that people will be locked into renting GPUs at pay-per-million-token prices in the cloud forever. That's not a bet I'm going to take with stuff like this coming out. Instead of needing a rackmounted server with 138 GPUs in a datacenter cabinet - along with all of the datacenter builds the NIMBYs are against because “ThE DaTaCenTerS aRe StEaLinG oUr WaTer” - you can now run large AI models on your desk in a mini-PC form factor, with significantly lower power consumption than a server in a datacenter and significantly lower costs than paying per token generated by said servers, particularly for heavy inference workloads. There are also 4-5 competing pieces of hardware from ASUS, HP, Framework, GMKTek, and so on that are cheaper than the DGX Spark machine I linked, usually by $1,000 or more. So: more and more power-efficient, smaller, competitively priced AI hardware will keep coming out, requiring less and less physical space. It's a matter of time until inference and training workloads no longer eat up so many resources.
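The "smaller models first, frontier as a fallback" strategy from point 4 is easy to sketch in code. Everything below is hypothetical - the model calls are stand-in functions, the prices are made-up illustrations, and the confidence heuristic is a toy - but it shows the routing shape: only escalate to the expensive model when the cheap one isn't confident.

```python
# Sketch of tiered model routing: try a small, cheap model first and only
# fall back to an expensive "frontier" model when confidence is low.
# Model behavior, costs, and the confidence heuristic are all illustrative.

SMALL_MODEL_COST = 0.10      # dollars per million tokens (made-up number)
FRONTIER_MODEL_COST = 15.00  # dollars per million tokens (made-up number)

def classify_with_small_model(message: str) -> tuple[str, float]:
    """Stand-in for a small, self-hosted classifier. Returns (label, confidence)."""
    scam_words = {"winner", "urgent", "wire", "prize"}  # toy heuristic
    hits = sum(word in message.lower() for word in scam_words)
    confidence = min(1.0, 0.5 + 0.25 * hits)
    return ("scam" if hits else "ok", confidence)

def classify_with_frontier_model(message: str) -> str:
    """Stand-in for an expensive frontier-model API call."""
    return "scam" if "prize" in message.lower() else "ok"

def classify(message: str, threshold: float = 0.75) -> str:
    label, confidence = classify_with_small_model(message)
    if confidence >= threshold:
        return label  # cheap path: the small model was confident enough
    return classify_with_frontier_model(message)  # expensive fallback

print(classify("URGENT: wire us $100 to claim your prize"))  # scam (cheap path)
print(classify("lunch at noon?"))                            # ok (via fallback)
```

In practice the threshold is something you tune by benchmarking: every point of confidence you can trust on the small model is traffic that never touches the per-million-token bill.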
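The "buy your own hardware" option in point 5 comes down to simple break-even arithmetic. The numbers below are illustrative assumptions, not quotes from any vendor: pick your own hardware price, electricity cost, and cloud rate, and the formula tells you how many tokens it takes for the local box to pay for itself.

```python
# Rough break-even sketch: at what volume does a one-time hardware purchase
# beat paying per token in the cloud? All figures are illustrative assumptions.

HARDWARE_COST = 3000.0       # dollars, one-time (mini-PC-class machine)
POWER_COST_PER_MTOK = 0.05   # dollars of electricity per million tokens
CLOUD_COST_PER_MTOK = 10.00  # dollars per million tokens via a cloud API

def break_even_mtokens(hardware: float = HARDWARE_COST,
                       power: float = POWER_COST_PER_MTOK,
                       cloud: float = CLOUD_COST_PER_MTOK) -> float:
    """Millions of tokens after which owning the hardware is cheaper."""
    return hardware / (cloud - power)

print(f"Break-even at ~{break_even_mtokens():.0f}M tokens")
```

With these made-up numbers the box pays for itself after roughly 300 million tokens - which a heavy inference workload can chew through quickly, and the math only improves as the hardware gets cheaper and more power-efficient.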

And the conclusion is…?

I don’t know, man.

The only question that I think is worth thinking about after reading this article is: will future demand for AI-related solutions outpace efficiency gains that are coming down the pipe for the hardware that powers it?

And of course, I have no idea and neither does anyone else. It’ll be fun to see what happens though.

Until then, keep on prompting and keep on shipping!

And ignore the doomers. Mostly. Unless you feel like a good rant, which I did. Ayylmao
