TetraMesa

PM Perspective: Is AI a Great Business?

August 12, 2025 By Scott

Introduction

Short answer? Maybe not so much. (At least when talking about the core tech vs. specialized solutions.) This won’t be as much about AI as about category and startup creation in general; AI just happens to provide us with a great developing case to review. (At least in terms of Large Language Model (LLM) and Generative Pre-trained Transformer (GPT) products.)

Product people need to zoom in and out. There’s so much noise around new AI features that we may forget to take the wider view. While we’re often responsible for detailed product roadmaps, some of us also do strategy and business planning. In that role, we need to think about offerings in terms of long-term strategy. Rapid growth in LLMs / GPTs provides a case worth studying in terms of new category creation. We should keep in mind that AI / ML has been around for decades. What’s interesting now is that the LLM / GPT space is a great example of a Blue Ocean Strategy that is perhaps only fulfilling its promise in a general way. That is, the rising tide is lifting many boats, but early pioneers may not be able to take margin-advantaged positions, which should be a benefit of a new category. (If you buy the book, be sure to get the latest edition.) Even though OpenAI busted this category open, they didn’t get to a winner-take-all, or even winner-take-most, position.

Why?

The answer is somewhat conjectural. AI isn’t new, though GPTs / LLMs provide new capabilities. It’s another “overnight success” decades in the making, at least in consumer perception. The real breakthroughs are about usability, scale, and perceived potential, not just the tools themselves. OpenAI’s breakthroughs have arguably been as much about packaging, accessibility, and timing as raw capability. However, open-source acceleration and imitation have made it harder for anyone to monopolize the category. One product lesson refresher: new categories don’t guarantee dominance for the pioneer, which is a movie we’ve seen before. Often, startup wreckage resulted from trying to build a category too early. (The history of digital cash, anyone?) In terms of AI, also remember that the training content for all this (while its use remains a contested philosophical and copyright debate) is largely public, commoditized content. Many tech giants already had ML research teams but hadn’t released anything. And while there are financial barriers, such as the need for large-scale compute, there are no real network effects here. So add it all up and ask, “Where’s the moat?”

Unless we’re building core AI tech, we’re embedding it. And it’s the growth of LLMs and GPTs that is rewriting how the future might look: composing novels, diagnosing diseases, steering self-driving cars, and so on. Early on, the category might have seemed like a Blue Ocean Strategy, opening up uncontested new market space with unprecedented innovation. But along with some of the fastest product adoption ever seen, it’s also shaping up as one of the quickest market commoditizations. (Though Marc Andreessen thinks all the marbles could still go to the top players, and Google’s CEO sees it as a place for multiple players.) Either way, it’s ironic how some have derisively referred to products that are “just” wrappers on top of AI. It’s likely, in my opinion, that some of those wrappers will be what provide the higher vertical-market value.

Show Me the Money

In spite of initial training costs, the industry’s margin privilege (low variable costs and scalable cloud infrastructure) suggested solid profits. At least some early players, from startup founders to engineers with stock options or fat salaries, have cashed in well. Yet, for the industry, it’s becoming a knife fight. Relentless competition and commoditization are turning it into a low-margin commodity trap, where profitability will be squeezed. This goes directly to the kinds of questions we need to ask about long-term product viability, and about which businesses we want to be in. I wonder how long it will take before Marginal Revenue (MR) optimizes down to Marginal Cost (MC). I won’t take on that analysis now, as I’m not sure I could get the data, especially given investment inputs and how rapidly costs are changing. For all the AI brilliance, I don’t think we’ll necessarily see dominance by any major player once the massive funding flows start to calm down.
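To make the MR / MC point concrete, here’s a toy sketch, with entirely hypothetical numbers: a linear demand curve and a constant marginal cost standing in for per-query compute. None of these figures come from any real provider.

```python
# Toy illustration of MR converging to MC, with hypothetical numbers.
# Linear demand: P = a - b*Q, so marginal revenue MR = a - 2*b*Q.
# Constant marginal cost c (think: compute cost per unit of inference).
a, b, c = 10.0, 0.01, 2.0  # all made-up placeholders

# Pricing-power optimum: set MR = MC and solve for quantity.
q_star = (a - c) / (2 * b)   # from a - 2*b*Q = c
p_star = a - b * q_star      # price the market bears at Q*
print(f"Q* = {q_star:.0f}, price = {p_star:.2f}, unit margin = {p_star - c:.2f}")

# Under commoditization, rivals bid price down toward MC itself,
# and the unit margin collapses to roughly zero.
print(f"competitive price = {c:.2f}, unit margin = {c - c:.2f}")
```

The interesting product question is which regime a given AI business ends up in: the pricing-power case with a real unit margin, or the competitive case where price gets bid down to the compute floor.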

Let’s Level Set & Look at Larger Scale Strategy

Moatless in AI-ville

I’m not saying that if you were lead PM at an early LLM/GPT company you should have looked at the future market and said, “Gee, no real protectable moat here, I think I’ll recommend not launching this.” We obviously have a brand-new multi-billion-dollar category here, both for “picks and shovels” vendors and for all manner of ancillary products. (Not to mention what could be an explosion of productivity among users; known problem issues aside for a moment.) Still, as we all get more sophisticated in seeking the next thing, it’s useful to keep in mind that Category Creation ≠ Category Ownership. We should be thinking about our moats and whether they can be made multi-layered. The myth of first-mover advantage has been broken apart elsewhere, and the advantage itself is becoming even more fragile in ever-faster-replicating tech spaces. We may need to weigh feature execution and market positioning more heavily, and plan defensive plays earlier.

In this kind of oddball industry case, we should consider the early players. OpenAI has always had a hybrid identity. (See OpenAI’s history.) In 2015, they were a non-profit with a mission to ensure AI benefits all humanity. They shared research, papers, and open-source tools. Later, in 2019, they created a for-profit business entity to attract capital, still focused on a social mission even as they became less open. At that point, moats became less possible. It’s hard to craft defensive patents around tech that has already been published in academic papers. And while there are multiple components to LLMs and GPTs, the Transformer architecture came from Google. (See Attention Is All You Need.)

So there’s been an interesting evolution here. Google has apparently chosen not to enforce this IP, perhaps strategically thinking it’s better to grow the whole category. Which is a hugely boss move if it puts you yourself at risk. (Though it’s possible they thought it might not be enforceable anyway, given so much public academic research, so it may have been more of a defensive filing.) Making choices like this is large-scale strategy, and very few have the heft to play with game pieces on a board like this.

And what about OpenAI’s August 2025 release of several models as open source on Hugging Face? Most likely it’s competitive defense, a move to keep developers in its ecosystem. It’s really something like a Freemium model, hoping others spool up to the paid API and Enterprise offerings. Here’s a spot where we maybe switch from Blue Ocean Strategy back to Porter’s traditional Five Forces. Here’s my quick read:

  • Threat of New Entrants: High.
  • Threat of Substitutes: High.
  • Bargaining Power of Suppliers: Medium. (Suppliers here means GPUs / cloud platforms / infrastructure.)
  • Bargaining Power of Buyers: High.
  • Industry Rivalry: Yeah, we’ll call this all the way High.

So OpenAI’s Hugging Face release is not charity. It’s a moat-widening and protective tactic. It’s probably smart. But it’s also more reactive than visionary or preemptive. (AI industry leader Sol Rashidi has a nuanced perspective on this, and a poll if you’re interested in weighing in.)

Is There an AI Margin Advantage?

AI businesses are supposed to have a cost structure that physical industries envy, like most digital products. Part of the business benefit of digital products is the lack of friction vs. physical products. Just as importantly, the marginal cost of an additional unit is often practically zero. A large language model (LLM) provider might spend billions training models, but once deployed, each query (inference) should cost pennies, running on scalable platforms. No warehouses, no shipping, no per-unit manufacturing; just digital operating expenses (OpEx). This scalability should mean fat margins. Though inferencing still costs an order of magnitude more than good ol’ indexed search. (At least for now.)
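As a rough sketch of why “pennies per query” can still sit an order of magnitude above classic search, here’s a back-of-envelope calculation. Every number is a hypothetical placeholder, not a real price from any vendor:

```python
# Back-of-envelope unit economics for a hypothetical LLM provider.
# All figures below are made-up placeholders for illustration only.
training_cost = 1e9           # one-time training spend, $
queries_per_year = 100e9      # annual query volume
inference_cost = 0.003        # marginal compute cost per query, $
search_cost = 0.0003          # assumed per-query cost of indexed search, $

# Spread the fixed training bill across the year's queries,
# then add the per-query compute to get an all-in unit cost.
amortized_training = training_cost / queries_per_year
total_unit_cost = amortized_training + inference_cost

print(f"amortized training per query: ${amortized_training:.4f}")
print(f"all-in unit cost per query:   ${total_unit_cost:.4f}")
print(f"inference vs. indexed search: {inference_cost / search_cost:.0f}x")
```

Even with the fixed training cost amortized over a huge query volume, the per-query compute floor in this sketch sits roughly 10x above search. That floor, not the training bill, is what a commoditizing market will eventually price against.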

Even among digital sectors, AI’s edge stands out: while AI has high fixed development costs just as SaaS does, AI’s inference scales near-infinitely without proportional expense spikes. At least in theory, AI inference should have even lower variable costs than most SaaS operations, because SaaS still carries a lot of per-user overhead (databases, APIs, etc.), whereas AI inference runs against a pre-trained model. But this advantage is crumbling under market pressures and the reality that inference is still very compute-heavy. (It’s possible the trend toward smaller, subject-specific models will mitigate this.)

The Commoditization Knife Fight

AI’s promise is unraveling in a brutal race to the bottom. Models are becoming interchangeable, with open-source options like Meta’s Llama flooding the market, possibly driving prices toward raw compute costs. And we’re seeing some odd flip-flops on whether cloud is always better than on-prem. (See: The hidden mathematics of AI: why your GPU bills don’t add up.)

The fight is fierce: startups wrap LLMs in apps, hyperscalers like Microsoft bundle AI into Azure, and freemium models set expectations for cheap or free services. Customers demand upgrades, better accuracy, and new features. Without price hikes. Or for less. Early AI employees with equity or high salaries are reaping rewards, especially if they joined pre-IPO unicorns. I’m not saying this is just some AI bubble. The value is in here. Somewhere. OK, maybe I am saying it’s somewhat of a bubble. What I’m suggesting is that we watch margins if we can. Because aside from how cool everything is, at some point we get back to Marginal Cost (MC) and Marginal Revenue (MR). And if the optimization limit remains where MR = MC (same as it’s ever been in any industry), that becomes interesting. It’s just hard to tell where things are right now.

AI vs. Other Industries: A Unique Squeeze

Unlike other digital categories, which may differentiate through exclusive content or custom offers, AI struggles to stand out. As models reach parity, they’re just another API call. And unlike physical commodities, where scarcity drives value, AI’s abundance, fueled by open-source models and cheapening compute, erodes moats.

The real winners are hyperscalers and chipmakers, capturing compute spend while AI app providers bleed.

Conclusion: Gold Rush for a Few, Trap for Others

Alright, I’m really not trying to dump on the industry. I love this stuff. AI’s capabilities are dazzling. Yet I’m starting to think beyond the frothy short-term prospects. Early on, this looked like a blue ocean of opportunity, with rapid adoption fueling not just hype, but real value. Yet it seems the real profits belong to the infrastructure giants, the hyperscalers and chipmakers, not the AI providers caught in the fray. (Maybe call it the picks and shovels behind the picks and shovels. Between AI and crypto, these folks are having quite the time.)

Summing it all up then, here are my key takeaways for Product Managers

  • Category creation ≠ category ownership: First to market doesn’t guarantee dominance; rapid replication and open-source releases have always been a risk, but these days can erode an early lead more quickly than ever. The old jokes apply: “The Pioneers are often the ones with the arrows in their chests.” And “It’s the second mouse that gets the cheese.”
  • Moat design needs to start early: In fast-moving, easily replicated tech, differentiation must go beyond core tech to include data advantages, integrations, distribution, and brand trust.
  • Economics matter as much as capabilities: Cool features can obscure weak business fundamentals; watch margin trends and know when MR is converging toward MC.
  • Don’t overestimate scaling advantages: AI’s theoretical cost edge over SaaS can be eaten up quickly by inference compute costs, customer demands, and price competition.
  • Bundle or be bundled? What does the whole industry map look like? In AI, hyperscalers and infrastructure players have the leverage; smaller players may need to align strategically with them or risk being commoditized.
  • Look for intersections and adjacencies: Sustainable plays may come from pairing AI with other tech shifts (e.g., AI + crypto) or embedding it in specialized, high-value vertical solutions.

If you have anything to add to the list, just let me know!

Filed Under: Marketing, Product Management, Tech / Business / General



Copyright © 2025 · TetraMesa, LLC · All Rights Reserved