There’s a fundamental shift happening in AI, and most people don’t see it yet. Right now, the debate is dominated by the question: Should we train our own large language models (LLMs)? But that’s the wrong question. The right one is: What can we build with them?
The first people to discover a new technology often become obsessed with its internals. They want to understand every detail, optimize every inefficiency. This was true with early computers, early internet infrastructure, and now with AI. But the biggest opportunities rarely come from making the core technology itself. They come from building what’s on top of it.
The Mistake of Reinventing the Wheel
Let’s take an analogy from the past. In the early days of personal computing, companies like IBM and Apple competed on hardware, on who could build the best machine. But the real goldmine wasn’t in hardware. It was in software. Microsoft realized this first. While Apple was busy perfecting its computers, Microsoft made an operating system that could run on any manufacturer’s machine. That operating system, not any single computer, is what ended up dominating the world.
AI today is in the same phase. Right now, companies are pouring billions into training bigger and better LLMs. But unless you’re OpenAI, Google, or Anthropic—with near-infinite resources—you’re playing a losing game. Training a competitive LLM requires vast amounts of data, compute power, and time. Even if you pull it off, what then? You still need to figure out how to make it useful.
Instead of burning money training a model from scratch, the smarter move is to take what already exists and build something people actually want.
The Real Value Is in the Application Layer
Every foundational technology eventually becomes a commodity. The internet used to be a rare and expensive resource—now it’s a given. Electricity used to be a competitive advantage—now it’s assumed. AI models are heading in the same direction.
Right now, we’re in the phase where companies think their model is the product. But that won’t last. The real winners will be those who figure out how to apply these models in ways that solve real problems.
Imagine two startups:
- Startup A spends two years training a custom language model, investing millions in GPUs and AI talent.
- Startup B uses an off-the-shelf model and, in those same two years, builds a frictionless AI assistant for doctors, integrating directly with medical records, automating paperwork, and making healthcare more efficient.
Which one is more valuable?
Unless Startup A’s model is significantly better than existing ones (which is unlikely), all they’ve done is reinvent the wheel. Startup B, on the other hand, has created a product that actually matters.
The Myth of Differentiation
Some argue that if everyone builds on top of the same models, products will become indistinguishable. But this assumes that the AI itself is the product. It’s not. The product is how you use AI.
Take databases as an example. Nearly every startup uses the same few databases—PostgreSQL, MySQL, or MongoDB. Yet no one would say every app feels the same. Why? Because it’s not about the database. It’s about what you build with it. The same is true for LLMs.
Real differentiation comes from understanding your users better than anyone else and solving their problem so well that switching away stops being worth it. AI is just a tool for that, not the differentiator itself.
The Risk of Dependency
One of the few valid concerns about relying on external LLMs is control. If your product is deeply tied to a third-party model, what happens if they change pricing, restrict access, or shut down? It’s a risk, but it’s a manageable one.
First, AI models are becoming more open. Open-source alternatives like LLaMA, Mistral, and others are rapidly catching up. If OpenAI or Google ever pulls the rug out, you won’t be left stranded—you’ll just switch to another model.
Second, the real risk isn’t model dependency. It’s failing to build anything useful in the first place. Many startups hesitate to build on third-party models because they’re worried about long-term risk. But the bigger risk is spending years training a model and still having no product-market fit.
The best strategy? Build your product using the best available model today. If you ever need to swap it out later, you can.
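In practice, keeping the swap cheap mostly means not letting vendor-specific calls leak into your product logic. Here is a minimal sketch in Python of that idea: a small interface your product code depends on, with two stand-in providers behind it. The class and function names (`ChatModel`, `HostedModel`, `LocalModel`, `summarize_visit`) are hypothetical, and the providers are stubs standing in for real SDK or local-inference calls.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface: product code talks to this,
    never to a vendor SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class HostedModel(ChatModel):
    """Stand-in for a hosted API; a real vendor SDK call would go here."""

    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


class LocalModel(ChatModel):
    """Stand-in for an open-source model served locally."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


def summarize_visit(model: ChatModel, notes: str) -> str:
    # Product logic depends only on the interface, so swapping
    # providers is a one-line change where the model is constructed.
    return model.complete(f"Summarize for the patient record: {notes}")
```

The point isn’t the stubs; it’s that the day a provider changes pricing or terms, the migration is confined to one adapter class instead of being scattered across your codebase.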
The Shift That’s Coming
This transition happens in every major technological wave.
- Early cars were built by people obsessed with engines. But the real revolution came from companies that made cars affordable and useful for the masses.
- The early internet was built by network engineers. But the real value came from those who built applications on top—Google, Amazon, Facebook.
- AI is the same. Right now, it’s dominated by researchers and engineers obsessed with models. But the biggest impact will come from entrepreneurs who figure out how to use them.
Where to Focus
If you’re building in AI, focus on distribution, user experience, and solving specific problems. Those three things matter far more than how your model was trained.
- Distribution: How do you get your product in front of users? The best AI product in the world is useless if no one knows about it.
- User experience: How do you make AI feel invisible? The best AI products don’t feel like AI—they feel like magic.
- Problem selection: What’s a problem people desperately want solved? If you solve something painful enough, no one will care what model you used.
The Takeaway
It’s tempting to get caught up in the AI arms race, but the truth is, most people don’t care about your model. They care about what it does for them.
History shows that the biggest opportunities don’t come from training new models. They come from applying them in ways that make life better.
So if you’re building something in AI, ask yourself: are you chasing prestige, or are you building something that actually matters?