That LinkedIn '95% of AI Ventures Fail' Stat That's Going Around...
So over the past week I've suddenly been seeing folks on LinkedIn posting about this number that 95% of corps fail to generate significant revenue from AI projects. Honestly, I'd believe it.
Personally? I think a big reason so many AI projects die is that folks invest in them without understanding how the technology works, or what its specific limitations are. I wonder how many of those projects would have made it if corporate AI initiatives had started with two blanket rules for at least the first year or so:
- Rule 1: No fine-tuning allowed. Post-training is off limits.
- Rule 2: Don't load any model with more than 32k context (see the sketch just below).
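To give Rule 2 some teeth, here's a minimal sketch of what the guardrail might look like, assuming the tiktoken library for token counting; the encoding name and the hard 32k cap are illustrative, not tied to any particular model:

```python
# Hypothetical guardrail for Rule 2: refuse any request whose prompt would
# blow a hard context budget, instead of quietly letting prompts balloon.
import tiktoken

MAX_CONTEXT_TOKENS = 32_000  # Rule 2's blanket cap (illustrative)

_enc = tiktoken.get_encoding("cl100k_base")  # encoding choice is an assumption

def check_context_budget(prompt: str) -> int:
    """Count prompt tokens and raise if the request exceeds the cap."""
    n_tokens = len(_enc.encode(prompt))
    if n_tokens > MAX_CONTEXT_TOKENS:
        raise ValueError(
            f"Prompt is {n_tokens} tokens; the cap is {MAX_CONTEXT_TOKENS}. "
            "Trim the prompt or retrieve less."
        )
    return n_tokens
```

The point isn't the specific number; it's that the limit lives in code, so nobody "solves" a problem by quietly stuffing more into the prompt.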
I'd bet good money the success rates would go through the roof.
Too many folks learning AI treat fine-tuning and big context windows as magic boxes they can throw problems at, and then get stuck in the rut of asking "I followed the tutorials, so what am I doing wrong?"
Take that away, and force people to use LLMs more conservatively, as a utility powering the software that wraps them? I really doubt we'd see so much turmoil. But instead what I see is folks shooting for the moon on day one, sometimes following tutorials they found on Google or got from ChatGPT.
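To make the "utility tool" idea concrete, here's a rough sketch of an LLM doing one narrow, checkable job inside ordinary software. `call_llm` is a hypothetical stand-in for whatever model client you actually use, and the ticket categories are made up for illustration:

```python
# The LLM as a utility, not an oracle: it sorts a support ticket into a
# fixed set of buckets, and the surrounding software stays in control.
CATEGORIES = {"billing", "bug", "feature_request", "other"}

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in; wire this to your model client of choice.
    raise NotImplementedError

def classify_ticket(ticket_text: str) -> str:
    prompt = (
        "Classify this support ticket as exactly one of: "
        f"{', '.join(sorted(CATEGORIES))}.\n"
        "Reply with the category name only.\n\n"
        f"Ticket: {ticket_text}"
    )
    answer = call_llm(prompt).strip().lower()
    # Treat the model's output like any untrusted input: validate it,
    # and fall back to a safe default instead of trusting it blindly.
    return answer if answer in CATEGORIES else "other"
```

Nothing here needs fine-tuning or a giant context window; the model handles a small fuzzy step, and deterministic code handles the rest.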
So many corporate initiatives in this space seem to start, and fail, with trying to post-train corporate knowledge into a model. And when that inevitably doesn't work? They move on to pushing the model's context far past what it can reasonably handle, with massive prompts and RAG dumps.
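For contrast, the conservative alternative to a RAG dump is boring: rank your retrieved chunks, then keep only what fits under a fixed token budget. A minimal sketch, again assuming tiktoken, with the budget number purely illustrative:

```python
# Hypothetical budget-aware context packing: keep the best-ranked retrieved
# chunks until a fixed token budget is spent, then stop. No dumping.
import tiktoken

_enc = tiktoken.get_encoding("cl100k_base")

def pack_context(ranked_chunks: list[str], budget_tokens: int = 4_000) -> str:
    """Greedily keep top-ranked chunks (assumed sorted best-first) within budget."""
    kept, used = [], 0
    for chunk in ranked_chunks:
        cost = len(_enc.encode(chunk))
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return "\n\n".join(kept)
```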
If folks started small and treated AI as a utility supporting the surrounding software, rather than a magic talking black box that can solve everything via mystical tutorial-driven training or "1M token context windows," you'd probably see a lot less failure.
Go into the hobbyist/tinkerer world and you'll find people succeeding wildly at tasks that big non-tech corporations are failing at left and right. That's because, in the passion of learning the tech, tinkerers also learn to embrace its limitations. Corps need to do the same.
I want to be clear that my position isn't that companies should never fine-tune or use big context windows. It's that they should start small and understand the technology before going crazy with it. AI has a LOT of limitations, and if you learn what those are first, you really can do some cool stuff.