If You Have the Hardware, Use It to Learn!

If you've never messed with open-source LLMs and you jumped on the Clawdbot/OpenClaw hype train: take some time to learn how local models actually work. You went through the trouble of getting a Mac Mini, so you now have a nice little test box to play with. Just do it. Turn off Clawdbot/OpenClaw and make OTHER things with it. Just for a few hours, even.

For the vast majority of folks using AI to vibe-code, make agents, etc., right now you're the equivalent of people building websites with the heaviest no-code/low-code solutions, or slapping in ALL the biggest libraries without a care in the world for performance. You're probably leaving a ton of efficiency on the table because you don't understand how much of it works under the hood. You don't understand samplers well, or what tokenization is doing. You may not have a good feel for what small, weak models can really do, or what you absolutely need large models for. (When I say small models, I'm talking models that make Claude Sonnet 3.7 look like a genius.)
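If "samplers" sounds fuzzy, the core idea fits in a dozen lines. Here's a toy sketch of temperature plus top-p (nucleus) sampling, the two knobs nearly every local runner exposes. The function name and defaults are my own for illustration, not any library's API:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_p=0.9, rng=None):
    """Toy temperature + top-p sampling over one step's logits.

    Real runners (llama.cpp, LM Studio, etc.) do this per token; the
    knobs you tweak in their UIs map onto exactly these two steps.
    """
    rng = rng or random.Random()

    # Temperature: divide logits before softmax. <1 sharpens the
    # distribution toward the top token, >1 flattens it out.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Top-p (nucleus): keep only the smallest set of tokens whose
    # combined probability mass reaches top_p, then sample from that.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights, k=1)[0]
```

Play with the two parameters on a made-up logits list and you'll get an intuition for why a too-hot or too-flat sampler makes a small model ramble.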

Whatever efficiencies you're aiming for are probably a drop in the bucket compared to what you could be doing if you really had a feel for all that. And the only thing holding you back from that knowledge is just taking the time to learn it.

The easiest way to learn this stuff is by doing, and you have the hardware now, so why not? Forget the little hype-bot that LinkedIn convinced you to install. Set it aside and use that Mac Mini to learn how LLMs work at a deeper level by wrangling local models into doing complex work.

THAT will be worth its weight in gold.

Also, don't cheat yourself. Yes, the local ecosystem is easier now: ten minutes plus an LM Studio install and tada, all done! But what did you really learn? No no; I'm saying do it the long way around. Grab Open WebUI. Grab llama.cpp. Get 'em hooked up together. Use a little model like one of the new Qwen3.5 8b models. Get the responses to be actually good; find ways to make the model stop repeating itself. Things like that.
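If you go the llama.cpp route, the anti-repetition knobs live right on the server command line. A rough sketch of the kind of launch I mean (the model path and values are placeholders; check `llama-server --help` on your build for the exact flags it supports):

```shell
# Serve a small GGUF model with an OpenAI-compatible API on port 8080.
# --repeat-penalty is one of the first things to reach for when a
# small model starts looping on itself.
llama-server \
  -m ./models/qwen-8b-instruct.gguf \
  --port 8080 \
  --ctx-size 8192 \
  --temp 0.7 \
  --top-p 0.9 \
  --repeat-penalty 1.1
```

Then point Open WebUI at the server as an OpenAI-compatible connection and iterate on those values per model; what fixes repetition on one small model can make another one incoherent.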

Next: write a small agent. Do it with that crappy little 8b (or smaller) model, and try to get something of value out of it.

This is all possible to do, but I promise it'll be harder than accomplishing the same thing with some 2026 proprietary API model. And that's the point.

Once you've done all that, you'll later go back and revisit what you think right now is great work with LLMs, and suddenly have the same realization every developer does when they go back to their old code: "Wow, I can do a lot better than this now."

Much like developers first learning to code and thinking that writing 500 if-statements is good enough, you're only just now scratching the surface of how to properly use LLMs. Now you need to start learning the more complex stuff. Don't settle for the novice approaches you've used so far. There's SO MUCH MORE out there.

And who knows- you may just find that local models are fun enough to be worth obsessing over a bit ;)