Stanford Lectures...
A few days ago Stanford dumped a whole pile of AI/ML lectures on their YouTube channel. They're a pretty good watch if you get bored and want to dive deeper into this stuff.
Over the past few days, I've realized that there are a lot of folks out there using LLMs who haven't had an opportunity to dig, even a little, into the basics of how LLMs really work. And I guess that makes sense; for the most part, …
Just a random note, but Qwen3.6 35b a3b is putting a smile on my face. This little model feels like a big upgrade over 3.5's 27b or 35b a3b. Also, the Wilmer workflow for OpenCode is really going well. I need to test it more, because …
So some year and a half after the request was made for me to put tool calling into Wilmer, I've finally got it in there. First off, it was a huge pain to implement; if I didn't have Wilmer itself and agentic coders to help, I …
In my last post, I mentioned using --image-min-tokens to increase the quality of image responses from Qwen3.5. I went to load Gemma 4 the same way, and hit an error:

[58175] srv process_chun: processing image...
[58175] encoding image slice...
[58175] image slice encoded in 7490 ms
[58175] decoding …