
If they have DRM the answer is almost certainly no.

I just jailbroke my old Kindle 4 for fun. Found out if it ever connects to WiFi it unjailbreaks itself. :)

The email Amazon sent out said that if you factory reset your device after May 20 it becomes inoperable. I wonder if that means bricked, or if it just means you can't access your DRM'd Kindle library.


You will still be able to use it if you factory reset, but you won't be able to register it to an Amazon account or download any of your DRM'd book purchases. The Kindle will still work and you'll still be able to read books you load over USB. The one annoyance is there's a nag pop-up telling you to register your Kindle, but it only shows up in the main menu and not when you're in a book.

I think you can disable the Over the Air Updates.

I'd like to see some data on this. My general-ed recall is minimal, and in programming before school, I certainly learned a ton more by coding than by testing. That's my perception of my time in school, as well.

> If DRM is the price I have to pay for a dead-simple ecosystem, multi-device support and free cloud storage, well, I guess I'm happy to pay it.

That makes one of us. To each their own, I guess.


The Kindle isn't a bad device on its own. Personally I use a Kobo. But I never pay for any ebook that I can't keep indefinitely one way or another.

I also have an old Kindle 4 that needs to be jailbroken before the May 30th deadline. Maybe I'll do that today. Gets you out of the ecosystem. And old Kindles can be found pretty cheap.


The DRM is the reason I never bought a Kindle, along with the relatively small, non-expandable onboard storage, though the DX was tempting for a bit. I've stuck with Kobo, PocketBook, and reMarkable and have been happy with them.

I've considered the PocketBook. How do you like it vs the Kobo?

I prefer it, as there are things I don't like about the Kobo: ads for their stuff on the front page, and cumbersome sync (they seem to use a SQLite DB to scan the loaded books on each sync, which takes a long while for my library). By comparison, the PocketBook is what I want in a device: a file-manager-like interface to my library, no-fuss sync (via rsync and USB, primarily; I use their cloud to store books I read across devices), a good ecosystem of third-party installs, and a front page that just features my books instead of whatever they want me to buy.

When Amazon started locking it all down last year I bailed on their ecosystem for Kobo’s store, but I use a Boox device. As long as I can back it up in any format I’m happy, and as soon as Amazon crossed that line they lost my business.

If the book is readable, it can be pirated. Even the most labor-intensive piracy technique is not that difficult. And once a drm-free book is out there, it's out there.

Though sadly the newer Kindles require a method of extracting books to PDF that is an order of magnitude harder than the old Calibre DeDRM method. I had to boot Bluestacks and export license files and rub my tummy and pat my head and do the Hokey Pokey… but in the end, the books are now 100% mine.

Edit: It’s been a while. Looks like the process is more streamlined, but still not what it used to be.


Harder, for sure. But you just need one copy in the wild...

I recently revisited my childhood town and walked from my childhood home to my school. I hadn't done that for nearly 50 years. It was shorter than I remember, of course, but it was still several blocks. The last time I walked it, I was five. I also learned to ride a bicycle when I was five, so that took the place of walking for the latter part of the kindergarten school year.

I arrived at the school just as it was getting out for the day. I did not see a single student of any age leave without an adult.

Like so many people of my generation, I can only wonder at the cost, and be grateful that I was born when I was.


I grew up in the 70s in a town of 30,000 and consider that time free-range. There was no public transit, only bicycles.

Manhattan is one thing, but I would never let my kids go to the 70s unsupervised.

This is the helicopter parenting the article is condemning. Children made it through the '70s just fine.

I'll bet we see more and more of this in the future. As developer skills atrophy due to over-reliance on LLMs, we'll have to keep our skills sharp somehow. What better way than a sabbatical?

To answer "what better way," clearly using the skills regularly is much better. Letting them atrophy for potentially multiple years and then trying to resurrect them repeatedly doesn't seem like a recipe for maintaining sharp skills to me.

That's definitely optimal, but I don't think a lot of people are going to have that opportunity. It's not really in the short-term interest of a company to have people spending time on that.

There's no way around it; it's just like how, once you get used to Python, you gradually become ignorant of and indifferent to the underlying layers. With the continued development of AI, this is inevitable.

News like this always makes me wonder about running my own model, something I've never done. A couple thousand bucks can get you some decent hardware, it looks like, but is it good for coding? What's everyone's experience?

And if it's not good enough for coding, what kind of money, if any, would make it good enough?


I want to give you realistic expectations: Unless you spend well over $10K on hardware, you will be disappointed, and will spend a lot of time getting there. For sophisticated coding tasks, at least. (For simple agentic work, you can get workable results with a 3090 or two, or even a couple 3060 12GBs for half the price. But they're pretty dumb, and it's a tease. Hobby territory, lots of dicking around.)

Do yourself a favor: Set up OpenCode and OpenRouter, and try all the models you want to try there.

Other than the top performers (e.g. GLM 5.1, Kimi K2.5, where required hardware is basically unaffordable for a single person), the open models are more trouble than they're worth IMO, at least for now (in terms of actually Getting Shit Done).


We need more voices like this to cut through the bullshit. It's fine that people want to tinker with local models, but there has been this narrative for too long that you can just buy more RAM, run some small-to-medium-sized model, and be productive that way. You just can't; a 35B will never perform at the level of a same-generation 500B+ model. You're basically working with GPT-4 (the very first one to launch) tier performance while everyone else is on GPT-5.4. If that's fine for you because you get to stay local, cool, but that's the part no one ever wants to say out loud, and it made me think I was just "doing it wrong" for so long on LM Studio and Ollama.

> We need more voices like this to cut through the bullshit.

Just because you can't figure out how to use the open models effectively doesn't mean they're bullshit. It just takes more skill and experience to use them :)


> We need more voices like this to cut through the bullshit.

Open models are not bullshit, they work fine for many cases and newer techniques like SSD offload make even 500B+ models accessible for simple uses (NOT real-time agentic coding!) on very limited hardware. Of course if you want the full-featured experience it's going to cost a lot.


I fell for this stuff, went into the open+local model rabbit hole, and am finally out of it. What a waste of time and money!

People who love open models dramatically overstate how good the benchmaxxed ones are. They are nowhere near Opus.


There is absolutely a use case for open models... but anyone expecting to get anywhere near the GPT 5.x or Claude 4.x experience for more demanding tasks (read: anything beyond moderate-difficulty coding) will be sorely disappointed.

I love my little hobby aquarium though... It's pretty impressive what Qwen Coder Next and Qwen 3.5 122B can accomplish (in terms of general agentic use and basic coding tasks), considering that the models are freely available. (Also heard good things about Qwen 3.5 27B, but haven't used it much... yes, I am a Qwen fanboi.)


You should be aware that any model you can run on less than $10k worth of hardware isn't going to be anywhere close to the best cloud models on any remotely complex task.

Many providers out there host open weights models for cheap, try them out and see what you think before actually investing in hardware to run your own.


Not sure why all the other commenters are failing to mention that you can spend considerably less money on an Apple silicon machine to run decent local models.

Fun fact: AWS offers Apple silicon EC2 instances you can spin up to test.


Gemma4 and Qwen3.6 are pretty capable but will be slower and wrong more often than the larger models. But you can connect Gemma4 to opencode via Ollama and it... works! It really can write and analyze code. It's just slow. You need serious hardware to run these fast, and even then, they're too small to beat the "frontier" models right now. But it's early days.
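For anyone wondering what "connect it via Ollama" involves, here is a rough sketch. Ollama does expose an OpenAI-compatible API on port 11434, but the model tag, file path, and JSON shape below are purely illustrative assumptions; check opencode's docs for its real provider-config format:

```shell
# Hedged sketch: pointing a coding agent at a locally served model.
# "gemma4" and the config shape below are assumptions for illustration only.
mkdir -p /tmp/opencode-demo
cat > /tmp/opencode-demo/provider.json <<'EOF'
{
  "provider": "ollama",
  "baseURL": "http://localhost:11434/v1",
  "model": "gemma4"
}
EOF

# With Ollama installed, the model is fetched and served locally:
#   ollama pull gemma4   # tag is an assumption; use whatever you pulled
#   ollama serve         # OpenAI-compatible endpoint on port 11434

cat /tmp/opencode-demo/provider.json
```

The key idea is that the agent never knows it's talking to local hardware; it just sees an OpenAI-style chat-completions endpoint at a different base URL.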

My anecdotal experience with a recent project (a Python library implemented and released to PyPI):

I took the plan that I used from Codex and handed it to opencode with Qwen 3.5 running locally.

It created a library very similar to Codex but took 2x longer.

I haven't tried Qwen 3.6 but I hear it's another improvement. I'm confident with my AI skills that if/when cheap/subsidized models go away, I'll be fine running locally.


Unless you use an H100 or 4x 5090s, you won't get decent output.

The best bang for the buck right now is subscribing to token plans from Z.ai (GLM 5.1), MiniMax (MiniMax M2.7), or Alibaba Cloud (Qwen 3.6 Plus).

Running quantized models won't give you results comparable to Opus or GPT.


The latest Qwen3.6 model is very impressive for its size. Get an RTX 3090 and go to https://www.reddit.com/r/LocalLLaMA/ to see the latest news on how to run models locally. Totally fine for coding.

I think the new Qwen models are supposed to be good, based on some of the articles I read.
