They just want everyone coming from archive.org to feel right at home

What if the root (.) key breaks?

Resolvers are free to cache each TLD's keys. There's a finite, well-known list of TLDs and their keys - you can download all the root zone data from IANA: https://www.iana.org/domains/root/files (it's a few megabytes in uncompressed text form)
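
If you want to poke at that list yourself, here's a rough TypeScript sketch (Node 18+, run as an ES module; the root.zone URL is the machine-readable file linked from that IANA page):

  // Fetch the root zone and count its DS records, i.e. the per-TLD
  // keys a validating resolver could cache.
  const res = await fetch("https://www.internic.net/domain/root.zone");
  const zone = await res.text();

  // Lines look like: "de. 86400 IN DS <keytag> <alg> <digest-type> <digest>"
  const dsRecords = zone
    .split("\n")
    .filter((line) => line.split(/\s+/)[3] === "DS");

  console.log(`${dsRecords.length} DS records covering the signed TLDs`);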

The world might be a little bit better with more decentralization of the root zone.


DENIC apparently resolved all .de domains to NXDOMAIN in 2010: https://www.theregister.com/2010/05/12/germany_top_level_dom...

I'm blaming chromehearts anyways

I can live with that

maybe your upstream doesn't validate DNSSEC?

Maybe? I'm using PiHole with 8.8.8.8/1.1.1.1 as upstream, and both show "DNSSEC" next to their entries in the settings, so I assumed DNSSEC was enabled (unless I have to enable it somewhere else as well?)

That's weird, because 8.8.8.8/1.1.1.1 will already answer with SERVFAIL right now, unless the domain is still in the cache.
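
If you want to check a given path yourself: Comcast runs dnssec-failed.org, a domain that is deliberately signed wrong, so a validating resolver has to answer SERVFAIL for it. Rough Node/TypeScript sketch:

  // Ask an upstream for a deliberately mis-signed domain. An ESERVFAIL
  // error is the expected ("good") outcome; getting an address back
  // means nothing on this path validates DNSSEC.
  import { Resolver } from "node:dns/promises";

  const resolver = new Resolver();
  resolver.setServers(["8.8.8.8"]); // or your PiHole's address

  try {
    const addrs = await resolver.resolve4("dnssec-failed.org");
    console.log("no DNSSEC validation on this path:", addrs);
  } catch (err: any) {
    console.log("rejected as expected:", err.code); // ESERVFAIL
  }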

Everyone knows you have to flip the USB cable twice before it’s no longer upside down.

USB superposition. My favorite of the classic phenomena.

Brave explicitly blocks this

Last time this was discussed the consensus was Brave does not block it. Brave's fingerprinting protection does not include extensions.

https://news.ycombinator.com/item?id=46904361


Well, just because LinkedIn still tries to send the requests on Brave doesn't mean the blocking doesn't work. The question is whether any request will give a valid response.

That said, I can't find conclusive info on whether this exact technique is blocked. Brave does block "plugins" (which is why I assumed that covers this specific kind of fingerprinting) and the getExtension() call (which is probably unrelated), according to this page: https://brave.com/privacy-updates/4-fingerprinting-defenses-...

But since they don't explicitly mention the chrome-extension URL, you might be right.
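
For reference, the technique under discussion is just a page probing an extension's web-accessible resources, something like this TypeScript sketch (the extension ID and resource path are made-up placeholders, not a real extension):

  // A page can detect an installed extension by fetching a file that
  // the extension lists under web_accessible_resources. A browser that
  // blocks such requests makes the probe come back false every time.
  async function probeExtension(id: string, resource: string): Promise<boolean> {
    try {
      const res = await fetch(`chrome-extension://${id}/${resource}`);
      return res.ok;
    } catch {
      return false; // not installed, blocked, or not web-accessible
    }
  }

  // Hypothetical placeholder ID/path, for illustration only:
  probeExtension("aaaabbbbccccddddeeeeffffgggghhhh", "icon.png").then(console.log);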


Yes, that's what they're saying. LIDL doesn't have a cloud. The Schwarz Group does.


Too bad; a LIDL-branded cloud would be really marketable. Cloudside services (à la Parkside)... or something along those lines.


Imagine spinning up a Silvercrest instance for a database. Using W5 to distribute messages across the cloud. Using Parkside for object storage.


So? There’s no game with payout odds in your favor.

Lotto (at least where I live) also shows the odds, which are always ridiculously low. Still, people play, because the human brain isn’t built to understand odds. It’s essentially worthless as a metric.


Exactly. The blog post states that the alternatives listed are similarly intuitive. They are not. If you just need a chat app, then sure, there are plenty of options. But if you want an OpenAI-compatible API with model management, accessibility breaks down fast.

I’m open to suggestions, but the alternatives outlined in the blog post ain’t it.


The alternatives listed seem pretty user-friendly to me:

> LM Studio gives you a GUI if that’s what you want. It uses llama.cpp under the hood, exposes all the knobs, and supports any GGUF model without lock-in.

> Jan (https://www.jan.ai/) is another open-source desktop app with a clean chat interface and local-first design.

> Msty (https://msty.ai/) offers a polished GUI with multi-model support and built-in RAG. koboldcpp is another option with a web UI and extensive configuration options.

API-wise: LM Studio's server offers its own REST API, an OpenAI-compatible API, and more.
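
For what it's worth, a request against the local server is the usual OpenAI shape. Sketch (port 1234 is LM Studio's default; the model name is a placeholder for whatever you have loaded):

  // Chat completion against LM Studio's local OpenAI-compatible server.
  const resp = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "your-loaded-model", // placeholder
      messages: [{ role: "user", content: "Say hello." }],
    }),
  });
  const data = await resp.json();
  console.log(data.choices[0].message.content);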


All of those options were either too slow or didn't work for me (Intel Mac). I could have spent hours googling, but I downloaded Ollama and it just worked.

So no, they are not alternatives to Ollama.


LM Studio is basically Ollama except they give attribution. It offers all of the same features including the ability to host a server.


What you say was true in the past.

As other posters report, llama-server now implements an OpenAI-compatible API, and you can also connect to it with any web browser.

I have not yet tried the OpenAI API, but it should have eliminated Ollama's last advantage.

I do not believe that the Ollama "curated" models are significantly easier to use for a newbie than downloading the models directly from Huggingface.

On Huggingface you have many more details about the models, which can help you navigate the jungle of countless model variants and find what is most suitable for you.

The fact criticized in TFA, that the Ollama "curated" list can be misleading about the models' characteristics, is a very serious criticism from my point of view, and enough by itself for me not to use such "curated" models.

I am not aware of any alternative for choosing and downloading the right model for local inference that is superior to using the Huggingface site directly.

I believe that choosing a model is the most intimidating part for a newbie who wants to run inference locally.

Once a good choice is made, downloading the model, installing llama.cpp, and running llama-server are trivial actions that require minimal skill.


> On Huggingface you have much more details about models...

For a (brand new!) newbie, it's very, very likely to be information overload.

They're still at the start of their journey, so simple tends to be better for 90% of users. ;)


What do you mean?

LM Studio is listed as an alternative. It offers a chat UI and a model server supporting the OpenAI, Anthropic, and LM Studio API interfaces. It supports loading models on demand or picking which models you want loaded. And you can tweak every parameter.

And it uses llama.cpp which is the whole point of the blog post.


Thanks for pointing that out. From the description in the blog post, it sounded like it was GUI-only without an API, and I didn't bother looking into it because of that. But it looks pretty nice, so I'll give it a try.


Like someone said above:

  brew install llama.cpp
  llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF --port 8000

(with MCP support and a web chat interface) and you have an OpenAI API on the same port 8000. (https://github.com/ggml-org/llama.cpp/tree/master/tools/serv... lists the endpoints)
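
Talking to it is then the same OpenAI-shaped request (sketch matching the --port 8000 above; the model field can typically be omitted, since llama-server serves whatever model it loaded):

  // Chat completion against llama-server's OpenAI-compatible endpoint.
  const resp = await fetch("http://localhost:8000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Hello!" }],
    }),
  });
  const data = await resp.json();
  console.log(data.choices[0].message.content);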


And why do I use ggml-org/gemma-3n-E4B-it-GGUF instead of one of the 162 other models that can be found under the ggml-org namespace? And how do I even know that this is the namespace to look at?

That's what I meant by model management. I'm too tired to scroll through a bazillion models that all have very cryptic names and abbreviations just to find the one that works well on my system with my software stack.

I want a simple interface that a tool like me can scroll through easily, click on, and then have a model that works well enough. If I put in that much brain power to get my LLM working, I might as well do the work myself instead of using an LLM in the first place.


1. Go to HF

2. Choose the model they recommend

3. Run the one-liner the site gives you

Bonus: faster access to latest models and better memory usage


The first model I see on the HF homepage is this one: MiniMaxAI/MiniMax-M2

Do you think that this 229B parameter model will work on my consumer PC?

Stop pretending that HF is in any way beginner-friendly.

