Hacker News | FeepingCreature's comments

I'd get confused if I was a LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"

Errors compounding is a meme. In iterated as well as verifiable domains, errors dilute instead of compounding, because the LLM has repeated chances to notice its failure.
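A toy model of this claim (my own construction, not from the comment): if each step is checked and retried, a step only fails when every attempt fails, so the chain's success rate stays high instead of decaying geometrically.

```python
# Toy probability model: naive compounding vs. a verify-and-retry loop.
# p_err is the per-attempt error rate; all values are assumptions for
# illustration, not measurements of any real model.

def p_success_compounding(p_err: float, n_steps: int) -> float:
    """One unnoticed slip anywhere ruins the whole chain."""
    return (1 - p_err) ** n_steps

def p_success_with_retries(p_err: float, n_steps: int, retries: int) -> float:
    """Each step is verified; a step fails only if every attempt fails,
    so errors dilute rather than compound."""
    p_step_fail = p_err ** (retries + 1)
    return (1 - p_step_fail) ** n_steps

# With a 10% error rate over 20 steps, two retries per step make a
# large difference:
naive = p_success_compounding(0.1, 20)        # ~0.12
retried = p_success_with_retries(0.1, 20, 2)  # ~0.98
```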

It's very unlikely that API use is subsidized.

I keep hearing both sides of this "debate," but no one is providing any direct evidence other than "I do(n't) think that is true."

I always avoided Ollama because it smelled like a project that was trying so desperately to own the entire workflow. I guess I dodged a bigger bullet than I knew.

Seems like that's more to do with human intelligence being first.

OpenAI didn't release GPT-2 initially because they were worried it would make it too easy to generate spam. Which it kinda did.

'emerge' is two tokens ('emer' + 'ge'); 'emergency' is one. The models think in a logosyllabic language.
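A minimal sketch of why this happens, using a greedy longest-match tokenizer over a hypothetical vocabulary (real BPE vocabularies like GPT-2's or cl100k_base differ, and the exact splits vary by model):

```python
# Hypothetical vocabulary: 'emergency' is a whole entry, 'emerge' is not,
# so the greedy tokenizer must split it at the longest available prefix.
VOCAB = {"emergency", "emer", "ge", "e", "m", "r", "g", "n", "c", "y"}

def tokenize(text: str) -> list[str]:
    """Greedy longest-match tokenization, left to right."""
    tokens, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry that matches at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

print(tokenize("emergency"))  # ['emergency'] -- one token
print(tokenize("emerge"))     # ['emer', 'ge'] -- two tokens
```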

This makes sense if Anthropic thinks it's the best-positioned to make safe AI. However, if you're looking at an AI company, there's obviously some selection happening.

No it isn't lol. The consequence of the technology literally includes human extinction. I prefer 0 companies, but I'll take 1 over 5.

(my) fncad doesn't have the querying, but it does have smooth csg! https://fncad.github.io/
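For context on "smooth CSG": a common way to blend signed distance functions is a polynomial smooth-minimum (Inigo Quilez's formulation). Whether fncad uses exactly this blend is an assumption on my part; this is just the standard technique the term usually refers to.

```python
# Smooth union of two signed distance values d1, d2.
# k > 0 controls the blend radius; k -> 0 recovers a hard min().
def smooth_union(d1: float, d2: float, k: float) -> float:
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# Far apart (|d1 - d2| >= k): behaves exactly like a hard union.
print(smooth_union(0.0, 10.0, 0.5))  # 0.0

# Near the seam: the result dips below min(d1, d2), rounding the join.
print(smooth_union(1.0, 1.0, 0.5))   # 0.875
```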


