Hacker News | DeepYogurt's comments

CPE is a joke. The official spec doc asserts that correctness of names is not in scope for the spec. See section 5, "Well-Formed CPE Name Data Model".

https://csrc.nist.gov/pubs/ir/7695/final
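A quick sketch of the point being made: CPE 2.3's formatted-string syntax is purely structural, so a name can be perfectly well-formed while referring to nothing real. The check below is a deliberately naive illustration (it splits on colons and ignores escaped-colon edge cases that the spec allows), not a full validator.

```python
def is_well_formed_cpe(name: str) -> bool:
    """Naive structural check for a CPE 2.3 formatted string.

    A formatted string is "cpe:2.3:" followed by 11 colon-separated
    attributes, the first of which (part) must be a, o, or h.
    Note: this ignores escaped colons inside attribute values.
    """
    parts = name.split(":")
    return (
        len(parts) == 13
        and parts[0] == "cpe"
        and parts[1] == "2.3"
        and parts[2] in ("a", "o", "h")
    )

# A name for a real product and a name for a made-up one both pass,
# because well-formedness says nothing about whether the name is correct:
print(is_well_formed_cpe("cpe:2.3:a:apache:http_server:2.4.54:*:*:*:*:*:*:*"))        # True
print(is_well_formed_cpe("cpe:2.3:a:no_such_vendor:imaginary_product:9.9:*:*:*:*:*:*:*"))  # True
```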


I guess this shows how low the bar is for getting something in as an IETF draft. Adds context for that IPv8 post from the other day, for sure.

Long overdue to be honest.

The shoe company? Edit: The shoe company.

I basically clicked in to this post because I thought it was funny that an AI company had the same name as a shoe company.

But, no, it’s actually the same company.

And a failed shoe company at that… I believe they sold out for around $40M, which isn’t zero, but a lot less than the rounds of funding they raised.


We are collectively out of ideas

So many diseases to solve, nuclear fusion, better materials, expanding the frontier of science, communicating complex ideas to the public, climate change, better support for disadvantaged communities, better farming, better participation platforms for good governance. There are so many areas we can improve with AI. But it is contingent on our governments prioritizing progress over destruction.

Are any of them good?


In my experience no, but I don't think that's a problem.

It's fascinating to see so many ideas and so much enthusiasm. I sometimes wonder if the fervor will die down as people realize it's still really hard to make truly fantastic software, but it's hard to say. There's a ton of inertia behind the vibe coding rush.

I also wonder if vibe coding is actually somewhat incompatible with the states of mind and contemplation that are often required to figure out how to solve problems properly. It isn't clear if you can brute-force great solutions without putting in the initial domain distillation and idea incubation and so on. I'm sure there are exceptions, but I have a feeling it'll never be trivial to come up with truly good and novel ideas for software, and vibing to get there might not make it any easier.


Without giving away exactly how old I am…

I am old enough to remember old programmers complaining about the wave of new shareware/freeware apps that people made with Visual Basic when that came out. Many of the apps were visually awful because it opened up desktop app development to people with no aesthetic experience.

I don’t see that awful style any more despite those tools for rapid UI creation still existing. Did those people get better, or did they get bored and move on to other things?

I guess the same will happen with vibe-coders: they’ll get the experience to make better software, or their poor-quality apps won’t give them what they want and they’ll move on.


Who amongst us has NOT had a little teleport at 2am? /s


Reminds me of this old article from The Onion:

Phantom Diner Appears Only To Those In Their Drunkest Hour https://theonion.com/phantom-diner-appears-only-to-those-in-...


He also claims to have teleported to (of all places) a ditch. What could possibly explain this phenomenon?


Had he just come from John Malkovich's head?


Came here to say this. If you want to find someone who has teleported to food, head to a Waffle House or Denny's at 2 am in a college town.



Maybe a dumb question, but what does the "it" stand for in 31B-it vs 31B?


Instruction Tuned. It indicates that thinking tokens (e.g. <think> </think>) are not included in training.


That’s not what it means. "-it" just indicates the model is instruction-tuned, i.e. trained to follow prompts and behave like an assistant. It doesn’t imply anything about whether thinking tokens like <think>....</think> were included or excluded during training. That’s a separate design choice and varies by model.


What does that mean for a user of the model? Is the "-it" version more direct with solutions or something?


It means the model was tuned to act as a chat bot. So it writes a reply on behalf of the assistant and stops generating (by inserting a special "end of turn" token to signal the inference engine to stop generation).

A base model (without instruction/chat tuning) just generates text non-stop ("autocomplete on steroids"), and the text is not necessarily even formatted as chat -- most text in the training data isn't dialogue, after all.
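A toy sketch of the "end of turn" mechanics described above. This is not any real inference engine; the <start_of_turn>/<end_of_turn> markers are Gemma-style and purely illustrative, and the token streams are hard-coded stand-ins for model output.

```python
END_OF_TURN = "<end_of_turn>"  # illustrative marker, varies by model family

def format_chat_prompt(user_message: str) -> str:
    """Wrap a user message in chat markers the way an instruction-tuned
    model's template might (illustrative, not any model's exact template)."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        f"<start_of_turn>model\n"
    )

def generate(token_stream, stop_token=END_OF_TURN):
    """Mimic an inference engine: emit tokens until the stop token appears.

    An instruction-tuned model learns to emit the stop token when its
    turn is done; a base model never does, so it just keeps going.
    """
    out = []
    for tok in token_stream:
        if tok == stop_token:
            break
        out.append(tok)
    return "".join(out)

# Instruction-tuned model: emits <end_of_turn>, so generation halts cleanly.
it_stream = ["Paris", " is", " the", " capital.", END_OF_TURN, "user\n", "More?"]
print(generate(it_stream))    # -> "Paris is the capital."

# Base model: no stop token, so it "autocompletes" until the length limit.
base_stream = ["Paris", " is", " the", " capital.", " Paris", " is", " a", " city"]
print(generate(base_stream))  # -> "Paris is the capital. Paris is a city"
```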


Good old illustration: https://www.ml6.eu/en/blog/large-language-models-to-fine-tun...

The -it one is the yellow smiling dot; the -pt one is the rightmost monster head.


Use the -it versions. The other versions are base models without post-training. E.g. base models are trained to regurgitate raw Wikipedia, books, etc. Then these base models are post-trained into instruction-tuned models, where they learn to act as a chat assistant.


Man, these vibe-coded sites really are off-putting


Right? At least they're easy to identify (and subsequently close).


> At least they're easy to identify

For better or worse, I found this one different.

Usually I see a solid wall of black, but this one was actually readable with scripts disabled.


You don’t like your bullet points as emojis?


Hate them, actually. They don't communicate - they glaze.

Almost as bad as the theft of em-dashes from polite society.


At least they all look the same so it's really easy to recognize and CTRL+W them.


Always with the purple and blue.


Mozilla has run its own VPN for a while now

https://www.mozilla.org/en-US/products/vpn/


>Mozilla has partnered with Mullvad in order to utilize our global network of VPN servers for its own VPN application.

https://mullvad.net/en/blog/mullvad-partnerships-page-has-be...

(2019). Are you saying that has changed?


The T&Cs of the current VPN still say Mullvad provides the service.

https://www.mozilla.org/en-US/about/legal/terms/subscription...

