So many diseases to solve, nuclear fusion, better materials, expanding the frontier of science, communicating complex ideas to the public, climate change, helping disadvantaged communities, better farming, better participation platforms for good governance. There are so many aspects we can improve with AI. But it is contingent on our governments prioritizing progress over destruction.
In my experience no, but I don't think that's a problem.
It's fascinating to see so many ideas and so much enthusiasm. I sometimes wonder if the fervor will die down as people realize it's still really hard to make truly fantastic software, but it's hard to say. There's a ton of inertia behind the vibe coding rush.
I also wonder if vibe coding is actually somewhat incompatible with the states of mind and contemplation that are often required to figure out how to solve problems properly. It isn't clear if you can brute-force great solutions without putting in the initial domain distillation, idea incubation, and so on. I'm sure there are exceptions, but I have a feeling it'll never be trivial to come up with truly good and novel ideas for software, and vibing to get there might not make it any easier.
I am old enough to remember old programmers complaining about the wave of new shareware/freeware apps that people made with Visual Basic when that came out. Many of the apps were visually awful because it opened up desktop app development to people with no aesthetic experience.
I don’t see that awful style any more, despite those tools for rapid UI creation still existing. Did those people get better, or did they get bored and move on to other things?
I guess the same will happen with vibe-coders: they’ll gain the experience to make better software, or their poor-quality apps won’t give them what they want and they’ll move on.
That’s not what it means. "-it" just indicates the model is instruction-tuned, i.e. trained to follow prompts and behave like an assistant. It doesn’t imply anything about whether thinking tokens like <think>....</think> were included or excluded during training. That's a separate design choice and varies by model.
It means the model was tuned to act as a chat bot. So it writes a reply on behalf of the assistant and then stops generating (by emitting a special "end of turn" token that signals the inference engine to stop generation).

A base model (without instruction/chat tuning) just generates text non-stop ("autocomplete on steroids"), and the text is not necessarily even formatted as chat -- most text in the training data isn't dialogue, after all.
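To make the turn mechanics concrete, here's a minimal sketch of the kind of chat template an instruction-tuned model expects, using Gemma-style turn markers as an illustration -- the exact marker strings and role names vary by model, so treat these as placeholders:

```python
# Sketch: how a chat conversation gets flattened into a single prompt
# for an instruction-tuned model. The <start_of_turn>/<end_of_turn>
# markers here follow Gemma's convention; other models use different
# special tokens, but the structure is the same.

def format_chat(messages):
    """Wrap each message in turn markers so the model can tell whose turn it is."""
    prompt = ""
    for msg in messages:
        prompt += f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n"
    # Open the assistant's turn but leave it unclosed: the model fills it in
    # and eventually emits its own end-of-turn token, which the inference
    # engine treats as a stop signal.
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = format_chat([{"role": "user", "content": "What is 2+2?"}])
print(prompt)
```

A base model fed this same string would just keep autocompleting past the markers; it's the tuning that teaches the model to stop at its own end-of-turn token.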
Use the -it versions. The other versions are base models without post-training; base models are trained to regurgitate raw Wikipedia, books, etc. These base models are then post-trained into instruction-tuned models, where they learn to act as a chat assistant.
https://csrc.nist.gov/pubs/ir/7695/final