Interesting. I wonder how far we can push the "AI-generated UI" pattern with today's models. Is GPT-3.5 good enough, or will we need GPT-4 — and if so, will it be fast enough (I assume yes, eventually)?
Yeah, it's a great question. We're still exploring how far we can take this. I think we'll be able to get pretty basic examples working in 3.5 (like the Recipe example), but more complicated, flexible UIs will need GPT-4 for now (or some other fine-tuned model that specializes in this).
I'm a PM at a human data company (https://www.surgehq.ai) that helps the large language model companies ensure their models are safe (we're the “clever prompt engineers” who helped Redwood assess their model performance).
> helps the large language model companies ensure their models are safe
Here's the Merriam-Webster definition for the word you're using:
ensure : to make sure, certain, or safe : GUARANTEE
"ensure their models are safe" suggests you're using the "certain" definition — that you can, for certain (which requires proof), guarantee the safety of an LLM?
I appreciate that he's drawing clear lines (aside from the generically "severe" consequences promised in response to Russia using nukes, which seems like sensible strategic ambiguity). Have to wonder what the game plan is if Russia does indeed use nuclear weapons. All options seem terrible.
The most likely option would be to pursue a more severe regime of economic sanctions and isolation, similar to how we treat North Korea. But while Putin is capable of anything, I think the use of nuclear weapons is unlikely at this point in the war. Taking out a city wouldn't align with the propaganda he's been feeding the Russian people, and the Ukrainian military forces are already dispersed enough that they don't make good targets.