But you don't get the plain, cold, hard truth in the second case. You just get an LLM with output in that style. The model will still be as path-dependent as ever: it doesn't output the truest answer, it selects the answer that best fits the prompt.
If datacenter capex decreases, the chips have to go somewhere else, so it doesn't matter much that the fab capacity has already been spoken for: if the demand side slackens, prices will fall.
Sure, but they have competitors who'll be more than happy to pick up whatever OpenAI ultimately doesn't buy. Point being, from the suppliers' point of view, there's no reason to retool for consumer production.
I wouldn't call it optimized, since that implies the code would work without the tail calls and merely gains performance from them; here, the tail calls are integral to the function of the interpreter. It simply wouldn't work if the compiler couldn't be forced to emit them.
> Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. (https://en.wikipedia.org/wiki/Tail_call)
I won't argue with Wikipedia, but assuming it's correct, the 'standard nomenclature' seems sloppy to me. The whole point of the original post was taking advantage of the guarantee of TCE (elimination), vs TCO ("give it a try, but oops, oh well, whatever...").
I suppose maybe TCE (as distinct from TCO) should be expanded to include any mechanism that doesn't expand the stack / heap / whatever for things that rhyme with recursion (in which case the existing sloppiness may as well stand, but we need a new TLA).
Questioning standard nomenclature is useful too, as long as it provides insight and isn't just bike-shedding. "Optimization" (in the context of an optimizing compiler) is generally expected not to alter the semantics of a program.
> but the tail calls are integral to the function of the interpreter
Not really: a trampoline could emulate them effectively, keeping the stack from growing, at the cost of an extra function call for every opcode dispatch. Tail calls just optimize out this dispatch loop (or tail-call back into the trampoline, however you want to set it up).
You can have user-defined operators with plain old recursive descent.
Suppose you had functions called parse_user_ops_precedence_1, parse_user_ops_precedence_2, etc. These would simply take a table of user-defined operators as an argument (or reference some shared/global state) and participate in the same recursive call stack as all your other parsing functions.
I don't see how this would help in the least: what kind of criminal would be dissuaded by having to pay a small fee to set an elaborate scheme like this in motion? This is not a spamming attack where the sheer volume would be costly. It doesn't even help to get a credit card on file, since they can use stolen CC numbers.
It's far more likely that hobbyists will be hurt than someone who can just write off the cost as a small expense of their criminal scheme.
That's actually a problem with Tahoe: it's not something new and bold, it's old-fashioned. Transparency has already come and gone as a UI fad, and throwing computationally expensive effects at it doesn't make any big difference.