Hacker News | kryptiskt's comments

But you don't get the plain, cold, hard truth in the second case. You just get an LLM with output in that style. The model will still be as path-dependent as ever; it doesn't output the truest answer, it selects the answer that best fits the prompt.

If datacenter capex decreases, the chips have to go somewhere else, so it doesn't matter much that the fab capacity has been spoken for; if the demand side slackens, prices will fall.

OpenAI aren't the only ones who were increasing their datacenter capex.

No, but they are the ones who placed an order for 40% of the world's supply.

Sure, but they have competitors who'll be more than happy to pick up whatever OpenAI ultimately doesn't buy. Point being, from the POV of suppliers, there's no reason to retool for consumer production.

I wouldn't call it optimized, since that implies the tail calls merely improve performance and the interpreter would still work without them; the tail calls are integral to the function of the interpreter. It simply wouldn't work if the compiler couldn't be forced to emit them.

What I wrote is standard nomenclature:

> Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. (https://en.wikipedia.org/wiki/Tail_call)


I won't argue with Wikipedia, but assuming it's correct, the 'standard nomenclature' seems sloppy to me. The whole point of the original post was taking advantage of the guarantee of TCE (elimination), vs TCO ("give it a try but oops, oh well, whatever...").

I suppose TCE (as distinct from TCO) could be expanded to include any mechanism that doesn't grow the stack / heap / whatever for things that rhyme with recursion (in which case the existing sloppiness may as well stand, but we need a new TLA).


Questioning standard nomenclature is useful too, as long as it provides insight and is not just bike-shedding. "optimization" (in the context of an optimizing compiler) is generally expected not to alter the semantics of a program.

> but the tail calls are integral to the function of the interpreter

Not really: a trampoline could emulate them, keeping the stack from growing, at the cost of an extra function call for every opcode dispatch. Tail calls just optimize out this dispatch loop (or tail call back to the trampoline, however you want to set it up).


Yup, standard practice for interpreters in languages that don't have tail call optimization.

You lose versatility, though: you can't add user-defined operators, which is pretty easy with a Pratt parser.

You can have user-defined operators with plain old recursive descent.

Consider if you had functions called parse_user_ops_precedence_1, parse_user_ops_precedence_2, etc. These would simply take a table of user-defined operators as an argument (or reference some shared/global state), and participate in the same recursive callstack as all your other parsing functions.


The Web is a side project of CERN; by rights they should have gotten a comped top-level domain.

All circuits are analog when physically realized, the digital view is an abstraction.


I don't see how this would help in the least: what kind of criminal would be dissuaded by paying a small fee to set an elaborate scheme such as this in motion? This is not a spamming attack where the sheer volume would be costly. It doesn't even help to have a credit card on file, since they can use stolen CC numbers.

It's far more likely that hobbyists will be hurt than someone who can just write off the cost as a small expense for their criminal scheme.


Only because C code presents so many juicy security holes by default that it's completely unnecessary to subvert the projects to add them.


It's not a restriction born of purity: notably, even famously uncompromising Haskell allows orphan instances.


That's actually a problem with Tahoe: it is not something new and bold, it's old-fashioned. Transparency has already come and gone as a UI fad, and throwing computationally expensive effects at it doesn't really make any big difference.

