I didn't know about Flapjax, thanks, I'll check it out. Glitch-freedom is indeed a gap in this article. I focused exclusively on the signal algorithm, without implementation optimisations like batching updates; there is so much more to cover! Maybe in a next one. Thanks!
I wrote a whole screed here about how glitches are evil and Rx is evil for teaching people they’re normal, but then I thought about it a bit more—
The system as described isn't actually glitchy, is it? It doesn't eagerly run any user computations, just dirtying, and that is idempotent so the order is irrelevant. It's also a bit useless because it only allows you to pull out values on your own initiative, not subscribe to them, but that's fixable by notifying all subscribers after the dirtying is done, which can't cause glitches (unless the subscribers violate the rules of the game by triggering more signals).
So now I’m confused whether all the fiddly priority-queue needlepoint is actually needed for anything but the ability to avoid recomputation when an intermediate node decides it doesn’t want to change its output despite a change in one of its inputs. I remember the priority queue being one of the biggest performance killers in Sodium, so that can’t be it, right?..
I’m also confused about whether push-pull as TFA understands it has much to do with Conal Elliott’s definition. I don’t think it does? I feel like I need to reread the paper again.
Also also, some mention of weak references would probably be warranted.
>> whether push-pull as TFA understands it has much to do with Conal Elliott’s definition.
> Virtually nothing that is getting sold/branded as "FRP" has anything to do with Conal Eliott's definition.
True but not what I meant. The article implicitly (and, in the links at the end, explicitly) refers to his 2009 paper “Push-pull functional reactive programming”, which describes a semantic model together with a specific implementation strategy.
So I was wondering if TFA’s “push-pull” has anything to do with Elliott 2009’s “push-pull”. I don’t think so, because I remember the latter doing wholly push-based recomputation of discrete reactive entities (Events and Reactives) and pull-based only for continuous entities that require eventual sampling (Behaviors).
With that said, I find it difficult to squeeze an actual algorithm out of Elliott’s high-level, semantics-oriented discussion, and usually realize that I misunderstood or misremembered something whenever I reread that paper (every few years). So if the author went all the way to reference this specific work out of all the FRP literature, I’m willing to believe that they are implying some sort of link that I’m not seeing. I would just like to know where it is.
After wondering what the heck glitch-freedom is and learning about it, I agree with you. It seems like it deserves at least a brief explanation in an article about how signals work.
I've gone with the universal `alien-signals` package for my project (which doesn't use a frontend framework that includes signals). They show benchmarks of being by far the fastest and have strict limits on code complexity. Those limits are also supposed to avoid glitches by design, and now at least some of that is tested[1].
So yeah, topological sorting is one element, but that global stack is a data race! You need to test set membership AND insert into it in an ordered way. A global mutex is gross. Doing it lock-free could maybe work with a lock-free concurrent priority queue plus a pair of monotonic generation counters for the priorities processed and next, then some memo of updates so that a conflicting re-update is invalidated by violating the generation constraint. I see no fewer than 3 CASes, so updates across a highly contended system get fairly hairy. But still, a naive approach is good enough for the 99%, so let there be glitches!
yea, this is in javascript. it's inherently single-threaded in almost all contexts (the exception being e.g. node.js shared memory, where you're intentionally bypassing core semantics for performance and correctness is entirely on you)
wouldn't this be solved by synchronously invalidating everything before computing anything? it seems like that's what the described system is doing tbh, since `setValue` does a depth-first traversal before returning. or is there a gap where that strategy fails you?
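For the curious, here's a minimal sketch of that invalidate-then-pull idea (hypothetical names, not the article's actual code): writes only dirty the dependency graph depth-first, and computeds recompute lazily on read, so even a diamond-shaped graph never shows a half-updated value:

```javascript
// Minimal invalidate-then-pull signals (a sketch, not the article's code).
// set() only marks dependents dirty; computation happens lazily on get(),
// after all dirtying is done, so reads never see a half-propagated state.

const activeStack = []; // which computed is currently evaluating

function track(node) {
  const active = activeStack[activeStack.length - 1];
  if (active) node.dependents.add(active); // record the dependency edge
}

function dirty(node) {
  for (const dep of node.dependents) {
    if (!dep.isDirty) {  // idempotent: stop at already-dirty nodes
      dep.isDirty = true;
      dirty(dep);        // depth-first, like the setValue described above
    }
  }
}

function signal(value) {
  const node = {
    dependents: new Set(),
    get() { track(node); return value; },
    set(v) { value = v; dirty(node); },
  };
  return node;
}

function computed(fn) {
  const node = {
    dependents: new Set(),
    isDirty: true,
    cached: undefined,
    get() {
      track(node);
      if (node.isDirty) {        // recompute lazily, on demand
        activeStack.push(node);
        node.cached = fn();
        activeStack.pop();
        node.isDirty = false;
      }
      return node.cached;
    },
  };
  return node;
}

// Diamond: a -> b, a -> c, d = b + c. No glitch: d only recomputes on read.
const a = signal(1);
const b = computed(() => a.get() * 2);
const c = computed(() => a.get() + 1);
const d = computed(() => b.get() + c.get());
console.log(d.get()); // 4
a.set(10);
console.log(d.get()); // 31
```

The glitch-freedom here comes purely from laziness: since nothing user-visible runs during the dirtying phase, there is no moment where `d` could observe a fresh `b` with a stale `c`.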
I think most sourdough recipes are written by people who are really into sourdough, because they involve so much bullshit. I was going to give up on sourdough until I discovered:
* Fold it over itself a few times every hour or so.
* When it looks risen, put it into whatever you want to bake it in, let it rise for another 30 minutes or so, bake at around 200C for about 30 minutes.
It's easier than fast yeast bread, as there is less kneading.
'Tis true. At the same time, Project Valhalla will be the most significant change to the JVM in a very long time, and probably its best chance to stay relevant in the future.
I'm writing a book, which covers the mental models for writing code in a functional style. The examples are in Scala, but it will be useful if you use other modern languages like Rust, Kotlin, Swift, OCaml, or TypeScript.
This article would benefit from an introduction that lays out the structure of what is to come. I'm expecting an article on effect systems, but it jumps straight into a chunky section on the implementation of function calls. I'm immediately wondering why this is here, and what it has to do with effect systems.
Also, this is a very operational description: how it works. It's also possible to give a denotational description: what it means. Having both is very useful. I find that people tend to start with the operational and then move to the denotational.
Yeah the next generation of Strix Halo is what would get me excited. I think right now TSMC has no capacity, so maybe we have to wait another year. Kinda ironic that all CPU/RAM capacity is being sold to LLM companies, and as a result we can't get the hardware needed for good local LLMs.
> all CPU/RAM capacity is being sold to LLM companies, and as a result we can't get the hardware needed for good local LLMs.
yeah... Ironic I guess. It's as if they've realised that it's only a matter of time until we get a "good enough" FOSS model that runs on consumer hardware. The fact that such a thing would demolish their entire business of getting VC hype while giving out their service at a loss was surely lost on them. Surely they and Nvidia have not realised that the only thing that could stop this is to make good hardware unreachable for anything smaller than a massive corp
Mark my words: in less than one year, we'll probably get something akin to a FOSS Opus 4.6. China is putting as much money into that as it can, because they know this would crash the US economy, which is in the green only thanks to big tech pumping up AI. China wants Trump either gone or neutered as soon as possible, which they know they can achieve by making Republicans as unelectable as possible - something that will probably happen if the economy crashes and a recession hits
I love regular expression derivatives. One neat thing about regular expression derivatives is that they are continuation-passing style for regular expressions. The derivative is "what to do next" after seeing a character, which is the continuation of the regex. It's a nice conceptual connection if you're into programming language theory.
Low-key hate the lack of capitalization on the blog, which made me stumble over every sentence start. Great blog post a bit marred by unnecessary divergence from standard written English.
Maybe they drafted it on a phone where capitalization is harder. My guess is the all-lowercase world is mostly people who do most of their text creation on phones and similar, not keyboards.
No one will mistake your posts for LinkedIn slop. You actually have something to say, with coherent arguments presented in paragraphs containing multiple sentences.
If you want sentences without capitalization to be your thing, then go for it. It's just a weird hill to die on, taking away from the readability of your posts for no real reason.
In all honesty it's just never bothered me before, and i haven't met many people bothered by it either
It's the same thing with dark mode as default, i chose it because it's my own preference and i'd love it everywhere, but i'm constantly being flashbanged by phone apps because someone decided #FFFFFF is a good background color while the app is loading.
It's your personal style. Researchers have their quirks; don't listen to the industry suits saying dumb shit like "it's unprofessional". You can mask if you're looking for a job at Google in the future, but for now enjoy being yourself and say fuck you to the lazy, socially imposed dogma of this particular community
I agree with this, and I'd add that there are two modes of processing errors: fail-fast (stop on the first error) and fail-last (do as much processing as possible, collecting all errors). The latter is what you want when, for example, validating a form: validate every field and return all the errors to the user.
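A quick sketch of the difference (the form and validator names here are made up for illustration):

```javascript
// Each validator returns an error message, or null if the field is OK.
const validators = {
  name:  v => v.trim() ? null : 'name is required',
  email: v => v.includes('@') ? null : 'email is invalid',
  age:   v => Number(v) >= 18 ? null : 'must be 18 or older',
};

// Fail-fast: stop at the first error (fine for internal invariants).
function validateFailFast(form) {
  for (const [field, check] of Object.entries(validators)) {
    const err = check(form[field]);
    if (err) return [`${field}: ${err}`];
  }
  return [];
}

// Fail-last: run every validator and collect all errors,
// so the user can fix the whole form in one round trip.
function validateAll(form) {
  return Object.entries(validators)
    .map(([field, check]) => ({ field, err: check(form[field]) }))
    .filter(({ err }) => err !== null)
    .map(({ field, err }) => `${field}: ${err}`);
}

const form = { name: '', email: 'nope', age: '17' };
console.log(validateFailFast(form)); // one error
console.log(validateAll(form));      // all three errors
```

Languages with applicative-style validation (e.g. Scala's `Validated`) make the fail-last mode composable, but the idea is the same: don't short-circuit when the caller wants the full error list.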
The article doesn't mention the other side of the tradeoff, which is that features like Rust's traits or macros make the language more expressive. Given that Rust's LSP server is pretty snappy, these features don't seem to cause problems in practice for incremental compilation.
The author is the former lead developer of Rust's LSP server, so it seems natural for him to prefer languages that would let him do a better job. Slow compilation (including incremental) is one of the most common complaints about Rust, despite a lot of effort spent on optimizations there. I think it would be good if more people were aware of the inherent downsides of maximizing expressiveness.
Rust compilation is actually very fast if you keep things Java-esque: no macros, no monomorphization (make every trait `dyn`). Obviously most of the ecosystem isn't built that way, leaning more on the metaprogramming style because the language makes it so convenient.
* I think the first implementation in JS land was Flapjax, which was around 2008: https://www.flapjax-lang.org/publications/
* The article didn't discuss glitch-freedom, which I think is fairly important.