> Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes
I love this. Ethernet is such shit. What do you mean the only way to handle a high speed to lower speed link transition is to just drop a bunch of packets? Or sending PAUSE frames which works so poorly everyone disables flow control.
Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets on the link as fast as it can, up to the size of the TCP receive window. At that point it’ll stop transmitting and wait for an ACK from the client before sending another window’s worth of packets.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
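The "several megabytes" figure is just the bandwidth-delay product: the amount of data that has to be in flight to keep the link busy for one round trip. A quick back-of-the-envelope check with the numbers above (1 gigabit, 20 ms):

```python
# Bandwidth-delay product: the window needed to saturate a link.
link_bps = 1_000_000_000   # 1 gigabit/s
rtt_s = 0.020              # 20 ms round-trip time

bdp_bytes = link_bps * rtt_s / 8
print(f"{bdp_bytes / 1e6:.1f} MB")   # -> 2.5 MB
```

So every speed-transition buffer would need on the order of 2.5 MB per flow to absorb a full window without loss, which is exactly why it's impractical.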
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and with it the transfer rate, to collapse. The server will then send packets with a small window so they get through. Then the window will slowly grow until there's packet loss again. Rinse and repeat. That's what causes the saw-tooth pattern you see on the linked page.
This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
You can gain a bit of performance by tuning it so it never exceeds the true speed of the link - which is only really useful when you know what that speed is and can guarantee it.
> It’s experimental but he built it with help from Claude in about a month.
We talk a lot about AI building programs from soup to nuts. But I think people overlook the more likely scenario. AI will turn 10x programmers into 100x programmers. Or in Matz’s case maybe 100x programmers into 500x programmers.
It actually helps me write better code. I am pretty lazy, so I don't do much refactoring unless I have to; I care mostly about happy paths, and I tend to skip handling edge cases unless I can't avoid it. I usually don't optimize much.
But since writing code is so easy now with AI, I write better code, because it no longer costs me extra time and effort.
Ding ding ding, correct answer! OP's target audience was people who are supposed to be using an API endpoint. It's self-evident OP can write clearly enough to communicate with the target audience.
> Speaking to people in a meeting allows them to emote, express difficulty of understanding, understand the sentiment and priority of what they're hearing -- and most of all, it allows them to listen rather than read. People speak at a much lower information density, and this is a less taxing form of communication.
Is that why everything is a Youtube video these days instead of written articles?
The real danger of TikTok and Youtube is that they allowed people who can't communicate in writing onto the Internet.
Yes, see my comment below. Memo -> meeting, book -> podcast / audiobook, newspaper article -> 10min youtube video, even, meme -> yt-short/tiktok
People are naturally motivated to watch, listen, and interact with other people. There's less of a need to explain why cognitive effort is required, and a lower risk of bouncing off the format because it's too difficult/boring/frustrating/etc. We're already primed to expend effort interacting with others.
I think there's also something more naturally-fit to our attention spans in oral media. Whilst people frequently claim our attention spans are dropping -- I think this is false (and some research agrees). Instead, media is being adapted to fit what our attention spans always were.
It's just that in reading, and in engaging with long-format content, our minds frequently drifted. We stopped paying attention and returned, over and over.
Instead, with shorter oral media we largely pay more attention but over shorter intervals.
A conversation also manages attention/interest/etc. well, dynamically adapting itself to the level of cognitive effort its participants are willing to spend.
Certainly I find myself naturally adapting my phrasing, humor, and so on to the people I'm talking to -- based on whether they are showing interest, listening, understanding and so on. This is how attention should always have been managed.
Writing always was, in my view, a necessary evil for the vast majority of purposes to which it was put. Not all, of course -- we still need checklists, scripts, technical notes, accounting books, and the like.
> Yes, see my comment below. Memo -> meeting, book -> podcast / audiobook, newspaper article -> 10min youtube video, even, meme -> yt-short/tiktok
Yeah, a dog can understand spoken words but can't read a memo. We should strive to use our human faculties and hold others to that standard, instead of lowering ourselves to communicating like animals.
> I tend to be very exacting in my word choice. If I used a specific word, I meant it. Many people I find speak in what I would describe as tone poems.