Hacker News | mkj's comments

Looking at the PR discussed, it's 34 commits! I'd probably ignore that too as a maintainer. The PR description isn't particularly motivating either: "Cleans up the implementation", "see #6735 for the actual motivation".

Fair call-out, though a couple of things to point out. I'm used to a squash-merge workflow, which I think makes reviews easier, since the reviewer can more easily see what changed after their comments. Many of the commits are merge commits. If you look at the timeline of the original PR, you'll see that it also started with a smaller scope, but as time passed I went through the "while at it, let me also fix this" loop that I mentioned in the article.

The point of the article is: there is a feature people would like, there is someone who wants to add it, and far more than the appropriate amount of time has been spent trying to get it merged, yet the feature is nowhere to be found. That's the two-way street I'm trying to get across. I wish I hadn't even been able to open the PR; I wish the maintainers would use more automation to triage feature requests and match potential contributors with agreed-upon plans and agreed-upon timelines, so that both sides' time could be used much more effectively.

As far as PR descriptions etc. go, I asked multiple times what the best route to merging would be. If that route went through better descriptions, I was happy to write them. As you can see, I wasn't aware of the "no conventional commits" rule, so in my next PRs I used the correct approach, but that check should be completely automatable. Yes, I should have spent more time studying Jellyfin's conventions, but I shouldn't have to. Not because it's unfair to me, but simply because there are more contributors than maintainers, so maintainers should not rely on desired behavior from contributors; they should enforce that behavior as much as possible.


Many of those are "Merge branch 'master' into armanc/subtitle-sync-refactor". Rebasing the PR on top of master would bring that down to like 15 or something.

Fair enough. A 15-commit PR is still pretty long-winded.

Isn't Nix just reinventing what Vesta did for software reproducibility decades earlier? https://vesta.sourceforge.net/

Are you saying Bram hasn't worked on VCS problems much? https://web.archive.org/web/20071213090008/http://codeville.... is 20 years old.


It looks like Firecracker already supports ACPI vmgenid, which triggers the Linux kernel RNG to reseed: https://github.com/firecracker-microvm/firecracker/blob/main...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

So that just (!) leaves userspace PRNGs.
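To illustrate the gap (all names here are hypothetical, not any real library's API): a userspace PRNG can at best reseed from /dev/urandom when it notices a fork via getpid(), but a VM snapshot restore keeps the same pid, so without something like vmgenid plumbed through to userspace it has no way to notice. A minimal sketch:

```rust
use std::fs::File;
use std::io::Read;
use std::process;

// Hypothetical userspace PRNG wrapper that reseeds defensively.
struct ReseedingPrng {
    state: u64,
    pid_at_seed: u32,
}

impl ReseedingPrng {
    fn new() -> Self {
        let mut p = ReseedingPrng { state: 0, pid_at_seed: 0 };
        p.reseed();
        p
    }

    fn reseed(&mut self) {
        // The kernel RNG behind /dev/urandom is what vmgenid reseeds.
        let mut buf = [0u8; 8];
        File::open("/dev/urandom").unwrap().read_exact(&mut buf).unwrap();
        self.state = u64::from_le_bytes(buf);
        self.pid_at_seed = process::id();
    }

    fn next(&mut self) -> u64 {
        // After fork() the child has a new pid, so we reseed. A snapshot
        // restore keeps the same pid, which this check cannot catch --
        // that is exactly the remaining userspace PRNG problem.
        if process::id() != self.pid_at_seed {
            self.reseed();
        }
        // xorshift64 step (illustrative only, not cryptographic)
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        self.state
    }
}
```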


0BSD doesn't.


Does that page even say which RISC-V CPUs are being used that are slow? I couldn't see it, which makes it seem like a bit of pointless complaining.


> RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on a board).

Which boards are used specifically should not matter much. There's not much available.

Except for the Milk-V Pioneer, which has 64 cores and 128 GB of RAM. But that's an older architecture, and it's expensive.


Intriguing work! Does it panic on any bad inputs? That's better than the memory unsafety of libxml2, but still a DoS concern for some servers.
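If panics do turn out to be reachable, callers can at least contain them at the request boundary. A hedged sketch (the `parse` function here is a hypothetical stand-in for the real parser, not the crate's actual API), assuming the default panic=unwind build:

```rust
use std::panic::{self, AssertUnwindSafe};

// Hypothetical parse function standing in for the XML parser under discussion.
fn parse(input: &str) -> usize {
    if input.contains('\u{0}') {
        panic!("unexpected NUL in input"); // simulated parser bug
    }
    input.len()
}

// Server-side wrapper: a panic aborts this one request instead of the process.
fn parse_checked(input: &str) -> Result<usize, String> {
    panic::catch_unwind(AssertUnwindSafe(|| parse(input)))
        .map_err(|_| "parser panicked on malformed input".to_string())
}
```

Note this doesn't fully address the DoS angle: it does nothing under panic=abort, and pathologically slow inputs still need time and size limits.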


No it's not, it's 6pm!


> For example, std::time::Instant is implemented on the GPU using a device timer

The code is running on the GPU there. It looks like remote calls are only for "IO"; the compiled stdlib generally runs on the GPU. (Going just from the post, I haven't looked at any details.)


Which is a generally valid implementation of IO. For instance, on the Nintendo Wii, the support processor ran its own little microkernel OS and exposed an IO API that looked like a remote filesystem (including Plan 9-esque network sockets as filesystem devices).


I'm surprised this article doesn't provide a bigger list of calls that run on the gpu and further examples of what needs some cpu interop.


Flip on the pedantic switch. We have std::fs, std::time, some of std::io, and std::net(!). While the `libc` calls go to the host, all the `std` code in-between runs on the GPU.
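Going only from the post, the split can be modelled roughly like this (all names hypothetical): the std-style logic runs on one side, and only the libc-level call crosses a channel to the host, the way the GPU-side std forwards its IO:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical host-call protocol; the real thing forwards libc-level calls.
enum HostCall {
    Print(String),
}

// "Device" thread does the in-between std work (formatting here), and only
// the final write is shipped across the channel to the "host" side.
fn run_device_model() -> Vec<String> {
    let (tx, rx) = mpsc::channel::<HostCall>();
    let device = thread::spawn(move || {
        let msg = format!("2 + 2 = {}", 2 + 2); // std logic "on device"
        tx.send(HostCall::Print(msg)).unwrap(); // only IO crosses over
    });
    device.join().unwrap();
    rx.iter()
        .map(|call| match call {
            HostCall::Print(s) => s,
        })
        .collect()
}
```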


It looks like it is only applied for PTY sessions, which most computer-computer connections wouldn't be using.

https://github.com/openssh/openssh-portable/blob/d7950aca8ea...
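For reference, the feature in question (OpenSSH's ObscureKeystrokeTiming) works by sending on a fixed-interval clock rather than when keys are pressed, plus chaff packets. A toy sketch of just the quantization step, with the 20 ms interval picked for illustration:

```rust
use std::time::Duration;

// Quantize an event's send time up to the next tick of a fixed-interval
// clock, so packet timing no longer reveals inter-keystroke timing.
// (Illustrative model only; real OpenSSH also injects fake "chaff" packets.)
fn next_send_slot(elapsed: Duration, interval: Duration) -> Duration {
    let ticks = elapsed.as_millis() / interval.as_millis() + 1;
    Duration::from_millis((ticks * interval.as_millis()) as u64)
}
```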

