Hmm I dunno if I'm a fan of making things painful for students just because it was painful in the past. When I "learnt" assembly in uni we had to manually assemble the opcodes and type them into a 10 digit keypad. I didn't learn anything, it just put me off.
I'm pretty skeptical that using a line editor will have helped them learn. It probably helped them memorise their code but is that really learning? Dubious.
I tried that but most of those seem to be false positives. It seems to report Triple Construction based just on commas, which were apparently way more common in writing of the time. It also reports em-dashes which are obviously irrelevant in this case.
And nobody is really saying that you need to completely eliminate these constructs. 17 matches out of 305 words is an order of magnitude less than the example it opens with.
Yeah but there's a reasonable amount of content that has real information from a real human but they've used AI to help write it. Maybe they got a first draft from AI and then fixed it.
Those can still be things that I want to read but the AI rhetorical style is so tedious and overused at this point it's really annoying to read them. So this tool would help with those cases. (Assuming people actually use it.)
I believe you, but I have yet to see any I want to read.
I've had 5 or 6 times where I've thought, OK, finally someone has produced something useful this way. And in the end I've always been bitten, by hallucinations or by an inability to work out what the author cared about or was trying to express.
Again, I believe you've had ones where you did want to read it, I'm not trying to contradict that. I'm still waiting though, and until then I'd really prefer the aesthetic tells to still be obvious.
I think the problem with this logic is that it views language performance on an absolute scale, whereas people actually care about it on a relative scale compared to how fast it could be.
If you tell your boss "We spent $1m on servers this month and that's as cheap as it's possible to be" he'll be like "ok fine". If you say "We spent $1m on servers this month but if we just disable this compiler security flag it could be $500k." ... you can guess what will happen.
(Counterpoint though: people use Python.)
But counter-counterpoint: Rust does so much more than preventing runtime memory errors. Even if Fil-C had no overhead (or I was using CHERI) I would still use Rust.
> than if you were using the equivalent tooling for Pascal, C, or Zig.
I think GP is talking about not-directly-related-to-safety things like sum types/pattern matching/traits/expressive type systems/etc. given the end of that paragraph. I don't think you can get "equivalent tooling" for such things in the languages you list without raising interesting questions about what actually counts as Pascal/C/Zig.
I know what they were talking about. It was clearly intended to be a cheerfest for Rust.
> I don't think you can get "equivalent tooling" for such things the languages you list without raising interesting questions about what actually counts as Pascal/C/Zig.
I said builds. All of the languages I mentioned have "equivalent tooling" for that (i.e. compilers—to produce builds for the programs you choose to write in those languages).
> And if the target uses sudo at all you don't even need an exploit!
Why would a target executable use sudo? There are proper mechanisms for automated elevation of permissions and sudo isn’t it.
sudo is designed for user interactivity. And by default prompts for a password. However some people get lazy and disable the password entry requirement.
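For reference, the lazy configuration mentioned above is a one-line sudoers entry (a sketch; "alice" is a placeholder username, and edits like this should go through visudo so a syntax error can't lock you out):

```shell
# /etc/sudoers.d/alice -- hypothetical drop-in; install via `visudo -f`
# Lets this user run any command as root with no password prompt:
alice ALL=(ALL) NOPASSWD: ALL
```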
A target user. If you get local code execution on the account of a user that uses sudo you can trivially get root. Doesn't matter whether they disabled password authentication or not.
Of course it matters if they disabled password authentication. If you require password authentication when running sudo then an attacker has to find an RCE exploit and then crack a password. Which is waaay beyond any effort the average attacker is willing to invest, because at that point root access isn’t really worth the effort.
An attacker will probably just use the host for sending spam emails, bot / DDoS traffic or look for other daemons they can jump to which weren’t web accessible (eg a database).
And furthermore, if you’ve got an RCE in a daemon then that code is running as the daemon’s user. Which shouldn’t be in the sudoers file (eg the wheel group) to begin with.
Interesting. If that’s possible (I haven’t tested it, but I’m sure it is) then you wouldn’t even need to log the password. You could just alias sudo to a bash script that runs your malicious payload using the real sudo. Then the user would run the command, be prompted for their password by the real sudo, and be none the wiser that a malicious script has just been executed
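The wrapper described above can be sketched in a few lines (illustrative only; it stages a fake `sudo` in a demo directory rather than touching anyone's shell config, and the "payload" is a placeholder):

```shell
# Sketch of the sudo-wrapper attack described above (do not deploy).
# The attacker drops a fake "sudo" early in the victim's PATH, or
# aliases sudo to it in the victim's shell rc file.
mkdir -p /tmp/demo-bin
cat > /tmp/demo-bin/sudo <<'EOF'
#!/bin/sh
# malicious payload would run here, as the invoking user (placeholder)
: payload
# then hand off to the real sudo, so the victim still sees the
# familiar password prompt and suspects nothing
exec /usr/bin/sudo "$@"
EOF
chmod +x /tmp/demo-bin/sudo
# A victim with PATH=/tmp/demo-bin:$PATH now runs the wrapper
# transparently every time they type `sudo <anything>`.
```

The key point is the `exec` to the real binary: because the genuine sudo still runs and still prompts, the user gets exactly the interaction they expect.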
For what it’s worth, Windows’ security model says it’s not an exploit that programs can grant themselves admin rights if the user is an admin (https://github.com/hfiref0x/UACME). But afaik Linux doesn’t have that model so it is a bit of an issue that this is possible
It’s literally in the opening post you replied to:
> A local privilege escalation to root via an exploitable service?
> Doesn't Linux have one of these CVEs...each week?
Why else would people be talking about docker, and user/group ownership of running services, and so on and so forth, in response to their comment and yours?
How are you going to do that without write access to the users home directory?
Like I said before, your RCE exploit will be running as the user and group of the service you exploited. For example www:www
So you’re not going to be able to write into Joe Bloggs’ .bashrc file unless Joe was stupid enough to enable write permission for “other”. Which, once again, requires the user to purposely modify the system into being less secure than its default configuration
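The scenario above is easy to check concretely (a sketch using a stand-in file rather than a real ~/.bashrc): a process running as some other user, say www:www, can only write to the file if the "other" write bit is set, which no sane default does.

```shell
# Could a non-owner process write to this file? Only if o+w is set.
f=/tmp/demo-bashrc               # stand-in path for the demo
touch "$f"
chmod 644 "$f"                   # typical default: rw-r--r--
if [ -n "$(find "$f" -perm -002)" ]; then
    echo "world-writable (misconfigured)"
else
    echo "not world-writable"    # this branch runs for mode 644
fi
```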
> your RCE exploit will be running as the user and group of the service you exploited. For example www:www
Only if the exploit is through a web server or similar. If it's through the user's web browser, email client, video player, etc. etc. then you'll have write access to their home directory.
But that's not a daemon then. That's a completely different type of exploit from the ones we were originally talking about.
Yes, if a desktop application has a bug then it can do damage. But at that point, who cares about sudo? The exploit already has access to your ssh keys, browser cookies and history (so can access banking and shopping sites), crypto-currency wallets and so on and so forth.
What an exploit has access to here is so much worse than getting root access on a desktop OS.
IPv4 is going to be a necessity for many many decades no matter what Microsoft do. Even when IPv6 is at 99%, people aren't going to want 1 in every 100 people to not be able to access their site at all. It'll need to be like 99.9% before we start seeing serious IPv6-only services.
I don't know what the percentage would be, but we do have some historical precedent that could give us a clue.
Best one I can think of is when bigger websites started actually dropping SSLv3 and TLSv1.0 (and later TLSv1.1) support, cutting off older browsers and operating systems. Google and Amazon still support TLSv1.0, but plenty of others (including Microsoft) have dropped 1.0 and 1.1. HN itself doesn't accept 1.1 anymore either.
Then there's browser support. Lots of websites - big and small - cut off support for Internet Explorer 6 when it was somewhere below 5% marketshare because the juice was no longer worth the squeeze. Of course, few of those actually fully cut off the ability to browse the (now broken) website, but it's a datapoint suggesting trade-offs can and will be made for this sort of thing. Or to put it in the present: a significant number of webapps don't support Firefox (3% market share) to the extent their product is completely unusable in it.
Sure, but the implementation in the public clouds is totally backwards.
What they should have done is have their core network default to IPv6 with IPv4 an optional add-on for things like public IP addresses, CDN endpoints, edge routers, VPNs, etc...
Instead, their core networks are IPv4 only for the most part with IPv6 a distant afterthought.
Yeah my thoughts exactly. Definitely slop. I have no objection to using AI to help writing. I just don't want to read the same sloppy cliches again and again and again. The short sentences. The Bigger Picture. Here's the rub. It's not just A, it's B.
It's like those cliche titles - for fun and profit, the unreasonable effectiveness of, all you need is, etc. etc. but throughout the prose. Stop it guys!
Sure. Short sentences like "It shouldn’t be.", "I’ve moved on.", "Ollama didn’t.", etc.
Not-this-but-that like "The local LLM ecosystem doesn’t need Ollama. It needs llama.cpp."
Weird signposting: "Benchmarks tell the story."
Heres-the-rub conclusion: "The Bigger Picture"
Starting every title with "The ...".
It's definitely largely human-written, but there are enough slop-isms to make it annoying to read. And of course it's totally possible for a human to write in an AI style, but that doesn't make it any less annoying.
* General DIY/inventions: DIY Perks, Uri Tuchman, Stuff Made Here, Colin Furze, Applied Science, Breaking Taps
I think it's actually not too bad at surfacing this stuff. They also have a "New to You" button you can click.
My main complaint is it will recommend a specific video to you for aaaages without you clicking on it before it finally realises you aren't interested. You can manually say you aren't interested, but it's two clicks and you shouldn't need to do that anyway.
Thanks!
I know a couple of 'em, will check the rest.
Indeed not hard to surface, but a handful of channels is a drop in the ocean of all the videos that must have been uploaded and are at least nice to watch and informative. Sometimes I get these rare gems inside my recommendations; a small channel with a couple of very interesting videos, maybe not the best or slickest productions, but definitely of interest. I guess the algo strongly favors a regular upload rhythm.
I can subscribe to these channels, but I can't even find them in my subscriptions. There's no overview, and sometimes I subscribe to channels that I know I already subscribed to (the channels themselves also experienced this unsubscribing behavior and made this known in their videos).
> My main complaint is it will recommend a specific video to you for aaaages without you clicking on it before it finally realises you aren't interested. You can manually say you aren't interested, but it's two clicks and you shouldn't need to do that anyway.
> I can subscribe to these channels, but I can't even find them in my subscriptions. There's no overview
You can just click "subscriptions" on the left to only show videos from channels you are subscribed to, and then there's another button somewhere to show a list of your subscriptions.
> a handful of channels is a drop in the ocean of all the videos that must have been uploaded and are at least nice to watch and informative
Big caveat: do not try to use Git and JJ in the same directory. It's probably fine if you only use JJ, but if you mix them you will horribly break things.
I suppose it depends what you mean by "horribly break things".
The only thing I've noticed is that `jj` will leave the git repo with either a detached HEAD, or with a funny `@` ref checked out.
I don't think that would trouble someone who's experienced with git and knows its "DAG of commits" model.
For someone who's less experienced, or only uses git for a set of branches with mostly linear history (like a sort of "fancy undo"), I could imagine getting a shock when trying to `git commit` and not seeing them on any of the branches!
Jujutsu uses git as its primary backing store and synthesizes anything else it needs on top on-the-fly. Any incompatibility here is considered a serious bug.
Obviously I can’t argue against your lived experience, but it is neither typical nor common. This is quite literally an explicitly-supported use, and one that many people do daily.
> Obviously I can’t argue against your lived experience, but it is neither typical nor common.
I consider myself a proficient jj user, and it was my lived experience too. Eventually you get your head around what is going on, but even then it requires a combination of jj and git commands to bring them into sync, so `jj status` and `git status` say roughly the same things.
The friction isn't that jj handles state differently, it's that `jj git export` doesn't export all of jj's state to git. Instead it leaves you in a detached HEAD state. When you are a newbie looking for reassurance this newfangled tool is doing what it claims by cross checking it with the one you are familiar with, this is a real problem that slows down adoption and learning.
There are good reasons for `jj git export` leaving it in a detached head state of course: it's because jj can be in states it can't export to git. If we had a command, say `jj git sync` that enforced the same constraints as `jj git push` (requiring a tracked branch and no conflicts) but targeted the local .git directory, it would bridge the conceptual gap for Git users. Instead of wondering why git status looks wrong, the user would get an immediate, actionable error explaining why the export didn't align the two worlds.
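The git-side symptom described above can be reproduced without jj at all, by detaching HEAD by hand (a sketch; the `jj git sync` command mentioned is the hypothetical one proposed above, not something that exists today):

```shell
# git-only reproduction of what a colocated repo looks like after jj
# syncs state to git: HEAD is detached, so `git status` shows no branch.
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git -c user.email=a@b.c -c user.name=demo \
    commit -q --allow-empty -m one
git checkout -q --detach HEAD        # roughly the exported state
git symbolic-ref -q HEAD || echo "detached HEAD"
# The manual git-side fix today, until something like the proposed
# `jj git sync` exists:
git checkout -q main
git symbolic-ref --short HEAD        # prints: main
```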