Hacker News | dboreham's comments

It's almost as if management was a useful function in organizations ;)

I think it's more that the cost to find a vulnerability has significantly decreased, not that the vulnerability couldn't have been found before. But that cost mattered tremendously, because someone has to fund the effort to find the bugs. The same economics apply to attackers.

Is Firefox less invested in this than Curl? I mean there must be some explanation for this.

It's in the first sentence of your quote:

"our continued collaboration with Anthropic"

Read this as: "we get discounts, rate limit increases, a direct line to responsible product managers; in exchange we participate in friendly marketing." It's extremely common in this line of business - typical of database vendors, software tool companies, etc.


This is more in response to my original post, but okay interesting point. (When I said "invested" here I meant invested in finding security flaws.)

Conspiratorial nonsense

I would expect Firefox to be less invested in this than Curl. Firefox is aimed at consumers, Curl is embedded in a wide variety of products.

Attacker might have MITMed the console.

I think the idea is the attacker didn't compromise both the local machine and the remote log-sink machine. If you want to get really fancy, the techniques used in cert revocation logs/blockchains could be used.
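The core trick behind those tamper-evident logs is just hash chaining: each entry's hash covers the previous entry's hash, so editing any earlier entry invalidates everything after it. A minimal sketch (names and messages are illustrative, not from any particular tool):

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, message):
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    entry_hash = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    chain.append({"message": message, "hash": entry_hash})

def verify(chain):
    """Recompute every hash; tampering with any entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        expected = hashlib.sha256((prev_hash + entry["message"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = expected
    return True

log = []
append_entry(log, "sshd: accepted publickey for alice")
append_entry(log, "sshd: session opened")
print(verify(log))   # True
log[0]["message"] = "sshd: accepted publickey for mallory"
print(verify(log))   # False
```

Real systems (Certificate Transparency, etc.) use Merkle trees instead of a flat chain so you can prove inclusion of a single entry efficiently, but the tamper-evidence property is the same.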

Blockchain is completely unnecessary (as it always is; I thought people stopped trying to ram that garbage into everything years ago).

I was answering this question from GP:

> Unfortunately it appears openssh doesn't even have an option to create such a logfile!! Why not??

The answer is that on Linux systems, logging is handled at the system level, just as starting and running openssh itself is. The answer to "why not" is that this is the logging system's job, not openssh's.

rsyslogd is one simple and direct way to distribute logs to remote machines.
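For instance, a single rsyslog rule forwards auth-facility messages (which is where sshd logs) to a remote collector. This is a hedged sketch; the file path and hostname are illustrative:

```conf
# /etc/rsyslog.d/50-remote.conf  (illustrative path and destination)
# Forward auth/authpriv messages, including sshd's, to a remote log sink.
# '@@' means TCP; a single '@' would use UDP.
auth,authpriv.*  @@loghost.example.com:514
```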


Almost full-circle back to when Oracle took over the entire volume and implemented its own filesystem.

I wonder why this is not more common. LVM is easy to set up, and it's already common to allocate volumes for things like disk images for VMs, so why not databases?

Some Linux filesystems, notably ext4 and XFS, provide the necessary features to get 90% of the benefit simply by using O_DIRECT correctly. The last 10% is achieved by doing direct I/O to raw block devices, with the obvious caveat that this is not as easy to manage.

Both of these are commonly done in database storage engines.
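The main practical wrinkle with O_DIRECT is alignment: the buffer address, file offset, and transfer length all have to be aligned, typically to the logical block size (often 4096 bytes). A rough sketch of that constraint (not a storage engine; the fallback exists only so the demo runs on filesystems that reject O_DIRECT, such as tmpfs):

```python
import mmap
import os
import tempfile

BLOCK = 4096  # assumed logical block size; real code should query the device
path = os.path.join(tempfile.mkdtemp(), "datafile")
flags = os.O_CREAT | os.O_RDWR

fd = None
if hasattr(os, "O_DIRECT"):                  # not defined on every platform
    try:
        fd = os.open(path, flags | os.O_DIRECT, 0o644)
        mode = "O_DIRECT"
    except OSError:
        fd = None                            # filesystem rejected O_DIRECT
if fd is None:
    fd = os.open(path, flags, 0o644)
    mode = "buffered fallback"

buf = mmap.mmap(-1, BLOCK)                   # anonymous mmap is page-aligned
buf[:] = b"x" * BLOCK
written = os.write(fd, buf)                  # one aligned block write
os.close(fd)
print(mode, written)
```

With O_DIRECT in effect, that write bypasses the page cache entirely, which is exactly what a database wants when it maintains its own buffer pool.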


If you preallocate and O_DIRECT, haven't you basically soaked up most of the benefit of skipping the filesystem?

Because the speed increase is, on modern, properly tuned filesystems, surprisingly small, due to how RDBMSes manage their buffer pool: by working on large container files, they avoid most of the filesystem overhead.
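Those container files are typically preallocated in one call, so the filesystem can reserve contiguous extents up front rather than growing the file piecemeal. An illustrative sketch (the filename and size are made up):

```python
import os
import tempfile

# Preallocate a 64 MiB container file in one call so the filesystem
# reserves extents up front instead of allocating on each append.
SIZE = 64 * 1024 * 1024
path = os.path.join(tempfile.mkdtemp(), "tablespace.dat")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
os.posix_fallocate(fd, 0, SIZE)   # falls back to writing zeros where unsupported
os.close(fd)
print(os.path.getsize(path))      # 67108864
```

After this, the database writes into the file at fixed offsets and never triggers allocation (or the associated metadata journaling) on the hot path.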

Short TTL DNS or BGP anycast.

You don't need NAT traversal when talking to a cloud service.

We have browser-based hardware used inside a construction site manager's site office, behind a random firewall.

JS is used because it's (still) the only code you can run in a browser. Although node and bun are regular OS processes, their use/popularity traces back to that browser environment one way or another.

I use the subscription and so far have had no problems.

JS people like shiny things.
