
The eigenvalue distribution looks somewhat similar to Benford's Law - isn't that expected for a human-curated corpus?

I would expect that for any sampling of data that has a roughly similar distribution over many scales.

Which will be true of many human-curated corpora. But it will also be true of natural data, such as the lengths of random rivers or the brightness of random stars.

The law was first noticed because logarithm books tended to wear out at the front first. That turned out to be because most numbers have a small leading digit, so the pages at the front were being looked up more often.
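The "roughly similar distribution over many scales" argument above can be sketched numerically: sampling values log-uniformly across several decades reproduces Benford's leading-digit frequencies. This is an illustrative toy, not any particular corpus.

```python
import math
import random

random.seed(0)

# Sample values log-uniformly across six orders of magnitude (1 to 10^6),
# i.e. data with a "roughly similar distribution over many scales".
samples = [10 ** random.uniform(0, 6) for _ in range(100_000)]

# Tally the leading digit of each sample via scientific notation,
# e.g. f"{314.15:e}" == "3.141500e+02", whose first character is the digit.
counts = {d: 0 for d in range(1, 10)}
for x in samples:
    counts[int(f"{x:e}"[0])] += 1

# Benford's Law predicts P(d) = log10(1 + 1/d), about 30.1% for d = 1.
for d in range(1, 10):
    predicted = math.log10(1 + 1 / d)
    print(f"{d}: observed {counts[d] / len(samples):.3f}, predicted {predicted:.3f}")
```

The leading digit 1 dominates simply because a log-uniform variable spends ~30% of each decade between 1.0 and 2.0 (in mantissa terms), which is the whole content of the law.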


SUB has higher latency than XOR on some Intel CPUs:

Latency (L) and throughput (T) measurements from the InstLatx64 project (https://github.com/InstLatx64/InstLatx64):

  | GenuineIntel | ArrowLake_08_LC | SUB r64, r64 | L: 0.26ns=  1.00c  | T:   0.03ns=   0.135c |
  | GenuineIntel | ArrowLake_08_LC | XOR r64, r64 | L: 0.03ns=  0.13c  | T:   0.03ns=   0.133c |
  | GenuineIntel | GoldmontPlus    | SUB r64, r64 | L: 0.67ns=  1.0 c  | T:   0.22ns=   0.33 c |
  | GenuineIntel | GoldmontPlus    | XOR r64, r64 | L: 0.22ns=  0.3 c  | T:   0.22ns=   0.33 c |
  | GenuineIntel | Denverton       | SUB r64, r64 | L: 0.50ns=  1.0 c  | T:   0.17ns=   0.33 c |
  | GenuineIntel | Denverton       | XOR r64, r64 | L: 0.17ns=  0.3 c  | T:   0.17ns=   0.33 c |
I couldn't find any AMD chips where the same is true.

0.03 ns corresponds to a frequency of 33 GHz. The chip doesn't actually clock that fast. What I think you're seeing is the front end detecting the zeroing idiom and directing the renamer to zero that register and remove the instruction from the stream hitting the execution resources.

SUB does not have higher latency than XOR on any Intel CPU, when those operations are really performed, e.g. when their operands are distinct registers.

The anomalous values among those you listed, i.e. those where the latency is less than 1 clock cycle, are cases where the operations were not actually executed.

There are various special cases that are detected, and such operations are not executed in an ALU. For instance, when the operands of XOR/SUB are the same, the operation is not performed and a zero result is produced. On certain CPUs, the cases where one operand is a small constant are also detected, and the operation is done by special circuits at the register-renamer stage, so such operations never reach the schedulers for the execution units.

To understand the meaning of the values, we must see the actual loop that has been used for measuring the latency.

In reality, the latency measured between truly dependent instructions cannot be less than 1 clock cycle. If a latency-measuring loop yields a time that, divided by the number of instructions, is less than 1 cycle, that is because some of those instructions were skipped. So the XOR latency-measuring loop must have included XORs between identical operands, which were eliminated at rename.


Alpha: r31, f31

> This is a bit like saying stop using Ubuntu, use Debian instead.

Not really, because Ubuntu has always acknowledged Debian and explicitly documented the dependency:

> Debian is the rock on which Ubuntu is built.

> Ubuntu builds on the Debian architecture and infrastructure and collaborates widely with Debian developers, but there are important differences. Ubuntu has a distinctive user interface, a separate developer community (though many developers participate in both projects) and a different release process.

Source: https://ubuntu.com/community/docs/governance/debian

Ollama has never done that for llama.cpp. That's all that's being asked for: a credit.


OK. That says absolutely nothing about actual UX or anything that matters to most actual users (as opposed to argumentative HN ideologues).

If you think files are easier than a database, check out https://danluu.com/file-consistency/

What if the "slug" was a prefix for the API key revocation URL, so the API key was actually a valid URL that revoked itself if fetched/clicked? :)

I suspect a lot of tools will try to fetch the URL without explicit user action (e.g. messengers do that kind of crap). It would be hard to keep keys non-revoked, which is a nice side effect.

But API keys aren't meant to be revoked once used, right?

I suppose there could be two checksums, or two hashes: the public spec that can be used by API key scanners on the client side to detect leaks, and an internal hash with a secret nonce that is used to validate that the API key is potentially valid before needing to look it up in the database.

That lets clients detect leaks, but malicious clients can't generate lots of valid-looking keys to spam your API endpoint and generate database load just from looking up API keys.


That second hash is called a Message Authentication Code (MAC); it's what the JWT HS256 algorithm does.
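The two-checksum scheme described above can be sketched in a few lines. Everything here is hypothetical: the key format (prefix + random body + public CRC32 + truncated HMAC tag), the `sk_demo_` prefix, and the function names are invented for illustration, not any real provider's spec.

```python
import hashlib
import hmac
import secrets
import zlib

SERVER_SECRET = secrets.token_bytes(32)  # known only to the API provider

def make_key(prefix: str = "sk_demo_") -> str:
    """Hypothetical key layout: prefix + 32-hex body + 8-hex CRC32 + 16-hex MAC tag."""
    body = secrets.token_hex(16)
    # Public checksum: part of the published key format, so client-side
    # scanners can recognize leaked keys without any secret.
    checksum = f"{zlib.crc32((prefix + body).encode()):08x}"
    # Private tag: HMAC over everything public, verifiable only by the server.
    tag = hmac.new(SERVER_SECRET, (prefix + body + checksum).encode(),
                   hashlib.sha256).hexdigest()[:16]
    return prefix + body + checksum + tag

def looks_leaked(key: str) -> bool:
    """Client-side scanner: verifies only the public checksum."""
    payload, checksum = key[:-24], key[-24:-16]
    return f"{zlib.crc32(payload.encode()):08x}" == checksum

def server_prevalidate(key: str) -> bool:
    """Server-side: verify the MAC cheaply before any database lookup."""
    payload, tag = key[:-16], key[-16:]
    expected = hmac.new(SERVER_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(expected, tag)

key = make_key()
print(looks_leaked(key))         # True: checksum checks out, no secret needed
print(server_prevalidate(key))   # True: MAC verifies against the server secret
```

A client who knows only the public spec can mint keys that pass `looks_leaked`, but without `SERVER_SECRET` they can't forge the tag, so `server_prevalidate` rejects them before the database is ever touched.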

I'm running Gemma 4 with the llama.cpp web UI.

https://unsloth.ai/docs/models/gemma-4 > Gemma 4 GGUFs > "Use this model" > llama.cpp > llama-server -hf unsloth/gemma-4-31B-it-GGUF:Q8_0

If you already have llama.cpp you might need to update it to support Gemma 4.


The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.

Was/is this a distro thing, or an actual issue?

Every Nvidia GPU I've used [1] has worked perfectly, from the change from XFree86 to Xorg, through the Compiz wobbly-window craze, to the introduction of GPGPU APIs like CUDA/OpenCL and, more recently, Vulkan.

I do recall once helping a friend set up a Debian machine and an Ubuntu machine with Nvidia (which I had never used before), and it took some figuring out how to install the non-free drivers, so maybe my choice of Gentoo and Arch (not being as conservative towards non-free licenses as Debian/Ubuntu) always made it a non-issue?

[1] 6800 Ultra, 7800 GTX , 7900 GTX, 8800 GTX, GTX 280, GTX 480, GTX 680, GTX 760 Ti, RTX 2080, RTX 4080... probably missed some.


I've also never had any trouble with NVIDIA on the desktop. I think most issues people have are on laptops, which have odd hybrid/dual GPU setups, and which exercise suspend/hibernate much more aggressively.


That's a good point that I hadn't considered. I've never had a laptop with Nvidia, I probably subconsciously avoided those dual GPU setups as they sounded hacky and I never really needed fast 3D on a laptop.


FWIW I have an Asus Zephyrus G14 and the dual graphics setup works pretty well in Linux in hybrid mode. It's pretty cool: certain things (games) run on the dedicated Nvidia GPU, and everything else runs on the built-in AMD GPU.

I'm guessing it's because the laptops are popular enough that there's a dedicated group of people that make it work [0].

I'm still on X11, dunno what the story is like with Wayland though.

[0] https://asus-linux.org/


As far as I know dual graphics laptops are a pain no matter the OS and chips.

The one sample I know of first-hand is an AMD/Nvidia laptop that never obeys the settings about which GPU to use. In Windows.


If you have sufficiently old Nvidia GPUs, eventually drivers and supporting software stop shipping with distros. I have a bunch of older laptops that were supported in Ubuntu around 10 years ago, but the drivers stopped being updated and Ubuntu dropped them from its repos.


We've had open source AMD drivers for... 20ish years now? Meanwhile Nvidia only begrudgingly added open source driver support in the last year or two. So maybe some recency bias.


> The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.

It could also be lucky consequence of what games you play and what else you do with your computer.

I was a long-time Nvidia user, and had plenty of problems with their drivers. They ranged from minor annoyances when switching between virtual consoles (which some people never do) to total system freezes when playing a particular game (which some people never play). It would have been easy for someone else to never encounter these problems.

Since switching to AMD a couple years ago, I have been much happier.


Nvidia X11 support has been pretty good for quite some time. It's Nvidia's Wayland support that has been less than stellar, though that has gotten better over the last year to year and a half.

Now, I think it's no big issue so long as you are using a distro that ships up-to-date drivers. That should be about everyone now, as I think even Debian stable currently has decent drivers.


Does Nvidia need to support Wayland, or does Wayland need to support Nvidia? I.e., what is the support at the API boundary which is missing?


I'm not sure exactly what the API boundaries are.

I know that the Nvidia driver is integrated into the kernel and that Wayland compositors talk to Nvidia through the kernel. I also know that for accelerated rendering, they talk directly to the Nvidia drivers (bypassing the kernel? IDK).

But I also know that in the Nvidia release notes, they've mentioned changes to improve Wayland support and functionality.


Same, no issues with any Nvidia card going back two decades, across several PCs and laptops, Linux and Windows.


It has more to do with how you're using the cards. I don't see you mention gaming at all, and that's where the biggest performance penalty and lack of support are apparent.


I just migrated to Linux (Bazzite) in March; I have an RTX 3080. The only issue I ran into was that Display Stream Compression is not supported on Linux, so I can't run 1440p at 165 Hz with HDR on because my monitor doesn't support HDMI 2.1. Either I need to turn off HDR or lower the refresh rate to 120 Hz.


I'm nowhere near qualified to say whether the design is unsafe, but I'm surprised the article doesn't mention that some heat shields are designed to, indeed, blow chunks: https://en.wikipedia.org/wiki/Atmospheric_entry#Ablative


This one is an ablative heat shield, but it’s supposed to flake off gracefully, not break off in large chunks.

