Every time I read articles like this, I envy the engineers who worked on the development of such tools. First microprocessors in jet fighters, electromechanical celestial navigation...
I think the opposite. Hardware is hard, as they say. Building such complex electromechanical designs to military specs without modern CAD tools must have been the equivalent of writing code in binary, without high level languages or even assembler.
It's a shame the only way to work on problems like these (and make a decent living) is to make tools of war.
The end game of much of Silicon Valley seems to be government (read: military) contracts, probably because the military is the one part of government that's thoroughly funded.
I'm reaching into the depths of my memory, but I recall reading that one of the earliest government needs for computers was the decennial census: at some point, processing the results of the previous census was taking more than the ten years before the next one. (That bottleneck is what prompted Herman Hollerith's punched-card tabulators for the 1890 census.)
Another real fact: "Between 1964 and 1973, the United States conducted a covert 'Secret War' in Laos, dropping over two million tons of ordnance during 580,000+ bombing missions."
The revelation of secret bombing campaigns was one of the main reveals of the Pentagon Papers, leaked by Daniel Ellsberg. It arguably turned American public opinion against the war decisively: the papers showed that the US had no cohesive strategy for winning and had been lying to the public about the multiple fronts of the war in Southeast Asia for more than twenty years.
I’m sorry. I’m in a bad mood and that was unnecessary. That being said, given the current hyper-militarized climate in Silicon Valley, I find this detachment of the science/engineering from its use cases to be more than a little distasteful.
You are to be commended for an apology, it shows class and decency.
As for the militarization of Silicon Valley, it's been said we have god-like tech, but not the emotional discipline for such responsibilities. Aside from the fact that we humans suck, we repeat our worst mistakes without, it seems, a second thought. Then, when we're called out, we let our ego warp to any excuse that will suffice. The Kissinger example mentioned above almost made me ill.
Eh, it's easy to get caught up in the romanticism of working on things like this, but I assure you that besides maybe 4 people in charge of the big picture, everybody else was dealing with things exactly as mundane as the things we deal with today. Like putting a part through 1000 thermal cycles from -40 to 200 degrees, then vibrating it at 2 g for 200 hours, then measuring the tolerances of each part... or being in charge of three lines in a standards document for two years, negotiating the details with the DoD.
I couldn't find the specification for the Angle Computer, but I've found specifications for other devices and you're exactly right: pages and pages of vibration requirements, fungus resistance, testing procedures, and then maybe if I'm lucky one page with useful information like the pinout. This is very annoying if I'm paying by the page. :-)
This means a reverse-IPv6 name always has 32 labels, one per nibble, and there are no shortcuts or macros to overcome this! That means if you wish to assign a single name under a legitimate /64 Network ID, you must spell out all 16 nibbles of the host part in your zone data. It is an absurd non-solution. This never should've been allowed to happen, and it will basically mean that ISPs abandon reverse DNS entirely when they migrate to IPv6.
$ dig -x 2606:7100:1:67::26 | grep PTR
;6.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.7.6.0.0.1.0.0.0.0.0.1.7.6.0.6.2.ip6.arpa. IN PTR
Run this, then copy/paste the output into your zone file. Remove the ; and add "example.com." or whatever to the end.
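If you'd rather not shell out to dig at all, Python's standard library can compute the same nibble-reversed name locally, no DNS query needed:

```python
import ipaddress

# Compute the ip6.arpa name for an address without querying DNS.
# reverse_pointer expands the address to 32 nibbles, reverses them,
# and appends the ip6.arpa suffix.
addr = ipaddress.ip_address("2606:7100:1:67::26")
print(addr.reverse_pointer)
# 6.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.7.6.0.0.1.0.0.0.0.0.1.7.6.0.6.2.ip6.arpa
```

The same attribute works for IPv4 addresses too, producing the corresponding in-addr.arpa name.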
I agree it's a pain to read, mostly because DNS addresses are written backwards, but an "absurd non-solution"? For a set of instructions that don't even depend on the format of the record (they work for v4 too), and which I could describe in one line in a HN comment?
If this is the craziest part of v6 then it must be incredibly well designed overall.
It is a pretty nice design, partly as a result of the fact that we've got a working system to look at (IPv4) and we have a lot more eyeballs "these days" (when IPv6 was designed, so, decades ago now) than when the Internet Protocol was a new idea.
I think perhaps the person you're responding to imagines that somehow DNS mandates a very naive implementation and so this behaviour would be incredibly expensive. The sort of person who sees a flip clock and imagines it needs 1440 different faces not 84 (or in some cases 72) because they haven't realised 12:34 and 12:35 simply use the same hour face.
“copy paste the output” is your solution? You think this somehow scales to manage entire networks like this with dynamic addressing? Do you perceive a network admin as a monkey who copy-pastes things all day?
This is exactly the absurd non-solution I am referring to, and it seems like if someone dismisses this with “one line instruction is all u need lol” they cannot even comprehend the scale at which real life operates.
Copying and pasting was just my attempt to demonstrate how simple a v6 rDNS record is to add. If you were interested in hiring me to write a solution for your ISP, that's fine, but you can't seriously expect random people to do it for you for free in a HN comment.
It should be pretty obvious that a script can generate these records from the forward records or from any other source of IPs/hosts, with no per-address effort needed on the part of the network admins.
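As a sketch of that (the hostnames and addresses below are made up for illustration), generating a v6 reverse zone from a forward host table is a few lines:

```python
import ipaddress

# Hypothetical forward records: hostname -> IPv6 address.
forward = {
    "www.example.com.": "2001:db8::80",
    "mail.example.com.": "2001:db8::25",
}

# Emit one PTR record per host; reverse_pointer does the nibble expansion.
for name, ip in sorted(forward.items()):
    arpa = ipaddress.ip_address(ip).reverse_pointer
    print(f"{arpa}. IN PTR {name}")
```

Point the same loop at whatever your source of truth is (DHCP leases, an IPAM export, the forward zone itself) and regenerate the reverse zone on every change.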
Again, absolutely blind to the management of these things at scale. Yeah, I don't rightly care about "how easy it is" to generate them. You can't even comprehend or convey the massive number of records and zones that are involved in managing a network of devices that all require dynamic updates to reverse-DNS and add/update/remove device addresses on a regular basis.
DNS is a distributed database system, and so the challenge is not cramming in data with a brainless script, but managing how that data is distributed and accessed by thousands or millions of peer servers, caches, and clients worldwide.
IPv4 reverse-DNS was quite simple when it was broken on octet boundaries and there were only four of those boundaries in total. But even then, ISPs often couldn't be arsed to put the right data in there: some left it blank, and some only filled it in when forced by requirements that reverse DNS match forward DNS.
I have never found any user-accessible software, not on any Linux distribution or on any cloud service, that would permit an ordinary consumer to manage even a /24 IPv4 network's reverse-DNS at scale, or programmatically, as opposed to by-hand "copy paste" as has been so condescendingly suggested here. There are plenty of hosted DNS providers, and there are plenty of monkey-brain Dashboard interfaces where you can pound out one A record at a time. But there is nothing to deal with dynamic addressing or DNS databases at scale. That's why IPv6's reverse DNS remains an absurd non-solution.
So... how many records and zones? I'm pretty sure I could convey it if I could work out what you were talking about.
You went from "you can't even comprehend or convey the massive number of records and zones that are involved" to one v4 /24, managed "at scale" but by an ordinary consumer, who you expect to be capable of programming. This is a bit all over the place.
It's not any harder to deal with v6 reverse DNS than it is v4. In fact, making every reverse label 4 bits instead of 8, combined with v6 being much bigger than v4, makes rDNS much easier to deal with in v6 because you can generally delegate reverse zones on exactly the same boundaries that you delegate the corresponding IP blocks. In v4, you often need to delegate on boundaries that aren't /8, /16 and /24 and it suddenly gets more annoying.
Scaling up for rDNS is no different to scaling up for forward DNS. It's a well-understood problem.
Anyone who's ever had to delegate DNS authority on anything other than an 8-bit boundary can understand the value of that feature.
At face value, yeah, that's replacing "8" with "4," but from a practical perspective, delegating authority for a customer IPv4 /25 requires, at minimum, 128 records. (Granted, there's also no practical need to be stingy about IPv6 allocations -- that IPv4 /25 customer could simply receive an IPv6 /48.)
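Those 128 records are the classic RFC 2317-style classless delegation: the parent /24 zone CNAMEs each address in the /25 into a child zone the customer controls. A sketch (the zone names here are illustrative, not a real deployment):

```python
import ipaddress

# Hypothetical customer block: the lower half of 192.0.2.0/24.
net = ipaddress.ip_network("192.0.2.0/25")

# RFC 2317 classless delegation: one CNAME per address in the /25,
# pointing into a "0-127" child zone delegated to the customer.
records = [
    f"{ip.packed[-1]} IN CNAME {ip.packed[-1]}.0-127.2.0.192.in-addr.arpa."
    for ip in net
]
print(len(records))   # 128 records, as the comment above says
print(records[0])
```

With IPv6, by contrast, a /48 ends on a nibble boundary (48/4 = 12 labels), so the whole block delegates with ordinary NS records and no CNAME tricks.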
I would firmly expect to see a lot more formulaic reverse (and presumably forward) DNS responses, where needed, since filling files with records you need to store on disk (and in memory) doesn't scale well. The tech has existed for years; I wrote my own implementation years ago, but these days I'd use something like PowerDNS with https://github.com/wttw/regexdns .
Might as well go big. 24 extra bytes per packet is not that big a deal, and having that much extra space means you can screw up the design multiple times and still be able to reuse a lot of infra. Also, getting rid of the idea that you're even trying to manually manage the address space eases many things.
But it's not human-readable anymore, nor backwards compatible. The expectation was that the industry would be reasonable, but it proved to be as hard as pushing a breaking "email v2" would be.
If you think v6 isn't backwards compatible then literally anything bigger than 32 bits will never count as backwards compatible for you. The whole point of making the address space bigger is to make it bigger, so what do you expect to achieve by complaining that the result is incompatible?
As a human, I've found that e.g. "fd00::53" is perfectly readable to me, and most of the time you're interacting with strings like "news.ycombinator.com" anyway which is identical to how it works in v4, so I'm not sure how far I'd agree with that part either.
Because everyone got updates immediately. If the default were a 7-day delay, almost no one would get updates immediately; they'd get them after 7 days, and the problem would only be discovered after 7 days. Unless there is a poor soul checking packages as they are published who can alert the registry before the 7 days pass, though I imagine very few do that, and hence a dedicated attacker could influence them not to look too hard.
If I remember correctly, in all the recent cases it was picked up by automated scanning tools within a few hours, not because someone updated the dependency, checked the code, and found the issue.
So it looks like even if no one actually updates, the vast majority of the cases will be caught by automated tools. You just need to give them a bit of time.
While reading articles like this, I feel like we're just in the "denial" stage. We're just trying to look for negatives instead of embracing that this is a definite paradigm shift in our craft.
I don't think the argument is correct. A reasoning LLM will check itself and search multiple sources; it's essentially doing the same mental process a human would. Also, consulting multiple LLMs completely breaks this argument.
IME, even when an LLM is right, a few follow-up questions always lead to some baffling cracks in its reasoning that expose it has absolutely no idea what it's talking about. Not just about the subject but basic common sense. I definitely wouldn't call it the "same mental process" a human does. It is an alien intelligence, and exposing a human mind to it won't necessarily lead to the same (or better) outcome as learning from other humans would.
The author’s central point is that an LLM answer “is optimized for arrival, not for becoming” (to paraphrase the Google “Lucky” part).
So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test.
That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.
But the point is that the mental process should be done by yourself. It's the difference between finding the answer myself and asking my classmate to just share his answer with me; in the latter case, I am not learning what my classmate learned.
Tracking people is dystopian. But only the collection of data has allowed us to train the AI. I don't think the EU has issues with tracking people unless a private party does it.
The display has some bearing on this. Generally, 1080p is good enough but some cinematography benefits from better resolution and as a result, requires a better display.
And here I am fighting gitlab pipelines.