dc396's comments | Hacker News

Oh joy. Now the volunteers maintaining the IETF RFC tools get to waste their time trying to prevent folks who think it's cute from littering all over the IETF drafts.

I suppose it was inevitable. Another reason why we can't have nice things...


I was hoping someone had just shown that DNS was Turing complete (extending https://web.cs.ucla.edu/~todd/research/hotnets21.pdf). Using the DNS as a remote file store isn't as interesting.

Some quibbles:

- TXT records were intended as a catch all for stuff that didn't have a resource record officially defined. Their use in email authentication came (much) later and, arguably, for bad reasons (there was a record defined for one form of email authentication, SPF, but folks in the IETF thought it was too hard to have client libraries and DNS server user interfaces updated to support it, so they decided to use TXT instead).

- You can fit around 65,280 bytes in a TXT record, far more than "about 2,000 characters of text" (maybe the 2,000 limit is a limitation of Cloudflare?)

- If you control the authoritative server, you could, in theory, chain an unbounded number of names within a zone, e.g., 000001.example.com, 000002.example.com, etc., to store an unlimited amount of data (see the sketch after this list).

- Or, you can use open resolvers as a file store (https://blog.benjojo.co.uk/post/dns-filesystem-true-cloud-st...)
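
For illustration, here's a minimal sketch of the retrieval side of that chaining scheme, using the dnspython library. The zone name, chunk-numbering convention, and base64 encoding are all assumptions made up for the example, not anything the article describes:

  # Sketch: reassemble data stored across numbered TXT records
  # (000001.data.example.com, 000002.data.example.com, ...).
  # Zone name, numbering scheme, and base64 encoding are hypothetical.
  import base64
  import dns.resolver

  def fetch_chunks(zone="data.example.com", max_chunks=1000):
      chunks = []
      for i in range(1, max_chunks + 1):
          try:
              answer = dns.resolver.resolve(f"{i:06d}.{zone}", "TXT")
          except dns.resolver.NXDOMAIN:
              break  # no more chunks published
          # A TXT record may carry several <=255-byte strings; join them.
          chunks.append(base64.b64decode(b"".join(answer[0].strings)))
      return b"".join(chunks)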


> Ok, and?

According to our new AI overlords, a short synopsis of potential risks of BPC 157 based on mechanistic and animal work to date (don't know human risks because there haven't been sufficient clinical studies):

* Possible pathologic angiogenesis (abnormal blood‑vessel growth), which theoretically could support tumor growth or inflammatory and autoimmune processes.

* Modulation of nitric‑oxide pathways that, at high levels, might contribute to anemia, altered drug metabolism (CYP enzyme activity), and possibly neurodegenerative processes in theory.

* Concerns that its pro‑healing, pro‑growth signalling (e.g., FAK–paxillin) could encourage cancer spread if malignant cells are already present; this remains theoretical, with no proof in humans.

* Possible liver and kidney toxicity suggested in some commentary and extrapolated from preclinical work, but not well characterized in people.

* Immune reactions or allergic responses, including fevers, rash, hives, muscle aches, or systemic inflammatory responses.

These do not appear to be results that would appear overnight. It would be "nice" if the folks injecting random shit into their bodies also disclaimed any subsequent medical intervention as a result of said shit, but I suspect that's unlikely.


My total layman view is that powerful drugs often have powerful side effects.


That's because you grew up in a society still deeply coded to puritan moral viewpoints.

People are so upset that GLP-1 has no long-term side effects.

There's still the crowd completely sure everyone will get HyperCancer in 10 years or something (they won't).


We have no specific reason to believe there are concerns with GLP-1s for cancer or anything else, beyond the mildest signal in rodent studies around thyroids.

We do not have robust clinical data for things like BPC-157 but we do have strong preclinical data and an understanding of the mechanisms in play.

I use BPC-157/TB-500/Ghk-CU/KPV - so I'm certainly OK taking the risks. But those mechanisms mentioned before? The same pathways we're counting on for healing and inflammation reduction are ones we know can increase tumor growth rate and the chance of metastasis. VEGF/VEGFR2 expression are even suppression targets for some cancer therapies.

Are there powerful and useful medications out there, available today, that we don't have good scientific data on and that are free enough of serious side effects? For sure! Is everything out there like that, though? No. Some things that work will have too serious a side effect profile to be feasible. Some things won't work at all, despite however much anecdata is out there.

As for the general idea... I agree there's no law that says a medicine with a strong positive effect must also have strong side effects. And we have plenty that don't - statins, particularly the latest generation, like pitavastatin, are effectively side-effect free for the hugely overwhelming majority of people and have great lipid-lowering effects. Even older ones showed extremely minimal incidence of things like muscle pain - a vanishingly small number of people relative to the total number on the medications report muscle pain, and when investigated, quite a lot of even that ends up being unrelated to the statins. Yet the narrative persists that makes it sound like anyone on statins is going to have their muscles ache 24/7.


I'm glad we have GLP-1, and I don't think there are really major side effects. But they are ineffective outside the clinical trial setting for treating obesity.

It seems to be like treating alcoholism with disulfiram: it's a miracle in clinical trials but in the real world the patients just lower the doses or discontinue treatment after 1-2 years and go back to their old habits.


> But they are ineffective outside the clinical trial setting for treating obesity.

This is one of the wildest claims I have ever seen on this website.

Would you claim insulin is ineffective outside of clinical trials for treating type 1 diabetes because people have to keep injecting it?


I hope it sounds less wild if you think of obesity as a disease of addiction. Reducing the GLP1 dose can increase the enjoyment of eating, so it makes sense why treating obesity with GLP1 is like treating alcoholism with disulfiram: effective in theory but hard to adhere to outside trials.

Type 1 diabetes (or majority of diseases) doesn't involve addiction.


It is not ineffective outside of clinical trials. All the evidence says that people gain some weight back after they discontinue treatment - which is not a lack of efficacy. But they also usually gain back less than they lost.

https://pmc.ncbi.nlm.nih.gov/articles/PMC12361690/


It's kind of two separate topics: 1. Whether patients can adhere to GLP1. 2. Whether discontinuation leads to weight regain.


> they are ineffective outside the clinical trial setting for treating obesity

This is totally false. I know a number of people who took GLP-1 to treat their obesity and then stopped and have stayed not obese.


I can't reply elsewhere so I will reply to this again.

> Among my friends, all of them stopped taking GLP-1 drugs within 2 years because all of them lost the weight they wanted to. Out of curiosity, what sources lead you to believe this?

Anecdotes like this are interesting but in medicine they are not sufficient to make factual statements about drugs. In meta-analyses there is weight regain which is steeper as more weight is lost during treatment [1].

The weight regain seems to be rather slow, it can take years until the baseline weight is reached.

[1] https://www.bmj.com/content/392/bmj-2025-085304


> In meta-analyses there is weight regain which is steeper as more weight is lost during treatment

What does "steeper" mean? The studies I've seen show a net weight loss, even after regain, for the median patient.

> The weight regain seems to be rather slow, it can take years until the baseline weight is reached

Maybe. Right now, however, the evidence shows solid effects outside clinical settings. Your original statement was wrong: your own sources refute the claim.

If you're arguing the effects in the real world haven't consistently been as ridiculous as they were in clinical trials, sure, you get a brownie point. But broadly speaking, these drugs are terrifically effective, both when taken for life and when taken intermittently.


If only there were a federal administration whose responsibility it was to collect data about food and drugs so we could rely on something more than anecdotes from random strangers on the Internet.


Do you have a link to those data showing GLP-1 agonists are ineffective?


I emphasize it's like the drug disulfiram: Very effective as long as patients take the full dose, but the lack of real-world efficacy stems from the difficulty in adhering to the treatment.

This study found that 84.4% of non-diabetic patients stop taking GLP-1 drugs within two years. https://jamanetwork.com/journals/jamanetworkopen/fullarticle...


> the lack of real-world efficacy stems from the difficulty in adhering to the treatment

Do you have a source for this "lack of real-world efficacy"?

> This study found that 84.4% of non-diabetic patients stop taking GLP-1 drugs within two years

"With a with a median on-treatment weight change of −2.9%" [1]. Of those who discontinued and experienced "weight gain since discontinuation," they were "associated with an increased likelihood of GLP-1 RA reinitiation."

I'm genuinely struggling to see how this source shows real-world inefficacy. Among my friends, all of them stopped taking GLP-1 drugs within 2 years because all of them lost the weight they wanted to.

Out of curiosity, what sources lead you to believe this?

> it's like the drug disulfiram

Have clinicians made this connection?

[1] https://jamanetwork.com/journals/jamanetworkopen/fullarticle...


You didn’t come here with data. You came here with anecdotes and asserted that they were conclusive.


Have you ever looked at leaflets attached to any medicine prescribed by doctors?


You mean the ones that are the result of experience through controlled clinical trials with statistical analyses and error bars, yep, sure. I guess I have a bit more faith in those leaflets and the testing regimes that generate them than the word of some gymbro or influencer who injected themselves and didn't immediately fall over dead.


> It is very safe and tolerable.

Can you point to the clinical trials that demonstrate this?

> Doctors seem to be giving GLP peptides out like candy and those are injected.

There have been several _thousand_ clinical trials that have shown GLP-1s to be safe and effective.


Also LOL at the notion "peptides are safe because GLP-1 exists".

Pretty much all venoms are mixes of short (10-15 residue) peptide chains.

It's the naturalistic fallacy in an utterly perverse form (and it also goes to show why a regulatory system is good: the average person has no idea what they're dealing with, or even common sense about it).


"Liquids are safe because water exists"


FDA is a money sink. People basically bribe their way into getting their own self-tests approved, much like the airlines have their own FAA inspectors who approve their own self-tests under duress and bribes. It's all a scam and everyone here knows this even if they won't admit it.

Nullify the FDA, FAA and at least half of the other orgs. Give at least half of those budgets to the people. Make aircraft smart enough to evade all obstacles. Make it technically damn near impossible to collide with anything. Make aircraft coordinate themselves. All doable. Force-retire everyone at the FDA and FAA, give them a balloon and a golden wrist-watch, and send them away.

Give people an app to paste in all the things they take or plan to take in terms of foods, supplements, drugs, their allergies. Let the best AI figure out what will happen.


Loans, mostly.

Artemis costs about $4 billion per mission, with around $90 billion already spent. The war in Iran is costing the US about $1 billion per day, so (as of today), $35 billion spent.

The US debt is $39,000 billion ($39 trillion). So, combined, the entire Artemis program and war in Iran represent 0.32% of the US debt.
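
A quick back-of-the-envelope check of those figures (all numbers as quoted above, rounded):

  # Rough check of the percentage quoted above.
  artemis_spent = 90e9      # ~$90 billion spent on Artemis so far
  war_spent = 35e9          # ~$1 billion/day for ~35 days
  us_debt = 39e12           # ~$39 trillion
  print(f"{(artemis_spent + war_spent) / us_debt:.2%}")  # -> 0.32%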


How does debt mean anything if you control the ability to mint the tokens that you use to pay the debt?


See "inflation". Look at the outcomes of countries who remove controls on printing money (e.g., Zimbabwe, Venezuela).


/remind me after the 2026 elections.


This is a bit like saying sound waves have no attribution layer for music.

The Internet is a transport medium. It sounds like you are asking if it is possible to (somehow) associate universal, intrinsic, and immutable attribution metadata with some or all (not sure what "viral" distinguishes in this context) Internet _content_ and have all receivers of that content accept the implications of that attribution metadata.

I think the failure of pretty much all the various digital rights management efforts applied on a MUCH smaller scale to infinitesimal subsets of content types that are now being schlepped across the Internet would suggest that no, there are no technical approaches that would realistically work.

And music, film, and television ownership/credit being tracked carefully? In certain law abiding environments, it's possible the owners/creators get a small fraction of what they believe they are entitled to, but in the majority of the world, not so much.


That’s a fair point, and I agree DRM-style control over content distribution hasn’t worked very well.

The distinction I’m thinking about is less about restricting the movement of content and more about tracking provenance. In other words, not trying to prevent copying or remixing, but making it easier to identify where something first appeared and who originally created it.

Music is an imperfect example, but the infrastructure around identifiers, registries, and rights databases at least creates a shared reference point for attribution. Internet-native media doesn’t really have anything comparable yet.

So the question I’m curious about is whether a similar kind of reference layer could exist for images, memes, and short-form media, even if the content itself continues to move freely?


Was wondering how long it'd take you to come in and trash talk DNSSEC. And now with added FUD ("and once you press that button it's much less likely that you're going to leave your provider").

At least you're consistent.


This is a topic I obviously pay a lot of attention to. Wouldn't it be weirder if I came here with a different take? What do you expect?

I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC. There's basically zero upside to it for them. I think there's basically never a good argument to enable it, but at least large, heavily targeted sites have a colorable argument.


Actually I think it probably is suspicious to have the exact same opinion after studying something over a long period of time. My opinions are more likely to remain consistent, rather than growing more nuanced or sophisticated, if all I've done is trot out the same responses over a longer period of time.

I've struggled to think of an especially unexamined example because, after all, they tend to sit outside conscious recall. I think the best I can do is probably that my favourite comic book character is Miracleman's daughter, Winter Moran. That's a consistent belief I've held for decades and I haven't spent a great deal of time thinking about it, but it's not entirely satisfactory, and probably there is some introduced nuance, particularly since I re-examined the contrast between what Winter says about the humans to her father and what her step-sister Mist later says about them to her (human) mother while writing an essay during lockdown.


> Actually I think it probably is suspicious to have the exact same opinion after studying something over a long period of time.

This seems really odd, probably fundamentally incorrect. "Believing something over time means it is less likely that you are engaging in good faith"? Totally insane take.


On the contrary it's suspicious if I happened to guess exactly right with much less data and so have the same conclusion after learning more. I suggest that the more likely reason is that I didn't learn anything at all.


> On the contrary it's suspicious if I happened to guess exactly right with much less data and so have the same conclusion after learning more.

No it isn't? If I guess what time it is and then look and see that it's around sunset, which is evidence towards my initial guess being right, it is not "suspicious". This is just a fundamentally broken model of evidence.


> I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC. There's basically zero upside to it for them.

DNSSEC is great for super tiny sites. I only run a single server, but it's strongly recommended that every domain has at least two independent nameservers, ideally with anycasted IPs. DNSSEC lets me fully self-host my DNS, while also letting me add secondary mirrors to get the additional independent nameservers.

Of course, you can add secondary mirrors without DNSSEC (and this is still quite common), but DNSSEC means that I don't have to trust these mirrors [0], since DNSSEC means that they can't forge invalid responses without my private key. I'd almost argue that if you're using secondary mirrors without DNSSEC enabled, then you're not "really" self-hosting, since you're completely reliant on the third-party mirrors being trustworthy.

For larger sites that can afford multiple independent nameservers or for anyone who wants to use a hosted DNS service, then DNSSEC probably offers fewer benefits, since in those cases you're presumably able to trust all your nameservers.

[0]: Well, I still need to trust them a little bit for non-DNSSEC-supporting clients, but most of the major resolvers support DNSSEC these days. And even then, this makes an attack much more detectable than it would be otherwise.
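
As a rough illustration of that trust model, here's a minimal sketch (using dnspython; the zone name and nameserver IP are hypothetical, and a real validator would also walk the DS chain up to the root) of checking that whatever a secondary serves is signed by the zone's own keys:

  # Sketch: verify the DNSKEY RRset served by a (possibly untrusted) secondary
  # against its own RRSIG. Zone name and server IP are hypothetical.
  import dns.dnssec
  import dns.message
  import dns.name
  import dns.query
  import dns.rdataclass
  import dns.rdatatype

  zone = dns.name.from_text("example.com.")
  query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
  response = dns.query.udp(query, "192.0.2.53", timeout=5)  # a secondary's IP

  dnskeys = response.find_rrset(response.answer, zone,
                                dns.rdataclass.IN, dns.rdatatype.DNSKEY)
  rrsigs = response.find_rrset(response.answer, zone,
                               dns.rdataclass.IN, dns.rdatatype.RRSIG,
                               dns.rdatatype.DNSKEY)
  # Raises dns.dnssec.ValidationFailure if the signatures don't check out.
  dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})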


> I don't think I'm out on a limb suggesting that random small domains should not enable DNSSEC.

Why? I can see this argument for large domains that might be using things like anycast and/or geography-specific replies. But for smaller domains?

> There's basically zero upside to it for them.

It can reduce susceptibility to automated wormable attacks. Or to BGP-mediated attacks.


Explain the "wormable attack" DNSSEC addresses? I feel pretty well read into wormability, having done a product in the space.


The vast majority of Let's Encrypt installations don't use CAA records or anything in DNS. Or they host the DNS along with the HTTPS servers.

So if the router between the web server and the Internet is compromised, it can just get trusted certs for all the HTTPS traffic going through it, enabling transparent MITM to inject its payload.


"The web server"? Which web server? Are the HTTP flows with executable content going to the web server or coming from it? I'm sorry, you haven't really cleared this up.


Any web server. Just imagine a worm getting onto a company's router and starting to transparently MITM traffic. Jabber.ru experienced such an attack, apparently.



I touched on this in the parallel comment where you linked this, but worth noting that DNSSEC does not solve this threat model, because re-routing the destination of legitimate IP addresses does not rely on modifying DNS responses.


It does solve it. Unless you know my private key, you can't fake the DNSSEC signatures. The linking DS records in the TLD are presumably out of your control and in future can be audited through something like Certificate Transparency logs.

So even if you fully control the network path, you will somehow have to get access to my private key material.


Solves part of it. They still control your HTTP and can get LE to issue a certificate for your domain. So it actually solves nothing.

Unless you had a CAA record saying only LE certs from your account are valid. And maybe you want that record to be authenticated.
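
For what it's worth, checking for such a record is easy to script; a small sketch with dnspython (the domain and the idea of simply looking for an accounturi parameter are just for illustration):

  # Sketch: look for a CAA record pinning issuance to a specific ACME account.
  # Domain is hypothetical.
  import dns.resolver

  for rdata in dns.resolver.resolve("example.com", "CAA"):
      tag, value = rdata.tag.decode(), rdata.value.decode()
      if tag == "issue" and "accounturi=" in value:
          print("issuance pinned:", value)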


Agreed. But I meant that in the world without LE but with DNSSEC+DANE this wouldn't be an issue.


The attacker did not fake any DNS records. They re-routed traffic to the legitimate IP addresses.


It would make them more secure and less vulnerable to attacks. But lazy sysadmins and large providers are too scared to do anything, in no small part due to your ... incorrect arguments against it.


No it wouldn't? How exactly would it make them more secure? It makes availability drastically more precarious and defends against a rare, exotic attack none of them actually face and which in the main is conducted by state-level adversaries for whom DNSSEC is literally a key escrow system. People are not thinking this through.


Boy, how would cryptographically securing the ROOT of the internet make it more secure? Right here dude: https://easydns.com/blog/2015/08/06/for-dnssec/


That entire post is that you should enable DNSSEC because it's "more secure", and there are no reasons not to.

"More secure" begs the question "against what?", which the blog post doesn't seem to want to go into. Maybe it's secure from hidden tigers.

My favourite DNSSEC "lolwut" is about how people argue that it's something "NIST recommends", whilst at the same time the most recent major DNSSEC outage was......... time.nist.gov! (https://ianix.com/pub/dnssec-outages.html)


DNSSEC is to DNS what HTTPS is to HTTP, so most of these kinds of questions can be answered by asking yourself the same questions about HTTPS.


You keep waving this blog post from 2015 at me. Not only have we discussed it before, but it was a top-level HN post with 79 comments, many of them from me.

Please don't stealth-edit your posts after I respond to them. If you need to edit, just leave a little note in your comment that you edited it.


Sorry, I thought my edit was fast enough.

Yes it did hit HN and you just said, "I stand by what I wrote," and then complained about buggy implementations and downtime connected to DNSSEC. As if that isn't true for all technologies, let alone /insecure/ DNS. DNS is connected to a lot of downtime because it undergirds the whole internet. Making the distributed database that delegates domain authority cryptographically secure makes everything above it more secure too.

I rebutted your arguments point-by-point. You don't update your blog post to reflect those arguments nor recent developments, like larger key sizes.


Did you write the article?


Yup.


So: I wrote a blog post in January of 2015, and 7 months later you wrote a blog post responding to it in August of 2015, and 10 years later you're still angry that I didn't update my blog post to point to the post that you wrote?

I write things people disagree with all the time. I can't recall ever having been mad that people didn't cite me for things we disagree about. Should I have expected all the people who hated coding agents to update their articles when I wrote "My AI Skeptic Friends Are All Nuts"? I didn't realize I was supposed to be complaining about that.


I advocate for DNSSEC in my personal life and you happen to jump on every DNSSEC HN submission and repeat your claims. So I post a link to my article debunking them. You won't engage with the substantive points here but insist that you have in the past and that you stand by your post. So I suggest you update your post to address my critiques.

I'm frustrated that you seem to blow me off and insult me when I try to engage in good faith discussion, but I'm not angry at you. I just ran into this post while procrastinating at work and here we are, in the same loop.

I think we are both trying to make the internet a safer place. It's sad we can't seem to have a productive conversation on the matter.


I advocate against DNSSEC in my personal life. I write about DNSSEC on HN because I write on HN a lot, and because this is a topic I have invested a lot of time in, going back long before the existence of HN itself. You can find stuff about it from me on NANOG in the 1990s. Your frustration seems like a "you" problem.


It's not like it's just tptacek with this take; I would say it's the majority view in the industry.


That doesn't make it correct. Imagine if someone had said, "We don't need to secure HTTP, we'll just rely on E2E encryption and trust-on-first-use". I would really like it if we had a way to automatically cryptographically verify non-web protocols when they connect.

But there is no money in making that a solution and a TON of money in selling you BS HTTPS certs. There are a lot of people spreading FUD about it. It's a shame.


> But there is no money in making that a solution and a TON of money in selling you BS HTTPS certs

Ah yes, because Let's Encrypt is rolling in the $$$$.


Mark Shuttleworth paid for his ride to the space station by selling HTTPS certs.

The sad thing is that Mozilla and others have to spend millions bankrolling Let's Encrypt instead of using the free, high assurance PKI that is native to the internet!


It's not really free, though. Rather, the costs are distributed rather than centralized, but running DNSSEC and keeping it working incurs new operational costs for the domain holders, who need to manage keys and DNSSEC signing, etc. And of course there are additional marginal costs to the registrars of managing customer DNSSEC, both building automation and providing customer service when it fails.

It's of course possible that the total numbers are lower than the costs of the WebPKI -- I haven't run them -- but I don't think free is the right word.


I mean, I guess the costs are paid for by the domain name fee. But at least it doesn't have to be a charitable activity covered by non-profits. The early HTTPS certs were especially worthless and price-gouging.


> But at least it doesn't have to be a charitable activity covered by non-profits.

LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/

Anyway, I think there's a reasonable case that it would be better to have the costs distributed the way DNSSEC does, but my point is just that it's not free. Rather, you're moving the costs around. Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.


> LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/

I mean, Mozilla got the ball rolling and it's still run on donations (even if they come from private actors).

> Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.

The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.


> > LE isn't primarily funded by non-profits, as you can see from the sponsor list here: https://isrg.org/sponsors/
>
> I mean, Mozilla got the ball rolling

Among others:

  Let’s Encrypt was created through the merging of two simultaneous
  efforts to build a fully automated certificate authority. In 2012, a
  group led by Alex Halderman at the University of Michigan and
  Peter Eckersley at EFF was developing a protocol for automatically
  issuing and renewing certificates. Simultaneously, a team at Mozilla
  led by Josh Aas and Eric Rescorla was working on creating a free
  and automated certificate authority. The groups learned of each
  other’s efforts and joined forces in May 2013.

  ...

  Initially, ISRG was funded almost entirely through large dona-
  tions from technology companies. In late 2014, it secured financial
  commitments from Akamai, Cisco, EFF, and Mozilla, allowing the
  organization to purchase equipment, secure hosting contracts, and
  pay initial staff. Today, ISRG has more diverse funding sources; in
  2018 it received 83% of its funding from corporate sponsors, 14%
  from grants and major gifts, and 3% from individual giving.
Except for the period before the launch when Mozilla and EFF were paying people's salaries, including mine, it was never really the case that Let's Encrypt was primarily funded by non-profits.

> and it's still run on donations (even if they come from private actors).

I agree, but I think it's important to be precise about what's happening here, and like I said, it's never been the case that LE was really funded by non-profits.

> > Like I said, it may be cheaper in aggregate, but I think you'd need to make that case.
>
> The PKI is already there: we have 7 people who can do a multisig for new root keys. There is a signing ceremony in a secure bunker somewhere that gets live streamed. The HSMs and servers are already paid for. Cert transparency/monitoring is nice but now it's hard-coded to HTTPS instead of being done more generically. There's a lot of duplicated effort.

I think this is a category error. The main operational cost for DNSSEC is not really the root, which is comparatively low load, but rather the distributed operations for every registry/registrar, and server to register keys, sign domains, etc.

One way to think about this is that running a TLD with DNSSEC is conceptually similar to operating a CA in that you have to take in everyone's keys and sign them. It's true you don't need to validate their domains, but that's not the expensive part. Operating this machinery isn't free, especially when you have to handle exceptional cases like people who screw up their domains and need manual help to recover. Now, it's possible that it's a marginal incremental cost, but I doubt it's zero. Upthread, you suggested that people are already paying for this in their domain registrations, but that just means that the TLD operator is going to have to absorb the incremental cost.


That's fair! My primary gripe was about the need for non-profits to step in to begin with. Sorry if I didn't communicate that well.

However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business and ICANN is a quasi-commercial entity that should never have been a for-profit business.

I certainly think it is fair to ask them to pay for all this.


This seems like a good place to uplevel.

I actually agree with you that in an abstract architectural sense a DNSSEC-style solution for authenticating the keys for endpoints is better. The problem from my perspective is that for a number of reasons that we've explored elsewhere in this thread, there is no practical way to get there from here.

To put this more sharply: in the world as it presently is with ubiquitous WebPKI deployment, the marginal benefit of DNSSEC strikes me as quite modest, even if it were universally deployed. Worse yet, the incremental benefit to any specific actor of deploying DNSSEC is even lower, which makes it very hard to get to universal deployment.

> However, I don't feel sorry for registrars or TLDs. Verisign selling HTTPS certs while running the root TLDs is a conflict of interest and I believe the perverse incentives are a big part of the reason why DNSSEC and DANE are stalled out. TLDs are a monopoly business and ICANN is a quasi-commercial entity that should never have been a for-profit business.
>
> I certainly think it is fair to ask them to pay for all this.

I also do not feel sorry for registrars. However, it's also not clear to me that if somehow they were forced to incur incremental cost X per domain name, they would not find a way to pass it onto us. With that said, I also don't think that's really why DNSSEC and DANE are stalled out; rather I think that it's the deployment incentives I mentioned above.

Note that despite the confusing naming and the fact that VeriSign was once a CA, they no longer are and have not been since 2010, as described in the second paragraph of their Wikipedia page. https://en.wikipedia.org/wiki/Verisign. In fact, in my experience VeriSign is very pro-DNSSEC.


Yes, the whole point of LetsEncrypt was to prevent that from happening again, and it now dominates the market.


You're not providing any explanation for why I wouldn't trust OP on DNSSEC. And the FUD is pretty reasonable if you've had a lot of experience setting up certificate chains, because the chain of trust can fail for a lot of reasons that have nothing to do with your certificate and are sometimes outside of your control. It would really suck to turn it on, have some 3rd-party provider fail to implement a feature you're relying on for your DNSSEC implementation, and then suddenly nothing works and nobody can resolve your website anymore. I've had so many wonky experiences with different features in, e.g., X.509 that I've come to really mistrust CA-based systems that I'm not in control of. When you get down to interoperability between different software implementations it gets even rougher.


Which is exactly what happened to Slack, and took them offline for most of a business day for a huge fraction of their customers. This is such a big problem that there's actually a subsidiary DNSSEC protocol (DNSSEC NTA's) that addresses it: tactically disabling DNSSEC at major resolvers for the inevitable cases where something breaks.


As if DNS isn't a major contributor to A LOT of downtime. That doesn't mean it's not worth doing, or not worth investing in making deployment more seamless and less error prone.


> As if DNS isn't a major contributor to A LOT of downtime. That doesn't mean it's not worth doing, or not worth investing in making deployment more seamless and less error prone.

Ah yes. Let's take something that's prone to causing service issues and strap more footguns to it.

It's not worth it, because the cost is extremely quantifiable and visible, whereas the benefits struggle to be coherent.


The benefits are huge: there are lots of attacks that DNSSEC trivially prevents and it would help secure more than just web browsers.


Can you expand on this a bit, under the assumption that the traffic is using some form of transport security (e.g., TLS, SSH, etc.)?


DNS underlies domain authority and the validity of every connection to every domain name ultimately traces back to DNS records. The amount of infra needed to shore up HTTPS is huge and thus SSH and other protocols rely on trust-on-first-use (unless you manually hard-code public keys yourself - which doesn't happen). DNS offers a standard, delegable PKI that is available to all clients regardless of the transport protocol.

With DNSSEC, a host with control over a domain's DNS records could use that to issue verifiable public keys without having to contact a third party.

I ran into this while working on decentralized web technologies and building a parallel to WebPKI just wasn't feasible. Whereas we could totally feed clients DNSSEC validated certs, but it wasn't supported.
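
To make that concrete, DANE is the sort of thing this enables: publish a TLSA record in the signed zone and let any client compare it against the certificate it actually receives. A rough sketch (the hostname is hypothetical, and a real client would also need the TLSA answer itself to be DNSSEC-validated):

  # Sketch: DANE-style check of a server certificate against a TLSA record.
  # Hostname is hypothetical; assumes selector=0 (full cert), mtype=1 (SHA-256).
  import hashlib
  import ssl
  import dns.resolver

  host = "www.example.com"
  pem = ssl.get_server_certificate((host, 443))
  der = ssl.PEM_cert_to_DER_cert(pem)
  cert_hash = hashlib.sha256(der).digest()

  for tlsa in dns.resolver.resolve(f"_443._tcp.{host}", "TLSA"):
      if tlsa.selector == 0 and tlsa.mtype == 1 and tlsa.cert == cert_hash:
          print("certificate matches TLSA record")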


Thanks for the explanation. It seems like there are two cases here:

1. Things that use TLS and hence the WebPKI
2. Other things.

None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.

That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.


> None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.

It would benefit the likes of Wikileaks. You could do all the crypto in your basement with an HSM without involving anyone else.

> That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.

But do they? That requires adding support for another protocol.

I would like to live in a world where I don't have to copy/paste SSH keys from an AWS console just to have the peace of mind that my SSH connection hasn't been hijacked.


In practice, fleet operators run their own PKIs for SSH, so tying them to the DNSSEC PKI is a strict step backwards for SSH security.

There may be other applications where a global public PKI makes sense; presumably those applications will be characterized by the need to make frequent introductions between unrelated parties, which is distinctly not an attribute of the SSH problem.


And for everyone else that just wants to connect to an SSH session without having to set up a PKI themselves? Tying that to the records used to find the domain seems like the obvious place to put that information to me!

DNSSEC lets you delegate a subtree in the namespace to a given public key. You can hardcode your DNSSEC signing key for clients too.

Don't get me started on how badly VPN PKI is handled....
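
SSHFP records are the existing hook for this; a minimal sketch of what a client-side check might look like (the hostname is hypothetical, and the answer only means something if it has been DNSSEC-validated):

  # Sketch: compare a host public key against a published SSHFP record.
  # Hostname is hypothetical; fp_type 2 means a SHA-256 fingerprint.
  import hashlib
  import dns.resolver

  def sshfp_matches(host: str, pubkey_blob: bytes) -> bool:
      """pubkey_blob is the raw public key, e.g. base64-decoded ssh-keyscan output."""
      digest = hashlib.sha256(pubkey_blob).digest()
      for rr in dns.resolver.resolve(host, "SSHFP"):
          if rr.fp_type == 2 and rr.fingerprint == digest:
              return True
      return False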


Yes, modern fleetwide SSH PKIs all do this; what you're describing is table stakes and doesn't involve anybody delegating any part of their security to a global PKI run by other organizations.

The WebPKI and DNSSEC run global PKIs because they routinely introduce untrusting strangers to each other. That's precisely not the SSH problem. Anything you do to bring up a new physical (or virtual) involves installing trust anchors on it; if you're in that position already, it actually harms security to have it trust a global public PKI.

The arguments for things like SSHFP and SSH-via-DNSSEC are really telling. It's like arguing that code signing certificates should be in the DNS PKI.


DNSSEC PKI does not preclude one from hardcoding specific keys in the client as well.

Providing global PKI and enabling end-to-end authentication by default for all clients and protocols certainly would make the internet a safer place.


So now we're running two PKIs? What does the second one do? Why not three?


I would really appreciate it if you would respond to my points instead of just moving on to another argument.

Do you hardcode Github and AWS keys in your SSH config? Do you think it would be beneficial to global security if that happened automatically?


No, we run a fleet with thousands of physicals and hundreds of thousands of virtuals, of course we don't hardcode keys in our SSH configuration. Like presumably every other large fleet operator, we solve this problem with an internal SSH CA.

Further, I haven't "moved on to another argument". Can you answer the question I just asked? If I have an existing internal PKI for my fleet, what security value is a trust relationship with DNSSEC adding? Please try to be specific, because I'm having trouble coming up with any value at all.


We also have thousands of devices accessible over SSH and we maintain our own PKI for this purpose as well. We also use mTLS with a private CA and chain of trust, for what it's worth.

It's a solved problem, basically.


The difference is DNS provides a fairly obvious upside.


Actually, does it? Yes, the obvious upside when I type in slack.com instead of 123.45.56.67 is very good. Does this same upside apply to addresses I don't type in? What's actually the advantage of addressing one of foobarcorp's infinitude of servers using the string "123-45-57-78.slp05.mus.foobar.com" instead of "123.45.57.78"? It seems to just waste bytes. And most communication is of the latter sort - an app talking to its own servers managed by the same company.


BGP can be hijacked. Anycast IPs exist. Rolling out a new release when one of your IPs is unavailable could be a severe challenge. SVCB records are actually kinda neat.


All of that's a problem with DNS too, even updating the IP. You could still use it to get the initial entry point if you wanted. But when you serve a webpage with an automatically generated pointer to image3.yourdomain, the only reason not to make that an IP is HTTPS, and LE just started issuing IP address certificates. Think about it - it saves a few round trips.

If the IP is anycast, all the better.


I signed up for a Pro account yesterday, now when I try to access it, I get shuffled off to the 'create a new account' page.

Trying to access a human in support to understand wtf has been going on, perhaps unsurprisingly, has been a study in infuriation. Their AI support bot has been as useless as most other AI support bots.

Sending email to their support email address triggers the same AI support bot. It suggested waiting 24 hours and trying to access my account again. And then closed the issue after 4 hours.

Probably not related to the email delivery issue (I keep getting the link via email, it just redirects to the create new account page), but perhaps indicative of something seriously broken and a lack of interest in actual support (even for paying customers).


As others have pointed out, using 'tmptest' works until someone buys tmptest -- unlikely, but people will buy anything these days.

I always use the ISO-3166 "user-assigned" 2-letter codes (AA, QM-QZ, XA-XZ, ZZ), with the theory being that the ISO-3166 Maintenance Agency getting international consensus to move those codes back to regular country codes will take longer than the heat death of the universe, so using them for internal domains is probably safe.

