Hacker News | doytch's comments

I get the mentality but it feels very much like security through obscurity. When did we decide that that was the correct model?

Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic. (Note, not commenting on whether that is the rationale here.)

That, and plenty of closed-source software at least has a decent security track record by now. I haven't seen an obvious cause-and-effect of open-source making something more secure. Only the other direction, where insecure closed-source software is kept closed because they know it's Swiss cheese.

This is not security via obscurity; it is reducing your attack surface as much as possible.

Reducing your attack surface as much as possible via obscurity.

I think cal.com is assuming LLMs are only good at hacking when they have the target's source code; whether that's true I don't know.

Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online. Obscurity just means it takes additional steps to recover the information. Your passwords are not obscure strings of characters; they are secrets.

If there is a self-hosted version at all, then the compiled form is out there to be analysed. While compilation and other forms of code transformation that may occur are not one-to-one, trivially reversed operations, they are much closer to bad password security (symmetric encryption or worse) than to good (proper hashing with salting/peppering/etc.). Heck, depending on the languages/frameworks/tooling used, the code may be hardly compiled or otherwise transformed at all in its distributed form. Tools to aid decompiling and such have existed for practically as long as their forward processes have, so I would say this is still obscurity rather than any higher form of protection.
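To make the hashing-vs-encryption comparison concrete, here is a minimal sketch using Node's built-in crypto module (the password and algorithm choices are purely illustrative): a salted hash cannot be run backwards, while anything merely encrypted is recoverable by whoever holds the key, and for compiled code the "key" (a decompiler) is publicly available.

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// One-way: a salted hash. Nothing here lets you recover "hunter2" from `hash`.
const salt = randomBytes(16);
const hash = scryptSync("hunter2", salt, 32);

// Two-way: symmetric encryption. Anyone holding the key can invert it,
// much like anyone holding a decompiler can invert "obscured" compiled code.
const key = randomBytes(32);
const iv = randomBytes(16);
const cipher = createCipheriv("aes-256-ctr", key, iv);
const ciphertext = Buffer.concat([cipher.update("hunter2", "utf8"), cipher.final()]);

const decipher = createDecipheriv("aes-256-ctr", key, iv);
const recovered = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
// recovered is "hunter2" again; the ciphertext was obscure, not secret.
```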

Even if the back-end is never fully distributed any front-end code obviously has to be, and even if that contains minimal logic, perhaps little more than navigation & validation to avoid excess UA/server round-trip latency, the inputs & outputs are still easily open to investigation (by humans, humans with tools, or more fully automated methods) so by closing source you've only protected yourself from a small subset of vulnerability discovering techniques.

This is all especially true if your system was recently more completely open, unless a complete clean-room rewrite is happening in conjunction with this change.


Fully agree. But cal.com is SaaS-only; after they move to closed source, there will be nothing to download.

right, they're just securing their application by making the bugs obscure. It's totally different.

Security through obscurity is still better than no obscurity...

hey, cofounder here. it takes my 16-year-old neighbor's son 15 mins and $100 of claude code credits to hack your open source project

Are you at all worried that the message you are spreading here is "We are no longer confident in our own ability to secure your data"?

That's exactly the message I got from the video

That's literally the message

Right, but those capabilities are available to you as well. Granted the remediation effort will take longer but...you're going to do that for any existing issues _anyway_ right?

I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.

And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.


> those capabilities are available to you as well

Give him $100 to obtain that capability.

Give each open source project maintainer $100.

Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.

I'm not aiming this reply at you specifically, but it's the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.

We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.


I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.

If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.


If I understand correctly, their primary product is SaaS, and their non-DIY self-host edition is an enterprise product. So your neighbor wouldn't have access to the binaries to begin with.

It only takes 20 minutes and $200 to hack a closed source one too though. LLMs are ludicrously good at using reverse engineering tools and having source available to inspect just makes it slightly more convenient.

Very true, but that is still a meaningfully higher cost at scale. If, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is a valid strategy to impose asymmetric costs on the attacker.

A little harder when you don’t have the source or the binaries.

Couldn't you just spend those $100 on claude code credits yourself and make sure you're not shipping insecure software? Security by obscurity is not the correct model (IMO)

> neighbors son 15 mins and $100 claude code credits

Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing essentially what amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.


Why can’t you (as in Cal.com) spend that amount of money and find vulnerabilities yourself?

You can keep the untested branch closed if you want to go with the “cathedral” model, even.


> since it takes my 16 year old neighbors son 15 mins and $100 claude code credits to hack your open source project

To what end? You can just look at the code. It's right there. You don't need to "hack" anything.

If you want to "hack on it", you're welcome to do so.

Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?


Was open source any more secure before LLMs became so cheap? For the same $100 you could have a North Korean hacking your code for a whole month.

What makes you think it'll take him more than 16 mins and $110 claude code credits to hack your closed source project?

SaaS makes that harder.

*This comment sponsored by Anthropic

No it doesn't. Have you been actually "hacked"?

Please, go ahead!

whooptie fuggin doo, then spend $200 on finding and fixing the issues before you push your commits to the cloud

They probably lack a sufficient density of people who remember why "security through obscurity" became an infamous concept. It belongs to that family of bad ideas that's superficially appealing, especially if you're still at that stage of your career at which you think past generations were full of idiots and you, alone, have discovered how to do real software development.

The mental model I had of this was actually on the paragraph or page level, rather than words like the post demos. I think it'd be really interesting if you're reading a take on a concept in one book and you can immediately fan-out and either read different ways of presenting the same information/argument, or counters to it.


Listen I'm not a crazy huge fan of a lot of new tech, but this is pretty transformational. When reading the first article [1] I was struck by the fact that it granted so much new freedom to your "social identity" on the internet. The comparison to hosting providers was incredible, because imagine you building a website and posting your thoughts there or starting a business there...and then immediately being shut down and all your data lost because of some arbitrary change of policy at your "host".

Everyone always talks about how your Google account being tied to logins is scary because you can get arbitrarily locked out. This protocol makes something like that functionally impossible since /you/ control your data.

[1] https://overreacted.io/open-social/


> Bob, however, isn’t technical. He doesn’t even know that there is a “repository” with his “data”. He got a repository behind the scenes when he signed up for his first open social app.

> Bob uses a free hosting service that came by default with his first open social app.

> If she owns alice.com, she can point at://alice.com at any server.

But does Bob own bob.com? That sounds beyond Bob's abilities and interest level. So Bob is stuck on one platform, in practice, and it's full of other Bobs who are also stuck. The "free hosting service" (why is it free?) is now the gatekeeper. It may let him switch platforms.


Even if Bob "owned" bob.com, that amounts to a row in some centralized database somewhere.


It seems to mean that Bob's free hosting service has to be a relatively benign and permanent institution like ICANN, and not some ropey old operation like Photobucket.

Others on this thread are talking about Decentralized IDs, https://en.wikipedia.org/wiki/Decentralized_identifier . Looks like those are stored in several competing systems. There's a blockchain one, a Chinese one that wants to know your real name, and a Microsoft one that wants to store your photo. This all has, uh, implications, which sound much less liberating than this at:// protocol does at first.


ICANN is relatively benign but not benign.


Technically sure, but (1) apps that Bob uses have no power over that database, and (2) if someone were to remove that row, Bob could change his handle to something else without losing his identity, data, or reach.


Did they invent the idea of a keypair?


Huh? He talked about and linked to a series of his older articles, including this one from 2013. [1] It's been a while.

[1]: https://stratechery.com/2013/the-intel-opportunity/


That article is from before Intel started to decline. Quoting the end of that article:

> Intel is already the best microprocessor manufacturing company in the world

Intel was not driving themselves into the dirt if they were the best in their field. Instead, I'd suggest looking at when the process nodes were achieved:

  |      | Someone Else   | Intel | Lead |
  |------+----------------+-------+------|
  | 32nm | 2011 (Samsung) |  2009 |    2 |
  | 22nm | 2013 (IBM)     |  2011 |    2 |
  | 14nm | 2015 (Samsung) |  2014 |    1 |
  | 10nm | 2017 (Samsung) |  2018 |   -1 |
  | 7nm  | 2018 (TSMC)    |  2021 |   -3 |
Seems almost exactly a decade ago that Intel lost their lead and fell behind the competition.


This is a gross simplification on both counts.

US Army veterans do have a higher rate of arthritis but their days are quite different from the "run 3-5 days a week" that most people think of when talking about recreational runners.

And the pacemaker comment stood out, so I did a bit of digging and found a study [1] you might be referring to. Again, the effects were strong only in the heavy-duty-exercisers/pro/semi-pro cross-country skier group. Additionally, this didn't offset the gains to cardiovascular or mortality risk - that group was still "healthier."

[1] https://pubmed.ncbi.nlm.nih.gov/39101218/


The dosage of this study was "2 × 70 ml·d⁻¹, each containing ∼595 mg NO₃⁻". That's probably gonna be tough to get daily by eating beets...


You’re right - if you only used it for ‘font-heading-2’, you wouldn’t need it.

But like the person you’re responding to said, the ergonomics improve for the majority of cases that are just ‘flex items-center gap-2’.

And yes, you could write it all yourself but Tailwind is a good set of defaults for those classes and minimizes bike-shedding and improves consistency. These are useful things in lots of teams.

I don’t really use Tailwind on smaller personal projects because I love CSS, but in an org of mixed skill levels, it’s pretty damn useful.

(Also, Tailwind uses CSS variables. It had solid support for them in the last major version and first-class support in the current one.)


Join | Sr/Staff Golang/DB Developer | Remote (USA, Canada) | full-time | $175k – $205k/yr + equity

Hey, I'm an engineering manager at Join. We build collaboration tools for huge construction projects: think stadiums, hospitals, research facilities, etc. Our customers (GCs) love us and their customers (owners) love us so we're getting cool network effects out of that.

I'm looking for a senior/staff Golang/DB developer who has a bunch of tools in their belt, knows their tradeoffs, and /wants/ to share their knowledge with midlevels and help them avoid some of the scars you've accumulated over the years. :)

The full job description (along with my email) is at https://join.build/company/careers-sse/

If you're curious about anything, either reply or email me.


This was one of the biggest "oh shit" moments I had when learning Remix: I could reuse the same validators across front and backend and they could even be right there in the same file.
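For illustration, a shared validator might look like the sketch below. This is hand-rolled, not Remix's API; the `validateSignup` name and rules are made up, and in practice you'd likely reach for a schema library. The point is that one module serves both sides.

```typescript
// Hypothetical shared module. In Remix, this can sit in the same file as the
// route's action, so client and server import one source of truth.
type SignupInput = { email: string; age: number };

function validateSignup(data: SignupInput): string[] {
  const errors: string[] = [];
  if (!/^\S+@\S+\.\S+$/.test(data.email)) errors.push("invalid email");
  if (!Number.isInteger(data.age) || data.age < 13) errors.push("age must be 13 or over");
  return errors;
}

// Client: run before submitting, for instant feedback without a round-trip.
// Server action: run again on the submitted data, since the client can be bypassed.
```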


> Actually it would sort of be nice if these frameworks could be coded such that if I have JavaScript shut off, it just runs the code elsewhere and sends me the site.

Remix does!

Actually, most (all?) of the major frameworks will do SSR on pageload and only use client-side rendering for re-renders. But yeah, Remix will do exactly what you're asking for and force you to do full-page refreshes without JS. If that's what you really want.


Nice! This seems like a reasonable compromise.

