Security through obscurity is only problematic if it is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic. (Note: I'm not commenting on whether that is the rationale here.)
That, and plenty of closed-source software has at least a decent security track record by now. I haven't seen obvious cause and effect where open-sourcing something makes it more secure. Only the other direction, where insecure closed-source software is kept closed because its maintainers know it's Swiss cheese.
Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online; obscurity just means additional steps are needed to recover the information. Your passwords are not obscure strings of characters, they are secrets.
If there is a self-hosted version at all, then the compiled form is out there to be analysed. While compilation and other forms of code transformation are not 1->1, trivially reversed, operations, they are much closer to bad password security (symmetric encryption or worse) than to good password security (proper hashing with salting/peppering/etc.). Heck, depending on the languages/frameworks/other tools used, the code may be hardly compiled or otherwise transformed at all in its distributed form. Tools to aid decompiling and such have existed for practically as long as their forward processes have, so I would say this is still obscurity rather than any higher form of protection.
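To illustrate the password analogy, here's a hedged TypeScript sketch (not tied to any particular product): an artifact that is merely transformed can be mechanically inverted, while a properly hashed secret cannot.

```typescript
import { createHash } from "node:crypto";

// "Obscured" data: the transformation looks opaque but is fully reversible,
// much like compiled or minified code that tooling can largely reconstruct.
const source = "if (user.isAdmin) grantAccess();";
const obscured = Buffer.from(source).toString("base64");
const recovered = Buffer.from(obscured, "base64").toString();
// recovered === source: anyone holding the artifact gets the information back.

// A proper secret: a salted hash is one-way, so the stored artifact alone
// does not let you recover the input. (Salt shown inline for brevity.)
const salt = "random-per-user-salt";
const digest = createHash("sha256").update(salt + "hunter2").digest("hex");
// There is no inverse function from `digest` back to "hunter2".
```

The point is only about which end of that spectrum distributed binaries sit on: closer to the reversible transformation than to the one-way hash.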
Even if the back-end is never fully distributed, any front-end code obviously has to be. And even if that contains minimal logic, perhaps little more than navigation and validation to avoid excess UA/server round-trip latency, the inputs and outputs are still easily open to investigation (by humans, humans with tools, or more fully automated methods). So by closing the source you've only protected yourself from a small subset of vulnerability-discovery techniques.
This is all especially true if your system was recently more completely open, unless a complete clean-room rewrite is happening in conjunction with this change.
Right, but those capabilities are available to you as well. Granted, the remediation effort will take longer, but... you're going to do that for any existing issues _anyway_, right?
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's about the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
If I understand correctly, their primary product is SaaS, and their non-DIY self-host edition is an enterprise product. So your neighbor wouldn't have access to the binaries to begin with.
It only takes 20 minutes and $200 to hack a closed source one too though. LLMs are ludicrously good at using reverse engineering tools and having source available to inspect just makes it slightly more convenient.
Very true, but that is still a meaningfully higher cost at scale. If, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is a valid strategy to impose asymmetric costs on the attacker.
Couldn't you just spend those $100 on claude code credits yourself and make sure you're not shipping insecure software? Security by obscurity is not the correct model (IMO)
> neighbors son 15 mins and $100 claude code credits
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
They probably lack a sufficient density of people who remember why "security through obscurity" became an infamous concept. It belongs to that family of bad ideas that's superficially appealing, especially if you're still at that stage of your career at which you think past generations were full of idiots and you, alone, have discovered how to do real software development.
The mental model I had of this was actually on the paragraph or page level, rather than words like the post demos. I think it'd be really interesting if you're reading a take on a concept in one book and you can immediately fan-out and either read different ways of presenting the same information/argument, or counters to it.
Listen I'm not a crazy huge fan of a lot of new tech, but this is pretty transformational. When reading the first article [1] I was struck by the fact that it granted so much new freedom to your "social identity" on the internet. The comparison to hosting providers was incredible, because imagine you building a website and posting your thoughts there or starting a business there...and then immediately being shut down and all your data lost because of some arbitrary change of policy at your "host".
Everyone always talks about how your Google account being tied to logins is scary because you can get arbitrarily locked out. This protocol makes something like that functionally impossible, since /you/ control your data.
> Bob, however, isn’t technical. He doesn’t even know that there is a “repository” with his “data”. He got a repository behind the scenes when he signed up for his first open social app.
> Bob uses a free hosting service that came by default with his first open social app.
> If she owns alice.com, she can point at://alice.com at any server.
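For the curious: AT Protocol handle resolution works via a DNS TXT record under `_atproto` (or an HTTPS well-known endpoint) that maps the domain to a DID. Roughly, with a placeholder DID value:

```
_atproto.alice.com.   TXT   "did=did:plc:examplePlaceholder"
```

So moving servers means updating where the DID's repository lives; the handle and identity stay the same.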
But does Bob own bob.com? That sounds beyond Bob's abilities and interest level. So Bob is stuck on one platform, in practice, and it's full of other Bobs who are also stuck. The "free hosting service" (why is it free?) is now the gatekeeper. It may or may not let him switch platforms.
It seems to mean that Bob's free hosting service has to be a relatively benign and permanent institution like ICANN, and not some ropey old operation like Photobucket.
Others on this thread are talking about Decentralized IDs, https://en.wikipedia.org/wiki/Decentralized_identifier . Looks like those are stored in several competing systems. There's a blockchain one, a Chinese one that wants to know your real name, and a Microsoft one that wants to store your photo. This all has, uh, implications, which sound much less liberating than this at:// protocol does at first.
Technically sure, but (1) apps that Bob uses have no power over that database, and (2) if someone were to remove that row, Bob could change his handle to something else without losing his identity, data, or reach.
That article is from before Intel started to decline. Quoting the end of that article:
> Intel is already the best microprocessor manufacturing company in the world
Intel was not driving themselves into the dirt if they were the best in their field. Instead, I'd suggest looking at when the process nodes were achieved:
US Army veterans do have a higher rate of arthritis, but their days are quite different from the "run 3-5 days a week" that most people think of when talking about recreational runners.
And the pacemaker comment stood out, so I did a bit of digging and found a study [1] you might be referring to. Again, the effects were strong only in the heavy-duty-exerciser/pro/semi-pro cross-country skier group. Additionally, this didn't offset the gains to cardiovascular or mortality risk - that group was still "healthier."
You’re right - if you only used it for ‘font-heading-2’, you wouldn’t need it.
But like the person you’re responding to said, the ergonomics improve for the majority of cases that are just
‘flex items-center gap-2’.
And yes, you could write it all yourself but Tailwind is a good set of defaults for those classes and minimizes bike-shedding and improves consistency. These are useful things in lots of teams.
I don’t really use Tailwind on smaller personal projects because I love CSS, but in an org of mixed skill levels, it’s pretty damn useful.
(Also, Tailwind uses CSS variables. It had solid support for them in the last major version and has first-class support in the current one.)
Hey, I'm an engineering manager at Join. We build collaboration tools for huge construction projects: think stadiums, hospitals, research facilities, etc. Our customers (GCs) love us and their customers (owners) love us so we're getting cool network effects out of that.
I'm looking for a senior/staff Golang/DB developer who has a bunch of tools in their belt, knows their tradeoffs, and /wants/ to share their knowledge with midlevels and help them avoid some of the scars you've accumulated over the years. :)
This was one of the biggest "oh shit" moments I had when learning Remix: I could reuse the same validators across front and backend and they could even be right there in the same file.
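As a sketch of the pattern (names are illustrative, not Remix APIs): a plain validator exported from one module can be imported by both the server action and the client form handler, so the rules live in exactly one place.

```typescript
// Shared validator module: no framework imports, so it runs anywhere.
export type SignupErrors = { email?: string; password?: string };

export function validateSignup(data: { email: string; password: string }): SignupErrors {
  const errors: SignupErrors = {};
  if (!data.email.includes("@")) errors.email = "Invalid email";
  if (data.password.length < 8) errors.password = "Password too short";
  return errors;
}
```

On the server, a Remix `action` would call `validateSignup` on the parsed form data before touching the database; on the client, the form's submit handler calls the very same function for instant feedback, with no risk of the two drifting apart.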
> Actually it would sort of be nice if these frameworks could be coded such that if I have JavaScript shut off, it just runs the code elsewhere and sends me the site.
Remix does!
Actually, most (all?) of the major frameworks will do SSR on page load and only use client-side rendering for re-renders. But yeah, Remix will do exactly what you're asking for and fall back to full-page refreshes without JS, if that's what you really want.