What are you smoking, we hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence. Not to mention social media's entire business model is hacking people - keep them on your platform by any means necessary.
> we hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence.
I'm not sure why you think this is counter to my point (perhaps we should wonder what you yourself are smoking?), which to reiterate was that:
1. Most current security issues are due to the various insecure foundations we build our technology on, and
2. By the time Neuralink type implants are common, that won't be the case anymore.
We have both cars and pacemakers that can kill people if you send the right wireless commands. Why would Neuralink be different?
I agree that we do have the technology to make it secure if we want to. We made flight software secure back in the '80s or so.
What we don't have is the incentives. We've built everything on insecure foundations to get to market cheaper and faster. These incentives don't change for Neuralink. In fact, they create a kind of gold-rush conditions that make things worse.
What could change things dramatically overnight would be the government stepping in and enforcing safety regulations, even at the cost of red tape and slow bureaucratic processes. And it's starting, slowly. But e.g. the EU is promoting SBOMs, so their underlying mental model is still one where you tape random software together quickly.
At some point in the future no one will be using x86 or any variation, and we will all be using a secure architecture. Same as with insecure languages, far enough in the future, every language in common use will be safe.
I believe by the time brain implants are common, we will be far enough in the future that we will be using secure foundations for those brain implants.
> What could change things dramatically overnight would be the government stepping in and enforcing safety regulations,
For a damn brain implant I don't see why they wouldn't.
I can tell you're high because of #2. The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and don't replace it with something equally bad or worse.
Oh, and Musk isn't allowed a Neuralink tripwire to blow up your brain via his invention because he saw pronouns listed somewhere and got triggered.
> The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and not replace it with something equally bad or worse.
Oh man, you've ingested that anti-capitalism koolaid like so many young college kids are so quick to do. It's always such a shame.
This doesn't really have anything to do with capitalism. It's a question of regulation, e.g. what the FDA does, and also a question of time, because when enough time passes, most computing will be secure by default due to having gotten rid of the insecure foundations.
And more than that, it's an issue with democracy more than capitalism. Fix the way people vote if you want to fix the world, or prevent the types of people who want to believe the earth is flat from having a vote at all.
Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
There is no technical solution to people uploading high-res photos with location metadata to the social network du jour. Or the CEO who wants access to all his email on his shiny new gadget. Or the three-letter agency that thinks ubiquitous surveillance is a great way to do its job. Or the politician who can be easily convinced that backdoors usable only by "the good guys" can exist. Or the team that does all its internal chat, including production secrets, in a 3rd-party chat app, only to get popped and have their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
None of the people involved in bringing "enhanced human" tech to market will be immune to any of those pressures. (I mean, FFS, in the short term we're really talking about a product that _Elon_ is applying his massive billionaire brain to, right? I wonder what the media-friendly equivalent of "Rapid Unscheduled Disassembly" is going to be when Neuralink starts blowing up people's brains?)
> Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
It absolutely will. I didn't say completely solved, I said largely solved.
> There is no technical solution to people uploading high-res photos with location metadata to the social network du jour.
Bad example honestly, since most social media sites strip out exif data by default these days. Not sure there are any that don't.
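For what it's worth, the mechanics of stripping EXIF are simple enough that there's little excuse not to do it. Here's a rough, stdlib-only Python sketch of the idea (not what any real site actually runs; production systems use battle-tested image libraries, but the principle is the same): re-emit the JPEG without the APP1 segments where EXIF, including GPS coordinates, lives.

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (EXIF) segments removed.

    Illustrative sketch only. A JPEG is a sequence of marker segments:
    it starts with SOI (FF D8), metadata lives in APPn segments
    (EXIF uses APP1 = FF E1), and entropy-coded image data follows
    the SOS marker (FF DA).
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:
            break  # malformed input; bail out and copy the rest
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: image data follows, copy verbatim
            out += jpeg[i:]
            return bytes(out)
        # segment length field includes its own two bytes
        seg_len = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        if marker != 0xE1:  # keep every segment except APP1/EXIF
            out += jpeg[i : i + 2 + seg_len]
        i += 2 + seg_len
    out += jpeg[i:]
    return bytes(out)
```

Of course this only covers metadata the site chooses to strip; it does nothing about location information visible in the photo itself.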
> Or the CEO who wants access to all his email on his shiny new gadget. Or the three-letter agency that thinks ubiquitous surveillance is a great way to do its job. Or the politician who can be easily convinced that backdoors usable only by "the good guys" can exist. Or the team that does all its internal chat, including production secrets, in a 3rd-party chat app, only to get popped and have their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
Yes yes, humans can be selfish and take risks and be bribed and negligent and blah blah blah.
The context of the comment was Neuralink implants getting hacked the way an out-of-date smart TV might be. When it comes to the actual tech, security will be a solved problem, because most of the problems we see today are due to everything being built on top of insecure foundations on top of insecure foundations.
Using chips with a secure architecture, safe languages and safe protocols is going to result in secure implants.
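To make the "safe languages" point concrete: the classic memory-corruption bugs behind so many exploits can't happen silently in a memory-safe language. A toy Python illustration (the same holds for Rust, Java, etc.):

```python
# In C, writing past the end of a buffer silently corrupts adjacent
# memory, which is the root of classic exploit classes like stack
# smashing. In a memory-safe language, the same mistake is caught at
# runtime as a clean, well-defined error instead of undefined behavior.
buf = bytearray(8)

try:
    buf[16] = 0x41  # out-of-bounds write
    corrupted = True
except IndexError:
    corrupted = False  # the runtime refused the write

print(corrupted)  # → False
```

The point isn't that any particular language is the answer; it's that entire bug classes disappear when the foundation refuses to execute them.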
That's not to say there won't be some new vulnerability, but I disagree with this idea people love to repeat that security is impossible.