There is a frustration, as a user, that as the value of iOS exploits increases, they become more and more 'underground'. The time between OS release and public jailbreak keeps growing, and it doesn't seem to be due only to the hardening of the OS. People are selling their exploits rather than releasing them publicly. And the further underground they go, the more likely they are to be used for nefarious purposes rather than for letting me edit my own HOSTS file. The most recent iOS jailbreak (which let me gain root access to my iPhone) lasted less than a month before Apple stopped signing the old OS. Yet it's clear this (new) quick action on Apple's part does not (yet?) stop persistent state-sponsored adversaries.
It is more and more clear that to accept Apple's security (which seems to be getting better, but is obviously still insufficient) I must also accept Apple's commercial limitations on the use of a device I own. And I suppose the line between 'the ability to exploit a vulnerability' and 'having control' is drawn differently by every user: one man's 'obvious' kernel exploit is another man's 'obvious' phishing scam.
It is not a new tension, but the stakes on both sides do seem to be getting higher and higher: total submission to an onerous EULA vs. total exploitable knowledge about me and my device. Both sides seem to have forced each other to introduce the concept of 'total' to those stakes, and that is frustrating. More so when it's not yet clear which threat is greater.
This is what happens when the concepts of security, DRM and commercial restriction get entangled.
This reminds me of the PlayStation 3. It remained an unhacked console for so long, and the theory goes that the people who wanted to tinker with it could do so without being forced to fight on the same side as the bad guys, because Sony allowed 'Other OS'. When Sony closed off 'Other OS', it gave people an incentive to actually try to jailbreak the system [1].
Yet now the people who just wanted to tinker had to take the same route as the people who just wanted to pirate. By locking the platform down further, Sony only succeeded in merging the two camps (benign tinkerers and pirates). I think there's a lot of validity in this theory.
It's a tough choice. As an iOS user I've long since come to the same acceptance as you - that the added security is worth the extra restrictions. Yet it doesn't have to be this way. Protecting your platform from hackers shouldn't be the same as protecting your platform from SNES emulators or games with adult themes.
You say Apple's security isn't sufficient. It certainly appears that as time goes on Apple's security is pretty sufficient for most users. We're talking about exploits worth 1+ million dollars being used in a targeted attack against a single individual (or, more likely, a relatively small number of targeted individuals over time). This isn't something that the overwhelming majority of users need to be concerned about. Obviously it would be great if Apple's security was so good that not even nation states could get past it, but that's an incredibly high bar.
I don't think the bar is just nation-states. The bar also includes those with any of the following:
- $1M in cash
- a skilled working knowledge of Apple's software and hardware
- fast reflexes, to quickly react to and apply a newly public exploit derived from either of the above
Together, the number of worldwide actors who fall into one of those categories is actually fairly large. All of them have the capability to gain total 'access' to my device. Given the value (to me) of, and the amount of data on, that device, that's a huge hole, even for the majority of people. Further, with these networked exploits, the distinction between targeting one individual and targeting every individual running that OS is actually fairly slight. It wouldn't take that much more work to spam a well-designed exploit against an entire class of (normal) users.
This presumes that the exploits can only be found by companies with deep pockets, which are probably deep only because they are willing to sell them. What if there are equally good teams who are not in it for the money?
I think it's a bit naive to believe that an exploit worth $1M+ will remain a secret until the patch is shipped, at which point it is worth exactly nothing. There are so many ways for the exploit to trickle down to more people until it reaches mass use.
To mention a few: the buyer might want to recover the purchase price by reselling. If it's a government agency, it might want to establish credibility with future sellers. Reverse engineers have a $1M+ incentive, and a much smaller window in which to sell it before it becomes worthless.
It's like piracy. First it's the group with so-called FTP access. During that period, maybe a release is worth X dollars, but what's the chance that it stays there and never reaches a mass audience?
I'm not sure what you mean. In this case the exploit did remain a secret until such time as a targeted individual was paranoid enough to send the link to an investigative team instead of clicking on it (and this is what resulted in the patch), or in other words, nobody else besides the attackers, the victim, the investigative team, and Apple knew about the exploit until the patch shipped.
Just as an aside:
The $1M exploit was a bug bounty/publicity stunt. While that was also a chain exploit, it doesn't automatically mean this one is a "$1M exploit", as everyone is claiming. Or am I missing something?
It's very clear that Apple doesn't want you to have full control over your device. So jailbreaking will only become harder and harder. Either accept it, or move to a more open platform.
My Android has an unlockable bootloader, but you need to actually request the key from the manufacturer. Malware can't unlock it against my will without a jailbreak. Seems like a decent arrangement to me: safe by default, but if I want to root my phone I can.
What I would love to see is a bootloader where I can load my own signing keys. Preferably via USB only, and by putting the device into a mode that requires certain button inputs during power-up.
Signed boot has uses, but we need to be sure that the user does the signing.
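To make that concrete, here's a minimal sketch of the check such a bootloader might perform, assuming a user-enrolled Ed25519 key. Everything here is a hypothetical illustration, not any real firmware's API:

    # Hypothetical: boot only images signed by a key the *user* enrolled
    # (say, over USB in the button-triggered mode described above).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def should_boot(image: bytes, signature: bytes, enrolled_key: bytes) -> bool:
        """Return True only if `image` was signed by the user's enrolled key."""
        try:
            Ed25519PublicKey.from_public_bytes(enrolled_key).verify(signature, image)
            return True
        except InvalidSignature:
            return False

The point is just that the root of trust is a key the owner put there, not one baked in by the vendor.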
I was tripped up by secure-boot validation while trying to unlock an LG G5 yesterday.
Turns out, in Android 6 there's a Developer Option called "Allow OEM Unlock" which enables unlocking the bootloader through fastboot.
While I can't sign my own bootloader, having a developer option to enable the unlock that can only be triggered from inside the OS is an interesting trade-off.
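For reference, the sequence looked roughly like this (from memory; the exact command varies by device and bootloader version):

    adb reboot bootloader       # after enabling "Allow OEM Unlock" in Developer Options
    fastboot oem unlock         # older bootloaders
    fastboot flashing unlock    # newer bootloaders

Either way you still have to confirm on the device itself, and the unlock wipes userdata, which is what makes it hard to trigger silently.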
Yeah, but that's life with a complex bundle of software. It's not as though Apple's attempt at making a walled garden makes them magically better at not writing exploitable bugs in userland. Where they're mostly better is at making it harder for normal users to intentionally install something that turns out to be malware, which isn't nothing, but with exploits like Trident, that makes absolutely no difference.
Even the Nexus unlock where you don't need a key but need to boot into fastboot is okay. I don't think many pieces of software will be able to automatically perform the steps required for that, including a confirmation on the phone and one on the computer.
Side note: it's valuable for devices to come to you locked, because it means the device you have received does not carry third-party malware, a common problem in some markets for Android phones.
I guess ambiguity / assumed Apple fanboyism is why you're downvoted, but if I'm understanding you correctly, then I feel largely the same way.
Apple's walled garden and "moral" approach to guarding their garden is incredibly frustrating, but the sheer number of vulnerabilities affecting different levels of the Android stack is so disheartening. The same is true of my PC/Mac: all it takes is someone plugging in a malicious USB stick and it can infect the firmware on the USB host, and that's it, game over. From there it can spread to disk firmware, which is also growing increasingly complicated and opaque, and who knows where else.
It's reaching a point where I trust my iOS device more than any other, depressingly because of the walled garden. I can't build suitable fences around my kit myself to protect myself against every new vuln (now even monitors can have their firmware exploited and screenloggers installed), giving up trying and sacrificing some freedom for that sense of security is terrible, but feels like the only logical course of action at this point.
I fully admit this line of thinking hasn't completely solidified for me.
But empirically, the iOS ecosystem is demonstrating that a closed-source ecosystem can have better security properties than the open-source one. And security advances the same virtue of user self-determination that open source does.
On iPhone, user-mode exploits may remain sandboxed, unless they break out of the sandbox too. On jailbroken iPhones, that last step may already be done for them.
On Android, user-mode exploits may likewise remain sandboxed unless they break out of the sandbox, same as on iPhone. On rooted Android devices, the last step may already be done for them.
You cannot compare a stock iPhone with rooted Androids, just as you cannot compare a jailbroken iPhone with stock Android.
I look at it the other way: as exploits become more and more underground, I feel safer: I know those exploits are more likely to be used by state actors against activists and other people who are doing illegal stuff, and less likely to be used against me and millions of other users to install malware on our phones (to make them send spam, to make them send expensive texts...)
Perhaps you feel that way because you have the luxury of living in a place where human rights are respected. That these exploits are being used by regimes to shut down opposition is terrible in its own right.
Edit: As for net effect on global society, I think having my phone be part of a botnet that sends spam is less impactful than disrupting democratic progress.
"I know those exploits are more likely to be used by state actors against activists and other people who are doing illegal stuff"
State actors usually have their own opinions about what's legal and what's not, and they tend to give themselves the benefit of the doubt because... the mission must succeed! So no, this "undergrounding" should not make you feel safer. You never know when you're going to cross paths with the next Snowden or any whistleblower or human rights activist.
I know it's a cliché at this point, but I'm going to once again point out that to some state actors, simply being gay is the illegal stuff they are looking for, and the penalty may be death. Those people do not feel safer now.
You're not an activist yet. It's probably unwise to dismiss this possibility out of hand: governments change, and you might find yourself in strong enough disagreement that you'll need to speak up or act. Or your employer might do something so nefarious you'll feel the need to blow the whistle.
If there is ever a point when you feel the need to rise up, but can't because you gave in to government and corporate surveillance and lockdown in the first place, that would be a pity.