Why aren't we doing more to validate the identity of the service we are trying to connect to? CAs don't allow me to establish my own personal web of trust. If I connect once to my bank in a method I deem safe, I should be able to store their credentials in an easy-to-validate way.
That way if I fall for a phishing attack, the browser can CLEARLY indicate to me that I'm encountering a new entity, not one I have an established relationship with.
Concurrently, OSes need to do a much better job of supporting two-factor locally and out of the box. To even use a YubiKey personally, you have to install their software and disable the existing login methods, or else you can still log in the original way you set up.
While we're at it, browsers and operating systems should actually lock the second a key is no longer connected/in range. I know smart cards can behave similarly, but this needs to be grandparent-level easy to set up and control.
I would feel much safer with my elderly family having "car keys" to their PC.
The closest thing to avoiding being phished by a different "secure" entity is that your password manager will refuse to autofill (*) your credentials. But it's true that this is far from sufficient - this kind of autofill is wonky and doesn't work with all pages, so users can get conditioned to working around it by manually copying and pasting from the password manager to the browser, which defeats the protection. Many users prefer to always copy-and-paste anyway, because that avoids having to install the password manager's browser addon, which can even seem like the more secure choice.
(*): Note that "autofill" only means "automatically populate credentials", not "automatically populate credentials without any user interaction". Clicking the username field, choosing a credential from a dropdown that the password manager populated for you based on which credentials match the website in question, and then having it be applied is also "autofill".
You're right, that's woefully insufficient. The authentication challenge should clearly indicate (using color and text) whether or not the challenge is an established part of your trust network, and the hardware token should be able to validate the authenticity of the challenge modal itself.
Users should be able to take an action they trust, while at the same time having the choice of that action taken away (or made more cumbersome) if they are about to get themselves into trouble.
There are people far smarter than me working on these problems, but I feel like they are so hyperfocused on state-level security that they refuse to listen to anyone about actual usability.
Specifically what's going on here in the cheapest FIDO devices is roughly this:
On every site where you enroll, a fresh random private key is generated - this ensures you can't be tracked by the keys. Your Facebook login and your GitHub login with WebAuthn are not related, so although if both accounts are named "ChikkaChiChi" there are no prizes for guessing it's the same person, WebAuthn does not help prove this.
A private key used to prove who you are to, say, example.com is not stored on the device. Instead, it's encrypted (using an AEAD encryption mode) under a symmetric key that is really your device's sole "identity", the thing that makes it different from the millions of others, together with a unique "Relying Party" ID, or RPID, which for WebAuthn is basically (the SHA-256 of) the DNS name. The result is sent to example.com during your enrolment, along with the associated public key and other data.
They can't decrypt it, in fact, they aren't formally told it's encrypted at all, they're just given this huge ID number for your enrolment, and from expensive devices (say, an iPhone) it might not be encrypted at all, it might just really be a huge randomly chosen ID number. Who knows? Not them. But even if they were 100% sure it was encrypted too bad, the only decryption key is baked inside your authenticator which they don't have.
What they do have is the public key, which means when you can prove you know that private key (by your device signing a message with it) you must be you. This "I'm (still) me" feature is deliberately all that cheap Security Keys do, out of the box, it's precisely enough to solve the authentication problem, with the minimum cost to privacy.
Now, when it's time to log in to example.com, they send back that huge ID. Your browser says: OK, any Security Keys that are plugged in, I just got this enrolment ID from example.com, who can use that to authenticate? Each authenticator looks at the ID and tries to decrypt it, knowing their symmetric key and the fact it's for example.com. AEAD mode means the result is either "OK" and the Private Key, which they can then use to sign the "I'm (still) me" proof for WebAuthn and sign you in, or "Bzzt wrong" with no further details, and that authenticator tells the browser it didn't match, so it must be some other authenticator.
This means, if you're actually at example.org instead of example.com the AEAD decryption would fail and your authenticator doesn't even know why this didn't work, as far as it knows, maybe you forgot to plug in the right authenticator? You not only don't send valid credentials for example.com to the wrong site, your devices don't even know what the valid credentials are because they couldn't decrypt the message unless it's the correct site.
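The whole trick can be sketched in a few lines. This is a toy illustration only, NOT real cryptography: a keyed-HMAC keystream plus an HMAC tag stands in for a real AEAD like AES-GCM, and all the sizes and names are made up for the example. The point it demonstrates is the actual mechanism: the "credential ID" the site stores is the wrapped private key, bound to the RPID as associated data, so decryption under the wrong site name just fails with no further detail.

```python
# Toy sketch (NOT real crypto) of a stateless FIDO authenticator:
# the credential ID sent to the site IS the encrypted per-site private key.
import hashlib, hmac, os

DEVICE_SECRET = os.urandom(32)  # the authenticator's one and only secret

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # HMAC in counter mode as a stand-in keystream (illustrative only).
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def wrap(private_key: bytes, rp_id: str) -> bytes:
    """Enrolment: encrypt the per-site private key, binding it to the RPID."""
    nonce = os.urandom(12)
    ad = hashlib.sha256(rp_id.encode()).digest()  # RPID as associated data
    ct = bytes(a ^ b for a, b in
               zip(private_key, _keystream(DEVICE_SECRET, nonce, len(private_key))))
    tag = hmac.new(DEVICE_SECRET, nonce + ad + ct, hashlib.sha256).digest()
    return nonce + tag + ct  # this opaque blob is what the site stores

def unwrap(credential_id: bytes, rp_id: str):
    """Login: recover the private key, or None ("Bzzt wrong") on any mismatch."""
    nonce, tag, ct = credential_id[:12], credential_id[12:44], credential_id[44:]
    ad = hashlib.sha256(rp_id.encode()).digest()
    expect = hmac.new(DEVICE_SECRET, nonce + ad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # wrong site or wrong authenticator: no details leaked
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(DEVICE_SECRET, nonce, len(ct))))

key = os.urandom(32)                            # per-site key, never stored
cred_id = wrap(key, "example.com")
assert unwrap(cred_id, "example.com") == key    # right site: key recovered
assert unwrap(cred_id, "example.org") is None   # phishing site: AEAD fails
```

Note that the authenticator holds nothing per-site: given only `DEVICE_SECRET`, it can serve unlimited enrolments, which is exactly how a cheap key with tiny storage manages credentials for any number of websites.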