But this seems like a technological solution to a very human problem. If I can trick a user into approving the login, then hardware fobs, secure elements, etc., are meaningless.
The audience here is likely to assume the security is solid, and it probably is. But this is a technology targeting the average user. It will certainly be easier for the end user, but it seems to introduce a human-based attack vector that may be easier to exploit.
What I mean is: with modern hardware keys you literally can't trick a user into approving a remote login, or a login to a fake domain. It isn't possible unless you control the domain the user is logging into (say, you have code execution on their machine, or you've compromised their network and broken TLS, attacks that are significantly more complex than phishing). Hardware keys enforce that the device can only authenticate against the real domain, never a phishing domain. The core of this is a real improvement to how 2FA works at the protocol layer, not simply a change to how the user interacts with the device.
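To make the protocol-layer point concrete, here's a minimal sketch of the origin check a WebAuthn-style relying party performs. In WebAuthn, the browser (not the page) writes the actual origin into the client data that the authenticator signs over, so a phishing site can't forge it; the function and sample data below are hypothetical simplifications, not the real verification procedure.

```python
import json

def verify_origin(client_data_json: bytes, expected_origin: str) -> bool:
    """Reject an assertion whose signed-over origin doesn't match ours.

    Simplified: a real relying party also checks the challenge, the
    signature, and the authenticator data.
    """
    client_data = json.loads(client_data_json)
    return (client_data.get("type") == "webauthn.get"
            and client_data.get("origin") == expected_origin)

# The browser fills in the origin the user actually visited:
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://example.com",
                    "challenge": "abc123"}).encode()
phish = json.dumps({"type": "webauthn.get",
                    "origin": "https://examp1e.com",  # look-alike domain
                    "challenge": "abc123"}).encode()

print(verify_origin(legit, "https://example.com"))  # True
print(verify_origin(phish, "https://example.com"))  # False
```

The point of the sketch: even if the user taps the key on a look-alike domain, the server-side check fails, because the origin is supplied by the browser and covered by the authenticator's signature, outside the user's (and the phisher's) control.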
Hardware keys also require that the key can only authenticate a local session, so there's no risk that your "hardware key tap" can be captured and replayed by a remote adversary who doesn't control the local computer.
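The replay resistance comes from per-session challenges: the server issues a fresh random challenge for each login attempt, the authenticator's signature covers it, and the challenge is consumed on use. A minimal sketch of that bookkeeping, with hypothetical names (`ChallengeStore` is not a real API):

```python
import secrets

class ChallengeStore:
    """Issue one-time challenges bound to a session (illustrative only)."""

    def __init__(self):
        self._pending = {}

    def issue(self, session_id: str) -> str:
        # Fresh, unguessable challenge for this session's login attempt.
        challenge = secrets.token_urlsafe(32)
        self._pending[session_id] = challenge
        return challenge

    def consume(self, session_id: str, signed_challenge: str) -> bool:
        # Pop so the same response can never be accepted twice.
        expected = self._pending.pop(session_id, None)
        return (expected is not None
                and secrets.compare_digest(expected, signed_challenge))

store = ChallengeStore()
c = store.issue("session-A")
print(store.consume("session-A", c))  # True: fresh response for this session
print(store.consume("session-A", c))  # False: replaying the same response fails
```

A captured "tap" is just a signature over one session's challenge, so presenting it again, or in a different session, verifies nothing.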