Pranav Bharadwaj

From CAPTCHAs to Passkeys: Human Verification in 2025

Generally speaking, there are two kinds of authentication that happen on the internet:

  1. Are you a human? (e.g. CAPTCHAs)
  2. Who are you? (e.g. username/password, OAuth)

In this post, we will focus on the first: human verification. You might not be able to stop every bot, but you can greatly increase the cost of bypassing your checks at any meaningful scale. We want mechanisms that are user-friendly, expensive to bypass, and hard to defeat at scale.

Software-Based Human Verification

Traditional CAPTCHAs (Interaction Analysis and Challenge-Based Testing)

Challenge-based testing prompts users to solve tasks that should be easy for humans and hard for bots, such as identifying objects in images, typing distorted text, or simply clicking a checkbox ("I'm not a robot"). A Completely Automated Public Turing test to tell Computers and Humans Apart, or CAPTCHA (up until reCAPTCHA v2), does exactly what its name suggests. Traditional CAPTCHAs generally involve tasks that are:

  1. Easy for humans to solve
  2. Hard for bots to solve
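
To make that flow concrete, here is a minimal sketch of a challenge-based check. The arithmetic question, the in-memory store, and the function names are all stand-ins for illustration; a real CAPTCHA would serve a distorted image or puzzle and keep challenges in an expiring server-side store.

```typescript
import { randomUUID } from "node:crypto";

// In-memory store of outstanding challenges: id -> expected answer.
// A real deployment would use an expiring, server-side session store.
const pending = new Map<string, number>();

// Step 1: issue a challenge. A real CAPTCHA would render a distorted image
// or puzzle; a plain arithmetic question stands in here for illustration.
function issueChallenge(): { id: string; question: string } {
  const a = Math.floor(Math.random() * 10);
  const b = Math.floor(Math.random() * 10);
  const id = randomUUID();
  pending.set(id, a + b);
  return { id, question: `What is ${a} + ${b}?` };
}

// Step 2: verify the response. The challenge is single-use, so it is
// deleted whether or not the answer is correct.
function verifyChallenge(id: string, answer: number): boolean {
  const expected = pending.get(id);
  pending.delete(id);
  return expected !== undefined && expected === answer;
}

// Usage
const { id, question } = issueChallenge();
console.log(question);               // shown to the user
console.log(verifyChallenge(id, 7)); // true only if 7 was the right sum
```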

Checkbox CAPTCHAs rely on interaction analysis: evaluating a user's behavior on the site, such as mouse movements, to determine whether the user is human. Other CAPTCHAs present a challenge to the user and evaluate humanness based on a combination of interaction analysis and correctness.
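
Here is a rough browser-side sketch of what feeding interaction analysis might look like: sample pointer movements and send them along with the checkbox click for server-side scoring. The /verify endpoint and the trace format are assumptions, and real products combine many more signals (cookies, device and network data, timing).

```typescript
// Rough browser-side sketch: sample pointer movements and send them with the
// checkbox click so the server can score how "human" they look.
// The /verify endpoint and the scoring itself are assumptions, not a real API.
type PointerSample = { x: number; y: number; t: number };

const samples: PointerSample[] = [];

document.addEventListener("mousemove", (e) => {
  // Keep a bounded trace of recent movement.
  samples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
  if (samples.length > 500) samples.shift();
});

document.getElementById("not-a-robot")?.addEventListener("click", async () => {
  // A perfectly straight, perfectly timed path is a classic bot tell; humans
  // produce jittery, curved traces. The server-side model makes that call.
  await fetch("/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ trace: samples }),
  });
});
```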

Bots now outperform humans in solving CAPTCHAs, both in terms of speed and accuracy. A 2023 study by researchers at the University of California, Irvine, involving 1,400 participants, found that bots solved distorted-text CAPTCHAs with nearly 100% accuracy, while human accuracy ranged between 50% and 84% (Searles et al., 2023). The same study revealed that bots also outperformed humans on other CAPTCHA types, including image recognition and puzzle sliders. In some cases, bots achieved accuracy rates of 85% to 100%, surpassing human performance (Searles et al., 2023).

CAPTCHAs still play an important role in deterring unsophisticated attacks. In particular, they can mitigate automated DDoS attacks, credential stuffing, and basic bot scraping by introducing a small but meaningful computational or interaction cost. Even simple CAPTCHA challenges can reduce automated abuse traffic by up to 50% in some contexts [5]. Although advanced attackers can often bypass traditional CAPTCHAs, deploying them effectively filters out bulk automated traffic that might otherwise overwhelm login forms, comment sections, or checkout systems [4]. As a result, CAPTCHAs act more as a friction layer, raising the cost for attackers and deterring mass-scale abuse, rather than serving as an absolute barrier against sophisticated agents.

Believe it or not, checkbox CAPTCHAs reduce checkout conversions [4]. Every action users have to take introduces friction, so we want to avoid requiring user action as much as possible. The internet is both blessed and cursed by our friends who buy DoorDash with Klarna.

Passive Risk Scoring

Instead of directly challenging the user, newer systems like reCAPTCHA v3 and Cloudflare Turnstile observe user behavior in the background and assign a score based on environmental signals, interaction patterns, and historical data. If the score indicates low risk (reCAPTCHA v3, for example, returns a score from 0.0, likely a bot, to 1.0, likely a human), the user passes through without any interruption. However, highly sophisticated bots may eventually learn to blend into low-risk behavior profiles.
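
On the server side, verification usually boils down to exchanging the client's token for a score and picking a threshold. Here is a sketch for reCAPTCHA v3 using its siteverify endpoint; the 0.5 threshold and the reject-by-default fallback are assumptions to tune per site.

```typescript
// Server-side sketch of verifying a reCAPTCHA v3 token (Node 18+, built-in fetch).
// The 0.5 threshold and the fallback behavior are assumptions to tune per site.
async function verifyRecaptchaToken(token: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET ?? "", // server-side secret key
    response: token,                            // token obtained in the browser
  });

  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    body: params,
  });
  const data = (await res.json()) as { success: boolean; score?: number };

  // Scores range from 0.0 (likely bot) to 1.0 (likely human). A low score
  // doesn't have to mean rejection: a common pattern is to fall back to a
  // visible challenge or extra verification instead.
  return data.success && (data.score ?? 0) >= 0.5;
}
```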

Looking Forward

As AI capabilities advance, software-based detection faces a fundamental problem: any behavioral pattern that can be detected can eventually be replicated. Advanced techniques like IP reputation scoring, device fingerprinting, and browser environment analysis raise the bar, but sophisticated attackers can learn to game these systems.

This suggests moving beyond software-based behavioral analysis toward hardware-based verification that relies on cryptographic attestation rather than pattern matching.

Hardware-Based Human Verification

Passkeys

Passkeys are an excellent example of hardware-based human verification. A passkey is a cryptographic credential tied to a user device, used to authenticate securely without relying on passwords. When using a passkey, the device prompts the user for a verification step, such as entering a PIN or scanning a fingerprint, which provides a strong signal that a real human is present during authentication.
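
On the web, passkeys are exercised through the WebAuthn API. A minimal browser-side sketch of a passkey sign-in might look like the following; the challenge must be issued by the server, and the returned assertion has to be verified there against the user's registered public key.

```typescript
// Browser-side sketch of a passkey sign-in via WebAuthn. The challenge comes
// from the server, and the resulting assertion must be verified server-side.
async function signInWithPasskey(challenge: Uint8Array) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,                    // random bytes issued by the server
      userVerification: "required", // force PIN / Face ID / fingerprint,
                                    // i.e. the "human present" signal
      timeout: 60_000,
    },
  });
  // Send the signed assertion to the server, which checks the signature
  // against the public key registered for this user.
  return assertion;
}
```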

A popular implementation of this is Apple’s Secure Enclave, which you may have interacted with if you use Face ID or Touch ID. Secure Enclave is a physically isolated hardware component that securely stores and processes sensitive data using dedicated encryption keys [1]. Sensitive information like fingerprints, facial scans, and even non-biometric data like iMessage encryption keys and Apple Pay tokens are never accessible to macOS, iOS, or any applications running on the main operating system [1]. Data never leaves the Secure Enclave; instead, the system generates a hardware-signed attestation certifying that authentication was successful, without exposing the user’s information [1].
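
At the WebAuthn level, a relying party can at least request such an attestation statement when a passkey is registered. Whether the platform actually returns one, and in what format, is up to the platform (synced passkeys often return none), so treat this as a sketch of the mechanism, with placeholder rp and user values.

```typescript
// Browser-side sketch of registering a platform passkey and asking for an
// attestation statement. Whether the authenticator returns one (and in what
// format) varies by platform; the rp/user values below are placeholders.
async function registerPlatformPasskey(challenge: Uint8Array, userId: Uint8Array) {
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "example.com" },
      user: { id: userId, name: "alice", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // built-in authenticator (e.g. Touch ID)
        userVerification: "required",
      },
      attestation: "direct", // ask for a hardware-signed attestation statement
    },
  });
}
```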

While a determined attacker could theoretically compromise devices by physically disassembling MacBooks or building large-scale biometric forgeries, these attacks have a high cost and scale poorly. Whether or not asking for a fingerprint is user-friendly is up in the air (I think it is). macOS and iOS are uniquely suited for this because Apple controls the hardware, and for that same reason, Windows, Linux, and Android are uniquely bad for this. When a bad actor controls the hardware, they can set it up to provide false attestations on request, undermining the trust model. When the hardware can’t be trusted, hardware attestation doesn’t mean much.

Though Windows, Android, and (to a lesser extent) Linux support passkeys, they rely on hardware security components such as Trusted Platform Modules (TPMs) on Windows and Trusted Execution Environments (TEEs) or StrongBox Keymasters on Android. These components are designed to store cryptographic keys securely, protect user verification mechanisms such as PINs and biometric data, and resist tampering at the hardware level [2][3]. However, because these ecosystems allow installation on user-controlled hardware, an attacker with control of the device can often bypass these protections, for example by rooting Android devices, disabling Secure Boot on Windows, or flashing custom firmware. In contrast, Apple’s Secure Enclave is tightly integrated with both the hardware and operating system, making it significantly harder to spoof biometric authentication or falsify attestation at scale [1].

External Security Keys

An external security key is a (hopefully) small and portable hardware device that securely stores cryptographic keys and can perform authentication operations without exposing a user’s sensitive information to their computer or the internet. External security keys such as YubiKeys, SoloKeys, or Titan Keys maintain the benefits of other hardware attestation methods like passkeys, but with an important distinction: the cryptographic operations happen entirely on the external device rather than on the user’s computer or phone. This separation makes external security keys more resilient against device-level compromise than passkeys. However, it also shifts trust to the external key itself. Let’s hope YubiKey is not plotting to hack us all. Despite the benefits, there is a simple barrier: these keys cost money. Relying on a component that users have to buy separately makes this by far the least user-friendly of the methods discussed here.
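
On the web, external keys are driven through the same WebAuthn API as platform passkeys; roughly speaking, the only difference at registration time is steering credential creation toward a roaming authenticator, as in this small sketch (everything else mirrors the registration example above).

```typescript
// Sketch: the WebAuthn-level difference when targeting an external security
// key is the authenticator selection; the rest of the registration options
// match the platform-passkey sketch earlier in the post.
const authenticatorSelection: AuthenticatorSelectionCriteria = {
  authenticatorAttachment: "cross-platform", // USB/NFC/BLE key instead of built-in
  userVerification: "preferred",             // many keys use touch plus an optional PIN
};
```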

Multi-Factor Authentication (MFA)

Multi-Factor Authentication (MFA) introduces an additional layer of defense by requiring users to present two or more independent forms of verification. Typically, this includes something the user knows (like a password or PIN), something the user has (like a phone or hardware key), or something the user is (like a fingerprint or face scan). In the context of human verification, MFA is valuable because it raises the cost and complexity for automated agents to successfully impersonate users. A bot might be able to mimic browser behavior or fake a device fingerprint, but without access to a separate trusted factor, such as a biometric passkey, it cannot easily complete the authentication process.
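
For the "something the user has" factor, time-based one-time passwords (TOTP, RFC 6238) are what most authenticator apps implement. Here is a minimal Node sketch of checking a submitted code, assuming the shared secret is already available as raw bytes; it is an illustration, not a hardened implementation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Minimal TOTP (RFC 6238) check: 30-second steps, 6 digits, HMAC-SHA1.
// Assumes the shared secret is already raw bytes (base32 decoding of
// authenticator-app secrets is omitted for brevity).
function totp(secret: Buffer, atMs = Date.now()): string {
  const counter = Math.floor(atMs / 1000 / 30);   // 30-second time step
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;    // dynamic truncation (RFC 4226)
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

function verifyTotp(secret: Buffer, submitted: string): boolean {
  if (!/^\d{6}$/.test(submitted)) return false;
  // Accept the current step and one step either side to tolerate clock drift.
  return [-1, 0, 1].some((drift) => {
    const expected = totp(secret, Date.now() + drift * 30_000);
    return timingSafeEqual(Buffer.from(expected), Buffer.from(submitted));
  });
}
```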

However, MFA is not a perfect solution to the fundamental limits of software-based verification. Many MFA methods still rely on user-controlled devices (e.g., a phone receiving a push notification), which can themselves be compromised. Malware on the device, SIM swapping attacks, phishing of one-time codes, or push-notification bombing ("MFA fatigue" attacks) can bypass weaker implementations.

Hardware-backed MFA methods, such as using a passkey or external security key as a second factor, offer a more resilient approach. They tie verification to cryptographic operations performed on secure hardware, making it significantly harder for attackers to scale impersonation.

Ultimately, while MFA greatly improves resilience against basic and intermediate threats, the long-term challenge remains: strong verification increasingly requires anchoring trust in hardware, not just analyzing behavior or device context.

The End

What the “best” mechanism is largely depends on the context. If you are building human verification for checkout, being user-friendly and reducing friction is very important. However, if you have something that should be highly secured, using MFA would be worth the additional friction. At least amongst the methods described, there is a tradeoff between security and ease-of-use. A poor man’s CAP theorem.

References

[1] Apple Platform Security, Secure Enclave

[2] Microsoft Docs, Windows Hello for Business Overview

[3] Android Developers, Biometric Authentication Overview

[4] Google Cloud, Choose the appropriate reCAPTCHA key type

[5] Cloudflare, Announcing Turnstile, a user-friendly, privacy-preserving alternative to CAPTCHA

Searles, Andrew, et al. "An Empirical Study & Evaluation of Modern CAPTCHAs." 32nd USENIX Security Symposium (USENIX Security 23), 2023.