By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.
Motherboard ran a piece last week that suggests the biometric identification fairy’s days may be numbered – at least in her ability to protect smartphones.
The original paper announcing the research results is posted to arXiv.
According to Motherboard:
AI can generate fake fingerprints that work as master keys for smartphones that use biometric sensors. According to the researchers that developed the technique, the attack can be launched against individuals with “some probability of success.”
In most cases, spoofing biometric IDs requires making a fake or replica that matches an existing individual. In a paper posted to arXiv earlier this month, however, researchers from New York University and the University of Michigan detailed how they trained a machine learning algorithm to generate fake fingerprints that can serve as a match for a “large number” of real fingerprints stored in databases.
I won’t discuss the technical details here, and instead refer interested readers to the complete paper linked to above.
Although the paper focused on smartphone fingerprint sensors, I note that its authors think their method has broader applicability (from the abstract):
Recent research has demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on MasterPrints. MasterPrints are real or synthetic fingerprints that can fortuitously match with a large number of fingerprints thereby undermining the security afforded by fingerprint systems….
The underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis [my emphasis].
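To make the “dictionary attack” idea in the abstract concrete, here is a toy sketch of my own devising – emphatically not the researchers’ actual method, which uses a generative adversarial network trained on real fingerprint images. The sketch models each enrolled print as a random binary feature vector and hunts for one candidate that fortuitously matches many of them under a loose threshold:

```python
import random

random.seed(0)

N_FEATURES = 12      # toy: a real minutiae template has far more structure
MATCH_THRESHOLD = 9  # features that must agree to count as a "match"
N_ENROLLED = 1000
N_CANDIDATES = 500

def random_print():
    return [random.randint(0, 1) for _ in range(N_FEATURES)]

def matches(a, b):
    return sum(x == y for x, y in zip(a, b)) >= MATCH_THRESHOLD

enrolled = [random_print() for _ in range(N_ENROLLED)]

# A MasterPrint-style dictionary attack hunts for the one candidate print
# that fortuitously matches the largest number of enrolled templates.
best = max((random_print() for _ in range(N_CANDIDATES)),
           key=lambda c: sum(matches(c, e) for e in enrolled))
hits = sum(matches(best, e) for e in enrolled)
print(f"best candidate matches {hits} of {N_ENROLLED} enrolled templates")
```

The point of the toy: when the matcher is lenient, a single “master key” can unlock a surprising fraction of templates – the GAN in the paper simply searches that space far more cleverly than brute force.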
This Is Bad News for the Biometric ID Fairy
Now, Motherboard has a touchingly optimistic faith in the biometric ID fairy’s capabilities (even while appreciating that researchers have shown it is possible to spoof these systems).
Biometric IDs seem to be about as close to a perfect identification system as you can get. These types of IDs are based on the unique physical traits of individuals, such as fingerprints and irises. In recent years, however, security researchers have demonstrated that it is possible to fool many, if not most, forms of biometric identification.
A couple of problems here, which I discussed at much greater length in this post from last year, Biometric ID Fairy: A Misguided Response to the Equifax Mess that Will Only Enrich Cybersecurity Grifters and Strengthen the Surveillance State. First, unlike a password or number, biometrics cannot be changed. So if your biometric ID is hacked, you cannot replace your fingers or get a new eyeball.
Second is the confusion between identification and authentication – which, in that post, our own Richard Smith described as a “big unsolved problem” until we figure out how to build hackproof systems. (In the interests of keeping this post short, I won’t repeat those arguments here, as they’re not central to this discussion. I refer interested readers to that prior post.)
This paper focused on smartphones only – but the researchers suggest their research has broader applicability.
Does this mean that bands of marauding mischief-makers will overnight find ways to breach existing security systems?
Four points here. First, the current state of research is at the proof-of-concept stage, rather than on the verge of producing systems that can weaponize the concept. Or at least so it seems, based on publicly reported research – who knows what the criminal masterminds are up to in their secret lairs?
Second, for ergonomic reasons, smartphone fingerprint sensors are small, and capture only partial images of a user’s fingerprint. According to the paper (p.1):
Since small portions of a fingerprint are not as distinctive as the full fingerprint, the chances of a partial fingerprint (from one finger) being incorrectly matched with another partial fingerprint (from a different finger) are higher.
Meaning that it may be much more difficult to achieve similar results when matching full fingerprints.
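The intuition can be illustrated with a back-of-the-envelope simulation (again a toy model with random binary “features,” nothing like a real minutiae matcher): the fewer features two unrelated prints must agree on, the more often they collide by chance.

```python
import random

random.seed(1)

def false_match_rate(n_features, agree_frac=0.8, trials=20000):
    """Toy estimate: how often do two *unrelated* random feature
    vectors of a given length agree on >= agree_frac of features?"""
    need = int(n_features * agree_frac)
    hits = 0
    for _ in range(trials):
        a = random.getrandbits(n_features)
        b = random.getrandbits(n_features)
        agree = n_features - bin(a ^ b).count("1")  # matching bits
        if agree >= need:
            hits += 1
    return hits / trials

print("partial print (16 features):", false_match_rate(16))
print("full print    (64 features):", false_match_rate(64))
```

The short vector (standing in for a partial print) produces chance matches orders of magnitude more often than the long one – which is exactly why a sensor that sees only a sliver of your fingertip is the softer target.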
Third, hardware, algorithms, and more rigorous engineering all matter here, and all cost money, wrote a security expert I consulted. (Unfortunately, having neglected to secure permission to use this person’s name, I’ve elected not to attribute these thoughts directly, and trust I will be forgiven for shamelessly cribbing some of the following ideas – sometimes in the same words in which they were conveyed to me.) In the design of the most secure systems, groups with resources tend to build layered defenses that leverage multiple factors: something you know – a password; something you own – an ID card; something you are – a biometric. On top of that, such systems also incorporate additional controls (e.g., man traps, armed guards, CCTV). The more of these elements a system incorporates, the more secure it will be – although the cost of creating and maintaining such a level of security is significant.
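The layered, multi-factor idea my expert describes reduces to a simple conjunction: every independent factor must pass. A minimal sketch, with all names illustrative and no real API implied:

```python
# Toy multi-factor check: each independent factor must pass.
# A spoofed fingerprint alone gets an attacker nothing if the
# other factors (knowledge, possession) still hold.
def authenticate(knows_password, has_token, biometric_score,
                 biometric_threshold=0.9):
    factors = [
        knows_password,                          # something you know
        has_token,                               # something you own
        biometric_score >= biometric_threshold,  # something you are
    ]
    return all(factors)

print(authenticate(True, True, 0.95))   # True
print(authenticate(True, True, 0.60))   # False: biometric factor fails
print(authenticate(False, True, 0.95))  # False: password factor fails
```

The design point: defeating such a stack requires compromising every layer at once, which is why well-resourced installations bother with the expense.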
Fourth, when designing the ID system itself, there is a trade-off between systems that produce too many false positives – e.g., allowing you or me to pass as Donald Trump and enter the White House situation room (assuming we could get the hair and mannerisms right) – and too many false negatives – e.g., preventing Donald Trump from entering the room. The more secure systems skew toward minimizing false positives, at the expense of allowing more false negatives. This is how it should be. In these more secure systems, biometrics would be only one part of the security protocol, and there are presumably real live humans involved who can override or reboot the protocol – or use a back-up procedure – if a problem were to arise.
By contrast, smartphones err toward the end of the fingerprint ID spectrum that minimizes false negatives. They are presumably engineered to allow users who’ve had a night on the town to access their phones – no matter the deterioration in fine motor skills that might have resulted from said night on the town. Plus, they’re designed to work even if the fingertip presented is in less than optimal condition: marred, say, by the remnants of ribs, fish and chips, kebabs, or the vegan equivalent. Even with one’s finger in such a state, users still expect to be able to use their smartphones.
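In biometric jargon, this trade-off is a matter of where the matcher sets its decision threshold between the false accept rate (FAR) and the false reject rate (FRR). A minimal illustration, using made-up score distributions rather than real sensor data:

```python
import random

random.seed(2)

# Toy similarity scores: genuine attempts score high, impostors low,
# with overlap - exactly the region where the threshold choice matters.
genuine  = [random.gauss(0.75, 0.10) for _ in range(5000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(5000)]

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

for t in (0.50, 0.60, 0.70):
    far, frr = rates(t)
    print(f"threshold {t:.2f}: FAR {far:.3f}  FRR {frr:.3f}")
```

A secure facility picks a high threshold (few false accepts, more annoyed legitimate users); a smartphone picks a low one so the greasy-fingered owner still gets in – and that is precisely the lenience a MasterPrint-style attack exploits.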
All I’m suggesting here is that if the concept is proven using smartphone fingerprint sensors, it may not necessarily work with more sophisticated sensors – and further research would be necessary.
Also, permit me another aside. At present, who would bother creating such a hacking system? It might just be easier either to use tried and tested stick-’em-up methods – “Insert your finger, or grandma gets it” – or simple state coercion – “Insert your finger, and I’ll clear you through passport control” – rather than employing the engineered fruits that may follow from the research discussed here.
The Bottom Line
Research like this is yet another reason the biometric ID fairy can’t solve ID theft and fraud problems. It also suggests hackers might exploit this vulnerability of smartphone fingerprint sensors – perhaps sooner rather than later – and compromise your device. But I suspect, dear readers, most of you already know that.