I’ve been in the information security game for 25 years, and the one thing I’ve learnt is that while I might care about security, privacy and identity, the average person can't be bothered. In fact it gets worse: today the craving for a frictionless user experience trumps almost everything else.
What brought this home to me was a presentation at this year's USENIX Enigma 2018 security conference in California, where Google software engineer Grzegorz Milka revealed (presentation link) that, as of today, less than 10 per cent of active Google accounts use two-step authentication to lock down their services.
This free two-step authentication service was introduced over seven years ago (September 2010), initially for Gmail accounts, but its take-up by Joe Public has been negligible.
Last summer I needed to contact Amazon after some fraudulent activity on my account. Their only advice was to change my password (which I had already done, as well as deleting all credit cards from the account), and when I asked about two-factor authentication their support line denied it existed.
However, while checking my Amazon account settings in case anything had been changed, I stumbled across it – and guess what: it can leverage my existing Google two-step authentication service. [Your Account › Login & security › Advanced Security Settings, if you are interested – it also supports the Microsoft authenticator.]
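The reason one authenticator can serve Google, Amazon and Microsoft accounts alike is that they all support the same open TOTP standard (RFC 6238): the site and the app share a secret at enrolment, and each then independently derives the same six-digit code from that secret and the current time. For the curious, the derivation is simple enough to sketch with the Python standard library (the shared secret below is a made-up placeholder, not a real one):

```python
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238 / RFC 4226)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # 30-second time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()     # HOTP inner HMAC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative placeholder secret -- a real one is issued by the site at enrolment.
print(totp("JBSWY3DPEHPK3PXP"))
```

Both sides compute the same code independently, which is why the second factor keeps working even when the phone is offline.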
As we design Identity 3.0, the next generation of digital identity, the challenge has been “how do we make it simple?”
But I think, based on everything I have learnt to date, that we need to go significantly further than this if it is to be universally adopted, and add to our design criteria:
- How do we make it the simplest, most friction-less, option?
- How do we make security, privacy and primacy near-invisible?
- How do we make it the default?
Because only then will we get the other 90% to adopt a security- and privacy-enhancing approach and start to beat the bad guys.
Paul Simmonds, CEO Global Identity Foundation, January 2018
The UK Government has been promising a biometrics strategy since 2012, but it has been repeatedly delayed and is now due to be published sometime in 2018.
The chairman of the Commons Science and Technology Committee, Norman Lamb, has written (see link below) expressing his disappointment in the government's position and asking for more clarity on the delay.
"I remain concerned about how the Review will be implemented as well as uncertainty on the government's position on other important areas – DNA, fingerprints and so on,"
www.parliament.uk/documents/commons-committees/science-technology/Correspondence/171219-Letter-to-Baroness-Williams-Biometrics-strategy.pdf
Biometrics are seen by many (alongside blockchain) as "the holy grail" of identity, but they are in fact a potential nightmare!
Some of the problems are as follows:
- Who do you trust with YOUR biometrics?
We assume the fingerprint on our phone is secured "on device", never backed up, and that the NSA (or other spooks) never has access; but how does Joe Public know this, let alone verify it? But at least the phone is a device that you (sort of) own and control.
For everything else, the question you should be asking is "where is your biometric held and processed?" It is usually impossible to find this out, let alone verify it – even if Joe Public knew or understood the nasty questions to ask.
- We've already seen the first breaches of biometric data:
Biometric data stolen from corporate lunch rooms system
https://www.theregister.co.uk/2017/07/10/malware_scum_snack_on_lunchroom_kiosks/
UIDAI says Aadhaar system secure, refutes reports of biometric data breach
http://smartinvestor.business-standard.com/pf/pfnews-479889-pfnewsdet-UIDAI_says_Aadhaar_system_secure_refutes_reports_of_biometric_data_breach.htm
OPM Now Admits 5.6m Feds’ Fingerprints Were Stolen By Hackers
https://www.wired.com/2015/09/opm-now-admits-5-6m-feds-fingerprints-stolen-hackers/
- Processing and replay attacks;
When you place your finger on a sensor or have your photo taken by (for example) a door-entry system, where is this processed?
- Is it on a secure chip?
- Is it passed as a raw data capture over the wire or is there some form of encoding performed?
- Can what is passed over the wire be replayed?
- Is it processed on a central PC or server controlling the doors, or possibly multiple doors?
- Or (because it is faster/cheaper) processed on a cloud system?
- Or maybe it goes off to an army of people in China or India (because people are better at facial-recognition matching) who get the two photos to compare and then click "match" or "no-match"?
Bottom line: even if you knew to ask, there is no easy way of knowing how the system is architected, or how your data is being handled. Lifting fingerprints off a wine glass to gain entry is the stuff of "Mission Impossible" and films of a similar genre, but how many of these solutions are vulnerable to such an attack, or liable to a replay attack?
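There is no single fix a buyer can verify from the outside, but a common mitigation for the replay part is to have the reader bind each capture to a fresh, single-use challenge from the door controller, so that a recorded transmission is worthless the second time. A minimal sketch of the idea in Python, assuming a shared key between sensor and controller (the names and the in-memory nonce store are illustrative, not taken from any particular product):

```python
import hashlib, hmac, secrets

SENSOR_KEY = secrets.token_bytes(32)   # provisioned into the sensor at install time (illustrative)
_issued_nonces = set()                 # controller-side record of outstanding challenges

def issue_challenge() -> bytes:
    """Controller: issue a fresh single-use nonce for the next capture."""
    nonce = secrets.token_bytes(16)
    _issued_nonces.add(nonce)
    return nonce

def sensor_respond(nonce: bytes, template_bytes: bytes) -> bytes:
    """Sensor: bind the capture to the challenge so the message can't be replayed."""
    return hmac.new(SENSOR_KEY, nonce + template_bytes, hashlib.sha256).digest()

def controller_verify(nonce: bytes, template_bytes: bytes, tag: bytes) -> bool:
    """Controller: accept each nonce exactly once, and only with a valid tag."""
    if nonce not in _issued_nonces:
        return False                   # unknown or already-used challenge: treat as a replay
    _issued_nonces.discard(nonce)
    expected = hmac.new(SENSOR_KEY, nonce + template_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# A captured (nonce, template, tag) triple verifies once...
n = issue_challenge()
tag = sensor_respond(n, b"fingerprint-template")
print(controller_verify(n, b"fingerprint-template", tag))   # True
# ...but replaying the identical message is rejected.
print(controller_verify(n, b"fingerprint-template", tag))   # False
```

Of course, the person presenting their finger has no way of knowing whether anything of the sort is actually deployed.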
- The locus-of-control problem (Jericho Forum Commandment #8, see References);
Biometric identity systems turn a variable ("maybe Person") into a binary ("IS Person"), according to criteria undisclosed to the person taking the risk, and probably de-tuned to ensure minimal false positives.
Thus, in a global identity ecosystem, if I get an "IS DEFINITELY Person" from a Kazakhstan bank's system, do I trust it? Probably not. Thus:
The only solution is that you have to enrol your biometric with MY system, which I control and whose risk I therefore understand – back to the locus-of-control problem!
- The authentication ennoblement problem;
Biometrics are often perceived as, or sold as, a more secure method of authentication, quite possibly because they are usually more expensive and difficult to implement.
More worryingly, biometric authentication is usually implemented using a system that is in effect a black box, with the person taking the risk having no knowledge of how that binary "passed authentication" is actually arrived at, how many attempts it took, or the probability of the match.
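In other words, somewhere inside the black box a continuous similarity score is being compared against a vendor-chosen threshold, and only the resulting yes/no ever reaches the relying party. A toy illustration of how much information gets thrown away (the scores, threshold and attempt count are all invented for the example):

```python
# Toy illustration: the matcher works with similarity scores, but the relying
# party only ever receives the final boolean -- not the scores, the threshold,
# or how many attempts it took. All numbers here are invented.
MATCH_THRESHOLD = 0.80   # chosen by the vendor, never disclosed to the relying party

def black_box_authenticate(similarity_scores: list) -> bool:
    """Return the single bit the relying party gets to see."""
    return any(score >= MATCH_THRESHOLD for score in similarity_scores)

attempts = [0.41, 0.62, 0.81]                 # three tries, the last barely over the line
print(black_box_authenticate(attempts))       # True -- "IS Person", nothing else disclosed
```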
- The access-creep problem;
Try this simple experiment: register your fingerprint as (say) #6 on your friend's or partner's phone (with their permission), then set them up for e-banking and get them to enable fingerprint authentication. Now simply access their e-banking app using your fingerprint!
Yes, it's trivial, but it is access-creep: my partner was happy for me to be able to unlock their phone (useful for me to look at traffic if they are driving, or to make a phone call), but they never understood this would also give me full access to their bank account.
The bank (or whoever) just sees fingerprint authentication as a "stronger" authentication method than PIN and applies, possibly incorrectly, a higher level of trust. At least with PIN codes I can have a four-digit PIN for my phone unlock and an (enforced) six-digit, and thus different, PIN for my e-banking.
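The root cause is that on-device biometric APIs typically answer "did any enrolled finger match?" rather than "did the account holder match?", and the app only ever sees a pass/fail. A hypothetical sketch of that shape (the class and method names are invented to illustrate the point, not taken from any real SDK):

```python
# Hypothetical sketch: names are invented to illustrate the shape of a typical
# on-device biometric API, not taken from any real SDK.
class DeviceFingerprintSensor:
    def __init__(self, enrolled_templates: dict):
        # Slot #1 might be the owner, slot #6 a partner -- the sensor doesn't care.
        self.enrolled_templates = enrolled_templates

    def authenticate(self, presented: bytes) -> bool:
        """True if ANY enrolled finger matches; the caller never learns which one."""
        return presented in self.enrolled_templates.values()

class BankingApp:
    def __init__(self, sensor: DeviceFingerprintSensor):
        self.sensor = sensor

    def login(self, presented: bytes) -> str:
        # The app treats the device's bare yes/no as "the account holder is present".
        return "full access granted" if self.sensor.authenticate(presented) else "denied"

sensor = DeviceFingerprintSensor({"owner": b"owner-print", "slot-6-partner": b"partner-print"})
app = BankingApp(sensor)
print(app.login(b"partner-print"))   # "full access granted" -- access-creep in one line
```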
The conclusion we came to, when looking at how a global identity ecosystem should be designed, is that YOUR biometric should only ever be held in a dedicated device, under your control, which only releases a crypto-proof of "sameness".
Ultimately, biometrics will only be truly usable if the entity taking the risk is able to evaluate the complete risk-chain: from how the human authenticates to the device, through all the attributes of the components in between.
If you couple this with a device able to assert both "model" and "chain of custody" (and potentially "duress"), then you have a robust model in which the entity to whom you are asserting sameness can make a risk-based decision, because THEY understand the full risk-chain.
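One way to picture that crypto-proof of "sameness" is a personal device that keeps the biometric template entirely on board, performs the match locally, and only releases a signed assertion that the same person who enrolled is present, together with claims about the device itself. A rough sketch under those assumptions, using the third-party Python `cryptography` package for an Ed25519 device key (the assertion fields are illustrative, not a defined standard):

```python
# Rough sketch: the biometric never leaves the device; only a signed "sameness"
# assertion does. Assertion fields are illustrative, not a defined standard.
import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class PersonalIdentityDevice:
    def __init__(self, model: str):
        self._device_key = Ed25519PrivateKey.generate()    # never leaves the device
        self._enrolled_template = None                     # stays on-device, never released
        self.model = model
        self.public_key = self._device_key.public_key()    # shared with relying parties

    def enrol(self, template: bytes) -> None:
        self._enrolled_template = template

    def assert_sameness(self, live_capture: bytes, relying_party_nonce: bytes):
        """Match locally; on success release only a signed assertion, never the biometric."""
        if live_capture != self._enrolled_template:         # stand-in for a real matcher
            return None
        assertion = json.dumps({
            "claim": "same person as enrolled",
            "device_model": self.model,                      # lets the verifier judge the device
            "nonce": relying_party_nonce.hex(),              # binds the assertion to this request
            "issued_at": int(time.time()),
        }).encode()
        return assertion, self._device_key.sign(assertion)

# Relying party: verify the assertion against the device's known public key.
device = PersonalIdentityDevice(model="example-token-v1")
device.enrol(b"fingerprint-template")
assertion, signature = device.assert_sameness(b"fingerprint-template", b"\x01\x02")
try:
    device.public_key.verify(signature, assertion)
    print("verified:", assertion.decode())
except InvalidSignature:
    print("rejected")
```

The relying party never handles the biometric at all; it evaluates the signature, the device model and the freshness of the nonce, which is exactly the risk-chain information described above.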
References:
- Jericho Forum Commandment #8 – “Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control”
https://collaboration.opengroup.org/jericho/commandments_v1.2.pdf
Paul Simmonds, CEO Global Identity Foundation, December 2017