INESC TEC Science Bits – Episode 6
PODCAST INESC TEC Science Bits (27:23)
Ana Filipa Sequeira, Centre for Telecommunications and Multimedia
João Pinto, Centre for Telecommunications and Multimedia
Keywords: biometrics | biometric recognition | biometric traits | machine learning | computer vision | explainability of AI | AI bias and fairness
Biometric recognition: “who someone is” instead of “what someone owns or knows”
Biometric recognition is a way to authenticate a person based on anatomical, physiological or behavioural characteristics that are unique to each individual.
Biometrics is based on who “someone is” instead of relying on what “someone owns” – like a key or a card – or on what “someone knows” – like a pin or a password.
These last two types of authentication – commonly referred to as token-based and knowledge-based, respectively – face obvious problems: objects can be stolen or lost, and information can be forgotten or maliciously accessed. Biometric traits, in contrast, are intrinsic to the person, always present and, at least in theory, not easily modifiable.
Anatomical, physiological or behavioural traits: it all depends on the application settings
It all depends on the application settings, and on what kind of measurements are taken from the users, while ensuring comfort and usability. Face, fingerprint and iris are the most common biometric traits, but we are not limited to those.
AUTOmotive is an example of a project that combines different types of traits to identify people. The project focuses on acquiring ECG signals (physiological traits) from drivers at the steering wheel, and on capturing facial videos (anatomical traits). With these traits, acquired continuously, the biometric system is able to recognise the drivers while they keep on doing everything naturally, even without being aware that recognition is taking place.
In the early days, it was believed that iris and fingerprint patterns were permanent throughout a lifetime; nowadays there is a heated discussion about this, at least regarding long-term permanence. Despite this, fingerprint was the first trait to be used systematically in authentication systems, and it remains a favourite to this day due to its many advantages – one of which is performance. INESC TEC collaborated with the Portuguese Mint and Official Printing Office in the VCardID project and developed a national algorithm for biometric identification through fingerprints, which was incorporated in the current Portuguese citizen cards. This was a very successful project in the Biometrics field and a significant accomplishment for INESC TEC.
Can any human characteristic be used?
No. A biometric trait needs to comply with the following attributes: universality (each person must possess it); uniqueness (no two persons should share it); permanence (it should neither change nor be mutable); and measurability (it should be promptly presentable to a sensor and easily quantifiable). And a biometric system needs to take into account three additional aspects: performance (the system's accuracy – its ability to reject impostors and accept authorised individuals only); acceptability (how people react to a biometric system and their willingness to use it appropriately); and circumvention (the ease with which the system can be fooled using an artefact).
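The "performance" aspect above is commonly quantified through two error rates: the false acceptance rate (impostors wrongly accepted) and the false rejection rate (genuine users wrongly rejected). A minimal sketch of how these are computed from matcher similarity scores at a decision threshold – all scores below are invented example values, not real system data:

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """Compute two standard biometric performance metrics.

    FAR (False Acceptance Rate): fraction of impostor comparisons
    whose score reaches the threshold, i.e. wrongly accepted.
    FRR (False Rejection Rate): fraction of genuine comparisons
    whose score falls below the threshold, i.e. wrongly rejected.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Hypothetical similarity scores in [0, 1] produced by a matcher
impostors = [0.10, 0.25, 0.30, 0.55, 0.20]   # different-person comparisons
genuines  = [0.80, 0.90, 0.65, 0.45, 0.95]   # same-person comparisons

far, frr = far_frr(impostors, genuines, threshold=0.5)
print(f"FAR = {far:.2f}, FRR = {frr:.2f}")    # FAR = 0.20, FRR = 0.20
```

Raising the threshold trades false acceptances for false rejections, which is exactly the tension between security and usability mentioned throughout this episode.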
Anti-spoofing: how to avoid hackers’ attacks?
Although recognition systems have received huge investment and are used in many applications – from our handheld devices to access control in high-security facilities – the security aspects, and the countermeasures against attacks on the vulnerable points of these high-performing systems, were overlooked for a long time.
The first and most immediate vulnerability is the acquisition sensor, which is the target of the so-called "spoofing attacks". We've all heard the news about hackers fooling the fingerprint system of the Samsung Galaxy S10, or the iris recognition system of the Samsung Galaxy S8; or maybe even the cases detected in border control – less newsworthy, but, in my opinion, far more worrying for security at a global scale.
What failed here was precisely the application of anti-spoofing techniques, which should have been incorporated into these recognition systems. It was not due to a lack of research in the field, which is vast and prolific in results.
Why is this an open challenge? Because once we develop measures to detect attacks made with photos, attackers will use videos and masks; and once we develop sensors that detect that type of material – silicone, for example – I am sure that new types of attack will appear.
“Good scientists should not let their fear of evil prevent them from doing science”
Almost everything can be used for good and for evil; the world will keep on turning. As scientists, we shouldn't stop doing science just because we are afraid of the consequences – if we do, someone else will do it anyway.
The "bad" part is even more obvious when we talk about soft biometrics: the recognition of characteristics that are not unique enough to identify a specific person, but are enough to identify certain groups or states of people – like gender or emotion recognition. These are full of subjectivity and very easily biased, because there is a lot of variability within those groups, and the labels depend on the people who annotated the data.
One of the best-known examples of this is the "Beauty AI contest", where people from all over the world submitted pictures to be evaluated by "robots" – in fact, a machine-learning algorithm. However, looking at the leader board, or "beauty ranking", we observe that, across the five categories for each gender, there was a single black person and only a few Asian people.
All of this led the promoters to cancel the third edition of the competition and to reconsider the whole idea behind it. Perhaps the algorithms were relying on beauty characteristics that are very specific to white people. This is the issue with soft biometrics and the subjectivity associated with the labels.
Can one consider the algorithm as racist?
The real issues in current algorithms that exhibit biased behaviour have much more to do with bias in the data than with the methods per se. In other words, the models learn from biased data – for example, data with fewer pictures of people with dark skin, or of Asian people. Besides that, African and Asian people tend to have very dark irises, which can itself lead to a "biometric problem". In this sense, if the model learns from biased data, or if the tested sample is compared against a biased database, this will lead to biased results – usually higher false positive rates.
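One practical way to surface the kind of bias described above is simply to break the false positives down by demographic group and compare the rates. A minimal sketch with invented comparison outcomes and hypothetical group labels "A" and "B":

```python
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, is_impostor_pair, system_accepted).

    Returns, per group, the fraction of impostor comparisons that the
    system wrongly accepted (its false match rate for that group).
    """
    attempts = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_impostor, accepted in records:
        if is_impostor:                 # only impostor pairs can yield false matches
            attempts[group] += 1
            if accepted:
                false_matches[group] += 1
    return {g: false_matches[g] / attempts[g] for g in attempts}

# Invented outcomes: each tuple is one comparison made by the system
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", True, False),
    ("B", True, True),  ("B", True, True),  ("B", True, False), ("B", True, False),
]
rates = false_match_rate_by_group(records)
print(rates)   # group B's impostor pairs are accepted twice as often as group A's
```

A gap between the groups' rates, as in this toy data, is the "higher false positives" signal mentioned above – evidence that the training data or reference database under-represents one group.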
It is not correct to assume that biometrics is bad and that there is a concerted effort to harm certain groups of people. That would be like saying that a finance algorithm for granting loans is bad because it discriminates between people according to their income.
Race, ethnicity and skin colour play a huge role in face biometrics, just like income does when predicting whether a person will be able to pay back a loan. Algorithms, like people, resort to these essential aspects to differentiate individuals. If we try to describe the differences between two faces without using concepts like race, ethnicity, skin colour, gender, eye colour, or other sources of discrimination, we quickly understand that it would be impossible to have an effective biometric algorithm without considering them.
IWBF 2020 – International Workshop on Biometrics and Forensics
We are proud to say that more than 30 researchers, from Porto and 15 other locations across Europe, India, the U.S.A. and Hong Kong, participated in IWBF2020, via the Zoom platform.
We co-organised it with the Norwegian Biometrics Lab of NTNU, in Norway, and the European Association for Biometrics. The IEEE Biometrics Council and the International Association for Pattern Recognition (IAPR) co-sponsored the conference.
IWBF2020 included 27 accepted papers, after a double-blind review process managed by our three amazing Programme Chairs: Andreas Uhl, Hugo Proença and Lena Klasén. In addition, the conference proceedings are already available in IEEE Xplore.
The online event comprised many interesting sessions, but I’d like to highlight the two keynote talks, one by Prof. Peter Eisert on “Explainable AI for Face Morphing Attack Detection” and the second by Prof. Zeno Geradts on “Forensic Aspects and The Analysis of Deepfake Videos”.
What is next?
The biometrics community is still too focused on a few major biometric traits. I think this is not good. Just like entrepreneurs say: “never put all your eggs in one basket”. Every trait has its advantage and some situations call for specific traits. Biometrics has a huge potential to make our lives easier, to make our computers recognise us and respond to our moods, to make our cars safer, etc. If we want to bring biometrics to improve every aspect of our lives, we shouldn’t just stick to what already works, we should venture into exotic traits in unexpected and challenging scenarios.