1
Biometrics and Applications

Christophe CHARRIER1, Christophe ROSENBERGER1 and Amine NAIT-ALI2

1GREYC, Normandy University, University of Caen, ENSICAEN, CNRS, France

2LISSI, University of Paris-Est Créteil Val de Marne, France

Biometrics is a technology that is now common in our daily lives. It is notably used to secure access to smartphones or computers. This chapter aims to provide readers with an overview of this technology, its history and the solutions provided by research on societal and scientific issues.

1.1. Introduction

There are three generic ways to verify or determine an individual’s identity: (1) what we know (PIN, password, etc.); (2) what we have (badge, smart card, etc.); and (3) what we are (fingerprint, face, etc.) or what we know how to do (keystroke dynamics, gait, etc.). Biometrics is concerned with this last set of approaches. Biometrics, and more precisely security biometrics, consists of verifying or determining the identity of an individual based on their morphological characteristics (such as fingerprints), behavioral characteristics (such as voice) or biological characteristics (such as DNA).

The biometric features by which an individual’s identity can be verified are called biometric modalities. Examples of some biometric modalities are shown in Figure 1.1. These modalities are based on the analysis of individual data, and are generally grouped into three categories: biological, behavioral and morphological biometrics. Biological biometrics is based on the analysis of biological data related to the individual (saliva, DNA, etc.). Behavioral biometrics concerns the analysis of an individual’s behavior (gait, keyboard dynamics, etc.). Morphological biometrics relates to particular physical traits that are permanent and unique to any individual (fingerprints, face, etc.).


Figure 1.1. Examples of biometric modalities used to verify or determine the identity of an individual

Nowadays, the use of facial or fingerprint recognition has come to feel natural to many people, notably among the younger generations. Biometric technology is part of our everyday lives (used for border control, smartphones, e-payment, etc.). Figure 1.2 shows the spectacular evolution and market prospects of this technology. In an increasingly digital world, biometrics can be used to verify the identity of an individual using a digital service (social network or e-commerce). While fingerprints and facial or iris recognition are among the most well-known biometric modalities (notably due to their use in television series or movies), a very wide range of biometric data can be captured from an individual’s body or from digital traces. An individual can be recognized in the physical and digital worlds using information from both spheres.

The use of this technology raises a number of questions: how new is this technology? How does a biometric system work? What are the main areas of current and future research? These questions will be addressed in the three main sections of this chapter: the history of biometrics (section 1.2), the technological foundations of biometrics (section 1.3) and the scientific issues and perspectives (section 1.4).


Figure 1.2. Evolution and perspectives of the biometrics market (source: Biometric System Market, October 2019)

1.2. History of biometrics

Biometrics may be as old as humanity itself. In essence, biometrics relates to a measurement that can be performed on living things, and in a security context, it refers to the recognition of individuals by their physical and/or behavioral characteristics. This property of recognition is primarily human based, and not dependent on technology. As humans, we recognize one another through aspects such as facial features, hands or gait; the human brain has the capacity to distinguish, compare and, consequently, recognize individuals. In reality, biometrics – as we now understand it – is simply a technological replication of what the human brain can do. Key aims include speed, reproducibility, precision and memorization of information for populations of theoretically infinite size (Nait-Ali and Fournier 2012).

From the literature, we find that biometrics began to be conceptualized several centuries BC, notably in the Babylonian civilization, where clay tablets used for trading purposes have been found to contain fingerprints. Similarly, fingerprinted seals appear to have been used in ancient China and ancient Egypt. It was not until the 14th century, however, that a Persian book, entitled Jaamehol-Tawarikh, mentioned the use of fingerprints for individual identification. Later publications concerning the fingerprint and its characteristics include the work of N. Grew (1684), M. Malpighi (1686), and a book published in 1788, in which the anatomist J. Mayer highlighted the unique nature of papillary traces.

It was only during the industrial revolution, notably in the mid-19th century, that the ability to clearly identify individuals became crucial, particularly due to an intensification of population mobility as a result of the development of commercial exchanges. The first true identification procedures were established in 1858, when William Herschel (working for the Indian Civil Service at the time) first used and included palm prints, then fingerprints, in the administrative files of employees (see Figure 1.3). Later, several medical scientists, anthropologists and statisticians, including Henry Faulds, Francis Galton and Juan Vucetich, developed their own studies of fingerprints. Vucetich was even responsible for the first instance of criminal identification using this technique, which took place in Argentina in 1892 (the Francisca Rojas case).


Figure 1.3. a) William James Herschel (1833–1917), and b) example of palm and finger prints (source: public domain)

A further turning point in biometrics occurred in the 1870s when Alphonse Bertillon, a French police officer, began to implement anthropometric techniques which came to be known as the Bertillon System, or “bertillonnage”. Broadly speaking, this involved taking multiple measurements of the human body, including the face and hands. By combining these measurements with a photograph of the person and other physical descriptions (see Figure 1.4), Bertillon developed files which could be used to identify criminals and delinquents, even if they were disguised or using a false identity (see Figure 1.5). The first criminal identification using this technique in France occurred in 1902: Henri Léon Scheffer was identified by matching fingerprints taken from a crime scene with the information on his anthropometric file. At this time, the Bertillon system was used to a greater or lesser extent in many countries around the world.

Some 30 years later (1936), an ophthalmologist, Frank Burch, introduced the concept of identifying individuals by iris characteristics, although Burch did not develop this idea into an identification system. Biometrics as we now understand it began to take shape in the 1960s, drawing on technological advances in electronics, computing and data processing. The first semi-automatic facial recognition system was developed by the American Woodrow W. Bledsoe (Bledsoe and Chan 1965). The system consisted of manually recording the coordinates of characteristic points of the face from a photograph; these coordinates were then stored in a database and processed by computer, calculating distances with respect to reference points. In the same period, the first model of the acoustic speech signal was proposed by Gunnar Fant, in Sweden, laying the foundations for speech recognition. The first automatic biometric systems began to appear in the 1970s. Notable examples include a system for recognizing individuals by hand shape (1974), a system for extracting minutiae from fingerprints (FBI, 1975), a facial recognition system (Texas Instruments, 1976) and a patent for a system for extracting signature characteristics for individual verification (1977). Later milestones include a patent for an individual verification system using 3D features of the hand (David Sidlauskas, 1985), a patent for the concept of recognizing individuals by the vascular network features at the back of the eye (Joseph Rice, 1995) and a patent for the concept of identifying individuals by characteristics of the iris (Leonard Flom and Aran Safir, 1986); the algorithm for this final system was later patented by John Daugman in 1994.


Figure 1.4. Plate taken from the Identification Anthropométrique journal (1893). a) Criminal types. b) Anthropometric file


Figure 1.5. Example of an anthropometric file using the Bertillon system (source: public domain)

The 1980s–1990s also saw an upsurge in activity with respect to facial recognition, notably with the application of principal component analysis (PCA) techniques by Kirby and Sirovich in 1988 (Kirby and Sirovich 1990), then the introduction of Eigenfaces by Turk and Pentland (1991). Turk and Pentland’s paper was well received by the biometrics community, and has been cited over 18,500 times at the time of writing (2020). The authors demonstrated facial recognition using a limited number of parameters (compared to the number of pixels in a digital image), permitting the use of real-time applications. The performance of this method was quickly surpassed in the 2000s by a wide range of new data-processing approaches, aided by developments in computer science and electronics, which have been an accelerating factor in the design of biometric systems. Following on from early uses for security projects, including industrial, military and governmental applications, biometrics has gradually gained ground in the field of commercial products and services. For example, fingerprint authentication (e.g. Touch ID) was first integrated into smartphones in 2013, followed by facial recognition (e.g. Face ID) in 2017. Research and development in this area is currently booming, and biometrics research, applications and modalities continue to expand at a rapid pace. The socioeconomic implications of the technology are likely to prove decisive in the coming decades; the story of biometrics is far from over.

1.3. The foundations of biometrics

In this section, we shall present key foundational elements involved in biometrics and highlight the scientific issues at play in this domain.

1.3.1. Uses of biometrics

Before going into detail concerning the operation of biometrics, it is interesting to consider its applications. The first objective of biometrics is identity verification, that is, to provide proof to corroborate an assertion of the type “I am Mr X”. A facial photograph or fingerprint acts in a similar way to a password; the system compares the image with a pre-recorded reference to ensure that the user is who they claim to be. The second application of biometrics concerns the identification of individuals in cases where their collaboration is not generally required (e.g. facial recognition based on video surveillance footage). Finally, biometrics is often used to secure access to places or tools (premises, smartphones and computers), for border control (automated border crossing systems), by police services (identity control) or for payment security (notably on smartphones), as shown in Figure 1.6.


Figure 1.6. Some applications of biometrics (physical access control, social networks)

1.3.2. Definitions

In order to recognize or identify an individual k, reference information Rk must be collected for the individual during an initial enrollment phase. During the authentication/identification phase, a new sample is captured, denoted as E. A biometric system will compare sample E to Rk in an attempt to authenticate an individual k, or to multiple references in a biometric database in cases of identification. A decision is then made (is this the right person?) by comparing the comparison score (in this case, taken as a distance) to a pre-defined threshold T:

d(E, Rk) ≤ T ⇒ accept (E and Rk are considered to belong to the same person); d(E, Rk) > T ⇒ reject,

where d(E, Rk) denotes the comparison score (here, a distance) between the sample E and the reference Rk.

The threshold T is defined by the application. In the case of distance, the lower the threshold, the stricter the system is, because it requires a small distance between the sample and the individual’s reference as proof of identity. A strict (high security) threshold will result in false rejections of legitimate users (measured by the FRR, false rejection rate). A looser threshold will result in an increased possibility of imposture (measured by the FAR, false acceptance rate). To set the threshold T for a given application, we consider the maximum permissible FAR for the system; the FRR results from this choice. As an example, consider a high security setting with an acceptable FAR of one in a million attempts. In this context, we expect an FRR of less than 2%. The equal error rate (EER) is the error obtained when the threshold is set so that the FRR is equal to the FAR. The EER is often used as an indicator of the performance of a biometric system, although using the associated threshold to parameterize a system is not of any particular practical use; it is simply easier to understand the performance of a system on the basis of a single EER value.
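
To make the decision rule concrete, the following minimal Python sketch applies it to toy feature vectors. Everything here is an illustrative assumption (the five-dimensional templates, the Euclidean distance and the threshold values); it is not tied to any particular biometric modality or commercial system.

```python
import numpy as np

def verify(sample: np.ndarray, reference: np.ndarray, threshold: float) -> bool:
    """Toy verification: accept if the distance between the captured sample
    and the enrolled reference is below the decision threshold T."""
    distance = np.linalg.norm(sample - reference)
    return distance <= threshold

# Hypothetical 5-dimensional feature vectors standing in for biometric templates.
reference = np.array([0.12, 0.80, 0.45, 0.33, 0.91])    # enrolled template R_k
genuine   = reference + np.random.normal(0.0, 0.02, 5)  # same user, small variation
impostor  = np.random.uniform(0.0, 1.0, 5)              # another user

strict_T, loose_T = 0.1, 1.0
print(verify(genuine, reference, strict_T))   # likely True: small intra-class distance
print(verify(impostor, reference, strict_T))  # likely False: strict threshold rejects impostors
print(verify(impostor, reference, loose_T))   # may be True: a loose threshold admits impostors (higher FAR)
```

Lowering T makes the system stricter (fewer false acceptances but more false rejections), which is exactly the FAR/FRR trade-off described above.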

1.3.3. Biometric modalities

There are three main groups of biometric modalities (types of biometric information): morphology (part of the person’s body, such as the face or the iris), behavior (an individual action, such as the voice or the way of signing) and physiology (such as DNA). The first two categories are the most widespread in transactional contexts due to processing time limitations. These three categories of biometric modalities are illustrated below, represented by DNA, signature dynamics and fingerprints.


Figure 1.7. Illustrations of the three categories of biometric modalities: DNA, signature dynamics and fingerprints

Almost any morphological or behavioral characteristic may be considered as a biometric characteristic, as long as it satisfies the following properties (Prabhakar et al. 2003):

  – universality: all people to be identified must possess the characteristic;
  – uniqueness: the information should be as different as possible from one person to the next;
  – permanence: the collected information must remain present throughout the individual’s lifetime;
  – collectability: it must be possible to collect and measure the information in order to permit comparison;
  – acceptability: the system must respect certain criteria (ease of acquisition, rapidity, etc.) in order to permit use.

Table 1.1. Comparison of biometric modalities based on the following properties: (U) universality, (N) uniqueness, (P) permanence, (C) collectability, (A) acceptability and (E) performance. For performance, the number of stars is linked to the value of the equal error rate (EER) obtained in the state of the art (source: Mahier et al. 2008)

Modality | U | N | P | C | A | E
DNA | Yes | Yes | Yes | Low | Low | *****
Blood | Yes | No | Yes | Low | No | *
Gait | Yes | No | Low | Yes | Yes | ***
Typing dynamics | Yes | Yes | Low | Yes | Yes | ****
Voice | Yes | Yes | Low | Yes | Yes | ****
Iris | Yes | Yes | Yes | Yes | Low | *****
Retina | Yes | Yes | Yes | Yes | Low | *****
Face | Yes | No | Low | Yes | Yes | ****
Hand geometry | Yes | No | Yes | Yes | Yes | ****
Veins on hand | Yes | Yes | Yes | Yes | Yes | *****
Ear | Yes | Yes | Yes | Yes | Yes | *****
Fingerprint | Yes | Yes | Yes | Yes | Yes | ****

Not all biometric characteristics possess these properties, and those that do may possess them to differing degrees. Table 1.1, taken from Mahier et al. (2008), compares the main biometric modalities according to the properties listed above. As this table shows, no characteristic is ideal; different modalities may be more or less suitable for particular applications. For example, DNA-based analysis is one of the most effective techniques for verifying an individual’s identity or for identification (Stolovitzky et al. 2002). However, it cannot be used for logical or physical access control, both because of the computation time involved and because nobody would be willing to provide a sample of their blood for verification purposes. The choice of modality is thus based on a compromise between some or all of these properties, according to the needs of each application. Note that the choice of the biometric modality may also depend on local cultures. In Asia, methods requiring physical contact, such as fingerprints, are not widely accepted for hygiene reasons; contactless methods are more widespread, and more readily accepted, in this setting.

1.4. Scientific issues

Biometrics is a rapidly evolving field as new operational applications emerge in our daily lives (e.g. unlocking smartphones via facial recognition). Several scientific issues relating to biometrics, resulting from the new needs of this technology, are discussed below.

1.4.1. Presentation attacks

There are many ways of attacking a biometric system (Ratha et al. 2001). An attacker may alter the storage of biometric credentials (e.g. replace a user’s biometric credentials in order to spoof the system), or replace a sub-module, such as the decision module, so that it returns a positive response to any attempt. In this section, we shall focus on presentation attacks, which consist of presenting the capture subsystem with biometric data intended to alter the operation of the biometric system. This type of attack can be quite easy to perform, for example by presenting a photo of the user’s face printed on paper. Impostors may also present biometric systems with falsified biometric data (e.g. a gelatin fingerprint), with or without the participation of the individual concerned. One particularly active area of research concerns the development of hardware or software mechanisms to detect this type of attack (Galbally et al. 2019).

The most common attack of this type is carried out on facial recognition systems. Facial recognition technology has come on in leaps and bounds since its invention in the 1970s, and is now the most “natural” of all biometric measures. By the same token, it has become a major focus for hackers. For example, Grigory Bakunov has developed an algorithm that designs specific makeup patterns to fool facial recognition software (see Figure 1.8(a)).

In late 2017, a Vietnamese company successfully bypassed the Face ID facial recognition feature of Apple’s iPhone X using a mask (see Figure 1.8(b)).

At the same time, researchers at a German company developed an attack technique to bypass Windows 10 Hello facial authentication. A key element of the attack is a photograph of the authorized user taken with a near-infrared (IR) camera, since Windows Hello uses infrared imaging to unlock Windows devices (see Figure 1.8(c)).

In May 2018, Forbes magazine reported that researchers at the University of Toronto (Canada) had developed an algorithm (privacy filter) that confuses facial recognition software. The software changes the value of specific pixels in the image posted online. These changes, imperceptible to the human visual system (HVS), confuse the recognition algorithms.


Figure 1.8. Examples of techniques used to hack facial recognition systems

One response to these types of attack is to use video rather than still images (Matta and Dugelay 2009). Some operators use interviews, via video conferencing software, to authenticate a person. Unfortunately, new attacks have already been developed for video authentication, and we can expect these attacks to become more sophisticated in the years to come. Video streams can now be manipulated in real time to map the facial expressions of a counterfeiter onto another person’s face (Thies et al. 2016), or to swap faces entirely (Bitouk et al. 2008).

Numerous works have been published on this subject, mostly by researchers in the Image Forensics community (Redi et al. 2011; Yeap et al. 2018; Roy et al. 2020); the main approach involves looking for abnormalities in images or video streams in order to identify locations where manipulations have occurred. Modifications are detected on the basis of inconsistencies or estimated anomalies at image points, inconsistencies in sensor noise, recompression artifacts, internal or external copy-paste operations, and inconsistencies in illumination or contours. Several technological challenges have been launched by DARPA, IEEE and NIST in the United States and by the DGA in France (including the DEFALS challenge, with participation from EURECOM, UTT and SURYS) to measure the effectiveness of this type of method. It should be noted that significant progress has recently been made thanks to deep learning techniques. Passive detection can also draw on knowledge of the particularities of attacks, such as what is known to happen during morphing between two images (Raghavendra et al. 2016), or on the history of operations applied to the images in question (Ramachandra and Busch 2017).

However, the effectiveness of these countermeasures is beginning to be undermined by advances in inpainting technologies based on deep learning, which create highly credible computer-generated images in real time, using just a few photos of the person whose identity is being spoofed and a video stream of the spoofer responding to whatever challenges the tests described above may pose. Nevertheless, face spoofing can be detected in video streams by focusing on known features of the processed images, such as specific 3D characteristics of a face (Galbally et al. 2014). Evidently, more work is urgently needed in this area.

1.4.2. Acquisition of new biometric data or hidden biometrics

The objective here is to collect known biometric data by new capture methods (3D, multi-spectral (Venkatesh et al. 2019) and motion (Buriro et al. 2019)), or capture new biometric information (for example, the electrical signal from an individual’s body (Khorshid et al. 2020)). The goal is to propose new information which offers improved individual recognition, or which has a greater capacity to detect presentation attacks.

Elsewhere, considerable efforts have been made in recent years to explore a specific form of biometrics, known as hidden biometrics. The principle consists of identifying or verifying people on the basis of physical characteristics that are not accessible using traditional capture techniques, or that are not directly observable or perceivable by humans. This property makes such systems particularly robust against attacks.

Hidden biometrics also concerns features that vary over time, that cannot be quantified at a given moment, and which can only be predicted (e.g. variations resulting from aging) or recovered (e.g. by rejuvenation). In this case, we speak of forward or backward prediction.

Certain modalities used in hidden biometrics rely on technologies developed in the fields of medicine or forensic science, particularly for data acquisition. Examples include the use of electrocardiograms (ECG), electroencephalograms (EEG) or electromyograms (EMG), involving a variety of imaging techniques (infrared, thermal, ultrasound, etc.) (Nait-Ali 2019a, 2019b).

In this section, we shall focus on three modalities used in hidden biometrics, namely human brain biometrics, hand biometrics and digital facial aging/rejuvenation.

In 2011, researchers showed that a biological signature, the “Braincode”, can be obtained from the human brain and used to distinguish between individuals. Both 2D and 3D processing approaches have been explored, using images obtained by magnetic resonance imaging (MRI). In the 2D approach, one idea is to extract biometric features from a single specific axial slice, as shown in Figure 1.9. By defining a region of interest (ROI) in the form of a crown and applying an algorithm similar to that used in iris biometrics, a recognition rate of around 98.25% can be achieved. In the 3D approach, the whole image volume obtained by the MRI scan is explored in order to extract the Braincode. In Aloui et al. (2018), the envelope of the brain was estimated, highlighting the structure of the convolutions, as shown in Figure 1.10.
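
The exact algorithms behind the Braincode are not reproduced here; the sketch below only illustrates, under simplifying assumptions, the general idea of the 2D approach: sampling a crown-shaped (annular) region of interest around the centre of an axial slice and turning it into a compact binary code that can be compared with a Hamming distance, in the spirit of iris-code normalization. The synthetic array standing in for the MRI slice, the radii and the binarization rule are all hypothetical.

```python
import numpy as np

def crown_features(slice_2d: np.ndarray, r_in: float, r_out: float,
                   n_rays: int = 64, n_rings: int = 8) -> np.ndarray:
    """Sample an annular (crown-shaped) ROI around the slice centre and
    return a fixed-length vector, roughly analogous to iris normalization."""
    h, w = slice_2d.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(r_in, r_out, n_rings)
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    feats = np.empty((n_rings, n_rays))
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        feats[i] = slice_2d[ys, xs]
    return feats.ravel()

def binary_code(features: np.ndarray) -> np.ndarray:
    """Binarize against the median to obtain a compact 'brainprint'-like code."""
    return (features > np.median(features)).astype(np.uint8)

# Synthetic stand-in for an axial MRI slice (a real pipeline would load MRI data).
slice_2d = np.random.rand(128, 128)
code = binary_code(crown_features(slice_2d, r_in=20, r_out=50))

# Two codes can then be compared with a normalized Hamming distance.
other = binary_code(crown_features(np.random.rand(128, 128), r_in=20, r_out=50))
hamming = np.mean(code != other)
print(f"normalized Hamming distance: {hamming:.3f}")
```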


Figure 1.9. Brain biometry via MRI. a) Determination of a region of interest (ROI) from an axial slice. b) Extraction of “brainprint” characteristics using a similar approach to iris biometrics

While this modality cannot currently be used for practical applications, notably due to its technical complexity, cost and low level of user acceptability, future uses are not to be excluded.


Figure 1.10. Hidden brain biometrics: extraction of a brainprint from MRI images of the brain. a) Curvilinear envelopes, estimated using one brain at three different depths (10 voxels-1 cm). b) 2D projection of the estimated envelopes

Palm biometrics in the visible or infrared range (vein biometrics) are potentially vulnerable to attack. One reason for this is that the features extracted from the region of interest are superficial, that is, located at or just beneath the surface of the skin.

Technically, this risk can be considerably reduced by using a modality based on X-ray imaging. In this context, experiments have been carried out on many samples; researchers have shown that a biometric signature can be extracted by modeling the phalanges of the hand (see Figure 1.11 (Kabbara et al. 2013, 2015; Nait-Ali 2019a)).

In the algorithm in question, the image is segmented in order to highlight all of the phalanges. Each phalanx is then modeled using a number of parameters, which are then concatenated to create a biometric signature. Evidently, this approach raises questions concerning the impact of X-rays on user health. The study in question took the recommendations of the National Council on Radiation Protection and Measurements (NCRP) into account, limiting the radiation dose of the systems to 0.1 μSv/scan to ensure user safety.
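
As a rough illustration of this pipeline (the segmentation step itself is omitted), the sketch below concatenates hypothetical per-phalanx parameters into a signature and compares two signatures with a distance and a threshold; the parameter names, values and threshold are assumptions, not those of Kabbara et al.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Phalanx:
    # Hypothetical geometric parameters for one segmented phalanx.
    length: float     # e.g. in millimetres
    width: float
    curvature: float

def hand_signature(phalanges: list[Phalanx]) -> np.ndarray:
    """Concatenate the parameters of every phalanx into one biometric signature."""
    return np.array([[p.length, p.width, p.curvature] for p in phalanges]).ravel()

def same_hand(sig_a: np.ndarray, sig_b: np.ndarray, threshold: float = 5.0) -> bool:
    """Toy decision rule: Euclidean distance between signatures below a threshold."""
    return np.linalg.norm(sig_a - sig_b) <= threshold

enrolled = hand_signature([Phalanx(42.1, 15.3, 0.12), Phalanx(28.4, 13.9, 0.09),
                           Phalanx(21.7, 12.5, 0.15)])
probe    = hand_signature([Phalanx(42.4, 15.1, 0.11), Phalanx(28.2, 14.0, 0.10),
                           Phalanx(21.9, 12.4, 0.14)])
print(same_hand(enrolled, probe))  # True: small measurement differences are tolerated
```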

1.4.3. Quality of biometric data

The quality of biometric data is not always easy to estimate. While quality metrics have been established for morphological modalities such as fingerprints (Yao et al. 2016b), much work is still needed in the case of behavioral modalities.

Work carried out in recent years has highlighted the importance of sample quality for recognition systems or comparison algorithms. The performance of a biometric system depends, to a great extent, on the quality of the sample image. Over the last decade, many research works have focused on defining biometric data quality metrics for the face (Nasrollahi and Moeslund 2008; Wasnik et al. 2017), vein networks (Qin and El Yacoubi 2017) and, especially, fingerprints (Tabassi et al. 2011; Yao et al. 2015a; Liu et al. 2016).


Figure 1.11. Hidden palmar biometrics. a) Imaging in the visible domain. b) X-ray imaging is more robust against attacks. Once the phalanges have been modeled, the biometric signature can be extracted

Developing a quality measure for biometric data requires an objective way of demonstrating that one indicator is superior to others. In the case of image quality, the aim is to develop an algorithm that assigns quality ratings that correlate as closely as possible with human judgment. In biometrics, a quality measure must combine elements of image quality with elements relating to the quality of the extracted biometric characteristics, so as to ensure that the system will perform well. The working framework is therefore different: the ground truth is not fully known, and this can prove problematic.

Yao et al. (2015a) have proposed a methodology for quantifying the performance of a quality metric for biometric data. Their approach is generic, and can be applied to any modality. The method estimates the proximity of a metric to an optimal judgment.

1.4.3.1. Relevance of a quality metric

The principle of the proposed method consists of evaluating the relevance of a metric for a user enrollment task using a database of biometric samples from several users. In this case, a heuristic is needed to designate the user’s reference sample. Once this choice has been made, all legitimate scores in the database are calculated by comparing the samples with the reference of each user. The same is done for imposture scores, by comparing a reference with the samples of all other individuals in the database. These scores are used to compute the FRR and FAR for different values of the decision threshold. These values are then used to calculate the DET curve (evolution of the quantity of false rejections as a function of false acceptances), the EER and the area under the DET curve (AUC).
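
A minimal sketch of this evaluation is given below for a distance-based system. The genuine (legitimate) and impostor score arrays are synthetic placeholders, and the threshold sweep, EER estimate and trapezoidal AUC are simplified approximations of the procedure described above.

```python
import numpy as np

def det_points(genuine: np.ndarray, impostor: np.ndarray, n_thresholds: int = 200):
    """Sweep the decision threshold and return (FAR, FRR) pairs for a distance-based system."""
    thresholds = np.linspace(min(genuine.min(), impostor.min()),
                             max(genuine.max(), impostor.max()), n_thresholds)
    frr = np.array([(genuine > t).mean() for t in thresholds])    # legitimate users rejected
    far = np.array([(impostor <= t).mean() for t in thresholds])  # impostors accepted
    return far, frr

def eer_and_auc(far: np.ndarray, frr: np.ndarray):
    """EER: point where FAR and FRR are closest; AUC: area under the DET curve (trapezoids)."""
    i = np.argmin(np.abs(far - frr))
    eer = (far[i] + frr[i]) / 2.0
    order = np.argsort(far)
    x, y = far[order], frr[order]
    auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
    return eer, auc

# Synthetic distance scores: genuine comparisons cluster low, impostor comparisons high.
rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.1, 1000)
impostor = rng.normal(0.7, 0.1, 1000)
far, frr = det_points(genuine, impostor)
eer, auc = eer_and_auc(far, frr)
print(f"EER = {eer:.3f}, AUC = {auc:.3f}")
```

Re-running such an evaluation with the reference sample chosen by different heuristics (first sample, highest quality, minimum AUC, as discussed below) changes the genuine and impostor score sets, and therefore the resulting EER and AUC; this is how the relevance of a quality metric can be compared against the optimal choice.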

Two co-existing strategies may be used to choose the reference sample for a user:

  1) choice of the first sample as the point of reference for an individual. This approach is widespread, and is considered as the default option (see Figure 1.12(a));
  2) choice of a reference based on a heuristic (see Figure 1.12(b)).

The heuristic may be based on a measurement of sample quality. In this case, the sample with the highest quality is selected as the reference sample for the user.

Another option is to use a heuristic based on the minimum AUC value. This comes down to determining the optimal choice of a reference sample with respect to the performance of the biometric system in question (lowest AUC).

A further alternative is to choose the sample which results in the highest value of the AUC.

Figure 1.13 shows the performances obtained on a biometric database for different reference choice heuristics. The DET curve using the worst sample as a reference is shown in black, and the DET curve using the best sample is shown in green. We see that the choice of reference results in system performances with an AUC of between 0.0352 and 0.2338. Using two quality metrics, we obtain performances of 0.0991 (blue) and 0.0788 (red). Metric 1 (blue curve) is thus considered less efficient than metric 2 (red curve). This demonstrates that there is still room for improvement in sample quality measures.

1.4.3.2. Metric behavior

Twelve biometric databases from the FVC competition (Maltoni et al. 2009) were used to study the behavior of metrics: FVC 2000 (DB1, DB2, DB3, DB4), FVC 2002 (DB1, DB2, DB3, DB4) and FVC 2004 (DB1, DB2, DB3, DB4). Five additional synthetic fingerprint databases of different qualities were also generated using SFINGE (Cappelli et al. 2004): SFINGE0 (containing fingerprints of varying quality), SFINGEA (excellent quality), SFINGEB (good quality), SFINGEC (average quality) and SFINGED (poor quality).


Figure 1.12. Examples of methods used in selecting enrollment samples


Figure 1.13. Representation of performance as a function of reference choice: worst choice (black), best choice (green), choice using metric 1 (blue) and choice using metric 2 (red)

Seven current fingerprint quality metrics were tested:

  1) NFIQ: this metric classifies fingerprint images into five quality levels, based on a neural network (Tabassi et al. 2011). It has served as the industry standard for the past 15 years, and is included in all commercial biometric systems.
  2) NFIQ 2.0: Olsen et al. (2013) trained a two-layer self-organizing map (SOM neural network) to obtain a histogram of SOM unit activations. The trained characteristic is then input into a random forest in order to estimate genuine matching scores. NFIQ 2.0 is the new ISO standard for measuring fingerprint quality.
  3) OCL: Lim et al. (2002) developed a quality measure based on a weighted combination of local and global quality scores, estimated as a function of several characteristics, such as the orientation certainty level (a minimal sketch of this idea is shown after this list).
  4) QMF: this metric is calculated by considering several different aspects, such as (1) the fingerprint image itself (using blind image quality evaluation and texture functions) and (2) the associated minutiae model (Yao et al. 2015a). The quality metric is implemented via a linear combination of quality characteristics.
  5) NBIS: this quality indicator is a simple measure, based on the quality of the minutiae extracted by the NIST NBIS program (Ko 2007). The metric corresponds to the mean quality of the minutiae in the model.
  6) MSEG: this measure is based on pruning the foreground pixels of a poor-quality image (Yao et al. 2016a).
  7) MQF: an original metric that calculates a quality score based solely on the minutiae model (Yao et al. 2015b). Metrics of this type present a significant advantage for biometric systems integrated into chips or smart objects, where computation and storage constraints in the secure part of the chip mean that only the minutiae model is available.
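
To give a flavour of this family of measures, the sketch below computes a block-wise orientation certainty value from the eigenvalues of the local gradient covariance matrix, which is one common formulation of the orientation certainty level idea; it is not the exact implementation of Lim et al. (2002), and the block size, aggregation and synthetic test images are illustrative assumptions.

```python
import numpy as np

def orientation_certainty(block: np.ndarray) -> float:
    """Certainty of the dominant ridge orientation in one image block, computed
    from the eigenvalues of the gradient covariance matrix (close to 1: strongly
    oriented ridges, close to 0: no clear orientation)."""
    gy, gx = np.gradient(block.astype(float))
    cov = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                    [np.mean(gx * gy), np.mean(gy * gy)]])
    eigvals = np.linalg.eigvalsh(cov)          # ascending: [lambda_min, lambda_max]
    if eigvals[1] < 1e-12:                     # flat block: no gradient energy
        return 0.0
    return 1.0 - eigvals[0] / eigvals[1]

def image_quality(image: np.ndarray, block: int = 16) -> float:
    """Average block-wise certainty over the whole image (a crude global score)."""
    h, w = image.shape
    scores = [orientation_certainty(image[y:y + block, x:x + block])
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    return float(np.mean(scores))

# Synthetic example: an oriented sinusoidal pattern scores higher than pure noise.
yy, xx = np.mgrid[0:128, 0:128]
ridges = np.sin(0.4 * xx)                 # strongly oriented "ridge" pattern
noise = np.random.rand(128, 128)
print(f"ridges: {image_quality(ridges):.2f}, noise: {image_quality(noise):.2f}")
```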

Table 1.2 shows the quality scores obtained from the metrics tested on the first sample of four datasets. Note that, for NFIQ, a low value corresponds to high fingerprint quality, while for all other metrics a high score indicates good quality. The metrics generally behave as expected: for example, almost all of them identify the fingerprint from the SFINGEA (high quality) database as being of the best quality. We also note significant differences in scores between the metrics, especially between the image from the FVC2002DB3 database and the SFINGED database. However, since the metrics take values in very different ranges, these results are not directly comparable.

Table 1.2. Examples of values obtained for the first samples from different databases using different quality indicators

Dataset | NFIQ | NFIQ2 | OCL | QMF | NBIS | MSEG | MQF
FVC2000DB1 | 2 | 65 | 0.73 | 83.81 | 14.16 | 0.4459 | 802
FVC2000DB3 | 4 | 40 | 0.71 | 28.06 | 15.11 | 0.1829 | 804
SFINGEA | 1 | 69 | 0.90 | 76.09 | 57.46 | 0.8355 | 720
SFINGED | 3 | 28 | 0.47 | 91.19 | 10 | 0.0064 | 3 546

Many further avenues remain to be explored in order to develop better quality metrics for fingerprint samples.

1.4.4. Efficient representation of biometric data

Once biometric data has been collected and authenticated, the best possible representation must be extracted in order to make comparisons. For many years, the search for relevant parameters to characterize biometric data focused on attributes extracted from signals and images (Wu et al. 2019). Recently, however, the use of convolutional neural networks has revolutionized the field, and statistical learning approaches are becoming increasingly widespread (Parkhi et al. 2015). Current research aims to generalize this type of approach to all biometric modalities, even in cases where large databases are not available.


Figure 1.14. Examples of fingerprints from the different image databases

The performance of biometric systems has progressively improved since the 2000s because of advancements in electronics (more powerful sensors), computing (more powerful computers) and algorithmics (more sophisticated data processing techniques). These three components are closely linked within the classical architecture of a biometric system; in other words, the failure of one of these three essential components can seriously compromise the overall performance of the system. Considering the algorithmic aspect, the last few years have seen significant changes due to the success of deep learning methods in various image processing and data analysis applications. The generic structure of biometric systems has been modified to include the parameter extraction phase within a multi-layered neural architecture, such as that shown in Figure 1.15. This architecture has resulted in considerable improvements in the performance of biometric systems, although it relies on access to powerful computers equipped with GPUs, for example, and on the availability of large databases for learning purposes. The learning mechanisms produce robust models that are easy to use in the test phase.
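
The sketch below illustrates this architecture in miniature with PyTorch: a small, untrained convolutional network maps an input image to a fixed-size, L2-normalized embedding, and two embeddings are compared with a cosine similarity and a threshold. The network depth, embedding size, input resolution and threshold are arbitrary assumptions; real systems rely on much deeper networks trained on large databases, such as those listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbeddingNet(nn.Module):
    """Toy convolutional feature extractor: the embedding produced by the last
    hidden layer plays the role of the learned biometric representation."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Linear(32, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalized embedding, so that cosine similarity behaves like a dot product.
        return F.normalize(self.embed(self.features(x)), dim=1)

model = TinyEmbeddingNet().eval()
with torch.no_grad():
    # Stand-ins for a 64x64 grayscale enrollment image and a new capture.
    reference = model(torch.rand(1, 1, 64, 64))
    probe = model(torch.rand(1, 1, 64, 64))
    similarity = F.cosine_similarity(reference, probe).item()

threshold = 0.8  # arbitrary; in practice tuned from FAR/FRR requirements
print(f"similarity = {similarity:.3f}, accepted = {similarity >= threshold}")
```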

In this context, deep learning has been successfully applied to a range of biometric modalities:

  1) Facial recognition (performance ∼99%): DeepFace (AlexNet model), DeepID, DeepID2, DeepID3 (VGGNet-10 model), VGGface (VGGNet-16 model), FaceNet (GoogLeNet model).
  2) Fingerprint recognition (performance ∼95% to ∼98%): FingerNet, DeepCNN.
  3) Palmprint recognition (performance > 99%): Deep Scattering, MobileNetV2 + SVM, Deform-invariant.
  4) Iris recognition (performance ∼99%): DeepCNN, Deep Scattering, Deep Features.
  5) Signature recognition (performance 81–93%): Embedding, SIGAN.
  6) Gait recognition (performance 68–95%): Yan et al., Li et al., Zhang et al.

Figure 1.15. Generic diagram of a neural architecture. a) Single-layer architecture using biometric characteristics as input. b) Multi-layer architecture (used in deep learning), in which biometric characteristics may be estimated in the hidden layers

More details may be found in published “survey” articles, such as in Minaee et al. (2019).

1.4.5. Protecting biometric data

Biometric data are, by definition, personal data. The General Data Protection Regulation (GDPR) of 2018 reinforces the obligation to protect individual privacy (Voigt and Von dem Bussche 2017), and, evidently, covers biometric data. These data must be protected for both security and user privacy reasons. Solutions are therefore needed, both in terms of protection (Atighehchi et al. 2019) and for evaluating attack resistance (Gomez-Barrero and Galbally 2020).

1.4.5.1. State of the art

There are several possible approaches to protecting biometric data:

  • Classic encryption: classic approaches to encrypting biometric data usually rely on symmetric (e.g. AES) and asymmetric (e.g. RSA) cryptosystems. This approach has the disadvantage of needing to decrypt the biometric reference for comparison with a capture, making it vulnerable during this time. This approach is necessary (especially for storage in a biometric database) but not sufficient, considering the lifetime of a biometric data element (an algorithm which is secure at present may be broken during an individual’s lifetime). This is the minimum that is expected of a biometric system.
  • Storage in a secure element: this approach consists of storing clear or, preferably, encrypted biometric data in a hardware enclave such as a microcircuit chip (Wang et al. 2018). This method is classically used in smartphones, for example, where a micro-circuit chip may be linked to a fingerprint sensor. This chip stores the biometric reference and carries out the comparison with a new capture (see Figure 1.16). A hardware enclave guarantees very high resistance to physical and logical attacks (Maciej et al. 2019; Im et al. 2020). This type of approach also allows the individual to remain in possession of their own biometric data (in a smartphone or passport).
  • Cancelable biometrics: one major criticism of biometrics is the intrinsic non-revocability of biometric data. For instance, a fingerprint cannot be changed in the same way as a password. The aim of cancelable biometrics is to make it possible to regenerate a biometric signature for an individual, even in cases of interception by an attacker, without impinging on their privacy. The general principle is to apply a non-invertible transformation to the biometric data, parameterized by a secret key (represented by a random number or a password (Lacharme and Plateaux 2011)), as illustrated in Figure 1.17; a minimal sketch of such a transformation is given after this list. This type of approach is similar to two-factor authentication (combining biometrics and secret information). Note that knowledge of the secret information alone is not sufficient for an attacker to retrieve the initial biometric data, but may be sufficient to permit an attack (e.g. imposture) in certain cases (Lacharme et al. 2013). The verification step is relatively simple, requiring a comparison based on a distance (notably the Hamming distance). Key diversification allows one individual to possess several biometric signatures derived from the same biometric data, preventing an attacker from accessing multiple digital services using the same individual identity.
  • Secure computation: this approach aims to offer protection for secure storage of biometric data (particularly in the Cloud) and permit identity verification without having to decrypt an individual’s biometric reference. Homomorphic encryption (Barrier 2016) is one possible approach used here. However, while biometric data elements are often small in size (such as a fingerprint, which may require less than 200 bytes), homomorphic encryption results in much larger signatures (several megabytes), and this limits its use for practical applications.
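
The sketch below illustrates the kind of BioHashing-style transformation referred to in the cancelable biometrics item above: a feature vector is projected onto random orthonormal directions generated from a user-specific secret and then binarized, and protected codes are compared with a Hamming distance. The dimensions, seed handling and binarization rule are simplified assumptions rather than the exact scheme of Lacharme and Plateaux (2011).

```python
import numpy as np

def biohash(features: np.ndarray, secret_seed: int, code_bits: int = 64) -> np.ndarray:
    """BioHashing-style protection: project the feature vector onto random
    orthonormal directions generated from a user-specific secret, then binarize.
    The stored code does not directly reveal the secret or the original features."""
    rng = np.random.default_rng(secret_seed)
    random_matrix = rng.standard_normal((features.size, code_bits))
    basis, _ = np.linalg.qr(random_matrix)          # orthonormal projection directions
    projection = features @ basis
    return (projection > 0).astype(np.uint8)        # non-invertible binarization

def hamming(code_a: np.ndarray, code_b: np.ndarray) -> float:
    return float(np.mean(code_a != code_b))

# Hypothetical 128-dimensional biometric feature vectors.
rng = np.random.default_rng(1)
user_features = rng.standard_normal(128)
same_user_capture = user_features + 0.1 * rng.standard_normal(128)
other_user = rng.standard_normal(128)

seed = 987654321  # user-specific secret (e.g. derived from a password or token)
ref_code = biohash(user_features, seed)
print(hamming(ref_code, biohash(same_user_capture, seed)))  # small: genuine user, same secret
print(hamming(ref_code, biohash(other_user, seed)))         # ~0.5: impostor
print(hamming(ref_code, biohash(user_features, seed + 1)))  # ~0.5: revoked/renewed code with a new secret
```

Because the projection depends on the secret, a compromised code can be revoked simply by changing the seed, and different seeds yield different, unlinkable codes for the same biometric data.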

Figure 1.16. Hardware storage of biometric data in a) a biometric physical access control system and b) a biometric passport


Figure 1.17. Operation of a cancelable biometric system based on a transformation

1.4.5.2. Discussion

Biometric data is a particularly sensitive kind of personal data (due to its irrevocable nature and inherent connection to an individual). The protection of these data is therefore crucial, and unprotected storage of biometric data elements is, evidently, unthinkable. Conventional encryption offers a standard default approach, but is not sufficient. Hardware-based protection is a good approach, but is limited to objects in an individual’s possession (such as smartphones, smart watches, smart cards and flash drives). The current trend in biometric authentication is to adopt a centralized, cloud-based approach. Cancelable biometrics is another promising avenue, and the small size of protected biometric signatures is an advantage in terms of implementation; this approach is also more secure than classic encryption. Secure computation is an interesting prospect, but the biometric signatures used must be compatible with fast, Internet-based transmission. Other approaches using secret information or cryptographic keys must include a clear separation between the holder of the cryptographic keys and the service used to store the signatures, in order to avoid any breach of privacy (as decryption is generally possible).

Research into the protection of biometric data is unfortunately lacking. Many researchers aim to develop increasingly powerful and precise systems in terms of recognition errors, but the practical adoption of these systems is essentially dependent on security and privacy protection criteria. As we saw in the previous section, the use of deep learning techniques for biometrics may provide further answers in this area. Architectures of this type may include biometric data protection mechanisms, as recent articles have pointed out (Jami et al. 2019; Walia et al. 2020).

1.4.6. Aging biometric data

Biometric data do not always remain constant over the course of time. While fingerprints change little, an individual face can vary considerably. Behavioral biometric data are also subject to intrinsic intraclass variation that is difficult to manage in terms of recognition. This raises the need for new biometric systems with the capacity to manage data aging, in order to ensure consistently high performance, in terms of recognition, over the course of time (Pisani et al. 2019).

Despite the remarkable progress made in deep learning-based facial recognition approaches in recent years, in terms of both verification and identification performance, the neural architecture still has limitations. These limitations relate to the database used in the learning phase. If the selected database does not contain enough instances, the result may be systematically affected. For example, the performance of a facial biometric system may decrease if the person to be verified or identified was enrolled over 10 years ago. In adult individuals, aging results in changes to the texture of the face, notably with the appearance of wrinkles and skin sagging due to a loss of elasticity. These changes may be accentuated by weight gain or loss. In cases where enrollment is performed on a young child, verification/identification of the same individual’s identity during adolescence or even adulthood using a static or “invariant” facial recognition system may fail. To counteract this problem, researchers have developed models for facial aging or digital rejuvenation, and work in this area is still ongoing. Both generative adversarial network (GAN) and statistical models have produced visually impressive results. Digital rejuvenation or aging is used to compensate for the differences in facial characteristics that appear over a given time period. This feature can be incorporated into facial recognition systems in order to improve their performance.

Figure 1.18 shows virtual faces at different stages of adult life. Figure 1.19 shows changes in appearance from childhood through adolescence and into adulthood. As Farazdaghi and Nait-Ali (2017) have indicated, the appearance of a face at different times can be predicted using specific mathematical models, which combine a statistical element with a parametric model, of which the parameters are estimated using geometric transformations based on anthropometric measurements. The model can be extended to process 3D digitized faces, as shown in Figure 1.20. Initial results for applications in a facial verification context can be found in Heravi et al. (2019).


Figure 1.18. 2D digital aging and rejuvenation of an adult face. Models are shown across several age ranges: 21–30, 31–40, 41–50, 51–60, 61–70 and 71–80 years old

1.5. Conclusion

In this chapter, we aimed to provide a concise overview of key aspects of security biometrics at the time of writing. The broad outlines given here are intended to provide readers with a framework from which to explore the subject further, based on their own interests, aims and priorities. This chapter is intended for a wide audience, from beginners to experts; certain technical details were deliberately omitted, and readers are encouraged to consult further works on the subject as required. Biometrics is a flourishing area of both research and application. Only time will tell what the future holds.


Figure 1.19. 2D digital aging and rejuvenation of a child’s face. a) Models of faces for several age ranges: 3–4, 7–8, 12–13 and 17–18 years old. b) Digital rejuvenation of a face based on a reference photo (adult) and comparison of the “artificial” face with a real photograph (child at age 4)


Figure 1.20. 3D aging and rejuvenation: male and female examples

1.6. References

Aloui, K., Nait-Ali, A., Naceur, M.S. (2018). Using brain prints as new biometric feature for human recognition. Pattern Recognition Letters, 113, 38–45.

Atighehchi, K., Ghammam, L., Barbier, M., Rosenberger, C. (2019). GREYC-hashing: Combining biometrics and secret for enhancing the security of protected templates. Future Generation Computer Systems, 101, 819–830.

Barrier, J. (2016). Chiffrement homomorphe appliqué au retrait d’information privé. PhD Thesis, INSA, Toulouse.

Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., Nayar, S.K. (2008). Face swapping: Automatically replacing faces in photographs. ACM Trans. Graph., 27(3), 1–8.

Bledsoe, W.W. and Chan, H. (1965). A man-machine facial recognition system – Some preliminary results. Technical report, Panoramic Research, Inc., Palo Alto.

Buriro, A., Crispo, B., Conti, M. (2019). Answerauth: A bimodal behavioral biometric-based user authentication scheme for smartphones. Journal of Information Security and Applications, 44, 89–103.

Cappelli, R., Maio, D., Maltoni, D. (2004). Sfinge: An approach to synthetic fingerprint generation. In International Workshop on Biometric Technologies Proceedings. BT, Calgary, 147–154.

Farazdaghi, E. and Nait-Ali, A. (2017). Backward face ageing model (b-fam) for digital face image rejuvenation. IET Biometrics, 6(6), 478–486.

Galbally, J., Marcel, S., Fierrez, J. (2014). Biometric antispoofing methods: A survey in face recognition. IEEE Access, 2, 1530–1552.

Galbally, J., Fierrez, J., Cappelli, R. (2019). An introduction to fingerprint presentation attack detection. In Handbook of Biometric Anti-Spoofing, Marcel, S., Nixon, M.S., Fierrez, J., Evans, N. (eds). Springer, Berlin/Heidelberg.

Gomez-Barrero, M. and Galbally, J. (2020). Reversing the irreversible: A survey on inverse biometrics. Computers & Security, 90, 101700.

Heravi, F.M.Z., Farazdaghi, E., Fournier, R., Nait-Ali, A. (2019). Impact of aging on three-dimensional facial verification. Electronics, 8(10), 1170.

Im, J.-H., Jeon, S.-Y., Lee, M.-K. (2020). Practical privacy-preserving face authentication for smartphones secure against malicious clients. IEEE Transactions on Information Forensics and Security, 15, 2386–2401.

Jami, S.K., Chalamala, S.R., Jindal, A.K. (2019). Biometric template protection through adversarial learning. In International Conference on Consumer Electronics. IEEE, Las Vegas.

Kabbara, Y., Shahin, A., Nait-Ali, A., Khalil, M. (2013). An automatic algorithm for human identification using hand X-ray images. In 2nd International Conference on Advances in Biomedical Engineering. IEEE, Tripoli.

Kabbara, Y., Nait-Ali, A., Shahin, A., Khalil, M. (2015). Hidden biometric identification/authentication based on phalanx selection from hand X-ray images with safety considerations. In International Conference on Image Processing Theory, Tools and Applications. IEEE, Orléans.

Khorshid, A.E., Alquaydheb, I.N., Kurdahi, F., Jover, R.P., Eltawil, A. (2020). Biometric identity based on intra-body communication channel characteristics and machine learning. Sensors, 20(5), 1421.

Kirby, M. and Sirovich, L. (1990). Application of the Karhunen-Loeve procedure for the characterization of human faces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1), 103–108.

Ko, K. (2007). User’s guide to Nist biometric image software (NBIS). Report, NIST Interagency/Internal Report (NISTIR), 7392.

Lacharme, P. and Plateaux, A. (2011). PIN-based cancelable biometrics. International Journal of Automated Identification Technology (IJAIT), 3(2), 75–79 [Online]. Available at: https://hal.archives-ouvertes.fr/hal-00984027/file/IJAIT2.pdf.

Lacharme, P., Cherrier, E., Rosenberger, C. (2013). Preimage attack on biohashing. In International Conference on Security and Cryptography (SECRYPT). IEEE, Reykjavik.

Lim, E., Jiang, X., Yau, W. (2002). Fingerprint quality and validity analysis. In International Conference on Image Processing. IEEE, Rochester.

Liu, X., Pedersen, M., Charrier, C., Bours, P., Busch, C. (2016). The influence of fingerprint image degradations on the performance of biometric system and quality assessment. In International Conference of the Biometrics Special Interest Group (BIOSIG). IEEE, Darmstadt.

Maciej, B., Imed, E.F., Kurkowski, M. (2019). Multifactor authentication protocol in a mobile environment. IEEE Access, 7, 157185–157199.

Mahier, J., Pasquet, M., Rosenberger, C., Cuozzo, F. (2008). Biometric authentication. In Encyclopedia of Information Science and Technology, Khosrow-Pour, M. (ed.). IGI, Hershey.

Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S. (2009). Handbook of Fingerprint Recognition, 2nd edition. Springer, London.

Matta, F. and Dugelay, J.-L. (2009). Person recognition using facial video information: A state of the art. Journal of Visual Languages & Computing, 20, 180–187.

Minaee, S., Abdolrashidi, A., Su, H., Bennamoun, M., Zhang, D. (2019). Biometric recognition using deep learning: A survey. arXiv preprint, arXiv:1912.00271.

Nait-Ali, A. (2019a). Hidden Biometrics: When Biometric Security Meets Biomedical Engineering. Springer Nature, Singapore.

Nait-Ali, A. (2019b). Biometrics under Biomedical Considerations. Springer, Singapore.

Nait-Ali, A. and Fournier, R. (2012). Signal and Image Processing for Biometrics. ISTE Ltd, London, and John Wiley & Sons, New York.

Nasrollahi, K. and Moeslund, T.B. (2008). Face quality assessment system in video sequences. In European Workshop on Biometrics and Identity Management. BioID, Roskilde.

Olsen, A.M., Tabassi, E., Makarov, A., Busch, C. (2013). Self-organizing maps for fingerprint image quality assessment. In Conference on Computer Vision and Pattern Recognition Workshops. IEEE, Portland.

Parkhi, O., Vedaldi, A., Zisserman, A. (2015). Deep face recognition. In British Machine Vision Conference. BMVC, Swansea.

Pisani, P.H., Mhenni, A., Giot, R., Cherrier, E., Poh, N., Ferreira de Carvalho, A.C.P.D.L., Rosenberger, C., Amara, N.E.B. (2019). Adaptive biometric systems: Review and perspectives. ACM Computing Surveys (CSUR), 52(5), 1–38.

Prabhakar, S., Pankanti, S., Jain, A.K. (2003). Biometric recognition: Security and privacy concerns. IEEE Security Privacy, 1(2), 33–42.

Qin, H. and El Yacoubi, M.A. (2017). Deep representation for finger-vein image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology, 28(8), 1677–1693.

Raghavendra, R., Raja, K.B., Busch, C. (2016). Detecting morphed face images. In 8th International Conference on Biometrics Theory, Applications and Systems. IEEE, Niagara Falls.

Ramachandra, R. and Busch, C. (2017). Presentation attack detection methods for face recognition systems: A comprehensive survey. ACM Computing Surveys (CSUR), 50(1), 1–37.

Ratha, N.K., Connell, J.H., Bolle, R.M. (2001). Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal, 40(3), 614–634.

Redi, J., Taktak, W., Dugelay, J.-L. (2011). Digital image forensics: A booklet for beginners. Multimedia Tools Appl., 51, 133–162.

Roy, A., Dixit, R., Naskar, R., Chakraborty, R.S. (eds). (2020). Copy-move forgery detection exploiting statistical image features. In Digital Image Forensics. Springer, Singapore.

Stolovitzky, G., Rudin, N., Inman, K., Rigoutsos, I. (2002). DNA based identification. In Biometrics: Personal Identification in Networked Society, Jain, A.K., Bolle, R., Pankanti, S. (eds). Kluwer Academic Publishers, Norwell.

Tabassi, E., Wilson, C., Watson, C. (2011). Fingerprint Image Quality (NFIQ). Report, NISTIR.

Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., Nießner, M. (2016). Face2Face: Real-time face capture and reenactment of RGB videos. In Conference on Computer Vision and Pattern Recognition. IEEE, Las Vegas.

Turk, M. and Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86.

Venkatesh, S., Ramachandra, R., Raja, K., Busch, C. (2019). A new multi-spectral iris acquisition sensor for biometric verification and presentation attack detection. In Winter Applications of Computer Vision Workshops. IEEE, Waikoloa Village.

Voigt, P. and Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A Practical Guide, 1st edition. Springer International Publishing, New York.

Walia, G.S., Aggarwal, K., Singh, K., Singh, K. (2020). Design and analysis of adaptive graph based cancelable multi-biometrics approach. IEEE Transactions on Dependable and Secure Computing, 19, 54–66.

Wang, D., Shen, J., Liu, J.K., Choo, K.-K.R. (2018). Rethinking authentication on smart mobile devices. Wireless Communications and Mobile Computing, 1–4.

Wasnik, P., Raja, K.B., Ramachandra, R., Busch, C. (2017). Assessing face image quality for smartphone based face recognition system. In 5th International Workshop on Biometrics and Forensics. IEEE, Coventry.

Wu, W., Elliott, S.J., Lin, S., Sun, S., Tang, Y. (2019). Review of palm vein recognition. IET Biometrics, 9(1), 1–10.

Yao, Z., Le Bars, J.-M., Charrier, C., Rosenberger, C. (2015a). Fingerprint quality assessment combining blind image quality, texture and minutiae features. In International Conference on Information Systems Security and Privacy. IEEE, Angers.

Yao, Z., Le Bars, J.-M., Charrier, C., Rosenberger, C. (2015b). Quality assessment of fingerprints with minutiae delaunay triangulation. In International Conference on Information Systems Security and Privacy. IEEE, Angers.

Yao, Z., Charrier, C., Rosenberger, C. (2016a). Pixel pruning for fingerprint quality assessment. In International Biometric Performance Testing Conference (IBPC). NIST, Gaithersburg.

Yao, Z., Le Bars, J.-M., Charrier, C., Rosenberger, C. (2016b). Literature review of fingerprint quality assessment and its evaluation. IET Biometrics, 5(3), 243–251.

Yeap, Y.Y., Sheikh, U., Ab Rahman, A.A.-H. (2018). Image forensic for digital image copy move forgery detection. In 14th International Colloquium on Signal Processing and Its Applications. IEEE, Penang.
