Veronika Kuchta and Gaurav Sharma
Post‐quantum cryptography is an essential research topic whose popularity has grown with the progress of research on quantum computing. Quantum computers are highly powerful machines which take advantage of subatomic particles that exist in more than one state at a time. Such machines can process certain kinds of information incomparably faster than the fastest conventional computers. IBM and Google are among the leading companies in the race to build the first large‐scale quantum computer. The main feature of such a powerful computer is that it will be able to perform calculations which are practically impossible to simulate on a conventional computer. A computer with this capability will easily be able to break all current cryptographic constructions whose security rests on number‐theoretic assumptions. A possible solution to this problem is offered by the following research fields, which are assumed to be resistant against quantum attacks: lattice‐based, code‐based, hash‐based, multivariate and isogeny‐based cryptography.
We will focus here on one specific topic and its relation to the Internet of Things (IoT). Research on applying quantum‐resistant algorithms to IoT is still at a very early stage. Therefore, in this chapter we want to familiarize the reader with the notions and main constructions of lattice‐based cryptography and share our ideas on how to use these schemes in IoT.
In Section 2, we take a closer look at the main definitions of lattice‐based cryptography and introduce the reader to certain essential problems in this area. In Section 3, we discuss the main cryptographic primitives constructed from lattices. In Section 4, we show the relation between lattice‐based cryptoschemes and IoT and provide some examples of these relations.
As we know, cryptography requires average‐case intractability, meaning that there are problems for which random instances are hard to solve. This differs from the worst‐case notion of hardness, where a problem is considered hard if there exist some intractable instances; NP‐completeness, for example, is a worst‐case notion. It follows that problems which are hard in the worst case may still be easy on average.
A crucial contribution to this problem in the lattice setting was made by Ajtai [3], who proved that certain problems are hard on average if some lattice‐related problems are hard in the worst case. Using this result, cryptographers can construct schemes which are not feasible to break unless all instances of certain lattice problems are easy to solve.
Let Z_q = Z/qZ denote the quotient ring of integers modulo q, where q is a positive integer. Elements in Z_q are given as residues {0, 1, ..., q−1}, with addition and multiplication performed modulo q. It holds that Z_q^n is an additive group supporting scalar multiplication by integers, i.e. z·a ∈ Z_q^n for an integer z ∈ Z and a ∈ Z_q^n.
We use bold capital letters to denote matrices, such as A, and bold lower‐case letters to denote column vectors, such as x. To indicate horizontal concatenation of vectors and matrices we use the notation [A | B].
Let B = {b_1, ..., b_n} be the basis of a lattice, consisting of n linearly independent vectors in R^m. The n‐dimensional lattice L is then defined as L(B) = {∑_{i=1}^{n} x_i·b_i : x_i ∈ Z}.
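The lattice definition above can be made concrete with a few lines of code. The following sketch enumerates the lattice points B·x for small integer coefficient vectors x; the 2‐dimensional basis is a toy example with illustrative values only:

```python
import itertools
import numpy as np

# Toy basis: the columns of B are the basis vectors b_1 = (2, 1), b_2 = (1, 3).
B = np.array([[2, 1],
              [1, 3]])

def lattice_points(B, bound):
    """All lattice points B @ x with integer coefficients x_i in [-bound, bound]."""
    n = B.shape[1]
    coeffs = itertools.product(range(-bound, bound + 1), repeat=n)
    return {tuple(B @ np.array(x)) for x in coeffs}

pts = lattice_points(B, 2)
# The origin is always a lattice point, and the point set is symmetric:
# -v lies in the lattice whenever v does.
```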
The i‐th minimum of a lattice L, denoted by λ_i(L), is the smallest radius r such that L contains i linearly independent vectors of norms at most r. (The Euclidean norm of a vector b is defined as ||b|| = (∑_i b_i²)^{1/2}, where the b_i are the coefficients of b.) We denote by λ_1^∞(L) the minimum distance measured in the infinity norm, which is defined as ||b||_∞ = max_i |b_i|. Additionally, we recall that, given a basis B, its fundamental parallelepiped is given by P(B) = {∑_i c_i·b_i : c_i ∈ [0, 1)}. Given a basis B for a lattice L and a vector x, we define x mod B as the unique vector y in P(B) such that x − y ∈ L. If L is a lattice, its dual lattice is defined as L* = {y : ⟨x, y⟩ ∈ Z for all x ∈ L}.
The following specific lattices contain q·Z^m as a sub‐lattice for a prime q. For A ∈ Z_q^{n×m} and u ∈ Z_q^n, define: Λ_q(A) = {y ∈ Z^m : y = A^T·s mod q for some s ∈ Z_q^n}, Λ_q^⊥(A) = {y ∈ Z^m : A·y = 0 mod q} and Λ_q^u(A) = {y ∈ Z^m : A·y = u mod q}.
Many lattice‐based works rely on Gaussian‐like distributions called discrete Gaussians. In the following paragraph we recall the main notation for this distribution.
Let S be a subset of Z^m. For a vector c ∈ R^m and a positive parameter s ∈ R, define ρ_{s,c}(x) = exp(−π·||x − c||²/s²) and ρ_{s,c}(S) = ∑_{x ∈ S} ρ_{s,c}(x).
The discrete Gaussian distribution over S with center c and parameter s is given by D_{S,s,c}(x) = ρ_{s,c}(x)/ρ_{s,c}(S). The distribution is usually defined over the lattice S = Λ_q^⊥(A) for A ∈ Z_q^{n×m}. Ajtai [3] showed how to sample a uniform matrix A together with a basis of Λ_q^⊥(A) with low Gram‐Schmidt norm. Gentry et al. [18] defined an algorithm for sampling from this discrete Gaussian given a basis B of the m‐dimensional lattice Λ and a sufficiently large Gaussian parameter s relative to the Gram‐Schmidt norm of B, where the Gram‐Schmidt orthogonalization of B is defined as follows:
Let B = {b_1, ..., b_m} be a set of linearly independent vectors. Its Gram‐Schmidt orthogonalization B̃ = {b̃_1, ..., b̃_m} is defined by b̃_1 = b_1 and, for i > 1, b̃_i is the component of b_i orthogonal to span(b_1, ..., b_{i−1}).
The Gram‐Schmidt norm is denoted by ||B̃|| = max_i ||b̃_i||.
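The notion of a discrete Gaussian can be illustrated with a simple rejection sampler over the one‐dimensional lattice Z, with density proportional to ρ_{s,c}(x) = exp(−π(x−c)²/s²). This is a teaching sketch with illustrative parameters, not the trapdoor‐based lattice sampler of [18]:

```python
import math
import random

def rho(x, s, c):
    """Gaussian weight rho_{s,c}(x) = exp(-pi * (x - c)^2 / s^2), always <= 1."""
    return math.exp(-math.pi * (x - c) ** 2 / s ** 2)

def sample_discrete_gaussian(s, c=0.0, tail=12):
    """Sample from D_{Z,s,c} by rejection over a truncated support."""
    lo, hi = int(c - tail * s), int(c + tail * s)
    while True:
        x = random.randint(lo, hi)          # uniform proposal on the support
        if random.random() <= rho(x, s, c): # accept with probability rho
            return x

# The empirical mean of many samples should be close to the center c = 0.
samples = [sample_discrete_gaussian(s=4.0) for _ in range(2000)]
mean = sum(samples) / len(samples)
```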
In the next definitions we recall the most popular computational problems on lattices, which can also be found in the survey of Peikert [37]. The most well‐studied problem is the Shortest Vector Problem (SVP): given a basis B of a lattice L, find a shortest non‐zero lattice vector, i.e. a vector v ∈ L with ||v|| = λ_1(L).
Apart from the exact SVP problem, there are many approximation problems which are parameterized by an approximation factor γ, typically expressed as a function of the lattice dimension n, i.e. γ = γ(n). The corresponding approximation problem SVP_γ is defined as follows: given a basis B of a lattice L, find a non‐zero lattice vector v ∈ L with ||v|| ≤ γ(n)·λ_1(L).
It is known that many cryptoschemes can be proved secure under the hardness of certain lattice problems in the worst case. However, no such proof is known based on the search version of SVP_γ; instead, many proofs are based on the following decision version of the approximate SVP problem:
Another approximate version of SVP is recalled in the following definition:
This is a special case (presented in [13]) of the short integer solution (SIS) problem introduced by Ajtai [3]. Another particularly important computational problem for cryptographic constructions is the Bounded‐Distance Decoding (BDD) problem.
The main difference between BDD and the closest vector problem is that the target point in BDD is guaranteed to be close to the lattice, which makes the solution unique, while the target of the latter problem can be an arbitrary point.
Most modern lattice‐based cryptographic schemes rely on the following average‐case problems: the Short Integer Solution (SIS) and Learning with Errors (LWE) problems, and their analogues defined over rings. They involve analytic techniques such as the Gaussian probability distribution.
In matrix form, this problem looks as follows: collecting the sample vectors a_i as the columns of a matrix A, and the error terms e_i and values b_i as the entries of the m‐dimensional vectors e and b, we obtain the input (A, b = A^T·s + e).
In this chapter we briefly survey some of the previous significant works in lattice cryptography. The ground‐breaking work of Ajtai [3] provided the first worst‐case to average‐case reductions for lattice problems. In that work, Ajtai introduced the average‐case short integer solution (SIS) problem and showed that solving it is at least as hard as approximating various lattice problems in the worst case. In a later work [4], Ajtai and Dwork presented a lattice‐based public‐key encryption scheme which became the basic template for all subsequent lattice‐based encryption schemes.
Almost at the same time, a concurrent work was published by Hoffstein, Pipher and Silverman [25], introducing the NTRU public‐key encryption scheme. It was the first construction using polynomial rings. The advantages of that construction are its practical efficiency and particularly compact keys. The NTRU system is parameterized by a polynomial ring R = Z[x]/(x^n − 1) for a prime n, or R = Z[x]/(x^n + 1) for n a power of two, together with a sufficiently large modulus q, which defines the quotient ring R_q = R/qR.
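The ring arithmetic underlying NTRU‐like systems can be sketched in a few lines. The snippet below multiplies polynomials, represented as coefficient vectors, in the quotient ring Z_q[x]/(x^n + 1); the parameters n and q are purely illustrative:

```python
import numpy as np

n, q = 8, 257  # toy parameters, far below any real security level

def ring_mul(a, b):
    """Multiply two polynomials (coefficient arrays) modulo x^n + 1 and q."""
    res = np.zeros(n, dtype=np.int64)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] += ai * bj
            else:
                res[k - n] -= ai * bj   # x^n = -1 wraps with a sign flip
    return res % q

one = np.array([1] + [0] * (n - 1))     # the multiplicative identity
a = np.arange(n)
b = np.array([3, 1, 4, 1, 5, 9, 2, 6])
c = ring_mul(a, b)
```

Multiplication in this ring is commutative and `one` acts as the identity, which can be checked directly on small examples.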
Around the same time, Goldreich, Goldwasser and Halevi [20] published a public‐key encryption scheme and a digital‐signature scheme, both based on lattices. The main idea behind their constructions is that the public key is a “bad” basis of a lattice, consisting of long and non‐orthogonal lattice vectors, while the secret key is a “good” basis of the same lattice, consisting of short lattice vectors.
Later on, Oded Regev [40] improved the results of Ajtai and Dwork by introducing Gaussian measures and harmonic analysis over lattices. The main consequences of these new techniques were simpler algorithms and tighter approximation factors for the underlying worst‐case lattice problems. In another important work, Regev [41] introduced the average‐case learning with errors (LWE) problem, for which he was awarded the Gödel Prize in 2018. In the same paper, he introduced a new cryptosystem which can be proved secure under the LWE assumption. This construction had the favourable feature of more efficient public keys, secret keys and ciphertexts than previous lattice‐based schemes.
Some core constructions which were built upon (ring)‐SIS/LWE assumptions are listed below:
An example of this is the SWIFFT construction by Lyubashevsky et al. [34]. It is an instantiation of a ring‐SIS‐based hash function and is highly efficient in practice, as it uses the fast Fourier transform over Z_q. Without going too deeply into details, we sketch the construction as follows: the SWIFFT hash function maps a key, consisting of vectors, and a short input vector to their matrix–vector product, where the key matrix is not uniformly chosen but is a block matrix generated from a structured block, as shown in [34]. The collision resistance property follows from worst‐case lattice problems on ideal lattices. A collision‐resistant hash function is a highly significant cryptographic tool and an important building block in many cryptoschemes. From hash functions, we move on to more specific cryptographic constructions, discussed in the following sections.
This topic covers all encryption schemes which are IND‐CPA secure. As the name already reveals, passively secure schemes involve a passive eavesdropper, who learns no information about the messages beyond seeing the public key and the ciphertext. The first such scheme was introduced by Regev [41]. Gentry, Peikert and Vaikuntanathan [18] defined the dual version of Regev's encryption. The main feature of their construction is that public keys are uniformly random, with many possible secret keys. Since Regev's encryption scheme [41] served as a basic building block for many successful schemes, we believe it is useful to recall it here. The scheme is based on the LWE problem; the ciphertext is an encryption of a single bit, and n denotes the dimension of the underlying LWE problem. Let m be the number of samples and χ denote an error distribution over Z_q. The three algorithms of Regev's LWE cryptoscheme are given as follows:
It holds that b = A·s + e mod q, with e ← χ^m. The secret key s and the public key (A, b) thus satisfy the relation b − A·s = e mod q, where e is short.
As mentioned before, the security of this scheme relies on the hardness of the LWE problem with parameters n, m, q and χ.
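A toy instantiation of the three algorithms above may look as follows. The parameters n, m and q are far too small for any real security and serve only to make the correctness of the scheme visible; the error distribution is a simple bounded uniform stand‐in for χ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 10, 40, 4093      # toy parameters, insecure by design

def keygen():
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3, size=m)      # small error terms standing in for chi
    b = (A @ s + e) % q
    return (A, b), s                      # public key, secret key

def encrypt(pk, bit):
    A, b = pk
    x = rng.integers(0, 2, size=m)        # random subset of the m samples
    return (x @ A) % q, (x @ b + bit * (q // 2)) % q

def decrypt(sk, ct):
    c1, c2 = ct
    v = (c2 - c1 @ sk) % q
    return int(min(v, q - v) > q // 4)    # round: near 0 -> 0, near q/2 -> 1

pk, sk = keygen()
```

Decryption works because the accumulated error x·e stays far below q/4 with these parameters, so the rounding step always recovers the encrypted bit.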
Later on, Gentry et al. [18] constructed an LWE‐based encryption scheme which is dual to Regev's encryption [41]. The duality is as follows: in [41] the public keys are generated according to a non‐uniform LWE distribution with a unique secret key, while several choices of encryption randomness yield the same ciphertext. In the dual scheme [18], the public keys are uniformly random, with many possible secret keys, while the encryption randomness is unique and yields a particular ciphertext.
An enhancement of the LWE cryptoscheme with smaller public and secret keys was provided by Lindner and Peikert [30], who adapted the code‐based cryptosystem of Alekhnovich [5] to the LWE setting.
This is represented by encryption schemes which are indistinguishable against chosen‐ciphertext attacks, in other words, IND‐CCA secure. Fujisaki and Okamoto showed a technique to convert any IND‐CPA cryptoscheme into an IND‐CCA secure public‐key encryption scheme. A lattice‐based instantiation of the Fujisaki‐Okamoto technique was introduced by Peikert [36]. The pioneering work on actively secure encryption over lattices (under the LWE assumption) in the standard model is due to Peikert and Waters [38]. Their construction is based on a concept called a lossy trapdoor function family, where the public key of a function can either be generated together with a trapdoor, in which case the function can be inverted using that trapdoor, or it can be generated without any trapdoor, in which case the function loses information about its input. Finally, public keys generated in these two different ways are indistinguishable.
These are functions which are easy to evaluate but hard to invert, and which can be generated together with some trapdoor information. Gentry et al. [18] were the first to show that certain types of trapdoor functions can be constructed from lattice problems. These trapdoor functions can be used in many applications such as digital signature schemes and identity‐based and attribute‐based encryption. The basic idea of the construction in [18] is to generate a collection of preimage sampleable functions which are collision‐resistant one‐way trapdoor functions. To evaluate a function from this collection, we take a random point v of a lattice with public basis B and disturb it by a short error vector e, obtaining y = v + e. Inverting y means decoding it to some lattice point v′, which is not necessarily equal to v, because due to the disturbance there may be more than one preimage of y.
The core idea of gadget trapdoors is that they are represented by special matrices, called “gadget matrices”, whose structure makes solving the associated LWE and SIS instances easy. This primitive, which is a flavour of cryptographic trapdoors, is significant for many constructions that have used gadget matrices as a building block. The easiest gadget is the vector of powers of two, g = (1, 2, 4, ..., 2^{k−1}), where k = ⌈log₂ q⌉. For a given u ∈ Z_q, it is easy to find a short solution x to the equation ⟨g, x⟩ = u mod q
by taking x to be the binary representation of u, i.e. x ∈ {0, 1}^k. This SIS problem can be adapted to an LWE problem as follows: given the vector s·g + e mod q with error entries in a suitably small interval, one can recover the bits of s one at a time, starting with its most‐significant bit.
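For q a power of two, the gadget inversion just described is literally binary decomposition, as the following sketch with illustrative parameters shows; `g_inverse` is a hypothetical helper name:

```python
k = 10
q = 1 << k                      # q = 2^k = 1024 (toy modulus)
g = [1 << i for i in range(k)]  # gadget vector (1, 2, 4, ..., 2^(k-1))

def g_inverse(u):
    """Short solution x in {0,1}^k to <g, x> = u (mod q): the bits of u."""
    return [(u >> i) & 1 for i in range(k)]

u = 693
x = g_inverse(u)   # recombining <g, x> mod q gives back u
```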
The one‐dimensional SIS and LWE problems described above can be extended to n‐dimensional SIS and LWE using the block‐diagonal gadget matrix defined as G = I_n ⊗ g^T ∈ Z_q^{n×nk}.
The LWE problem is then: given an approximation of s^T·G, recover the vector s. The corresponding SIS problem is to find a short solution z to the equation G·z = u mod q for a given vector u ∈ Z_q^n, which amounts to finding a short solution z_i of ⟨g, z_i⟩ = u_i mod q for each i. This procedure can be expressed through a decomposition function G^{−1}: Z_q^n → Z^{nk} that maps u to a short z with G·z = u mod q.
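The n‐dimensional case can be sketched the same way. Here G is built as the Kronecker product of the identity with the gadget vector, and the hypothetical helper `G_inverse` solves G·z = u mod q coordinate‐wise by binary decomposition (toy parameters, q a power of two):

```python
import numpy as np

k, n = 8, 2
q = 1 << k                                   # q = 256
g = np.array([1 << i for i in range(k)])     # gadget vector of length k
G = np.kron(np.eye(n, dtype=int), g)         # n x nk block-diagonal gadget matrix

def G_inverse(u):
    """Short solution z in {0,1}^{nk} with G z = u (mod q), bit by bit."""
    bits = [(int(ui) >> i) & 1 for ui in u for i in range(k)]
    return np.array(bits)

u = np.array([173, 45])
z = G_inverse(u)    # binary vector recombining to u under G
```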
This gadget is also useful for generating trapdoors for a parity‐check matrix A; such a matrix is essential for several lattice‐based cryptoschemes, and its trapdoor is a sufficiently short integer matrix R. Given such a trapdoor R of A, the following equation holds: A·[R ; I] = H·G mod q, where [R ; I] denotes the vertical concatenation of R with the identity matrix and H is an invertible matrix, usually called the tag of the trapdoor R. The higher the quality of a trapdoor, the smaller its maximum norm.
The first paper on this topic was published by Lyubashevsky and Micciancio [33]. The authors developed a one‐time signature which is convertible to a many‐time signature scheme using tree‐hashing techniques. The security of their scheme is based on the hardness of the Ring‐SIS assumption as defined in Def. 8. The key size, signature size, and the running times of key generation, signing and verification are all quasi‐linear in the security parameter. Due to the polylogarithmic factors in the [33] scheme, that construction appears not to be practical. Therefore, the search for more efficient signatures led to further contributions. Lyubashevsky [32] presented the first three‐move identification scheme which can be converted into a non‐interactive signature scheme by applying the Fiat‐Shamir heuristic, where the resulting scheme is proved secure in the Random Oracle Model (ROM). In the following paragraph, we sketch the main idea of the lattice‐based identification protocol from [32], as it represents a basic tool for several cryptographic constructions.
The prover's secret key is given by a short integer matrix S and the corresponding public key is T = A·S mod q for a given public matrix A. The interactive protocol consists of the following steps:
The goal of rejection sampling is to make the distribution of the response z independent of the secret S. To this end, the rejection sampling algorithm shifts the distribution of z to a discrete Gaussian distribution centered at zero instead of at the secret‐dependent point S·c. To avoid z being rejected too often, the masking term is drawn from a discrete Gaussian whose parameter is proportional to the norm of S·c, so that the two distributions, one centered at zero and the other at S·c, overlap sufficiently. In a later work, Ducas et al. [15] provided further refinements of this idea.
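A one‐dimensional toy version of this rejection‐sampling step may look as follows. The values of sigma and M, and the secret‐dependent center, are illustrative, and continuous Gaussians stand in for the discrete ones used in the actual schemes; accepted outputs are distributed around zero, independent of the center:

```python
import math
import random

sigma, M = 100.0, 3.0           # illustrative Gaussian width and rejection constant

def rho(x, center=0.0):
    """Unnormalized Gaussian weight with parameter sigma."""
    return math.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def sign_once(center):
    """Return z ~ zero-centered Gaussian via rejection, or None on reject."""
    y = random.gauss(0.0, sigma)
    z = y + center                               # shifted by the secret-dependent term
    accept_prob = min(1.0, rho(z) / (M * rho(z, center)))
    return z if random.random() < accept_prob else None

zs = [z for z in (sign_once(center=30.0) for _ in range(5000)) if z is not None]
mean = sum(zs) / len(zs)        # should be near 0, not near the center 30
```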
Applying the Fiat‐Shamir heuristic to the interactive protocol yields a signature scheme with a signature length of 60 kilobits, making it practical for implementation.
This primitive was first introduced by Goldreich, Goldwasser and Micali [21]. A pseudorandom function (PRF) with a secret seed is a map from a certain domain to a certain range with the essential property that, given oracle access to a randomly chosen function from the PRF family, it is infeasible to distinguish it from a uniformly random function. The first PRF from lattice problems was constructed by Banerjee, Peikert and Rosen [8]. The authors first introduced a derandomized version of LWE, called learning with rounding (LWR). The main difference from LWE is that the error is deterministic. We briefly recall the LWR problem: let s ∈ Z_q^n be a secret; the problem is to distinguish rounded inner products with s from random values. For moduli p < q and the rounding function ⌊·⌉_p: Z_q → Z_p defined as ⌊x⌉_p = ⌊(p/q)·x⌉, the samples are given as (a, ⌊⟨a, s⟩⌉_p) for uniformly random a ∈ Z_q^n.
We recall the very first lattice‐based PRF from [8], based on the LWE problem. This construction is particularly randomness‐efficient and practical. The function is given by a rounded subset product: let S_1, ..., S_k be secret keys given as short Gaussian‐distributed matrices and a a uniformly random vector. Then the lattice‐based PRF on input x = (x_1, ..., x_k) ∈ {0, 1}^k is given by F(x) = ⌊a^T·∏_{i=1}^{k} S_i^{x_i}⌉_p.
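A toy version of this rounded subset‐product PRF can be sketched as follows. All parameters are illustrative and far below any real security level, and small uniform matrices stand in for the Gaussian‐distributed secrets:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q, p = 8, 16, 1 << 16, 1 << 8     # toy dimensions and moduli

a = rng.integers(0, q, size=n)                            # public random vector
S = [rng.integers(0, 3, size=(n, n)) for _ in range(k)]   # short secret matrices

def round_p(v):
    """Rounding from Z_q down to Z_p: scale by p/q and truncate."""
    return (v * p) // q

def prf(x_bits):
    """Rounded subset product: round_p(a^T * prod_i S_i^{x_i} mod q)."""
    acc = a.copy()
    for xi, Si in zip(x_bits, S):
        if xi:
            acc = (acc @ Si) % q
    return round_p(acc)

y = prf([1, 0] * (k // 2))   # deterministic output vector over Z_p
```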
An important feature for lattice‐based PRFs, key homomorphism, was introduced by Boneh et al. [10]. The authors provided the first standard‐model construction of a key‐homomorphic PRF secure under the LWE assumption. This additional property plays a significant role in distributing the functionality of a key generation center, as the PRF satisfies F_{k_1 + k_2}(x) ≈ F_{k_1}(x) + F_{k_2}(x) for different secret keys k_1, k_2.
Furthermore, some other more advanced constructions have been developed in the last decade. These are:
The concept of fully homomorphic encryption (FHE) was first proposed by Rivest, Adleman and Dertouzos in 1978 [42]. The main idea of homomorphic encryption is to allow computation on encrypted data. In other words, given a ciphertext encrypting some data x, one can compute a ciphertext which encrypts f(x) for any desired function f. The first FHE scheme from lattices was introduced by Gentry [17] and is based on an average‐case assumption about ideal lattices. Brakerski and Vaikuntanathan [12] introduced a new version of FHE from lattices, based on the standard LWE assumption. We briefly recall their scheme, as it has been used in many applications and has served as a building block for several constructions.
The scheme in [12] supports addition and multiplication of ciphertexts, where each ciphertext encrypts a single bit. Exploiting the homomorphic properties of the scheme, we can evaluate any Boolean circuit. The secret key in this scheme is an LWE secret s. The encryption of a bit m is given by an LWE sample (a, b) for an odd modulus q, where the error term encodes the message in its least‐significant bit. For the ciphertext (a, b) the following relation holds: b − ⟨a, s⟩ = 2e + m mod q, with e being a small error. To decrypt (a, b), we simply compute b − ⟨a, s⟩ mod q, lift the result to its unique representative in (−q/2, q/2], and output this value modulo 2.
As mentioned earlier, the scheme in [12] supports addition and multiplication, where the former operation is a straightforward computation, while the latter represents a bigger challenge. For simplicity, consider two ciphertexts (a_1, b_1) and (a_2, b_2) which we want to evaluate homomorphically. When we add them component‐wise, we get an encryption of the sum of the corresponding plaintexts, as we can see: (b_1 + b_2) − ⟨a_1 + a_2, s⟩ = 2(e_1 + e_2) + m_1 + m_2 mod q.
As shown above, the error term of the sum is e_1 + e_2. The problem here is that the number of homomorphic operations cannot be unbounded, as the accumulated error magnitude would eventually exceed the allowed decryption bound.
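The additive homomorphism just described can be checked directly in code. The sketch below uses a symmetric‐key toy variant with illustrative parameters, encoding the bit in the least‐significant bit of the error term as in [12]:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 16, 1 << 15               # toy parameters; q odd-vs-even subtleties ignored
s = rng.integers(0, q, size=n)   # LWE secret key

def enc(m):
    """Encrypt bit m as (a, <a,s> + 2e + m mod q) with small error e."""
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-4, 5))
    return a, int((a @ s + 2 * e + m) % q)

def dec(ct):
    """Recover 2e + m in (-q/2, q/2], then take the least-significant bit."""
    a, b = ct
    v = (b - int(a @ s)) % q
    if v > q // 2:
        v -= q
    return v % 2

def add(ct1, ct2):
    """Component-wise ciphertext addition: errors add, plaintexts add mod 2."""
    (a1, b1), (a2, b2) = ct1, ct2
    return (a1 + a2) % q, (b1 + b2) % q
```

Decrypting the sum of two ciphertexts yields the XOR of the two plaintext bits, since the doubled error terms are even and drop out modulo 2.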
To multiply two ciphertexts, we use the tensor product c_1 ⊗ c_2, which is an encryption of the product of the plaintexts under the tensored secret key s ⊗ s. Here too, the number of multiplications is bounded in advance. One of the main drawbacks of homomorphic operations is that they always increase the error rate of a ciphertext. In the same paper [12], the authors introduced the ‘key switching’ technique to mitigate this issue. It allows the conversion of a ciphertext that encrypts some message into another ciphertext that still encrypts the same message, but under a different secret key, and it uses the previously introduced gadget decomposition. For further details we refer to the original paper.
Another solution, called bootstrapping, was introduced by Gentry [17]. It is a technique which reduces the error rate of a ciphertext and thus allows unbounded homomorphic computation. The idea is to homomorphically evaluate the decryption function on a low‐error encryption of the secret key, which is published as part of the public key.
An alternative homomorphic encryption scheme with some attractive properties was introduced by Gentry et al. [19] in 2013. In order to perform homomorphic evaluations of ciphertexts, no key‐switching technique is required. The scheme [19] can also be adapted to identity‐based or attribute‐based encryption. An extension of these schemes was proposed in [27], defining identity‐based and attribute‐based encryption schemes in the multi‐identity and multi‐authority settings, respectively.
Identity‐based encryption (IBE) was first introduced by Shamir [44]; its useful feature is that any string can serve as a public key. The secret key is generated by an authority called a key generation center (KGC), which holds a master secret key. On input the master secret key and some public string, the KGC derives a secret key for the user identified by that string. Any user can encrypt a message for a receiver using the public string indicating the receiver's identity. The first lattice‐based IBE scheme was proposed by Gentry et al. [18] and is defined in the random oracle model. Since truly random functions are not practical, one assumes instead that a hash function behaves like a truly random function and proves security under this assumption. Even though such a proof does not formally carry over when the idealized random function is replaced by a real hash function, a construction in the random oracle model is still valuable, as it rules out a large class of obvious attacks. The construction in [18] is, so far, the most efficient IBE scheme which is secure against quantum attacks. Its main idea is that the secret key of each user is a signature on the user's identity, generated by the KGC, while the corresponding public key is simply the hash value of the user's identity string. To gain a better understanding of the basic IBE scheme, we provide a short overview. An IBE scheme encompasses the following steps:
An IBE scheme defined in the standard model was later presented by Cash et al. [14]. In the same work, the authors provided a construction of a hierarchical identity‐based encryption (HIBE) scheme, which allows any user to securely delegate their secret key to any subordinate user in the hierarchy.
The first construction of attribute‐based encryption (ABE) was introduced by Sahai and Waters [43]. It represents a generalization of identity‐based encryption. There are two flavours of ABE: ciphertext‐policy ABE (CP‐ABE) and key‐policy ABE (KP‐ABE). In the first case (CP‐ABE), the user's secret key is generated based on a certain set of attributes, while the ciphertext embeds a predicate formula, often called the attribute policy. The holder of the attribute‐based secret key is able to decrypt the received ciphertext if and only if the key's attribute set satisfies the predicate formula. In the second case (KP‐ABE), the secret key is generated using an attribute predicate, while the attribute set is embedded into the ciphertext. The first lattice‐based ABE construction was provided by Agrawal et al. [2] and represents an adaptation of the lattice‐based HIBE scheme of [1]. An ABE for arbitrary predicates, expressed as circuits, was introduced by Gorbunov et al. [22]. Here we recall the scheme of [2].
First of all, we state that a finite field is isomorphic to a matrix sub‐ring for a prime modulus. Then, for an attribute vector of a given length, the following holds:
Using this matrix, we sample a Gaussian‐distributed solution by means of the sampling algorithm described in Section 3.9.
An ABE scheme for arbitrary predicates, represented as circuits of a priori bounded depth and based on a lattice assumption, was proposed by Gorbunov et al. [22]. The secret key in this scheme grows proportionally to the size of the circuit. An improvement of this scheme was later proposed by Boneh et al. [9].
Nowadays, we live in a world where more and more smart devices such as mobile phones, TVs, smart household appliances and cars are connected to the Internet. Most such devices are equipped with wireless sensors which establish an information flow between the device and the Internet. For instance, many smart medical devices collect health information from customers and forward it to an Internet‐based database. It is therefore obvious that the security and privacy of users' sensitive data are among the most significant challenges in such IoT systems. Next to symmetric cryptography, public‐key cryptography is one of the fundamental tools in IoT security. While the former is believed to remain largely secure against quantum attacks, the latter, in its current number‐theoretic form, is assumed to be vulnerable to them. Lattice‐based cryptography, as one promising branch of post‐quantum cryptography, offers solutions for IoT systems which may one day be exposed to quantum attacks.
IoT applications envision lattice‐based cryptography as a promising candidate for strong security primitives. Future smart IoT devices will be equally vulnerable to quantum threats, and it would be extremely difficult to update their security settings from classical to quantum‐secure cryptography after deployment. Moreover, the computational and storage efficiency of lattice‐based primitives also motivates their adoption for lightweight devices. Existing implementations of R‐LWE on an 8‐bit microcontroller can run faster than RSA‐1024 [31].
In [28], the authors reviewed the important role of attribute‐based and identity‐based authentication systems in IoT. They discussed the main opportunities and challenges faced in IoT when using attribute‐based authentication schemes. The authors also mention the significance of attribute‐based signature (ABS) schemes in IoT, which represent the signature counterpart of ABE schemes: the signer generates a signature on a message using an attribute‐based secret key, and verification succeeds if and only if the signer can prove to the verifier that they possess a key based on a set of attributes satisfying a certain predicate. Lattice‐based research in this area is still limited; only a few papers have appeared in recent years [16,26,45]. Therefore, given the significant role of ABE and ABS schemes in IoT and the future prospect of quantum computers, the research topic of lattice‐based ABE and ABS is particularly interesting, and a direct application of these schemes to IoT would be useful.
In [39], the R‐LWE‐based signature scheme BLISS is implemented to ascertain its feasibility for lightweight devices. The authors provided a scalable implementation of BLISS on a Xilinx Spartan‐6 FPGA with optional 128‐bit, 160‐bit or 192‐bit security. Other variants of BLISS which speed up the key generation process are also available in the literature.
Homomorphic encryption is another cryptographic primitive which attracts more and more IoT developers because it allows computation on encrypted data. Sensitive IoT data often has to be processed in the cloud. In [29], the authors showed how confidentiality can be guaranteed in the cloud by using homomorphic encryption, and they proposed an acceleration mechanism to speed up homomorphic evaluation.
Pseudorandom functions play a significant role in IoT too, as shown recently in [24]. As the authors point out, remote software update procedures are becoming increasingly important in the automotive industry, so the security of these procedures plays a core role in that sector. Message authentication codes and pseudorandom functions (PRFs) are particularly helpful in securing remote software updates. Thus, with the anticipated arrival of quantum computers, quantum‐resistant constructions will be required, and lattice‐based schemes, in this particular case lattice‐based PRFs, could offer an efficient and practical solution.
Post‐quantum key exchange is the most challenging problem to address. The initial effort was made by Bos et al. [11], who transformed classical TLS into an R‐LWE version of TLS in view of post‐quantum requirements. With a small performance penalty, their R‐LWE key exchange was successfully embedded in TLS and implemented on web servers. Alkim et al. [7] presented an efficient implementation of an R‐LWE‐based key exchange protocol, namely NewHope [6]. Initially, their proposal targeted large Intel processors; however, their implementation was eventually optimized for the ARM Cortex‐M family of 32‐bit microcontrollers. Within the Cortex‐M family, they chose the Cortex‐M0 as the low‐end and the Cortex‐M4 as the high‐end target. The authors achieved 128‐bit post‐quantum security on these embedded microcontrollers. Following the lattice‐based construction of NTRUEncrypt, which is accepted as IEEE standard P1363.1, Guillen et al. [23] implemented NTRUEncrypt on Cortex‐M0‐based microcontrollers, confirming once more the feasibility of lattice‐based encryption on IoT devices. Another implementation of an R‐LWE‐based encryption scheme, which is computationally equivalent to NTRUEncrypt, is presented by Liu et al. [31]. The authors achieved 46‐bit security with encryption and decryption timings of 24.9 ms and 6.7 ms, respectively, on an 8‐bit ATxmega128 microcontroller.
Our main goal in this contribution was to provide a general overview of lattice‐based primitives and to summarize how these constructions can be applied to IoT. We first introduced the reader to the main notions of lattice‐based cryptography, while trying to avoid details that are too specific for the scope of this chapter. Then, using these notions, we provided a summary of lattice‐based constructions. Finally, in the last section, we reviewed state‐of‐the‐art applications of cryptographic primitives to IoT systems. Since most existing applications are based on classical number‐theoretic assumptions, we want to draw the reader's attention to the importance of switching to quantum‐resistant constructions, for which lattice‐based schemes provide a powerful solution.