This talk focuses on the shortest vector problem over so-called ideal lattices. While the problem is believed to be hard in the worst case, prior works have shown that there are several specific cases where it becomes easy to solve. We continue this line of work and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois number fields, and we extend it to any ideal (whose prime factors are not ramified) of any number field, generalizing the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21). We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as the partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability. We conclude the presentation by clarifying the impact of our results on lattice-based cryptography. This is a joint work with Erell Gachon and Alice Pellet-Mary.
This will be a hybrid seminar. The speaker will be giving the talk from ENS de Lyon.
Katharina Boudgoust is a postdoctoral researcher in the Cryptography and Security team at Aarhus University, Denmark, under the supervision of Peter Scholl. She is interested in lattice-based cryptography, and more specifically in the computational hardness assumptions underlying lattice-based cryptosystems and threshold cryptography. From September 2018 to November 2021, Katharina was a PhD student with the EMSEC team at IRISA Laboratory in Rennes, France, under the supervision of Adeline Roux-Langlois and Pierre-Alain Fouque. Her PhD thesis focused on the theoretical hardness of algebraically structured Learning With Errors.
We show that for many fundamental cryptographic primitives, proving classical security under the learning-with-errors (LWE) assumption does \emph{not} imply post-quantum security. This is despite the fact that LWE is widely believed to be post-quantum secure, and our work does not give any evidence otherwise. Instead, it shows that post-quantum insecurity can arise inside cryptographic constructions, even if the assumptions are post-quantum secure.
Concretely, our work provides (contrived) constructions of pseudorandom functions, CPA-secure symmetric-key encryption, message-authentication codes, signatures, and CCA-secure public-key encryption schemes, all of which are proven to be classically secure under LWE via black-box reductions, but demonstrably fail to be post-quantum secure. All of these cryptosystems are stateless and non-interactive, but their security is defined via an interactive game that allows the attacker to make oracle queries to the cryptosystem. The polynomial-time quantum attacker can break these schemes by making only a few \emph{classical} queries to the cryptosystem, and in some cases, a single query suffices.
Previously, we only had examples of post-quantum insecurity under post-quantum assumptions for stateful/interactive protocols. Moreover, there appears to be a folklore belief that for stateless/non-interactive cryptosystems with black-box proofs of security, a quantum attack against the scheme should translate into a quantum attack on the assumption. This work shows otherwise. Our main technique is to carefully embed interactive protocols inside the interactive security games of the above primitives.
As a result of independent interest, we also show a 3-round \emph{quantum disclosure of secrets (QDS)} protocol between a classical sender and a receiver, where a quantum receiver learns a secret message in the third round but, assuming LWE, a classical receiver does not.
Joint work with Alex Lombardi, Ethan Mook, and Daniel Wichs. Eprint: https://eprint.iacr.org/2022/869
Note the changed time.
Willy Quach is a 6th year PhD student at Northeastern University advised by Daniel Wichs.
The random oracle methodology is central to the design of many practical cryptosystems. A common challenge faced in several systems is the need to have a random oracle that outputs samples from a structured distribution $\mathcal{D}$, even though most heuristic implementations such as SHA-3 are best suited for outputting bitstrings.
Our work explores the problem of sampling from discrete Gaussian (and related) distributions in a manner that they can be programmed into random oracles. We make the following contributions:
-We provide a definitional framework for our results. We say that a sampling algorithm $\mathsf{Sample}$ for a distribution is explainable if there exists an algorithm $\mathsf{Explain}$ where, for any $x$ in the domain, we have that $\mathsf{Explain}(x) \rightarrow r \in \{0,1\}^n$ such that $\mathsf{Sample}(r)=x$. Moreover, if $x$ is sampled from $\mathcal{D}$, the explained distribution is statistically close to choosing $r$ uniformly at random. We consider a variant of this definition that allows the statistical closeness to be a ``precision parameter'' given to the $\mathsf{Explain}$ algorithm. We show that sampling algorithms which satisfy our `explainability' property can be programmed as a random oracle.
-We provide a simple algorithm for explaining \emph{any} sampling algorithm that works over distributions with polynomial sized ranges. This includes discrete Gaussians with small standard deviations.
-We show how to transform a (not necessarily explainable) sampling algorithm $\mathsf{Sample}$ for a distribution into a new algorithm $\mathsf{Sample}'$ that is explainable. The requirements for doing this are that (1) the probability density function is efficiently computable and (2) it is possible to efficiently and uniformly sample from the set of all elements whose probability density lies above a given threshold $p$. This shows the equivalence of random oracles to these distributions and random oracles to uniform bitstrings, and it covers a large class of distributions, including all discrete Gaussians.
-A potential drawback of the previous approach is that the transformation requires an additional computation of the density function. We provide a more customized approach that shows the Micciancio-Walter discrete Gaussian sampler is explainable as is. This suggests that other discrete Gaussian samplers in a similar vein might also be explainable as is.
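For distributions with polynomial-sized ranges, the simple explaining algorithm can be sketched as rejection sampling over the randomness: given $x$, draw uniform randomness until the sampler reproduces $x$. The toy sampler and names below are illustrative assumptions, not the paper's actual construction.

```python
import secrets

N_BITS = 8  # length of the sampler's randomness

def sample(r: int) -> int:
    # Toy sampler over the small range {0, 1, 2, 3}: the n-bit randomness r
    # indexes a table encoding the distribution (weights 1/8, 3/8, 3/8, 1/8).
    # It stands in for, e.g., a small-deviation discrete Gaussian sampler.
    table = [0] * 1 + [1] * 3 + [2] * 3 + [3] * 1
    return table[r % len(table)]

def explain(x: int) -> int:
    # Rejection-based Explain: draw uniform randomness until the sampler
    # outputs x. Conditioned on success, r is uniform on sample^{-1}(x),
    # so if x ~ D, the returned r is distributed uniformly overall.
    while True:
        r = secrets.randbelow(2 ** N_BITS)
        if sample(r) == x:
            return r

# Round-trip check: Explain produces randomness consistent with the sample.
x = sample(secrets.randbelow(2 ** N_BITS))
assert sample(explain(x)) == x
```

The expected number of rejection rounds is the inverse of the probability of $x$, which is polynomial precisely because the range (and hence each probability's denominator) is polynomial-sized.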
Note the changed time.
George is a fourth year PhD student at UT Austin advised by Dr. Brent Waters.
In this talk, I will present a new zero-knowledge proof of knowledge for the syndrome decoding (SD) problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves the knowledge of a vector x such that $y=Hx$ and $wt(x)\leq w$ and which achieves a soundness error close to $1/N$ for an arbitrary $N$.
While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{256}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and be competitive with $\text{SPHINCS}^{+}$. Since the security relies on the hardness of the syndrome decoding problem for random linear codes which is known to be NP-hard and for which the cryptanalysis state of the art has been stable for many years, it results in a conservative signature scheme. Moreover, our scheme outperforms all the former code-based signature schemes for the common “signature size + public key size” metric.
Joint work with Antoine Joux and Matthieu Rivain.
Thibauld Feneuil is a PhD student at CryptoExperts & Sorbonne University (France). He is working on zero-knowledge proofs in post-quantum cryptography.
We investigate the security of succinct arguments against quantum adversaries. Our main result is a proof of knowledge-soundness in the post-quantum setting for a class of multi-round interactive protocols, including those based on the recursive folding technique of Bulletproofs.
To prove this result, we devise a new quantum rewinding strategy, the first that allows for rewinding across many rounds. This technique applies to any protocol satisfying natural multi-round generalizations of special soundness and collapsing. For our main result, we show that recent Bulletproofs-like protocols based on lattices satisfy these properties, and are hence sound against quantum adversaries.
Nick Spooner is an Assistant Professor of Computer Science at the University of Warwick. His work focuses on post-quantum proof systems. His interests include interactive proof systems and zero knowledge in general, post-quantum cryptography, quantum information, coding theory and computational complexity. Previously he was a postdoc at Boston University, and he received his PhD from UC Berkeley in 2020.
It is a long-standing open problem in code-based cryptography to find search-to-decision reductions for structured versions of the decoding problem (e.g., for quasi-cyclic codes). Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE, etc.
In this talk, I will present a function field analogue of the LWE problem. This new framework leads to another point of view on structured codes, strengthening the connections between lattice-based and code-based cryptography.
This framework can be instantiated with function field analogues of the cyclotomic number fields, namely Carlitz extensions. This leads to the first search-to-decision reductions for various structured versions of the decoding problem, and for Ring-LPN, which have applications to secure multi-party computation and to an authentication protocol.
This is a joint work with Alain Couvreur and Thomas Debris-Alazard.
Maxime is a PhD student at LIX (Laboratoire d'Informatique de l'École Polytechnique) & Inria, France. He is interested in the hardness of various problems related to algebraically structured codes (either in the Hamming or the rank metric).
We present a much-improved practical protocol, based on the hardness of the Module-SIS and Module-LWE problems, for proving knowledge of a short vector s satisfying As = t mod q. The currently most efficient technique for constructing such a proof works by showing that the L_\infty norm of s is small using the CRT slots technique (Esgin et al., Asiacrypt 2020). In this work, we show that there is a more direct and more efficient way to prove that s has a small L_2 norm, which requires neither a detour through the L_\infty norm nor any conversion to the CRT representation. We observe that the inner product between two vectors r and s can be made to appear as a coefficient of a product (or sum of products) between polynomials with coefficient vectors r and s. Thus, by using a polynomial product proof system and hiding all but one coefficient, we can prove knowledge of the inner product of two vectors (or of a vector with itself) modulo q. Using a cheap "approximate range proof", one can then lift the proof to be over Z instead of Z_q. Our protocols for proving short norms work over all (interesting) polynomial rings but are particularly efficient for rings like Z[X]/(X^n+1) in which the function relating the inner product of vectors and polynomial products happens to be a "nice" automorphism.
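The coefficient observation can be checked in a few lines: in Z[X]/(X^n+1), applying the automorphism X -> X^{-1} = -X^{n-1} to r and multiplying by s places the inner product <r, s> in the constant coefficient. This is an illustrative sanity check of that algebraic fact, not the paper's proof system.

```python
def negacyclic_mul(a, b):
    # Multiply two polynomials (coefficient lists of length n) in Z[X]/(X^n + 1).
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] += a[i] * b[j]
            else:
                c[k - n] -= a[i] * b[j]  # reduce using X^n = -1
    return c

def sigma(a):
    # The automorphism X -> X^{-1} = -X^{n-1} on Z[X]/(X^n + 1):
    # coefficient a_i (i >= 1) moves to position n - i with a sign flip.
    n = len(a)
    out = [0] * n
    out[0] = a[0]
    for i in range(1, n):
        out[n - i] = -a[i]
    return out

r = [3, -1, 4, 2]
s = [1, 5, -2, 7]
# Constant coefficient of sigma(r) * s equals the inner product <r, s>.
assert negacyclic_mul(sigma(r), s)[0] == sum(ri * si for ri, si in zip(r, s))
```

In the protocol this means a polynomial product proof, with all but the constant coefficient hidden, directly proves knowledge of <r, s> (and of ||s||^2 when r = s) modulo q.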
The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.
Ngoc Khanh Nguyen is a fourth-year PhD student at IBM Research Europe - Zurich and ETH Zurich, Switzerland, supervised by Dr Vadim Lyubashevsky and Prof. Dennis Hofheinz. Previously, Khanh obtained his B.Sc. and M.Eng. degrees in Mathematics and Computer Science at the University of Bristol, UK.
This paper investigates the anonymity of all NIST PQC Round~3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE.
We show the following results:
We found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM.
These results answer the open problem of investigating the anonymity and robustness of NIST PQC Round~3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). We use strong disjoint-simulatability of the underlying PKE of the KEM and strong pseudorandomness and smoothness/sparseness of the KEM as the main tools, which will be of independent interest.
The full paper is available at https://eprint.iacr.org/2021/1323
Keita Xagawa received his B.S. degree from Kyoto University and M.S. and D.S. degrees from Tokyo Institute of Technology in 2005, 2007, and 2010, respectively. He joined NTT Corporation in 2010.
Computing the class group and the unit group of a number field is an important problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms solving the shortest vector problem in ideal lattices.
Subexponential algorithms (in the discriminant of the number field) are known to solve this problem in any number field, but they heavily rely on heuristics. The only non-heuristic known algorithm, due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, I will present a rigorous subexponential algorithm computing the unit group and class group (and, more generally, $T$-units) in any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.
Alice is a CNRS researcher (chargée de recherche) at the University of Bordeaux. She is part of the Institut de Mathématiques de Bordeaux (IMB) and of the Lfant Inria team. She is interested in lattice-based cryptography, and more specifically in the hardness of algorithmic problems related to algebraically structured lattices.
A core goal of the NIST PQC competition is to produce public-key encryption (PKE) schemes which, even if attacked with a large-scale quantum computer, maintain the security guarantees needed by applications. The main security focus in the NIST PQC context has been IND-CCA security, but other applications demand that PKE schemes provide anonymity (Bellare et al., ASIACRYPT 2001), and robustness (Abdalla et al., TCC 2010). Examples of such applications include anonymous communication systems, cryptocurrencies, anonymous credentials, searchable encryption, and auction protocols. Almost nothing is known about how to build post-quantum PKE schemes offering these security properties. In particular, the status of the NIST PQC candidates with respect to anonymity and robustness is unknown.
In this work, we initiate a systematic study of anonymity and robustness for post-quantum PKE schemes. Firstly, we identify implicit rejection as a crucial design choice shared by most post-quantum KEMs, show that implicit rejection renders prior results on anonymity and robustness for KEM-DEM PKEs inapplicable, and transfer prior results to the implicit-rejection setting where possible. Secondly, since they are widely used to build post-quantum PKEs, we examine how the Fujisaki-Okamoto (FO) transforms (Fujisaki and Okamoto, Journal of Cryptology 2013) confer robustness and enhance weak anonymity of a base PKE.
We then leverage our theoretical results to study the anonymity and robustness of three NIST KEM finalists---Saber, Kyber, and Classic McEliece---and one alternate, FrodoKEM. Overall, our findings for robustness are definitive: we provide positive robustness results for Saber, Kyber, and FrodoKEM, and a negative result for Classic McEliece. Our negative result stems from a striking property of KEM-DEM PKE schemes built with the Classic McEliece KEM: for any message $m$, we can construct a single hybrid ciphertext $c$ which decrypts to the chosen $m$ under any Classic McEliece private key.
Our findings for anonymity are more mixed: we identify barriers to proving anonymity for Saber, Kyber, and Classic McEliece. We also found that in the case of Saber and Kyber, these barriers lead to issues with their IND-CCA security claims. We have worked with the Saber and Kyber teams to fix these issues, but they remain unresolved. On the positive side, we were able to prove anonymity for FrodoKEM and a variant of Saber introduced by D'Anvers et al. (AFRICACRYPT 2018). Our analyses of these two schemes also identified technical gaps in their IND-CCA security claims, but we were able to fix them.
This is a joint work with Paul Grubbs (University of Michigan) and Kenneth G. Paterson (ETH Zurich).
Varun Maram is a third-year PhD student in the Applied Cryptography group at ETH Zurich, supervised by Prof. Kenny Paterson. His current research interests lie in the area of post-quantum cryptography, both in the asymmetric and symmetric settings. Varun is also one of the primary submitters of "Classic McEliece", a finalist algorithm for PKE and key-establishment in the NIST PQC standardization process.
We provide the first constructions of non-interactive zero-knowledge and Zap arguments for NP based on the sub-exponential hardness of Decisional Diffie-Hellman against polynomial time adversaries (without use of groups with pairings).
Central to our results, and of independent interest, is a new notion of interactive trapdoor hashing protocols.
I have a broad interest in Theoretical Computer Science. Currently, I am working on Cryptography and Coding Theory. I am a fifth year Ph.D. student at Johns Hopkins University, fortunately co-advised by Abhishek Jain and Xin Li.
The continuous learning with errors (CLWE) problem was recently introduced by Bruna et al. (STOC 2021). They showed that its hardness implies infeasibility of learning Gaussian mixture models, while its tractability implies efficient Discrete Gaussian Sampling and thus asymptotic improvements in worst-case lattice algorithms. No reduction between CLWE and LWE is currently known, in either direction.
We propose four public-key encryption schemes based on the hardness of CLWE, with varying tradeoffs between decryption and security errors, and different discretization techniques. Some of our schemes are based on hCLWE, a homogeneous variant, which is no easier than CLWE. Our schemes yield a polynomial-time algorithm for solving hCLWE, and hence also CLWE, using a Statistical Zero-Knowledge oracle.
This is joint work with Andrej Bogdanov, Miguel Cueto Noval and Alon Rosen. E-print: https://eprint.iacr.org/2022/093
Charlotte is a PhD student in the Cryptography group at IST Austria under the supervision of Krzysztof Pietrzak.
In this paper, we study the relationship between the security of cryptographic schemes in the random oracle model (ROM) and in the quantum random oracle model (QROM). First, we introduce the notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public-key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption.
On the other hand, we give lifting theorems from security in the ROM to security in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures, among others. We also discuss applications of our lifting theorems to quantum query complexity.
Takashi Yamakawa is a researcher at NTT in Japan. He earned his PhD in Science from The University of Tokyo in 2017. His research focuses on post-quantum and quantum cryptography.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide:
-a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
-a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
-a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our need as their duals have poor minimum distance.
Wessel is a PhD student in the Cryptology Group at CWI in Amsterdam under the supervision of Léo Ducas.
In 2020, Bernard and Roux-Langlois introduced the Twisted-PHS algorithm to solve Approx-SVP for ideal lattices in any number field, based on the PHS algorithm by Pellet-Mary, Hanrot and Stehlé in 2019. They performed experiments for cyclotomic fields of prime conductor and degree at most 70, reporting the approximation factors reached in practice. The main obstacle for these experiments is the computation of a log-$\mathcal{S}$-unit lattice, which requires classical subexponential time.
In this paper, our main contribution is to extend these experiments to 210 cyclotomic fields of any conductor $m$ and of degree up to 210. Building upon new results from Bernard and Kučera on the Stickelberger ideal, we construct a maximal set of independent $\mathcal{S}$-units lifted from the maximal real subfield using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-$\mathcal{S}$-unit sublattices that serve as approximations of the full Tw-PHS lattice. Notably, our obtained approximation factors match those from Bernard and Roux-Langlois using the original log-$\mathcal{S}$-unit lattice in small dimensions.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm, by Cramer, Ducas and Wesolowski in 2021, under the mild restriction that the plus part of the class number verifies $h^{+}_{m}\leq O(\sqrt{m})$.
The full paper is available on ePrint:2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
Olivier BERNARD is a research engineer at the cryptology laboratory of Thales, Gennevilliers, and is currently in the third year of a part-time PhD under the supervision of Pierre-Alain Fouque and Adeline Roux-Langlois. His research interests mainly focus on number theoretic cryptanalyses and more generally on algorithmic number theory.
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD thesis] and runs in (heuristic) time $d^{0.2653d+o(d)}$. We present an improvement over Laarhoven's result: an algorithm that has a (heuristic) running time of $d^{0.2570d+o(d)}$, where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory of our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD thesis] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
2nd year PhD student at Inria Paris advised by André Chailloux.
We present an algorithm solving the ROS (Random inhomogeneities in an Overdetermined Solvable system of linear equations) problem in polynomial time for $\ell > \log p$ dimensions. Our algorithm can be combined with Wagner's attack, and leads to a sub-exponential solution for any dimension $\ell$ with the best complexity known so far. When concurrent executions are allowed, our algorithm leads to practical attacks against unforgeability of blind signature schemes such as Schnorr and Okamoto-Schnorr blind signatures, threshold signatures such as GJKR and the original version of FROST, multisignatures such as CoSi and the two-round version of MuSig, partially blind signatures such as Abe-Okamoto, and conditional blind signatures such as ZGP17.
This is joint work with Fabrice Benhamouda, Tancrède Lepoint, Julian Loss, Mariana Raykova.
Michele is a postdoc at UC Berkeley under the supervision of Alessandro Chiesa.
Prior to that, he was a PhD student at École Normale Supérieure, under the supervision of Georg Fuchsbauer. He is interested in the intersection between authentication and anonymity.
In the past, he contributed to Python, Debian, and Tor.
The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is ``sufficiently unstructured'' (such as fixed-length SHA-2) should suffice. Alternatively, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors.
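The transform described above can be sketched on the classic example of Schnorr identification, where hashing the transcript (plus the message) turns the interactive protocol into a signature. This is a generic textbook illustration, not one of this work's constructions, and the group parameters below are illustrative toy values, far too small for real security.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup of squares mod p. Illustrative sizes only.
p, q, g = 2039, 1019, 4

def H(*parts) -> int:
    # Fiat-Shamir: the verifier's random challenge is replaced by a
    # deterministic hash of the protocol transcript and the message.
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)        # secret key
    return x, pow(g, x, p)          # public key y = g^x mod p

def sign(x, msg):
    k = secrets.randbelow(q)        # prover's first message (commitment)
    R = pow(g, k, p)
    c = H(R, msg)                   # hashed challenge replaces the verifier
    z = (k + c * x) % q             # prover's response
    return R, z

def verify(y, msg, sig):
    R, z = sig
    c = H(R, msg)
    # Accept iff g^z = R * y^c mod p, exactly the interactive check.
    return pow(g, z, p) == (R * pow(y, c, p)) % p

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```

The work asks precisely how much structure H must lack: here it is a cryptographic hash, but the results show that in some applications even functions as simple as sum-mod-$p$ suffice, while in others a cryptographic hash is provably necessary.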
In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-$p$ or bit decomposition. In some cases, we make idealized assumptions (i.e., we invoke the generic group model), while in others, we prove soundness in the plain model.
On the negative side, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
Joint work with Yilei Chen, Alex Lombardi and Fermi Ma. Eprint: https://eprint.iacr.org/2020/915
Willy Quach is a 5th year PhD student at Northeastern University advised by Daniel Wichs.
Σ-protocols provide a well-understood basis for secure algorithmics. Compressed Σ-protocol theory (CRYPTO 2020) was introduced as a strengthening yielding protocols with low communication complexity. It is built around basic Σ-protocols for proving that a compactly committed (long) vector satisfies a linear constraint. The communication complexity of these protocols is first compressed, from linear down to logarithmic, using a recursive ``folding technique'' adapted from Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018), at the expense of logarithmic rounds. Proving in ZK that the secret vector satisfies a given constraint -- captured by a (non-linear) circuit -- is then by (blackbox) reduction to the linear case, via arithmetic secret-sharing techniques adapted from MPC.
This abstract modular theory has been instantiated from a variety of cryptographic hardness assumptions, e.g., the discrete-logarithm, strong-RSA, and knowledge-of-exponent assumptions. In two separate works, it has also been generalized to a bilinear circuit model and instantiated from the ring-SIS assumption. Thus for all these platforms compressed Σ-protocol theory yields circuit zero-knowledge protocols with (poly)-logarithmic communication.
All in all, our theory should more generally be useful for modular (``plug-&-play'') design of practical cryptographic protocols; this is further evidenced by our separate work on proofs of partial knowledge.
Joint work with Ronald Cramer, Serge Fehr, Lisa Kohl and Matthieu Rambaud.
Note the changed time.
Thomas Attema is a researcher at the applied research institute TNO in The Netherlands, where he works on (applied) multi-party computation, zero-knowledge proof systems and post-quantum cryptography. Moreover, he is pursuing a part-time PhD in the Cryptology group of CWI under the supervision of Ronald Cramer.
In this talk, we study the passive security model of approximate homomorphic encryption schemes. We present new passive security definitions for homomorphic encryption in the approximate computation setting, by naturally extending the traditional notion of IND-CPA security. We propose both indistinguishability-based and simulation-based variants, as well as restricted versions of the definitions that limit the order and number of adversarial queries (as may be enforced by some applications). We prove implications and separations among different definitional variants, showing a hierarchy of approximate homomorphic encryption schemes. Our models provide a solid theoretical basis for the security evaluation of approximate homomorphic encryption schemes (against passive attacks).
As the main application of our passive security models, we present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attack is both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. We implemented and tested the attack against major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib, PALISADE and Lattigo, computing several functions that often arise in applications of the CKKS scheme to privacy-preserving machine learning. Our attack shows that the traditional formulation of IND-CPA security (or indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes.
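The linear-algebraic core of such an attack can be illustrated in a toy scalar model (ring dimension 1; real CKKS works in a polynomial ring R_q, and all names and parameters here are illustrative): decrypting a ciphertext (c0, c1) yields the noisy value d = c0 + c1·s = m + e, and because the noise e is *included* in the released result, the equation is exactly linear in the secret key s.

```python
import random

q = 2**31 - 1  # toy prime modulus; real CKKS uses a large modulus in a polynomial ring

def encrypt(m, s):
    """Toy symmetric CKKS-style encryption satisfying c0 + c1*s = m + e (mod q)."""
    e = random.randrange(-5, 6)              # small approximation noise
    c1 = random.randrange(1, q)
    c0 = (m + e - c1 * s) % q
    return c0, c1

def decrypt(c0, c1, s):
    return (c0 + c1 * s) % q                 # the approximate result m + e, noise and all

def recover_key(c0, c1, d):
    """Given one ciphertext and its released approximate decryption d = m + e,
    solve the linear equation d - c0 = c1 * s for the secret key s."""
    return ((d - c0) * pow(c1, -1, q)) % q
```

The recovery is exact: since d equals c0 + c1·s mod q by construction, a single shared decryption suffices in this toy model. The talk's definitions are designed to rule out precisely this kind of leakage.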
We further discuss possible modifications to CKKS that may serve as countermeasures to our attacks, and we also discuss open problems of efficiently achieving provable security under our new definitions.
This talk is based on joint work with Daniele Micciancio.
Baiyu Li graduated with a Ph.D. degree from UCSD in 2021, advised by Daniele Micciancio. His research interests include formal methods for secure computation protocol design and analysis, as well as lattice-based cryptography. Previously he received his Master's and Bachelor's degrees in CS and Pure Math from the University of Waterloo.
In this talk we will give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes Stehlé, Steinfeld, Tanaka and Xagawa's re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is much coarser than the Euclidean metric, and this adaptation required several new ingredients to make it work. For instance, for the reduction to be meaningful it is necessary in the Hamming metric to choose a very large decoding radius, which in many cases goes beyond the radius where decoding is unique. Another crucial step in the analysis of the reduction is the choice of the errors that are fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used for the reduction with codes. Instead, we choose the uniform distribution over errors of a fixed weight, bring in orthogonal-polynomial tools to perform the analysis, and add an amplitude-amplification step to obtain the aforementioned result.
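The error distribution at the heart of the analysis is simple to state: sample uniformly among the binary vectors of a fixed Hamming weight, rather than flipping each coordinate independently as the Bernoulli distribution would. A minimal sketch (binary case only, for illustration):

```python
import random

def fixed_weight_error(n, w):
    """Sample a length-n binary vector uniformly among those of Hamming weight
    exactly w. Unlike a Bernoulli(w/n) sample, the weight never deviates from w."""
    e = [0] * n
    for i in random.sample(range(n), w):  # w distinct positions, uniformly chosen
        e[i] = 1
    return e
```

The point of the distinction is concentration: a Bernoulli sample has weight w only on average, and that spread is what breaks the reduction for codes.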
The seminar will be at 13:00 UK time, 14:00 FR time
Thomas Debris-Alazard is a research scientist (chargé de recherche) at Inria in the Grace project-team. He was previously a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. He received his PhD from Inria under the supervision of Jean-Pierre Tillich.