In this work, we introduce the sparse LWE assumption, an assumption that draws inspiration from both Learning with Errors (Regev, JACM 2009) and Sparse Learning Parity with Noise (Alekhnovich, FOCS 2003). Exactly like LWE, this assumption posits the indistinguishability of (A, sA+e mod p) from (A, u) for a random u, where the secret s and the error vector e are generated exactly as in LWE. However, the coefficient matrix A in sparse LWE is chosen randomly from Z^{n \times m}_p so that each column has Hamming weight exactly k for some small k. We study the problem in the regime where k is a constant or polylogarithmic. The primary motivation for proposing this assumption is efficiency: compared to LWE, the samples can be computed and stored with roughly an O(n/k) factor improvement (a toy sampler sketch follows this abstract). Our results can be summarized as follows:
Foundations: We show several properties of sparse LWE samples, including: 1) The hardness of LWE/LPN with dimension k implies the hardness of sparse LWE/LPN with sparsity k and arbitrary dimension n >= k. 2) When the number of samples is m = Omega(n log p), the length of the shortest vector in the lattice spanned by the rows of a random sparse matrix is large, close to that of a random dense matrix of the same dimensions (up to a small constant factor). 3) Trapdoors with small polynomial norm exist for random sparse matrices with m = O(n log p) columns. 4) Efficient algorithms for sampling such matrices together with trapdoors exist when the number of columns is m = Otilde(n^2).
Cryptanalysis: We examine the suite of algorithms that have been used to attack LWE and sparse LPN. While many of the attacks that apply to LWE do not naively exploit sparsity, we consider natural extensions that do. We propose a model that captures all of these attacks, and we use this model to suggest heuristics for identifying concrete parameters. Our initial cryptanalysis suggests that sparse LWE with a modest k and a slightly larger dimension will provide a level of security similar to that of LWE with comparable parameters.
Applications: We show that the hardness of sparse LWE implies very efficient homomorphic encryption schemes for low-degree computations. We obtain the first secret-key linearly homomorphic encryption (LHE) schemes with slightly super-constant, or even constant, overhead, which further have applications to private information retrieval, private set intersection, and more. We also obtain secret-key homomorphic encryption for arbitrary constant-degree polynomials with slightly super-constant, or constant, overhead.
We stress that our results are preliminary. However, our results make a strong case for further investigation of sparse LWE.
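To make the efficiency claim concrete, here is a minimal Python sketch (our illustration with made-up toy parameters, not code from the paper) of sampling sparse LWE instances: because each column of A has only k nonzero entries, producing and storing a sample costs O(k) arithmetic operations instead of O(n).

```python
# A minimal sketch (not from the paper) of sampling sparse LWE instances:
# each column of A has exactly k nonzero entries, so a sample <a, s> + e
# costs O(k) arithmetic operations instead of O(n).  All parameter choices
# below (n, m, k, p, error width) are illustrative, not the paper's.
import random

n, m, k, p = 512, 1024, 8, 3329          # illustrative parameters
s = [random.randrange(p) for _ in range(n)]

def sparse_column(n, k, p):
    """A column with exactly k nonzero (position, value) entries."""
    support = random.sample(range(n), k)
    return [(i, random.randrange(1, p)) for i in support]

def sample(s):
    col = sparse_column(n, k, p)
    e = random.choice([-2, -1, 0, 1, 2])  # toy stand-in for the LWE error
    b = (sum(v * s[i] for i, v in col) + e) % p   # O(k) work, not O(n)
    return col, b

A_cols, b = zip(*[sample(s) for _ in range(m)])
# Storing a column takes O(k log(n*p)) bits versus O(n log p) for dense LWE.
```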
I'm a 2nd-year PhD student at Carnegie Mellon University, advised by Prof. Aayush Jain. Before starting my PhD, I worked as a software engineer at Google. I completed my Bachelor's and Master's degrees at the Massachusetts Institute of Technology in 2018 and 2019, respectively.
Threshold signatures have recently seen renewed interest due to applications in cryptocurrency, and NIST has released a call for multi-party threshold schemes, with submissions expected in the first half of 2025. So far, all lattice-based threshold signatures requiring two rounds or fewer are based on heavy tools such as (fully) homomorphic encryption (FHE) and homomorphic trapdoor commitments (HTDC). This is not unexpected, considering that the most efficient two-round signatures from classical assumptions rely either on idealized models such as the algebraic group model or on one-more-type assumptions, neither of which has a nice analogue in the lattice world.
In this work, we construct the first efficient two-round lattice-based threshold signature that does not rely on FHE or HTDC. It has an offline-online feature where the first round can be preprocessed without knowing the message or the signer set, effectively making the signing phase non-interactive. The signature size is small and scales well: even for a threshold as large as 1024 signers, we achieve a signature size of roughly 11 KB. At the heart of our construction is a new lattice-based assumption called the algebraic one-more module learning with errors (AOM-MLWE) assumption, which we believe to be a valuable addition to the lattice toolkit and of independent interest. We establish the selective security of AOM-MLWE based on the standard MLWE and MSIS assumptions, and provide an in-depth analysis of its adaptive security, on which our threshold signature is based.
I'm a postdoctoral researcher at PQShield under the supervision of Shuichi Katsumata, and also a collaborative researcher at AIST. I hold a PhD from the University of Electro-Communications, Japan. I have mainly been working on classical and lattice-based post-quantum aggregatable signatures, e.g., aggregate signatures, multi-signatures, and threshold signatures.
This work completely breaks the sequentiality assumption (and broad generalizations thereof) underlying the candidate lattice-based proof of sequential work (PoSW) recently proposed by Lai and Malavolta at CRYPTO 2023. In addition, it breaks an essentially identical variant of the PoSW, which differs from the original in only an arbitrary choice that is immaterial to the design and security proof (under the falsified assumption). This suggests that whatever security the original PoSW may have is fragile, and further motivates the search for a construction based on a sound lattice-based assumption.
Specifically, for sequentiality parameter $T$ and SIS parameters $n, q, m = n \log q$, the attack on the sequentiality assumption finds a solution of quasipolynomial norm $m^{\lceil \log T \rceil}$ (or norm $O(\sqrt{m})^{\lceil \log T \rceil}$ with high probability) in only logarithmic $\tilde{O}_{n,q}(\log T)$ depth; this strongly falsifies the assumption that finding such a solution requires depth linear in $T$. (The $\tilde{O}$ notation hides polylogarithmic factors in the variables appearing in its subscript.) Alternatively, the attack finds a solution of polynomial norm $m^{1/\varepsilon}$ in depth $\tilde{O}_{n,q}(T^{\varepsilon})$, for any constant $\varepsilon > 0$. Similarly, the attack on the (slightly modified) PoSW constructs a valid proof in polylogarithmic $\tilde{O}_{n,q}(\log^2 T)$ depth, thus strongly falsifying the expectation that doing so requires linear sequential work.
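A plausible reading of these bounds, our gloss rather than a claim from the abstract, is a parallel divide-and-conquer: if a solution for parameter $T$ can be assembled from two independently computed solutions for $T/2$ at the cost of one combining step that inflates the norm by a factor of at most $m$, then the achievable norm $B(T)$ and depth $D(T)$ satisfy $B(T) \le m \cdot B(T/2)$ and $D(T) \le D(T/2) + \tilde{O}_{n,q}(1)$, since the two halves can be computed in parallel; unrolling gives $B(T) \le m^{\lceil \log T \rceil}$ in depth $\tilde{O}_{n,q}(\log T)$. Truncating the recursion after $1/\varepsilon$ levels and handling blocks of size $T^{\varepsilon}$ sequentially would likewise give norm $m^{1/\varepsilon}$ in depth $\tilde{O}_{n,q}(T^{\varepsilon})$, matching the second trade-off.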
Yi is currently a PhD student in CSE at the University of Michigan. Yi works on lattice problems with Chris Peikert, and on applied cryptography with Paul Grubbs.
Yi got his Master's degree in CS at Courant Institute of Mathematical Sciences, New York University. During that time Yi worked on lattice problems with Oded Regev, and applied cryptography with Yevgeniy Dodis.
Threshold cryptography has recently been a very active field of research, notably following the NIST call for threshold primitives. Notable works in post-quantum cryptography built threshold signatures relying on noise flooding [dPKM+23, EKT24, BKLM+24], and designed short and efficient schemes. They however left open the questions of distributing the key generation and of making the signing process robust, i.e., guaranteeing that a valid signature is output even in the presence of malicious parties. In this talk, I will present our novel techniques for building distributed key generation (DKG) and robust threshold signatures. To do so, we introduce a novel framework for constructing verifiable short secret sharing based on random submersions, that is, projections onto a random subspace blinded by small Gaussian noise (see the sketch below). Our techniques apply to all the aforementioned works, and we also showcase their application to Plover [EENP+24] to build the first hash-and-sign threshold signature, additionally having a DKG and a robust signing procedure.
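As a rough illustration of the random-submersion idea, here is a toy numpy sketch, entirely our own and not the paper's scheme: a high-dimensional short secret is checked by revealing only a low-dimensional random projection of it, blinded by small Gaussian noise.

```python
# Toy illustration (not the paper's construction) of checking that a
# secret vector is short by revealing only a random low-dimensional
# projection blinded by small Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 256, 16, 3.0               # illustrative parameters

s = rng.integers(-1, 2, size=n)          # a short (ternary) secret
P = rng.standard_normal((k, n)) / np.sqrt(n)  # random "submersion" R^n -> R^k
y = P @ s + sigma * rng.standard_normal(k)    # blinded projection of s

# Verifier side: for a genuinely short s, ||y|| stays small with high
# probability, while a long s would make ||y|| large.
assert np.linalg.norm(y) / np.sqrt(k) < np.linalg.norm(s) / np.sqrt(n) + 4 * sigma
```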
I am a PhD student at PQShield and Univ Rennes 1 under the supervision of Thomas Prest and Pierre-Alain Fouque. I am especially interested in applied cryptography and the construction of efficient primitives and protocols. My work so far includes lattice-based constructions for masking-friendly signatures, threshold signatures, and secure messaging.
Threshold signatures improve both the availability and the security of digital signatures by splitting the signing key into shares handed out to different parties. Later on, any subset of at least $t$ parties can cooperate to produce a signature on a given message. While threshold signatures have been extensively studied in the pre-quantum setting, constructions from quantum-resilient assumptions remain sparse.
We present the first efficient lattice-based threshold signatures with a signature size of 13 KiB and a communication cost of 40 KiB per user, supporting a threshold size as large as 1024 signers. We provide an accompanying high-performance implementation. The security of the scheme is based on the same assumptions as Dilithium, a signature scheme recently selected by NIST for standardisation which, as far as we know, cannot itself easily be turned into an efficient threshold scheme.
All operations used during signing involve only symmetric primitives and simple lattice operations; in particular, our scheme does not need heavy tools such as threshold fully homomorphic encryption or homomorphic trapdoor commitments, as in prior constructions. The key technical idea is to use one-time additive masks to mitigate the leakage of the partial signing keys through partial signatures; a toy sketch follows.
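The following toy Python sketch, our own simplification rather than the scheme itself, shows the masking principle: each signer adds a one-time mask to its partial signature, the masks sum to zero, so the aggregate is unchanged while individual contributions no longer expose the key shares.

```python
# Toy illustration (not the actual scheme) of one-time additive masking:
# masks that sum to zero hide each partial signature without changing
# the aggregated result.  Parameters are illustrative.
import random

q, T = 2**31 - 1, 5                      # toy modulus and number of signers
key_shares = [random.randrange(q) for _ in range(T)]
challenge = random.randrange(q)

# Each pair of signers (i, j) agrees on a pairwise one-time mask m;
# signer i adds m and signer j subtracts it, so all masks cancel.
masks = [[0] * T for _ in range(T)]
for i in range(T):
    for j in range(i + 1, T):
        m = random.randrange(q)
        masks[i][j], masks[j][i] = m, q - m

partial = [(challenge * key_shares[i] + sum(masks[i])) % q for i in range(T)]
aggregate = sum(partial) % q
assert aggregate == challenge * sum(key_shares) % q   # masks cancelled out
```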
I am a cryptography researcher at the Ethereum Foundation and PQShield. I am generally interested in public key cryptography with a heavy focus on zero-knowledge proofs. I am also interested in threshold cryptography. I enjoy building and analysing practical cryptographic solutions to real world problems.
The Linear Code Equivalence (LCE) Problem has received increased attention in recent years due to its applicability in constructing efficient digital signatures. Notably, the LESS signature scheme based on LCE is under consideration for the NIST post-quantum standardization process, along with the MEDS signature scheme that relies on an extension of LCE to the rank metric, namely Matrix Code Equivalence (MCE) Problem. Building upon these developments, a family of signatures with additional properties, including linkable ring, group, and threshold signatures, has been proposed. These novel constructions introduce relaxed versions of LCE (and MCE), wherein multiple samples share the same secret equivalence. Despite their significance, these variations have often lacked a thorough security analysis, being assumed to be as challenging as their original counterparts. Addressing this gap, our work delves into the sample complexity of LCE and MCE --- precisely, the sufficient number of samples required for efficient recovery of the shared secret equivalence. Our findings reveal, for instance, that one shouldn't use the same secret twice in the LCE setting since this enables a polynomial time (and memory) algorithm to retrieve the secret. Consequently, our results unveil the insecurity of two advanced signatures based on variants of the LCE Problem.
Alessandro Budroni is a researcher at the Technology Innovation Institute of Abu Dhabi. He defended his Ph.D. in 2022 at the University of Bergen under the supervision of Professor Igor Semaev. Before that, he obtained a master's degree at the University of Trento and worked as a cryptography engineer at Miracl, London. His research interests range from cryptography with codes and lattices to efficient algorithms for secure cryptographic implementations.
A functional commitment allows a user to commit to an input x and later open the commitment to a value f(x) for an arbitrary function f. We require that the size of the commitment and the size of the opening be sublinear in the length of the input x. In this talk, I first describe how to construct functional commitments from the recently-introduced \ell-succinct SIS assumption (a falsifiable q-type generalization of the classic SIS assumption). Then, I will highlight some challenges in constructing lattice-based extractable functional commitments (which are equivalent to succinct non-interactive arguments of knowledge) and describe a (heuristic) attack on recently-proposed lattice-based knowledge assumptions underlying extractable functional commitments and SNARKs.
Based on joint work with Hoeteck Wee
David Wu is an assistant professor in the Department of Computer Science at the University of Texas at Austin. He is broadly interested in applied and theoretical cryptography as well as computer security. Previously, David received a PhD in computer science from Stanford University in 2018 and was an assistant professor at the University of Virginia from 2019 to 2021. He has received the NSF CAREER Award, the Microsoft Research Faculty Fellowship, and a Google Research Scholar Award. His work has been recognized with a Best Paper Award at CRYPTO (2022), two Best Young-Researcher Paper Awards at CRYPTO (2017, 2018) and an Outstanding Paper Award at ESORICS (2016).
Location tracking accessories (or ``tracking tags'') such as those sold by Apple, Samsung, and Tile, allow owners to track the location of their property via offline finding networks. The tracking protocols were designed to ensure that no entity (including the vendor) can use a tag's broadcasts to surveil its owner. These privacy guarantees, however, seem to be at odds with the phenomenon of {\em tracker-based stalking}, where attackers use these very tags to monitor a target's movements. Numerous such criminal incidents have been reported, and in response, manufacturers have chosen to substantially weaken privacy guarantees in order to allow users to detect stalker tags. This compromise has been adopted in a recent IETF draft jointly proposed by Apple and Google.
We put forth the notion of {\em abuse-resistant offline finding protocols} that aim to achieve a better balance between user privacy and stalker detection. We present an efficient protocol that achieves stalker detection under realistic conditions without sacrificing honest user privacy. At the heart of our result, and of independent interest, is a new notion of {\em multi-dealer secret sharing} which strengthens standard secret sharing with novel privacy and correctness guarantees. We show that this primitive can be instantiated efficiently on edge devices using variants of Interleaved Reed-Solomon codes combined with new lattice-based decoding algorithms.
Gabrielle Beck is an independent researcher and consultant who defended her PhD at Johns Hopkins last semester. Before Hopkins, she completed her bachelor's degree at the University of Michigan where she studied computer science. Her research interests mainly involve developing new practical cryptographic constructions to solve the problems of everyday people with a special focus in metadata-hiding and other unconventional problem areas. Separately, she has recently become very interested in some select topics within coding theory.
The Learning With Errors (LWE) problem asks to find s from an input of the form (A, b = As + e) ∈ (Z/qZ)^{m×n} × (Z/qZ)^{m}, for a vector e with small-magnitude entries. In this work, we do not focus on solving LWE but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create s and e and then set b = As + e; in particular, such an instance sampler knows the solution. This raises the question of whether it is possible to obliviously sample (A, As+e), namely, without knowing the underlying s. A variant of the assumption that oblivious LWE sampling is hard has been used in a series of works constructing Succinct Non-interactive Arguments of Knowledge (SNARKs) in the standard model. As the assumption is related to LWE, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed LWE instances while provably not knowing the solution, under the assumption that LWE is hard. Moreover, the approach works for a vast range of LWE parametrizations, including those used in the above-mentioned SNARKs.
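For contrast, here is the trivial, non-oblivious sampler described above, as a short numpy sketch with illustrative parameters: it outputs a well-formed instance precisely because it first creates s and e, which is exactly the knowledge an oblivious sampler must avoid.

```python
# The trivial LWE instance sampler: it knows the solution s by
# construction.  Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng()
n, m, q = 64, 128, 3329

def trivial_sampler():
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)       # the sampler creates s ...
    e = rng.integers(-3, 4, size=m)      # ... and the small error e,
    b = (A @ s + e) % q                  # then sets b = As + e mod q,
    return (A, b), s                     # so it inevitably "knows" s.

(A, b), s = trivial_sampler()
# An *oblivious* sampler must output (A, b) from (close to) this
# distribution while provably not knowing any such s.
```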
I am a PhD student at ENS Lyon under the supervision of Prof. Damien Stehlé. I am interested in a wide range of topics in theoretical computer science. I am currently working on quantum cryptanalysis of lattice-based primitives.
The problem of decoding random codes is fundamental for code-based cryptography, including recent code-based candidates in the NIST post-quantum standardization process. In this paper, we present a novel sieving-style information-set decoding (ISD) algorithm for solving the syndrome decoding problem. Our approach maintains a list of weight-$2p$ solution vectors to a partial syndrome decoding problem and creates new vectors by identifying pairs of vectors that collide in $p$ positions. By gradually increasing the parity-check condition by one and repeating this process iteratively, we find the final solution(s) (a toy sketch of the collision step follows). We show that our novel algorithm performs better than other ISD algorithms in the memory-restricted scenario when applied to McEliece. Notably, for problem instances with very low relative weight, it appears to significantly outperform all previous algorithms. In particular, for the code-based candidate BIKE, the algorithm has lower bit complexity than the previous best results.
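To illustrate the core sieving step, here is a toy Python sketch, our own illustration rather than the paper's algorithm: two weight-$2p$ vectors whose supports overlap in exactly $p$ positions XOR to a new weight-$2p$ vector, which can then be filtered against one more parity-check equation.

```python
# Toy sketch (ours, not the paper's algorithm) of the core sieving step:
# weight-2p vectors colliding in p positions XOR to weight-2p vectors,
# which are kept if they also satisfy one additional parity check.
import itertools, random

n, p = 30, 2
rng = random.Random(1)

def rand_weight_2p():
    v = [0] * n
    for i in rng.sample(range(n), 2 * p):
        v[i] = 1
    return tuple(v)

current = {rand_weight_2p() for _ in range(200)}
h = [rng.randrange(2) for _ in range(n)]        # one new parity-check row

new_list = set()
for u, v in itertools.combinations(current, 2):
    overlap = sum(a & b for a, b in zip(u, v))
    if overlap == p:                            # supports collide in p spots,
        w = tuple(a ^ b for a, b in zip(u, v))  # so the XOR has weight 2p
        assert sum(w) == 2 * p
        if sum(a * b for a, b in zip(w, h)) % 2 == 0:  # extra check holds
            new_list.add(w)

print(f"kept {len(new_list)} weight-{2*p} vectors satisfying the new check")
```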
I am a Ph.D. student at Lund University. My main research interest is code-based post-quantum cryptography. I have also performed cryptanalysis of a few LPN-based post-quantum schemes (wPRF, Firekite).
Much recent work has developed efficient protocols for threshold signatures, where $n$ parties share a signing key and some threshold $t$ of those parties must interact to produce a signature. Yet efficient threshold signatures with post-quantum security have been elusive, with the state-of-the-art being a two-round scheme by Damgård et al. (PKC'21) based on lattices that supports only the full threshold case (i.e., $t=n$).
We show here a two-round threshold signature scheme based on standard lattice assumptions that supports arbitrary thresholds $t\leq n$. Estimates of our scheme's performance at the $128$-bit security level show that in the 3-out-of-5 case, we obtain signatures of size $20$ KB and public keys of size $14$ KB. We achieve improved parameters if only a small number of signatures are ever issued with the same key.
As an essential building block and independent contribution, we construct an actively secure threshold (linearly) homomorphic encryption scheme that supports arbitrary thresholds $t \leq n$.
This is joint work with Kamil Doruk Gur and Jonathan Katz at the University of Maryland, and the paper is available at: https://eprint.iacr.org/2023/1318.
I am an Associate Professor in Cryptology at the Department of Information Security and Communication Technology at the Norwegian University of Science and Technology (NTNU) in Trondheim, where I am the Research Group Leader of the NTNU Applied Cryptology Lab. My main foci of research are lattice-based cryptography and zero-knowledge protocols. My interests also span the areas of post-quantum cryptography, anonymous communication, multiparty computation, homomorphic encryption, electronic voting and secure implementation.
We present cryptanalysis of the inhomogeneous short integer solution (ISIS) problem for anomalously small moduli, exploiting the geometry of BKZ-reduced bases of q-ary lattices.
We apply this cryptanalysis to examples from the literature where taking such small moduli has been suggested. A recent work [Espitau–Tibouchi–Wallet–Yu, CRYPTO 2022] suggests small versions of the lattice signature scheme FALCON and its variant MITAKA.
For one small parametrisation of FALCON we reduce the estimated security against signature forgery by approximately 26 bits. For one small parametrisation of MITAKA we successfully forge a signature in seconds.
We also show that our attack applies to the ``distortion'' technique suggested in [Espitau–Tibouchi–Wallet–Yu, CRYPTO 2022].
I am currently a postdoctoral researcher at Centrum Wiskunde en Informatica (CWI) with Léo Ducas, focusing on constructions from, and cryptanalysis of, lattice-based cryptography. I will start as a junior lecturer at King's College London in January.
S|LWE> is a quantum variant of LWE where the error term is encoded in the amplitude of a quantum state. The S|LWE> problem was originally studied by Chen, Liu, and Zhandry [Eurocrypt 2022], motivated by solving a variant of the SIS problem. In this talk I will cover their result, as well as a recent paper on the hardness and algorithms for S|LWE> with Gaussian and other amplitudes. Our main results in the recent paper are: 1. There exist quantum reductions from standard LWE or worst-case GapSVP to S|LWE> with Gaussian amplitude with unknown phase, and arbitrarily many samples. 2. There is a subexponential-time algorithm for S|LWE> with Gaussian amplitude with known phase, given subexponentially many quantum samples. The algorithm is modified from Kuperberg's sieve, and in fact works for more general amplitudes as long as the amplitudes and phases are completely known. One way of interpreting our new result is: to show a subexponential-time quantum algorithm for standard LWE, all we need is to handle phases in amplitudes better, either in the algorithm or in the reduction. Based on two papers, one with Qipeng Liu and Mark Zhandry (eprint: https://eprint.iacr.org/2021/1093), and another with Zihan Hu, Qipeng Liu, Han Luo, and Yaxin Tu (eprint: https://eprint.iacr.org/2023/1498).
Yilei is an assistant professor at Tsinghua University Institute for Interdisciplinary Information Science (IIIS). Yilei's main research interests are lattice-based cryptography and quantum computation.
A fundamental concept in cryptography is pseudorandomness, namely the ability to efficiently generate many bits that look indistinguishable from random. For example, if two parties can agree on a short random seed, a pseudorandom generator allows them to later obtain long random strings that look essentially uniform.
Motivated by concerns in secure multiparty computation, a recent generalization of this primitive has been designed and studied. Namely, from two short correlated seeds, long correlated strings can be generated later, without requiring any further communication from the parties. Such objects, termed pseudorandom correlation generators (PCGs), result in some of the most efficient secure computation protocols.
In this presentation, we will first discuss a general recipe for PCGs, studying the important special case of the oblivious transfer (OT) correlation. We will observe that a crucial ingredient is a pseudorandom generator that has sufficient linear structure; i.e., a pseudorandom generator derived from a “learning parity with noise”-type assumption. In the second half of the talk, we will discuss new coding-theoretic assumptions which have been proposed to design highly-efficient PCGs. As a particular highlight, we will discuss a PCG requiring only linear time to expand short seeds into many OT instances.
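To make the notion of a pseudorandom generator with "linear structure" concrete, here is a small Python sketch, our own toy and not any deployed PCG: an LPN-style generator G(s, e) = As + e over GF(2) is linear in its seed, which is the structural property that lets correlated short seeds expand into correlated long strings without communication.

```python
# Toy LPN-style generator over GF(2): G(s, e) = A s + e (mod 2).
# Its linearity in (s, e) is the structural property PCGs exploit:
# G(s1, e1) + G(s2, e2) = G(s1 + s2, e1 + e2).  Parameters illustrative.
import numpy as np

rng = np.random.default_rng(42)
n, N = 128, 4096                     # short seed -> long output
A = rng.integers(0, 2, size=(N, n), dtype=np.uint8)  # public matrix

def G(s, e):
    return (A @ s + e) % 2           # LPN-style "PRG with linear structure"

def seed():
    s = rng.integers(0, 2, size=n, dtype=np.uint8)
    e = np.zeros(N, dtype=np.uint8)  # sparse noise: a few random flips
    e[rng.choice(N, size=8, replace=False)] = 1
    return s, e

(s1, e1), (s2, e2) = seed(), seed()
lhs = (G(s1, e1) + G(s2, e2)) % 2
rhs = G((s1 + s2) % 2, (e1 + e2) % 2)
assert np.array_equal(lhs, rhs)      # linear structure, the key ingredient
```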
Based on joint works with Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, and Peter Scholl.
Lisa Kohl is a senior researcher with the CWI Cryptology Group in Amsterdam. A special focus of her research work lies in exploring new directions for secure computation, with the goal of developing practical post-quantum secure protocols. Website: lisakohl.me
Nicolas Resch is an assistant professor with the Theoretical Computer Science Group from the Informatics Institute of the University of Amsterdam. His research focusses on error-correcting codes and cryptography and (naturally) their intersection. Website: nicolas-resch.caard.co
Let $G$ be a finite abelian group and let $g_1, \dots, g_n \in G$. Define the relation lattice $L = \{ (x_1, \dots, x_n) \in \mathbb{Z}^n : \sum_{i=1}^n x_i g_i = 0 \}$. Ducas and Pierrot (Designs, Codes and Cryptography, 2019) showed how relation lattices can provide families of dense lattices with a polynomial-time bounded distance decoding algorithm. Li, Ling, Xing and Yeo (IEEE Transactions on Information Theory, 2020) proposed using a relation lattice for a version of the McEliece code-based post-quantum encryption scheme. I will survey these results and discuss some improvements to the cryptosystem of Li, Ling, Xing and Yeo.
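As a tiny worked example, our own and not taken from the talk: for the cyclic group $G = \mathbb{Z}/N\mathbb{Z}$, membership in the relation lattice is easy to check, and when $\gcd(g_1, N) = 1$ an explicit basis is known.

```python
# Tiny worked example (ours, not the speaker's): the relation lattice of
# g_1, ..., g_n in the cyclic group Z/NZ.  A vector x lies in L iff
# sum_i x_i * g_i == 0 (mod N).  When gcd(g_1, N) = 1, one explicit basis
# is N*e_1 together with e_i - (g_i * g_1^{-1} mod N) * e_1 for i >= 2;
# below we simply check membership directly.
N = 101
g = [3, 17, 59, 88]                       # group elements g_i in Z/NZ

def in_relation_lattice(x):
    return sum(xi * gi for xi, gi in zip(x, g)) % N == 0

assert in_relation_lattice([N, 0, 0, 0])      # N * g_1 = 0 in Z/NZ
assert in_relation_lattice([17, -3, 0, 0])    # g_2 * g_1 - g_1 * g_2 = 0
assert not in_relation_lattice([1, 0, 0, 0])  # g_1 != 0 in Z/NZ
# L has full rank n and index dividing N in Z^n, so it is a rather dense
# lattice once n grows -- the property that Ducas-Pierrot exploit.
```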
Also: Bush House (SE) 1.06
Professor Galbraith is a researcher in mathematics of public key cryptography at the University of Auckland in New Zealand. He has held positions at Royal Holloway, Bristol, Institute of Experimental Mathematics (Essen), and Waterloo.
Can we design a private variant of "Google search" that would enable users to search the Internet privately without revealing their queries to Google? Fully homomorphic encryption gives a solution to this problem, where users would send their search queries to Google encrypted in a way that allows Google to compute the encrypted search results, without learning anything about the query itself. However, this solution would require Google to run a huge computation that touches the entire content of the Internet to answer each encrypted search query, making it realistically infeasible! Is there a way for Google to preprocess the Internet content into a special data structure that would allow it to answer any encrypted query efficiently by only accessing some small number of locations in the data structure? We give a solution to this problem as a special case of our work.
Concretely, we construct the first schemes for "doubly efficient private information retrieval" and "fully homomorphic encryption in the RAM model" under standard cryptographic hardness assumptions (Ring Learning with Errors). Previously we only had heuristic candidate constructions that only satisfied relaxed variants of these primitives. Our new solutions combine tools from cryptography together with data structures for fast polynomial evaluation (Kedlaya and Umans '08).
Joint work with Ethan Mook and Daniel Wichs.
I am an assistant professor in the Computer Science Department at University of Virginia. Earlier, I was a postdoctoral researcher at Northeastern University, advised by Professor Daniel Wichs. I got my Ph.D. in Computer Science at Cornell and then worked as a postdoc at CMU, both advised by Professor Elaine Shi. Prior to that, I got my B.S. and M.S. degrees from National Taiwan University, and then I was a software engineer.
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.
This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide the first proof-of-concept implementations of laconic primitives, demonstrating that the construction is indeed practical: encryption and decryption run in the order of single-digit milliseconds for the database sizes considered.
Laconic encryption can be used as a black box to construct other laconic primitives, all of which have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed so that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and a tight security proof (in the standard model).
Russell W. F. Lai is an Assistant Professor at the Department of Computer Science of Aalto University since 2022. He works on a range of topics in cryptography including lattice-based cryptography, succinct arguments, and anonymous systems. He obtained his PhD degree from Friedrich-Alexander University Erlangen-Nuremberg in 2022.
The Groth-Sahai proof system is a highly efficient pairing-based proof system for a specific class of group-based languages. Cryptographic primitives that are compatible with these languages (such that we can express, e.g., that a ciphertext contains a valid signature for a given message) are called “structure-preserving”. The combination of structure-preserving primitives with Groth-Sahai proofs allows to prove complex statements that involve encryptions and signatures, and has proved useful in a variety of applications. However, so far, the concept of structure-preserving cryptography has been confined to the pairing setting.
In this talk, I will outline a strategy for structure-preserving cryptography from lattices. At the heart of this framework lies a lattice-based argument system for “noisy” languages (formalized as “structure-preserving sets”), and the observation that this proof system is compatible with a number of existing lattice-based primitives. We demonstrate the usefulness of our framework with a lattice-based construction of verifiably encrypted signatures. As a secondary contribution, we present a more efficient variant of Rückert's signature scheme. I should note that, as in the group-based setting of structure-preserving cryptography, all our constructions are in the standard model.
Before joining ETH Zurich's CS department in 2020, I worked at the Karlsruhe Institute of Technology in Germany, and before that at the Centrum Wiskunde en Informatica in Amsterdam, the Netherlands. I work on the foundations of cryptography, and in particular on the design and analysis of cryptographic building blocks (such as public-key encryption and signatures).
I will present our work on witness encryption and null IO, as well as some musings on the evasive LWE assumption (in particular, relation to prior zeroizing attacks on IO). Based on joint works with Vinod Vaikuntanathan, Daniel Wichs, and David Wu.
Hoeteck is a senior scientist at NTT Research, currently on leave from CNRS and ENS, Paris.
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, $\omega\in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O\left(n^\omega (p + n)^{1 + \varepsilon}\right)$ for precision $p = O(\log \|B\|_{max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
Note the changed time.
Keegan Ryan is a 4th year PhD student advised by Prof. Nadia Heninger at the University of California, San Diego. His research interests include practical cryptanalysis of real-world systems, particularly problems involving lattice reduction.
We introduce the first candidate lattice-based Designated Verifier (DV) ZK-SNARK protocol with \emph{quasi-optimal proof length} (quasi-linear in the security/privacy parameter), avoiding the use of the exponential smudging technique. Our ZK-SNARK also achieves significant improvements in proof length in practice, with proof lengths below 6 KB at the 128-bit security/privacy level. Our main technical result is a new regularity theorem for `private' re-randomization of Module LWE (MLWE) samples using discrete Gaussian randomization vectors, also known as a lattice-based leftover hash lemma with leakage, which applies for a discrete Gaussian re-randomization parameter that is polynomial in the statistical privacy parameter. To obtain this result, we derive bounds on the smoothing parameter of an intersection of a random q-ary SIS module lattice, a gadget SIS module lattice, and a Gaussian orthogonal module lattice over standard power-of-2 cyclotomic rings, and a bound on the minimum of module gadget lattices. We then introduce a new candidate \emph{linear-only} homomorphic encryption scheme called Module Half-GSW (HGSW), a variant of the GSW somewhat homomorphic encryption scheme over modules, and apply our regularity theorem to provide smudging-free circuit-private homomorphic linear operations for Module HGSW.
Note the changed time.
Ron Steinfeld is an Associate Professor at Monash University. His research focuses on post-quantum cryptography and its applications. He obtained his Ph.D. degree in cryptography at Monash University in 2003. He was a postdoctoral ARC Research Fellow at Macquarie University. He joined Monash University in 2012.
A fully homomorphic encryption (FHE) scheme allows a client to encrypt and delegate its data to a server that performs computation on the encrypted data that the client can then decrypt. While FHE gives confidentiality to clients' data, it does not protect the server's input and computation. Nevertheless, FHE schemes are still helpful in building delegation protocols that reduce communication complexity, as the ciphertext's size is independent of the size of the computation performed on them.
We can further extend FHE by a property called circuit privacy, which guarantees that the result of computing on ciphertexts reveals no information about the computed function or the inputs of the server. Thereby, circuit-private FHE gives rise to round-optimal and communication-efficient secure two-party computation protocols. Unfortunately, despite significant efforts and much work put into the efficiency and practical implementations of FHE schemes, very little has been done to provide useful and practical FHE supporting circuit privacy. In this work, we address this gap and design the first randomized bootstrapping algorithm whose single invocation sanitizes a ciphertext and, consequently, serves as a tool to provide circuit privacy. We give an extensive analysis, propose parameters, and provide a C++ implementation of our scheme. Our bootstrapping can sanitize a ciphertext to achieve circuit privacy at an 80-bit statistical security level in 1.2 or 1.4 seconds, depending on whether the parameter set targets a fast-Fourier- or a number-theoretic-transform-based implementation. In addition, we can perform non-sanitized bootstrapping in around 0.27 or 0.14 seconds on a laptop, where the additional sanitization key takes less than 0.5 MB of memory. Crucially, we do not need to increase the parameters to perform computation before or after sanitization takes place. For comparison's sake, we revisit the Ducas-Stehlé washing machine method: we give a tight analysis, estimate efficiency, review old parameters, and provide new ones.
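For orientation, the Ducas-Stehlé "washing machine" that the paper revisits can be summarized in a few lines of schematic Python; this is our own gloss with placeholder functions (bootstrap, rerandomize), not the paper's algorithm, whose point is precisely to replace the loop by a single randomized bootstrapping.

```python
# Schematic gloss (ours) of the Ducas-Stehlé "washing machine":
# sanitization by alternating bootstrapping and re-randomization.
# `bootstrap` and `rerandomize` are placeholders standing in for a real
# FHE library; the paper's contribution is a randomized bootstrapping
# whose SINGLE invocation already sanitizes.

def sanitize_washing_machine(ct, bootstrap, rerandomize, rounds):
    """Each cycle shrinks the statistical distance between the noise
    distributions of any two ciphertexts encrypting the same message."""
    for _ in range(rounds):
        ct = rerandomize(bootstrap(ct))  # refresh the noise, then flood it
    return ct

def sanitize_this_work(ct, randomized_bootstrap):
    """The paper's approach, schematically: one randomized bootstrap."""
    return randomized_bootstrap(ct)
```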
Kamil Kluczniak is currently a Research Group Leader at the CISPA Helmholtz Center for Information Security. His work focuses on efficient fully homomorphic encryption schemes. He obtained his Ph.D. degree in cryptography from the Polish Academy of Sciences in 2016. In 2017 he worked as a postdoctoral researcher at the Hong Kong Polytechnic University. In 2018 he joined the CISPA-Stanford Center, and from 2019 to 2021 he was a visiting assistant professor at Stanford University.
We investigate the security of succinct arguments against quantum adversaries. Our main result is a proof of knowledge-soundness in the post-quantum setting for a class of multi-round interactive protocols, including those based on the recursive folding technique of Bulletproofs.
To prove this result, we devise a new quantum rewinding strategy, the first that allows for rewinding across many rounds. This technique applies to any protocol satisfying natural multi-round generalizations of special soundness and collapsing. For our main result, we show that recent Bulletproofs-like protocols based on lattices satisfy these properties, and are hence sound against quantum adversaries.
Nick Spooner is an Assistant Professor of Computer Science at the University of Warwick. His work focuses on post-quantum proof systems. His interests include interactive proof systems and zero knowledge in general, post-quantum cryptography, quantum information, coding theory and computational complexity. Previously he was a postdoc at Boston University, and he received his PhD from UC Berkeley in 2020.
We show that for many fundamental cryptographic primitives, proving classical security under the learning-with-errors (LWE) assumption, does \emph{not} imply post-quantum security. This is despite the fact that LWE is widely believed to be post-quantum secure, and our work does not give any evidence otherwise. Instead, it shows that post-quantum insecurity can arise inside cryptographic constructions, even if the assumptions are post-quantum secure.
Concretely, our work provides (contrived) constructions of pseudorandom functions, CPA-secure symmetric-key encryption, message-authentication codes, signatures, and CCA-secure public-key encryption schemes, all of which are proven to be classically secure under LWE via black-box reductions, but demonstrably fail to be post-quantum secure. All of these cryptosystems are stateless and non-interactive, but their security is defined via an interactive game that allows the attacker to make oracle queries to the cryptosystem. The polynomial-time quantum attacker can break these schemes by making only a few \emph{classical} queries to the cryptosystem, and in some cases, a single query suffices.
Previously, we only had examples of post-quantum insecurity under post-quantum assumptions for stateful/interactive protocols. Moreover, there appears to be a folklore belief that for stateless/non-interactive cryptosystems with black-box proofs of security, a quantum attack against the scheme should translate into a quantum attack on the assumption. This work shows otherwise. Our main technique is to carefully embed interactive protocols inside the interactive security games of the above primitives.
As a result of independent interest, we also show a 3-round \emph{quantum disclosure of secrets (QDS)} protocol between a classical sender and a receiver, where a quantum receiver learns a secret message in the third round but, assuming LWE, a classical receiver does not.
Joint work with Alex Lombardi, Ethan Mook, and Daniel Wichs. Eprint: https://eprint.iacr.org/2022/869
Note the changed time.
Willy Quach is a 6th year PhD student at Northeastern University advised by Daniel Wichs.
The random oracle methodology is central to the design of many practical cryptosystems. A common challenge faced in several systems is the need to have a random oracle that outputs from a structured distribution $\mathcal{D}$, even though most heuristic implementations such as SHA-3 are best suited for outputting bitstrings.
Our work explores the problem of sampling from discrete Gaussian (and related) distributions in a manner that they can be programmed into random oracles. We make the following contributions:
-We provide a definitional framework for our results. We say that a sampling algorithm $\mathsf{Sample}$ for a distribution is explainable if there exists an algorithm $\mathsf{Explain}$ where, for an $x$ in the domain, we have that $\mathsf{Explain}(x) \rightarrow r \in \{0,1\}^n$ such that $\mathsf{Sample}(r)=x$. Moreover, if $x$ is sampled from $\mathcal{D}$, the explained distribution is statistically close to choosing $r$ uniformly at random. We consider a variant of this definition that allows the statistical closeness to be a "precision parameter'' given to the $\mathsf{Explain}$ algorithm. We show that sampling algorithms which satisfy our explainability property can be programmed as a random oracle.
-We provide a simple algorithm for explaining \emph{any} sampling algorithm that works over distributions with polynomial-sized ranges; this includes discrete Gaussians with small standard deviations (see the sketch after this list).
-We show how to transform a (not necessarily explainable) sampling algorithm $\mathsf{Sample}$ for a distribution into a new $\mathsf{Sample}'$ that is explainable. The requirements for doing this are that (1) the probability density function is efficiently computable and (2) it is possible to efficiently sample uniformly from all elements that have a probability density above a given threshold $p$. This shows the equivalence of random oracles to these distributions and random oracles to uniform bitstrings, and it covers a large class of distributions, including all discrete Gaussians.
-A potential drawback of the previous approach is that the transformation requires an additional computation of the density function. We provide a more customized approach that shows the Micciancio-Walter discrete Gaussian sampler is explainable as is. This suggests that other discrete Gaussian samplers in a similar vein might also be explainable as is.
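To illustrate the polynomial-size-range case, here is a minimal Python sketch, our own toy rather than the paper's algorithm: Sample reads its random tape as a uniform integer and inverts the CDF, and Explain(x) re-randomizes by outputting a uniform tape among exactly those that map to x.

```python
# Minimal toy (ours, not the paper's algorithm): an explainable sampler
# for a distribution with a small range.  Sample treats the random tape
# r as a uniform integer in [0, R) and inverts the CDF; Explain(x)
# outputs a fresh uniform tape among exactly those r that map to x.
import bisect, random

weights = [1, 4, 6, 4, 1]                 # toy distribution on {0,...,4}
cum = [sum(weights[: i + 1]) for i in range(len(weights))]
R = cum[-1]                               # tape space {0, ..., R-1}

def Sample(r):
    assert 0 <= r < R
    return bisect.bisect_right(cum, r)    # invert the CDF

def Explain(x):
    lo = cum[x - 1] if x > 0 else 0
    return random.randrange(lo, cum[x])   # uniform over the preimage of x

# If x <- D (i.e., x = Sample(uniform r)), then Explain(x) is exactly
# uniform on {0, ..., R-1}: each tape value is output with prob 1/R.
x = Sample(random.randrange(R))
r = Explain(x)
assert Sample(r) == x                      # consistency, as required
```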
Note the changed time.
George is a fourth year PhD student at UT Austin advised by Dr. Brent Waters.
In this talk, I will present a new zero-knowledge proof of knowledge for the syndrome decoding (SD) problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves the knowledge of a vector x such that $y=Hx$ and $wt(x)\leq w$ and which achieves a soundness error close to $1/N$ for an arbitrary $N$.
While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{256}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and be competitive with $\text{SPHINCS}^{+}$. Since the security relies on the hardness of the syndrome decoding problem for random linear codes which is known to be NP-hard and for which the cryptanalysis state of the art has been stable for many years, it results in a conservative signature scheme. Moreover, our scheme outperforms all the former code-based signature schemes for the common “signature size + public key size” metric.
Joint work with Antoine Joux and Matthieu Rivain.
Thibauld Feneuil is a PhD student at CryptoExperts & Sorbonne University (France). He is working on zero-knowledge proofs in post-quantum cryptography.
This talk focuses on the shortest vector problem over so-called ideal lattices. Whereas it is believed to be hard in the worst case, prior works have shown that there are several specific cases where the problem becomes easy to solve. We continue this line of work and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we extend it to any ideal (whose prime factors are not ramified) of any number field, generalizing the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21). We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as the partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability. We conclude the presentation by clarifying the impact of our results on lattice-based cryptography. This is joint work with Erell Gachon and Alice Pellet-Mary.
This will be a hybrid seminar. The speaker will be giving the talk from ENS de Lyon.
Katharina Boudgoust is a postdoctoral researcher in the Cryptography and Security team at Aarhus University, Denmark, under the supervision of Peter Scholl. She is interested in lattice-based cryptography, and more specifically in the computational hardness assumptions underlying lattice-based cryptosystems, and in threshold cryptography. From September 2018 to November 2021, Katharina was a PhD student with the EMSEC team at the IRISA Laboratory in Rennes, France, under the supervision of Adeline Roux-Langlois and Pierre-Alain Fouque. Her PhD thesis focused on the theoretical hardness of algebraically structured Learning With Errors.
It is a long-standing open problem in code-based cryptography to find search-to-decision reductions for structured versions of the decoding problem (e.g., for quasi-cyclic codes). Such results have been obtained in the lattice setting using number fields: Polynomial-LWE, Ring-LWE, Module-LWE, ...
In this talk, I will present a function field analogue of the LWE problem. This new framework leads to another point of view on structured codes, strengthening the connections between lattice-based and code-based cryptography.
This framework can be instantiated with function field analogues of the cyclotomic number fields, namely Carlitz extensions, leading to the first search-to-decision reductions for various structured versions of the decoding problem, as well as for Ring-LPN, which has applications to secure multiparty computation and to an authentication protocol.
This is a joint work with Alain Couvreur and Thomas Debris-Alazard.
Maxime is a PhD student at LIX (Laboratoire d'Informatique de l'École Polytechnique) & Inria, France. He is interested in the hardness of various problems related to algebraically structured codes (either in the Hamming or the rank metric).
We present a much-improved practical protocol, based on the hardness of the Module-SIS and Module-LWE problems, for proving knowledge of a short vector s satisfying As = t mod q. The currently most efficient technique for constructing such a proof works by showing that the $L_\infty$ norm of s is small using the CRT-slots technique (Esgin et al., Asiacrypt 2020). In this work, we show that there is a more direct and more efficient way to prove that s has a small $L_2$ norm which does not require an equivocation with the $L_\infty$ norm, nor any conversion to the CRT representation. We observe that the inner product between two vectors r and s can be made to appear as a coefficient of a product (or sum of products) between polynomials with coefficient vectors r and s. Thus, by using a polynomial product proof system and hiding all but one coefficient, we can prove knowledge of the inner product of two vectors (or of a vector with itself) modulo q. Using a cheap "approximate range proof", one can then lift the proof to be over Z instead of Z_q. Our protocols for proving short norms work over all (interesting) polynomial rings, but are particularly efficient for rings like Z[X]/(X^n+1), in which the function relating the inner product of vectors and polynomial products happens to be a "nice" automorphism.
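The coefficient trick can be checked numerically; the following Python sketch, our own illustration with toy parameters, verifies that in Z[X]/(X^n+1) the constant coefficient of r(X) · s(X^{-1}) equals the inner product <r, s>.

```python
# Toy check (ours) of the coefficient trick in R = Z[X]/(X^n + 1):
# the constant coefficient of r(X) * s(X^{-1}) equals <r, s>, since
# X^i * X^{-i} = 1 and all cross terms land on nonzero powers of X.
import random

n = 8
r = [random.randrange(-5, 6) for _ in range(n)]
s = [random.randrange(-5, 6) for _ in range(n)]

def negacyclic_mul(a, b):
    """Multiply in Z[X]/(X^n + 1), where X^n = -1."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j) % n, -1 if i + j >= n else 1
            c[k] += sign * a[i] * b[j]
    return c

# s(X^{-1}) mod X^n + 1: since X^{-1} = -X^{n-1}, its coefficient
# vector is (s_0, -s_{n-1}, -s_{n-2}, ..., -s_1).
s_inv = [s[0]] + [-s[n - j] for j in range(1, n)]

prod = negacyclic_mul(r, s_inv)
assert prod[0] == sum(ri * si for ri, si in zip(r, s))  # <r, s> appears
```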
The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.
Ngoc Khanh Nguyen is a fourth-year PhD student at IBM Research Europe - Zurich and ETH Zurich, Switzerland, supervised by Dr Vadim Lyubashevsky and Prof. Dennis Hofheinz. Previously, Khanh obtained his B.Sc. and M.Eng. degrees in Mathematics and Computer Science at the University of Bristol, UK.
This paper investigates the anonymity of all NIST PQC Round 3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE.
We show several results; in particular, we found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM.
These results answer the open problem of investigating the anonymity and robustness of the NIST PQC Round 3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). Our main tools are strong disjoint-simulatability of the underlying PKE of a KEM, and strong pseudorandomness and smoothness/sparseness of a KEM, which will be of independent interest.
The full paper is available at https://eprint.iacr.org/2021/1323
Keita Xagawa received his B.S. degree from Kyoto University and M.S. and D.S. degrees from Tokyo Institute of Technology in 2005, 2007, and 2010, respectively. He joined NTT Corporation in 2010.
Computing the class group and the unit group of a number field is an important problem of algorithmic number theory. Recently, it has also become an important problem in cryptography, since it is used in multiple algorithms solving the shortest vector problem in ideal lattices.
Subexponential algorithms (in the discriminant of the number field) are known to solve this problem in any number field, but they heavily rely on heuristics. The only known non-heuristic algorithm, due to Hafner and McCurley, is restricted to imaginary quadratic number fields.
In this talk, I will present a rigorous subexponential algorithm computing the unit group and the class group (and, more generally, T-units) of any number field, assuming the extended Riemann hypothesis.
This is a joint work with Koen de Boer and Benjamin Wesolowski.
Alice is a CNRS researcher (chargée de recherche) at the University of Bordeaux. She is part of the Institut de Mathématiques de Bordeaux (IMB) and of the Lfant Inria team. She is interested in lattice-based cryptography, and more specifically in the hardness of algorithmic problems related to algebraically structured lattices.
A core goal of the NIST PQC competition is to produce public-key encryption (PKE) schemes which, even if attacked with a large-scale quantum computer, maintain the security guarantees needed by applications. The main security focus in the NIST PQC context has been IND-CCA security, but other applications demand that PKE schemes provide anonymity (Bellare et al., ASIACRYPT 2001), and robustness (Abdalla et al., TCC 2010). Examples of such applications include anonymous communication systems, cryptocurrencies, anonymous credentials, searchable encryption, and auction protocols. Almost nothing is known about how to build post-quantum PKE schemes offering these security properties. In particular, the status of the NIST PQC candidates with respect to anonymity and robustness is unknown.
In this work, we initiate a systematic study of anonymity and robustness for post-quantum PKE schemes. Firstly, we identify implicit rejection as a crucial design choice shared by most post-quantum KEMs, show that implicit rejection renders prior results on anonymity and robustness for KEM-DEM PKEs inapplicable, and transfer prior results to the implicit-rejection setting where possible. Secondly, since they are widely used to build post-quantum PKEs, we examine how the Fujisaki-Okamoto (FO) transforms (Fujisaki and Okamoto, Journal of Cryptology 2013) confer robustness and enhance weak anonymity of a base PKE.
We then leverage our theoretical results to study the anonymity and robustness of three NIST KEM finalists---Saber, Kyber, and Classic McEliece---and one alternate, FrodoKEM. Overall, our findings for robustness are definitive: we provide positive robustness results for Saber, Kyber, and FrodoKEM, and a negative result for Classic McEliece. Our negative result stems from a striking property of KEM-DEM PKE schemes built with the Classic McEliece KEM: for any message $m$, we can construct a single hybrid ciphertext $c$ which decrypts to the chosen $m$ under any Classic McEliece private key.
Our findings for anonymity are more mixed: we identify barriers to proving anonymity for Saber, Kyber, and Classic McEliece. We also found that in the case of Saber and Kyber, these barriers lead to issues with their IND-CCA security claims. We have worked with the Saber and Kyber teams to fix these issues, but they remain unresolved. On the positive side, we were able to prove anonymity for FrodoKEM and a variant of Saber introduced by D'Anvers et al. (AFRICACRYPT 2018). Our analyses of these two schemes also identified technical gaps in their IND-CCA security claims, but we were able to fix them.
This is a joint work with Paul Grubbs (University of Michigan) and Kenneth G. Paterson (ETH Zurich).
Varun Maram is a third-year PhD student in the Applied Cryptography group at ETH Zurich, supervised by Prof. Kenny Paterson. His current research interests lie in the area of post-quantum cryptography, both in the asymmetric and symmetric settings. Varun is also one of the primary submitters of "Classic McEliece", a finalist algorithm for PKE and key-establishment in the NIST PQC standardization process.
We provide the first constructions of non-interactive zero-knowledge and Zap arguments for NP based on the sub-exponential hardness of Decisional Diffie-Hellman against polynomial time adversaries (without use of groups with pairings).
Central to our results, and of independent interest, is a new notion of interactive trapdoor hashing protocols.
I have a broad interest in Theoretical Computer Science. Currently, I am working on Cryptography and Coding Theory. I am a fifth year Ph.D. student at Johns Hopkins University, fortunately co-advised by Abhishek Jain and Xin Li.
The continuous learning with errors (CLWE) problem was recently introduced by Bruna et al. (STOC 2021). They showed that its hardness implies infeasibility of learning Gaussian mixture models, while its tractability implies efficient Discrete Gaussian Sampling and thus asymptotic improvements in worst-case lattice algorithms. No reduction between CLWE and LWE is currently known, in either direction.
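To fix notation (recalled from Bruna et al., not stated in this abstract): a CLWE sample is a pair $(y, z)$ with $y \sim N(0, I_n)$ and $z = \gamma\langle y, w\rangle + e \bmod 1$, for a secret unit vector $w$ and small Gaussian noise $e$. A minimal numpy sketch, with illustrative parameter values only:

```python
import numpy as np

def clwe_samples(n=64, m=1000, gamma=2.0, beta=0.01, seed=0):
    """Sample m CLWE pairs (y, z) with z = gamma*<y, w> + e mod 1."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)              # secret direction: a unit vector
    Y = rng.standard_normal((m, n))     # public samples y ~ N(0, I_n)
    e = beta * rng.standard_normal(m)   # small Gaussian noise of width beta
    z = (gamma * Y @ w + e) % 1.0       # noisy labels, reduced mod 1
    return Y, z, w

# Given w, each label is determined by its sample up to small noise mod 1;
# in the null distribution the label would instead be uniform on [0, 1).
Y, z, w = clwe_samples()
```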
We propose four public-key encryption schemes based on the hardness of CLWE, with varying tradeoffs between decryption and security errors, and different discretization techniques. Some of our schemes are based on hCLWE, a homogeneous variant, which is no easier than CLWE. Our schemes yield a polynomial-time algorithm for solving hCLWE, and hence also CLWE, using a Statistical Zero-Knowledge oracle.
This is joint work with Andrej Bogdanov, Miguel Cueto Noval and Alon Rosen. E-print: https://eprint.iacr.org/2022/093
Charlotte is a PhD student in the Cryptography group at IST Austria under the supervision of Krzysztof Pietrzak.
In this paper, we study the relationship between security of cryptographic schemes in the random oracle model (ROM) and the quantum random oracle model (QROM). First, we introduce a notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption.
On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures etc. We also discuss applications of our lifting theorems to quantum query complexity.
Takashi Yamakawa is a researcher at NTT in Japan. He earned his PhD in Science from The University of Tokyo in 2017. His research focuses on post-quantum and quantum cryptography.
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds.
This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance.
In this work, we provide generic realisations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide:
a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014).
a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP.
a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP.
The purpose of this approach is to use remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within a poly-logarithmic factor of Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to a poly-logarithmic approximation factor, provided that the dual lattice also has minimal distance close to Minkowski's bound. Recent works have indeed achieved such decoders for certain lattices (Chor-Rivest, Barnes-Sloane), but these do not perfectly fit our needs, as their duals have poor minimal distance.
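As a toy illustration of the LIP setting itself (not of the constructions above): in the quadratic-form view, a basis $B$ is represented by its Gram matrix $G = BB^T$, and any unimodular $U$ yields the Gram matrix $UGU^T$ of an isomorphic lattice; search-LIP asks to recover such a $U$, distinguish-LIP to tell two isomorphism classes apart. A minimal numpy sketch with arbitrary toy parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unimodular(n, steps=12):
    """Product of random elementary integer row operations, so det(U) = +/-1."""
    U = np.eye(n, dtype=np.int64)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        U[i] += int(rng.integers(-1, 2)) * U[j]
    return U

n = 6
B = rng.integers(-3, 4, size=(n, n)).astype(np.int64)  # rows span some lattice
G = B @ B.T                                            # quadratic-form (Gram) view
U = random_unimodular(n)
G2 = U @ G @ U.T                                       # Gram matrix of the basis U @ B

# G and G2 represent isomorphic lattices; recovering U from (G, G2) is search-LIP.
assert np.array_equal(G2, (U @ B) @ (U @ B).T)
```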
Wessel is a PhD student in the Cryptology Group at CWI in Amsterdam under the supervision of Léo Ducas.
In 2020, Bernard and Roux-Langlois introduced the Twisted-PHS algorithm to solve Approx-SVP for ideal lattices over any number field, building on the PHS algorithm of Pellet-Mary, Hanrot and Stehlé (2019). They performed experiments on cyclotomic fields of prime conductor and degree at most 70, reporting the approximation factors reached in practice. The main obstacle for these experiments is the computation of a log-$\mathcal{S}$-unit lattice, which requires classical subexponential time.
In this paper, our main contribution is to extend these experiments to 210 cyclotomic fields of arbitrary conductor $m$ and degree up to 210. Building upon new results of Bernard and Kučera on the Stickelberger ideal, we construct a maximal set of independent $\mathcal{S}$-units, lifted from the maximal real subfield, using explicit Stickelberger generators obtained via Jacobi sums. We thus obtain full-rank log-$\mathcal{S}$-unit sublattices that serve as approximations of the full Tw-PHS lattice. Notably, in small dimensions the approximation factors we obtain match those of Bernard and Roux-Langlois using the original log-$\mathcal{S}$-unit lattice.
As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm of Cramer, Ducas and Wesolowski (2021), under the mild restriction that the plus part of the class number satisfies $h^{+}_{m}\leq O(\sqrt{m})$.
The full paper is available on ePrint: 2021/1384. This is joint work with Andrea Lesavourey, Tuong-Huy Nguyen and Adeline Roux-Langlois.
Olivier BERNARD is a research engineer at the cryptology laboratory of Thales, Gennevilliers, and is currently in the third year of a part-time PhD under the supervision of Pierre-Alain Fouque and Adeline Roux-Langlois. His research interests mainly focus on number theoretic cryptanalyses and more generally on algorithmic number theory.
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for SVP is due to Laarhoven [Laa16, PhD thesis] and runs in (heuristic) time $d^{0.2653d+o(d)}$. We present an improvement over Laarhoven's result: an algorithm with a (heuristic) running time of $d^{0.2570d+o(d)}$, where $d$ is the lattice dimension. We also present time-memory trade-offs, quantifying the amount of quantum memory and quantum random access memory our algorithm requires. The core idea is to replace Grover's algorithm, used in a key part of the sieving algorithm of [Laa16, PhD thesis], by a quantum random walk to which we add a layer of locality-sensitive filtering.
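For a rough sense of what the improved exponent buys, one can compare the leading-order costs $d^{0.2653d}$ and $d^{0.2570d}$ directly (the $o(d)$ terms and all constants are ignored, so these numbers are indicative only):

```python
import math

# Heuristic leading-order sieving costs d^(c*d), i.e. 2^(c * d * log2(d)).
for d in (400, 600, 800):
    old = 0.2653 * d * math.log2(d)
    new = 0.2570 * d * math.log2(d)
    print(f"d={d}: 2^{old:.0f} -> 2^{new:.0f} (saving ~2^{old - new:.0f})")
```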
2nd year PhD student at Inria Paris advised by André Chailloux.
We present an algorithm solving the ROS (Random inhomogeneities in an Overdetermined Solvable system of linear equations) problem in polynomial time for $\ell > \log p$ dimensions. Our algorithm can be combined with Wagner's attack, and leads to a sub-exponential solution for any dimension $\ell$, with the best complexity known so far. When concurrent executions are allowed, our algorithm leads to practical attacks against the unforgeability of blind signature schemes such as Schnorr and Okamoto–Schnorr blind signatures, threshold signatures such as GJKR and the original version of FROST, multisignatures such as CoSi and the two-round version of MuSig, partially blind signatures such as Abe–Okamoto, and conditional blind signatures such as ZGP17.
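The heart of the polynomial-time attack for $\ell \geq \lceil \log_2 p\rceil$ is a binary-decomposition trick: each of the first $\ell$ equations is set up with two admissible values for $c_i$, and the last vector is rigged so that choosing between them amounts to writing a hash value in binary. The sketch below is my own toy reconstruction (SHA-256 standing in for the random oracle, a small Mersenne prime for readability), not code from the paper:

```python
import hashlib

p, ell = 2**31 - 1, 31             # toy prime with ell >= ceil(log2(p))

def H(rho):                        # random oracle Z_p^ell -> Z_p (demo stand-in)
    return int.from_bytes(hashlib.sha256(",".join(map(str, rho)).encode()).digest(), "big") % p

def e(i, scale):                   # scale * (i-th standard basis vector)
    v = [0] * ell
    v[i] = scale
    return tuple(v)

inv = lambda x: pow(x, p - 2, p)

# Branch b=0: rho_i = e_i forces c_i = H(e_i); branch b=1: rho_i = 2*e_i forces c_i = H(2*e_i)/2.
x0 = [H(e(i, 1)) for i in range(ell)]
x1 = [H(e(i, 2)) * inv(2) % p for i in range(ell)]
assert all(a != b for a, b in zip(x0, x1))   # a collision happens only w.p. ~ ell/p

# Rig the last vector so that <rho_last, c> = sum_i 2^i*b_i + K whenever c_i = x_i^{b_i}.
rho_last = tuple(pow(2, i, p) * inv((x1[i] - x0[i]) % p) % p for i in range(ell))
K = sum(pow(2, i, p) * x0[i] % p * inv((x1[i] - x0[i]) % p) for i in range(ell)) % p

t = (H(rho_last) - K) % p          # target; fits in ell bits since t < p <= 2^ell
b = [(t >> i) & 1 for i in range(ell)]
c = [x1[i] if b[i] else x0[i] for i in range(ell)]
rhos = [e(i, 2 if b[i] else 1) for i in range(ell)] + [rho_last]

# All ell+1 equations <rho, c> = H(rho) hold, so the ROS system is solvable.
assert all(sum(r * ci for r, ci in zip(rho, c)) % p == H(rho) for rho in rhos)
```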
This is joint work with Fabrice Benhamouda, Tancrède Lepoint, Julian Loss, Mariana Raykova.
Michele is a postdoc at UC Berkeley under the supervision of Alessandro Chiesa.
Prior to that, he was a PhD student at École Normale Supérieure, under the supervision of Georg Fuchsbauer. He is interested in the intersection between authentication and anonymity.
In the past, he contributed to Python, Debian, and Tor.
The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is ``sufficiently unstructured'' (such as fixed-length SHA-2) should suffice. Finally, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors.
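For reference, here is the transform in its most familiar instantiation: a Schnorr-style signature where the verifier's random challenge is replaced by a hash of the transcript. This is a minimal sketch with toy parameters; the hash is deliberately a pluggable argument, since the question studied below is precisely how simple that function may be (SHA-256 here is a placeholder, not a claim about this paper's instantiations).

```python
import hashlib, secrets

p = 1019                          # toy subgroup order (prime)
q = 2 * p + 1                     # 2039, a safe prime
g = 4                             # a square mod q, hence of order p
H = lambda data: int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def keygen():
    sk = secrets.randbelow(p - 1) + 1
    return sk, pow(g, sk, q)

def sign(sk, msg, hash_fn=H):
    r = secrets.randbelow(p - 1) + 1
    a = pow(g, r, q)                           # prover's first message
    c = hash_fn(str(a).encode() + b"|" + msg)  # Fiat-Shamir: challenge = hash(transcript)
    return a, (r + c * sk) % p                 # (commitment, response)

def verify(pk, msg, sig, hash_fn=H):
    a, z = sig
    c = hash_fn(str(a).encode() + b"|" + msg)
    return pow(g, z, q) == a * pow(pk, c, q) % q

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
```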
In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-$p$ or bit decomposition. In some cases, we make idealized assumptions (i.e., we invoke the generic group model), while in others, we prove soundness in the plain model.
On the negative side, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
Joint work with Yilei Chen, Alex Lombardi and Fermi Ma. Eprint: https://eprint.iacr.org/2020/915
Willy Quach is a 5th year PhD student at Northeastern University advised by Daniel Wichs.
Σ-Protocols provide a well-understood basis for secure algorithmics. Compressed Σ-protocol theory (CRYPTO 2020) was introduced as a strengthening yielding protocols with low communication complexity. It is built around basic Σ-protocols for proving that a compactly committed (long) vector satisfies a linear constraint. The communication complexity of these protocols is first compressed, from linear down to logarithmic, using a recursive ``folding-technique'' adapted from Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018), at the expense of logarithmic rounds. Proving in ZK that the secret vector satisfies a given constraint -- captured by a (non-linear) circuit -- is then by (blackbox) reduction to the linear case, via arithmetic secret-sharing techniques adapted from MPC.
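To make the folding step concrete, here is a toy sketch of a single halving round for proving knowledge of an opening of a non-hiding multi-generator Pedersen-style commitment; this is my simplification of the general technique, not the paper's protocol. On challenge $c$, the witness folds as $x' = x_L + c\,x_R$ and the generators as $g_i' = g_{L,i}^c\,g_{R,i}$, while the verifier recomputes the folded commitment $P' = A\cdot P^c\cdot B^{c^2}$ from the cross terms $A = g_R^{x_L}$ and $B = g_L^{x_R}$:

```python
import secrets

p = 1019                           # subgroup order (prime)
q = 2 * p + 1                      # 2039, a safe prime

def rand_gen():                    # random element of the order-p subgroup of squares
    while True:
        g = pow(secrets.randbelow(q - 2) + 2, 2, q)
        if g != 1:
            return g

def commit(gens, vec):             # P = prod_i g_i^{v_i} mod q
    P = 1
    for g, v in zip(gens, vec):
        P = P * pow(g, v, q) % q
    return P

n = 8
gs = [rand_gen() for _ in range(n)]
x = [secrets.randbelow(p) for _ in range(n)]
P = commit(gs, x)

# Prover: cross terms, then fold the witness with the challenge.
gL, gR, xL, xR = gs[:n//2], gs[n//2:], x[:n//2], x[n//2:]
A, B = commit(gR, xL), commit(gL, xR)
c = secrets.randbelow(p)           # verifier's challenge (interactive version)
x_folded = [(a + c * b) % p for a, b in zip(xL, xR)]

# Verifier: fold generators and commitment, then check the half-length opening.
g_folded = [pow(gl, c, q) * gr % q for gl, gr in zip(gL, gR)]
P_folded = A * pow(P, c, q) % q * pow(B, c * c % p, q) % q
assert commit(g_folded, x_folded) == P_folded
```

Recursing $\log_2 n$ times leaves a single scalar to send at the end, which is where the logarithmic communication (at the expense of logarithmic rounds) comes from.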
This abstract modular theory has been instantiated from a variety of cryptographic hardness assumptions, e.g., the discrete-logarithm, strong-RSA, and knowledge-of-exponent assumptions. In two separate works, it has also been generalized to a bilinear circuit model and instantiated from the ring-SIS assumption. Thus, for all these platforms, compressed Σ-protocol theory yields circuit zero-knowledge protocols with (poly)logarithmic communication.
All in all, our theory should more generally be useful for modular (``plug-&-play'') design of practical cryptographic protocols; this is further evidenced by our separate work on proofs of partial knowledge.
Joint work with: Ronald Cramer, Serge Fehr, Lisa Kohl and Matthieu Rambaud
Note the changed time.
Thomas Attema is a researcher at the applied research institute TNO in The Netherlands, where he works on (applied) multi-party computation, zero-knowledge proof systems and post-quantum cryptography. Moreover, he is pursuing a part-time PhD in the Cryptology group of CWI under the supervision of Ronald Cramer.
In this talk, we study the passive security model of approximate homomorphic encryption schemes. We present new passive security definitions for homomorphic encryption in the approximate computation setting, by naturally extending the traditional notion of IND-CPA security. We propose both indistinguishability-based and simulation-based variants, as well as restricted versions of the definitions that limit the order and number of adversarial queries (as may be enforced by some applications). We prove implications and separations among different definitional variants, showing a hierarchy of approximate homomorphic encryption schemes. Our models provide a solid theoretical basis for the security evaluation of approximate homomorphic encryption schemes (against passive attacks).
As the main application of our passive security models, we present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attack is both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. We implemented and tested the attack against major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib, PALISADE and Lattigo, and when computing several functions that often arise in applications of the CKKS scheme to privacy-preserving machine learning. Our attack shows that the traditional formulation of IND-CPA security (or indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes.
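To see why sharing approximate decryptions is dangerous, here is a toy plain-LWE analogue of the key-recovery principle (my simplification; the actual attack works on CKKS ring ciphertexts): the shared value $m+e$ lets the attacker compute $\langle a, s\rangle = b - (m+e)$ exactly, so $n$ shared decryptions give $n$ exact linear equations in the secret $s$.

```python
import numpy as np

rng = np.random.default_rng(2)
q, n = 40961, 16                       # toy prime modulus and dimension
s = rng.integers(-2, 3, n) % q         # small secret key

def enc_and_share(m):
    """Encrypt m, and also return what an approximate decryption reveals."""
    a = rng.integers(0, q, n)
    e = int(rng.integers(-3, 4))       # small noise, left inside the decryption
    b = (int(a @ s) + m + e) % q
    return a, b, (m + e) % q           # (ciphertext (a, b), shared decryption result)

def solve_mod(rows, targets):
    """Gauss-Jordan elimination over Z_q (q prime); assumes the system is regular."""
    M = [[int(v) % q for v in r] + [int(t) % q] for r, t in zip(rows, targets)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        d = pow(M[col][col], q - 2, q)
        M[col] = [v * d % q for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % q for v, w in zip(M[r], M[col])]
    return [row[-1] for row in M]

rows, targets = [], []
for _ in range(n):
    a, b, dec = enc_and_share(m=7)
    rows.append(a)
    targets.append((b - dec) % q)      # = <a, s> mod q, an exact equation in s

assert solve_mod(rows, targets) == [int(v) for v in s]
```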
We further discuss possible modifications to CKKS that may serve as countermeasures to our attacks, and we also discuss open problems of efficiently achieving provable security under our new definitions.
This talk is based on the joint work with Daniele Micciancio.
Baiyu Li graduated with a Ph.D. degree from UCSD in 2021, advised by Daniele Micciancio. His research interests include formal methods for secure computation protocol design and analysis, as well as lattice-based cryptography. Previously he received his Master's and Bachelor's degrees in CS and Pure Math from the University of Waterloo.
In this talk we will give a quantum reduction from finding short codewords in a random linear code to decoding in the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes Stehlé–Steinfeld–Tanaka–Xagawa's re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is much coarser than the Euclidean metric, and this adaptation required several new ingredients. For instance, for the reduction to be meaningful in the Hamming metric, it is necessary to choose a very large decoding radius, which in many cases goes beyond the radius where decoding is unique. Another crucial step in the analysis of the reduction is the choice of the errors fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used in the reduction for codes. Instead, we choose the uniform distribution over errors of a fixed weight, bring in orthogonal-polynomial tools to perform the analysis, and add an amplitude-amplification step to obtain the aforementioned result.
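A small illustration of the last point, under my reading of "spread out": the weight of a Bernoulli($\tau$) error over $\mathbb{F}_2^n$ fluctuates in a window of width about $\sqrt{n\tau(1-\tau)}$, whereas the reduction's errors have one exact weight:

```python
import numpy as np

rng = np.random.default_rng(3)
n, w, trials = 1024, 128, 10000
tau = w / n

bern_weights = rng.binomial(n, tau, size=trials)    # Bernoulli errors: Binomial weight

def fixed_weight_error(n, w):                       # uniform error of exact weight w
    e = np.zeros(n, dtype=np.int8)
    e[rng.choice(n, size=w, replace=False)] = 1
    return e

print(f"Bernoulli weight: mean {bern_weights.mean():.1f}, std {bern_weights.std():.1f}")
print("fixed-weight sample weight:", int(fixed_weight_error(n, w).sum()))
```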
The seminar will be at 13:00 UK time, 14:00 FR time
Thomas Debris-Alazard is a research scientist (chargé de recherche) at Inria in the Grace project-team. He was previously a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. He received his PhD from Inria under the supervision of Jean-Pierre Tillich.