
Ebook: Enhancing Cryptographic Primitives with Techniques from Error Correcting Codes

The NATO Advanced Research Workshop on Enhancing Cryptographic Primitives with Techniques from Error Correcting Codes was organized on October 6–9, 2008 in Veliko Tarnovo, Bulgaria, by the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences, in cooperation with COSIC, KU Leuven, within the framework of the NATO Science for Peace and Security program. The workshop was sponsored by NATO and the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences.
Co-directors of the NATO ARW were Prof. Dr. Bart Preneel (Katholieke Universiteit Leuven and IBBT, Belgium) and Prof. Dr. Sc. Stefan Dodunekov (Institute of Mathematics and Informatics, Bulgarian Academy of Sciences). The program committee of the workshop invited 14 key speakers and 29 participants, all of whom gave presentations and took part in the discussions. The ARW had 43 attendees from 16 countries.
The Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences was responsible for the local organization. We would especially like to thank Tsonka Baicheva, Stela Zhelezova, Yuri Borissov, Nikolai Manev and all the members of the department of Mathematical Foundations of Informatics for the perfect organization of the workshop. All the participants enjoyed the intensive scientific and social program.
We would also like to thank all the key speakers and participants for their interesting talks and for their contributions to the proceedings. We believe that we have achieved our goal, namely to gather international experts from both fields – coding theory and cryptography – in order to exchange ideas and define new challenges and open problems for future research. These proceedings present the state of the art in current research on cryptography applying techniques and results from coding theory.
The proceedings of the ARW are divided into two parts. The first part contains the papers based on the lectures of the invited speakers, and the second part contains the papers based on the talks of the participants in the workshop. The order of the papers in the proceedings follows the scientific program of the workshop.
February, 2009
Bart Preneel, Stefan Dodunekov, Vincent Rijmen, Svetla Nikova
The purpose of an authentication code is to give proof that a message is authentic, i.e. it was not sent by an imposter and it has not been altered on its way to the receiver. Secrecy could also be an issue here, but in many cases the receiver just wants to be sure that the message is genuine.
Unlike digital signature schemes, authentication codes aim at unconditional security: the opponent is assumed to have unlimited computing power. Many constructions of authentication codes are based on error correcting codes. After an introduction, an overview of these techniques will be given.
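As a concrete illustration of the coding connection, here is a minimal sketch (our own toy example, not a construction from the lecture) of an unconditionally secure polynomial-evaluation authentication code; tagging a message amounts to evaluating a Reed-Solomon-style polynomial at a secret point, and the prime P and block encoding are illustrative choices.

```python
# Toy polynomial-evaluation authentication code over GF(P).
# The one-time key (a, b) gives unconditional security: a substitution
# forgery succeeds with probability at most k/P for k message blocks.
import secrets

P = (1 << 61) - 1  # a Mersenne prime; arithmetic is in GF(P)

def keygen():
    return secrets.randbelow(P), secrets.randbelow(P)  # key (a, b)

def tag(key, message_blocks):
    a, b = key
    t, x = b, 1
    for m in message_blocks:
        x = (x * a) % P            # x = a^i
        t = (t + m * x) % P        # t = b + sum m_i * a^i
    return t

key = keygen()
msg = [314, 159, 265]                     # message split into blocks < P
t = tag(key, msg)
assert tag(key, msg) == t                 # receiver recomputes and compares
assert tag(key, [314, 159, 266]) != t     # altered message rejected (w.h.p.)
```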
Digital communications engineers strive to transmit data in a more efficient and reliable manner, while cryptographers endeavor to offer both data confidentiality and integrity. In the 1970s and 1980s, researchers in digital communications successfully developed techniques for multi-level coded modulation that simultaneously allow the correction of transmission errors and the demodulation of signals upon arrival at the receiver. Reminiscent of coded modulation, signcryption is a cryptographic technique developed to “kill two birds with one stone”, namely to provide both data confidentiality and origin unforgeability with reduced overhead. In this article the author recounts his unique experience in witnessing the evolution and perfection of coded modulation techniques and, more importantly, how that experience inspired him to embark on the journey of searching for efficient techniques for combining public key encryption and digital signatures; the author also attempts to illuminate future directions for efficient, hybrid cryptographic techniques.
The goal of this paper is to present a survey on certain recent developments in the theory of unconditional Multi Party Computation (MPC) and Secret Sharing Schemes (SSS) related to Error Correcting Codes.
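As background, secret sharing and error correcting codes are tightly linked: Shamir's scheme is the Reed-Solomon construction in disguise, with shares playing the role of codeword symbols. The toy sketch below (our illustration, not taken from the survey; the modulus and parameters are arbitrary) shows a t-out-of-n Shamir scheme over a prime field.

```python
# Toy Shamir secret sharing over GF(P): shares are Reed-Solomon codeword
# symbols, and reconstruction is Lagrange interpolation at x = 0.
import secrets

P = 2**127 - 1  # prime modulus (a Mersenne prime)

def share(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0; inverses via Fermat's little theorem
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

shares = share(secret=12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice
```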
A short survey of a recent attack on the filter generator due to Rønjom and Helleseth is presented, and the attack is generalized to the nonlinear combiner generator. The original attack uses linear combinations of bits in the binary keystream to arrive at a linear equation system that can be solved for the secret key. Here an argument shows that, when adapting the attack to the nonlinear combiner, using linear combinations over an extension field is sometimes essential to make the attack work or to reduce its complexity. Even though a correlation attack may work better for a badly chosen Boolean function in the combiner generator, the attack presented in this paper does not depend on the Boolean function in the same way and can hence be applied to some correlation-immune combiner generators.
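To fix ideas, a nonlinear combiner generator produces each keystream bit by applying a Boolean function to the outputs of several LFSRs; the attack then turns keystream bits into linear equations in the initial state. The sketch below shows the object under attack, with register lengths, taps and combining function chosen by us purely for illustration (the function is deliberately simple and correlation-weak, unlike the correlation-immune functions discussed above).

```python
# Toy nonlinear combiner generator (illustrative parameters only).
def lfsr(init, taps):
    # Fibonacci LFSR: output the first bit, feed back the XOR of the taps
    state = list(init)
    while True:
        yield state[0]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]

def keystream(n):
    r1 = lfsr([1, 0, 1, 1, 0], [0, 2])   # register lengths 5, 3 and 4;
    r2 = lfsr([0, 1, 1], [0, 1])         # taps chosen for illustration,
    r3 = lfsr([1, 1, 0, 1], [0, 3])      # not for maximal period
    combine = lambda x1, x2, x3: x1 ^ (x2 & x3)  # nonlinear Boolean function
    return [combine(next(r1), next(r2), next(r3)) for _ in range(n)]

print(keystream(16))
```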
We will briefly survey the well known criteria used to rate an S-box, including nonlinearity, XOR table, SAC, etc. We will outline some connections between the functions used in S-boxes and certain error-correcting codes. The class of codes includes 2-error-correcting BCH-like codes. The class of functions includes APN functions. We compare the ratings of 8×8 S-boxes arising from various well known functions, including some new APN functions.
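Two of the mentioned ratings are easy to compute exhaustively for small S-boxes. The sketch below (our example; the 4-bit PRESENT S-box is used only as a convenient, well-known table) computes the largest XOR-table entry, i.e. the differential uniformity, where the value 2 would indicate an APN function, and the nonlinearity via Walsh correlations.

```python
# Rating a small S-box: differential uniformity and nonlinearity.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box
N = 4

def parity(v):
    return bin(v).count('1') & 1

def differential_uniformity(s):
    worst = 0
    for a in range(1, 1 << N):                 # input difference
        counts = [0] * (1 << N)
        for x in range(1 << N):
            counts[s[x ^ a] ^ s[x]] += 1       # output difference
        worst = max(worst, max(counts))
    return worst                               # 2 would mean APN

def nonlinearity(s):
    best = 0
    for a in range(1 << N):                    # input mask
        for b in range(1, 1 << N):             # nonzero output mask
            corr = sum(1 if parity(x & a) == parity(s[x] & b) else -1
                       for x in range(1 << N))
            best = max(best, abs(corr))
    return (1 << (N - 1)) - best // 2

print(differential_uniformity(SBOX), nonlinearity(SBOX))   # 4 and 4
```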
We illustrate how coding theory was applied in the design of the cryptographic hash function LANE [8]. The generic structure of the LANE compression function could potentially be vulnerable to a class of meet-in-the-middle attacks. While difficult to avoid at first sight, restating the problem in the domain of error correcting codes naturally leads to a simple and elegant solution. This ensures that these attacks do not apply to LANE.
While the design of block ciphers that are provably secure against all attacks still eludes us, it is possible to make designs that resist differential and linear cryptanalysis, two very important general methods of cryptanalysis of symmetric algorithms. The Wide Trail design strategy makes it possible to construct ciphers for which security against differential cryptanalysis is easy to prove. This is achieved by choosing linear maps for which lower bounds on the number of active S-boxes are easy to show. These linear maps can be derived from MDS codes in order to obtain the best possible diffusion. The same methods can be used to obtain provable security against linear cryptanalysis, in particular when linear cryptanalysis is defined in terms of the trace map.
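The key quantity inherited from MDS codes is the branch number of the diffusion layer: the minimum, over nonzero inputs, of the number of active S-box positions across input and output of the linear map. The sketch below uses a toy 2×2 matrix over GF(2^4) of our own choosing (not a matrix from an actual cipher) and verifies by exhaustive search that an MDS-derived matrix attains the optimal branch number.

```python
# Branch number of a 2x2 linear map over GF(2^4), field poly x^4 + x + 1.
def gmul(a, b):                      # multiplication in GF(2^4)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13                # reduce by x^4 + x + 1
        b >>= 1
    return r

M = [[1, 1], [1, 2]]                 # MDS: no zero entry, det = 3 != 0

def apply(M, v):
    return [gmul(M[i][0], v[0]) ^ gmul(M[i][1], v[1]) for i in range(2)]

def branch_number(M):
    best = 99
    for x0 in range(16):
        for x1 in range(16):
            if x0 == x1 == 0:
                continue
            y = apply(M, [x0, x1])
            best = min(best, sum(1 for s in [x0, x1] + y if s != 0))
    return best

print(branch_number(M))   # 3, the MDS optimum k + 1 for a 2x2 matrix
```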
MDS codes are widely used in constructive coding theory. However, some examples indicate that non-MDS codes with parameters close to those of MDS codes can yield better results. The application of the family of Near-MDS codes seems promising for various cryptographic problems.
Recently, many new almost perfect nonlinear (APN) and almost bent (AB) functions from GF(2^n) to GF(2^n) have been constructed. Two codes can be associated with APN and AB functions, which is useful for distinguishing such functions up to equivalence. We give a short proof about the dimension of one of these codes, and we slightly extend the known concepts of equivalence to the more general case of functions from GF(2^n) to GF(2^m).
We recall the two different notions of algebraic immunity of vectorial functions: the basic algebraic immunity and the graph algebraic immunity. We introduce a new one, the component algebraic immunity, which helps in studying the other two. We study the tightness of the bounds on the first two notions and prove bounds relating all three. We recall the known bounds on the r-th order nonlinearity of a vectorial function, given its basic algebraic immunity, and why the basic algebraic immunity is not a relevant parameter when the number of output bits of the vectorial function is not small enough and/or when the S-box is used in a block cipher. This leads us to show bounds on the r-th order nonlinearity, given the graph algebraic immunity, which we deduce from similar bounds involving the component algebraic immunity.
This paper presents a generic framework for the design of stream ciphers based on the joint employment of pseudo-randomness, randomness and coding, together with two particular settings called SCF-E and SCF-W: the first is based on a simple homophonic encoding via the embedding of random bits; the second employs wire-tap channel coding. In both schemes the decoding complexities with and without the secret key are extremely different, which is the origin of the security. The developed frameworks have the potential to ensure that the complexity of recovering the secret key in a known-plaintext attack scenario is close to the complexity of recovering it by exhaustive search. The proposed design approach can be seen as a trade-off between increased security and decreased communications efficiency, which in a number of scenarios is a suitable one.
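The homophonic-embedding idea of the first setting can be pictured with a very reduced sketch (ours, with simplified framing and parameters; the actual SCF-E scheme is more involved): each plaintext bit is padded with fresh random bits before being XORed with the keystream, so equal plaintexts map to many different ciphertexts.

```python
# Minimal sketch of homophonic encoding via random-bit embedding.
import secrets

R = 3  # random bits embedded per plaintext bit (illustrative choice)

def encrypt(plain_bits, keystream):
    expanded = []
    for b in plain_bits:
        expanded.append(b)
        expanded.extend(secrets.randbelow(2) for _ in range(R))
    return [p ^ k for p, k in zip(expanded, keystream)]

def decrypt(cipher_bits, keystream):
    expanded = [c ^ k for c, k in zip(cipher_bits, keystream)]
    return expanded[::R + 1]   # keep every (R+1)-th bit, drop the randomness

ks = [secrets.randbelow(2) for _ in range(40)]  # stands in for a PRNG keystream
msg = [1, 0, 1, 1, 0]
assert decrypt(encrypt(msg, ks), ks) == msg
```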
Here we briefly survey the state-of-the-art results on the nonlinearity of resilient Boolean functions. We mainly study the existing constructions based on coding-theoretic techniques.
In this paper we use a well-known connection between the nonlinearity of vectorial Boolean functions and the minimum distance of certain linear codes to construct functions with higher nonlinearity than previously known for specific dimensions.
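The connection used here can be made concrete: the nonlinearity of a vectorial function F equals the minimum Hamming distance between the truth tables of its nonzero component functions b·F and the first-order Reed-Muller code of affine functions. The toy check below (our illustration, not the paper's construction) verifies this for the cube map over GF(2^3), which is almost bent.

```python
# Nonlinearity of F(x) = x^3 over GF(2^3) as a code-distance computation.
N = 3

def gf8_mul(a, b):                         # multiplication in GF(2^3)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 8:
            a ^= 0b1011                    # reduce by x^3 + x + 1
        b >>= 1
    return r

F = [gf8_mul(x, gf8_mul(x, x)) for x in range(8)]   # truth table of x^3

def parity(v):
    return bin(v).count('1') & 1

# truth tables of all affine functions <a,x> + c, i.e. the RM(1,3) code
affine = [[parity(a & x) ^ c for x in range(8)]
          for a in range(8) for c in (0, 1)]

nl = min(sum(parity(b & F[x]) != aff[x] for x in range(8))
         for b in range(1, 8) for aff in affine)
print(nl)   # 2 = 2^(n-1) - 2^((n-1)/2), the optimal (AB) value for n = 3
```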
The McEliece cryptosystem is a public-key cryptosystem based on coding theory that has successfully resisted cryptanalysis for thirty years. The original version, based on Goppa codes, is able to guarantee a high level of security, and is faster than competing solutions such as RSA. Despite this, it has rarely been considered in practical applications, due to two major drawbacks: i) the large size of the public key and ii) the low transmission rate. Several attempts have been made to overcome these drawbacks, but the adoption of most families of codes has not been possible without compromising the system security. Low-Density Parity-Check (LDPC) codes are state-of-the-art forward error correcting codes that approach the Shannon limit with limited complexity. Quasi-Cyclic (QC) LDPC codes are a particular class of LDPC codes that combine the low-complexity encoding of QC codes with high-performance, low-complexity decoding techniques based on the belief propagation principle. In a previous work it was proposed to adopt a particular family of QC-LDPC codes in the McEliece cryptosystem to reduce the key size and increase the transmission rate. It has been shown that this variant is able to counter all the classic attacks, as well as attacks that compromise the security of previous LDPC-based versions. Recently, however, new attacks have been found that exploit a flaw in the transformation from the private key to the public one. Such attacks can be effectively countered by changing the form of some constituent matrices, without altering the system parameters. This change has only marginal effects on the complexity of the cryptosystem, which preserves its security against all known attacks. This work gives an overview of the QC-LDPC code-based McEliece cryptosystem and its cryptanalysis. Two recent versions are considered, and their ability to counter all currently known attacks is discussed. A third version able to reach a higher security level is also proposed. Finally, it is shown that the new QC-LDPC code-based cryptosystem scales favorably when larger keys are needed, as very recently pointed out by the successful implementation of an attack against the original cryptosystem.
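For readers new to the system, the trapdoor structure can be shown at toy scale. The sketch below is our illustration with the [7,4] Hamming code correcting a single error; real instances use large Goppa or, as in this paper, QC-LDPC codes, and this naive miniature is of course completely insecure. It demonstrates the public key G' = S·G·P and the decryption chain: undo P, decode, undo S.

```python
# Toy McEliece with the [7,4] Hamming code (corrects t = 1 error).
import numpy as np

G = np.array([[1,0,0,0,1,1,0],   # systematic generator matrix
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix, G.H^T = 0
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def gf2_inv(A):                   # matrix inverse over GF(2), Gauss-Jordan
    n = len(A)
    M = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r, c])  # StopIteration if singular
        M[[c, piv]] = M[[piv, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

rng = np.random.default_rng(1)
while True:                       # random invertible scrambler S
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except StopIteration:
        pass
P = np.eye(7, dtype=int)[rng.permutation(7)]   # random permutation matrix
G_pub = S @ G @ P % 2                          # public key hides G's structure

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1                     # random weight-1 error
    return (m @ G_pub + e) % 2

def decrypt(c):
    y = c @ P.T % 2                            # undo the permutation
    s = H @ y % 2                              # syndrome of the single error
    if s.any():
        j = next(j for j in range(7) if (H[:, j] == s).all())
        y[j] ^= 1                              # correct the error
    return y[:4] @ S_inv % 2                   # unscramble the information part

m = np.array([1, 0, 1, 1])
assert (decrypt(encrypt(m)) == m).all()
```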
A mapping f : GF(q) → GF(q) is called planar if for every nonzero a ∊ GF(q) the difference mapping D_f,a : x ↦ f(x + a) − f(x) is a bijection of GF(q).
Irregular clocking of Linear Finite State Machines (LFSMs) such as LFSRs is widely employed in stream ciphers. A special form of irregular clocking, known as jumping, was introduced in [1,2] and further developed in [3]. Stream ciphers based on jumping appeared as candidates in the eSTREAM competition. One of the identified problems of these ciphers concerns the presence of linear relations of low degree in the keystream of the cipher, the distribution of which appears to be heavily biased. In [4] this issue is addressed, leading to an O(n2^n) algorithm to determine all linear relations. This paper addresses the problem of finding all linear relations of a given construction efficiently. It is shown that the coefficients of the linear relations are symmetric Boolean functions of the jump control bits, leading to a polynomial time and memory algorithm to determine the number of linear relations and the highest bias value. The results show that it is feasible to determine the linear equivalence bias for LFSMs with characteristic polynomials of degree 100 and higher.
The algebraic immunity AI(f) of a Boolean function f is defined as the minimum degree of all annihilators of f and of f ⊕ 1, i.e. the smaller of the two values. A high algebraic immunity is a necessary condition for Boolean functions used in stream ciphers to resist algebraic attacks. In this paper, we introduce the notion of extended algebraic immunity.
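On small examples the algebraic immunity can be computed by plain linear algebra over GF(2): g annihilates f exactly when g vanishes on the support of f. The sketch below (our illustration; realistic sizes need dedicated algorithms) finds the minimum annihilator degree of f and f ⊕ 1 and takes the smaller value.

```python
# Brute-force algebraic immunity of a small Boolean function.
def gf2_rank(rows):
    pivots = {}                                 # leading bit -> pivot value
    for r in rows:
        v = int(''.join(map(str, r)), 2)
        while v:
            lead = v.bit_length()
            if lead in pivots:
                v ^= pivots[lead]               # reduce by existing pivot
            else:
                pivots[lead] = v
                break
    return len(pivots)

def min_annihilator_degree(truth, n):
    support = [x for x in range(1 << n) if truth[x]]
    for d in range(n + 1):
        monos = [m for m in range(1 << n) if bin(m).count('1') <= d]
        # one row per point of the support, one column per monomial
        rows = [[1 if (m & ~x) == 0 else 0 for m in monos] for x in support]
        if gf2_rank(rows) < len(monos):         # nontrivial kernel exists
            return d
    return n

def algebraic_immunity(truth, n):
    return min(min_annihilator_degree(truth, n),
               min_annihilator_degree([1 - b for b in truth], n))

maj = [1 if bin(x).count('1') >= 2 else 0 for x in range(8)]  # majority, n = 3
print(algebraic_immunity(maj, 3))   # 2, the maximum value ceil(n/2)
```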
Building on a recent paper which defined complementary array pairs of types I, II, and III, this paper further characterises a class of type-I pairs defined over the alphabet {-1,0,1} and shows that a subset of these pairs is local-unitary-equivalent to a subset of the type-III pairs defined over a bipolar ({1,-1}) alphabet. Enumerations of the distinct structures in this class and its subset are given.
In the present paper we study properties of perfectly balanced Boolean functions. Based on the concept of Boolean function barrier, we propose a novel approach to construct large classes of perfectly balanced Boolean functions.
Algebraic attacks are efficient if one can express the cipher as a system of multivariate algebraic equations whose solution gives the secret key. The complexity of the attack depends on the degree of these equations. For any n-variable Boolean function f, it is always possible to find a Boolean function g ≠ 0 with degree at most ⌈n/2⌉ such that the product f·g has degree at most ⌈n/2⌉.
Recently a new theorem has been introduced which gives a theoretical upper bound on the degree of the product of two monomial trace functions. This bound is much lower than the trivial general bound, the sum of the degrees of the two functions.
We present an investigation of the trace representation of the annihilators of functions with Kasami and Niho exponents. Our goal is to determine, if possible, an abstract formula for the annihilators from which one can immediately obtain the algebraic immunity of the Boolean power functions.
In 1993, Bossert, Mahr and Heilig proposed a cryptographic scheme based on error-correcting codes that simultaneously provides error correction and message authentication. They also sketched an idea of how the proposed scheme could be analyzed. Elaborating on this idea, we reveal a weakness in a concrete implementation which can be exploited by an active intruder to launch an attack with a considerable probability of success. Countermeasures against this probabilistic attack are given.
Since 1996 it has been known that widely used cryptographic algorithms, even those with formally proven security, can be broken by inducing errors during the encryption process. The threat of successful fault analysis is therefore real, and countermeasures have to be taken. One approach to achieving security against fault analysis is to use error detection and error correction codes.
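As a minimal illustration of the code-based countermeasure idea (our toy sketch, not a scheme from the paper): an XOR-parity byte, i.e. a distance-2 error detecting code over bytes, is predicted through a toy round function and checked before any output is released, so every single-byte fault is caught and the faulty ciphertext is suppressed.

```python
# Toy fault detection: predict a parity byte through the computation.
import secrets

def xor_parity(data):
    p = 0
    for b in data:
        p ^= b
    return p

def toy_round(state, key):
    return bytes(b ^ key for b in state)   # stand-in for a real round function

def protected_encrypt(state, key, fault_at=None):
    predicted = xor_parity(state)          # parity is linear, so it can be
    if len(state) % 2:                     # tracked through the XOR round
        predicted ^= key
    out = bytearray(toy_round(state, key))
    if fault_at is not None:
        out[fault_at] ^= secrets.randbelow(255) + 1   # induced nonzero fault
    if xor_parity(out) != predicted:
        raise RuntimeError("fault detected: output suppressed")
    return bytes(out)

print(protected_encrypt(b"attack at dawn", 0x5A))          # normal operation
try:
    protected_encrypt(b"attack at dawn", 0x5A, fault_at=3) # induced fault
except RuntimeError as e:
    print(e)
```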
When choosing the most appropriate error control code for a given application, we are interested in its error-control capabilities. Measures of these capabilities are the probability of undetected error and the probability of correct decoding. In order to check these characteristics of a code, we need its weight distribution, the weights of the coset leaders, and the weight distributions of the cosets. A list of all binary cyclic codes of length up to 33, some binary distance-optimal codes of length up to 33, and all cyclic redundancy check codes with up to 10 bits of redundancy has been prepared, and all the weight spectra necessary for checking their error control performance have been calculated. Then, depending on the conditions under which the code will be used, its error control performance can be checked to any precision in linear time, and the most suitable code can be chosen.
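For pure error detection on a binary symmetric channel, the probability of undetected error follows directly from the weight distribution: an error pattern goes undetected exactly when it equals a nonzero codeword, so P_ue(p) = Σ_{i≥1} A_i p^i (1−p)^(n−i). A small sketch (our example, using the well-known weight enumerator 1 + 7z^3 + 7z^4 + z^7 of the [7,4] Hamming code) evaluates this probability.

```python
# Probability of undetected error of a binary linear code on a BSC,
# computed from the weight distribution A_0..A_n.
def prob_undetected(A, p):
    n = len(A) - 1
    return sum(Ai * p**i * (1 - p)**(n - i)
               for i, Ai in enumerate(A) if i > 0)

A_hamming = [1, 0, 0, 7, 7, 0, 0, 1]   # [7,4] Hamming weight distribution
for p in (1e-2, 1e-3, 1e-4):
    print(p, prob_undetected(A_hamming, p))
```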