Monday, 2 May 2016

A Short List of Candidate One Way Functions - Cryptography

As we said above, the most basic primitive for cryptographic applications is a one-way function, which is “easy” to compute but “hard” to invert. (For public key encryption, it must also have a trapdoor.) By “easy”, we mean that the function can be computed by a probabilistic polynomial time algorithm, and by “hard” that any probabilistic polynomial time (PPT) algorithm attempting to invert it will succeed with only “small” probability (where the probability ranges over the elements in the domain of the function). Thus, to qualify as a potential candidate for a one-way function, the hardness of inverting the function should hold not merely on rare inputs to the function, but with high probability over the inputs.

Several candidates which seem to possess the above properties have been proposed.

1. Factoring. The function

 f : (x, y) ↦ x·y is conjectured to be a one-way function. The asymptotically fastest proven factoring algorithms to date are variations on Dixon’s random squares algorithm [131]. It is a randomized algorithm with running time L(n)^√2, where L(n) = e^√(log n log log n). The number field sieve of Lenstra, Lenstra, Manasse, and Pollard, with modifications by Adleman and Pomerance, is a factoring algorithm proved, under a certain set of assumptions, to factor integers in expected time

e^((c+o(1))·(log n)^(1/3)·(log log n)^(2/3))

for a constant c [133, 3].
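As a toy illustration of the gap between the two directions, here is a minimal Python sketch (all numbers chosen purely for illustration): computing f is one multiplication, while the naive inversion by trial division already takes time exponential in the bit length of the product.

```python
def f(x, y):
    # The conjectured one-way function: the easy direction is one multiplication.
    return x * y

def invert_by_trial_division(n):
    # Naive inversion: find the smallest nontrivial factor of n.
    # Takes roughly sqrt(n) steps, i.e. exponential in |n| = log n bits.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime: no nontrivial factorization exists

p, q = 10007, 10009          # small illustrative primes
n = f(p, q)
print(invert_by_trial_division(n))
```

Even for these tiny primes, the inverter loops about 10^4 times; doubling the bit length of p and q squares that cost, which is the exponential behavior the conjecture relies on.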

2. The discrete log problem. 

Let p be a prime. The multiplicative group Z*_p = ({x < p | (x, p) = 1}, · mod p) is cyclic, so that Z*_p = {g^i mod p | 1 ≤ i ≤ p−1} for some generator g ∈ Z*_p. The function f : (p, g, x) ↦ (g^x mod p, p, g), where p is a prime and g is a generator of Z*_p, is conjectured to be a one-way function. Computing f(p, g, x) can be done in polynomial time using repeated squaring. However, the fastest known proved solution for its inverse, called the discrete log problem, is the index-calculus algorithm, with expected running time L(p)^√2 (see [131]). An interesting problem is to find an algorithm which will generate a prime p together with a generator g for Z*_p. It is not known how to find generators in polynomial time. However, in [8], E. Bach shows how to generate random factored integers (in a given range N/2 … N).
Coupled with a fast primality tester (as found in [131], for example), this can be used to efficiently generate random tuples (p−1, q_1, …, q_k) with p prime, where q_1, …, q_k are the prime factors of p−1. Then, picking g ∈ Z*_p at random, it can be checked that (g, p−1) = 1, that g^((p−1)/q_i) mod p ≠ 1 for every q_i, and that g^(p−1) mod p = 1, in which case order(g) = p−1 (where order(g) = |{g^i mod p | 1 ≤ i ≤ p−1}|). It can be shown that the density of generators in Z*_p is high, so that few guesses are required. The problem of efficiently finding a generator for a specific Z*_p is an intriguing open research problem.
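The generator test described above can be sketched for a toy prime (p = 23 and the factorization p−1 = 2·11 are chosen purely for illustration; real parameters are enormous):

```python
# Toy sketch of the discrete-log candidate and the generator test above.
p = 23
qs = [2, 11]  # the prime factors q_i of p - 1 = 22

def is_generator(g, p, qs):
    # g generates Z*_p iff g^((p-1)/q) mod p != 1 for every prime q | p - 1
    # (g^(p-1) mod p = 1 holds automatically by Fermat's little theorem).
    return all(pow(g, (p - 1) // q, p) != 1 for q in qs)

def f(p, g, x):
    # The conjectured one-way function, computed by repeated squaring (pow).
    return pow(g, x, p), p, g

# Generators are dense in Z*_p, so few random guesses are needed;
# here we just scan upward for the smallest one.
g = next(g for g in range(2, p) if is_generator(g, p, qs))
y, _, _ = f(p, g, 13)
# Recovering x = 13 from (y, p, g) is the discrete log problem.
```

With these parameters the scan finds g = 5, the smallest generator of Z*_23; the easy direction is a single `pow` call, while no polynomial-time inverse is known.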

3. Subset sum. 

Let a_i ∈ {0,1}^n for 1 ≤ i ≤ n, let a = (a_1, …, a_n), let s_i ∈ {0,1}, and let s = (s_1, …, s_n). Define f : (a, s) ↦ (a, Σ_{i=1}^n s_i·a_i), where each a_i is interpreted as an n-bit integer. An inverse of (a, Σ_{i=1}^n s_i·a_i) under f is any (a, s′) such that Σ_{i=1}^n s_i·a_i = Σ_{i=1}^n s′_i·a_i. This function f is a candidate for a one-way function. The associated decision problem (given (a, y), does there exist s such that Σ_{i=1}^n s_i·a_i = y?) is NP-complete. Of course, the fact that the subset-sum problem is NP-complete cannot serve as evidence for the one-wayness of f. On the other hand, the fact that the subset-sum problem is easy for special cases (such as “hidden structure” and low density) cannot serve as evidence for the weakness of this proposal. The conjecture that f is one-way is based on the failure of known algorithms to handle random high-density instances. Yet, one has to admit that the evidence in favor of this candidate is much weaker than the evidence in favor of the two previous ones.
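A minimal sketch of the subset-sum function for a toy parameter n (illustrative only; inverting it means finding any bit vector with the same subset sum, and the naive attack tries all 2^n candidates):

```python
import random

n = 16  # toy size; real instances use much larger n

def f(a, s):
    # a: tuple of n integers (each an n-bit string read as a number),
    # s: tuple of n bits. The output reveals a and the subset sum.
    return a, sum(ai for ai, si in zip(a, s) if si)

random.seed(0)  # fixed seed so the example is repeatable
a = tuple(random.getrandbits(n) for _ in range(n))
s = tuple(random.getrandbits(1) for _ in range(n))
_, y = f(a, s)
# Inverting f on (a, y) means finding any s' with the same subset sum y.
```

Note that this is a random high-density instance (n numbers of n bits each), the regime where known algorithms fail; low-density or structured instances would not support the conjecture.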

4. DES with fixed message. 

Fix a 64 bit message M and define the function f(K) = DESK(M) which takes a 56 bit key K to a 64 bit output f(K). This appears to be a one-way function. Indeed, this construction can even be proven to be one-way assuming DES is a family of pseudorandom functions, as shown by Luby and Rackoff [139].

5. RSA.

 This is a candidate one-way trapdoor function. Let N = pq be a product of two primes. It is believed that such an N is hard to factor. The function is f(x) = x^e mod N, where e is relatively prime to (p−1)(q−1). The trapdoor is the pair of primes p, q, knowledge of which allows one to invert f efficiently. The function f seems to be one-way. To date the best attack is to try to factor N, which seems computationally infeasible.
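A toy sketch of the RSA function and its trapdoor inversion, using the standard small textbook parameters (completely insecure, for illustration only; here N is trivially factorable):

```python
from math import gcd

p, q = 61, 53                # toy primes; real RSA uses primes of 1000+ bits
N = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17
assert gcd(e, phi) == 1      # e must be relatively prime to (p-1)(q-1)

# The trapdoor: knowing p and q gives phi, hence d = e^{-1} mod phi.
d = pow(e, -1, phi)

def f(x):
    return pow(x, e, N)      # the easy direction

def f_inverse(y):
    return pow(y, d, N)      # easy only with the trapdoor d

m = 65
c = f(m)
assert f_inverse(c) == m
```

Without p and q, computing d appears to require factoring N, which is exactly the attack the text mentions.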


Modern Encryption: A Computational Complexity Based Theory - Cryptography

Modern cryptography abandons the assumption that the adversary has infinite computing resources available, and assumes instead that the adversary’s computation is resource-bounded in some reasonable way. 

In particular, in these notes we will assume that the adversary is a probabilistic algorithm which runs in polynomial time. Similarly, the encryption and decryption algorithms designed are probabilistic and run in polynomial time. The running times of the encryption, decryption, and adversary algorithms are all measured as a function of a security parameter k, which is fixed at the time the cryptosystem is set up. Thus, when we say that the adversary algorithm runs in polynomial time, we mean time bounded by some polynomial function in k. Accordingly, in modern cryptography we speak of the infeasibility of breaking the encryption system and computing information about exchanged messages, whereas historically one spoke of the impossibility of breaking the encryption system and finding information about exchanged messages. We note that the encryption systems which we will describe and claim “secure” with respect to the new adversary are not “secure” with respect to a computationally unbounded adversary in the way that the one-time pad system was secure against an unbounded adversary. But, on the other hand, it is no longer necessarily true that the secret key that A and B meet and agree on before remote transmission must be as long as the total number of secret bits ever to be exchanged securely remotely. In fact, at the time of the initial meeting, A and B do not need to know in advance how many secret bits they intend to send in the future. We will show how to construct such encryption systems, for which the number of messages to be exchanged securely can be a polynomial in the length of the common secret key. How we construct them brings us to another fundamental issue, namely that of cryptographic, or complexity, assumptions. 
As modern cryptography is based on a gap between efficient algorithms for encryption for the legitimate users and the computational infeasibility of decryption for the adversary, it requires that one have available primitives with certain special kinds of computational hardness properties. Of these, perhaps the most basic is a one-way function. Informally, a function is one-way if it is easy to compute but hard to invert. Other primitives include pseudorandom number generators and pseudorandom function families, which we will define and discuss later. From such primitives, it is possible to build secure encryption schemes. Thus, a central issue is where these primitives come from. Although one-way functions are widely believed to exist, and there are several conjectured candidate one-way functions which are widely used, we currently do not know how to mathematically prove that they actually exist. We shall thus design cryptographic schemes assuming we are given a one-way function. We will use the conjectured candidate one-way functions as our working examples throughout these notes. We will be explicit about what exactly can and cannot be proved and is thus assumed, attempting to keep the latter to a bare minimum.

We shall elaborate on various constructions of private-key encryption algorithms later in the course. The development of public key cryptography in the seventies enables one to drop the requirement that A and B must share a key in order to encrypt. The receiver B can publish authenticated information (called the public key) for anyone, including the adversary, the sender A, and any other sender, to read at their convenience (e.g., in a phone book). We will show encryption algorithms in which whoever can read the public key can send encrypted messages to B without ever having met B in person. The encryption system is no longer intended to be used by a pair of prespecified users, but by many senders wishing to send secret messages to a single recipient. The receiver keeps secret (to himself alone!) information (called the receiver’s private key) about the public key, which enables him to decrypt the ciphertexts he receives. We call such an encryption method public key encryption. We will show that secure public key encryption is possible given a trapdoor function. Informally, a trapdoor function is a one-way function for which there exists some trapdoor information known to the receiver alone, with which the receiver can invert the function. The idea of public-key cryptosystems and trapdoor functions was introduced in the seminal work of Diffie and Hellman in 1976 [71, 72]. Soon after, the first implementations of their idea were proposed in [176], [170], [143]. A simple construction of public key encryption from trapdoor functions goes as follows. Recipient B chooses at random a trapdoor function f and its associated trapdoor information t, and sets his public key to be a description of f and his private key to be t. If A wants to send message m to B, A computes E(f, m) = f(m). To decrypt c = f(m), B computes f⁻¹(c) = f⁻¹(f(m)) = m. We will show that this construction is not secure enough in general, but we will construct probabilistic variants of it which are secure.
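The simple construction just described can be sketched as follows, instantiating the trapdoor family with textbook toy RSA parameters (illustrative only; note the deterministic scheme below is exactly the construction the text says is not secure enough in general):

```python
def keygen():
    # B chooses a trapdoor function f (toy RSA, insecure parameters)
    # and its trapdoor information t.
    p, q, e = 61, 53, 17
    N = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # trapdoor t, derivable only from p, q
    return (N, e), d                   # (public key: description of f, private key: t)

def encrypt(pub, m):
    N, e = pub
    return pow(m, e, N)                # E(f, m) = f(m)

def decrypt(pub, d, c):
    N, _ = pub
    return pow(c, d, N)                # f^{-1}(c), computable with the trapdoor

pub, priv = keygen()
c = encrypt(pub, 42)
assert decrypt(pub, priv, c) == 42
```

Because `encrypt` is deterministic, the same message always yields the same ciphertext, one of the weaknesses the probabilistic variants mentioned above are designed to remove.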

Introduction to Modern Cryptography


Cryptography is about communication in the presence of an adversary. It encompasses many problems (encryption, authentication, key distribution to name a few). The field of modern cryptography provides a theoretical foundation based on which we may understand what exactly these problems are, how to evaluate protocols that purport to solve them, and how to build protocols in whose security we can have confidence. We introduce the basic issues by discussing the problem of encryption.

1. Encryption: Historical Glance

The most ancient and basic problem of cryptography is secure communication over an insecure channel. Party A wants to send to party B a secret message over a communication line which may be tapped by an adversary. The traditional solution to this problem is called private key encryption. In private key encryption A and B hold a meeting before the remote transmission takes place and agree on a pair of encryption and decryption algorithms E and D, and an additional piece of information S to be kept secret. We shall refer to S as the common secret key. 

The adversary may know the encryption and decryption algorithms E and D which are being used, but does not know S. After the initial meeting, when A wants to send B the cleartext or plaintext message m over the insecure communication line, A encrypts m by computing the ciphertext c = E(S, m) and sends c to B. Upon receipt, B decrypts c by computing m = D(S, c). The line-tapper (or adversary), who does not know S, should not be able to compute m from c.

Let us illustrate this general and informal setup with an example familiar to most of us from childhood: the substitution cipher. In this method A and B meet and agree on some secret permutation f : Σ → Σ (where Σ is the alphabet of the messages to be sent). To encrypt message m = m_1 … m_n where m_i ∈ Σ, A computes E(f, m) = f(m_1) … f(m_n). To decrypt c = c_1 … c_n where c_i ∈ Σ, B computes D(f, c) = f⁻¹(c_1) … f⁻¹(c_n) = m_1 … m_n = m. In this example the common secret key is the permutation f. The encryption and decryption algorithms E and D are as specified, and are known to the adversary. We note that the substitution cipher is easy to break by an adversary who sees a moderate (as a function of the size of the alphabet Σ) number of ciphertexts.

A rigorous theory of perfect secrecy based on information theory was developed by Shannon [192] in 1943. In this theory, the adversary is assumed to have unlimited computational resources. Shannon showed that a secure (properly defined) encryption system can exist only if the size of the secret information S that A and B agree on prior to remote transmission is as large as the total number of secret bits ever to be exchanged remotely using the encryption system.
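The substitution cipher described earlier in this section can be sketched in a few lines of Python (toy lowercase alphabet and a fixed seed, for illustration only):

```python
import random
import string

sigma = string.ascii_lowercase      # the alphabet Sigma
random.seed(1)                      # fixed seed so the example is repeatable
perm = list(sigma)
random.shuffle(perm)
f_map = dict(zip(sigma, perm))              # the secret permutation f
f_inv = {v: k for k, v in f_map.items()}    # its inverse f^{-1}

def E(key, m):
    # Encrypt by applying f letter by letter.
    return "".join(key[ch] for ch in m)

def D(key, c):
    # Decrypt by applying f^{-1} letter by letter.
    return "".join(key[ch] for ch in c)

c = E(f_map, "attackatdawn")
assert D(f_inv, c) == "attackatdawn"
```

The weakness is visible in the code: each plaintext letter always maps to the same ciphertext letter, so letter-frequency statistics over a moderate number of ciphertexts expose f.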
An example of a private key encryption method which is secure even in the presence of a computationally unbounded adversary is the one-time pad. A and B agree on a secret bit string pad = b_1 b_2 … b_n, where b_i ∈_R {0,1} (i.e., pad is chosen in {0,1}^n with uniform probability). This is the common secret key. To encrypt a message m = m_1 m_2 … m_n where m_i ∈ {0,1}, A computes E(pad, m) = m ⊕ pad (bitwise exclusive or). To decrypt a ciphertext c ∈ {0,1}^n, B computes D(pad, c) = pad ⊕ c = pad ⊕ (m ⊕ pad) = m. It is easy to verify that for all m, c, Pr_pad[E(pad, m) = c] = 1/2^n. From this, it can be argued that seeing c gives “no information” about what has been sent (in the sense that the adversary’s a posteriori probability of predicting m given c is no better than her a priori probability of predicting m without being given c).

Now, suppose A wants to send B an additional message m′. If A were to simply send c = E(pad, m′), then the sum of the lengths of the messages m and m′ would exceed the length of the secret key pad, and thus by Shannon’s theory the system cannot be secure. Indeed, the adversary can compute E(pad, m) ⊕ E(pad, m′) = m ⊕ m′, which gives information about m and m′ (e.g., it tells which bits of m and m′ are equal and which are different). To fix this, the length of the pad agreed upon a priori should be the sum total of the lengths of all messages ever to be exchanged over the insecure communication line.
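A minimal sketch of the one-time pad over bytes rather than single bits, including the pad-reuse leak just described (illustrative only):

```python
import secrets

def E(pad, m):
    # Encrypt: bitwise XOR of message and pad.
    return bytes(x ^ y for x, y in zip(m, pad))

D = E  # decryption is the same XOR: pad ^ (m ^ pad) = m

m = b"attack at dawn"
pad = secrets.token_bytes(len(m))  # uniformly random pad, same length as m
c = E(pad, m)
assert D(pad, c) == m

# Pad reuse leaks information: XORing two ciphertexts under the same pad
# cancels the pad and yields the XOR of the two plaintexts.
m2 = b"defend at dusk"
c2 = E(pad, m2)
leak = bytes(x ^ y for x, y in zip(c, c2))
assert leak == bytes(x ^ y for x, y in zip(m, m2))
```

Note that any zero byte in `leak` reveals a position where the two plaintexts agree, exactly the kind of information Shannon’s bound says must escape once the key is shorter than the total traffic.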