Gabriele Biondo

First steps on complexity theory

Talking about cryptography requires another important concept: computational complexity. Wikipedia gives a very high-level definition; the entry is very interesting and worth a read. However, if one reads it at a glance, understanding how this concept fits with cryptography is not straightforward.

Let’s see the overall problem and then drill down on the specific instance. Be aware that the topic is so broad that, in general, entire university courses are dedicated to it. What follows is just a very short summary.

Complexity theory is a branch of computability theory that aims to understand the minimal amount of resources (mainly time and memory) required to solve a problem. Problems are then categorized into several classes (complexity classes) on the basis of the best algorithm one can use to solve a given instance of the problem.

The classic example is that of sorting 10 numbers (it may be a deck of cards, a generic set of records to sort, and the like). One of the most classic algorithms is called Insertion sort, and it works as described below. We will compare the performance of this algorithm with that of another well-known one, Merge sort.

Insertion sort

Suppose one wants to sort an array of numbers. This array’s length is \(n\).

  1. Cycle on the vector \(\vec{V}\) (e.g.: the index \(i\) ranges from \(2\) to \(n\))
    1. \(a:=v_i\)
    2. shift one position to the right every \(v_j\), with \(j<i\), such that \(v_j>a\)
    3. write \(a\) into the position freed by the shift

Now, the outer cycle takes place \(n-1\) times. The inner cycle (the shift), in the worst case of a vector sorted in reverse order, performs \(1\) comparison on the first iteration, \(2\) on the second, and in general \(i-1\) on the \(i\)-th. On an already sorted vector, it does not even take place.

It is easy to prove that this process takes \(n\left(n-1\right)/2\) steps in the worst case. Shortly, the computation time is \(\mathcal{O}\left(n^2-n\right)=\mathcal{O}\left(n^2\right) \).
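
In fact, summing the comparisons performed at each iteration gives an arithmetic progression:

\(1+2+\dots+\left(n-1\right)=\sum_{k=1}^{n-1}k=\frac{n\left(n-1\right)}{2}\)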

C Code

void InsertionSort(int x[], int n)
{
   int i, j, app;

   /* x[0..i-1] is already sorted: insert x[i] into it */
   for (i = 1; i < n; i++)
   {
      app = x[i];

      /* shift right every sorted element greater than app */
      for (j = i - 1; (j >= 0) && (x[j] > app); j--)
         x[j + 1] = x[j];

      /* drop app into the freed position */
      x[j + 1] = app;
   }
}

Merge sort

This algorithm is based on the fact that a set of cardinality 0 or 1 is, by definition, an ordered set. In its most generic definition, the algorithm works as follows (a C sketch is given after the list):

  1. If \(\left|\vec{V}\right|\leq1\), the algorithm converged, otherwise:
    1. Let \(\left|\vec{V}\right|=m\); let \(a:= \left\lfloor m/2\right\rfloor \) and \(b:=a+1\); let then \(\vec{W_1}=\left[v_1, v_2,\dots,v_a\right]\) – the square brackets to indicate a subsequence – and \(\vec{W_2}=\left[v_b, v_{b+1},\dots,v_m\right]\)
    2. Let \(\vec{S}=mergesort\left(\vec{W_1}\right)\) and \(\vec{T}=mergesort\left(\vec{W_2}\right)\) – note that \(\left|\vec{S}\right|=a\) and \(\left|\vec{T}\right|=m-a\)
    3. Merge the two sorted subsequences back into the ordered vector \(\vec{V}\):
      1. let \(k:=1;\ i:=1;\ j:=1 \)
      2. loop while \(i\leq a\) and \(j\leq m-a\)
      3. if \(s_i<t_j\)
        1. \(v_k:=s_i;\ i:=i+1\)
      4. else
        1. \(v_k:=t_j;\ j:=j+1\)
      5. \(k:=k+1\)
      6. once one of the two subsequences is exhausted, copy the remainder of the other one into \(\vec{V}\)
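
C Code

Below is a sketch in C of the procedure just described, mirroring the style of the InsertionSort example above (the names Merge, MergeSortRec and MergeSort, as well as the scratch buffer tmp, are choices of ours, not part of the pseudocode):

#include <stdlib.h>
#include <string.h>

/* Step 3: merge the sorted halves x[lo..mid] and x[mid+1..hi],
   using tmp as scratch space. */
static void Merge(int x[], int tmp[], int lo, int mid, int hi)
{
   int i = lo, j = mid + 1, k = lo;

   while (i <= mid && j <= hi)      /* pick the smaller head element */
      tmp[k++] = (x[i] < x[j]) ? x[i++] : x[j++];

   while (i <= mid)                 /* one half is exhausted: copy */
      tmp[k++] = x[i++];            /* the remainder of the other  */
   while (j <= hi)
      tmp[k++] = x[j++];

   memcpy(&x[lo], &tmp[lo], (hi - lo + 1) * sizeof(int));
}

/* Steps 1-2: 0 or 1 elements are already sorted; otherwise split
   in the middle and recurse on both halves. */
static void MergeSortRec(int x[], int tmp[], int lo, int hi)
{
   int mid;

   if (lo >= hi)
      return;

   mid = lo + (hi - lo) / 2;
   MergeSortRec(x, tmp, lo, mid);
   MergeSortRec(x, tmp, mid + 1, hi);
   Merge(x, tmp, lo, mid, hi);
}

void MergeSort(int x[], int n)
{
   int *tmp = malloc(n * sizeof(int));

   if (tmp == NULL)
      return;                       /* out of memory: leave x as it is */

   MergeSortRec(x, tmp, 0, n - 1);
   free(tmp);
}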

If \(T\left(n\right) \) is the function that gives back the computing time, we have \(T\left(n\right)\leq2T\left(n/2\right)+dn \) (two recursive calls on the two halves, plus a merge step that is linear in \(n\)), which gives \(T\left(n\right) = \mathcal{O}\left(n\cdot \log n\right) \). In fact, assuming for the sake of simplicity \(n=2^m\), and calling \(c>0\) a bound for \(T\left(1\right)\) and \(d>0\) the constant of the merge step, we have:

\(T\left(n\right)\leq2T\left(n/2\right)+dn \)

but

\(2T\left(n/2\right)+dn\leq 2\left(2T\left(n/2^{2}\right)+dn/2\right)+dn \)

iterating, we find that

\(T\left(n\right)\leq2^{m}T\left(n/2^{m}\right)+2^{m-1}dn/2^{m-1}+\dots+2^{2}dn/2^{2}+2^{1}dn/2+dn \)

or shortly

\(T\left(n\right)\leq2^{m}T\left(1\right)+dn\sum_{k=1}^{m}1\leq2^{m}c+dmn=cn+dmn\)

But \(m=\log_{2} n\), hence \(T\left(n\right)\leq cn+dn\log_{2}n=\mathcal{O}\left(n\log n\right)\), which is the claim.

Conclusions

Now, we have shown that two different algorithms solve the same problem, but with very different performances. The gap grows quickly with \(n\): for \(n=10^{6}\), \(n^{2}\) is \(10^{12}\), while \(n\log_{2}n\) is about \(2\times10^{7}\), five orders of magnitude less.
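
To make the gap tangible, one can time the two functions on the same input. Here is a minimal test driver, assuming the InsertionSort and MergeSort functions above are compiled in the same program (the array size N and the seed are arbitrary choices for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void InsertionSort(int x[], int n);
void MergeSort(int x[], int n);

int main(void)
{
   int i, *a, *b;
   clock_t t0, t1, t2;
   enum { N = 100000 };

   a = malloc(N * sizeof(int));
   b = malloc(N * sizeof(int));
   if (a == NULL || b == NULL)
      return 1;

   srand(42);                       /* same pseudo-random input for both */
   for (i = 0; i < N; i++)
      a[i] = b[i] = rand();

   t0 = clock();
   InsertionSort(a, N);
   t1 = clock();
   MergeSort(b, N);
   t2 = clock();

   printf("insertion sort: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
   printf("merge sort:     %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

   free(a);
   free(b);
   return 0;
}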

Obviously, this does not even start the study of complexity; it is per se a long and difficult topic (a wonderful book about it is Introduction to Algorithms, Second Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein).

The idea of presenting this topic here is to help the reader understand that complexity is an inherent property of problems, and that very often hardware performance cannot scale with it.

We shall come back to this later, with some examples.
