Analysis of Algorithms

 

There can be more than one solution to a problem, and developing an efficient algorithm requires a great deal of practice and skill. When we have two or more algorithms for the same problem, it becomes necessary to compare them and choose the better one. This is one of the most important motivations for analyzing an algorithm.

The choice of a particular algorithm depends on the following considerations:

  • Programming Effort
  • Space Complexity
  • Time Complexity
  • Big-Oh Notation

 

Programming Effort

As far as programming effort is concerned, one must be careful to keep the algorithm as simple as possible, and when in doubt, choose the simpler one. However, the simplest and most straightforward way of solving a problem is sometimes not the best one; this usually happens when the simplest solution consumes too much computer time or memory space.

 

Space Complexity

By space complexity we mean the amount of memory an algorithm needs to run to completion. The memory space of a program consists of the instruction space, the data space and, in addition, the environment stack space used to store the information needed to resume suspended (partially completed) functions.
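
As a minimal illustrative sketch (the function names sum_iterative and sum_recursive are hypothetical, not taken from the text), the two C functions below compute the same result, but the recursive one keeps one stack frame per pending call and therefore needs extra environment stack space proportional to n, while the iterative one uses only a fixed number of variables:

    #include <stdio.h>

    /* Iterative sum of 1..n: only a few variables, so the extra space is constant. */
    long sum_iterative(int n)
    {
        long total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }

    /* Recursive sum of 1..n: each pending call keeps a stack frame,
       so the extra (environment stack) space grows in proportion to n. */
    long sum_recursive(int n)
    {
        if (n == 0)
            return 0;
        return n + sum_recursive(n - 1);
    }

    int main(void)
    {
        printf("%ld %ld\n", sum_iterative(100), sum_recursive(100));
        return 0;
    }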

 

Time Complexity

The time complexity of an algorithm is the amount of time it needs to run to completion. The computer time an algorithm requires for its execution usually depends on the “size” of the input data. In other words, the time complexity of an algorithm is often expressed as a function of the input size “n”, that is f(n). A smaller input will generally take less time to process than a larger one. Moreover, the same algorithm may take different amounts of time to execute for different inputs of the same size.

There are generally three types of time complexity – best case, worst case and average case. The best case is the minimum time the algorithm takes to execute for an input of size “n”. In the same way, the worst case is the maximum time the algorithm takes to execute for an input of size “n”. The time that the algorithm requires for a typical input of size “n” is known as its average-case time complexity. The linear-search sketch below illustrates all three cases.
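
As a minimal sketch (linear search is used here only as a familiar illustration, not as an algorithm discussed in the text), the C function below looks for a key in an array of n elements; the best case occurs when the key is at the first position, the worst case when it is at the last position or absent, and the average case lies in between:

    #include <stdio.h>

    /* Linear search: returns the index of key in a[0..n-1], or -1 if it is absent.
       Best case:    key is at a[0]             -> 1 comparison.
       Worst case:   key is at a[n-1] or absent -> n comparisons.
       Average case: about n/2 comparisons when the key is present. */
    int linear_search(const int a[], int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;
        return -1;
    }

    int main(void)
    {
        int a[] = { 7, 3, 9, 1, 5 };
        printf("%d %d\n", linear_search(a, 5, 7), linear_search(a, 5, 5));
        return 0;
    }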

It is very difficult to find the exact time taken by an algorithm to run to completion, because it does not depend on a single factor; it depends on several factors, many of which are outside the programmer's control. For example, a simple ‘C’ statement may translate into only a few machine instructions and execute quickly, whereas another ‘C’ statement, for instance one in which resources are being shared, may take considerably longer. This is the main reason that time complexity is usually expressed in the form of a function f(n), where “n” is the size of the input data for a given instance.

 

Time and Space Trade-offs

Now the question arises – which should be given priority in a computer algorithm, time or space? Ideally, the best algorithm is one that both takes less time to complete its execution and occupies less space in memory. Unfortunately it is not always possible to achieve both objectives: some algorithms occupy more space but take less time, while others require less space but take more time. In general, time complexity is considered more important than space complexity, because in modern computers memory space is rarely a severe constraint. The sketch below shows a typical trade-off.
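
As a minimal sketch of such a trade-off (the function names and the table size are illustrative assumptions, not from the text), the C code below counts the one-bits in a byte either by looping over the bits on every call (no extra memory, more work per call) or by precomputing a 256-entry lookup table (256 extra bytes, but each later query is a single array access):

    #include <stdio.h>

    /* Time-heavy, space-light: recompute the bit count on every call. */
    int popcount_loop(unsigned char x)
    {
        int count = 0;
        for (; x != 0; x >>= 1)
            count += x & 1;
        return count;
    }

    /* Space-heavy, time-light: spend 256 bytes once, then answer each
       later query with a single table lookup. */
    unsigned char table[256];

    void build_table(void)
    {
        for (int i = 0; i < 256; i++)
            table[i] = (unsigned char)popcount_loop((unsigned char)i);
    }

    int main(void)
    {
        build_table();
        printf("%d %d\n", popcount_loop(200), table[200]);
        return 0;
    }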

 

Big-Oh Notation

When an algorithm is executed, the major criterion in calculating its time complexity is the order of magnitude of the frequency of execution of its statements. As studied in the earlier section, f(n), a function of the input size “n”, is used to express the time and/or space complexity for a given instance of the problem being solved, and this function is meant to be independent of machine architecture or programming language. The most important notation used to express this function f(n) is the Big-Oh notation, a characterization scheme that allows us to measure the time and/or space complexity of an algorithm.

f(n) = O(g(n)) (read as “f of n is big oh of g of n”, or “f(n) is of the order of g(n)”) if and only if there exist two positive constants “c” and “n0” such that the following inequality holds for all n ≥ n0:

| f(n) |  ≤  c | g(n) |

Now let us understand the relation between f(n) and g(n). Here f(n) represents the time complexity of an algorithm when it is run on an input of size “n”, and g(n) is a known standard function like n², n³, n log₂n, etc. This means that the behavior of g(n) is known before the time complexity of the algorithm is computed. When we say that the computing time of an algorithm is O(g(n)) we mean that its execution takes no more than a constant times g(n).
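
As a small worked example (the function 3n + 2 is chosen purely for illustration), let f(n) = 3n + 2. Choosing c = 4 and n0 = 2 gives

3n + 2  ≤  4n   for all n ≥ 2

so the inequality of the definition holds, and therefore 3n + 2 = O(n).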

Generally, O(1) means a constant computing time, O(n) is called linear, O(n²) is called quadratic, O(n³) is called cubic, and O(2ⁿ) is called exponential. It is clear that O(n) is much better than O(n²), and O(n²) is much better than O(n³). Thus if we have two algorithms which perform the same task, and the first one has a computing time of O(n) while the second one has O(n²), then the first one is superior: as the value of n increases, the computing time of the second algorithm grows far worse than that of the first. The most frequently used computing times are O(1), O(n), O(n²), O(n³), O(log₂n), O(n log₂n) and O(2ⁿ). The sketch below contrasts an O(n) and an O(n²) solution to the same task.
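
As a minimal sketch of such a comparison (prefix sums are used here only as a convenient example, not a task taken from the text), both C functions below fill out[i] with a[0] + a[1] + … + a[i]; the first recomputes each sum from scratch and performs on the order of n² additions, while the second keeps a running total and performs on the order of n additions:

    #include <stdio.h>

    /* O(n^2): for every position i, recompute a[0] + ... + a[i] from scratch. */
    void prefix_sums_quadratic(const int a[], long out[], int n)
    {
        for (int i = 0; i < n; i++) {
            long sum = 0;
            for (int j = 0; j <= i; j++)
                sum += a[j];
            out[i] = sum;
        }
    }

    /* O(n): carry a running total forward, one addition per position. */
    void prefix_sums_linear(const int a[], long out[], int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += a[i];
            out[i] = sum;
        }
    }

    int main(void)
    {
        int a[] = { 2, 4, 6, 8 };
        long q[4], l[4];
        prefix_sums_quadratic(a, q, 4);
        prefix_sums_linear(a, l, 4);
        printf("%ld %ld\n", q[3], l[3]);   /* both print 20 */
        return 0;
    }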

 

Figure (a) and Figure (b) show how these computing times grow when the constant is taken equal to one.

[Figure (a): graph of the growth of the computing-time functions]

Figure (a)


log₂n      n        n log₂n    n²       n³       2ⁿ
0          1        0          1        1        2
3.322      10       33.22      10²      10³      ≈10³
6.644      10²      664.4      10⁴      10⁶      ≈10³⁰
9.966      10³      9966.0     10⁶      10⁹      ≈10³⁰⁰

Figure (b)

 

Figure (b) shows that the log₂n function grows most slowly, whereas the exponential function 2ⁿ grows most rapidly. Of course these results are approximate rather than exact. The complexity we compute may be an expression containing a sequence of terms of decreasing order; in such a case, the largest (fastest-growing) term is the important one. Suppose that we have an expression such as:

n³ + n² + 20

then the overall complexity of the algorithm is O(n³). Note that the exact values of the constants are not important, because the analysis attempts to reach only an approximate result. Figure (c) shows some examples of this, and the code sketch after it shows where such an expression can come from.

 

f(n)                      f(n) = O(g(n))
5                         O(1)
20n + 17                  O(n)
40n² + 3n - 10            O(n²)
10n³ + 26n² + 220         O(n³)

Figure (c)
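
As a minimal sketch of where such an expression arises (the function and the work done inside the loops are purely illustrative assumptions), the statement inside the triple loop below executes n³ times, the statement inside the double loop executes n² times, and the final statements execute a constant number of times, so the total statement count has the form n³ + n² + c and the function is O(n³):

    /* The innermost statement of the triple loop runs n*n*n times,
       the statement of the double loop runs n*n times, and the last
       two statements run a constant number of times:
       total = n^3 + n^2 + c, which is O(n^3). */
    long count_operations(int n)
    {
        long work = 0;

        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    work++;            /* executed n^3 times */

        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                work++;                /* executed n^2 times */

        work += 20;                    /* constant amount of work */
        return work;
    }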

 

Limitations of Big-Oh Notation

One of the two basic limitations of the Big-Oh notation is that it takes no account of programming effort. The second is that it ignores the constants of proportionality: 100000n² is simply O(n²). We studied earlier that an O(n²) algorithm is expected to take less time than an O(n³) one, but if the actual computing times are 100000n² and n³, then the latter is faster for all n < 100000.
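
As a quick numerical check (the sample value n = 1000 is chosen purely for illustration): at n = 1000, 100000n² = 10¹¹ whereas n³ = 10⁹, so the supposedly slower n³ algorithm is a hundred times faster; the two only break even at n = 100000, where both equal 10¹⁵, and beyond that point the 100000n² algorithm wins.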

 

 

