In this chapter, we shall discuss

How to estimate the time required for a program.

How to reduce the running time of a program from days or years to fractions of a second.

The results of careless use of recursion.

The analysis required to estimate the resource use of an algorithm is generally a theoretical issue, and therefore a formal framework is required. We begin with some mathematical definitions.

Throughout the book we will use the following four definitions:

**DEFINITION:** *T*(*n*) = *O*(*f*(*n*)) if there are positive constants *c* and *n*_{0} such that *T*(*n*) ≤ *cf*(*n*) when *n* ≥ *n*_{0}.

**DEFINITION:** *T*(*n*) = Ω(*g*(*n*)) if there are positive constants *c* and *n*_{0} such that *T*(*n*) ≥ *cg*(*n*) when *n* ≥ *n*_{0}.

**DEFINITION:** *T*(*n*) = Θ(*h*(*n*)) if and only if *T*(*n*) = *O*(*h*(*n*)) and *T*(*n*) = Ω(*h*(*n*)).

**DEFINITION:** *T*(*n*) = o(*p*(*n*)) if *T*(*n*) = *O*(*p*(*n*)) and *T*(*n*) ≠ Θ(*p*(*n*)).

The idea of these definitions is to establish a relative order among functions. Given two functions, there are usually points where one function is smaller than the other function, so it does not make sense to claim, for instance, *f*(*n*)* < g*(*n*). Thus, we compare their *relative rates of growth*. When we apply this to the analysis of algorithms, we shall see why this is the important measure.
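For example (an illustration of ours, using the first definition above): although 1000*n* is larger than *n*^{2} for small *n*, *n*^{2} grows faster, and the definition captures this:

```latex
1000n \le c\,n^{2} \quad\text{for all } n \ge n_0
\text{ with } c = 1,\ n_0 = 1000,
\qquad\text{so}\qquad 1000n = O(n^{2}).
```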

The important things to know are

If *T*_{1}(*n*) = *O*(*f*(*n*)) and *T*_{2}(*n*) = *O*(*g*(*n*)), then

(a) *T*_{1}(*n*) + *T*_{2}(*n*) = max(*O*(*f*(*n*)), *O*(*g*(*n*))),

(b) *T*_{1}(*n*) * *T*_{2}(*n*) = *O*(*f*(*n*) * *g*(*n*)).
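As a quick illustration of these two rules (our own example, not from the text): if *T*_{1}(*n*) = *O*(*n*^{3}) and *T*_{2}(*n*) = *O*(*n*^{2}), then

```latex
T_1(n) + T_2(n) = O(n^{3}),
\qquad
T_1(n) \cdot T_2(n) = O(n^{5}).
```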

Function        Name
--------------------------
c               Constant
log n           Logarithmic
log^{2} n       Log-squared
n               Linear
n log n
n^{2}           Quadratic
n^{3}           Cubic
2^{n}           Exponential

If *T*(*x*) is a polynomial of degree *n*, then *T*(*x*) = Θ(*x*^{*n*}).

log^{*k*} *n* = *O*(*n*) for any constant *k*. This tells us that logarithms grow very slowly.

*T*_{1}(*n*) + *T*_{2}(*n*) ≤ *c*_{3}*f*(*n*) + *c*_{3}*g*(*n*) = *c*_{3}(*f*(*n*) + *g*(*n*)) ≤ 2*c*_{3} max(*f*(*n*), *g*(*n*)) = *c* max(*f*(*n*), *g*(*n*)), where *c* = 2*c*_{3}.

The limit lim_{*n*→∞} *f*(*n*)/*g*(*n*) can have four possible values:

The limit is 0: This means that *f*(*n*) = o(*g*(*n*)).

The limit is *c* ≠ 0: This means that *f*(*n*) = Θ(*g*(*n*)).

The limit is ∞: This means that *g*(*n*) = o(*f*(*n*)).

The limit oscillates: There is no relation (this will not happen in our context).
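For instance (an example of ours, not from the text), to compare *f*(*n*) = *n* log *n* with *g*(*n*) = *n*^{2}:

```latex
\lim_{n\to\infty} \frac{n \log n}{n^{2}}
  = \lim_{n\to\infty} \frac{\log n}{n} = 0,
\qquad\text{so}\qquad n \log n = o(n^{2}).
```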

The most important resource to analyze is generally the running time. Several factors affect the running time of a program. Some, such as the compiler and computer used, are obviously beyond the scope of any theoretical model, so, although they are important, we cannot deal with them here. The other main factors are the algorithm used and the input to the algorithm.

As an example, in the next section, we shall consider the following problem:

MAXIMUM SUBSEQUENCE SUM PROBLEM:

*Given (possibly negative) integers a*_{1}, *a*_{2}, . . . , *a*_{*n*}, *find the maximum value of* Σ_{*k*=*i*}^{*j*} *a*_{*k*}. (*For convenience, the maximum subsequence sum is* 0 *if all the integers are negative.*)

*For input* -2, 11, -4, 13, -5, -2, *the answer is* 20 (*a*_{2} *through a*_{4}).

This problem is interesting mainly because there are so many algorithms to solve it, and the performance of these algorithms varies drastically. We will discuss four algorithms to solve this problem. The running time on some computer (the exact computer is unimportant) for these algorithms is given in Figure 2.2.

Algorithm            1           2           3            4
Time                 O(n^{3})    O(n^{2})    O(n log n)   O(n)
----------------------------------------------------------------
Input  n = 10        0.00103     0.00045     0.00066      0.00034
Size   n = 100       0.47015     0.01112     0.00486      0.00063
       n = 1,000   448.77        1.1233      0.05843      0.00333
       n = 10,000      NA      111.13        0.68631      0.03042
       n = 100,000     NA          NA        8.0113       0.29832

Figure 2.3 shows the growth rates of the running times of the four algorithms. Even though this graph encompasses only values of *n* ranging from 10 to 100, the relative growth rates are still evident. Although the graph for Algorithm 3 seems linear, it is easy to verify that it is not, by using a straightedge (or piece of paper). Figure 2.4 shows the performance for larger values. It dramatically illustrates how useless inefficient algorithms are for even moderately large amounts of input.

There are several ways to estimate the running time of a program. The previous table was obtained empirically. If two programs are expected to take similar times, probably the best way to decide which is faster is to code them both up and run them!

To simplify the analysis, we will adopt the convention that there are no particular units of time. Thus, we throw away leading constants. We will also throw away low-order terms, so what we are essentially doing is computing a Big-Oh running time. Since Big-Oh is an upper bound, we must be careful to never underestimate the running time of the program. In effect, the answer provided is a guarantee that the program will terminate within a certain time period. The program may stop earlier than this, but never later.

Here is a simple program fragment to calculate Σ_{*i*=1}^{*n*} *i*^{3}:

```c
unsigned int
sum( int n )
{
    unsigned int i, partial_sum;

/*1*/    partial_sum = 0;
/*2*/    for( i=1; i<=n; i++ )
/*3*/        partial_sum += i*i*i;
/*4*/    return( partial_sum );
}
```

RULE 1-FOR LOOPS:

*The running time of a for loop is at most the running time of the statements inside the for loop (including tests) times the number of iterations.*

RULE 2-NESTED FOR LOOPS:

*Analyze these inside out. The total running time of a statement inside a group of nested for loops is the running time of the statement multiplied by the product of the sizes of all the for loops.*

As an example, the following program fragment is *O*(*n*^{2}):

```c
for( i=0; i<n; i++ )
    for( j=0; j<n; j++ )
        k++;
```

RULE 3-CONSECUTIVE STATEMENTS:

*These just add (which means that the maximum is the one that counts -- see rule (a) above).*

As an example, the following program fragment, which has *O*(*n*) work followed by *O*(*n*^{2}) work, is also *O*(*n*^{2}):

```c
for( i=0; i<n; i++ )
    a[i] = 0;
for( i=0; i<n; i++ )
    for( j=0; j<n; j++ )
        a[i] += a[j] + i + j;
```

RULE 4-IF/ELSE:

*For the fragment*

```c
if( cond )
    S1
else
    S2
```

*the running time is never more than the running time of the test plus the larger of the running times of S1 and S2.*

Clearly, this can be an over-estimate in some cases, but it is never an under-estimate.

Other rules are obvious, but a basic strategy of analyzing from the inside (or deepest part) out works. If there are function calls, obviously these must be analyzed first. If there are recursive procedures, there are several options. If the recursion is really just a thinly veiled *for* loop, the analysis is usually trivial. For instance, the following function is really just a simple loop and is obviously *O*(*n*):

```c
unsigned int
factorial( unsigned int n )
{
    if( n <= 1 )
        return 1;
    else
        return( n * factorial(n-1) );
}
```

```c
/* Compute Fibonacci numbers as described in Chapter 1 */

unsigned int
fib( unsigned int n )
{
/*1*/    if( n <= 1 )
/*2*/        return 1;
    else
/*3*/        return( fib(n-1) + fib(n-2) );
}
```

T(n) = T(n - 1) + T(n - 2) + 2

Since *fib*(*n*) = *fib*(*n* - 1) + *fib*(*n* - 2), it is easy to show by induction that *T*(*n*) ≥ *fib*(*n*). In Section 1.2.5, we showed that *fib*(*n*) < (5/3)^{*n*}. A similar calculation shows that (for *n* > 4) *fib*(*n*) ≥ (3/2)^{*n*}, so the running time of this program grows exponentially.
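By contrast, the same function computed bottom-up performs one addition per value and runs in linear time. Here is a minimal sketch (the name *fib_iter* is ours, not from the text):

```c
/* Iterative Fibonacci: each value is computed exactly once, so the
   running time is O(n) rather than exponential.  Uses the same
   convention as the recursive version: fib(0) = fib(1) = 1. */
unsigned int
fib_iter( unsigned int n )
{
    unsigned int next_to_last = 1, last = 1, answer = 1, i;

    for( i = 2; i <= n; i++ )
    {
        answer = next_to_last + last;   /* fib(i) = fib(i-2) + fib(i-1) */
        next_to_last = last;
        last = answer;
    }
    return answer;
}
```

For instance, fib_iter(10) returns 89, the same value the recursive routine computes, but without recomputing any subproblem.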

We will now present four algorithms to solve the maximum subsequence sum problem posed earlier. The first algorithm is depicted in Figure 2.5. The indices in the *for* loops reflect the fact that, in C, arrays begin at 0, instead of 1. Also, the algorithm computes the actual subsequences (not just the sum); additional code is required to transmit this information to the calling routine.

Convince yourself that this algorithm works (this should not take much). The running time is *O*(*n*^{3}) and is entirely due to lines 5 and 6, which consist of an *O*(1) statement buried inside three nested *for* loops. The loop at line 2 is of size *n*.

```c
int
max_subsequence_sum( int a[], unsigned int n )
{
    int this_sum, max_sum, best_i, best_j, i, j, k;

/*1*/    max_sum = 0; best_i = best_j = -1;
/*2*/    for( i=0; i<n; i++ )
/*3*/        for( j=i; j<n; j++ )
        {
/*4*/            this_sum = 0;
/*5*/            for( k = i; k <= j; k++ )
/*6*/                this_sum += a[k];
/*7*/            if( this_sum > max_sum )
            {    /* update max_sum, best_i, best_j */
/*8*/                max_sum = this_sum;
/*9*/                best_i = i;
/*10*/               best_j = j;
            }
        }
/*11*/   return( max_sum );
}
```

There is a recursive and relatively complicated *O*(*n* log *n*) solution to this problem, which we now describe. If there didn't happen to be an *O*(*n*) (linear) solution, this would be an excellent example of the power of recursion. The algorithm uses a "divide-and-conquer" strategy. The idea is to split the problem into two roughly equal subproblems, each of which is half the size of the original. The subproblems are then solved recursively. This is the "divide" part. The "conquer" stage consists of patching together the two solutions of the subproblems, and possibly doing a small amount of additional work, to arrive at a solution for the whole problem.

```c
int
max_subsequence_sum( int a[], unsigned int n )
{
    int this_sum, max_sum, best_i, best_j, i, j;

/*1*/    max_sum = 0; best_i = best_j = -1;
/*2*/    for( i=0; i<n; i++ )
    {
/*3*/        this_sum = 0;
/*4*/        for( j=i; j<n; j++ )
        {
/*5*/            this_sum += a[j];
/*6*/            if( this_sum > max_sum )
                /* update max_sum, best_i, best_j */;
        }
    }
/*7*/    return( max_sum );
}
```

First Half      Second Half
---------------------------
4 -3 5 -2       -1 2 6 -2

```c
int
max_subsequence_sum( int a[], unsigned int n )
{
    return max_sub_sum( a, 0, n-1 );
}

int
max_sub_sum( int a[], int left, int right )
{
    int max_left_sum, max_right_sum;
    int max_left_border_sum, max_right_border_sum;
    int left_border_sum, right_border_sum;
    int center, i;

/*1*/    if( left == right )    /* Base Case */
/*2*/        if( a[left] > 0 )
/*3*/            return a[left];
        else
/*4*/            return 0;

/*5*/    center = (left + right)/2;

/*6*/    max_left_sum = max_sub_sum( a, left, center );
/*7*/    max_right_sum = max_sub_sum( a, center+1, right );

/*8*/    max_left_border_sum = 0; left_border_sum = 0;
/*9*/    for( i=center; i>=left; i-- )
    {
/*10*/       left_border_sum += a[i];
/*11*/       if( left_border_sum > max_left_border_sum )
/*12*/           max_left_border_sum = left_border_sum;
    }

/*13*/   max_right_border_sum = 0; right_border_sum = 0;
/*14*/   for( i=center+1; i<=right; i++ )
    {
/*15*/       right_border_sum += a[i];
/*16*/       if( right_border_sum > max_right_border_sum )
/*17*/           max_right_border_sum = right_border_sum;
    }

         /* max3 returns the largest of its three arguments */
/*18*/   return max3( max_left_sum, max_right_sum,
                      max_left_border_sum + max_right_border_sum );
}
```

T(1) = 1

T(n) = 2T(n/2) +O(n)
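To see where the *O*(*n* log *n*) bound comes from, write the *O*(*n*) term as *cn* and expand the recurrence, assuming for simplicity that *n* = 2^{*k*}:

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \cdots
     = 2^{k}\,T(1) + kcn
     = n + cn\log_2 n
     = O(n \log n).
```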

```c
int
max_subsequence_sum( int a[], unsigned int n )
{
    int this_sum, max_sum, best_i, best_j, i, j;

/*1*/    i = this_sum = max_sum = 0; best_i = best_j = -1;
/*2*/    for( j=0; j<n; j++ )
    {
/*3*/        this_sum += a[j];
/*4*/        if( this_sum > max_sum )
        {    /* update max_sum, best_i, best_j */
/*5*/            max_sum = this_sum;
/*6*/            best_i = i;
/*7*/            best_j = j;
        }
        else
/*8*/        if( this_sum < 0 )
        {
/*9*/            i = j + 1;
/*10*/           this_sum = 0;
        }
    }
/*11*/   return( max_sum );
}
```

It should be clear why the time bound is correct, but it takes a little thought to see why the algorithm actually works. This is left to the reader. An extra advantage of this algorithm is that it makes only one pass through the data, and once *a*[*i*] is read and processed, it does not need to be remembered. Thus, if the array is on a disk or tape, it can be read sequentially, and there is no need to store any part of it in main memory. Furthermore, at any point in time, the algorithm can correctly give an answer to the subsequence problem for the data it has already read (the other algorithms do not share this property). Algorithms that can do this are called *on-line algorithms*. An on-line algorithm that requires only constant space and runs in linear time is just about as good as possible.

The most confusing aspect of analyzing algorithms probably centers around the logarithm. We have already seen that some divide-and-conquer algorithms will run in *O*(*n* log *n*) time. Besides divide-and-conquer algorithms, the most frequent appearance of logarithms centers around the following general rule: *An algorithm is O*(log *n*) *if it takes constant* (*O*(1)) *time to cut the problem size by a fraction* (*which is usually* ½). On the other hand, if constant time is required to merely reduce the problem by a constant *amount* (such as to make the problem smaller by 1), then the algorithm is *O*(*n*).
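As a tiny illustration of the rule (a fragment of ours, not from the text), the following loop halves *n* on every iteration, so the body executes about log *n* times:

```c
/* Count how many halvings it takes to reduce n to 1.  The body is
   O(1) and the problem size is cut in half each time, so the whole
   loop is O(log n). */
int
halvings( unsigned int n )
{
    int count = 0;

    while( n > 1 )
    {
        n /= 2;
        count++;
    }
    return count;
}
```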

The first example is usually referred to as binary search:

```c
int
binary_search( input_type a[], input_type x, unsigned int n )
{
    int low, mid, high;    /* Can't be unsigned; why? */

/*1*/    low = 0; high = n - 1;
/*2*/    while( low <= high )
    {
/*3*/        mid = (low + high)/2;
/*4*/        if( a[mid] < x )
/*5*/            low = mid + 1;
        else
/*6*/        if( a[mid] > x )
/*7*/            high = mid - 1;
        else
/*8*/            return( mid );    /* found */
    }
/*9*/    return( NOT_FOUND );    /* NOT_FOUND is defined as -1 */
}
```
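A quick way to convince yourself of both the correctness and the *O*(log *n*) bound is to run the routine. The sketch below is a self-contained copy with *input_type* and *NOT_FOUND* filled in as plausible assumptions (int and -1); the text defines these elsewhere:

```c
#define NOT_FOUND (-1)

typedef int input_type;    /* stand-in for the book's input_type */

/* Binary search over a sorted array; each iteration discards half
   of the remaining range.  Note the second comparison is a[mid] > x. */
int
binary_search( input_type a[], input_type x, unsigned int n )
{
    int low = 0, high = n - 1, mid;

    while( low <= high )
    {
        mid = (low + high) / 2;
        if( a[mid] < x )
            low = mid + 1;
        else if( a[mid] > x )
            high = mid - 1;
        else
            return mid;    /* found */
    }
    return NOT_FOUND;
}
```

Searching for 7 in {1, 3, 5, 7, 9} probes positions 2 and then 3, returning index 3 after only two iterations.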

A second example is Euclid's algorithm for computing the greatest common divisor. The greatest common divisor (*gcd*) of two integers is the largest integer that divides both. Thus, *gcd*(50, 15) = 5. The algorithm in Figure 2.10 computes *gcd*(*m*, *n*), assuming *m* ≥ *n*. (If *n* > *m*, the first iteration of the loop swaps them.)

```c
unsigned int
gcd( unsigned int m, unsigned int n )
{
    unsigned int rem;

/*1*/    while( n > 0 )
    {
/*2*/        rem = m % n;
/*3*/        m = n;
/*4*/        n = rem;
    }
/*5*/    return( m );
}
```
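As a check of the routine, trace gcd(1989, 1590): the remainder sequence is 399, 393, 6, 3, 0, so the answer is 3. The sketch below repeats the function so it can be exercised directly:

```c
/* Euclid's algorithm, as in the figure above.  For gcd(1989, 1590)
   the successive remainders are 399, 393, 6, 3, 0, giving 3. */
unsigned int
gcd( unsigned int m, unsigned int n )
{
    unsigned int rem;

    while( n > 0 )
    {
        rem = m % n;
        m = n;
        n = rem;
    }
    return m;
}
```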

Our last example in this section deals with raising an integer to a power (which is also an integer). Numbers that result from exponentiation are generally quite large, so an analysis only works if we can assume that we have a machine that can store such large integers (or a compiler that can simulate this). We will count the number of multiplications as the measurement of running time.

```c
int
pow( int x, unsigned int n )
{
/*1*/    if( n == 0 )
/*2*/        return 1;
/*3*/    if( n == 1 )
/*4*/        return x;
/*5*/    if( even( n ) )
/*6*/        return( pow( x*x, n/2 ) );
    else
/*7*/        return( pow( x*x, n/2 ) * x );
}
```

x^{3} = (x^{2})x, x^{7} = (x^{3})^{2}x, x^{15} = (x^{7})^{2}x, x^{31} = (x^{15})^{2}x, x^{62} = (x^{31})^{2}

Line 7 can also be written as

```c
/*7*/        return( pow( x, n-1 ) * x );
```

without affecting the correctness of the program. One might also consider the following alternatives for line 6:

```c
/*6a*/       return( pow( pow( x, 2 ), n/2 ) );
/*6b*/       return( pow( pow( x, n/2 ), 2 ) );
/*6c*/       return( pow( x, n/2 ) * pow( x, n/2 ) );
```
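One way to see the *O*(log *n*) multiplication bound concretely is to instrument the routine with a counter (*pow_counted* and *mult_count* are our own additions for illustration, not part of the text's routine):

```c
static int mult_count = 0;    /* instrumentation only */

/* Same logic as the pow routine above, but every multiplication is
   counted; at most about 2 log n multiplications are performed. */
int
pow_counted( int x, unsigned int n )
{
    if( n == 0 )
        return 1;
    if( n == 1 )
        return x;
    if( n % 2 == 0 )
    {
        mult_count++;                    /* the x * x */
        return pow_counted( x * x, n / 2 );
    }
    else
    {
        mult_count += 2;                 /* x * x and the final * x */
        return pow_counted( x * x, n / 2 ) * x;
    }
}
```

For example, pow_counted(3, 5) returns 243 after only 3 multiplications, comfortably under the 2 log 5 ≈ 4.6 bound.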

Once an analysis has been performed, it is desirable to see if the answer is correct and as good as possible. One way to do this is to code up the program and see if the empirically observed running time matches the running time predicted by the analysis. When *n* doubles, the running time goes up by a factor of 2 for linear programs, 4 for quadratic programs, and 8 for cubic programs. Programs that run in logarithmic time take only an additive constant longer when *n* doubles, and programs that run in *O*(*n *log *n*) take slightly more than twice as long to run under the same circumstances. These increases can be hard to spot if the lower-order terms have relatively large coefficients and *n* is not large enough. An example is the jump from *n* = 10 to *n* = 100 in the running time for the various implementations of the maximum subsequence sum problem. It also can be very difficult to differentiate linear programs from *O*(*n l*og *n*) programs purely on empirical evidence.

Another commonly used trick to verify that some program is *O*(*f*(*n*)) is to compute the values *T*(*n*)/ *f*(*n*) for a range of *n* (usually spaced out by factors of 2), where *T*(*n*) is the empirically observed running time. If *f*(*n*) is a tight answer for the running time, then the computed values converge to a positive constant. If *f*(*n*) is an over-estimate, the values converge to zero. If *f*(*n*) is an under-estimate and hence wrong, the values diverge.

For example, the following fragment counts, over all pairs 1 ≤ *i* < *j* ≤ *n*, the fraction that are relatively prime:

```c
rel = 0; tot = 0;
for( i=1; i<=n; i++ )
    for( j=i+1; j<=n; j++ )
    {
        tot++;
        if( gcd( i, j ) == 1 )
            rel++;
    }
printf( "Percentage of relatively prime pairs is %lf\n",
        ( (double) rel )/tot );
```

n        CPU time (T)    T/n^{2}     T/n^{3}        T/(n^{2} log n)
---------------------------------------------------------------
100         022          .002200     .000022000     .0004777
200         056          .001400     .000007000     .0002642
300         118          .001311     .000004370     .0002299
400         207          .001294     .000003234     .0002159
500         318          .001272     .000002544     .0002047
---------------------------------------------------------------
600         466          .001294     .000002157     .0002024
700         644          .001314     .000001877     .0002006
800         846          .001322     .000001652     .0001977
900       1,086          .001341     .000001490     .0001971
1,000     1,362          .001362     .000001362     .0001972
---------------------------------------------------------------
1,500     3,240          .001440     .000000960     .0001969
2,000     5,949          .001482     .000000740     .0001947
4,000    25,720          .001608     .000000402     .0001938

This chapter gives some hints on how to analyze the complexity of programs. Unfortunately, it is not a complete guide. Simple programs usually have simple analyses, but this is not always the case. As an example, we shall see, later in the text, a sorting algorithm (Shellsort, Chapter 7) and an algorithm for maintaining disjoint sets (Chapter 8) each of which requires about 20 lines of code. The analysis of Shellsort is still not complete, and the disjoint set algorithm has an analysis that is extremely difficult and requires pages and pages of intricate calculations. Most of the analysis that we will encounter here will be simple and involve counting through loops.

An interesting kind of analysis, which we have not touched upon, is lower-bound analysis. We will see an example of this in Chapter 7, where it is proved that any algorithm that sorts by using only comparisons requires Ω(*n* log *n*) comparisons in the worst case. Lower-bound proofs are generally the most difficult, because they apply not to an algorithm but to a class of algorithms that solve a problem.

We close by mentioning that some of the algorithms described here have real-life application. The *gcd* algorithm and the exponentiation algorithm are both used in cryptography. Specifically, a 200-digit number is raised to a large power (usually another 200-digit number), with only the low 200 or so digits retained after each multiplication. Since the calculations require dealing with 200-digit numbers, efficiency is obviously important. The straightforward algorithm for exponentiation would require about 10^{200} multiplications, whereas the algorithm presented requires only about 1,200.
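Keeping "only the low 200 or so digits" after each multiplication is just arithmetic modulo a fixed number. A minimal sketch of that idea with machine-sized integers follows (*mod_pow* is our own illustrative name; real cryptosystems use multi-precision arithmetic):

```c
/* Fast exponentiation with every intermediate product reduced
   modulo m, so values never grow without bound.  Performs O(log n)
   multiplications, as in the routine of Section 2.4.4. */
unsigned long
mod_pow( unsigned long x, unsigned long n, unsigned long m )
{
    unsigned long result = 1;

    x %= m;
    while( n > 0 )
    {
        if( n % 2 == 1 )               /* odd: fold in one factor of x */
            result = (result * x) % m;
        x = (x * x) % m;               /* square */
        n /= 2;
    }
    return result;
}
```

For instance, mod_pow(2, 10, 1000) keeps only the low three digits of 2^{10} = 1024 and returns 24.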

2.1 Order the following functions by growth rate: *n*, √*n*, *n*^{1.5}, *n*^{2}, *n* log *n*, *n* log log *n*, *n* log^{2} *n*, *n* log(*n*^{2}), 2/*n*, 2^{*n*}, 2^{*n*/2}. Indicate which functions grow at the same rate.

2.2 Suppose *T*_{l}(*n*) = *O*(*f*(*n*)) and *T*_{2}(*n*) = *O*(*f*(*n*)). Which of the following are true?

a. *T*_{1}(*n*) + *T*_{2}(*n*) = *O*(*f*(*n*))

b. *T*_{1}(*n*) - *T*_{2}(*n*) = o(*f*(*n*))

2.3 Which function grows faster: *n* log *n* or *n*^{1+ε}, ε > 0?

2.4 Prove that for any constant, *k*, log^{*k*} *n* = o(*n*).

2.5 Find two functions *f*(*n*) and *g*(*n*) such that neither *f*(*n*) = *O*(*g*(*n*)) nor *g*(*n*) = *O*(*f*(*n*)).

2.6 For each of the following six program fragments:

a. Give an analysis of the running time (Big-Oh will do).

b. Implement the code in the language of your choice, and give the running time for several values of *n*.

c. Compare your analysis with the actual running times.

(1)

```c
sum = 0;
for( i=0; i<n; i++ )
    sum++;
```

(2)

```c
sum = 0;
for( i=0; i<n; i++ )
    for( j=0; j<n; j++ )
        sum++;
```

(3)

```c
sum = 0;
for( i=0; i<n; i++ )
    for( j=0; j<n*n; j++ )
        sum++;
```

(4)

```c
sum = 0;
for( i=0; i<n; i++ )
    for( j=0; j<i; j++ )
        sum++;
```

(5)

```c
sum = 0;
for( i=0; i<n; i++ )
    for( j=0; j<i*i; j++ )
        for( k=0; k<j; k++ )
            sum++;
```

(6)

```c
sum = 0;
for( i=1; i<n; i++ )
    for( j=1; j<i*i; j++ )
        if( j%i == 0 )
            for( k=0; k<j; k++ )
                sum++;
```

2.7 Suppose you need to generate a *random* permutation of the first *n* integers. For example, {4, 3, 1, 5, 2} and {3, 1, 4, 2, 5} are legal permutations, but {5, 4, 1, 2, 1} is not, because one number (1) is duplicated and another (3) is missing. This routine is often used in simulation of algorithms. We assume the existence of a random number generator, *rand_int*(*i*, *j*), which generates integers between *i* and *j* with equal probability. Here are three algorithms:

3. Fill the array such that *a*[*i*] = *i* + 1. Then

```c
for( i=1; i<n; i++ )
    swap( &a[i], &a[ rand_int( 0, i ) ] );
```

b. Give as accurate (Big-Oh) an analysis as you can of the *expected* running time of each algorithm.

d. Compare your analysis with the actual running times.

e. What is the worst-case running time of each algorithm?

2.8 Complete the table in Figure 2.2 with estimates for the running times that were too long to simulate. Interpolate the running times for these algorithms and estimate the time required to compute the maximum subsequence sum of one million numbers. What assumptions have you made?

2.9 How much time is required to compute the polynomial *f*(*x*) = Σ_{*i*=0}^{*n*} *a*_{*i*}*x*^{*i*}:

a. using a simple routine to perform exponentiation?

b. using the routine in Section 2.4.4?

2.10 Consider the following algorithm (known as Horner's rule) to evaluate *f*(*x*) = Σ_{*i*=0}^{*n*} *a*_{*i*}*x*^{*i*}:

```c
poly = 0;
for( i=n; i>=0; i-- )
    poly = x * poly + a[i];
```

a. Show how the steps are performed by this algorithm for *x* = 3, *f*(*x*) = 4*x*^{4} + 8*x*^{3} + *x* + 2.

b. Explain why this algorithm works.

c. What is the running time of this algorithm?

2.11 Give an efficient algorithm to determine if there exists an integer *i* such that *a*_{*i*} = *i* in an array of integers *a*_{1} < *a*_{2} < . . . < *a*_{*n*}. What is the running time of your algorithm?

2.12 Give efficient algorithms (along with running time analyses) to

a. find the minimum subsequence sum

*b. find the minimum *positive* subsequence sum

*c. find the maximum subsequence *product*

2.13 a. Write a program to determine if a positive integer, *n*, is prime.

b. In terms of *n*, what is the worst-case running time of your program?

c. Let *B* equal the number of bits in the binary representation of *n*. What is the value of *B*?

d. In terms of *B*, what is the worst-case running time of your program?

e. Compare the running times to determine if a 20-bit and a 40-bit number are prime.

f. Is it more reasonable to give the running time in terms of *n* or *B*? Why?

*2.14 The Sieve of Eratosthenes is a method used to compute all primes less than *n*. We begin by making a table of integers 2 to *n*. We find the smallest integer, *i*, that is not crossed out, print *i*, and cross out *i*, 2*i*, 3*i*, . . . . When *i* > √*n*, the algorithm terminates. What is the running time of this algorithm?

2.15 Show that *x*^{62} can be computed with only eight multiplications.

2.16 Write the fast exponentiation routine without recursion.

2.17 Give a precise count on the number of multiplication used by the fast exponentiation routine. (*Hint:* Consider the binary representation of *n*.)

2.18 Programs *A* and *B* are analyzed and found to have worst-case running times no greater than 150*n* log_{2} *n* and *n*^{2}, respectively. Answer the following questions, if possible:

a. Which program has the better guarantee on the running time, for large values of *n *(*n *> 10,000)?

b. Which program has the better guarantee on the running time, for small values of *n (n* < 100)?

c. Which program will run faster *on average* for *n* = 1,000?

d. Is it possible that program *B* will run faster than program *A* on *all* possible inputs ?

2.19 A majority element in an array,* A,* of size *n* is an element that appears more than *n*/2 times (thus, there is at most one). For example, the array

3, 3, 4, 2, 4, 4, 2, 4, 4

has a majority element (4), whereas the array

3, 3, 4, 2, 4, 4, 2, 4

does not. If there is no majority element, your algorithm should indicate this.

a. How does the recursion terminate?

*b. How is the case where n is odd handled?

*c. What is the running time of the algorithm?

d. How can we avoid using an extra array B?

*e. Write a program to compute the majority element.

*2.20 Why is it important to assume that integers in our computer model have a fixed size?

2.21 Consider the word puzzle problem described in Chapter 1. Suppose we fix the size of the longest word to be 10 characters.

2.22 Suppose that line 5 in the binary search routine had the expression *low = mid* instead of* low = mid + *1. Would the routine still work?

2.23 Suppose that lines 6 and 7 in Algorithm 3 (Fig. 2.7) are replaced by

```c
/*6*/    max_left_sum = max_sub_sum( a, left, center-1 );
/*7*/    max_right_sum = max_sub_sum( a, center, right );
```

Would the routine still work?

*2.24 The inner loop of the cubic maximum subsequence sum algorithm performs *n*(*n *+ 1)(*n *+ 2)/6 iterations of the innermost code. The quadratic version performs* n*(*n *+ 1)/2 iterations. The linear version performs* n *iterations. What pattern is evident? Can you give a combinatoric explanation of this phenomenon?

Big-Oh, big-omega, big-theta, and little-oh notation were advocated by Knuth in [8]. There is still not a uniform agreement on the matter, especially when it comes to using Θ( ). Many people prefer to use *O*( ), even though it is less expressive. Additionally, *O*( ) is still used in some corners to express a lower bound, when Ω( ) is called for.

1. A. V. Aho, J. E. Hopcroft, and J. D. Ullman, *The Design and Analysis of Computer Algorithms,* Addison-Wesley, Reading, Mass., 1974.

2. J. L. Bentley, *Writing Efficient Programs,* Prentice Hall, Englewood Cliffs, N.J., 1982.

3. J. L. Bentley, *Programming Pearls,* Addison-Wesley, Reading, Mass., 1986.

4. J. L. Bentley, *More Programming Pearls,* Addison-Wesley, Reading, Mass., 1988.

5. D. E. Knuth, *The Art of Computer Programming, Vol 1: Fundamental Algorithms, *2d ed., Addison-Wesley, Reading, Mass., 1973.

6. D. E. Knuth,* The Art of Computer Programming, Vol 2: Seminumerical Algorithms, *2d ed., Addison-Wesley, Reading, Mass., 1981.

7. D. E. Knuth, *The Art of Computer Programming, Vol 3: Sorting and Searching, *Addison-Wesley, Reading, Mass., 1975.

8. D. E. Knuth, "Big Omicron and Big Omega and Big Theta," ACM SIGACT News, 8 (1976), 18-23.