Algorithm Analysis
5/25/2021
Megha V
Research Scholar
Dept. of IT
Kannur University
How to compare Algorithms
• The running time of a given algorithm can be expressed as a function of input size n
• Comparison of algorithms can be done in terms of f(n)
• These kinds of comparisons are independent of the machine.
Rate of Growth
The rate at which the running time increases as a function of input is called rate of growth.
Time      Name           Example
1         Constant       Linear search (best case)
log n     Logarithmic    Binary search
n         Linear         Linear search
n log n   Linearithmic   Merge sort
n²        Quadratic      Bubble sort
n³        Cubic          Matrix multiplication
2ⁿ        Exponential    Travelling salesman problem
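To see these growth rates side by side, a small illustrative sketch (the function table and names are mine) tabulating each class for a few input sizes:

```python
import math

# Illustrative sketch: tabulate common complexity classes for a few
# input sizes to see how quickly they pull apart.
growth = {
    "1":       lambda n: 1,
    "log n":   lambda n: math.log2(n),
    "n":       lambda n: n,
    "n log n": lambda n: n * math.log2(n),
    "n^2":     lambda n: n ** 2,
    "n^3":     lambda n: n ** 3,
    "2^n":     lambda n: 2 ** n,
}

for name, f in growth.items():
    row = [round(f(n)) for n in (2, 8, 32)]
    print(f"{name:8}", row)
```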
Algorithm analysis:
Importance of algorithm analysis
• Analysis of algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires.
• It involves two phases:
1. A priori analysis
2. A posteriori analysis
• A priori analysis: a theoretical analysis. The efficiency of an algorithm is measured by assuming that factors such as processor speed are constant and have no effect on the implementation.
• A posteriori analysis: statistics about the algorithm's time and space usage are collected during execution.
• This is a challenging area which sometimes requires great mathematical skill.
Analysis of algorithms
Efficiency of algorithm depends on
1.Space efficiency
2.Time efficiency
Space Efficiency: the amount of memory required to run the program
completely and efficiently.
Space complexity depends on:
- Program space: space required to store the compiled program.
- Data space: space required to store constants and variables.
- Stack space: space required to store return addresses, passed parameters, etc.
Time complexity
• The time T(P) taken by a program is the sum of its compile time and its run time.
• We may assume that a compiled program will run several times without
recompilation.
• Consequently, we consider only the run time of the program. This runtime is
denoted tP.
Types of Analysis
1. Best case
2. Worst case
3. Average case
Best case: Defines the input for which the algorithm takes the lowest time;
the input is one for which the algorithm runs fastest.
Worst case: Defines the input for which the algorithm takes the largest time;
the input is one for which the algorithm runs slowest.
Average case: Provides a prediction about the running time of the algorithm,
assuming that the input is random.
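Linear search exhibits all three cases; a minimal sketch (the comparison counter is added for illustration):

```python
def linear_search(arr, target):
    """Return (index, comparisons); the counter is added so the
    best and worst cases can be observed directly."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
# Best case: target is the first element -> 1 comparison.
# Worst case: target is absent -> n comparisons.
```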
Contd..
Time efficiency: a measure of how fast the algorithm executes
Time complexity depends on:
• Speed of computer
• Choice of programming language
• Compiler used
• Choice of algorithm
• Number of inputs/outputs
• Operations count and Steps count
Time efficiency mainly depends on the input size n, and is hence expressed as a function of n.
Order of growth
The efficiency of an algorithm can be analyzed by considering the highest-order term in n.
Asymptotic notations
• Asymptotic notation makes a machine-independent measurement of time complexity possible.
• Asymptotic complexity gives an idea of how rapidly the space or time
requirement grows as the problem size increases.
• Asymptotic complexity is a measure of an algorithm, not of a problem.
• The complexity of an algorithm is a function relating the input length to the
number of steps (time complexity) or storage locations (space complexity).
For example, the running time may be expressed as a function of the input size 'n' as
follows:
f(n) = n⁴ + 100n² + 10n + 50 (running time)
Big O notation
• Big oh notation is denoted by 'O'. It is used to describe the efficiency of an algorithm.
• It represents an upper bound on an algorithm's running time. Using Big O notation, we
can give the largest amount of time taken by the algorithm to complete.
Definition: Let f(n) and g(n) be two non-negative functions. We say that f(n) is
O(g(n)) if and only if there exist positive constants 'c' and 'n0' such that
f(n) ≤ c*g(n) for all n ≥ n0.
Here, g(n) is an upper bound for f(n).
Big O notation (contd.)
Ex: Let f(n) = 2n⁴ + 5n² + 2n + 3
         ≤ 2n⁴ + 5n⁴ + 2n⁴ + 3n⁴
         = (2 + 5 + 2 + 3)n⁴
         = 12n⁴
So f(n) ≤ 12n⁴ for n ≥ 1.
This implies g(n) = n⁴,
c = 12 and n0 = 1.
Hence f(n) = O(n⁴).
The above definition states that the function
'f' is at most 'c' times the function 'g' when 'n' is greater than or equal to n0.
This notation provides an upper bound for the function 'f', i.e. the function g(n) is an upper
bound on the value of f(n) for all n ≥ n0.
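The witnesses c = 12, n0 = 1 from this example can be spot-checked numerically (a sketch, not a proof, since only finitely many n are tested):

```python
def f(n):
    return 2 * n**4 + 5 * n**2 + 2 * n + 3

def g(n):
    return n**4

c, n0 = 12, 1
# Check the Big-O inequality f(n) <= c*g(n) over a range of n >= n0.
assert all(f(n) <= c * g(n) for n in range(n0, 1000))
```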
Big-Omega notation
• Big omega notation is denoted by 'Ω'. It is used to represent the lower bound of an algorithm's
running time.
• Using Big omega notation we can give the smallest amount of time taken by the algorithm to
complete.
Definition:
f(n) = Ω(g(n)) if and only if there exist positive constants 'c' and n0 such that
f(n) ≥ c*g(n) for all n ≥ n0.
Big-Omega notation (Contd.)
Example:
Let f(n) = 2n⁴ + 5n² + 2n + 3
         ≥ 2n⁴ (as n → ∞, the lower-order terms become insignificant)
f(n) ≥ 2n⁴ for n ≥ 1
g(n) = n⁴, c = 2 and n0 = 1
Hence f(n) = Ω(n⁴).
Big Theta notation
• The big theta notation is denoted by 'θ'.
• It bounds an algorithm's running time from both above and below.
Definition:
Let f(n) and g(n) be two non-negative functions. We say that f(n) is θ(g(n)) if and only
if there exist positive constants 'c1', 'c2' and 'n0' such that
c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
• The above definition states that the function f(n) lies between 'c1' times the function g(n) and
'c2' times the function g(n).
• In other words, theta notation says that f(n) is both O(g(n)) and Ω(g(n)) for all n ≥ n0.
Big Theta notation (contd.)
Example:
f(n) = 2n⁴ + 5n² + 2n + 3
2n⁴ ≤ 2n⁴ + 5n² + 2n + 3 ≤ 12n⁴
2n⁴ ≤ f(n) ≤ 12n⁴ for n ≥ 1
g(n) = n⁴
c1 = 2, c2 = 12 and n0 = 1
Hence f(n) = θ(n⁴).
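The two-sided theta bound with c1 = 2 and c2 = 12 can likewise be spot-checked (a sketch over finitely many n):

```python
def f(n):
    return 2 * n**4 + 5 * n**2 + 2 * n + 3

def g(n):
    return n**4

c1, c2, n0 = 2, 12, 1
# Check c1*g(n) <= f(n) <= c2*g(n) over a range of n >= n0.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
```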
Little Oh (o) notation
• It is a method of expressing an upper bound on the growth rate of an algorithm's running time
which is not asymptotically tight.
• Also called a loose upper bound.
Definition:
Given functions f(n) and g(n), we say that f(n) is o(g(n)) if, for every positive constant c, there
exists a constant n0 such that f(n) < c*g(n) for all n ≥ n0.
• The main difference between Big O and little o lies in the quantifier on c:
• In Big O, f(n) = O(g(n)) holds if the bound 0 ≤ f(n) ≤ c*g(n) is true for some positive value
of c.
• In little o, the bound f(n) < c*g(n) must hold for every constant c > 0.
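The "for every c" quantifier can be illustrated with f(n) = n and g(n) = n², where f(n) = o(g(n)): for each c a suitable n0 exists (the helper witness_n0 is mine, for illustration):

```python
# For every c > 0 we can exhibit an n0 with n < c * n^2 for all n >= n0,
# showing n = o(n^2). The bound n < c*n^2 holds exactly when n > 1/c.
def witness_n0(c):
    return int(1 / c) + 1

for c in (10.0, 1.0, 0.1, 0.01):
    n0 = witness_n0(c)
    assert all(n < c * n * n for n in range(n0, n0 + 1000))
```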
Little omega (ω) notation
• It is a method of expressing a lower bound on the growth rate of an algorithm's running time which
is not asymptotically tight.
• Little omega is also called a loose lower bound.
Definition:
Given functions f(n) and g(n), we say that f(n) is ω(g(n)) if, for every positive constant c, there
exists a constant n0
such that f(n) > c*g(n) for all n ≥ n0.
• The main difference between Big Omega and little omega lies in the quantifier on c:
• In Big Omega, f(n) = Ω(g(n)) holds if the bound 0 ≤ c*g(n) ≤ f(n) is true for some positive value of
c.
• In little omega, the bound f(n) > c*g(n) must hold for every constant c > 0.
Analysing algorithm control structures
We start by considering how to count operations in for-loops.
• First of all, we should know the number of iterations of the loop; say it is x
• Then the loop condition is executed x + 1 times.
• Each of the statements in the loop body is executed x times
• The loop-index update statement is executed x times
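These counts can be verified by instrumenting an equivalent while-loop (a sketch; the counter names are mine):

```python
def count_loop(x):
    """Run a loop of x iterations and count condition tests,
    body executions, and index updates."""
    cond_checks = body_runs = updates = 0
    i = 0
    while True:
        cond_checks += 1      # loop condition tested
        if not (i < x):
            break
        body_runs += 1        # loop body executed
        i += 1                # index update executed
        updates += 1
    return cond_checks, body_runs, updates

# For x iterations: x + 1 condition tests, x body runs, x updates.
```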
Analysing algorithm control structures contd.
Loops: Complexity is determined by the number of iterations in the loop times the complexity of the body
of the loop
EXAMPLE
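The example on the original slide did not survive extraction; a minimal substitute: a single loop doing constant work per iteration runs its body n times, so the segment is O(n).

```python
def sum_first_n(n):
    total = 0
    for i in range(1, n + 1):  # body executes n times
        total += i             # constant work per iteration
    return total               # overall O(n)
```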
Analysing algorithm control structures contd.
Nested independent loops: Complexity of inner loop * complexity of outer loop
Example
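The slide's example image did not survive extraction; a minimal substitute for the independent case: the inner loop runs m times for each of the n outer iterations, so the body executes n * m times, i.e. O(n * m) (O(n²) when both bounds are n).

```python
def count_pairs(n, m):
    count = 0
    for i in range(n):        # outer loop: n iterations
        for j in range(m):    # inner loop: m iterations, independent of i
            count += 1
    return count              # n * m body executions
```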
Nested dependent loops: Example:
The number of repetitions of the inner
loop is: 1 + 2 + 3 + . . . + n = n(n+1)/2
Hence the segment is O(n²).
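A sketch of such a dependent nest, confirming the 1 + 2 + ... + n = n(n+1)/2 count:

```python
def triangle_count(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):  # inner bound depends on i
            count += 1
    return count                   # n(n + 1)/2 -> O(n^2)
```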
Analysing algorithm control structures contd.
If Statement: O(max(O(condition1), O(condition2), . . ., O(branch1), O(branch2), . . ., O(branchN)))
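A sketch of this rule (the function is mine, for illustration): the statement's cost is the condition test plus the costlier branch, so here the whole if is O(n).

```python
def describe(arr):
    if len(arr) == 0:      # O(1) condition
        return "empty"     # O(1) branch
    else:
        return sum(arr)    # O(n) branch dominates -> statement is O(n)
```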
Analysing algorithm control structures contd.
Switch: Take the complexity of the most expensive case including the default case
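Python has no switch statement, so this sketch uses an if/elif chain as the analogue; the worst-case cost is that of the most expensive case (here the O(n) sum), including the default.

```python
def dispatch(op, arr):
    if op == "first":      # O(1) case
        return arr[0]
    elif op == "total":    # O(n) case -> worst case dominates
        return sum(arr)
    else:                  # default case, O(1)
        return None
```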
