Table of Contents
Introduction
Big O Notation
Constant Time Complexity: O(1)
Linear Time Complexity: O(n)
Logarithmic Time Complexity: O(log n)
Quadratic Time Complexity: O(n^2)
Exponential Time Complexity: O(2^n)
Factorial Time Complexity: O(n!)
Conclusion
Introduction:
What is Time Complexity? Why Do We Need It?
Time complexity is a measure of how an algorithm's running time grows with the size of its input, usually stated for the worst case. Understanding time complexity matters because it lets us compare different algorithms and choose the best one for a given problem.
In this article, we will learn about time complexity and Big O notation in JavaScript. We will see some common time complexity classes and how to calculate them for different algorithms. We will also see some examples of code and descriptive tables to illustrate the concepts.
Big O Notation:
Big O notation is a mathematical way of expressing the time complexity of an algorithm. It does this by classifying algorithms according to their worst-case time complexity.
The worst-case time complexity of an algorithm is the maximum amount of time the algorithm can take to run for any input of a given size. Big O expresses an upper bound on this running time: as the input grows, the running time grows no faster than the stated rate (up to a constant factor).
The following table shows some of the most common Big O time complexity classes:

| Class | Description | Example |
| --- | --- | --- |
| O(1) | Constant time | Accessing an array element |
| O(log n) | Logarithmic time | Binary search |
| O(n) | Linear time | Looping through an array |
| O(n^2) | Quadratic time | Bubble sort |
| O(2^n) | Exponential time | Recursive Fibonacci |
| O(n!) | Factorial time | Generating all permutations |
Constant Time Complexity: O(1)
Constant time complexity means that the running time of an algorithm does not depend on the input size. It always takes the same amount of time regardless of how large or small the input is. An example of an algorithm with constant time complexity is accessing an element in an array by its index.
```javascript
// Accessing an element in an array by its index
function getElement(array, index) {
  return array[index]; // O(1)
}
```
The table below shows how the running time of this algorithm remains constant as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 10 | 1 |
| 100 | 1 |
| 1000 | 1 |
Constant time complexity is the most desirable type of complexity, because it means that the algorithm will run just as fast regardless of the input size.
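As a quick sanity check, the function above can be exercised like this (the sample array is a hypothetical addition for illustration):

```javascript
// Accessing by index, as in the getElement function above.
function getElement(array, index) {
  return array[index]; // O(1)
}

// Hypothetical sample data: the lookup cost is the same
// whether the array holds five elements or five million.
const primes = [2, 3, 5, 7, 11];
console.log(getElement(primes, 2)); // 5
```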
Linear Time Complexity: O(n)
Linear time complexity means that the running time of an algorithm is proportional to the input size. It increases linearly as the input size increases. An example of an algorithm with linear time complexity is looping through an array and performing some operation on each element.
```javascript
// Looping through an array and performing some operation on each element
function loopArray(array) {
  for (let i = 0; i < array.length; i++) { // the loop runs n times: O(n)
    console.log(array[i]); // O(1) per iteration
  }
}
```
The table below shows how the running time of this algorithm increases linearly as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
Linear time complexity is still quite efficient and is acceptable for most problems. For algorithms that must process very large amounts of data, however, even a single pass over the input can become slow, so it is worth keeping the input size in mind.
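Another common O(n) pattern is accumulating a result over every element. This `sumArray` helper is a sketch added for illustration, not part of the original example:

```javascript
// Summing an array: the loop body runs once per element, so the
// total work grows linearly with the array's length.
function sumArray(array) {
  let total = 0; // O(1)
  for (let i = 0; i < array.length; i++) { // n iterations: O(n)
    total += array[i]; // O(1) per iteration
  }
  return total; // overall: O(n)
}

console.log(sumArray([1, 2, 3, 4])); // 10
```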
Logarithmic Time Complexity: O(log n)
Logarithmic time complexity means that the running time of an algorithm is proportional to the logarithm of the input size. It increases slowly as the input size increases. An example of an algorithm with logarithmic time complexity is binary search, which is a technique to find an element in a sorted array by repeatedly dividing the array into two halves and comparing the middle element with the target element.
```javascript
// Binary search to find an element in a sorted array
function binarySearch(array, target) {
  let left = 0; // O(1)
  let right = array.length - 1; // O(1)
  while (left <= right) { // the loop runs O(log n) times
    let mid = Math.floor((left + right) / 2); // O(1)
    if (array[mid] === target) { // O(1)
      return mid; // O(1)
    } else if (array[mid] < target) { // O(1)
      left = mid + 1; // O(1)
    } else {
      right = mid - 1; // O(1)
    }
  }
  return -1; // O(1)
}
```
The table below shows how the running time of this algorithm increases slowly as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 10 | 4 |
| 100 | 7 |
| 1000 | 10 |
Quadratic Time Complexity: O(n^2)
Quadratic time complexity means that the running time of an algorithm is proportional to the square of the input size. It increases quadratically as the input size increases. An example of an algorithm with quadratic time complexity is nested looping through an array and performing some operation on each pair of elements.
```javascript
// Nested looping through an array and performing some operation on each pair of elements
function nestedLoopArray(array) {
  for (let i = 0; i < array.length; i++) { // O(n)
    for (let j = 0; j < array.length; j++) { // O(n)
      console.log(array[i], array[j]); // O(1), executed n * n times
    }
  }
}
```
The table below shows how the running time of this algorithm increases quadratically as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 10 | 100 |
| 100 | 10000 |
| 1000 | 1000000 |
Quadratic time complexity is undesirable for large inputs: doubling the input size quadruples the running time. It is, however, still far better than the exponential and factorial complexities described next.
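The summary table earlier lists bubble sort as the classic O(n^2) example; a minimal sketch looks like this:

```javascript
// Bubble sort: the nested loops compare adjacent pairs, giving
// roughly n * n comparisons in the worst case -- O(n^2).
function bubbleSort(array) {
  const result = array.slice(); // copy so the input is left unchanged
  for (let i = 0; i < result.length; i++) { // O(n)
    for (let j = 0; j < result.length - 1 - i; j++) { // O(n)
      if (result[j] > result[j + 1]) {
        [result[j], result[j + 1]] = [result[j + 1], result[j]]; // swap
      }
    }
  }
  return result;
}

console.log(bubbleSort([5, 1, 4, 2])); // [1, 2, 4, 5]
```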
Exponential Time Complexity: O(2^n)
Exponential time complexity, O(2^n), means that the running time of an algorithm roughly doubles with each unit increase in the input size. It grows very fast and becomes impractical for large inputs. An example of an algorithm with exponential time complexity is the recursive Fibonacci sequence, which calculates the nth Fibonacci number by calling itself twice for each smaller number.
```javascript
// Recursive Fibonacci sequence
function fibonacci(n) {
  if (n <= 1) { // O(1)
    return n; // O(1)
  } else {
    // two recursive calls per level: O(2^n) total
    return fibonacci(n - 1) + fibonacci(n - 2);
  }
}
```
The table below shows how the running time of this algorithm doubles as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 2 | 2 |
| 3 | 4 |
| 4 | 8 |
| 5 | 16 |
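The exponential blow-up comes from recomputing the same subproblems over and over. One standard remedy, shown here as an illustrative variant rather than part of the original code, is memoization, which caches each result and brings the cost down to O(n):

```javascript
// Memoized Fibonacci: each value of n is computed only once,
// so the total work is O(n) instead of O(2^n).
function fibonacciMemo(n, cache = new Map()) {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n); // cache hit: O(1)
  const value = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
  cache.set(n, value);
  return value;
}

console.log(fibonacciMemo(10)); // 55
```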
Factorial Time Complexity: O(n!)
O(n!) is factorial time complexity, which means that the running time of an algorithm is proportional to the factorial of the input size. It grows even faster than exponential time and becomes infeasible for all but the smallest inputs. An example of an algorithm with factorial time complexity is generating all the permutations of an array, which creates all the possible ways to arrange the elements of the array.
```javascript
// Generating all the permutations of an array
function permute(array) {
  let result = []; // O(1)
  if (array.length === 0) { // O(1)
    return [[]]; // base case: one empty permutation
  } else {
    for (let i = 0; i < array.length; i++) { // O(n)
      let element = array[i]; // O(1)
      let rest = array.slice(0, i).concat(array.slice(i + 1)); // O(n)
      let permutations = permute(rest); // recursive call
      for (let j = 0; j < permutations.length; j++) { // (n-1)! iterations
        let permutation = [element].concat(permutations[j]); // O(n)
        result.push(permutation); // O(1)
      }
    }
    return result; // overall: O(n!)
  }
}
```
The table below shows how the running time of this algorithm increases factorially as the input size increases.
| Input Size | Running Time |
| --- | --- |
| 1 | 1 |
| 2 | 2 |
| 3 | 6 |
| 4 | 24 |
| 5 | 120 |
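Running a compact restatement of the permutation generator on small inputs confirms the factorial growth of the result:

```javascript
// Compact restatement of the permute function above.
function permute(array) {
  if (array.length === 0) return [[]]; // one empty permutation
  const result = [];
  for (let i = 0; i < array.length; i++) {
    const element = array[i];
    const rest = array.slice(0, i).concat(array.slice(i + 1));
    for (const permutation of permute(rest)) {
      result.push([element].concat(permutation));
    }
  }
  return result;
}

console.log(permute([1, 2]).length); // 2  (= 2!)
console.log(permute([1, 2, 3]).length); // 6  (= 3!)
console.log(permute([1, 2, 3, 4]).length); // 24 (= 4!)
```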
Conclusion:
In this article, we learned about time complexity and Big O notation in JavaScript. We saw some common time complexity classes and how to calculate them for different algorithms. We also saw some examples of code and descriptive tables to illustrate the concepts.
Time complexity and Big O notation are powerful tools for analyzing algorithms, but they are not the only factors to consider when choosing an algorithm. Other factors, such as space complexity, readability, maintainability, and correctness, may also be important. The best algorithm for a given problem will depend on the specific requirements of the application.