This content originally appeared on DEV Community and was authored by Kondwani Chikwa
What is O notation?
Big O notation describes how the running time of an algorithm grows as the size of its input grows. It is typically used to measure the efficiency of an algorithm. What do we term efficient? How quickly an algorithm can process an input of n elements.
The Big-O complexity chart at https://www.bigocheatsheet.com/ shows the different kinds of growth behavior we can anticipate from different algorithms. Below is a brief breakdown:
Let's take n to be the size of the input (a short Python sketch illustrating each class follows the list):
O(1) - Constant time: the algorithm takes the same amount of time no matter how large n is.
O(n) - Linear time: completion time grows in direct proportion to n.
O(log n) - Logarithmic time: completion time grows with the logarithm of n.
O(n^2) - Quadratic time: completion time grows with the square of n.
O(2^n) - Exponential time: completion time roughly doubles with each additional element of input.
O(n!) - Factorial time: the number of steps grows with the factorial of the input, n*(n-1)*...*1.
O(n log n) - Loglinear time: roughly log n work is performed n times, as in divide-and-conquer sorting.
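To make these classes concrete, here is a minimal sketch in Python with one illustrative function per class. The function names and inputs are my own examples for illustration, not part of the original post.

```python
def constant_time(items):
    """O(1): accessing one element takes the same time regardless of len(items)."""
    return items[0] if items else None

def linear_time(items):
    """O(n): every element is touched once, so work grows in proportion to n."""
    total = 0
    for value in items:
        total += value
    return total

def logarithmic_time(sorted_items, target):
    """O(log n): binary search halves the remaining range on each step."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

def quadratic_time(items):
    """O(n^2): a nested loop visits every pair of elements."""
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def exponential_time(n):
    """O(2^n): naive Fibonacci spawns two recursive calls per step."""
    if n < 2:
        return n
    return exponential_time(n - 1) + exponential_time(n - 2)

def factorial_time(items):
    """O(n!): generating every permutation of the input."""
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]
        for perm in factorial_time(rest):
            result.append([item] + perm)
    return result

def loglinear_time(items):
    """O(n log n): merge sort splits log n times, and each level costs O(n) to merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = loglinear_time(items[:mid])
    right = loglinear_time(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

Timing these functions on growing inputs (for example with the standard `timeit` module) makes the difference visible: the constant and logarithmic versions barely change as n grows, while the exponential and factorial ones become impractical for even modest n.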