Recently, while practising algorithms on Codewars, I came across a simple-looking challenge. I implemented it offline and it seemed to work, but when I submitted the same solution on the Codewars platform, the tests failed with a runtime timeout error as the test cases grew larger. Looking at the requirements again, I saw that the least acceptable time complexity was O(log n) (logarithmic time).
This prompted me to start my research on Big O and time complexities. Programming is not only about making your code work; keen attention to the run time is also important. I realised that learning these foundational computer science fundamentals will help you make decisions and write better code.
WHAT THEN IS BIG O NOTATION?
Big O notation is the language we use when talking about how long an algorithm takes to run (time complexity) or how much memory is used by the algorithm (space complexity).
Big-O notation can be used to express an algorithm's best, worst, and average-case execution times. For the sake of this article, we'll concentrate on Big-O as it relates to time complexity.
Note: Big O notation is a way to track how quickly the runtime grows relative to the size of the input.
A crucial point to remember is that the "running time" in Big-O notation does not directly correspond to time as we know it (e.g., seconds, milliseconds, microseconds). Certain factors, such as the processor, the language, and the runtime environment, are not taken into account when analysing running times. Time can instead be thought of as the number of operations or steps required to solve a problem of size n. Big-O notation, in other words, is a means of keeping track of how quickly the runtime grows with the size of the input.
When considering the worst-case scenario, the question becomes: what is the maximum number of operations/steps that could occur for an input of size n?
O(1) -> Constant Time
O(1) means that it takes constant time to run an algorithm, regardless of the size of the input. O(1) is the best time complexity because no matter the size of the list, it takes the same amount of time to run. E.g.:
- Math Operations
- Accessing an array via the index.
- Accessing a hash via the key.
- Pushing and popping on a stack.
- Insertion into and removal from a queue.
- Returning the value from a function.
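The code sample this section originally referred to isn't shown in this copy, so here is a minimal constant-time sketch (the name `getFirstValue` is assumed, not from the original):

```javascript
// O(1): accessing an element by index is a single step,
// regardless of how many items the array holds.
function getFirstValue(arr) {
  return arr[0]; // one operation, independent of arr.length
}

getFirstValue([1, 2, 3]); // 1
getFirstValue([9, 8, 7, 6, 5, 4, 3, 2, 1]); // still one step -> 9
```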
From the function above, you'll notice that no matter the size of n, the same amount of time is required to access the first element of the array.
O(n) -> Linear Time
O(n) means that the run time increases at the same pace as the input. In programming, one of the most common linear-time operations is traversing an array. In JavaScript, methods like forEach, map, and reduce run through the entire collection of data from start to finish.
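The original printAllValues example doesn't survive in this copy; a minimal sketch of what it plausibly looked like (the operation counter is added here for illustration):

```javascript
// O(n): the loop touches every element once, so the number of
// operations grows in direct proportion to the input size.
function printAllValues(n) {
  let operations = 0;
  n.forEach((value) => {
    console.log(value);
    operations += 1;
  });
  return operations; // equals n.length
}

printAllValues([1, 2, 3]); // logs each value -> 3 operations
```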
Take a look at the printAllValues function in the example above. The number of operations required to loop through n is proportional to the size of n. Seeing a loop is usually (but not always) a good indicator that the code you're looking at has an O(n) run time.
O(n²) -> Quadratic Time
O(n²) means that the calculation runs in quadratic time: the work grows with the square of the size of the input data. In programming, many of the basic sorting algorithms have a worst-case run time of O(n²), e.g. bubble sort, insertion sort, and selection sort. Generally speaking, two nested loops are typically an indicator that the code you're looking at has a run time of O(n²). Likewise, three nested loops would indicate a run time of O(n³).
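The countOperations example is also missing from this copy; a minimal sketch consistent with the description below (a four-element `smallData` giving 16 operations) might look like:

```javascript
// O(n²): two nested loops over the same array multiply the work,
// so n elements produce n * n operations.
function countOperations(arr) {
  let operations = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      operations += 1;
    }
  }
  return operations;
}

const smallData = [1, 2, 3, 4];
countOperations(smallData); // 16 operations (4 * 4)
```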
Let's have a look at countOperations. We have two nested loops, each of which increments the operations variable on every iteration. If n is the size of our smallData (four elements), we'll have a total of 16 operations. That's not so bad. But what if n is a massive collection? The operations will run billions of times.
O(log n) -> Logarithmic time.
O(log n) means that the running time grows in proportion to the logarithm of the input size, meaning the run time barely increases as you exponentially increase the input.
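The logTime function the calls below refer to is missing from this copy; a minimal sketch (an assumed reconstruction matching the doubling loop described afterwards) is:

```javascript
// O(log n): i doubles on every pass, so the loop runs roughly
// log2(n) times rather than n times.
function logTime(arr) {
  let loops = 0;
  for (let i = 1; i < arr.length; i *= 2) {
    loops += 1;
  }
  return loops;
}
```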
```javascript
let loopsA = logTime([1]) // 0 loops
let loopsB = logTime([1, 2]) // 1 loop
let loopsC = logTime([1, 2, 3, 4, 5, 6, 7, 8]) // 3 loops
```
Notice that the post-operation of the for-loop multiplies the current value of i by 2, so i goes from 1 to 2 to 4 to 8 to 16 to 32 … In other words, it doubles with each loop. In simple terms, the number of operations doesn’t increase very much when we increase the size of the input.
Note: O(n log n), which is often confused with O(log n), means that the running time of an algorithm is linearithmic, a combination of linear and logarithmic complexity. Sorting algorithms that use a divide-and-conquer strategy are typically linearithmic, such as merge sort, Timsort, and heapsort.
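As a sketch of the divide-and-conquer shape (this implementation is illustrative, not from the original article): the array is halved about log2(n) times, and each level of recursion merges a total of n elements, giving n log n work overall.

```javascript
// O(n log n): log2(n) levels of splitting (divide),
// each level doing O(n) merging work (conquer).
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 3, 8, 1]); // [1, 3, 5, 8]
```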
Thus far, we have only talked about Big-O by isolating a few lines of code at a time. In my next post, we will look at how to calculate the Big O notation of a whole function.
Conclusion
You can develop code without knowing the ins and outs of Big-O, as previously stated. You should also be aware that it's an approximation, as the run time you calculate by hand may not be accurate when used in production, as performance is influenced by factors such as environments, processors, and languages.
This should not, however, deter you from studying Big-O. It provides you with a broader perspective on what matters and what doesn't when building algorithms, and who wouldn't want that?