Do you want to know how to calculate the time complexity and Big O Notation of an algorithm? This article walks through the methods you can use to do exactly that.

You’ve probably heard about Time and Space Complexity or Big O Notation on your route to becoming a software engineer. If you haven’t, read up on Big O Notation and why you should care.

Understanding Time and Space Complexity is critical for any aspiring developer. For starters, this is a common topic in coding interviews, so get comfortable with the basics.

More significantly, knowing how to calculate and analyze Time and Space Complexity will help you become a better developer by highlighting disparities in efficiency and helping you design better solutions to the challenges you work on.

Big O Notation is the language used to describe and understand these concepts. To develop the optimal code, we need to know how to calculate Big O in both space and time. This post will solely describe how to calculate Big O in terms of Time Complexity.

## The Fundamentals of How to Calculate Big O Notation

Big O Notation measures how quickly runtime grows as an algorithm's (or function's) input size increases. The following five steps can help you work out the runtime and Big O of an algorithm.

- Separate your algorithm/function into discrete operations to make it easier to understand.
- Make a note of the Big O for each operation.
- Add the Big O of each operation together to get the total.
- Ignore runtime related to constants in your algorithm.
- Find the term with the highest order: this is what we refer to as the Big O of our algorithm or function.

### Example 1: Constant Time Complexity Example

Let us start with an example that has a constant runtime complexity. When an algorithm has a constant runtime complexity, its Big O is written as **O(1)**, where 1 indicates the constant.

Let us see the example code for constant runtime complexity.

```
def multi(num1, num2):
    multiVal = num1 * num2
    return multiVal
```

As you can see, the function simply returns the multiplication of two numbers. To calculate the time complexity, let us follow the steps above and break the function into operations:

- num1 is being looked up
- num2 is being looked up
- Assigning the variable multiVal to the multiplication of num1 and num2
- multiVal is being returned.

Now that we know our operations, we can calculate their Big O. The time complexity is **O(1)**, or constant time, because each operation occurs only once and does not rely on the input's size.

In other words, these operations take the same amount of time regardless of the inputs. Running multi(1, 2) takes the same time as multi(6341, 1532).

Now that we know each operation's Big O, let's add them all up. Our algorithm's Big O is O(1 + 1 + 1 + 1), which reduces to O(1) after stripping constants and identifying the highest-order term.
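The tally can be made concrete by annotating the same function with the four operations (the operation labels in the comments are illustrative, not part of Python):

```python
def multi(num1, num2):
    # Operation 1: look up num1
    # Operation 2: look up num2
    # Operation 3: assign the product to multiVal
    multiVal = num1 * num2
    # Operation 4: return multiVal
    return multiVal

# Four constant-time operations: O(1 + 1 + 1 + 1) = O(4) = O(1).
```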

How did we get from 4 down to 1? Consider what removing constants means. We want to get to the core of Big O, its highest-order term, and the 4 is fluff here: **O(4)** basically says **O(4 × 1)**. So we'll ignore the four and declare that this algorithm's Big O is **O(1)**, which means it executes in constant time.

### Example 2: Linear Time Complexity

In this example, we will look at code that runs in linear time. Linear time means that as the input size grows, the number of operations required to complete the task grows in proportion.

Let us see the example code to understand the Linear Time Complexity.

```
n = 100
for i in range(0, n):
    print(i)
```

In the above code, the for loop executes 100 times, printing the numbers from 0 to 99, and then stops. Since we are only printing up to 100, the runtime is fast, but if we wanted to print a billion numbers, the runtime would increase linearly as the input grows.

To summarize: if a line of code executes once per element of an input n, and this input can be small or large, the time complexity of the code is **O(n)**, where n is the number of times the for loop executes.
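One way to see the linear growth directly is to count loop iterations instead of printing (the count_ops helper name is illustrative):

```python
def count_ops(n):
    # Count how many times the loop body executes for input size n.
    ops = 0
    for i in range(0, n):
        ops += 1  # stands in for the print(i) call
    return ops

# The operation count grows in direct proportion to n: O(n).
```

Doubling n doubles the count, which is exactly what linear time means.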

### Example 3: Quadratic Time Complexity

This example is also simple to understand; the classic case is two nested for loops. In a quadratic-time algorithm, each value of the input triggers a linear-time operation of its own, rather than a single operation.

To simplify: if you are running nested for loops, each pass of the outer loop runs the inner loop for another linear stretch of n iterations.

```
n = 10
m = 11
for i in range(1, n):
    for j in range(1, m):
        print("{} * {} = {}".format(i, j, i*j))
```

The above code prints the multiplication table for 1 through 9, each multiplied by 1 through 10. For every value of i in the outer loop, the inner loop runs 10 times, so the code runs roughly n*m times (here, 9 × 10 = 90 iterations).
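The iteration count can be checked directly with a counter (the iterations variable is an illustrative addition, not part of the original code):

```python
n = 10
m = 11
iterations = 0
for i in range(1, n):       # 9 outer passes (i = 1 through 9)
    for j in range(1, m):   # 10 inner passes (j = 1 through 10) per outer pass
        iterations += 1

print(iterations)  # (n - 1) * (m - 1) = 90 iterations in total
```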

So, the runtime of the above code can be understood as **O(n × m)**, but since m and n are the same number or differ only minimally, we can treat it as **O(n²)**. The same logic holds true when more loops are nested, so runtimes of **O(n³)** or **O(n⁴)** are perfectly achievable.

## Wrap Up

Other common Big O functions include Logarithmic Time, Exponential Time, Factorial Time, and more. You now have a framework to calculate the time complexity of your algorithms and the knowledge required to code more efficient alternatives.
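As a taste of one of those, here is a sketch of binary search, the classic logarithmic-time algorithm (the binary_search name and interface are illustrative). Each comparison halves the remaining search space, so the loop runs at most about log₂(n) times:

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search interval: O(log n) time.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found the target at index mid
        elif sorted_items[mid] < target:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1                   # target is not in the list
```

Doubling the input size adds only one extra comparison, which is why O(log n) algorithms scale so well.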

