What’s Time complexity?
Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm. It does not look at the total execution time of an algorithm; rather, it gives information about the variation (increase or decrease) in execution time as the number of operations in an algorithm increases or decreases. Yes, as the definition says, the amount of time taken is a function of the length of the input only.
Time Complexity Introduction
Space and Time define any physical object in the Universe. Similarly, Space and Time complexity can define the effectiveness of an algorithm. While we know there can be more than one way to solve a problem in programming, knowing how an algorithm works efficiently can add value to the way we do programming. To find the effectiveness of a program/algorithm, knowing how to evaluate it using Space and Time complexity can make the program behave under the required optimal conditions, and in doing so, it makes us efficient programmers.
While we reserve the discussion of Space complexity for the future, let us focus on Time complexity in this post. Time is Money! In this post, you will find a gentle introduction to the Time complexity of an algorithm, and how to evaluate a program based on Time complexity.
After reading this post, you’ll know:
- Why is Time complexity so significant?
- What is Time complexity?
- How to calculate time complexity?
- Time Complexity of Sorting Algorithms
- Time Complexity of Searching Algorithms
- Space Complexity
Let’s start.
Why is Time complexity Significant?
Let us first understand what defines an algorithm.
An Algorithm, in computer programming, is a finite sequence of well-defined instructions, typically executed on a computer, to solve a class of problems or to perform a common task. According to the definition, there must be a sequence of defined instructions given to the computer to execute the algorithm or perform a specific task. In this context, variation can occur in the way the instructions are defined. There can be any number of ways a particular set of instructions can be defined to perform the same task. Also, with the option to choose any one of the available programming languages, the instructions can take any form of syntax, along with the performance boundaries of the chosen programming language. We also said the algorithm is performed on a computer, which leads to the next source of variation: the operating system, processor, hardware, etc. that are used can also influence how an algorithm performs.
Now that we know various factors can influence the outcome of an algorithm being executed, it makes sense to understand how efficiently such programs perform their task. To gauge this, we need to evaluate both the Space and Time complexity of an algorithm.
By definition, the Space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run, as a function of the length of the input, while the Time complexity of an algorithm quantifies the amount of time taken by an algorithm to run, as a function of the length of the input. Now that we know why Time complexity is significant, it is time to understand what time complexity is and how to evaluate it.
To elaborate, Time complexity measures the time taken to execute each statement of code in an algorithm. If a statement is set to execute repeatedly, then the time it contributes is N (the number of times it executes) multiplied by the time required to run that statement once.
The first algorithm is defined to print the statement just once; the time taken to execute it is shown as 0 nanoseconds. The second algorithm prints the same statement, but this time it is set to run inside a FOR loop 10 times. In the second algorithm, the time taken to execute both lines of code (the FOR loop and the print statement) is 2 milliseconds. And the time taken increases as the value of N increases, since the statement is going to be executed N times.
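The original code listing is not shown here; a minimal Python sketch of the kind of comparison described might look like the following (function names are illustrative, and the functions return strings rather than printing, to keep the timing loop quiet):

```python
import timeit

# Algorithm 1: a single statement, executed exactly once per call.
def run_once():
    return "Hello World"

# Algorithm 2: the same statement repeated inside a FOR loop.
def run_n_times(n):
    results = []
    for _ in range(n):          # this body executes n times
        results.append("Hello World")
    return results

# The single statement takes roughly the same time on every call,
# while the looped version grows with n.
t_once = timeit.timeit(run_once, number=1000)
t_loop = timeit.timeit(lambda: run_n_times(10), number=1000)
```

Increasing the argument to `run_n_times` increases its measured time roughly in proportion, which is exactly the behavior the paragraph above describes.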
Note: This code was run in a Python Jupyter Notebook on Windows 64-bit OS with an Intel Core i7 ~2.4 GHz processor. The time values above can vary with different hardware, different operating systems, and different programming languages.
By now, you may have concluded that an algorithm whose statements get executed just once will always require the same amount of time, and that when a statement is inside a loop, the time required increases with the number of times the loop is set to run. And when an algorithm has a mixture of single-execution statements and LOOP statements, or nested LOOP statements, the time increases proportionally to the number of times each statement gets executed.
This leads us to the next question: how do we determine the relationship between the input and time, given a statement in an algorithm? To define this, we are going to see how each statement gets an order of notation to describe its time complexity, which is called Big O Notation.
What are the Different Types of Time Complexity Notation Used?
As we have seen, Time complexity is given by time as a function of the length of the input, and there exists a relation between the input data size (n) and the number of operations performed (N) with respect to time. This relation is denoted as the Order of growth in Time complexity and given the notation O(n), where O is the order of growth and n is the length of the input. It is also called 'Big O Notation'.
Big O Notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input 'n', by defining the N number of operations performed on it. Thus, the time complexity of an algorithm is denoted by the combination of all the O(n) orders assigned to each line of the function.
There are different types of time complexity used; let's see them one by one:
1. Constant time – O(1)
2. Linear time – O(n)
3. Logarithmic time – O(log n)
4. Quadratic time – O(n^2)
5. Cubic time – O(n^3)
and many more complex notations, such as Exponential time, Quasilinear time, Factorial time, etc., which are used based on the type of function defined.
Constant time – O(1)
An algorithm is said to have constant time with order O(1) when it is not dependent on the input size n. Regardless of the input size n, the runtime will always be the same.
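The code referenced next is not reproduced in this copy; a minimal Python sketch of such a constant-time operation might look like this (names are illustrative):

```python
def get_first(arr):
    # Indexing is a single operation regardless of len(arr): O(1)
    return arr[0]

small = [10, 20, 30]
large = list(range(1_000_000))

# Both calls perform the same single step, whatever the array length.
first_small = get_first(small)
first_large = get_first(large)
```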
The above code shows that regardless of the length of the array (n), the runtime to get the first element of an array of any length is the same. If the run time is considered as 1 unit of time, then it takes only 1 unit of time to run on either array, regardless of length. Thus, the function comes under constant time, with order O(1).
Linear time – O(n)
An algorithm is said to have linear time complexity when the running time increases linearly with the length of the input. This is the case when the function involves checking every value in the input data, which gives it order O(n).
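The code referenced next is not reproduced in this copy; a minimal Python sketch of such a linear-time check might look like this (names are illustrative):

```python
def contains(arr, target):
    # In the worst case, every value in the input is checked once: O(n)
    for value in arr:
        if value == target:
            return True
    return False
```

Doubling the length of `arr` roughly doubles the work in the worst case.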
The above code shows that the run time increases linearly with the length of the array (n). If the run time is considered as 1 unit of time, then it takes n times 1 unit of time to run over the whole array. Thus, the function runs linearly with the input size, and this comes with order O(n).
Logarithmic time – O(log n)
An algorithm is said to have logarithmic time complexity when it reduces the size of the input data in each step. This means that the number of operations is not proportional to the input size; it grows far more slowly as the input size increases. Such algorithms appear in binary trees and binary search functions, where a given value is searched for in an array by splitting the array in two and continuing the search in only one half. This ensures the operation is not performed on every element of the data.
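The halving process described above can be sketched as a classic binary search (a standard implementation, not from the original post):

```python
def binary_search(sorted_arr, target):
    # Each iteration halves the remaining search range: O(log n)
    low, high = 0, len(sorted_arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_arr[mid] == target:
            return mid          # found: return the index
        elif sorted_arr[mid] < target:
            low = mid + 1       # discard the left half
        else:
            high = mid - 1      # discard the right half
    return -1                   # not present
```

For an array of a million elements, this touches at most about 20 elements, versus up to a million for a linear scan.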
Quadratic time – O(n^2)
An algorithm is said to have quadratic time complexity when the running time increases non-linearly (n^2) with the length of the input. Generally, nested loops come under this order: one loop takes O(n), and if the function involves a loop inside that loop, it goes to O(n) * O(n) = O(n^2).
Similarly, if there are 'm' nested loops defined in the function, then the order is given by O(n^m); these are called polynomial time complexity functions.
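As a concrete sketch of the nested-loop case (an illustrative example, not from the original post), consider checking an array for duplicates by comparing every pair of elements:

```python
def has_duplicate(arr):
    # Two nested loops over n elements: O(n) * O(n) = O(n^2) comparisons
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):   # inner loop runs up to n times per outer pass
            if arr[i] == arr[j]:
                return True
    return False
```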
Thus, the above illustration gives a fair idea of how each function gets its order notation, based on the relation between run time, input data size, and the number of operations performed.
How to calculate time complexity?
We have seen how the order notation is assigned to each function, and the relation between runtime, number of operations, and input size. Now it is time to learn how to evaluate the Time complexity of an algorithm based on the order notation it gets for each operation and input size, and to compute the total run time required to run the algorithm for a given n.
Let us illustrate how to evaluate the time complexity of an algorithm with an example:
The algorithm is defined as:
1. Given two input matrices X and Y, each a square matrix of order n
2. The value of each element in both matrices is chosen randomly using the np.random function
3. A result matrix of order equal to the order of the input matrices is initially assigned 0 values
4. The elements of X are multiplied with the elements of Y, and the resultant values are stored in the result matrix
5. The result matrix is then converted to list type
6. Every element in the result list is added together to give the final answer
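The original listing is not shown in this copy; the steps above might be sketched in Python (using NumPy, as the np.random reference suggests) as follows. This is a reconstruction, so its exact line layout may differ from the listing that the line-by-line cost analysis refers to:

```python
import numpy as np

n = 3                                         # order of the square matrices
X = np.random.randint(0, 10, size=(n, n))     # steps 1-2: random input matrices
Y = np.random.randint(0, 10, size=(n, n))
result = np.zeros((n, n), dtype=int)          # step 3: result matrix of zeros

for i in range(n):                            # step 4: triple nested loop;
    for j in range(n):                        # this is where the O(n^3)
        for k in range(n):                    # term comes from
            result[i][j] += X[i][k] * Y[k][j]

result_list = result.tolist()                 # step 5: convert to list type
total = sum(sum(row) for row in result_list)  # step 6: sum every element
```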
Let us assume the cost C is the per-unit time taken to run a statement, while 'n' represents the number of times the statement is defined to run in the algorithm.
For instance, if the time taken to run a print function is, say, 1 microsecond (C), and the algorithm is defined to run the PRINT function 1000 times (n),
then total run time = C * 1000 = 1 millisecond
Run time for every line is given by:
Line 1 = C1 * 1
Line 2 = C2 * 1
Line 3,4,5 = (C3 * 1) + (C3 * 1) + (C3 * 1)
Line 6,7,8 = (C4*[n+1]) * (C4*[n+1]) * (C4*[n+1])
Line 9 = C4*[n]
Line 10 = C5 * 1
Line 11 = C2 * 1
Line 12 = C4*[n+1]
Line 13 = C4*[n]
Line 14 = C2 * 1
Line 15 = C6 * 1
Total run time = (C1*1) + 3(C2*1) + 3(C3*1) + (C4*[n+1]) * (C4*[n+1]) * (C4*[n+1]) + (C4*[n]) + (C5*1) + (C4*[n+1]) + (C4*[n]) + (C6*1)
Replacing all costs with C to estimate the order of notation,
Total Run Time
= C + 3C + 3C + ([n+1]C * [n+1]C * [n+1]C) + nC + C + [n+1]C + nC + C
= 7C + ((n^3)C + 3(n^2)C + 3nC + C) + 3nC + 3C
= 11C + (n^3)C + 3(n^2)C + 6nC
= C(n^3) + C(n^2) + C(n) + C (keeping one term of each degree and dropping constant coefficients)
= O(n^3) + O(n^2) + O(n) + O(1)
By replacing all cost functions with C, we get the degree of the input size as 3, which gives the order of time complexity of this algorithm. Here, from the final equation, it is clear that the run time varies with a polynomial function of the input size 'n', since it contains cubic, quadratic, and linear terms of the input size.
This is how the order is evaluated for any given algorithm, and how to estimate how its runtime spans out as the input size increases or decreases. Also note that, for simplicity, all cost values like C1, C2, C3, etc. are replaced with C in order to determine the order of notation. In practice, we would need to know the value of each C, which would give the exact run time of the algorithm for a given input value 'n'.
Time Complexity of Sorting algorithms
Understanding the time complexities of sorting algorithms helps us pick the best sorting technique for a situation. Here are some sorting techniques:
What’s the time complexity of insertion sort?
The time complexity of Insertion Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).
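To make the two cases concrete, here is a standard insertion sort in Python (an illustrative implementation, not from the original post):

```python
def insertion_sort(arr):
    # Best case (already sorted): the inner while loop never runs -> O(n).
    # Worst case (reverse sorted): every element is shifted -> O(n^2).
    arr = list(arr)  # work on a copy
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # shift larger elements right
            j -= 1
        arr[j + 1] = key
    return arr
```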
What’s the time complexity of merge sort?
This sorting technique performs the same for all kinds of cases. Merge Sort in the best case is O(n log n). In the worst case, the time complexity is also O(n log n). This is because Merge Sort performs the same number of sorting steps for every kind of input.
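A standard merge sort sketch in Python (illustrative, not from the original post) shows why: it always splits in half and always merges, regardless of the input's initial order:

```python
def merge_sort(arr):
    # Always log n levels of splitting, each merging n elements: O(n log n).
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```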
What’s the time complexity of bubble sort?
The time complexity of Bubble Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).
What is the time complexity of quick sort?
Quick Sort in the best case is O(n log n). In the worst case, the time complexity is O(n^2). Quicksort is considered the fastest of the sorting algorithms due to its O(n log n) performance in the best and average cases.
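A simple (non-in-place) quicksort sketch in Python illustrates where the two cases come from (an illustrative implementation, not from the original post):

```python
def quick_sort(arr):
    # Average/best case: the pivot splits evenly -> O(n log n).
    # Worst case: the pivot splits 1 vs n-1 at every level -> O(n^2).
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```

Production quicksorts mitigate the worst case with randomized or median-of-three pivot selection.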
Time Complexity of Searching algorithms
Let us now dive into the time complexities of some Searching Algorithms and understand which of them is faster.
Time Complexity of Linear Search:
Linear Search follows sequential access. The time complexity of Linear Search in the best case is O(1). In the worst case, the time complexity is O(n).
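A minimal Python sketch (names are illustrative) makes the two cases clear: the best case finds the target at index 0, the worst case scans the entire list:

```python
def linear_search(arr, target):
    # Best case: target is the first element -> O(1).
    # Worst case: target is last or absent -> O(n).
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1
```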
Time Complexity of Binary Search:
Binary Search is the faster of the two searching algorithms. However, for smaller arrays, linear search does a better job. The time complexity of Binary Search in the best case is O(1). In the worst case, the time complexity is O(log n).
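In Python, the standard library's `bisect` module provides the O(log n) halving search, so a sketch need not hand-roll the loop (the wrapper function name is illustrative):

```python
import bisect

def binary_search_bisect(sorted_arr, target):
    # bisect_left locates the insertion point in O(log n) comparisons.
    i = bisect.bisect_left(sorted_arr, target)
    if i < len(sorted_arr) and sorted_arr[i] == target:
        return i
    return -1
```

Note that the input must already be sorted; sorting first would dominate the cost for a one-off search, which is one reason linear search wins on small or unsorted data.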
Space Complexity
You may have heard of the term 'Space Complexity', which comes up whenever time complexity is discussed. What is Space Complexity? Well, it is the working space or storage required by an algorithm, expressed as a function of the amount of input that the algorithm takes. To calculate space complexity, all you have to do is calculate the space taken up by the variables in the algorithm. The less extra space an algorithm needs, the less memory pressure it creates. It is also important to know that time and space complexity are not necessarily related to each other.
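A small Python sketch (illustrative functions) contrasts the two common cases:

```python
# O(1) auxiliary space: a fixed number of variables, regardless of input size.
def sum_constant_space(arr):
    total = 0
    for value in arr:
        total += value
    return total

# O(n) auxiliary space: builds a new list whose size grows with the input.
def squares_linear_space(arr):
    return [value * value for value in arr]
```

Both functions are O(n) in time, yet they differ in space, which illustrates why the two measures are analyzed separately.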
Time Complexity Example
Example: Ride-Sharing App
Consider a ride-sharing app like Uber or Lyft. When a user requests a ride, the app needs to find the nearest available driver to match the request. This process involves searching through the available drivers' locations to identify the one closest to the user's location.
In terms of time complexity, let's explore two different approaches for finding the nearest driver: a linear search approach and a more efficient spatial indexing approach.
- Linear Search Approach: In a naive implementation, the app could iterate through the list of available drivers and calculate the distance between each driver's location and the user's location. It would then select the driver with the shortest distance.
```java
Driver findNearestDriver(List<Driver> drivers, Location userLocation) {
    Driver nearestDriver = null;
    double minDistance = Double.MAX_VALUE;
    for (Driver driver : drivers) {
        double distance = calculateDistance(driver.getLocation(), userLocation);
        if (distance < minDistance) {
            minDistance = distance;
            nearestDriver = driver;
        }
    }
    return nearestDriver;
}
```
The time complexity of this approach is O(n), where n is the number of available drivers. For a large number of drivers, the app's performance might degrade, especially during peak times.
- Spatial Indexing Approach: A more efficient approach involves using spatial indexing data structures like Quad Trees or K-D Trees. These data structures partition the space into smaller regions, allowing for faster searches based on spatial proximity.
```java
Driver findNearestDriverWithSpatialIndex(SpatialIndex index, Location userLocation) {
    Driver nearestDriver = index.findNearestDriver(userLocation);
    return nearestDriver;
}
```
The time complexity of this approach is typically better than O(n), because the search is guided by the spatial structure, which eliminates the need to compare distances against all drivers. It can be closer to O(log n) or even better, depending on the specifics of the spatial index.
In this example, the difference in time complexity between the linear search and the spatial indexing approach showcases how algorithmic choices can significantly impact the real-time performance of a critical operation in a ride-sharing app.
Summary
In this blog, we introduced the basic concepts of Time complexity and why we need to consider it in the algorithms we design. We also saw the different types of time complexity used for different kinds of functions, and finally, we learned how to assign the order of notation for any algorithm based on the cost function and the number of times each statement is defined to run.
Given the conditions of the VUCA world and the era of big data, the flow of data is increasing relentlessly with every second, and designing efficient algorithms to perform specific tasks is the need of the hour. Knowing the time complexity of an algorithm for a given input data size can help us plan our resources and processes, and deliver results efficiently and effectively. Thus, knowing the time complexity of your algorithm can help you do just that, and it also makes you an efficient programmer. Happy Coding!
Feel free to leave your queries in the comments below, and we'll get back to you as soon as possible.