About
Complexity is based on how an algorithm scales for very large n.
Constants and lower-order terms are dropped.
For example:
- <math>O(1)</math> means the algorithm executes in constant time,
- an algorithm that takes <math>100*n^2</math> time has <math>O(n^2)</math> complexity,
- an algorithm that takes <math>4n + n^2</math> time also has <math>O(n^2)</math> complexity.
Constants still matter for small n, though. For example:
- Algorithm A has running time <math>n^3</math> and algorithm B has running time <math>10*n^2</math>. For <math>n < 10</math>, A is faster than B; from <math>n >= 10</math> on, B is at least as fast and pulls ahead.
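To make this concrete, here is a minimal sketch (plain Python, not part of the original example) that prints the example running times for a few values of n. It shows that the constants and lower-order terms stop mattering once n grows, and that A only beats B below the crossover point n = 10.
<code python>
# Sketch: compare the example running times as n grows.
for n in (2, 5, 10, 100, 1000):
    a = n ** 3           # algorithm A: n^3
    b = 10 * n ** 2      # algorithm B: 10*n^2
    c = 4 * n + n ** 2   # 4n + n^2, still O(n^2)
    print(f"n={n:>5}  n^3={a:>13}  10*n^2={b:>10}  4n+n^2={c:>10}  A faster than B: {a < b}")
</code>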
Order N
- 40 records, 40 comparisons
- N records, N comparisons
- The algorithmic complexity is order N: O(N)
This is a linear time algorithm.
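As a minimal sketch (a hypothetical linear_search helper, not from the original), counting comparisons in a sequential scan shows the O(N) behaviour:
<code python>
# Sketch of a linear scan: in the worst case every record is compared.
def linear_search(records, target):
    comparisons = 0
    for record in records:
        comparisons += 1
        if record == target:
            return comparisons
    return comparisons  # target not found: N comparisons

records = list(range(40))
print(linear_search(records, 39))  # 40 comparisons for 40 records -> O(N)
</code>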
Binary tree, binary search
- 40 records, about 6 comparisons (log2(40) ≈ 5.3)
- N records, log(N) comparisons
The algorithmic complexity is order: O(log(N)). This scales far better than the linear approach.
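A minimal sketch (again a hypothetical helper, not from the original) of binary search over a sorted list, counting comparisons:
<code python>
# Sketch of binary search over a sorted list, counting comparisons.
def binary_search(sorted_records, target):
    low, high = 0, len(sorted_records) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if sorted_records[mid] == target:
            return comparisons
        elif sorted_records[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comparisons  # not found after ~log2(N) comparisons

records = list(range(40))
print(binary_search(records, 39))  # about log2(40) ~ 6 comparisons -> O(log N)
</code>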
In a relational database, you get this logarithmic access-time pattern through index access (typically a B-tree index).
Parallel Workers
When trimming a field on read (a trim function), we can't use an index. We have to touch every record no matter what.
The task is fundamentally O(N).
We can do better by running the task in parallel across several workers:
- For 40 records
- with 7 workers
- we need 6 cycles (40/7, rounded up) to trim all the records
The algorithmic complexity is order: O(N/k), where N is the number of items to operate on and k is the number of workers. Because the execution is parallel, the time it takes to complete the task (time complexity) is roughly the number of items divided by the number of workers.
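As a minimal sketch, assuming Python's standard concurrent.futures module (which the original does not mention), the trim can be spread over k workers:
<code python>
# Sketch: trim N records with k parallel workers.
# A ThreadPoolExecutor is used purely for illustration; CPU-bound work
# in Python would typically use a process pool instead.
from concurrent.futures import ThreadPoolExecutor
import math

records = ["  value %d  " % i for i in range(40)]  # N = 40 records
k = 7                                              # k = 7 workers

with ThreadPoolExecutor(max_workers=k) as pool:
    trimmed = list(pool.map(str.strip, records))

# Every record is still touched (the work stays O(N)), but the elapsed
# time is roughly N / k: with 40 records and 7 workers, ceil(40/7) = 6 cycles.
print(math.ceil(len(records) / k), trimmed[:3])
</code>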