Complexity is based on how an algorithm scales as n grows very large.
Constants and lower-order terms are dropped.
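As a sketch of why constants and lower-order terms drop out, consider a hypothetical cost function of 3n + 10 operations (the function and numbers are illustrative, not from any real algorithm):

```python
def count_ops(n):
    # Hypothetical cost model: 3 operations per item, plus 10 of setup.
    return 3 * n + 10

# As n grows, the ratio to n approaches the constant 3, and the +10
# becomes negligible -- so 3n + 10 is simply O(n).
for n in (10, 1_000, 1_000_000):
    print(n, count_ops(n) / n)
```

The ratio settles near 3, which is exactly the constant factor that big-O notation discards.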
For example, a linear-time algorithm is O(N): doubling the input doubles the work.
By contrast, a logarithmic algorithm is O(log(N)): doubling the input adds only one extra step. That scales far better than the linear case.
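A minimal sketch of this logarithmic behavior is binary search over sorted keys, here via Python's `bisect` module (the key list is illustrative):

```python
import bisect

def index_lookup(sorted_keys, key):
    """O(log N) lookup in a sorted list -- each comparison halves
    the remaining search range, much like an index seek."""
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return -1

keys = list(range(0, 1_000_000, 2))   # 500,000 sorted even numbers
index_lookup(keys, 123456)            # found in ~20 comparisons, not 500,000
```

Doubling the size of `keys` costs one extra comparison, which is the O(log(N)) pattern in practice.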
In a relational database, you get this logarithmic access-time pattern when you look a value up through an index (typically a B-tree).
In a read-trimming task (a trim function applied to each record), we can't use an index: we have to touch every record no matter what.
The task is fundamentally O(N).
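A hypothetical trim over a list of records makes the O(N) nature concrete; nothing about the operation lets us skip a record, so the work grows linearly with the input (the whitespace-strip here stands in for whatever the real trim function does):

```python
def trim_all(records):
    # Every record must be visited exactly once -- no index can help,
    # because the output depends on the contents of each record.
    return [r.strip() for r in records]

trim_all(["  read1 ", "read2\n", " read3"])
```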
We can do better by splitting the work across parallel workers.
The time complexity is on the order of O(N/k),
where N is the number of items to operate on and k is the number of workers. Because the items are processed in parallel, the wall-clock time to complete the task is roughly the sequential time divided by the number of workers. Note that since k is a constant, O(N/k) is still O(N) asymptotically; parallelism buys a constant-factor speedup, not a better growth rate.
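One way to sketch the N/k split (the chunking scheme and names here are illustrative; threads are used for simplicity, though truly CPU-bound trimming would want a process pool to sidestep the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def trim_chunk(chunk):
    # Each worker handles roughly N/k records.
    return [r.strip() for r in chunk]

def parallel_trim(records, k=4):
    # Split the N records into k contiguous chunks.
    size = (len(records) + k - 1) // k
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    # Each of the k workers processes one chunk concurrently.
    with ThreadPoolExecutor(max_workers=k) as pool:
        results = pool.map(trim_chunk, chunks)
    # Reassemble the chunks in their original order.
    return [r for part in results for r in part]
```

The total work is still N trims (O(N)), but with k workers running at once, the elapsed time is roughly N/k.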