There are two primary methods for analyzing algorithms: empirically, by timing an implementation on a machine, and formally, by analyzing their complexity.
Complexity can be studied through three main types of analysis (a short sketch contrasting these cases follows the list):
- Average-Case Analysis, using a probability distribution over problem instances to find the expected run-time of the algorithm
- Best-Case Analysis, using the best possible problem instance of the algorithm
- Worst-Case Analysis, using the worst possible problem instance of the algorithm
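As a concrete sketch of how these cases differ, consider linear search. The following Python snippet is illustrative only (the function name, inputs, and comment wording are assumptions, not taken from the source); it shows how each case corresponds to a different kind of problem instance.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    The number of comparisons depends on the problem instance:
      - best case:    target is the first element  -> 1 comparison
      - worst case:   target is absent             -> len(items) comparisons
      - average case: target uniformly placed      -> about len(items) / 2 comparisons
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1


data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))   # best case: found immediately
print(linear_search(data, 4))   # worst case: every element is examined
```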
Why not just execute algorithms on machines to find their runtimes?
A common misconception is that measuring the run-time of an implementation executing on a machine is just as effective as analyzing its complexity when demonstrating that one algorithm is more efficient than another. Implementing the algorithm on a machine is the final step of algorithm design, not the first. An algorithm's behaviour is much harder to observe from an implementation than from its complexity, and factors such as CPU speed and other physical characteristics of the machine distort the measured run-time. Comparing implementation speeds is therefore a poor way to generalize about the efficiency of algorithms compared with proper analysis.
An example of this is Quicksort: in the worst case it runs in $O(n^2)$ time, but depending on the implementation and the input it typically runs in $O(n \log n)$ time, which is also its best case.
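A minimal sketch of this contrast, assuming a deliberately naive Quicksort that always pivots on the first element (the pivot choice, input sizes, and comparison counting are illustrative assumptions): an already-sorted input exhibits the quadratic worst case, while a shuffled input stays close to the $O(n \log n)$ typical behaviour.

```python
import random


def quicksort(items, counter):
    """Naive Quicksort pivoting on the first element.

    counter[0] accumulates how many elements are compared against a pivot,
    a rough proxy for the total running time.
    """
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)


n = 300
sorted_input = list(range(n))               # worst case for a first-element pivot
random_input = random.sample(range(n), n)   # typical case

for name, data in [("sorted", sorted_input), ("random", random_input)]:
    counter = [0]
    quicksort(data, counter)
    print(f"{name:>6} input: {counter[0]} comparisons")
```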