Definition:Algorithm/Analysis
Definition
There are two primary methods of formally analysing algorithms:
Correctness
The primary method of establishing the validity of an algorithm is by means of a Proof of Correctness.
This is a formal verification that the algorithm accomplishes its stated purpose and terminates after $k$ steps for some finite $k$.
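As a minimal illustrative sketch (the function and invariant here are chosen for illustration, not taken from any formal source): for a simple iterative algorithm, partial correctness is typically established via a loop invariant, and termination via an argument that the loop runs only finitely many times.

```python
def find_max(xs):
    """Return the largest element of a non-empty list xs."""
    best = xs[0]
    for i in range(1, len(xs)):
        # Invariant (partial correctness): best == max(xs[:i])
        if xs[i] > best:
            best = xs[i]
        # Invariant restored: best == max(xs[:i + 1])
    # Termination: the loop executes exactly len(xs) - 1 times, which is finite.
    return best

assert find_max([3, 1, 4, 1, 5, 9, 2, 6]) == 9
```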
Complexity
An algorithm may be analysed in the context of its complexity.
This is typically done by measuring run-times using big-$\OO$, big-$\Omega$ or $\Theta$ notation.
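For example, an algorithm which performs $3 n^2 + 5 n$ elementary operations on an input of size $n$ has run-time in $\map \OO {n^2}$, since $3 n^2 + 5 n \le 4 n^2$ for all $n \ge 5$.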
Complexity can be investigated through $3$ main types of analysis, as illustrated in the sketch after this list:
- Average-Case Analysis, using a probability distribution over problem instances to determine the average run-time of the algorithm
- Best-Case Analysis, using the best possible problem instance of the algorithm
- Worst-Case Analysis, using the worst possible problem instance of the algorithm
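As a concrete sketch, consider linear search, a standard illustrative example (the Python function below is an assumption of this sketch, not part of the definition above); the three analyses can give different answers for the same algorithm:

```python
def linear_search(xs, target):
    """Return the index of target in xs, or -1 if it is absent."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

# Best case:    target == xs[0], so 1 comparison is made:        O(1)
# Worst case:   target is absent, so n comparisons are made:     O(n)
# Average case: target equally likely at each position, so
#               about n / 2 comparisons on average:              O(n)
```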
Why not just execute algorithms on machines to find their run-times?
A common misconception is that measuring the run-times of algorithms by executing them on a finite state machine is just as effective as complexity analysis for demonstrating that one algorithm is more efficient than another.
However, implementing an algorithm on a machine is the final step of algorithm design, not the first.
Algorithmic behaviour is far harder to discern from an implementation than from an analysis of its complexity.
Other factors, such as CPU speed and the physical characteristics of the machine, distort the true run-time.
Hence comparing implementation speeds is a poor way to draw general conclusions about algorithm complexity, compared with proper analysis.
An example of this is quicksort: its worst-case running time is quadratic, $\map \OO {n^2}$, but its best-case running time is $\map \OO {n \log n}$, and in practice an implementation usually achieves the latter.
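A minimal Python sketch of this phenomenon, using insertion sort rather than quicksort (since its best-case and worst-case inputs are easy to construct): the measured run-time of a single execution depends heavily on the input and on the machine, and so says little about the algorithm's complexity in general.

```python
import time

def insertion_sort(xs):
    """Theta(n) on sorted input (best case); Theta(n^2) on reversed input (worst case)."""
    xs = list(xs)
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

n = 5000
for name, data in [("best case", list(range(n))), ("worst case", list(range(n, 0, -1)))]:
    start = time.perf_counter()
    insertion_sort(data)
    elapsed = time.perf_counter() - start
    # The numbers printed here vary with CPU speed, caching, system load, etc.:
    # they are evidence about this machine and this input, not about the algorithm.
    print(name, elapsed)
```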
Sources
- 1997: Donald E. Knuth: The Art of Computer Programming: Volume 1: Fundamental Algorithms (3rd ed.) ... (previous) ... (next): $\S 1.1$: Algorithms