User:Barto/Asymptotic Notation

Every reference I know is a bit fuzzy about asymptotic notations. (And it's a good thing they are - look how tedious it is to define them properly and in full generality!) However, if there is one place where they should be defined properly, it surely is ProofWiki.

Difficulties:


 * Asymptotic notations are used between different spaces: $\R\to\R$, $\R\to\C$, $\C\to\C$, Banach $\to$ Banach or, more generally, normed $\to$ normed. Evidently, all of these are special cases of the last. Does this mean we only need one definition? Surely not: ProofWiki has to be understandable, and we cannot expect someone doing their first asymptotic estimates in e.g. Definition:Analytic Number Theory to read the definition of a normed vector space before they can proceed.


 * Asymptotic notations are used in different contexts: there are point estimates (e.g. occurring in series expansions) as well as estimates at infinity. Here, $\infty$ (seemingly) has a different meaning depending on the space: in the real case, it is usually interpreted as $+\infty$, whereas in other spaces it is commonly interpreted as in Alexandroff Extension, where neighborhoods of $\infty$ are the complements of closed compact sets.
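Both kinds of estimate can be phrased uniformly via neighborhoods; a possible sketch (in plain LaTeX, with $a$ either a point of the space or the point $\infty$ of the Alexandroff Extension):

```latex
% Sketch of a neighborhood-based definition covering both cases.
% The implied constant C and the implied neighborhood U are the
% existentially quantified data; \lVert . \rVert is the target norm.
f = O(g) \text{ as } x \to a
  \quad \iff \quad
  \exists C > 0 \ \exists U \text{ a neighborhood of } a \ \forall x \in U :
  \lVert f(x) \rVert \le C \, \lVert g(x) \rVert
```

With $a = \infty$ in the Alexandroff sense, "neighborhood of $a$" specializes to the complement of a closed compact set, recovering both the point case and the case at infinity from one statement.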


 * There are definitions using $\lim$ or $\limsup$, which apply only when we are in a field and the functions involved are nonzero. They can be mentioned on the definition page, but I'd rather treat them as corollaries than as definitions.
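For reference, the limit characterizations in question might read as follows (a sketch, valid only when the quotients make sense, i.e. $g$ is nonzero near $a$):

```latex
% limsup characterization of O (g nonzero near a):
f = O(g) \text{ as } x \to a
  \quad \iff \quad
  \limsup_{x \to a} \frac{\lvert f(x) \rvert}{\lvert g(x) \rvert} < \infty
% limit characterization of o:
f = o(g) \text{ as } x \to a
  \quad \iff \quad
  \lim_{x \to a} \frac{f(x)}{g(x)} = 0
```

Treating these as corollaries of a neighborhood-based definition keeps the general (normed-space, possibly-zero) case primary.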


 * Domains of validity. Sometimes an estimate is asserted to hold only on a certain domain. (In complex analysis: typically half-planes, infinite rectangles, angular regions.)


 * Parameter-dependent estimates. Sometimes the functions or domains of validity involved depend on a parameter, in which case the implied constants and implied neighborhoods may depend on that parameter. You may say we should not use them because they are difficult to define properly; I say we should provide the formal framework that allows them to be used.
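To illustrate what such a framework has to quantify over, here is a sketch of the difference between a parameter-dependent and a uniform estimate at $+\infty$ (the subscript notation $O_\epsilon$ is a placeholder for whatever convention is eventually adopted):

```latex
% Parameter-dependent: the constant and threshold are chosen after epsilon.
f_\epsilon = O_\epsilon(g_\epsilon) \text{ as } x \to \infty
  \quad \iff \quad
  \forall \epsilon \ \exists C(\epsilon) > 0 \ \exists X(\epsilon)
  \ \forall x \ge X(\epsilon) :
  \lvert f_\epsilon(x) \rvert \le C(\epsilon) \, \lvert g_\epsilon(x) \rvert
% Uniform: one constant and one threshold work for every epsilon.
f_\epsilon = O(g_\epsilon) \text{ uniformly in } \epsilon
  \quad \iff \quad
  \exists C > 0 \ \exists X \ \forall \epsilon \ \forall x \ge X :
  \lvert f_\epsilon(x) \rvert \le C \, \lvert g_\epsilon(x) \rvert
```

The only difference is the order of the quantifiers, which is exactly what a formal framework must make explicit.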


 * When it comes to proving properties of $O$ and $o$, such as transitivity of $O$-estimates (or less innocent basic properties such as substitution, or passing from non-uniform to uniform $O$-estimates), all these different factors (source space, target space, estimate at a point or at infinity, parameter-dependence and domains of validity) make it cumbersome to prove them and to organize the proof pages. This is the main motivation for seeking a general framework: to avoid proving essentially the same thing twice.
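As a sanity check that a neighborhood-based formulation carries such proofs through once and for all, here is how transitivity of $O$ goes in the general setting (a sketch):

```latex
% Transitivity: f = O(g) and g = O(h) as x -> a imply f = O(h) as x -> a.
% Given |f| <= C_1 |g| on a neighborhood U_1 of a,
% and   |g| <= C_2 |h| on a neighborhood U_2 of a,
% take C = C_1 C_2 and U = U_1 \cap U_2 (again a neighborhood of a):
\forall x \in U_1 \cap U_2 :
  \lVert f(x) \rVert \le C_1 \lVert g(x) \rVert
    \le C_1 C_2 \lVert h(x) \rVert
```

Nothing in this argument depends on whether $a$ is a point or $\infty$, or on which normed spaces are involved, which is the payoff of the general framework.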


 * $O$-estimates frequently occur on both sides of an equation. Example: $x+O(\log x)=\log x + O(x)$. Some resolve this by defining $O(f)$ as a set of functions and reading equations with $O$-estimates as inclusions of sets. Another framework treats such equations as quantified sentences. Example: for every $f=O(\log x)$, there exists $g=O(x)$ such that $x+f=\log x + g$. In either case, the $=$-sign becomes non-symmetric. That is, equations have to be read left-to-right.
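Under the set-of-functions reading, the example above unfolds as an inclusion (a sketch; here $O(\log x)$ abbreviates the set of functions $f$ with $f = O(\log x)$ as $x \to \infty$):

```latex
% x + O(log x) = log x + O(x), read left-to-right as set inclusion:
\{\, x + f(x) : f \in O(\log x) \,\}
  \subseteq
  \{\, \log x + g(x) : g \in O(x) \,\}
% Every function of the left-hand form is of the right-hand form;
% the reverse inclusion fails, which is why = is read asymmetrically.
```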


 * $O$-estimates (or other ones) may appear inside formulas in a more deeply nested way. Example: $f(z)=O(e^{O_\epsilon(z^{1+\epsilon})})$, where $f$ is an entire function of order $1$.
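Unfolded from the inside out, such a nested estimate expands to something like the following (a sketch; the constant names $A$, $R$ are illustrative, and the outer implied constant has been absorbed into $A$):

```latex
% f(z) = O(e^{O_epsilon(z^{1+epsilon})}) unfolds as:
\forall \epsilon > 0 \ \exists A(\epsilon), R(\epsilon) > 0
  \ \forall z \text{ with } \lvert z \rvert \ge R(\epsilon) :
  \lvert f(z) \rvert \le \exp\left( A(\epsilon) \, \lvert z \rvert^{1+\epsilon} \right)
```

A general framework should make this inside-out unfolding a mechanical rule rather than a per-formula convention.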

Contributing:

Everyone is welcome to change the draft definitions I gave here. Also, feel free to discuss them on the talk page of this project.

Links to definitions at ProofWiki:

 * Big-O
 * Little-O
 * Asymptotic Equivalence: $\sim$
 * Order of Magnitude: $\asymp$, defined by $f=O(g)$ and $g=O(f)$