
Given $t\in\mathbb{R}$, we would like to find solutions, or rather approximate solutions, to the diophantine equation $qt-p=0$ (so the unknowns $p,q$ are constrained to live in $\mathbb{Z}$; we may also require them to be coprime, and the denominator $q$ shall always be positive). This equation has an exact integral solution if and only if $t$ is rational. Finding an approximate solution amounts to finding a fraction $p/q$ approximating $t$. We want to define a notion of best approximation. Of course there are rationals arbitrarily close to $t$, but their denominators are then unbounded; so we fix an amount we are ready to pay, or rather a bound on the denominators, say $Q$, and we restrict our attention to fractions $p/q$ with $q\leq Q$. We will say that $p/q$ with $q\leq Q$ is a best approximation (of the second kind) of order $Q$ if $\vert qt-p\vert < \vert bt-a\vert$ for every other fraction $a/b$ with $b\leq Q$. The first kind would be to demand that $\vert t-p/q\vert$ be minimal; it happens that both titles are not always awarded to the same candidate.

How well can a number be approximated?
Given a bound $Q$ on the denominators, how close to $t$ can fractions with denominator $q\leq Q$ be? Let $\Vert x\Vert \in [0,1/2]$ denote the distance from $x$ to the nearest integer, so we are looking at $\Vert qt\Vert$. A simple application of the pigeonhole principle ensures that there exists $q\leq Q$ with $\Vert qt \Vert\leq 1/Q$, so in particular $\Vert qt \Vert \leq 1/q$, and there exists a sequence of fractions $p/q$ with $\vert t-p/q\vert\leq 1/q^2$. Note that this argument is not constructive. This exponent 2 is nice, but can we do better? Can we be sure to find an unbounded sequence of $q$ with $\Vert qt\Vert \leq 1/q^2$, for instance?
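The pigeonhole bound is easy to check numerically. Here is a minimal Python sketch (a brute-force search, not the pigeonhole argument itself, and subject to floating-point rounding) that finds such a $q$ for $t=\sqrt{2}$:

```python
import math

def nearest_int_dist(x):
    # ||x||: the distance from x to the nearest integer
    return abs(x - round(x))

def dirichlet_denominator(t, Q):
    # Smallest q <= Q with ||q t|| <= 1/Q; the pigeonhole
    # principle guarantees that one exists.
    for q in range(1, Q + 1):
        if nearest_int_dist(q * t) <= 1.0 / Q:
            return q
    return None

t = math.sqrt(2)
for Q in (10, 100, 1000):
    q = dirichlet_denominator(t, Q)
    p = round(q * t)
    # the resulting fraction p/q satisfies |t - p/q| <= 1/q^2
    print(q, p, abs(t - p / q) <= 1 / q**2)
```

For $\sqrt{2}$ the denominators found (5, 70, 408) are denominators of convergents of its continued fraction, which is no accident, as we will see below.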
The answer is no: Liouville proved that if $t$ is an irrational algebraic number of degree at most $d$ (so that there exists a polynomial $P$ with rational coefficients, of degree $d\geq 2$, such that $P(t)=0$), then there exists a constant $C(t)>0$ such that for any $p,q$, $\vert t-p/q\vert\geq C(t)/q^d$. Thus for quadratic irrationals (irrationals which are algebraic of degree 2), the exponent 2 is really the best possible. But the discussion concerning the optimal constant $C$ is still open.
So if you define $\delta(t) =\sup \{m\mid\exists\text{ a sequence }p/q\text{ with }q\rightarrow \infty : \vert t-p/q\vert\leq 1/q^m\}$, then $\delta(t)\leq d$ for algebraic irrationals of degree $d$.

This theorem seems paradoxical because it states that the irrationals which seem the least irrational, the algebraic ones of low degree, are actually the hardest to approximate. The paradox vanishes (or at least fades) when one remarks that the rationals are actually the hardest to approximate: indeed, if $t=a/b$ is in lowest terms, and if $p/q\neq t$ is a rational in lowest terms with $q\leq Q$, then $\vert p/q-t\vert\geq \frac{1}{bq}\geq\frac{1}{bQ}$, so we only get an exponent 1. Notice the trick that a non-zero integer is at least 1 in absolute value, which, together with the mean value theorem, is also the key to the proof of Liouville’s theorem. Liouville’s theorem is also the easiest gateway to exhibiting transcendental numbers; indeed Liouville found numbers with arbitrarily high $\delta$.
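To illustrate, here is a quick sketch, in exact rational arithmetic, of Liouville’s classical example $\sum_k 10^{-k!}$ (a deep truncation stands in for the full sum): the truncation with denominator $q=10^{n!}$ approximates it to within $q^{-n}$, so $\delta$ is infinite and the number is transcendental.

```python
import math
from fractions import Fraction

def liouville_partial(n):
    # Partial sum of Liouville's constant: sum_{k=1}^{n} 10^(-k!)
    return sum(Fraction(1, 10) ** math.factorial(k) for k in range(1, n + 1))

L = liouville_partial(7)  # stands in for the full (transcendental) sum
for n in range(1, 5):
    p_over_q = liouville_partial(n)
    q = 10 ** math.factorial(n)      # denominator of the truncation
    # the tail is < 2 * 10^(-(n+1)!) < 1/q^n
    print(n, abs(L - p_over_q) < Fraction(1, q) ** n)
```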

In fact, as far as the exponent is concerned, algebraic irrationals of high degree are not any easier to approximate than quadratic irrationals, as has been revealed by the theorem of Thue-Siegel-Roth, which says that for an algebraic irrational $t$ and any $\epsilon>0$, there exists a constant $C(t,\epsilon)>0$ such that for all integers $p,q$, $\vert t-p/q\vert\geq \frac{C(t,\epsilon)}{q^{2+\epsilon}}$.

The best exponent is 2 in general, but what about the constant?
We can always find a sequence $p/q$ with $\vert t-p/q\vert\leq 1/q^2$, and for quadratic irrationals $\vert t-p/q\vert\geq C/q^2$; so what is the optimal constant in between? The truth, as Hurwitz’s theorem says, is that $\vert t-p/q\vert < \frac{1}{\sqrt{5}q^2}$ for infinitely many $p,q$. And for the golden ratio and its conjugate, $\frac{1}{\sqrt{5}}$ is optimal; for all other numbers one can do better, sometimes arbitrarily better (arbitrarily big constants), and merely a better constant for the other quadratic irrationals. We’ll come back to this later.

How to construct good approximations (exponent 2)?
The existence of a sequence $p/q$ with $\vert t-p/q\vert\leq 1/q^2$ is always guaranteed, but how does one construct one? If you naively truncate the decimal expansion of your number, you only get an approximation of quality $1/q$.
So the correct way to proceed is to use the continued fraction expansion. Take $x\in (0,1)$, and look at the Gauss map $T(x)=\{\frac{1}{x}\}$. Put $a_1=\lfloor \frac{1}{x}\rfloor$ and $a_k=\lfloor \frac{1}{T^{k-1}(x)}\rfloor$. Then
$x=\frac{1}{a_1+\frac{1}{a_2+\cdots}}=:[a_1,a_2,\ldots]$. If you write $A_k/B_k=[a_1,\cdots,a_k]$ for the $k$th convergent, then you can prove that $\vert B_kx-A_k\vert<1/B_{k+1}<1/B_k$, so that $\vert x-A_k/B_k\vert<1/B_k^2$, which is the quadratic approximation we desired.
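Here is a short Python sketch of this expansion (floating-point iteration of the Gauss map, so only reliable for the first dozen or so partial quotients), together with the standard recurrence for the convergents:

```python
import math
from fractions import Fraction

def partial_quotients(x, n):
    # First n partial quotients of x in (0,1) via the Gauss map T(x) = {1/x}
    a = []
    for _ in range(n):
        a.append(math.floor(1 / x))
        x = 1 / x - a[-1]          # apply T
    return a

def convergents(a):
    # A_k/B_k via the recurrences A_k = a_k A_{k-1} + A_{k-2}, same for B_k
    A_prev, B_prev, A, B = 1, 0, 0, 1
    out = []
    for ak in a:
        A_prev, B_prev, A, B = A, B, ak * A + A_prev, ak * B + B_prev
        out.append(Fraction(A, B))
    return out

x = math.sqrt(2) - 1               # = [2, 2, 2, ...]
for f in convergents(partial_quotients(x, 8)):
    # each convergent satisfies the quadratic bound 1/B_k^2
    print(f, abs(x - f) < 1 / f.denominator**2)
```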

Best approximation
Not only do continued fractions yield a natural way towards quadratic approximation, they also provide the « best approximations » in the following sense. We fix a number $t$ (let’s say for convenience that it’s irrational); we say a fraction $a/b$ is a best approximation if every other fraction $p/q\neq a/b$ with $0<q\leq b$ satisfies $\vert bt-a\vert< \vert qt-p\vert$.

Then every best approximation is a convergent $A_k/B_k=[a_0;a_1,\cdots,a_k]$, and conversely every convergent is a best approximation.
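This equivalence is easy to test by brute force; the sketch below records every denominator whose error $\vert bt-a\vert$ strictly beats all smaller denominators, and for $t=\sqrt{2}$ it recovers exactly the convergent denominators $1,2,5,12,29,70$:

```python
import math

def best_approximations(t, B_max):
    # Fractions a/b whose error |b t - a| strictly improves on
    # every smaller denominator (best approximations of second kind)
    best, out = float("inf"), []
    for b in range(1, B_max + 1):
        a = round(b * t)
        e = abs(b * t - a)
        if e < best:
            best = e
            out.append((a, b))
    return out

# for sqrt(2), the recorded denominators are exactly the B_k
print(best_approximations(math.sqrt(2), 100))
```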

Which numbers are the toughest to approximate?
We have seen that for every irrational number $t$, there exists a sequence of fractions $p/q$ with $\vert t-p/q\vert\leq \frac{1}{\sqrt{5}q^2}$. The only numbers for which $\sqrt{5}$ is optimal are the golden ratio $g$ and its conjugate, the solutions of the quadratic equation $x^2-x-1=0$ (together with the numbers equivalent to them). We know that $g\in (1,2)$ and $g=1+1/g$, so $a_0=\lfloor g\rfloor=1$, and by injecting the description of $g$ into the right-hand side we get $g=1+\frac{1}{1+1/g}=1+\frac{1}{1+\frac{1}{1+1/g}}=\cdots$, so the convergents are plainly $[1;1,\cdots,1]$. In particular the expansion is periodic.
We could try to characterize the numbers for which the sequence of $a_k$ (called the partial quotients) is eventually periodic. The same question is easy to answer for the other pleasant type of rational approximations, the decimal approximations: for which numbers is the sequence of digits eventually periodic, i.e. $t=a_0.a_1a_2\cdots a_k\overline{a_{k+1}\cdots a_m}$, where the bar means that the block is repeated forever? The answer is very easy: it is the rationals. So for which numbers do we have $t=[a_0;a_1,a_2,\cdots a_k,\overline{a_{k+1},\cdots ,a_m}]$? Lagrange proved that these are exactly the quadratic irrationals.
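One can watch Lagrange’s theorem in action with exact integer arithmetic. The sketch below uses the classical recurrence for the expansion of $\sqrt{d}$ (stated here without proof; $P,Q$ denote the integers of the recurrence), and the periodicity is plain to see:

```python
import math

def sqrt_cf(d, n):
    # Partial quotients [a0; a1, ..., an] of sqrt(d) for non-square d,
    # computed exactly via P_{k+1} = a_k Q_k - P_k, Q_{k+1} = (d - P_{k+1}^2)/Q_k
    a0 = math.isqrt(d)
    a, P, Q = [a0], 0, 1
    for _ in range(n):
        P = a[-1] * Q - P
        Q = (d - P * P) // Q
        a.append((a0 + P) // Q)
    return a

print(sqrt_cf(2, 8))   # [1, 2, 2, 2, 2, 2, 2, 2, 2]
print(sqrt_cf(3, 8))   # [1, 1, 2, 1, 2, 1, 2, 1, 2]
print(sqrt_cf(7, 8))   # [2, 1, 1, 1, 4, 1, 1, 1, 4]
```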

Now from Liouville’s theorem, and Thue-Siegel-Roth, we suspect that the toughest numbers to approximate are the quadratic irrationals, so the numbers which have periodic partial quotients. The answer is slightly more general: it is the numbers which have bounded partial quotients. Indeed, we can formulate the following alternative:
For any irrational number $t$ and integer $n\geq 1$, either there are infinitely many positive $b$ such that $b\Vert bt\Vert<\frac{1}{\sqrt{n^2+4}}$, or $a_k<n$ for all sufficiently large indices $k$. For $n=1$ the second alternative is impossible (the partial quotients are at least 1), so this is a generalization of Hurwitz’s theorem.

Other neighbouring questions
What we have just seen is a characterization of the real numbers $t$ such that $\inf_n n\Vert nt\Vert>0$: they are the real numbers with bounded partial quotients $a_k$. Borel showed that this set has Lebesgue measure 0. But of course such numbers do exist, the golden ratio for instance. Now what if you take a second real number $s$ and ask for $\liminf_n n\Vert nt\Vert\,\Vert ns\Vert$: will you always find 0? Littlewood’s conjecture asserts that this is the case.
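Numerically, the badly approximable character of the golden ratio is striking; in the sketch below, $\min_{n\leq N} n\Vert ng\Vert$ does not tend to 0 as $N$ grows (it stabilizes at $1/g^2\approx 0.382$, attained at $n=1$):

```python
import math

g = (1 + math.sqrt(5)) / 2     # the golden ratio, badly approximable

def min_n_norm(t, N):
    # min over n <= N of n * ||n t||; staying bounded away from 0
    # for all N is exactly the condition inf_n n||n t|| > 0
    return min(n * abs(n * t - round(n * t)) for n in range(1, N + 1))

for N in (10, 1000, 100000):
    print(N, min_n_norm(g, N))   # stays near 1/g^2 ~ 0.3820, never near 0
```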

Happy new year to all readers! This year, I hope, will be full of mathematical reflections here, alas certainly again in English, in order to reach a new audience (I’ve just noticed that I’m referenced on Terry Tao’s weblog, for instance).

I have already discussed what this same Tao calls the dichotomy principle (« a massive lot or very few ») for polynomials. It was essentially the case of polynomials in two variables, as in the traditional Bezout’s theorem: two coprime polynomials $P,Q\in \mathbb{R}[X,Y]$ of degrees $m,n$ have at most $mn$ common zeros, where $\mathbb{R}$ can in fact be replaced by any field, and where the inequality becomes an equality in the algebraic closure if one counts points at infinity and intersection multiplicities. This can be proven either with the resultant, or by a dimension-counting argument.

Lines in curves, planes in surfaces

For instance it implies that if a polynomial of degree $m$ vanishes on $m+1$ distinct points of the line $ax+by+c=0$, then $ax+by+c$ is a factor of this polynomial. However it’s not really relevant to invoke Bezout’s theorem here, because the statement is quite immediate in the case where the line has equation $x=0$: expand according to the powers of $X$, so $P=\sum_{k=0}^{m}c_k(Y)X^k$; then $c_0(Y)$ has degree at most $m$ but vanishes at $m+1$ points, hence $c_0=0$ and $X$ divides $P$. And this line is in fact fairly general (by a change of variables).
What about a polynomial in three variables now, so an algebraic surface in space instead of an algebraic curve in the plane? An algebraic surface can very well contain a full line and still be irreducible, as shown by ruled surfaces like the hyperbolic paraboloid $z=x^2-y^2$. Lines are no longer the zero set of a single irreducible polynomial; indeed a line in space is defined by two equations of degree 1, $f=0$ and $g=0$. Over $\mathbb{R}$ this system of equations is equivalent to $f^2+g^2=0$, which involves a polynomial of degree 2, irreducible over $\mathbb{R}$ but not over $\mathbb{C}$. But an algebraic surface cannot contain a plane without the degree-1 polynomial defining the plane dividing its equation.
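For concreteness, here is a tiny numerical check (a sketch, using the ruling that comes from the factorization $z=(x-y)(x+y)$) that the hyperbolic paraboloid contains a full line for each value of a parameter $c$:

```python
# The hyperbolic paraboloid z = x^2 - y^2 factors as z = (x - y)(x + y),
# so for each fixed c the line s -> (s, s - c, c*(2s - c)) lies on it.
def on_surface(x, y, z, eps=1e-9):
    return abs(z - (x * x - y * y)) < eps

for c in (0.0, 1.0, -2.5):
    assert all(on_surface(s, s - c, c * (2 * s - c)) for s in range(-10, 11))
print("each line lies entirely on the irreducible surface z = x^2 - y^2")
```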

Intersection points, intersection lines
In dimension $n$, a reasonable generalization of Bezout’s theorem must involve enough hypersurfaces (of codimension 1) for the intersection to be expected to be points, so of dimension 0; hence you need $n$ hypersurfaces. If their degrees are $d_1,\cdots,d_n$, it is reasonable to guess $d_1\cdots d_n$ intersection points. But the example of the hyperbolic paraboloid (degree 2) intersected with a line contained in it, defined by two equations of degree 1, yields infinitely many points of intersection instead of the expected 2, although all the equations are coprime. The correct statement is that if the number of intersection points is finite, it is at most the product of the degrees.

However, a variant of Bezout’s theorem involving two surfaces in $\mathbb{R}^3$ is still possible. Of course you won’t bound the number of points of intersection, but you can bound the number of lines of intersection: the intersection of two coprime surfaces of degrees $n$ and $m$ contains at most $nm$ lines. A proof of this can be found in lecture 13 of Larry Guth’s lecture notes on the polynomial method.

Monge-Cayley-Salmon: flecnodal points, ruled surfaces
Salmon’s theorem, hinted at by Monge earlier and also stated by Cayley, says that if an algebraic surface $S=\{(x,y,z)\mid P(x,y,z)=0\}$ is such that at every point $x$ there exists a line $l_x\ni x$ tangent up to order 3 to $S$ (a flecnodal point), then at every point $x$ the line $l_x$ is in fact fully contained in $S$, and so the surface is ruled. As a line tangent up to order 3 to a surface of degree 2 has to be contained in it, we suppose that $P$ is squarefree and of degree $d$ at least 3. Then the flecnodal points of $P$ are cut out by a polynomial $Fl(P)$ of degree $11d-24$. Together with Bezout’s theorem, this yields the following dichotomy for a surface of degree $d$:
1) Either $S$ is ruled, so there is a line in $S$ through each point,
2) Or there are only $O(d^2)$ lines on $S$.

This is one of the basic ingredients of the Guth-Katz solution of the Erdős distinct distances problem.