Normal Numbers: Arithmetic, Computational and Probabilistic Aspects
In this workshop recent developments in the area of normal numbers are discussed. The concept of normal numbers goes back to E. Borel (1909), who called a real number x "normal" in base q if, for every k = 1, 2, 3, ..., each digit in the expansion of x to the base q^k occurs asymptotically with frequency 1/q^k; equivalently, every block of k digits in the base-q expansion occurs asymptotically with frequency 1/q^k. This is equivalent to the sequence (q^k x), k = 1, 2, ..., being uniformly distributed modulo 1, and thus the discrepancy of this sequence can be used as a quantitative measure of normality. A real number x is called "absolutely normal" if it is normal with respect to every base q = 2, 3, 4, 5, ... . E. Borel showed that, with respect to Lebesgue measure, almost all real numbers are absolutely normal, whereas it is much more difficult to exhibit constructions of such numbers. A first approach in this direction is due to W. Sierpinski (1917), although his construction does not provide a "formal" algorithm for producing absolutely normal numbers. Later A. Turing (1935) realized that no "effective" algorithm for solving this problem was known, and sketched a computable construction himself.

Major progress was made by W. Schmidt in the 1960s, but it took until the 21st century to obtain a polynomial-time algorithm for constructing absolutely normal numbers. The aim of this workshop is to discuss the enormous progress made in this field during the last ten years. This includes computational, algorithmic and quantitative aspects as well as connections with Diophantine approximation and analytic number theory.
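As a small numerical illustration (not part of the workshop material), the defining digit-frequency property and the discrepancy of the shifted sequence can be checked on an initial segment of Champernowne's constant 0.123456789101112..., which is known to be normal in base 10. The segment length N and the 15-digit truncation used to approximate fractional parts are arbitrary choices for this sketch:

```python
from collections import Counter

N = 5000  # number of digits / shifts examined (arbitrary choice)

# First digits of Champernowne's constant: concatenate 1, 2, 3, ...
s = "".join(str(n) for n in range(1, 2000))[:N + 20]

# Digit frequencies in the first N digits: for a number normal in base 10
# each frequency tends to 1/10 as N grows (convergence is slow here).
counts = Counter(s[:N])
freq = {d: counts[d] / N for d in "0123456789"}

# The fractional part of 10^k * x is determined by the digits from
# position k onward; truncating to 15 digits gives a good float approximation.
points = sorted(int(s[k:k + 15]) / 10**15 for k in range(N))

# Star discrepancy of the sorted points x_(1) <= ... <= x_(N):
# D_N* = max_i max(i/N - x_(i), x_(i) - (i-1)/N).
disc = max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(points))
```

For a normal number the frequencies approach 1/10 and the discrepancy tends to 0 as N grows; on a short segment like this the digit 1 is still visibly overrepresented, which the discrepancy value reflects.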