Harmonic Counting: Zeta, Primes and the Euler Constants


Dr. Jonathan Kenigson, FRSA

The Riemann Zeta Function evaluated at a point s is merely the sum of the integers 1, 2, 3, and so on, each raised to the –s power. For instance, the Zeta function of 2 is the sum 1 + 1/4 + 1/9 + 1/16 + 1/25 + … continued to infinity. This sum is said to “converge” or “tend toward” a limiting value. In this case the result is well known: the infinite sum tends toward (1/6)K, where K is the area of a circle with a radius equal to the square root of pi – that is, K is pi squared, so the sum is pi squared divided by six. Recall that pi is the ratio of the circumference (distance around) of a given circle to the diameter (distance across) of the same circle. It turns out that this ratio is the same for every circle one can conceivably make.

Now consider all pairs of natural numbers (n, m). The numbers n and m are called “relatively prime” if their only positive common factor is 1. For instance, the numbers 10 and 3 are relatively prime because no natural number other than 1 divides both 10 and 3 without remainder. As a counterexample, the numbers 2 and 6 are not relatively prime because they can both be divided by 2 without remainder.
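Both notions above are easy to probe numerically. A minimal sketch in plain Python (the function name is my own choosing, not from the article): watch the partial sums of the Zeta-of-2 series creep toward (1/6)K, roughly 1.6449, and use the greatest common divisor to test the two pairs discussed above for relative primality.

```python
import math
from math import gcd

# Partial sums of 1 + 1/4 + 1/9 + ... creep toward (1/6) * pi^2.
def zeta2_partial(n: int) -> float:
    """Sum of 1/k^2 for k = 1..n, a partial sum of Zeta(2)."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

print(zeta2_partial(100_000))  # about 1.64492, close to the limit
print(math.pi**2 / 6)          # the limit (1/6)K, about 1.64493

# Relative primality: the only positive common factor is 1,
# i.e. the greatest common divisor equals 1.
print(gcd(10, 3))  # 1 -> 10 and 3 are relatively prime
print(gcd(2, 6))   # 2 -> 2 and 6 are not
```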

One may naturally ask certain questions about pairs of numbers and their properties. Particularly natural questions are often phrased in terms of probability, or odds. One may ask, for instance, how likely it is that two numbers n and m are relatively prime in the sense of the definition above. One finds with some diligence that if n and m are drawn uniformly at random from the natural numbers up to some large bound, then, as that bound grows without limit, the probability that n and m are relatively prime tends precisely to 1/((1/6)K), that is, to six divided by pi squared, roughly 0.6079. This is a peculiar result in the sense that it unites geometry with number theory in a particularly elegant manner: the circumference of a circle ostensibly has little to do with the probability that two numbers share no common factor. In mathematical research, one asks causal questions in much the same sense as a physical scientist would: Is there some paradigm larger than both domains of geometry and number theory that unites the two? Is there some substrate on which both the theory of numbers and the theory of Euclidean geometry may be married?
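The probability claim is easy to check empirically. A minimal sketch, assuming nothing beyond the Python standard library (again with a function name of my own): count the coprime pairs (n, m) with both entries at most N and compare the observed fraction to six over pi squared.

```python
import math
from math import gcd

def coprime_fraction(N: int) -> float:
    """Fraction of pairs (n, m), 1 <= n, m <= N, with gcd(n, m) = 1."""
    hits = sum(1 for n in range(1, N + 1)
                 for m in range(1, N + 1) if gcd(n, m) == 1)
    return hits / N**2

print(coprime_fraction(500))  # observed fraction, about 0.608
print(6 / math.pi**2)         # predicted density, about 0.6079
```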

The answer – if one alone can be said to exist – is more subtle than the question posed. At present, mathematicians’ resolution of it is only partial and tentative; natural questions often lead to convoluted and protracted answers that merely pose more questions in turn. In the case of the probability of two integers being relatively prime, it is first necessary to compute the probability in terms of prime factors: for each prime p, the chance that p divides both n and m is 1/p times 1/p, so the chance that the pair does not share the factor p is (1 - 1/p^2), and multiplying these chances over all primes yields an infinite product. This product form is, however, grossly unsuitable for more detailed analysis as a sum, so one seeks a tool that converts multiplication into addition. A canonical tool for this task is the logarithm – a function that plucks the exponent off a quantity. Consider, for instance, the number 4 raised to the twelfth power. The logarithm base 4 of this quantity is 12 – merely the power to which 4 is raised. Similarly, the logarithm base 5 of 5 raised to the power 12 is also 12. Intuitively, logarithms do not “care” about anything but the exponent of the argument they accept, and they spit this exponent back out, provided we choose as the base of the logarithm the same quantity that is being exponentiated.
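To see the product and the logarithm working together, here is a small sketch of my own construction (the sieve helper is not from the article): accumulate the product of (1 - 1/p^2) over the primes below a bound as a sum of logarithms, exponentiate, and compare with six over pi squared.

```python
import math

def primes_up_to(limit: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

# The logarithm turns the infinite product into a sum of logs.
log_sum = sum(math.log(1 - 1/p**2) for p in primes_up_to(10_000))
print(math.exp(log_sum))  # partial Euler product, about 0.6079
print(6 / math.pi**2)     # the limit, 1/Zeta(2)
```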

Applying the logarithm to an infinite product yields an infinite sum. Because we are aiming to study sums, we may convert our sum into the sort of sum employed in the Zeta function: the sum of the inverse powers of integers. When this is done, a series of manipulations may be performed in infinite succession to derive identities for the sum. The logarithm is the engine that permits us to travel from the world of prime numbers and their products to the world of sums, and the world of sums is an altogether different world. The conversion of a function to a sum is known as a series representation of that function, and what we obtain in this case is a series representation for the Zeta function. The most basic representation, the Zeta function evaluated at 2, converges as mentioned in the first paragraph. How about other integers? Can the Zeta function be similarly evaluated in closed form at 3, or 5, or 17, or 504? The answer splits in two. For even arguments it is yes: Euler showed that Zeta of 4 is pi to the fourth power divided by 90, Zeta of 6 is pi to the sixth power divided by 945, and in general the Zeta of any even integer is a rational multiple of the corresponding power of pi (so Zeta of 504 has a closed form, albeit an unwieldy one). For odd arguments such as 3, 5, and 17, no closed form is known, and perhaps none exists (one may hope to prove that no “reasonable” form exists); the strongest results to date are of a different character, such as Apéry’s 1979 proof that Zeta of 3 is irrational.
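The even-odd split is easy to witness numerically. A brief sketch under the same conventions as before: the partial sums of Zeta of 4 settle against the closed form pi to the fourth over 90, while Zeta of 3 can only be computed, not matched against any known closed form.

```python
import math

def zeta_partial(s: int, n: int) -> float:
    """Sum of 1/k^s for k = 1..n, a partial sum of Zeta(s)."""
    return sum(1.0 / k**s for k in range(1, n + 1))

print(zeta_partial(4, 100_000))  # about 1.0823232, matching ...
print(math.pi**4 / 90)           # ... the closed form pi^4 / 90
print(zeta_partial(3, 100_000))  # about 1.2020569 (Apery's constant):
                                 # no closed form is known
```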

The Euler constant is merely the limiting difference between the Harmonic partial sum evaluated at n and the natural logarithm of n as n grows without bound. By the “Harmonic sum evaluated at n” we mean the sum of the first n inverses of the positive integers: 1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n. The Euler constant, usually written gamma, is thus the limit of (1 + 1/2 + … + 1/n) – ln(n), roughly 0.5772. The function ln(n) is merely the logarithm evaluated at a particular base known as e, the exponential base. The number e is a non-terminating, non-repeating decimal that arises in areas as broad as radioactive decay, the growth of stock portfolios, the resolution of competitive games, and the pure and applied subjects of Cryptography, Physics, Mechanics, and Number Theory. It may seem natural to suppose that the Zeta function is easy to state in terms of Harmonic sums, because: (1) the Zeta function is represented by logarithms and (2) the Euler constant is defined in terms of logarithms and Harmonic sums. This conclusion would be grossly incorrect. The details of the argument regarding the reasons for this complexity would lead us far afield into Complex Analysis and its corollaries in the Theory of Numbers – a trip that, even with adequate leisure to consider the philosophy of the approach, could span many years or even lifetimes.
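The limit defining the Euler constant converges slowly but visibly. A minimal sketch (plain Python again, with a helper name of my own): print the difference between the Harmonic sum and ln(n) for growing n and watch it settle toward gamma.

```python
import math

def harmonic(n: int) -> float:
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, harmonic(n) - math.log(n))  # tends toward gamma = 0.57721...
```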

It is sufficient to say that Harmonic sums diverge: as n grows, the partial sums eventually exceed any fixed finite number (a short sketch at the end of this article makes the slowness of this divergence concrete). Moreover, they cannot be “made” to converge without a Scheme of Renormalization, which is a machine that converts infinite quantities predictably into finite ones without engendering contradictions in the underlying arithmetic. The study of such schemes and their logarithmic and exponential forms is known as Renormalization Analysis and is a staple of modern Field Theory, in which infinite quantities appear and disappear, rendering mathematical analysis convoluted and unintuitive. The representation of the Zeta function in terms of Harmonic sums is difficult in precisely this sense: delicate balancing acts are required to keep the partial sums finite. Mathematicians are only beginning to understand these balancing acts, building on substantial breakthroughs made during the era of the Great Depression. The tools of analysis are Combinatorial, meaning that they derive from counting collections in novel and surprising ways, and researchers are actively working to determine these counting strategies. Because Renormalization is so prominent in physics, there may be many concrete applications with social and commercial utility; realms of interest include Quantum Computing, Nonstandard Logic, Network Analysis, Probability, and even the Equilibrium Theory of Markets. Pure mathematics is not often pure forever. Utility and commerce penetrate even the most recondite domains of study and eventually render the impossible mundane. To lose wonder at this transformation and its sociological, epistemological, and economic implications is to ignore nearly every fundamental physical and mathematical advance made since the era of Newton.
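The promised sketch of the divergence (my own illustration): find the first n at which the Harmonic sum exceeds a given bound. Because the sum grows like ln(n), roughly e raised to the bound terms are required, so the divergence, while certain, is glacial.

```python
def first_n_exceeding(bound: float) -> int:
    """Smallest n with 1 + 1/2 + ... + 1/n > bound."""
    total, n = 0.0, 0
    while total <= bound:
        n += 1
        total += 1.0 / n
    return n

print(first_n_exceeding(10))  # over twelve thousand terms just to pass 10
```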