Wednesday, July 05, 2017

Using a Decision Method to Prove a New Theorem in Number Theory


The mathematician David Hilbert had a dream, a dream to mechanize mathematics. He hoped that every true mathematical result had a formal proof, and furthermore, that this formal proof would be discoverable by algorithmic methods. All you would have to do is state your theorem in some formal mathematical language, perform the algorithm, and voila! You would have a proof or disproof.

But Hilbert's dream was killed by Kurt Gödel and Alan Turing in the 1930s. Gödel showed that, given any sufficiently powerful axiom system (roughly speaking, it's enough to be able to define addition and multiplication of integers), there are true results lacking any formal proof. And Turing showed that, even if we restrict ourselves to provable results, there is no algorithm that, given a mathematical statement, always halts and correctly reports "provable" or "unprovable".

Nevertheless, there are some logical theories in which (a) every well-formed statement is provable or disprovable and (b) there is, in fact, an algorithm to find a proof or disproof. Such a theory is sometimes called decidable and the corresponding algorithm is called a prover. The first-order theory of the natural numbers with addition, often called Presburger arithmetic, is one such theory; it has a prover. Unfortunately, Presburger arithmetic is not very powerful, and although it has some practical applications, I am not aware of a single significant mathematical theorem proved using a Presburger prover.

However, when Presburger arithmetic is augmented with a certain additional function on natural numbers called Vk, for a fixed integer k ≥ 2, it remains decidable. And now it is powerful enough to prove things people really want to prove! My former master's student, Hamoon Mousavi, wrote a prover called Walnut for this bigger theory, and with it we have proven many new theorems of genuine mathematical interest. And we also reproved many results in the literature for which the only proofs previously known were long, tedious, and case-based.
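For readers wondering what Vk is: as explained in the comments below, Vk(n) is the largest power of k dividing n, the power itself rather than the exponent. A minimal sketch (just an illustration, not part of Walnut):

```python
def V(k, n):
    # V_k(n): the largest power of k dividing n -- the power itself,
    # not the exponent.  Assumes k >= 2 and n >= 1.
    p = 1
    while n % (p * k) == 0:
        p *= k
    return p

print(V(2, 12))  # -> 4: 4 divides 12, but 8 does not
print(V(3, 18))  # -> 9
print(V(2, 7))   # -> 1: 7 is odd
```

Adding this one function to Presburger arithmetic is what lets the theory talk about the base-k representations of numbers while remaining decidable.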

Recently, my co-authors Aayush Rajasekaran and Tim Smith used a different method to algorithmically prove a new result in number theory. It concerns palindromes, which are numbers that, when considered as a string of digits in some integer base b ≥ 2, read the same forwards and backwards. For example, the number 717 is a palindrome in base 10, but it is also a palindrome in base 2, since in binary it is 1011001101.
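To make the definition concrete, here is a short Python check (again just an illustration, not part of our tooling):

```python
def digits(n, b):
    # Digits of n in base b, most significant first.
    ds = []
    while n:
        ds.append(n % b)
        n //= b
    return ds[::-1] or [0]

def is_palindrome(n, b):
    # True if n reads the same forwards and backwards in base b.
    d = digits(n, b)
    return d == d[::-1]

print(is_palindrome(717, 10))  # -> True
print(is_palindrome(717, 2))   # -> True: 717 is 1011001101 in binary
```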

Last year, the number theorist William D. Banks started studying the additive number theory of palindromes. He proved that every natural number is the sum of at most 49 decimal palindromes. More recently, this result was improved by Florian Luca, Lewis Baxter, and the late Javier Cilleruelo, who proved that every natural number is the sum of at most 3 palindromes in base b, for all b ≥ 5. However, it seems that so far, nobody has proved any bound at all for bases b = 2, 3, 4.

Here is our result: every natural number is the sum of at most 9 binary palindromes. The bound "9" is probably not best possible; empirical results suggest the true bound is 4. No doubt somebody will improve our bound soon! What makes our result interesting, though, is how we did it. Instead of the heavily case-based approach of Banks and Cilleruelo-Luca-Baxter, we used a decision method: we recoded the problem as a formal language theory problem, and then used the fact that this formalism has a decidable logical theory associated with it. Then we used publicly-available software to prove our result.
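The theorem (and the empirical bound of 4) can be sanity-checked on an initial segment of the natural numbers by brute force. This is emphatically not our decision method, just a small dynamic-programming check:

```python
def min_binary_palindrome_summands(N):
    # dp[v] = minimum number of binary palindromes summing to v, for v <= N.
    pals = [m for m in range(1, N + 1) if bin(m)[2:] == bin(m)[2:][::-1]]
    INF = float("inf")
    dp = [0] + [INF] * N
    for v in range(1, N + 1):
        for p in pals:          # pals is sorted ascending
            if p > v:
                break
            dp[v] = min(dp[v], dp[v - p] + 1)
    return dp

dp = min_binary_palindrome_summands(1000)
print(dp[717])     # -> 1: 717 is itself a binary palindrome
print(max(dp[1:])) # at most 9 by the theorem; empirically much smaller here
```

Of course, no finite computation like this proves the theorem for all n; that is exactly the gap the automaton-based method below closes.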

Here are the details: we created a certain kind of automaton, called a nondeterministic nested-word automaton, that takes integers n, represented in base 2, as input. Given an input representing n, our automaton "guesses" a possible representation of n as a sum of palindromes, and then "verifies" that its guess is correct. Here "verifies" means checking that the summands are indeed palindromes (reading the same forwards and backwards) and that they sum to n. If the guess succeeds, the automaton accepts the input. The "sum of palindromes" theorem we want to prove then amounts to the claim that the automaton accepts every natural number as input.

Luckily for us, the so-called "universality problem" (does a given automaton accept every input?) is decidable for nested-word automata, a result proved by Alur and Madhusudan. We then used the ULTIMATE automata library to check the universality of the automaton we created. For more details, see our preprint.

Could other theorems in number theory be proved using this method? Yes, we proved a few more in our preprint. The holy grail would be a decidable logical theory strong enough to express traditional theorems about primes. If such a theory existed, we could, at least in theory, prove statements like Goldbach's conjecture (every even number > 2 is the sum of two primes) purely mechanically, by expressing them in the proper formalism and then running a prover. But currently we do not even know whether Presburger arithmetic, together with a predicate for primality, is decidable.

What's next? Well, a lot! But that will be the subject of Aayush's master's thesis, so you'll have to wait to find out.

9 comments:

sniffnoy said...

Sorry, just what is V_k? I couldn't find it in the linked Walnut paper.

Jeffrey Shallit said...

V_k (n) is the highest power of k dividing n. Not the exponent of the power, but the power itself.

Paul C. Anagnostopoulos said...

This is just too much fun! Thanks for posting it.

~~ Paul

lukebarnes said...

That's fascinating.

I have a related question. You say "Gödel showed that, given any sufficiently powerful axiom system (roughly speaking, it's enough to be able to define addition and multiplication of integers), there would be true results lacking any formal proof."

And yet I am told that the real numbers are complete and decidable. More precisely, Wikipedia tells me, the natural first-order theory of the arithmetic of real numbers (with both addition and multiplication) is complete and decidable, as was shown by Tarski (1948). This would seem to me (as a spectator, not a mathematician) to be an example of a more powerful axiomatic system being complete and decidable.

Again, I remember reading somewhere that, while one can add axioms to an incomplete axiomatic system to make statements decidable, this will result in other statements becoming undecidable. But that doesn't seem to be the case here. Or should I not be thinking of the reals as an "expansion" (in some sense) of the integers? I find the whole thing fascinating and confusing in equal measure.

philosopher-animal said...

Nifty. A lot of presentations of undecidability make it sound like nobody should ever care about it mathematically speaking because decidable theories are way too weak. But this is an intermediary case I wasn't aware of and gives the lie to that attitude.

Now I wonder increasingly how this maps on to traditional programming and software concerns. I'm in application security now, and while it is true that AS is undecidable in general, I increasingly wonder about security properties of certain classes of programs.

Joe Felsenstein said...

Extremely cool, though it leaves me a bit dizzy.

Jeffrey Shallit said...

Luke:

The existential theory of the reals is not stronger than, for example, Presburger arithmetic. They're not comparable. For one thing, in the existential theory of the reals it is not possible to express the assertion "x is an integer".

lukebarnes said...

Got it.

Well, sort of. I'm off to read something about the existential theory of the reals.

sniffnoy said...

I see, thanks!