Cobham's invited half-hour talk took place during the Congress for Logic, Methodology and Philosophy of Science at the Hebrew University of Jerusalem, which was held from August 26 to September 2 1964. His paper, which he delivered on the last morning of the conference (September 2), was entitled "The intrinsic computational difficulty of functions". It later appeared in the proceedings of that conference [1], which were published in 1965.

The paper introduces a number of fundamental ideas and questions that continue to drive computational complexity theory today. For example, Cobham starts by asking "is it harder to multiply than to add?", a question for which we still do not have a satisfactory answer, 50 years later. Clearly we can add two n-bit numbers in O(n) time, but it is still not known whether it is possible to multiply in linear time. The best algorithm currently known is due to Fürer, and runs in n(log n)2^{O(log* n)} time.
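To make the contrast concrete, here is a small illustrative sketch (mine, not Cobham's) of the schoolbook algorithm for adding two n-bit numbers, represented as little-endian bit lists. Each bit is touched once, so the running time is O(n); by comparison, the schoolbook method for multiplication combines every bit of one number with every bit of the other, giving O(n²).

```python
def add_bits(a, b):
    """Schoolbook addition of two little-endian bit lists.

    Each position is processed exactly once with a running carry,
    so the time is O(n) in the length of the longer input.
    """
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s & 1)   # low bit of the column sum
        carry = s >> 1         # carry into the next column
    if carry:
        result.append(carry)
    return result
```

For example, `add_bits([1, 1], [1])` adds 3 and 1 to get 4, i.e. `[0, 0, 1]`.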

Cobham then goes on to point out the distinction between the complexity of a *problem* and the running time of a particular algorithm to solve that problem (a distinction that many students still don't appreciate).

Later in the paper, Cobham points out that many familiar functions, such as addition, multiplication, division, square roots, and so forth can all be computed in time "bounded by a polynomial in the lengths of the numbers involved". He suggests we consider the class "ℒ", of all functions having this property. Today we would call this class P. (Actually, P is usually considered to consist only of the {0,1}-valued functions, but this is a minor distinction.)

He then goes on to discuss why P is a natural class. The reasons he gives are the same ones I give students today: first, that the definition is invariant under the choice of computing model. Turing machines, RAMs, and familiar programming languages have the property that if a problem is in P for one such model, then it is in P for all the others. (Today, though, we have to add an asterisk, because the quantum computing model offers several problems in BQP (such as integer factorization) for which no polynomial-time solution is known in any reasonable classical model.)

A second reason, Cobham observes, is that P has "several natural closure properties" such as being "closed in particular under ... composition" (if f and g are polynomial-time computable, then so is their composition f ∘ g; the key point is that the output of f has length at most polynomial in the length of its input, so g still runs in polynomial time on it).

He then mentions the problem of computing the function f(n) = the nth prime, and asks if it is in P. Fifty years later, we still do not know the answer; the fastest algorithm known runs in O(n^{½+ε}) time, which is exponential in the input length log n.
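To see why this is an exponential-time problem as stated, here is a naive sketch (for illustration only; the real state-of-the-art algorithms are far more sophisticated) that computes the nth prime by trial division. Its running time is polynomial in n itself, and since the input n is written with only about log n bits, that is exponential in the input length.

```python
def nth_prime(n):
    """Return the nth prime (n >= 1) by trial division.

    Runs in time polynomial in n itself, hence exponential in the
    input length log n -- nowhere near a polynomial-time algorithm.
    """
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # trial division up to the square root
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate
```

For example, `nth_prime(10)` returns 29, the tenth prime.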

He concludes the paper by saying that the problems he mentioned in his talk are "fundamental" and their "resolution may well call for considerable patience and discrimination" --- very prescient, indeed.

Like many scientific ideas, some of the ideas underlying the definition of the class P appeared earlier in several places. For example, in a 1910 paper [2], the mathematician H. C. Pocklington discussed two different algorithms for solving a quadratic congruence, and drew a distinction between their running times, as follows: "the labour required here is proportional to a power of the logarithm of the modulus, not to the modulus itself or its square root as in the indirect process, and hence see that in the case of a large modulus the direct process will be much quicker than the indirect."

In 1956, a letter from Gödel to von Neumann discussed the possibility that proofs of assertions could be carried out in linear or quadratic time, and asked specifically about the number of steps required to test whether a number n is prime. Today we know that primality can indeed be decided in polynomial time.

About the same time, Waterloo's own Jack Edmonds was considering the same kinds of ideas. In a 1965 paper [3], he drew a distinction between algorithms that "increase[s] in difficulty exponentially with the size of the" input and those whose "difficulty increases only algebraically". He raised, in particular, the graph isomorphism problem, whose complexity is still unresolved today.

For some reason I don't understand, Cobham never got much recognition for his ideas about P. (Neither did Edmonds.) Stephen Cook, in a review of Cobham's paper, wrote "This is perhaps the best general discussion in print" of the subject. But, as far as I know, Cobham never got any kind of award or prize.

Cobham did other fundamental work. For example, a 1954 paper in the *Journal of the Operations Research Society of America* on wait times in queues has over 400 citations in Google Scholar. In two papers published in 1969 and 1972, respectively [4,5], he introduced the notion of "automatic sequence" (that is, a sequence over a finite alphabet computable, given the base-k expansion of n, by a finite automaton) and proved most of the really fundamental properties of these sequences. And in a 1968 technical report [6] he discussed proving the transcendence of certain formal power series, although his proofs were not completely correct.

Alan Belmont Cobham was born on November 4 1927, in San Francisco, California. His parents were Morris Emin Cobham (aka Emin Maurice Cobham) (October 2 1888 - February 1973), a silk merchant, and Ethel Carolina Rundquist (June 24 1892 - Nov 1977), an artist. He had an older sister, Claire Caroline Cobham (June 18 1924 - November 29 2000), who worked for Boehringer-Ingelheim Pharmaceuticals. In the 1940 census, Alan was living in Palm Beach, Florida, where his father was a hotel manager.

Cobham's parents in 1920.

Sometime between 1940 and 1945, Alan's family moved to the Bronx, where Alan attended the Fieldston School. Below is a picture of Alan from the 1945 Fieldston Yearbook.

Alan attended Oberlin College. In the 1948 Oberlin College yearbook, he appears in a photo of the Mathematics Club (below). He is in the front row, 3rd from the right.

Later, Alan transferred to the University of Chicago. He worked for a time in the Operations Evaluation Group of the United States Navy in the early 1950's. He went on to do graduate work at both Berkeley and MIT, although he never got a Ph.D. He also worked at IBM Yorktown Heights from the early 1960's until 1984. One of his achievements at IBM was a computer program, "Playbridge", that was, at the time, one of the best programs in the world for playing bridge; it was profiled in an October 7 1984 article in the *New York Times*. In the fall of 1984, Alan left IBM and became chair of the fledgling computer science department at Wesleyan University in Middletown, Connecticut, a post which he held until June 30 1988.

I interviewed Alan Cobham in 2010. I was hoping to find out about the reception of his paper in 1964, but unfortunately, he was clearly suffering from some sort of mild dementia or senility, and could not remember any details of his work on P. When I asked him what he did to keep himself busy, he said, "I watch a lot of TV."

Alan passed away in Middletown, Connecticut, on June 28 2011. As far as I can tell, he never married, nor did he have any children.

References

[1] A. Cobham, The intrinsic computational difficulty of functions, in Y. Bar-Hillel, ed., *Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress*, North-Holland Publishing Company, Amsterdam, 1965, pp. 24-30.

[2] H. C. Pocklington, The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity, *Proc. Cambridge Phil. Soc.* **16** (1910), 1-5.

[3] J. Edmonds, Paths, trees, and flowers, *Canad. J. Math.* **17** (1965), 449-467.

[4] A. Cobham, On the base-dependence of sets of numbers recognizable by finite automata, *Math. Systems Theory* **3** (1969), 186-192.

[5] A. Cobham, Uniform tag sequences, *Math. Systems Theory* **6** (1972), 164-192.

[6] A. Cobham, A proof of transcendence based on functional equations, IBM Yorktown Heights, Technical report RC-2041, March 25 1968.

*This is a draft of an article I am preparing. I would appreciate feedback and more information, if you have it, about Alan Cobham.*

## 4 comments:

I have only a slight familiarity with complexity notation. Can you explain what n(log n)2^{O(log* n)} means? I don't recall seeing a big-O expression in a superscript.

~~ Paul

When a big O occurs inside an expression, it is shorthand for "there exists a function f satisfying the stated bound that can be substituted for the big-O term". So g(n) = n (log n) 2^{O(log* n)} means "there exists a function f which is O(log* n) such that g(n) = n (log n) 2^{f(n)}".

And if you don't know the function log* n, it is the number of times you need to apply log repeatedly to n to get a number ≤ 1.
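For the curious, here is a small Python sketch (my own, purely for illustration) of that definition of log*: count how many times you must apply log₂ before the value drops to 1 or below.

```python
import math

def log_star(n):
    """Iterated logarithm: the number of times log2 must be applied
    to n before the result is <= 1."""
    count = 0
    x = float(n)
    while x > 1:
        x = math.log2(x)
        count += 1
    return count
```

For instance, `log_star(65536)` is 4 (65536 → 16 → 4 → 2 → 1), and the function grows so slowly that its value is at most 5 for any n you could ever write down.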

This is really interesting. I look forward to reading the final version.

Thanks for your post.

I wondered about his whereabouts for a long time.
