Tuesday, August 20, 2019

Why Can't Creationists Do Mathematics?


I suppose it's not so remarkable that creationists can't do mathematics. After all, almost by definition, they don't understand evolution, so that alone should suggest some sort of cognitive deficit. What surprises me is that even creationists with math or related degrees often have problems with basic mathematics.

I wrote before about Marvin Bittinger, a mathematician who made up an entirely bogus "time principle" to estimate probabilities of events. And about Kirk Durston, who speaks confidently about infinity, but gets nearly everything wrong.

And here's yet another example: creationist Jonathan Bartlett, who is director of something called the Blyth Institute (which, mysteriously, lists no actual people associated with it, and seems to consist entirely of Jonathan Bartlett himself), has recently published a post about mathematics, in which he makes a number of very dubious assertions. I'll just mention two.

First, Bartlett calls polynomials the "standard algebraic functions". This is definitely nonstandard terminology, and not anything a mathematician would say. For mathematicians, an "algebraic function" is one that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, consider the function f(x) defined implicitly by f^2 + f + x = 0. The function f(x) = (-1 + sqrt(1-4x))/2 satisfies this equation, and hence it is called algebraic.
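For readers who want to check this, the algebra is routine (a worked verification added here, not something from Bartlett's post):

```latex
% Verify that f(x) = (-1 + \sqrt{1-4x})/2 satisfies f^2 + f + x = 0.
\[
f = \frac{-1+\sqrt{1-4x}}{2}, \qquad
f^2 = \frac{1 - 2\sqrt{1-4x} + (1-4x)}{4} = \frac{1 - \sqrt{1-4x} - 2x}{2},
\]
\[
f^2 + f = \frac{\bigl(1-\sqrt{1-4x}-2x\bigr) + \bigl(-1+\sqrt{1-4x}\bigr)}{2} = -x,
\qquad\text{so } f^2 + f + x = 0.
\]
```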

Second, Bartlett claims that "every calculus student learns a method for writing sine and cosine" in terms of polynomials, even though he also states this is "impossible". How can one resolve this contradiction? Easy! He explains that "If, however, we allow ourselves an infinite number of polynomial terms, we can indeed write sine and cosine in terms of polynomial functions".

This reminds me of the old joke about Lincoln: "In discussing the question, he used to liken the case to that of the boy who, when asked how many legs his calf would have if he called its tail a leg, replied, 'Five,' to which the prompt response was made that calling the tail a leg would not make it a leg."

If one allows "an infinite number of polynomial terms", then the result is not a polynomial! How hard can this be to understand? Such a thing is called a "power series"; it is not the same as a polynomial at all. Mathematicians even use a different notation to distinguish between these. Polynomials over a field F in one variable are written using the symbol F[x]; power series are written as F[[x]].
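To make the distinction concrete, here is the familiar power series for sine, an element of R[[x]] but certainly not of R[x]:

```latex
\[
\sin x \;=\; x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots
       \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n\, x^{2n+1}}{(2n+1)!}.
\]
% Truncating after finitely many terms gives a polynomial in R[x],
% which approximates sine but never equals it.
```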

Moral of the story: don't learn mathematics from creationists.

P.S. Another example of Bartlett getting basic things wrong is here.

6 comments:

JimV said...

A mathematician probably would have just said that the Weierstrass approximation theorem states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function. For all practical purposes, there is no need to go to an infinite number of terms. (No electronic calculator or computer does, and that's where most students and engineers get their sine and cosine values.)
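(To illustrate the point, a minimal Python sketch; this is an addition, not what real calculators do internally, since they typically use range reduction plus a minimax polynomial, but a truncated Taylor polynomial makes the same point.)

```python
import math

def sin_poly(x, terms=7):
    """Approximate sin(x) by a finite Taylor polynomial of degree 2*terms - 1:
    x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# A degree-13 polynomial already agrees with library sine to about
# 10 decimal places on [0, pi/2] -- no infinite series required.
for x in (0.1, 0.5, 1.0, 1.5):
    print(x, sin_poly(x), math.sin(x))
```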

The thing I would add to calculus textbooks (since the three I have used never mention it) is that the calculus of continuous functions is an excellent approximation to a lot of things which in real life are not continuous: e.g., Maxwell's equations (electric charge is not continuous; we never see a charge smaller in absolute value than that of an electron, although slightly smaller but still discrete charges are assumed to exist); the Stokes equations (fluid flow is not continuous, since fluids are composed of molecules); stress in bridge girders; et cetera. I tend to think, similar to Zeno and Democritus, that nothing, not even space and time, is continuous, but that is just an opinion.

Gerry Myerson said...

I was thinking about the Lincoln thing the other day, in particular in a mathematical context. How many primes are there between 10 and 20, if we call 14 a prime? Is the right answer "five", or is it "four, because calling 14 a prime doesn't make it one"? What is, and what isn't, a prime, is a matter of definition, and definitions are subject to change as circumstances change. There was a time when it was not unusual to call 1 a prime – it was listed as a prime in D N Lehmer's tables. There are occasions when it is convenient to view -1 as a prime. It's conceivable (though I'm not holding my breath) that some day we'll find good reasons for calling 14 a prime. Does calling a pseudorhombicuboctahedron an Archimedean solid make it one? Does calling a stellated dodecahedron a polyhedron make it one? I think a case can be made that if, for some purpose, we find it convenient to call the tail a leg, then, at least for that purpose, that makes it one.

jfnorburg said...

To argue that it may be, at some point, acceptable to call a tail a leg is to strain credulity and beg the question of why we should ever want to do so. Your examples of things we "might" define into reality do little more than say, "anything is possible in this jet-propelled age." Yes, such is possible, but it is so improbable as not to require consideration.

JimV said...

As usual, there is a semantic difference involved in how "calling" is understood.

Suppose we defined prime numbers as follows: all numbers greater than 1 that are divisible only by themselves and 1, plus the number 14. If that were the actual mathematical definition, then 14 would be a prime. So if "calling" meant "redefining", then Lincoln would be wrong.
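(In code the two readings really do come apart; a toy Python sketch of this hypothetical amended definition:)

```python
def is_prime_standard(n):
    """The usual definition: n > 1 with no divisors other than 1 and n."""
    return n > 1 and all(n % d for d in range(2, n))

def is_prime_amended(n):
    """The hypothetical amended definition: the usual primes, plus 14."""
    return n == 14 or is_prime_standard(n)

# Under the amended definition there really are five "primes" in 10..20;
# under the standard one, merely asserting that 14 is prime changes nothing.
print([n for n in range(10, 21) if is_prime_amended(n)])   # [11, 13, 14, 17, 19]
print([n for n in range(10, 21) if is_prime_standard(n)])  # [11, 13, 17, 19]
```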

But if "calling" means "pretending" or "asserting", which is the conventional reading, then Lincoln is correct. His point is that we don't get to make up our own versions of reality.

Unknown said...

Bartlett is "crevo", and has (or had) at least 2 creationism blogs and his own 'publishing' company, of which he was the only client. He once declared that it would have taken at least 1 million mutations to get an obligate biped from a knuckle-walking ape, just for the pelvic region. He was asked repeatedly if he could provide evidence that this was so, and to name 100 specific features that would have needed to have been altered via mutation. He bailed, of course, but continued to insist he was correct.

Just goes to show that creationists are usually wrong about genetics and general biology as well as math. Even an engineer like "crevo."

Rob Fielding said...

So, his paper "Simplifying and Refactoring Introductory Calculus" makes A LOT OF SENSE.

https://arxiv.org/pdf/1811.03459.pdf

The change that he makes to second-derivative notation is almost obvious in hindsight. People can honestly disagree about what is "easier to teach", but following this advice has completely cleared up things about calculus that bothered me.
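(For the record, here is the standard illustration of what is awkward about the usual notation, independent of Bartlett's particular proposal: first derivatives invert like fractions, but naive second-derivative notation does not.)

```latex
% First derivatives invert like fractions:
\[
\frac{dx}{dy} = \left(\frac{dy}{dx}\right)^{-1}.
\]
% But treating d^2y/dx^2 as a fraction would suggest
% d^2x/dy^2 = (d^2y/dx^2)^{-1}, whereas the correct formula is
\[
\frac{d^2x}{dy^2} = -\,\frac{d^2y/dx^2}{\left(dy/dx\right)^{3}}.
\]
```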

And to be honest, other than his (apparent) insistence that higher-derivative notation be fixed, it's not entirely original. JB happens to be a computer programmer as well. People who spend all their time at chalkboards, on paper, or in LaTeX can work around bad notation. Computers can't really do this.

This is because it's very possible for a notation to give correct answers for all the inputs it gets applied to, for hundreds of years: by definition, you use tools where they work, and you obey a rule not to use them in the situations where the notation breaks down.

What he is doing effectively rediscovers automatic differentiation, the technique used for deep learning in artificial intelligence. I ran into Jonathan Bartlett's paper when I was trying to make sense of auto-differentiation. In a deep learning model, you take a general program with thousands of inputs and differentiate it. You calculate f(x0, x1, x2, ...) for a really huge function that could have hundreds of thousands of parameters, because you will use its first derivative to do backpropagation. In this situation, if you do not fix the notation, then you have to fill your code with exceptional cases to get the correct answer in spite of the notation. Welcome to the world of programming.
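(A minimal sketch of the idea in plain Python, using forward-mode dual numbers; production deep learning frameworks use reverse mode, but the point that derivative rules can be applied mechanically, with no special cases, is the same.)

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    carry (value, derivative) pairs and let arithmetic rules do the work."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule, applied mechanically -- no special cases.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2, so f'(5) = 32

x = Dual(5.0, 1.0)                 # seed dx/dx = 1
y = f(x)
print(y.val, y.der)                # 86.0 32.0
```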

I would say that if you try to implement multivariable calculus in a raw programming language (i.e., Python without libraries), you will start to understand the perspective of finitism to some degree. In code, when the (algebraic) pattern breaks, you really need to think hard about whether putting in special-case code to handle things like "this value is really small/large" is going to eventually break a pattern that something else needs to match.

We are beginning to see a clash of worlds from computer graphics programmers who discover geometric algebra, when they start to regard all the bizarre workarounds in Gibbs vector algebra as bugs.

Just because something works doesn't mean that it's holy and unchallengeable. All day long, I deal with code that has worked perfectly over a wide range of requirements for many years, until a customer needs to apply it in a new context. In those cases, you realize that some inconveniences you tolerated are actually design flaws, or even bugs once the scope is expanded.

The comments about "creationists can't do calculus" are idiotic, by the way. Newton himself was a religious person who calculated the end of the world from the Bible, dabbled in alchemy, and pursued other kinds of crackpot pseudo-science. It's completely irrelevant to what he managed to get right.

I haven't read anything else from Jonathan Bartlett, but his paper about higher-derivative fixes is quite interesting. And his calculus book actually is a really great book on the subject. The bits in it that are heretical are heretical out of necessity. But almost all math books written by people who have actual experience writing large code bases, especially in computer graphics, have something in common: they understand the value of making things as algebraic as possible, so that the code does the right thing without a human being second-guessing it.