# Calculus in Newton’s Principia

I would like to start my fourth blog by drawing your attention to Melvyn Bragg’s BBC Radio 4 In Our Time broadcast on Émilie du Châtelet (see image above), the brilliant translator of Newton’s The Principia into French and the lover of Voltaire. Alas, she died of childbirth complications before her translation was published (it came out posthumously in 1756). I found the broadcast quite enthralling, but remarks were made about The Principia that I disagree with.

The assertion that there is no calculus in The Principia is wrong. Newton’s love-hate relationship with the calculus is extremely complex, but he needs the calculus and uses it. In Lemmas 2 and 3 of Section 1, Book 1, he sets up the Riemann integral with uncharacteristic clarity, and in Lemma 2 of Section 2, Book 2, he manages to combine most of the elementary results on differentiation in one beautifully worded but obscure sentence.
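For readers who want the modern shape of the argument, here is a paraphrase of that Lemma 2 in present-day notation. The choice of a monotonic function and of equal widths is mine, not Newton’s, who argues geometrically about inscribed and circumscribed parallelograms.

```latex
% A modern paraphrase of Book 1, Section 1, Lemma 2 (Newton's own argument is geometric):
% for a monotonically increasing f on [a,b], split [a,b] into n pieces of width h = (b-a)/n
% with endpoints x_k = a + kh; then the inscribed and circumscribed rectangle sums satisfy
\[
  \sum_{k=0}^{n-1} f(x_k)\,h \;\le\; \int_a^b f(x)\,dx \;\le\; \sum_{k=1}^{n} f(x_k)\,h,
\]
% and the two sums differ by (f(b) - f(a))h, which vanishes as n grows. This is the
% Riemann integral, squeezed between lower and upper sums.
```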

I have lost count of the number of times that Newton proves the fundamental theorem of calculus in The Principia. This theorem states, roughly speaking, that integration and differentiation are inverses of each other, so the derivative of the integral is the function you started with, and the other way round. He never gives this result as a proposition that can stand on its own feet, but always slips a proof in as part of the main argument. The experts will realise that there are all kinds of difficulties, and that a crucial ingredient is deciding what kind of function one is prepared to integrate.
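For the record, here is the theorem in modern notation; Newton, of course, states nothing in this form.

```latex
% The fundamental theorem of calculus, stated in modern notation:
\[
  \frac{d}{dx}\int_a^x f(t)\,dt \;=\; f(x)
  \qquad\text{and}\qquad
  \int_a^b F'(t)\,dt \;=\; F(b) - F(a).
\]
% The first half needs a hypothesis such as continuity of f, the second a hypothesis on F';
% this is exactly the question of what kind of function one is prepared to integrate.
```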

A function f(x) is said to be monotonically increasing if f(b) ≥ f(a) whenever b > a, and to be monotonically decreasing if f(b) ≤ f(a) whenever b > a. Let us say that f(x) is locally monotonic if every finite interval on which f(x) is defined can be chopped up into a finite number of subintervals in such a way that f(x) is monotonic on each of these pieces. An obvious example is the function f(x) = sin(x). An example of a function f(x) that does not satisfy this condition is given by f(0) = 0, and f(x) = x sin(1/x) for x ≠ 0. Note that f(x) = 0 whenever x = 1/(n𝜋) for any integer n ≠ 0. (When doing calculus you want to measure angles, as here, in radians; replace 𝜋 by 180 if you prefer degrees.) So there are infinitely many values of x for which f(x) = 0 in any interval that contains 0, and at these values f(x) is alternately increasing and decreasing. The factor of x included in the definition is to make the function continuous; otherwise it would tear itself apart at the origin.
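To see in a line or two why this f(x) cannot be chopped into finitely many monotonic pieces near the origin, here is the calculation behind the claim just made.

```latex
% The zeros of f(x) = x sin(1/x) accumulate at the origin:
\[
  f\!\left(\frac{1}{n\pi}\right) \;=\; \frac{1}{n\pi}\,\sin(n\pi) \;=\; 0
  \qquad\text{for every integer } n \neq 0,
  \qquad\text{and}\qquad \frac{1}{n\pi} \to 0 \text{ as } n \to \infty.
\]
% Between consecutive zeros the function rises and falls back (or falls and rises back),
% so any interval containing 0 would need infinitely many monotonic pieces.
```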

Another problem is that Newton does not have a precise definition of the derivative of a function. More generally, he does not have a precise definition of a limit.
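For contrast, the definition that was only pinned down in the nineteenth century reads as follows.

```latex
% The modern definition of the derivative, which Newton did not have:
\[
  f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},
\]
% where the limit itself means: for every \epsilon > 0 there exists \delta > 0 such that
% 0 < |h| < \delta implies \left| \frac{f(a+h) - f(a)}{h} - f'(a) \right| < \epsilon.
```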
Another problem, which mathematicians will see coming, is with the terms ‘for all’ and ‘there exists’. These are now called ‘quantifiers’. If you are not used to the formal manipulation of quantifiers, consider the following two statements.

1. For every real number x there exists a real number y such that it is the case that y > x.
2. There exists a real number y such that for every real number x it is the case that y > x.

One is true and the other is false. And yes, Newton makes a quantifier mistake.
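In modern logical notation the two statements differ only in the order of the quantifiers.

```latex
% Statements 1 and 2 above, written with quantifier symbols; only their order changes.
\[
  1.\quad \forall x \in \mathbb{R} \;\; \exists y \in \mathbb{R} \;:\; y > x
  \qquad\qquad
  2.\quad \exists y \in \mathbb{R} \;\; \forall x \in \mathbb{R} \;:\; y > x
\]
% In the first, y is allowed to depend on x; in the second, a single y must work for
% every x at once. Swapping the order of the quantifiers changes the meaning entirely.
```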

Because of all these difficulties it is not reasonable to ask who first proved the fundamental theorem of calculus. Newton may have made a better job of it than his predecessors and contemporaries. Whether he failed to state the result as a proposition in order to avoid the issue of priority, I do not know.

Another point where I differ from the experts on the BBC broadcast is the assertion that there is a Leibnizian calculus and a Newtonian calculus. I would say that there is only one calculus, but with different notations. In most contexts the Leibniz notation is far superior; but it was Newton who calculated a triple integral and solved partial differential equations.
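For readers who have never seen the two notations side by side, here is the same material written both ways; the labels are the standard modern ones, not anything taken from The Principia.

```latex
% One calculus, two notations: Leibniz's differentials versus Newton's fluxions.
\[
  \text{Leibniz:}\quad \frac{dy}{dx}, \qquad \frac{d^{2}y}{dx^{2}}, \qquad \int y\,dx;
  \qquad\qquad
  \text{Newton:}\quad \dot{y}, \quad \ddot{y}, \quad \frac{\dot{y}}{\dot{x}}.
\]
% Newton's dots denote fluxions, rates of change with respect to a single underlying
% variable (time), so the notation does not say which variable one is differentiating
% with respect to; Leibniz's fractions do, which is one reason they travel better.
```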

Finally, the assertion made in the broadcast that Newton did not make hypotheses is wrong. He makes very explicit numbered hypotheses, despite his very famous hypotheses non fingo from the great General Scholium that terminates The Principia. The problem lies in the fact that, in typical Newton style, he uses ‘hypothesis’ in two senses. He is stating in the scholium that he does not make metaphysical hypotheses about the causes of gravity, but he does make scientific hypotheses about, for example, the nature of viscosity. It is like the distinction between superstition and true belief. There is a problem with translating fingo: ‘conjure up’ might be suitable here, but he uses the verb with no negative connotations in other contexts; indeed, fingamus is the word he uses for ‘let us assume’.

I reproduce below the lemma (a single sentence) in which Newton tells you everything you need to know about differentiation, followed by my translation. What this lemma actually demonstrates is that Newton’s calculus notation is not optimal, and that you need an annotated translation of The Principia.