Linear polynomials, i.e. functions ${f}$ of the form ${x \in {\mathbb R} \mapsto m x + c}$, where ${m}$ and ${c}$ are real numbers, are the only¹ real-valued functions on ${{\mathbb R}}$ with the property that

$\displaystyle \frac {f(b) - f(a)} {b - a} \ \ \ \ \ (1)$

has a common value (namely, ${m}$) for every pair of distinct real numbers ${a}$ and ${b}$. The number ${m}$ can be thought of as measuring how “steep” the graph of ${f}$ is, and is called the gradient of ${f}$. When we look at functions ${f}$ other than linear polynomials, our intuition is that the graph of ${f}$ still has a “steepness”, but the steepness now varies from point to point. We call the particular “steepness” at a given point ${a}$ the derivative of ${f}$ at ${a}$. It turns out to be rather difficult to define derivatives properly, and the process of trying to make the definition precise motivates the development of the concept of limits of functions. But I’m not going to go into that in this post. Instead, I just want to point out that there is a way of avoiding all of this: we can simply calculate “steepness” using (1). The price is that “steepness” then depends on two variables rather than one. From now on I will use the more mathematical term gradient rather than “steepness”, although derivative would be just as good (either way, I am extending the term beyond its standard definition). In case it’s not clear what I mean, here’s a formal definition.

Definition 1 For every real-valued function ${f}$ and every pair of distinct real numbers ${a}$ and ${b}$ at which ${f}$ is defined, the gradient of ${f}$ from ${a}$ to ${b}$ is defined by the formula

$\displaystyle \Delta f(x) |_{x = a}^{x = b} = \frac {f(b) - f(a)} {b - a}.$
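In code, Definition 1 is just a two-argument function. Here is a minimal Python sketch (the name `gradient` is my own, not notation from the post):

```python
def gradient(f, a, b):
    """Gradient of f from a to b, per Definition 1 (requires a != b)."""
    return (f(b) - f(a)) / (b - a)

# For a linear polynomial the gradient is the same for every pair a, b.
linear = lambda x: 3 * x + 2
print(gradient(linear, 0.0, 5.0))   # 3.0
print(gradient(linear, -1.0, 7.0))  # 3.0
```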

You can develop a version of the theory of differentiation which is based on this definition. In particular, there are nice analogues to all the differentiation rules. The formulae are generally more complicated, but also somewhat easier to derive, and they can all be used to immediately derive the differentiation rules by taking the limits as ${b}$ approaches ${a}$. They can also be used to derive the differentiation rules in the discrete calculus.

First, here are some rules dealing with functions in general.

Theorem 2 For every pair of real-valued functions ${f}$ and ${g}$ and every pair of real numbers ${a}$ and ${b}$ at which ${f}$ and ${g}$ are defined,

$\displaystyle \Delta (f(x) + g(x)) |_{x = a}^{x = b} = \Delta f(x) |_{x = a}^{x = b} + \Delta g(x) |_{x = a}^{x = b}.$

Proof: Suppose ${f}$ and ${g}$ are real-valued functions and ${a}$ and ${b}$ are real numbers at which ${f}$ and ${g}$ are defined. Then

$\begin{array}{rcl} \Delta (f(x) + g(x)) |_{x = a}^{x = b} &=& \frac {(f(b) + g(b)) - (f(a) + g(a))} {b - a} \\ &=& \frac {(f(b) - f(a)) + (g(b) - g(a))} {b - a} \\ &=& \frac {f(b) - f(a)} {b - a} + \frac {g(b) - g(a)} {b - a} \\ &=& \Delta f(x) |_{x = a}^{x = b} + \Delta g(x) |_{x = a}^{x = b}. \end{array}$
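The sum rule is easy to sanity-check numerically. A small sketch, with `gradient` as my own helper name and arbitrary test functions:

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 2
g = math.sin
a, b = 0.5, 2.0

# Theorem 2: the gradient of a sum is the sum of the gradients.
lhs = gradient(lambda x: f(x) + g(x), a, b)
rhs = gradient(f, a, b) + gradient(g, a, b)
assert math.isclose(lhs, rhs)
```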

Theorem 3 For every pair of real-valued functions ${f}$ and ${g}$ and every pair of real numbers ${a}$ and ${b}$ at which ${f}$ and ${g}$ are defined,

$\displaystyle \Delta (f(x) g(x)) |_{x = a}^{x = b} = \Delta f(x) |_{x = a}^{x = b} g(b) + f(a) \Delta g(x) |_{x = a}^{x = b}.$

Proof: Suppose ${f}$ and ${g}$ are real-valued functions and ${a}$ and ${b}$ are real numbers at which ${f}$ and ${g}$ are defined. Then

$\begin{array}{rcl} \Delta (f(x) g(x)) |_{x = a}^{x = b} &=& \frac {f(b) g(b) - f(a) g(a)} {b - a} \\ &=& \frac {(f(b) g(b) - f(a) g(b)) + (f(a) g(b) - f(a) g(a))} {b - a} \\ &=& \frac {(f(b) - f(a)) g(b)} {b - a} + \frac {f(a) (g(b) - g(a))} {b - a} \\ &=& \frac {f(b) - f(a)} {b - a} g(b) + f(a) \frac {g(b) - g(a)} {b - a} \\ &=& \Delta f(x) |_{x = a}^{x = b} g(b) + f(a) \Delta g(x) |_{x = a}^{x = b}. \end{array}$
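The product rule can be checked the same way; a sketch with `gradient` as my own helper:

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 2
g = math.exp
a, b = 0.5, 2.0

# Theorem 3: note the asymmetry -- g is evaluated at b, f at a.
lhs = gradient(lambda x: f(x) * g(x), a, b)
rhs = gradient(f, a, b) * g(b) + f(a) * gradient(g, a, b)
assert math.isclose(lhs, rhs)
```

Unlike the usual product rule, this formula is not symmetric in ${f}$ and ${g}$: splitting the telescoping sum the other way gives the equally valid identity ${\Delta f(x) |_{x = a}^{x = b} g(a) + f(b) \Delta g(x) |_{x = a}^{x = b}}$, and both collapse to the ordinary rule as ${b}$ approaches ${a}$.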

Theorem 4 For every pair of real-valued functions ${f}$ and ${g}$ and every pair of real numbers ${a}$ and ${b}$ at which ${g}$ is defined, such that ${g(a) \neq g(b)}$ and ${f}$ is defined at ${g(a)}$ and ${g(b)}$,

$\displaystyle \Delta f(g(x)) |_{x = a}^{x = b} = \Delta f(u) |_{u = g(a)}^{u = g(b)} \Delta g(x) |_{x = a}^{x = b}.$

Proof: Suppose ${f}$ and ${g}$ are real-valued functions and ${a}$ and ${b}$ are real numbers at which ${g}$ is defined, such that ${g(a) \neq g(b)}$ and ${f}$ is defined at ${g(a)}$ and ${g(b)}$. Then

$\begin{array}{rcl} \Delta f(g(x)) |_{x = a}^{x = b} &=& \frac {f(g(b)) - f(g(a))} {b - a} \\ &=& \frac {(f(g(b)) - f(g(a))) (g(b) - g(a))} {(b - a) (g(b) - g(a))} \\ &=& \frac {f(g(b)) - f(g(a))} {g(b) - g(a)} \frac {g(b) - g(a)} {b - a} \\ &=& \Delta f(u) |_{u = g(a)}^{u = g(b)} \Delta g(x) |_{x = a}^{x = b}. \end{array}$
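Here is a numerical check of the chain rule, including the hypothesis ${g(a) \neq g(b)}$ that the proof relies on (`gradient` is my own helper):

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = math.sin
g = lambda x: x ** 2
a, b = 0.5, 2.0
assert g(a) != g(b)  # hypothesis needed to divide by g(b) - g(a)

# Theorem 4: outer gradient taken from g(a) to g(b).
lhs = gradient(lambda x: f(g(x)), a, b)
rhs = gradient(f, g(a), g(b)) * gradient(g, a, b)
assert math.isclose(lhs, rhs)
```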

Theorem 5 For every invertible real-valued function ${f}$ and every pair of real numbers ${a}$ and ${b}$ at which ${f^{\circ -1}}$ is defined,

$\displaystyle \Delta f^{\circ -1}(x) |_{x = a}^{x = b} = \frac 1 {\Delta f(u) |_{u = f^{\circ -1}(a)}^{u = f^{\circ -1}(b)}}.$

Proof: Suppose ${f}$ is an invertible real-valued function and ${a}$ and ${b}$ are real numbers at which ${f^{\circ -1}}$ is defined. Since ${f^{\circ -1}}$ is injective, ${f^{\circ -1}(a) \neq f^{\circ -1}(b)}$, and since ${a = f(f^{\circ -1}(a))}$ and ${b = f(f^{\circ -1}(b))}$, we have ${\Delta x |_{x = a}^{x = b} = \Delta f(u) |_{u = f^{\circ - 1}(a)}^{u = f^{\circ -1}(b)} \Delta f^{\circ -1}(x) |_{x = a}^{x = b}}$ by Theorem 4. We also have ${\Delta x |_{x = a}^{x = b} = 1}$ by Theorem 7 (with ${n = 1}$), so

$\displaystyle 1 = \Delta f(u) |_{u = f^{\circ - 1}(a)}^{u = f^{\circ -1}(b)} \Delta f^{\circ -1}(x) |_{x = a}^{x = b},$

which yields the stated formula upon rearrangement.
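The inverse rule can be checked with a concrete invertible function; a sketch using the cube on the positive reals, where `x ** (1 / 3)` really is its inverse (`gradient` is my own helper):

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

cube = lambda x: x ** 3
cube_root = lambda x: x ** (1 / 3)  # inverse of cube on the positive reals

a, b = 1.0, 8.0
# Theorem 5: the gradient of the inverse is the reciprocal of the gradient
# of f between the pulled-back endpoints.
lhs = gradient(cube_root, a, b)
rhs = 1 / gradient(cube, cube_root(a), cube_root(b))
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```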

Now, here are some rules for specific functions.

Theorem 6 For every triple of real numbers ${c}$, ${a}$ and ${b}$,

$\displaystyle \Delta c |_{x = a}^{x = b} = 0.$

Proof: Suppose ${c}$, ${a}$ and ${b}$ are real numbers. Then

$\begin{array}{rcl} \Delta c |_{x = a}^{x = b} &=& \frac {c - c} {b - a} \\ &=& \frac 0 {b - a} \\ &=& 0. \end{array}$

Theorem 7 For every positive integer ${n}$ and every pair of real numbers ${a}$ and ${b}$,

$\displaystyle \Delta x^n |_{x = a}^{x = b} = \sum_{i = 0}^{n - 1} a^i b^{(n - 1) - i}.$

Proof: Suppose ${n}$ is a positive integer and ${a}$ and ${b}$ are real numbers. Recall the geometric series formula:

$\displaystyle \sum_{i = 0}^m r^i = \frac {r^{m + 1} - 1} {r - 1},$

where ${m}$ is a non-negative integer and ${r}$ is a real number other than ${1}$. By substituting ${m = n - 1}$ and ${r = \frac b a}$ into this formula (assuming ${a \neq 0}$; the case ${a = 0}$ can be checked directly) and multiplying both sides by ${a^{n - 1}}$ it follows that

$\displaystyle \sum_{i = 0}^{n - 1} b^i a^{(n - 1) - i} = \frac {b^n - a^n} {b - a},$

so

$\displaystyle \Delta x^n |_{x = a}^{x = b} = \sum_{i = 0}^{n - 1} b^i a^{(n - 1) - i},$

which is the stated formula with the order of summation reversed.
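A quick numerical check of Theorem 7 (`gradient` is my own helper; the values chosen are arbitrary):

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

n, a, b = 4, 2.0, 3.0
lhs = gradient(lambda x: x ** n, a, b)  # (3^4 - 2^4) / (3 - 2) = 65
rhs = sum(a ** i * b ** ((n - 1) - i) for i in range(n))
assert lhs == rhs  # both 65.0: 27 + 18 + 12 + 8
```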

Theorem 8 For every positive integer ${n}$ and every pair of real numbers ${a}$ and ${b}$ which are positive if ${n}$ is even,

$\displaystyle \Delta \sqrt[n] x |_{x = a}^{x = b} = \left( \sum_{i = 0}^{n - 1} \sqrt[n] {a^i b^{(n - 1) - i}} \right)^{-1}.$

Proof: Suppose ${n}$ is a positive integer and ${a}$ and ${b}$ are real numbers which are positive if ${n}$ is even. Recall the geometric series formula:

$\displaystyle \sum_{i = 0}^m r^i = \frac {r^{m + 1} - 1} {r - 1},$

where ${m}$ is a non-negative integer and ${r}$ is a real number other than ${1}$. By substituting ${m = n - 1}$ and ${r = \sqrt[n] {\frac b a}}$ into this formula (assuming ${a \neq 0}$; the case ${a = 0}$ can be checked directly) and multiplying both sides by ${\sqrt [n] {a^{n - 1}}}$ it follows that

$\displaystyle \sum_{i = 0}^{n - 1} \sqrt[n] {b^i a^{(n - 1) - i}} = \frac {b - a} {\sqrt[n] b - \sqrt[n] a},$

so

$\displaystyle \Delta \sqrt[n] x |_{x = a}^{x = b} = \left( \sum_{i = 0}^{n - 1} \sqrt[n] {b^i a^{(n - 1) - i}} \right)^{-1},$

which is the stated formula with the order of summation reversed.
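Theorem 8 can be checked the same way; a sketch with ${n = 2}$ and perfect squares so the intermediate values are exact (`gradient` is my own helper):

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

n, a, b = 2, 4.0, 9.0
lhs = gradient(lambda x: x ** (1 / n), a, b)  # (3 - 2) / (9 - 4) = 0.2
rhs = 1 / sum((a ** i * b ** ((n - 1) - i)) ** (1 / n) for i in range(n))
assert math.isclose(lhs, rhs)  # 1 / (3 + 2) = 0.2
```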

Theorems 7 and 8 might not seem obviously useful, but they do something genuinely non-trivial: each expresses the gradient, a priori a rational function of ${a}$ and ${b}$ with ${b - a}$ in the denominator, as an expression with no such division (a polynomial, in the case of Theorem 7). The rational function is quite simple while the polynomial is more complicated, but the polynomial makes sense even when ${a = b}$.

Theorem 9 For every pair of non-zero real numbers ${a}$ and ${b}$,

$\displaystyle \left. \Delta \frac 1 x \right|_{x = a}^{x = b} = -\frac 1 {a b}.$

Proof: Suppose ${a}$ and ${b}$ are non-zero real numbers. Then

$\begin{array}{rcl} \left. \Delta \frac 1 {x} \right|_{x = a}^{x = b} &=& \frac {\frac 1 b - \frac 1 a} {b - a} \\ &=& \frac {a - b} {(b - a) a b} \\ &=& -\frac 1 {a b}. \end{array}$

Alternative proof: Suppose ${a}$ and ${b}$ are non-zero real numbers. Applying Theorem 3 to the functions ${f(x) = x}$ and ${g(x) = x^{-1}}$, whose product is the constant function ${1}$, gives ${\Delta 1 |_{x = a}^{x = b} = \Delta x |_{x = a}^{x = b} b^{-1} + a \Delta x^{-1} |_{x = a}^{x = b}}$. We also have ${\Delta 1 |_{x = a}^{x = b} = 0}$ by Theorem 6 and ${\Delta x |_{x = a}^{x = b} = 1}$ by Theorem 7, so

$\displaystyle 0 = \frac 1 b + a \left. \Delta \frac 1 x \right|_{x = a}^{x = b},$

which yields the proof upon rearrangement. $\Box$
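Numerically, the value derived in both proofs is ${-\frac 1 {a b}}$; note the minus sign. A quick check (`gradient` is my own helper):

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

a, b = 2.0, 4.0
lhs = gradient(lambda x: 1 / x, a, b)  # (0.25 - 0.5) / 2 = -0.125
assert lhs == -1 / (a * b)             # -1/8 = -0.125, note the minus sign
```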

The final three rules do not really simplify expression (1) very much, but they are interesting to compare with the corresponding ordinary differentiation rules.

Theorem 10 For every positive real number ${c}$ and every pair of real numbers ${a}$ and ${b}$,

$\displaystyle \Delta c^x |_{x = a}^{x = b} = c^a \Delta c^x |_{x = 0}^{x = b - a}.$

Proof: Suppose ${c}$ is a positive real number and ${a}$ and ${b}$ are real numbers. Then

$\begin{array}{rcl} \Delta c^x |_{x = a}^{x = b} &=& \frac {c^b - c^a} {b - a} \\ &=& \frac {c^a c^{b - a} - c^a} {b - a} \\ &=& \frac {c^a (c^{b - a} - 1)} {b - a} \\ &=& c^a \frac {c^{b - a} - 1} {b - a} \\ &=& c^a \Delta c^x |_{x = 0}^{x = b - a}. \end{array}$
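A numerical check of Theorem 10 with ${c = 2}$, chosen so the values are exact in floating point (`gradient` is my own helper):

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

c, a, b = 2.0, 1.0, 3.0
# Theorem 10: the gradient anywhere is c^a times the gradient from 0.
lhs = gradient(lambda x: c ** x, a, b)                # (8 - 2) / 2 = 3
rhs = c ** a * gradient(lambda x: c ** x, 0.0, b - a)  # 2 * (4 - 1) / 2 = 3
assert math.isclose(lhs, rhs)
```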

Theorem 11 For every pair of real numbers ${a}$ and ${b}$,

$\displaystyle \Delta \sin x |_{x = a}^{x = b} = \Delta \sin x |_{x = 0}^{x = b - a} \cos a + \Delta \cos x |_{x = 0}^{x = b - a} \sin a.$

Proof: Suppose ${a}$ and ${b}$ are real numbers. Then

$\begin{array}{rcl} \Delta \sin x |_{x = a}^{x = b} &=& \frac {\sin b - \sin a} {b - a} \\ &=& \frac {\sin ((b - a) + a) - \sin a} {b - a} \\ &=& \frac {\sin (b - a) \cos a + \sin a \cos (b - a) - \sin a} {b - a} \\ &=& \frac {\sin (b - a) \cos a + \sin a (\cos (b - a) - 1)} {b - a} \\ &=& \frac {\sin (b - a)} {b - a} \cos a + \frac {\cos (b - a) - 1} {b - a} \sin a \\ &=& \Delta \sin x |_{x = 0}^{x = b - a} \cos a + \Delta \cos x |_{x = 0}^{x = b - a} \sin a. \end{array}$

Theorem 12 For every pair of real numbers ${a}$ and ${b}$,

$\displaystyle \Delta \cos x |_{x = a}^{x = b} = \Delta \cos x |_{x = 0}^{x = b - a} \cos a - \Delta \sin x |_{x = 0}^{x = b - a} \sin a.$

Proof: Suppose ${a}$ and ${b}$ are real numbers. Then

$\begin{array}{rcl} \Delta \cos x |_{x = a}^{x = b} &=& \frac {\cos b - \cos a} {b - a} \\ &=& \frac {\cos ((b - a) + a) - \cos a} {b - a} \\ &=& \frac {\cos (b - a) \cos a - \sin (b - a) \sin a - \cos a} {b - a} \\ &=& \frac {(\cos (b - a) - 1) \cos a - \sin (b - a) \sin a} {b - a} \\ &=& \frac {\cos (b - a) - 1} {b - a} \cos a - \frac {\sin (b - a)} {b - a} \sin a \\ &=& \Delta \cos x |_{x = 0}^{x = b - a} \cos a - \Delta \sin x |_{x = 0}^{x = b - a} \sin a. \end{array}$
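Both trigonometric rules can be verified numerically. Note that the expansion ${\cos(u + v) = \cos u \cos v - \sin u \sin v}$ forces a minus sign between the two terms in the cosine rule, whereas the sine rule has a plus. A sketch (`gradient` is my own helper):

```python
import math

def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

a, b = 0.5, 2.0
d = b - a

# Theorem 11 (sine): plus sign between the two terms.
sin_lhs = gradient(math.sin, a, b)
sin_rhs = (gradient(math.sin, 0.0, d) * math.cos(a)
           + gradient(math.cos, 0.0, d) * math.sin(a))
assert math.isclose(sin_lhs, sin_rhs)

# Theorem 12 (cosine): minus sign, from cos(u+v) = cos u cos v - sin u sin v.
cos_lhs = gradient(math.cos, a, b)
cos_rhs = (gradient(math.cos, 0.0, d) * math.cos(a)
           - gradient(math.sin, 0.0, d) * math.sin(a))
assert math.isclose(cos_lhs, cos_rhs)
```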

Note the similarity of Theorem 10 to Theorems 11 and 12. The similarity has to do with the fact that exponential functions solve first-order differential equations while linear combinations of the sine and cosine functions solve second-order differential equations. I wrote a little about the relationship here in an earlier post.

That’s all I’m going to do for this post, but I still have questions to think about: how far can we go with this? Can we develop something analogous to the concept of Taylor series, for example? And can we do something similar with integration?

#### Footnotes

1. How do we know they are the only such functions? Well, suppose ${f}$ is a real-valued function on ${{\mathbb R}}$ and (1) always has the common value ${m}$, regardless of the values of ${a}$ and ${b}$. Then for every non-zero real number ${x}$, ${\frac {f(x) - f(0)} {x - 0} = m}$, i.e. ${\frac {f(x) - f(0)} x = m}$; rearranging shows that ${f(x) = m x + f(0)}$, which also holds trivially at ${x = 0}$. Since ${m}$ and ${f(0)}$ do not depend on ${x}$, ${f}$ is a linear polynomial.