Linear polynomials, i.e. functions of the form $f(x) = mx + c$, where $m$ and $c$ are real numbers, are the only^{1} real-valued functions on $\mathbb{R}$ with the property that

$$\frac{f(b) - f(a)}{b - a} \qquad (1)$$

has a common value (namely, $m$) for every pair of distinct real numbers $a$ and $b$. $m$ can be thought of as measuring how “steep” the graph of $f$ is and is called the **gradient** of $f$. When we look at functions other than linear polynomials, our intuition is that the graph of $f$ still has a “steepness”, but the steepness now varies from point to point. We call the particular “steepness” at a given point $a$ the **derivative** of $f$ at $a$. It turns out to be rather difficult to define derivatives in a proper manner, and the process of trying to make this definition precise motivates the development of the concept of limits of functions. But we’re not going to go into this in this post. Instead, I just wanted to point out that there is a way of avoiding having to do all this. We can just calculate “steepness” using (1). The price is that “steepness” then depends on two variables rather than just one. From now on I will use the more mathematical term **gradient** rather than “steepness”, although **derivative** would also be just as good (either way, I am extending these terms from their standard definitions). In case it’s not clear what I mean, here’s a formal definition of the term.

**Definition 1.** For every real-valued function $f$ and every pair of distinct real numbers $a$ and $b$ at which $f$ is defined, the **gradient** of $f$ from $a$ to $b$, denoted $\nabla f(a, b)$, is defined by the formula

$$\nabla f(a, b) = \frac{f(b) - f(a)}{b - a}.$$
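The definition translates directly into code. Here is a quick illustrative sketch (not from the original post; the name `gradient` is my own choice), checking that a linear polynomial has the same gradient for every pair of inputs:

```python
def gradient(f, a, b):
    """The gradient of f from a to b, per Definition 1 (requires a != b)."""
    return (f(b) - f(a)) / (b - a)

# For a linear polynomial f(x) = m*x + c, the gradient is m for every pair a, b.
f = lambda x: 3 * x + 2
print(gradient(f, 0.0, 1.0))    # 3.0
print(gradient(f, -5.0, 17.0))  # 3.0
```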

You can develop a version of the theory of differentiation which is based on this definition. In particular, there are nice analogues of all the differentiation rules. The formulae are generally more complicated, but also somewhat easier to derive, and they can all be used to immediately derive the ordinary differentiation rules by taking the limit as $b$ approaches $a$. They can also be used to derive the differentiation rules in the discrete calculus.

First, here are some rules dealing with functions in general.

**Theorem 2.** For every pair of real-valued functions $f$ and $g$ and every pair of distinct real numbers $a$ and $b$ at which $f$ and $g$ are defined,

$$\nabla(f + g)(a, b) = \nabla f(a, b) + \nabla g(a, b).$$

*Proof:* Suppose $f$ and $g$ are real-valued functions and $a$ and $b$ are distinct real numbers at which $f$ and $g$ are defined. Then

$$\nabla(f + g)(a, b) = \frac{(f(b) + g(b)) - (f(a) + g(a))}{b - a} = \frac{f(b) - f(a)}{b - a} + \frac{g(b) - g(a)}{b - a} = \nabla f(a, b) + \nabla g(a, b). \qquad \blacksquare$$
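A quick numerical sanity check of the sum rule (my own illustration, assuming a `gradient` helper implementing Definition 1):

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x * x
g = lambda x: 2 * x + 1
a, b = 1.5, 4.0
lhs = gradient(lambda x: f(x) + g(x), a, b)  # gradient of f + g
rhs = gradient(f, a, b) + gradient(g, a, b)  # sum of the two gradients
print(abs(lhs - rhs) < 1e-12)  # True
```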

**Theorem 3.** For every pair of real-valued functions $f$ and $g$ and every pair of distinct real numbers $a$ and $b$ at which $f$ and $g$ are defined,

$$\nabla(fg)(a, b) = g(b)\,\nabla f(a, b) + f(a)\,\nabla g(a, b).$$

*Proof:* Suppose $f$ and $g$ are real-valued functions and $a$ and $b$ are distinct real numbers at which $f$ and $g$ are defined. Then

$$\nabla(fg)(a, b) = \frac{f(b)g(b) - f(a)g(a)}{b - a} = \frac{(f(b) - f(a))g(b) + f(a)(g(b) - g(a))}{b - a} = g(b)\,\nabla f(a, b) + f(a)\,\nabla g(a, b). \qquad \blacksquare$$
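The product rule can be checked numerically in the same way (again my own illustration):

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x * x
g = lambda x: x + 3
a, b = 2.0, 5.0
lhs = gradient(lambda x: f(x) * g(x), a, b)                # gradient of f * g
rhs = g(b) * gradient(f, a, b) + f(a) * gradient(g, a, b)  # Theorem 3's right-hand side
print(abs(lhs - rhs) < 1e-12)  # True
```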

**Theorem 4.** For every pair of real-valued functions $f$ and $g$ and every pair of distinct real numbers $a$ and $b$ at which $f$ is defined such that $g$ is defined at $f(a)$ and $f(b)$ and $f(a) \ne f(b)$,

$$\nabla(g \circ f)(a, b) = \nabla g(f(a), f(b)) \cdot \nabla f(a, b).$$

*Proof:* Suppose $f$ and $g$ are real-valued functions, $a$ and $b$ are distinct real numbers at which $f$ is defined and $g$ is defined at $f(a)$ and $f(b)$, with $f(a) \ne f(b)$. Then

$$\nabla(g \circ f)(a, b) = \frac{g(f(b)) - g(f(a))}{b - a} = \frac{g(f(b)) - g(f(a))}{f(b) - f(a)} \cdot \frac{f(b) - f(a)}{b - a} = \nabla g(f(a), f(b)) \cdot \nabla f(a, b). \qquad \blacksquare$$
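A numerical check of the chain rule analogue (my own illustration; note the inputs are chosen so that $f(a) \ne f(b)$, as the theorem requires):

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x * x       # inner function
g = lambda y: y * y * y   # outer function
a, b = 1.0, 2.0           # f(a) = 1 and f(b) = 4 are distinct, as required
lhs = gradient(lambda x: g(f(x)), a, b)          # gradient of g o f
rhs = gradient(g, f(a), f(b)) * gradient(f, a, b)
print(abs(lhs - rhs) < 1e-9)  # True
```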

**Theorem 5.** For every invertible real-valued function $f$ and every pair of distinct real numbers $a$ and $b$ at which $f^{-1}$ is defined,

$$\nabla f^{-1}(a, b) = \frac{1}{\nabla f(f^{-1}(a), f^{-1}(b))}.$$

*Proof:* Suppose $f$ is an invertible real-valued function and $a$ and $b$ are distinct real numbers at which $f^{-1}$ is defined. Since $f \circ f^{-1} = \mathrm{id}$ and $f^{-1}(a) \ne f^{-1}(b)$, we have $\nabla(f \circ f^{-1})(a, b) = \nabla f(f^{-1}(a), f^{-1}(b)) \cdot \nabla f^{-1}(a, b)$ by Theorem 4. We also have $\nabla(f \circ f^{-1})(a, b) = \nabla\,\mathrm{id}(a, b) = 1$ by Theorem 7, so

$$1 = \nabla f(f^{-1}(a), f^{-1}(b)) \cdot \nabla f^{-1}(a, b),$$

which yields the proof upon rearrangement. $\blacksquare$
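The inverse rule can also be checked numerically. A sketch of my own, using the cube and cube-root functions on the positive reals:

```python
def gradient(f, a, b):
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 3            # invertible on the positive reals
f_inv = lambda y: y ** (1 / 3)  # its inverse there
a, b = 1.0, 8.0
lhs = gradient(f_inv, a, b)
rhs = 1 / gradient(f, f_inv(a), f_inv(b))
print(abs(lhs - rhs) < 1e-9)  # True
```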

Now, here are some rules for specific functions.

**Theorem 6.** For every triple of real numbers $c$, $a$ and $b$ with $a \ne b$, writing $c$ also for the constant function with value $c$,

$$\nabla c(a, b) = 0.$$

*Proof:* Suppose $c$, $a$ and $b$ are real numbers with $a \ne b$. Then

$$\nabla c(a, b) = \frac{c - c}{b - a} = 0. \qquad \blacksquare$$

**Theorem 7.** For every positive integer $n$ and every pair of distinct real numbers $a$ and $b$,

$$\nabla(x^n)(a, b) = \sum_{k=0}^{n-1} a^{n-1-k} b^k.$$

(Here $x^n$ denotes the function $x \mapsto x^n$.)

*Proof:* Suppose $n$ is a positive integer and $a$ and $b$ are distinct real numbers. Recall the geometric series formula:

$$\sum_{k=0}^{m-1} r^k = \frac{r^m - 1}{r - 1},$$

where $m$ is a non-negative integer and $r$ is a real number not equal to $1$. By substituting $m = n$ and $r = b/a$ into this formula and multiplying both sides by $a^{n-1}$ it follows that

$$\sum_{k=0}^{n-1} a^{n-1-k} b^k = \frac{b^n - a^n}{b - a},$$

so

$$\nabla(x^n)(a, b) = \frac{b^n - a^n}{b - a} = \sum_{k=0}^{n-1} a^{n-1-k} b^k.$$

(The substitution requires $a \ne 0$; the case $a = 0$ can be checked directly, since $\nabla(x^n)(0, b) = b^{n-1}$.) $\blacksquare$
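A quick numerical check of Theorem 7 (my own illustration), comparing the rational-function form of the gradient with the polynomial form:

```python
n = 5
a, b = 2.0, 3.0
lhs = (b ** n - a ** n) / (b - a)                       # gradient of x^n from a to b
rhs = sum(a ** (n - 1 - k) * b ** k for k in range(n))  # the polynomial form
print(abs(lhs - rhs) < 1e-9)  # True
```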

**Theorem 8.** For every positive integer $n$ and every pair of distinct real numbers $a$ and $b$ which are positive if $n$ is even,

$$\nabla(x^{1/n})(a, b) = \frac{1}{\displaystyle\sum_{k=0}^{n-1} a^{(n-1-k)/n} b^{k/n}}.$$

*Proof:* Suppose $n$ is a positive integer and $a$ and $b$ are distinct real numbers which are positive if $n$ is even. Recall the geometric series formula:

$$\sum_{k=0}^{m-1} r^k = \frac{r^m - 1}{r - 1},$$

where $m$ is a non-negative integer and $r$ is a real number not equal to $1$. By substituting $m = n$ and $r = b^{1/n}/a^{1/n}$ into this formula and multiplying both sides by $a^{(n-1)/n}$ it follows that

$$\sum_{k=0}^{n-1} a^{(n-1-k)/n} b^{k/n} = \frac{b - a}{b^{1/n} - a^{1/n}},$$

so

$$\nabla(x^{1/n})(a, b) = \frac{b^{1/n} - a^{1/n}}{b - a} = \frac{1}{\displaystyle\sum_{k=0}^{n-1} a^{(n-1-k)/n} b^{k/n}}.$$

(As in Theorem 7, the substitution requires $a \ne 0$; the case $a = 0$ can be checked directly.) $\blacksquare$
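A numerical check of Theorem 8 for the cube root (my own illustration):

```python
n = 3
a, b = 1.0, 8.0
lhs = (b ** (1 / n) - a ** (1 / n)) / (b - a)  # gradient of x^(1/n) from a to b
rhs = 1 / sum(a ** ((n - 1 - k) / n) * b ** (k / n) for k in range(n))
print(abs(lhs - rhs) < 1e-9)  # True
```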

Theorems 7 and 8 might not seem obviously useful, but they do allow a rational function to be expressed as a polynomial: the rational function $(b^n - a^n)/(b - a)$ of $a$ and $b$ is equal to the polynomial $\sum_{k=0}^{n-1} a^{n-1-k} b^k$ (it’s just that the rational function is quite simple while the polynomial is quite complex).

**Theorem 9.** For every pair of distinct non-zero real numbers $a$ and $b$,

$$\nabla(1/x)(a, b) = -\frac{1}{ab}.$$

*Proof:* Suppose $a$ and $b$ are distinct non-zero real numbers. Then

$$\nabla(1/x)(a, b) = \frac{1/b - 1/a}{b - a} = \frac{(a - b)/(ab)}{b - a} = -\frac{1}{ab}. \qquad \blacksquare$$

*Alternative proof:* Suppose $a$ and $b$ are distinct non-zero real numbers. Since $x \cdot (1/x) = 1$ and $1/x$ is defined at $a$ and $b$, we have $\nabla(x \cdot (1/x))(a, b) = (1/b)\,\nabla x(a, b) + a\,\nabla(1/x)(a, b)$ by Theorem 3. We also have $\nabla 1(a, b) = 0$ by Theorem 6 and $\nabla x(a, b) = 1$ by Theorem 7, so

$$0 = \frac{1}{b} + a\,\nabla(1/x)(a, b),$$

which yields the proof upon rearrangement. $\blacksquare$
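A numerical check of Theorem 9 (my own illustration):

```python
a, b = 2.0, 5.0
lhs = (1 / b - 1 / a) / (b - a)  # gradient of 1/x from a to b
rhs = -1 / (a * b)
print(abs(lhs - rhs) < 1e-12)  # True
```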

The last three rules do not really simplify expression (1) very much, but they are interesting to see when you compare them with the regular differentiation rules.

**Theorem 10.** For every positive real number $c$ and every pair of distinct real numbers $a$ and $b$,

$$\nabla(c^x)(a, b) = c^{(a+b)/2} \cdot \frac{c^{(b-a)/2} - c^{-(b-a)/2}}{b - a}.$$

*Proof:* Suppose $c$ is a positive real number and $a$ and $b$ are distinct real numbers. Then

$$\nabla(c^x)(a, b) = \frac{c^b - c^a}{b - a} = \frac{c^{(a+b)/2}\left(c^{(b-a)/2} - c^{-(b-a)/2}\right)}{b - a}. \qquad \blacksquare$$
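A numerical check of the exponential rule (my own illustration):

```python
c = 2.0
a, b = 1.0, 3.0
lhs = (c ** b - c ** a) / (b - a)  # gradient of c^x from a to b
mid, h = (a + b) / 2, (b - a) / 2
rhs = c ** mid * (c ** h - c ** (-h)) / (b - a)
print(abs(lhs - rhs) < 1e-9)  # True
```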

**Theorem 11.** For every pair of distinct real numbers $a$ and $b$,

$$\nabla \sin(a, b) = \cos\!\left(\frac{a + b}{2}\right) \cdot \frac{2 \sin\!\left(\frac{b - a}{2}\right)}{b - a}.$$

*Proof:* Suppose $a$ and $b$ are distinct real numbers. By the sum-to-product identity $\sin b - \sin a = 2 \cos\!\left(\frac{a + b}{2}\right) \sin\!\left(\frac{b - a}{2}\right)$, we have

$$\nabla \sin(a, b) = \frac{\sin b - \sin a}{b - a} = \cos\!\left(\frac{a + b}{2}\right) \cdot \frac{2 \sin\!\left(\frac{b - a}{2}\right)}{b - a}. \qquad \blacksquare$$

**Theorem 12.** For every pair of distinct real numbers $a$ and $b$,

$$\nabla \cos(a, b) = -\sin\!\left(\frac{a + b}{2}\right) \cdot \frac{2 \sin\!\left(\frac{b - a}{2}\right)}{b - a}.$$

*Proof:* Suppose $a$ and $b$ are distinct real numbers. By the sum-to-product identity $\cos b - \cos a = -2 \sin\!\left(\frac{a + b}{2}\right) \sin\!\left(\frac{b - a}{2}\right)$, we have

$$\nabla \cos(a, b) = \frac{\cos b - \cos a}{b - a} = -\sin\!\left(\frac{a + b}{2}\right) \cdot \frac{2 \sin\!\left(\frac{b - a}{2}\right)}{b - a}. \qquad \blacksquare$$
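The sine and cosine rules can be checked numerically together (my own illustration, using the fact that $2\sin(h)/(b - a) = \sin(h)/h$ when $h = (b - a)/2$):

```python
import math

a, b = 0.3, 1.1
mid, h = (a + b) / 2, (b - a) / 2
# Gradient of sin versus the midpoint formula
lhs_sin = (math.sin(b) - math.sin(a)) / (b - a)
rhs_sin = math.cos(mid) * math.sin(h) / h
# Gradient of cos versus the midpoint formula
lhs_cos = (math.cos(b) - math.cos(a)) / (b - a)
rhs_cos = -math.sin(mid) * math.sin(h) / h
print(abs(lhs_sin - rhs_sin) < 1e-12 and abs(lhs_cos - rhs_cos) < 1e-12)  # True
```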

Note the similarity of Theorem 10 with Theorems 11 and 12: in each case the gradient factors as a function of the midpoint $(a + b)/2$ times a function of the separation $b - a$. The similarity has to do with the fact that exponential functions solve first-order differential equations and linear combinations of the sine and cosine functions solve second-order differential equations. I wrote a little about the relationship in an earlier post.

That’s all I’m going to do for this post, but I still have questions to think about: how far can we go with this? Can we develop something analogous to the concept of Taylor series, for example? And can we do something similar with integration?

#### Footnotes

- How do we know they are the only such functions? Well, suppose $f$ is a real-valued function on $\mathbb{R}$ and (1) always has the common value $m$, regardless of the values of $a$ and $b$. Then for every non-zero real number $x$, $\nabla f(0, x) = m$, i.e. $\frac{f(x) - f(0)}{x} = m$; rearranging shows that $f(x) = mx + f(0)$ (which also holds trivially when $x = 0$), and, therefore, since the value of $f(0)$ does not depend on $x$, $f$ is a linear polynomial.