Chain Rule: Further Examples

Example. Let MATH Let
MATH
Then
MATH

MATH

MATH
We may express these relations with the following matrix equation:
MATH

Example. Show that
MATH
satisfies the partial differential equation
MATH
where $F$ is any smooth function.

Proof. Let
MATH
with
MATH
Then
MATH

MATH

MATH
Therefore
MATH

Two-dimensional Laplacian

Let MATH Show that
MATH
Proof. From
MATH
we have
MATH

MATH

MATH

MATH

MATH

MATH

MATH

MATH

MATH

MATH
Therefore
MATH

MATH

MATH

MATH

MATH

MATH

Derivatives of functions defined implicitly

Some surfaces in 3-space are described by Cartesian equations of the form
$$F(x,y,z)=0.$$
An equation like this is said to provide an implicit representation of the surface. For example, the equation $x^2+y^2+z^2-1=0$ represents the surface of a unit sphere with center at the origin. Sometimes it is possible to solve the equation $F(x,y,z)=0$ for one of the variables in terms of the other two, say for $z$ in terms of $x$ and $y$. This leads to one or more equations of the form
$$z=f(x,y).$$
For the sphere we have two solutions,
$$z=\sqrt{1-x^2-y^2}\qquad\text{and}\qquad z=-\sqrt{1-x^2-y^2},$$
one representing the upper hemisphere, the other the lower hemisphere.

In the general case it may not be an easy matter to obtain an explicit formula for $z$ in terms of $x$ and $y$. For example, there is no easy method for solving for $z$ in the equation $y^2+xz+z^2-e^z=4$. Nevertheless, a judicious use of the chain rule makes it possible to deduce various properties of the partial derivatives $\partial z/\partial x$ and $\partial z/\partial y$ without an explicit knowledge of $f(x,y)$. The procedure is described in this section.

We assume that there is a function $f(x,y)$ such that
$$F[x,y,f(x,y)]=0$$
for all $(x,y)$ in some open set $S$, although we may not have explicit formulas for calculating $f(x,y)$. We describe this by saying that the equation $F(x,y,z)=0$ defines $z$ implicitly as a function of $x$ and $y$, and we write
$$z=f(x,y).$$

Now we introduce an auxiliary function $g$ defined on $S$ as follows:
$$g(x,y)=F[x,y,f(x,y)].$$
Equation $F[x,y,f(x,y)]=0$ states that $g(x,y)=0$ on $S$; hence the partial derivatives $\partial g/\partial x$ and $\partial g/\partial y$ are also $0$ on $S$. But we can also compute these partial derivatives by the chain rule. To do this we write
$$g(x,y)=F[u_1(x,y),u_2(x,y),u_3(x,y)],$$
where $u_1(x,y)=x$, $u_2(x,y)=y$, and $u_3(x,y)=f(x,y)$. The chain rule gives us the formulas
$$\frac{\partial g}{\partial x}=D_1F\,\frac{\partial u_1}{\partial x}+D_2F\,\frac{\partial u_2}{\partial x}+D_3F\,\frac{\partial u_3}{\partial x}$$
and
$$\frac{\partial g}{\partial y}=D_1F\,\frac{\partial u_1}{\partial y}+D_2F\,\frac{\partial u_2}{\partial y}+D_3F\,\frac{\partial u_3}{\partial y},$$
where each partial derivative $D_kF$ is to be evaluated at $(x,y,f(x,y)).$ Since we have
$$\frac{\partial u_1}{\partial x}=1,\qquad \frac{\partial u_2}{\partial x}=0,\qquad \frac{\partial u_3}{\partial x}=\frac{\partial f}{\partial x},$$
the first of the foregoing equations becomes
$$D_1F+D_3F\,\frac{\partial f}{\partial x}=0.$$
Solving this for $\partial f/\partial x$ we obtain
$$\frac{\partial f}{\partial x}=-\frac{D_1F}{D_3F}$$
at those points at which $D_3F\neq 0$. By a similar argument we obtain a corresponding formula for $\partial f/\partial y$:
$$\frac{\partial f}{\partial y}=-\frac{D_2F}{D_3F}$$
at those points at which $D_3F\neq 0$. These formulas are usually written more briefly as follows:
$$\frac{\partial z}{\partial x}=-\frac{\partial F/\partial x}{\partial F/\partial z},\qquad \frac{\partial z}{\partial y}=-\frac{\partial F/\partial y}{\partial F/\partial z}.$$
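For the unit sphere mentioned above we may take $F(x,y,z)=x^2+y^2+z^2-1$, and these formulas give
$$\frac{\partial z}{\partial x}=-\frac{2x}{2z}=-\frac{x}{z},\qquad \frac{\partial z}{\partial y}=-\frac{2y}{2z}=-\frac{y}{z}$$
at those points where $z\neq 0$; on the upper hemisphere, where $z=\sqrt{1-x^2-y^2}$, direct differentiation gives the same result.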

EXAMPLE. Assume that the equation $y^2+xz+z^2-e^z-c=0$ defines $z$ as a function of $x$ and $y$, say $z=f(x,y).$ Find a value of the constant $c$ such that $f(0,e)=2$, and compute the partial derivatives $\partial z/\partial x$ and $\partial z/\partial y$ at the point $(x,y)=(0,e).$

Solution. When $x=0$, $y=e$, and $z=2$, the equation becomes $e^2+4-e^2-c=0$, and this is satisfied by $c=4$. Let
$$F(x,y,z)=y^2+xz+z^2-e^z-4.$$
From
$$\frac{\partial F}{\partial x}=z,\qquad \frac{\partial F}{\partial y}=2y,$$
and
$$\frac{\partial F}{\partial z}=x+2z-e^z,$$
we have
$$\frac{\partial z}{\partial x}=-\frac{z}{x+2z-e^z},\qquad \frac{\partial z}{\partial y}=-\frac{2y}{x+2z-e^z}.$$
When $x=0$, $y=e$, and $z=2$ we find $\dfrac{\partial z}{\partial x}=\dfrac{2}{e^2-4}$ and $\dfrac{\partial z}{\partial y}=\dfrac{2e}{e^2-4}$. Note that we were able to compute the partial derivatives $\partial z/\partial x$ and $\partial z/\partial y$ using only the value of $f(x,y)$ at the single point $(0,e).$
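For instance, these two values already determine the linear (tangent-plane) approximation of $f$ near the point $(0,e)$, an application noted here only as an illustration:
$$f(x,y)\approx 2+\frac{2}{e^2-4}\,x+\frac{2e}{e^2-4}\,(y-e).$$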

The foregoing discussion can be extended to functions of more than two variables.

THEOREM. Let $F$ be a scalar field differentiable on an open set $T$ in $\mathbf{R}^n$. Assume that the equation
$$F(x_1,...,x_n)=0$$
defines $x_n$ implicitly as a differentiable function of $x_1,...,x_{n-1}$, say
$$x_n=f(x_1,...,x_{n-1}),$$
for all points $(x_1,...,x_{n-1})$ in some open set $S$ in $\mathbf{R}^{n-1}$. Then for each $k=1,2,...,n-1,$ the partial derivative $D_kf$ is given by the formula
$$D_kf=-\frac{D_kF}{D_nF}$$
at those points at which $D_nF\ne 0$. The partial derivatives $D_kF$ and $D_nF$ which appear in this equation are to be evaluated at the point $(x_1,...,x_{n-1},f(x_1,...,x_{n-1})).$
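For instance (with a function chosen here only to illustrate the theorem), take $n=3$ and $F(x_1,x_2,x_3)=x_1x_3+e^{x_3}-x_2$, so that $D_1F=x_3$, $D_2F=-1$, and $D_3F=x_1+e^{x_3}$. Wherever $D_3F\neq 0$ the theorem gives
$$D_1f=-\frac{x_3}{x_1+e^{x_3}},\qquad D_2f=\frac{1}{x_1+e^{x_3}},$$
with $x_3=f(x_1,x_2)$ substituted on the right.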

The discussion can be generalized in another way. Suppose we have two surfaces with the following implicit representations:
$$F(x,y,z)=0,\qquad G(x,y,z)=0.$$
If these surfaces intersect along a curve $C$, it may be possible to obtain a parametric representation of $C$ by solving the two equations simultaneously for two of the variables in terms of the third, say for $x$ and $y$ in terms of $z$. Let us suppose that it is possible to solve for $x$ and $y$ and that solutions are given by the equations
$$x=X(z),\qquad y=Y(z)$$
for all $z$ in some open interval $(a,b)$. Then when $x$ and $y$ are replaced by $X(z)$ and $Y(z)$, respectively, the two equations are satisfied identically. That is, we can write $F[X(z),Y(z),z]=0$ and $G[X(z),Y(z),z]=0$ for all $z$ in $(a,b).$ Again, by using the chain rule, we can compute the derivatives $X^{\prime }(z)$ and $Y^{\prime }(z)$ without an explicit knowledge of $X(z)$ and $Y(z)$. To do this we introduce new functions $f$ and $g$ by means of the equations
$$f(z)=F[X(z),Y(z),z],\qquad g(z)=G[X(z),Y(z),z].$$
Then $f(z)=g(z)=0$ for every $z$ in $(a,b)$ and hence the derivatives $f^{\prime }(z)$ and $g^{\prime }(z)$ are also zero on $(a,b)$. By the chain rule these derivatives are given by the formulas
$$f'(z)=\frac{\partial F}{\partial x}X'(z)+\frac{\partial F}{\partial y}Y'(z)+\frac{\partial F}{\partial z},\qquad g'(z)=\frac{\partial G}{\partial x}X'(z)+\frac{\partial G}{\partial y}Y'(z)+\frac{\partial G}{\partial z}.$$
Since $f^{\prime }(z)$ and $g^{\prime }(z)$ are both zero we can determine $X^{\prime }(z)$ and $Y^{\prime }(z)$ by solving the following pair of simultaneous linear equations:
$$\frac{\partial F}{\partial x}X'(z)+\frac{\partial F}{\partial y}Y'(z)=-\frac{\partial F}{\partial z},$$

$$\frac{\partial G}{\partial x}X'(z)+\frac{\partial G}{\partial y}Y'(z)=-\frac{\partial G}{\partial z}.$$
At those points at which the determinant of the system is not zero, these equations have a unique solution which can be expressed as follows, using Cramer's rule:
$$X'(z)=-\frac{\begin{vmatrix}\dfrac{\partial F}{\partial z} & \dfrac{\partial F}{\partial y}\\[1ex] \dfrac{\partial G}{\partial z} & \dfrac{\partial G}{\partial y}\end{vmatrix}}{\begin{vmatrix}\dfrac{\partial F}{\partial x} & \dfrac{\partial F}{\partial y}\\[1ex] \dfrac{\partial G}{\partial x} & \dfrac{\partial G}{\partial y}\end{vmatrix}},\qquad Y'(z)=-\frac{\begin{vmatrix}\dfrac{\partial F}{\partial x} & \dfrac{\partial F}{\partial z}\\[1ex] \dfrac{\partial G}{\partial x} & \dfrac{\partial G}{\partial z}\end{vmatrix}}{\begin{vmatrix}\dfrac{\partial F}{\partial x} & \dfrac{\partial F}{\partial y}\\[1ex] \dfrac{\partial G}{\partial x} & \dfrac{\partial G}{\partial y}\end{vmatrix}}.$$
The determinants which appear in these formulas are determinants of Jacobian matrices and are called Jacobian determinants. A special notation is often used to denote Jacobian determinants. We write
$$\frac{\partial(F,G)}{\partial(x,y)}=\begin{vmatrix}\dfrac{\partial F}{\partial x} & \dfrac{\partial F}{\partial y}\\[1ex] \dfrac{\partial G}{\partial x} & \dfrac{\partial G}{\partial y}\end{vmatrix}.$$
In this notation, the formulas for $X^{\prime }(z)$ and $Y^{\prime }(z)$ can be expressed more briefly in the form
$$X'(z)=\frac{\dfrac{\partial(F,G)}{\partial(y,z)}}{\dfrac{\partial(F,G)}{\partial(x,y)}},\qquad Y'(z)=\frac{\dfrac{\partial(F,G)}{\partial(z,x)}}{\dfrac{\partial(F,G)}{\partial(x,y)}}.$$
(The minus sign has been incorporated into the numerators by interchanging the columns.)
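As an illustration (the two surfaces here are chosen only to check the formulas above), let $F(x,y,z)=x^2+y^2-1$ and $G(x,y,z)=x+z$. Then
$$\frac{\partial(F,G)}{\partial(x,y)}=\begin{vmatrix}2x & 2y\\ 1 & 0\end{vmatrix}=-2y,\qquad \frac{\partial(F,G)}{\partial(y,z)}=\begin{vmatrix}2y & 0\\ 0 & 1\end{vmatrix}=2y,\qquad \frac{\partial(F,G)}{\partial(z,x)}=\begin{vmatrix}0 & 2x\\ 1 & 1\end{vmatrix}=-2x,$$
so $X'(z)=-1$ and $Y'(z)=x/y$ wherever $y\neq 0$. Solving the two equations directly gives $X(z)=-z$ and $Y(z)=\pm\sqrt{1-z^2}$, whose derivatives are $-1$ and $-z/Y(z)=x/y$, in agreement with the formulas.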

The method can be extended to treat more general situations in which $m$ equations in $n$ variables are given, where $n>m$ and we solve for $m$ of the variables in terms of the remaining $n-m$ variables. The partial derivatives of the new functions so defined can be expressed as quotients of Jacobian determinants.

Worked Examples

EXAMPLE 1. Assume that the equation $g(x,y)=0$ determines $y$ as a differentiable function of $x$, say $y=Y(x)$ for all $x$ in some open interval $(a,b)$. Express the derivative $Y^{\prime }(x)$ in terms of the partial derivatives of $g.$

Solution. Let $G(x)=g[x,Y(x)]$ for $x$ in $(a,b)$. Then the equation $g(x,y)=0$ implies $G(x)=0$ in $(a,b)$, and hence $G'(x)=0$ there. By the chain rule we have
$$G'(x)=D_1g[x,Y(x)]+D_2g[x,Y(x)]\,Y'(x),$$
from which we obtain
$$Y'(x)=-\frac{D_1g[x,Y(x)]}{D_2g[x,Y(x)]}$$
at those points $x$ in $(a,b)$ at which $D_2g[x,Y(x)]\neq 0$. The partial derivatives $D_1g$ and $D_2g$ are given by the formulas $D_1g=\partial g/\partial x$ and $D_2g=\partial g/\partial y$.
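For instance, with $g(x,y)=x^2+y^2-1$ (the unit circle, chosen here only for illustration) the formula gives
$$Y'(x)=-\frac{2x}{2Y(x)}=-\frac{x}{Y(x)},$$
which agrees with differentiating the explicit solution $Y(x)=\sqrt{1-x^2}$ on the upper semicircle.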

EXAMPLE 2. When $y$ is eliminated from the two equations $z=f(x,y)$ and $g(x,y)=0$, the result can be expressed in the form $z=h(x)$. Express the derivative $h^{\prime }(x)$ in terms of the partial derivatives of $f$ and $g$.

Solution. Let us assume that the equation $g(x,y)=0$ may be solved for $y$ in terms of $x$ and that a solution is given by $y=Y(x)$ for all $x$ in some open interval $(a,b)$. Then the function $h$ is given by the formula
$$h(x)=f[x,Y(x)].$$
Applying the chain rule we have
$$h'(x)=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\,Y'(x).$$
Using the equation of Example 1 we obtain the formula
$$h'(x)=\frac{\partial f}{\partial x}-\frac{\partial f}{\partial y}\,\frac{\partial g/\partial x}{\partial g/\partial y}=\frac{\dfrac{\partial f}{\partial x}\dfrac{\partial g}{\partial y}-\dfrac{\partial f}{\partial y}\dfrac{\partial g}{\partial x}}{\dfrac{\partial g}{\partial y}}.$$
The partial derivatives on the right are to be evaluated at the point $(x,Y(x))$. Note that the numerator can also be expressed as a Jacobian determinant, giving us
$$h'(x)=\frac{\dfrac{\partial(f,g)}{\partial(x,y)}}{\dfrac{\partial g}{\partial y}}.$$
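To check this result we may take, purely as a test case, $f(x,y)=x^2+y^2$ and $g(x,y)=x+y-1$. Here $Y(x)=1-x$, so $h(x)=x^2+(1-x)^2$ and $h'(x)=4x-2$ by direct computation, while the formula gives
$$h'(x)=\frac{(2x)(1)-(2y)(1)}{1}=2x-2Y(x)=4x-2,$$
as it should.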

EXAMPLE 3. The two equations $2x=v^2-u^2$ and $y=uv$ define $u$ and $v$ as functions of $x$ and $y$. Find formulas for $\partial u/\partial x$, $\partial u/\partial y$, $\partial v/\partial x$, and $\partial v/\partial y$.

Solution. If we hold $y$ fixed and differentiate the two equations in question with respect to $x$, remembering that $u$ and $v$ are functions of $x$ and $y$, we obtain
$$2=2v\frac{\partial v}{\partial x}-2u\frac{\partial u}{\partial x},\qquad 0=u\frac{\partial v}{\partial x}+v\frac{\partial u}{\partial x}.$$
Solving these simultaneously for $\partial u/\partial x$ and $\partial v/\partial x$ we find
$$\frac{\partial u}{\partial x}=-\frac{u}{u^2+v^2},\qquad \frac{\partial v}{\partial x}=\frac{v}{u^2+v^2}.$$
On the other hand, if we hold $x$ fixed and differentiate the two given equations with respect to $y$ we obtain the equations
$$0=2v\frac{\partial v}{\partial y}-2u\frac{\partial u}{\partial y},\qquad 1=u\frac{\partial v}{\partial y}+v\frac{\partial u}{\partial y}.$$
Solving these simultaneously we find
$$\frac{\partial u}{\partial y}=\frac{v}{u^2+v^2},\qquad \frac{\partial v}{\partial y}=\frac{u}{u^2+v^2}.$$
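These four derivatives may also be checked by noting that the matrix they form must be the inverse of the Jacobian matrix of $(x,y)$ with respect to $(u,v)$. From $x=\frac12(v^2-u^2)$ and $y=uv$ we have
$$\begin{pmatrix}\dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v}\\[1ex] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v}\end{pmatrix}=\begin{pmatrix}-u & v\\ v & u\end{pmatrix},\qquad \begin{pmatrix}-u & v\\ v & u\end{pmatrix}^{-1}=\frac{1}{u^2+v^2}\begin{pmatrix}-u & v\\ v & u\end{pmatrix},$$
and the entries of this inverse are precisely the values of $\partial u/\partial x$, $\partial u/\partial y$, $\partial v/\partial x$, and $\partial v/\partial y$ found above.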

EXAMPLE 4. Let $u$ be defined as a function of $x$ and $y$ by means of the equation
$$F(x+u,\,yu)=0.$$
Find $\partial u/\partial x$ and $\partial u/\partial y$ in terms of the partial derivatives of $F$.

Solution. Suppose that $u=g(x,y)$ for all $(x,y)$ in some open set $S$. Substituting $g(x,y)$ for $u$ in the original equation we must have
$$F[u_1(x,y),\,u_2(x,y)]=0,$$
where $u_1(x,y)=x+g(x,y)$ and $u_2(x,y)=yg(x,y).$ Now we hold $y$ fixed and differentiate both sides with respect to $x$, using the chain rule on the right, to obtain
$$D_1F\,\frac{\partial u_1}{\partial x}+D_2F\,\frac{\partial u_2}{\partial x}=0.$$
But $\dfrac{\partial u_1}{\partial x}=1+\dfrac{\partial g}{\partial x}$ and $\dfrac{\partial u_2}{\partial x}=y\,\dfrac{\partial g}{\partial x}.$ Hence the equation becomes
$$D_1F\left(1+\frac{\partial g}{\partial x}\right)+D_2F\,y\,\frac{\partial g}{\partial x}=0.$$
Solving this equation for $\partial g/\partial x$ (and writing $\partial u/\partial x$ for $\partial g/\partial x$) we obtain
$$\frac{\partial u}{\partial x}=\frac{-D_1F}{D_1F+y\,D_2F}.$$
In a similar way we find
$$D_1F\,\frac{\partial g}{\partial y}+D_2F\left(g(x,y)+y\,\frac{\partial g}{\partial y}\right)=0.$$
This leads to the equation
$$\frac{\partial u}{\partial y}=\frac{-u\,D_2F}{D_1F+y\,D_2F}.$$
The partial derivatives $D_1F$ and $D_2F$ are to be evaluated at the point $(x+u,\,yu).$
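To check these formulas we may choose, only for illustration, $F(s,t)=s+t-1$. The equation $x+u+yu=1$ can then be solved explicitly, giving $u=(1-x)/(1+y)$, and since $D_1F=D_2F=1$ the formulas yield
$$\frac{\partial u}{\partial x}=\frac{-1}{1+y},\qquad \frac{\partial u}{\partial y}=\frac{-u}{1+y},$$
in agreement with differentiating the explicit solution.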

EXAMPLE 5. When $u$ is eliminated from the two equations $x=u+v$ and $y=uv^2$, we get an equation of the form $F(x,y,v)=0$ which defines $v$ implicitly as a function of $x$ and $y$, say $v=h(x,y).$ Prove that
$$\frac{\partial v}{\partial x}=\frac{v}{3v-2x}$$
and find a similar formula for $\partial v/\partial y.$

Solution. Eliminating $u$ from the two given equations, we obtain the relation
$$y=(x-v)\,v^2,\qquad\text{that is,}\qquad xv^2-v^3-y=0.$$
Let $F$ be the function defined by the equation
$$F(x,y,v)=xv^2-v^3-y.$$
The discussion in the previous section is now applicable and we can write
$$\frac{\partial v}{\partial x}=-\frac{\partial F/\partial x}{\partial F/\partial v},\qquad \frac{\partial v}{\partial y}=-\frac{\partial F/\partial y}{\partial F/\partial v}.$$
But
$$\frac{\partial F}{\partial x}=v^2,\qquad \frac{\partial F}{\partial y}=-1,$$
and
$$\frac{\partial F}{\partial v}=2xv-3v^2.$$
Hence the equations above become
$$\frac{\partial v}{\partial x}=-\frac{v^2}{2xv-3v^2}=\frac{v}{3v-2x}$$
and
$$\frac{\partial v}{\partial y}=\frac{1}{2xv-3v^2}=\frac{1}{v(2x-3v)}.$$
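As a check, differentiating the relation $xv^2-v^3=y$ with respect to $x$ while holding $y$ fixed gives
$$v^2+2xv\,\frac{\partial v}{\partial x}-3v^2\,\frac{\partial v}{\partial x}=0,$$
which leads to the same value $\partial v/\partial x=v/(3v-2x).$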

EXAMPLE 6. The equation $F(x,y,z)=0$ defines $z$ implicitly as a function of $x$ and $y$, say $z=f(x,y)$. Assuming that $D_3F\neq 0$, show that
$$\frac{\partial^2 z}{\partial x^2}=-\frac{D_{1,1}F\,(D_3F)^2-2\,D_{1,3}F\,D_1F\,D_3F+D_{3,3}F\,(D_1F)^2}{(D_3F)^3},$$
where the partial derivatives on the right are to be evaluated at $(x,y,f(x,y)).$

Solution. We have
$$\frac{\partial z}{\partial x}=-\frac{D_1F}{D_3F}.$$
We must remember that this quotient really means
$$\frac{D_1F(x,y,f(x,y))}{D_3F(x,y,f(x,y))}.$$
Let us introduce $G(x,y)=D_1F(x,y,f(x,y))$ and $H(x,y)=D_3F(x,y,f(x,y))$. Our object is to evaluate the partial derivative with respect to $x$ of the quotient
$$\frac{\partial z}{\partial x}=-\frac{G(x,y)}{H(x,y)},$$
holding $y$ fixed. The rule for differentiating quotients gives us
$$\frac{\partial^2 z}{\partial x^2}=-\frac{\partial}{\partial x}\left(\frac{G}{H}\right)=-\frac{H\,\dfrac{\partial G}{\partial x}-G\,\dfrac{\partial H}{\partial x}}{H^2}.$$
Since $G$ and $H$ are composite functions, we use the chain rule to compute the partial derivatives $\partial G/\partial x$ and $\partial H/\partial x$. For $\partial G/\partial x$ we have
$$\frac{\partial G}{\partial x}=D_{1,1}F+D_{1,3}F\,\frac{\partial f}{\partial x}$$

$$=D_{1,1}F-D_{1,3}F\,\frac{D_1F}{D_3F}.$$
Similarly, we find
$$\frac{\partial H}{\partial x}=D_{1,3}F+D_{3,3}F\,\frac{\partial f}{\partial x}$$

$$=D_{1,3}F-D_{3,3}F\,\frac{D_1F}{D_3F}.$$
Substituting these into
$$\frac{\partial^2 z}{\partial x^2}=-\frac{H\,\dfrac{\partial G}{\partial x}-G\,\dfrac{\partial H}{\partial x}}{H^2}$$
and replacing $\partial f/\partial x$ by $-D_1F/D_3F$,
we obtain the desired formula.
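As a check (with a surface chosen here only for illustration), take $F(x,y,z)=x^2+y^2+z^2-1$. Then $D_1F=2x$, $D_3F=2z$, $D_{1,1}F=D_{3,3}F=2$, and $D_{1,3}F=0$, so the formula gives
$$\frac{\partial^2 z}{\partial x^2}=-\frac{2\,(2z)^2+2\,(2x)^2}{(2z)^3}=-\frac{x^2+z^2}{z^3},$$
which agrees with differentiating $\partial z/\partial x=-x/z$ once more with respect to $x$, remembering that $z=f(x,y)$.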
