# Complete Proof


**What's the proof for the chain rule in the partial form df = fx dx + fy dy + fz dz?**

*Let's say f is a scalar function of three variables (x,y,z), and fx, fy, and fz are its partial derivatives. I would like to see the proof that df = fx dx + fy dy + fz dz.*

the formula

df = fx dx + fy dy + fz dz

is the definition of the total differential of f. there's nothing to prove; it's a definition.
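even though it's a definition, you can check numerically that the total differential really does capture the first-order change in f. a minimal python sketch, using a sample function and sample increments of my own choosing:

```python
# Check that fx*dx + fy*dy + fz*dz approximates the actual change in f
# for small increments. The function f here is just a sample choice.
def f(x, y, z):
    return x**2 + x*y + z**3

x, y, z = 1.0, 2.0, 0.5
fx = 2*x + y          # partial derivatives of this particular f
fy = x
fz = 3*z**2

dx, dy, dz = 1e-4, -2e-4, 3e-4
actual = f(x + dx, y + dy, z + dz) - f(x, y, z)
approx = fx*dx + fy*dy + fz*dz
print(actual, approx)   # agree to first order; the error is quadratic in the increments
```

shrinking the increments by a factor of 10 shrinks the discrepancy by roughly a factor of 100, which is exactly the "first-order approximation" behavior the definition encodes.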

this definition is motivated by the chain rule for higher dimensions:

df/dt = (fx)(dx/dt) + (fy)(dy/dt) + (fz)(dz/dt)

actually, this represents somewhat of an abuse of notation, because what is usually meant by this is that f is a real-valued function of 3 variables, each of which is parametrized by t. so we actually have a composition:

(fop)(t), where p(t) = (x(t),y(t),z(t)).

so it really shouldn't be df/dt, but rather, d/dt(fop).

now p'(t) = (x'(t),y'(t),z'(t)) = (dx/dt,dy/dt,dz/dt). while the derivative of f is

∇f = (fx,fy,fz), and in this context, the chain rule is:

(fop)'(t) = ∇f(p(t))•p'(t), where we have a dot product (actually, this is a special case of a matrix product).

the proof of this isn't terribly hard, but requires a lot of definitions to be made regarding derivatives of functions of more than one variable, and space prevents me from giving a complete proof here.

a "semi-intuitive proof" is the following: f:R^3--->R can be approximated (at a given point) by a linear function L, where L(x,y,z) = ax + by + cz + d. and the parametrization p can also be approximated by some line: p(t) = (x0 + tu, y0 + tv, z0 + tw).

now (Lop)(t) = a(x0 + tu) + b(y0 + tv) + c(z0 + tw) + d

= (au + bv + cw)t + ax0 + by0 + cz0 + d

computing (Lop)'(t) directly, we have:

(Lop)'(t) = au + bv + cw

while ∇L is the constant vector (a,b,c), and p'(t) = (u,v,w), so

(Lop)'(t) = (a,b,c)•(u,v,w).
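this identity is easy to verify directly. a quick python sketch with sample constants of my own choosing; since Lop is affine in t, its slope can be read off exactly from two sample points:

```python
# Verify (Lop)'(t) = (a,b,c)•(u,v,w) for a sample linear map L and line p.
a, b, c, d = 2.0, -1.0, 3.0, 5.0      # sample coefficients of L (my choice)
x0, y0, z0 = 1.0, 4.0, -2.0           # sample base point of the line p
u, v, w = 7.0, 0.0, 6.0               # sample direction vector of p

def L(x, y, z):
    return a*x + b*y + c*z + d

def p(t):
    return (x0 + t*u, y0 + t*v, z0 + t*w)

# Lop is affine in t, so its derivative is its exact slope between any two points.
slope = L(*p(1.0)) - L(*p(0.0))
dot = a*u + b*v + c*w                 # the dot product (a,b,c)•(u,v,w)
print(slope, dot)                     # both are 32.0 for these constants
```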

now the derivative of a function g:R^n-->R^m can be defined as the unique linear transformation λ such that:

lim h-->0 |g(x+h) - g(x) - λ(h)|/|h| = 0

in other words, g(x) + λ(h) is the "best linear approximation to g" near x. so if λ is the derivative of g, and μ is the derivative of h, then it makes sense for the derivative of goh to be the matrix product λμ, and when g and h are themselves linear (so that g = λ and h = μ), this is certainly the case (we just showed that for L and p above).
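the limit definition can be illustrated numerically: with λ(h) = ∇g(x)•h, the ratio |g(x+h) - g(x) - λ(h)|/|h| shrinks as h does. a sketch with a sample g of my own choosing:

```python
# Illustrate the limit definition of the derivative for g(x,y) = x*y^2
# (a sample choice), with the candidate linear map λ(h) = ∇g(x)•h.
def g(x, y):
    return x * y**2

x0, y0 = 1.0, 2.0
grad = (y0**2, 2*x0*y0)          # ∇g = (y^2, 2xy) evaluated at (1, 2)

ux, uy = 0.6, 0.8                # a fixed unit direction, so |h| = eps below
ratios = []
for eps in (1e-1, 1e-2, 1e-3):
    hx, hy = eps*ux, eps*uy
    num = abs(g(x0 + hx, y0 + hy) - g(x0, y0) - (grad[0]*hx + grad[1]*hy))
    ratios.append(num / eps)
    print(eps, ratios[-1])       # the ratio shrinks roughly in proportion to eps
```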

it is sometimes easier to see this is true with an example:

let f(x,y,z) = 2x + xy^2 + 3z

let p(t) = (x,y,z) = (t, t^2, t^3)

∇f(x,y,z) = (2 + y^2, 2xy,3) and p'(t) = (1, 2t,3t^2)

so ∇f(p(t)) = (2 + t^4, 2t^3, 3), and ∇f(p(t))•(1,2t,3t^2)

= 2 + t^4 + (2t^3)(2t) + 3(3t^2) = 2 + 9t^2 + 5t^4

on the other hand, f(p(t)) = 2t + t^5 + 3t^3, which has the derivative:

(fop)'(t) = 2 + 9t^2 + 5t^4.
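the agreement in this example can also be checked numerically. a short python sketch comparing a central-difference derivative of fop against the gradient/dot-product formula and the closed form derived above:

```python
# Compare (fop)'(t) computed three ways for f(x,y,z) = 2x + xy^2 + 3z
# and p(t) = (t, t^2, t^3), at an arbitrary sample point t = 1.3.
def f(x, y, z):
    return 2*x + x*y**2 + 3*z

def p(t):
    return (t, t**2, t**3)

def grad_f(x, y, z):
    return (2 + y**2, 2*x*y, 3.0)    # ∇f = (fx, fy, fz)

def p_prime(t):
    return (1.0, 2*t, 3*t**2)

t, h = 1.3, 1e-6
lhs = (f(*p(t + h)) - f(*p(t - h))) / (2*h)                 # central difference
rhs = sum(g*q for g, q in zip(grad_f(*p(t)), p_prime(t)))   # ∇f(p(t))•p'(t)
closed = 2 + 9*t**2 + 5*t**4                                # the formula above
print(lhs, rhs, closed)   # all three agree (up to finite-difference error)
```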
