Multivariable Calculus and Linear Algebra for Engineering Systems

(Level 3)

Single-variable calculus is one-dimensional thinking. But real systems have multiple inputs: temperature and pressure affect gas volume, x and y coordinates define a surface, three forces act on a joint simultaneously. This level teaches you to differentiate, integrate, and solve systems when everything depends on everything else. Partial derivatives track change in one direction while holding others fixed. Matrices organize huge systems of equations into solvable form. This is where math becomes multidimensional.

Functions with Multiple Interacting Variables

Most engineering functions are not simple f(x) but rather f(x, y) or even f(x, y, z). The area of a rectangle is Area = Length × Width, a function of two variables. The volume of an ideal gas follows the ideal gas law, PV = nRT, a function of pressure, temperature, and amount of gas. Stress in a beam is a function of position, applied load, and the beam's cross-section. These problems cannot honestly be reduced to single-variable problems: freezing all but one variable throws away exactly the interactions we need to model.

What about functions with two inputs? Consider f(x, y) = x² + y², a function where changing either input changes the result. This is not something we can graph as a simple curve or line. Instead, the function's response to its inputs is a surface in 3D space. The input/output relationship, expressed algebraically through the function, doesn't change; what evolves is the way we visualize and analyze that relationship.

Two-variable function: f(x, y) = x² + y²
Evaluate at a point: f(2, 3) = 2² + 3² = 4 + 9 = 13.

Different inputs give different outputs: change either x or y and the output changes. This relation can't be graphed as a curve in the plane because its graph is a surface.

Three-variable function: f(x, y, z) = xy + z
f(2, 3, 1) = (2)(3) + 1 = 7.

With three inputs, direct visualization gives out, and algebra prevails.
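The examples above are one-liners in code; here is a minimal Python sketch (function names are illustrative, not from the text):

```python
# Two-variable function: its graph is a surface over the x-y plane.
def f2(x, y):
    return x**2 + y**2

# Three-variable function: no picture possible, but the algebra is unchanged.
def f3(x, y, z):
    return x * y + z

print(f2(2, 3))     # 13, matching the worked example
print(f3(2, 3, 1))  # 7
```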

Why This Matters Later

Heat transfer depends on several independent variables: x, y, z, and time. Stress in a structure is a function of position and loading. Optimization problems routinely involve a dozen variables or more. Multivariable functions are the norm in real engineering problems; single-variable functions were only the warm-up.

Partial Derivatives: Changing One Variable at a Time

Partial derivatives extend differentiation to multiple dimensions. Instead of measuring change along a single axis, we can measure change in each input direction separately.

Here's a simple example: f(x, y) = x² + 3y. What's the derivative? Of what, with respect to what? With two variables in play, we need partial derivatives, which tell us how a function changes with respect to one variable while the other(s) are held constant. For this example, ∂f/∂x = 2x answers the question: how does f change as x changes, keeping y frozen? And ∂f/∂y = 3 answers: how does f change as y changes, keeping x frozen?

Each partial derivative corresponds to the slope along a cross-section of the function. Collected into a vector, the partial derivatives form the gradient, which points in the direction of greatest increase of the function. Most optimization algorithms exploit this, stepping along (or against) the gradient at each iteration.
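As a sketch of how optimizers use the gradient, here is plain-Python gradient descent on f(x, y) = x² + y²; the starting point, step size, and iteration count are arbitrary choices for illustration:

```python
# Gradient descent on f(x, y) = x**2 + y**2, whose gradient is (2x, 2y).
# Stepping against the gradient walks downhill toward the minimum at (0, 0).
def grad_f(x, y):
    return (2 * x, 2 * y)

x, y = 3.0, -2.0     # arbitrary starting point
step = 0.1           # illustrative step size
for _ in range(100):
    gx, gy = grad_f(x, y)
    x, y = x - step * gx, y - step * gy

print(x, y)  # both very close to 0
```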

Compute the partial derivatives of the function f(x, y) = x² + 3y.
∂f/∂x = 2x

Consider y as a constant and then differentiate with respect to x.

∂f/∂y = 3

Treat x as a constant, then differentiate with respect to y. The x² term drops out.
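You can sanity-check these partials numerically with central finite differences; a small sketch (the evaluation point (2, 5) is arbitrary):

```python
# Approximate the partials of f(x, y) = x**2 + 3*y by central differences.
def f(x, y):
    return x**2 + 3 * y

def partial_x(x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

print(partial_x(2.0, 5.0))  # ≈ 4, matching ∂f/∂x = 2x at x = 2
print(partial_y(2.0, 5.0))  # ≈ 3, matching ∂f/∂y = 3
```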

Why This Matters Later

The heat equation, wave equation, and diffusion equation are all partial differential equations (PDEs); you cannot read them without partial derivatives. Optimization in high-dimensional spaces means following the gradient, the vector of all the partials. Training a neural network works the same way: compute the gradient of the output error with respect to the weights, then adjust the weights against it. Partials are not "optional math". Any physical or economic model involving two or more variables (e.g., price and quantity) involves partials.

Multiple Integrals and Multidimensional Accumulation

Single integrals accumulate area under a curve. Double integrals accumulate volume under a surface. Triple integrals accumulate a quantity (mass, charge, probability) over an entire 3D volume. It's the same process - accumulation - pushed to more dimensions.

To integrate ∫₀¹ ∫₀¹ (x + y) dx dy, first integrate with respect to x treating y as a constant, then integrate the result with respect to y. Over non-rectangular regions, changing the order of integration changes the limits, but over a rectangle like this one the order does not matter (Fubini's theorem). Either way, we are summing up an infinite number of infinitely thin slabs.

We will compute the following double integral: ∫₀¹ ∫₀¹ (x + y) dx dy
Inner: ∫₀¹ (x + y) dx = [x²/2 + xy]₀¹ = 1/2 + y
Outer: ∫₀¹ (1/2 + y) dy = [y/2 + y²/2]₀¹ = 1/2 + 1/2 = 1

Integrate inside-out. First x, then y.
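A brute-force midpoint Riemann sum approximates the same integral numerically and should land very close to 1; a sketch (the grid size n is an arbitrary choice):

```python
# Midpoint Riemann sum for the double integral of (x + y) over the unit square.
# Each grid cell contributes f(midpoint) times its area h*h.
n = 100
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * h
        y = (j + 0.5) * h
        total += (x + y) * h * h

print(total)  # ≈ 1.0, matching the exact value computed above
```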

Triple integral: ∫∫∫ f(x, y, z) dV

One more dimension piled on. These are used for 3D mass, volume, and charge distributions.

Why This Matters Later

Where an irregular 3D object balances (its center of mass) comes from a triple integral. The probability that two random variables fall in a region comes from a double integral. The flux of a vector field through a surface comes from a surface integral (a close cousin of the double integral). These are the core of all finite element methods - you have to understand the underlying integral before you can hope to understand what your computer is badly approximating.

Vectors as Algebraic Objects in High Dimensions

We encountered vectors in Level 1 as geometric objects with magnitude and direction. Now we treat vectors algebraically, as ordered lists of numbers such as v = [2, -1, 3]. Vector addition and scalar multiplication both act element-wise. For example, [1, 0, 2] + [2, -1, 3] = [3, -1, 5] and 2[1, 0, 2] = [2, 0, 4].

Why lists and not arrows? Because, frankly, I couldn't draw a 10-dimensional arrow. But I can most certainly write a 10-dimensional vector and perform algebra on it. Concepts such as linear independence, basis vectors, and the span of a set of vectors exist and are used in any number of dimensions.

Vector representation: v = [2, -1, 3]

Three components: v₁ = 2, v₂ = -1, v₃ = 3.

Vector addition: u + v
[1, 0, 2] + [2, -1, 3] = [1+2, 0-1, 2+3] = [3, -1, 5]

Keep building the physical model one component at a time. Level 1 introduced vectors geometrically; this level gives them an algebraic form.
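Element-wise vector algebra is easy to express in plain Python, in any number of dimensions; a minimal sketch:

```python
# Vectors as plain lists; addition and scalar multiplication act element-wise.
def vec_add(u, v):
    return [a + b for a, b in zip(u, v)]

def scalar_mul(c, v):
    return [c * a for a in v]

print(vec_add([1, 0, 2], [2, -1, 3]))  # [3, -1, 5]
print(scalar_mul(2, [1, 0, 2]))        # [2, 0, 4]
```

The same two functions work unchanged on 10-component lists, which is exactly the point of trading arrows for lists.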

Why This Matters Later

Matrices operate on vectors, so fluency with vectors is a prerequisite. Eigenvalue problems ask a rather subtle question: which vectors stay parallel to themselves under a linear transformation? State-space models in control theory represent the state of a system at a given time as a vector of variables, which evolves over time according to a matrix of coefficients. With some practice in vector algebra you soon get comfortable with matrices and can tackle linear systems of any size.

Matrices and Scalable Linear Systems

An m × n matrix A can be viewed as a transformation on vectors, and a natural question is: what input vector x does A transform into a given vector b? This is the equation Ax = b, where x is a column vector with n components and b a column vector with m components. The system of equations 2x + y = 5 and x + 3y = 8 fits this form: view x and y as the components of a 2-component vector. Then the coefficient matrix is A = [2 1; 1 3], the unknown vector is x = [x; y], and b = [5; 8].

By hand, this shorthand saves only a few symbols. For a computer, it saves all of them: a 1000×1000 system of equations looks exactly like a 2×2 system, a single matrix equation Ax = b. To the algorithm, both have the same structure and neither is conceptually harder. This is why engineers express systems of linear equations in matrix form.

Matrix:
A = [2 1]
    [1 3]

Matrix-vector multiplication: Ax
A = [2 1]   x = [x]
    [1 3]       [y]

Ax = [2x + y]
     [x + 3y]

Setting Ax = b gives back the system: 2x + y = 5, x + 3y = 8.
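Matrix-vector multiplication is just a row-by-row dot product; a sketch in plain Python, with matrices stored as lists of rows:

```python
# Multiply an m×n matrix (list of rows) by an n-component vector.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, 1],
     [1, 3]]

# The solution of 2x + y = 5, x + 3y = 8 is x = 7/5, y = 11/5;
# multiplying it back through A should reproduce b = [5, 8].
print(matvec(A, [7 / 5, 11 / 5]))  # ≈ [5, 8]
```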

Why This Matters Later

Finite element analysis is typically solved as a very large linear system of equations, with thousands of equations, presented in the form of Ax = b. Many control systems can be modelled using matrices describing the time evolution of the state of a system. All of the fundamental transformations in graphics (rotation, scaling, shearing) are performed using matrix multiplications. Linear algebra is how mathematics gets to do real engineering.

Determinants and System Solvability

For any 2×2 matrix [a b; c d], the determinant det(A) is calculated as ad – bc. If the determinant of A is not equal to zero (det(A) ≠ 0), then the system of linear equations Ax = b has exactly one solution. This solution can be calculated directly by inverting A. If the determinant of A is zero (det(A) = 0), then A is singular, which means that either there are infinitely many solutions or no solution exists, depending on the value of b.

Geometrically, the determinant of a linear transformation measures volume distortion. A transformation with determinant 2 doubles volumes, and a transformation with determinant 0 flattens space into a lower dimension, losing information in the process. A volume-doubling transformation can be uniquely inverted (its inverse has determinant 1/2 and halves volumes back), but a flattening transformation with determinant 0 cannot be inverted at all. This matters in structural mechanics, for example, where a zero determinant of the stiffness matrix K indicates that the structure has a mechanism: it is singular and unstable. Not good.

Example: Calculate determinant
A = [3 2]
[1 4]
Compute the determinant of A.
det(A) = (3)(4) - (2)(1) = 12 - 2 = 10

Since det(A) = 10 ≠ 0, the system Ax = b has a unique solution; equivalently, the matrix A is invertible.
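A 2×2 determinant and Cramer's rule fit in a few lines; a sketch that also solves the earlier system 2x + y = 5, x + 3y = 8:

```python
# Determinant of a 2×2 matrix [[a, b], [c, d]] is a*d - b*c.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Cramer's rule: replace a column of A with b, then divide determinants.
def solve2(A, b):
    d = det2(A)
    if d == 0:
        raise ValueError("singular matrix: no unique solution")
    x = (b[0] * A[1][1] - A[0][1] * b[1]) / d
    y = (A[0][0] * b[1] - b[0] * A[1][0]) / d
    return x, y

print(det2([[3, 2], [1, 4]]))            # 10, as in the worked example
print(solve2([[2, 1], [1, 3]], [5, 8]))  # (1.4, 2.2)
```

Cramer's rule is fine at this size; large systems use factorization methods instead, but the zero-determinant check plays the same role there.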

Why This Matters Later

Determinants appear in the eigenvalue problem, in matrix inversion techniques, and in stability analysis. A zero determinant signifies singularity: the system Ax = b then has either infinitely many solutions or none, depending on b. In structural analysis, a zero determinant of the stiffness matrix indicates that the structure is unstable.

Eigenvalues, Eigenvectors, and System Modes

When most vectors are multiplied by a matrix, they both rotate and stretch: their direction changes. But some special vectors do not rotate at all. They are only scaled, growing or shrinking while keeping their direction. These vectors are called eigenvectors, and the scale factor is the eigenvalue. Thus, if Av = λv, then v is an eigenvector and λ is its eigenvalue.

The eigenvectors and eigenvalues of a system describe its natural modes or patterns. For a vibrating structure, the modes are the shapes in which the structure oscillates on its own, without external forcing. For a general dynamical system, the eigenvalues decide stability: eigenvalues with negative real part correspond to responses that decay away, while positive real parts indicate responses that grow exponentially. The eigenvectors answer the question: what does the system naturally do?

Definition: Av = λv

Matrix A times vector v equals a scaled copy of v; the scaling factor λ is the eigenvalue.

Simple example:
A = [2 0]   v = [1]
    [0 3]       [0]

Av = [2] = 2 [1]  →  λ = 2
     [0]     [0]

The vector v still points in the same direction; it has simply been scaled up by a factor of 2.
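Checking Av = λv directly takes only a few lines of plain Python; a sketch for the diagonal example above:

```python
# Verify that v = [1, 0] is an eigenvector of A = [[2, 0], [0, 3]] with λ = 2.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, 0],
     [0, 3]]
v = [1, 0]
lam = 2

print(matvec(A, v))          # [2, 0]
print([lam * c for c in v])  # [2, 0] -> Av equals λv, so v is an eigenvector
```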

Why This Matters Later

Eigenvalues tell us the natural frequencies of a system (how fast it wants to vibrate) and eigenvectors tell us the mode shapes (the patterns in which it vibrates). In control theory, the eigenvalues of a system tell us whether it is stable: if they have negative real parts, the system settles; if any have positive real parts, it blows up. Principal component analysis in data analysis finds the eigenvectors of a data set's covariance matrix; these eigenvectors are the directions of maximum variance in the data. The eigen-machinery is thoroughly down-to-earth.

Why Engineers Use Multivariable Calculus and Linear Algebra

Real engineering problems involve multiple variables changing simultaneously. Temperature is not a constant; it varies in three dimensions. Stress comes from many loads, each applying force in a different direction. The velocity of a fluid is a vector field. Multivariable calculus generalizes the derivative and the integral to these situations: the gradient points in the direction of steepest change, divergence measures the net outflow of a quantity from a point, and line and surface integrals extend the idea of accumulation to curves and surfaces in space. Linear algebra provides the means to solve large systems of linear equations efficiently, the central task in finite element analysis, and transformations, rotations, and projections are all expressed through it. With these tools, 3D engineering models become routine rather than exceptional.

Apply Multivariable Math in Specializations

Multivariable calculus and linear algebra are critical for:

Ready for the Next Level?

Once you understand multivariable calculus and linear algebra, you're ready to explore differential equations and engineering modeling.

Continue to Level 4: Differential Equations →