NewMech
Guiding The Next Generation of Engineers

Multivariable Calculus & Linear Algebra (Level 3)

Single-variable calculus is one-dimensional thinking. But real systems have multiple inputs: temperature and pressure affect gas volume, x and y coordinates define a surface, three forces act on a joint simultaneously. This level teaches you to differentiate, integrate, and solve systems when everything depends on everything else. Partial derivatives track change in one direction while holding others fixed. Matrices organize huge systems of equations into solvable form. This is where math becomes multidimensional.

1. Functions of Multiple Variables

Most engineering functions aren't f(x)—they're f(x, y) or f(x, y, z). The area of a rectangle depends on length and width. The volume of a gas depends on pressure, temperature, and amount. Stress in a beam depends on position, load, and cross-section. You can't reduce these to single-variable problems without losing information.

A function like f(x, y) = x² + y² takes two inputs and gives one output. Change either x or y, and the output changes. You can't graph this as a simple curve—it's a surface in 3D space. The algebra is the same (plug in values, get results), but the visualization and analysis require new techniques.

Two-variable function: f(x, y) = x² + y²
f(2, 3) = 2² + 3² = 4 + 9 = 13

Change either x or y, output changes. Can't graph it as a line—it's a surface.

Three-variable function: f(x, y, z) = xy + z
f(2, 3, 1) = (2)(3) + 1 = 7

Now you have three inputs. Visualization breaks down, algebra keeps working.
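
If you want to check these numbers yourself, here's a minimal Python sketch (the names f2 and f3 are just illustrative):

# Multivariable functions are just functions with more arguments.
def f2(x, y):
    return x**2 + y**2

def f3(x, y, z):
    return x * y + z

print(f2(2, 3))     # 13, matching the worked example above
print(f3(2, 3, 1))  # 7

Same plug-and-evaluate process, just with more inputs.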

Why This Matters Later

Heat transfer depends on x, y, z, and time. Stress depends on position and loading. Optimization problems have dozens of variables. Multivariable functions are the default in real engineering—single-variable is the special case.

2. Partial Derivatives

Say you have f(x, y) = x² + 3y. If you ask "what's the derivative?", the answer is: of what, with respect to what? The function has two inputs. Partial derivatives let you differentiate with respect to one variable while treating the others as constants. ∂f/∂x = 2x means "how does f change as x changes, with y frozen?" ∂f/∂y = 3 means "how does f change as y changes, with x frozen?"

Here's the key insight: each partial derivative is just a slice through the function. Together, all the partials form the gradient—a vector pointing in the direction where the function increases fastest. That's how optimization algorithms know which way to step.

Example: Find partial derivatives of f(x, y) = x² + 3y
∂f/∂x = 2x

Treat y as a constant, differentiate with respect to x.

∂f/∂y = 3

Treat x as a constant, differentiate with respect to y. The x² term disappears.
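
You can sanity-check partials numerically with central finite differences. A minimal Python sketch (the step size h and the sample point are arbitrary choices):

# Approximate the partials of f(x, y) = x² + 3y with central differences.
def f(x, y):
    return x**2 + 3*y

h = 1e-6
x, y = 2.0, 5.0
df_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # should be close to 2x = 4
df_dy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # should be close to 3
print(df_dx, df_dy)

Stacked together, (df_dx, df_dy) is the gradient at that point: the direction of steepest increase.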

Why This Matters Later

Heat equation, wave equation, diffusion equation—all partial differential equations. You can't solve them without understanding partials. Optimization in high dimensions? Follow the gradient (all the partials together). Machine learning trains neural networks by computing gradients and nudging weights. Partials aren't optional math—they're how you model anything that varies across space, time, or multiple parameters.

3. Multiple Integrals

Single integrals add up slices along a line (area under a curve). Double integrals add up slices across a region (volume under a surface). Triple integrals add up contributions throughout a 3D volume. It's the same idea—accumulation—extended to more dimensions.

For ∫₀¹ ∫₀¹ (x + y) dx dy, you integrate with respect to x first (treating y as constant), then integrate the result with respect to y. For continuous integrands on rectangular regions like this one, Fubini's theorem guarantees the order doesn't matter; for trickier regions or integrands, it can. The key is that you're summing infinitely many infinitely thin slabs to get an exact total.

Double integral: ∫₀¹ ∫₀¹ (x + y) dx dy
Inner: ∫₀¹ (x + y) dx = [x²/2 + xy]₀¹ = 1/2 + y
Outer: ∫₀¹ (1/2 + y) dy = [y/2 + y²/2]₀¹ = 1/2 + 1/2 = 1

Integrate inside-out. First x, then y.
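
Here's a quick numerical check, as a minimal Python sketch using a midpoint Riemann sum (the grid size n is an arbitrary accuracy knob):

# Midpoint Riemann sum for the double integral of (x + y) over the unit square.
n = 100                    # subdivisions per axis
dx = dy = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx     # midpoint of the i-th x slab
    for j in range(n):
        y = (j + 0.5) * dy
        total += (x + y) * dx * dy
print(total)               # ≈ 1.0, matching the exact answer

This is literally "summing thin slabs"; the integral is the limit as the slabs get thinner.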

Triple integral: ∫∫∫ f(x, y, z) dV

Same process, one more layer. Used for mass, volume, charge distributions in 3D.

Why This Matters Later

Want the center of mass of an irregular 3D object? Triple integral. Probability that two random variables fall in a certain region? Double integral. Flux of a vector field through a surface? Surface integral (a variant). Finite element methods break continuous domains into tiny elements and turn integrals into sums—but you need to understand the integral version first to know what the computer is approximating.

4. Vectors as Mathematical Objects

Back in Level 1, vectors were geometric: arrows with magnitude and direction. Now they're algebraic: ordered lists of numbers. v = [2, -1, 3] means three components stacked in a column (or row, depending on context). Adding [1, 0, 2] + [2, -1, 3] gives [3, -1, 5]—just add matching components. Scalar multiplication works element-wise: 2[1, 0, 2] = [2, 0, 4].

Why the shift from arrows to lists? Because you can't draw a 10-dimensional arrow. But you can absolutely write down a vector with 10 components and do algebra with it. Linear independence, basis vectors, span—these concepts work in any dimension once you stop relying on pictures and start trusting the algebra.

Vector representation: v⃗ = [2, -1, 3]

Three components: v₁ = 2, v₂ = -1, v₃ = 3.

Vector addition: u⃗ + v⃗
[1, 0, 2] + [2, -1, 3] = [1+2, 0-1, 2+3] = [3, -1, 5]

Add component by component. Same as Level 1, now with algebraic emphasis.
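
In code, each operation is one line. A minimal NumPy sketch:

import numpy as np

u = np.array([1, 0, 2])
v = np.array([2, -1, 3])
print(u + v)    # [ 3 -1  5], component-by-component addition
print(2 * u)    # [2 0 4], element-wise scalar multiplication

NumPy arrays behave like algebraic vectors in any dimension, which is exactly the shift from arrows to lists this section describes.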

Why This Matters Later

Matrices operate on vectors. Linear transformations map vectors to vectors. Eigenvalue problems ask which vectors stay parallel to themselves. State-space models in control theory represent system state as a vector that evolves over time. Once you're fluent with vector algebra, matrices become manageable, and linear systems become solvable at any scale.

5. Matrices and Linear Systems

Think of a matrix as a transformation engine. Feed it a vector, get a transformed vector out. The equation Ax = b isn't just notation—it's asking "which input vector x gets transformed by A into the output vector b?" For the system 2x + y = 5, x + 3y = 8, you can write A = [2 1; 1 3], x = [x; y], b = [5; 8]. Same equations, cleaner structure.

By hand, this saves you a little notation. By computer, it saves everything. A 1000×1000 system looks identical to a 2×2 system: Ax = b. The algorithm doesn't care about size—it just needs the structure. That's why engineers use matrix form: it scales.

Matrix: A = [2 1; 1 3]

Matrix-vector multiplication: Ax⃗
A = [2 1; 1 3], x⃗ = [x; y]
Ax⃗ = [2x + y; x + 3y]

Ax⃗ produces the left-hand sides of the system; setting Ax⃗ = b⃗ = [5; 8] recovers 2x + y = 5, x + 3y = 8. Compact and scalable.
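
Solving the system from the text, as a minimal NumPy sketch:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 8.0])
x = np.linalg.solve(A, b)   # solves Ax = b directly
print(x)                    # [1.4 2.2], so x = 1.4, y = 2.2
print(A @ x)                # [5. 8.], sanity check against b

The same three lines solve a 1000×1000 system; only the arrays change.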

Why This Matters Later

Finite element analysis solves systems with thousands of equations—all organized as Ax = b. Control systems use matrices to model state transitions. Graphics transformations (rotation, scaling, shearing) are all matrix operations. Linear algebra is how engineering scales.

6. Determinants and System Behavior

For a 2×2 matrix [a b; c d], the determinant is ad - bc. If det(A) ≠ 0, the system Ax = b has exactly one solution—you can invert A and solve directly. If det(A) = 0, the matrix is singular: either infinite solutions or no solution, depending on b.

Geometrically? The determinant measures volume distortion. A transformation with |det(A)| = 2 doubles volumes (a negative sign means the transformation also flips orientation). det(A) = 0 means the transformation flattens space into fewer dimensions—you lose information, so you can't uniquely reverse it. In structural mechanics, det(K) = 0 (where K is the stiffness matrix) means the structure has a mechanism or is unstable. Red flag.

Example: Calculate determinant
A = [3 2; 1 4]
det(A) = (3)(4) - (2)(1) = 12 - 2 = 10

det(A) ≠ 0, so the system Ax = b has a unique solution. Matrix is invertible.
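
Checking with NumPy, and contrasting with a singular matrix (the matrix S below is an illustrative example, not from the text):

import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
print(np.linalg.det(A))      # ≈ 10.0, nonzero: A is invertible

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first
print(np.linalg.det(S))      # ≈ 0.0: singular, no unique solution

Note that np.linalg.det returns a float, so tiny rounding errors near zero are normal; in practice you test against a small tolerance rather than exact zero.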

Why This Matters Later

Determinants appear in eigenvalue calculations, matrix inversion, and stability analysis. A zero determinant signals singularity—the system is either under-determined or inconsistent. In structural analysis, it can mean the structure is unstable.

7. Eigenvalues and Eigenvectors

When you multiply most vectors by a matrix, they rotate, stretch, and change direction. But certain special vectors just get scaled—they point the same way before and after transformation. If Av = λv, then v is an eigenvector (doesn't change direction) and λ is the eigenvalue (the scaling factor).

Find the eigenvectors of a system, and you've found its natural modes—the patterns it intrinsically prefers. For a vibrating structure, these are the shapes it oscillates in without external forcing. For a dynamical system, the eigenvalues tell you stability: eigenvalues with negative real parts mean decay (stable), positive real parts mean exponential growth (unstable). They're diagnostic: "what does this system naturally do?"

Definition: Av⃗ = λv⃗

Matrix A transforms vector v into a scaled version of itself. λ is the scaling factor (eigenvalue).

Simple example: A = [2 0; 0 3], v⃗ = [1; 0]
Av⃗ = [2; 0] = 2[1; 0] → λ = 2

v stays in the same direction, just gets scaled by 2.
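
NumPy computes both at once. A minimal sketch for the matrix above:

import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [2. 3.]
print(eigenvectors)   # columns are the eigenvectors: [1, 0] and [0, 1]

For this diagonal matrix the answer is readable by inspection; for a stiffness matrix with thousands of rows, the same call finds the natural modes.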

Why This Matters Later

Structural dynamics: eigenvalues are the natural frequencies (how fast the structure wants to vibrate), eigenvectors are the mode shapes (the patterns it vibrates in). Control theory: eigenvalues determine stability—negative real parts mean the system settles down, positive means it blows up. Data analysis: principal component analysis finds eigenvectors of the covariance matrix to identify directions of maximum variance. Eigenstuff isn't abstract—it reveals what a system naturally wants to do.

Ready for the Next Level?

Once you understand multivariable calculus and linear algebra, you're ready to explore differential equations and engineering modeling.

Continue to Level 4: Differential Equations →