1.4
Linear & Nonlinear

Linear systems, converting nonlinear systems to linear ones, and differential equations. 32 min read

Linearity and linear systems are important in science and engineering for two reasons:

  1. Linear systems are easy to think about – at least when compared to nonlinear systems!
  2. A great many systems are approximately linear if we look at them the right way.

Linear systems give rise to a rich body of understanding and are natural to think about and design, even when the underlying physics is nonlinear. Furthermore, we can and often do design electronic and other systems to provide linear control of something that is nonlinear underneath.

Perhaps most importantly, a great many nonlinear systems can be analyzed by breaking them down into a number of locally linear systems, as we’ll see shortly.

Linear might be defined as “in a line” or “in proportion to,” but we can think about linearity at a number of levels. All of these layers of conceptual depth give rise to techniques for understanding and designing electronics, and for engineering at large.

Let’s build from the simplest cases upwards.


Level 1: A Single Linear Equation  

Consider the slope-intercept form of a line in the 2D x-y plane:

$y = mx + b$

This equation has explicit independent ($x$) and dependent ($y$) variables, input and output. It has two constants, the slope $m$ and the intercept $b$, which define the line. We can intuitively understand the relationship by inspection: the slope $m$ says how much $y$ changes for each unit change in $x$, and the intercept $b$ says where the line crosses the y-axis.

The slope-intercept form is convenient because we can plug in our independent variable and solve “forwards” (doing just a multiplication and addition) to get the other variable.

However, if you’ve studied algebra, you’ve also seen the standard form of the equation of a line:

$Ax + By = C$
This form is nice because it keeps all the terms with variables on the left-hand side, and keeps the fixed constant on the right. It extends well to situations with more than just the two variables here.

In this equation, there are two variables within one equation. One equation means one constraint. The solution space has 2 variables - 1 equation = 1 degree of freedom. (We’ll build on this concept in Systems of Equations.)

The standard form of the line is equivalent to a matrix equation:

$\begin{bmatrix} A & B \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} C \end{bmatrix}$
In the standard form, the relationship between input and output (between independent and dependent variables) is not as obvious as it was in the slope-intercept form. Regardless, it still describes exactly the same line and has exactly the same solutions.

As an example, consider the line:

We can transform this equation from slope-intercept form to standard form and matrix form:

This equation describes the entire line, not just a particular point on it. There are infinitely many solutions; there are infinitely many points on a line. That means that any point on the line is a possible solution to this matrix equation.

Let’s shift our focus from all of the solutions on the line to isolating one particular solution.

How do we find the value of $y$ for a particular value of $x$? In the slope-intercept form, this was obvious enough: just substitute in the number, multiply, and add.

However, in the matrix case, we can add a very simple new equation to the system, which adds a new row to both the $A$ and $\vec{b}$ matrices:

In the next section Systems of Equations, we’ll talk more directly about how to solve this, but for now it suffices to say that given a square matrix $A$ and a vector of constants $\vec{b}$, it is possible to solve the equation $A\vec{x} = \vec{b}$ for a unique solution vector $\vec{x}$ (if a single unique solution exists). Note that, depending on $A$ and $\vec{b}$, there are cases in which there are zero or infinitely many solutions.

In the case above, the only solution is:

This single point is a unique solution to the 2x2 system of equations above.

This technique works equally well when we want to solve “backwards”, for example to find the value of $x$ on our line for a given value of $y$. We’d just set up the matrix equation:

and solve to find the solution:

Even in this simplest case where we started with just a single equation of a line, we can encapsulate both variables $x$ and $y$ into a single vector $\begin{bmatrix} x \\ y \end{bmatrix}$ which describes possible values within the space of the system.

The big idea here is that even a simple single equation for a line can be treated more generally as a system of equations. A matrix equation is a very compact way of writing a system of linear equations. If we treat it as “easy” to solve the matrix equation – and it is easy, especially for computers – then terms like “input”, “output”, “dependent variable” and “independent variable” drop away, leaving just a bigger concept: a system, which is itself linear and can be built upon to higher levels of complexity.
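To make this concrete in code, here’s a minimal NumPy sketch, assuming (as a hypothetical example, since any concrete line works) the line $y = 2x + 1$, i.e. $-2x + y = 1$ in standard form. Appending one extra constraint row makes the matrix square and selects a single point on the line:

```python
import numpy as np

# Hypothetical example line (any concrete numbers would do):
# slope-intercept form: y = 2x + 1
# standard form:       -2x + y = 1

# To find y when x = 2, append the constraint "x = 2" as a second row.
A = np.array([[-2.0, 1.0],   # -2x + 1y = 1
              [ 1.0, 0.0]])  #  1x + 0y = 2
b = np.array([1.0, 2.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # 2.0 5.0 -> the point (2, 5) lies on the line

# Solving "backwards" is just a different second row: find x when y = 7.
A2 = np.array([[-2.0, 1.0],  # -2x + 1y = 1
               [ 0.0, 1.0]]) #  0x + 1y = 7
b2 = np.array([1.0, 7.0])
print(np.linalg.solve(A2, b2))  # [3. 7.] -> x = 3 when y = 7
```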


Level 2: Multiple Linear Equations  

Consider this system of four equations and four variables:

If you’ve studied electronics before, you’ll see that these equations represent a simple electronic circuit:

Exercise Click to open and simulate the circuit above.

(We’ll talk in a future section Solving Circuit Systems about how to go from this schematic to this set of equations, so don’t worry about that yet if you’re confused! We’ll also dig into how node voltages, branch currents, and terminal currents are related and named in Labeling Voltages, Currents, and Nodes.)

Can you solve this system of equations? This particular system was chosen to be very easy because you can just read down the list and solve for one variable at a time. (This isn’t usually the case, but we designed it to make this example easier!)

We’ve hidden the idea of independent and dependent variables here, but now suppose that the first equation is an input that we can control. For example, perhaps we have a knob on an adjustable voltage source and can change the voltage from 5 to 5.5 or 6.

Now, what happens to all the other variables when this input is increased by a small amount, which we’ll call $\Delta$? We can solve through algebra, simply substituting $5 + \Delta$ in place of $5$ and following through:

We can look at things in terms of these “deltas,” or changes, to see how much the other variables change in response to the one we’re adjusting. We just do that by subtraction from their original reference values before we added $\Delta$:

With all these deltas, we now know that if we increase the input by a little bit, we can say exactly how that is going to affect all the other variables in the system.
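Here’s a sketch of the deltas idea in code, using a hypothetical stand-in system of four equations (the actual circuit equations above may differ): solve once at the original input, solve again with the input bumped by $\Delta$, and subtract.

```python
import numpy as np

# Hypothetical stand-in for four circuit equations; unknowns x = [v1, i1, i2, i3].
A = np.array([[1.0,  0.0, 0.0, 0.0],   # v1 = 5            (voltage source)
              [1.0, -2.0, 0.0, 0.0],   # v1 - 2*i1 = 0     (Ohm's law, R = 2)
              [0.0,  0.0, 1.0, 0.0],   # i2 = 2            (current source)
              [0.0,  1.0, 1.0, 1.0]])  # i1 + i2 + i3 = 0  (KCL)
b = np.array([5.0, 0.0, 2.0, 0.0])

x0 = np.linalg.solve(A, b)

# Bump the input by a small delta and solve again.
delta = 0.001
b_bumped = b.copy()
b_bumped[0] += delta

x1 = np.linalg.solve(A, b_bumped)

# Because the system is linear, the deltas are exactly proportional to delta.
print((x1 - x0) / delta)  # [1. 0.5 0. -0.5] regardless of delta's size
```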

We can use CircuitLab’s Frequency Domain simulation mode to plot the constants associated with these deltas directly:

Exercise Click the circuit, click “Simulate,” and “Run Frequency-Domain Simulation.” The traces will be plotted at constant values which are exactly the slopes we found in our deltas relationship above, while variables unaffected by the input will be at 0. Though we aren’t using any frequency-dependent circuit elements (such as capacitors or inductors), this shows the power of the frequency-domain simulation mode to quickly analyze the linearized relationships between circuit variables.

We can also come up with a single algebraic relationship between any dependent variable and our independent variable (hiding the rest of the system) because everything is linear.

Can you write one of the dependent variables as a function of only the input variable and some constants?

If you open the same circuit again, you can plot that line from the simulation:

Exercise Click the circuit, click “Simulate,” then click “DC Sweep” to open a mode where the simulator evaluates the circuit for a range of different input values, and finally “Run DC Sweep.” You’ll see exactly that line plotted, with various values of the input voltage on the x-axis and the resulting current on the y-axis. Mouse-over the graph to see the values at our original operating point, and look at the slope of the relationship.

We can and will consider systems with multiple inputs and multiple outputs, but the overall linear system concept is that:

  1. We can solve systems of linear equations.
  2. We can choose our independent variables (inputs) and dependent variables (outputs).
  3. All the equations are linear, so every input-output relationship will be linear too.

In matrix form we can write:

$\vec{y} = A\vec{x} + \vec{b}$

where $\vec{x}$ is a column vector of independent variables (inputs), $A$ is a matrix of coefficients, $\vec{b}$ is a column vector of constants, and $\vec{y}$ is the column vector of dependent variables (outputs). If you’re scared of matrices, don’t be! This is just a compact way to write:

For the circuit above, if we selected the source voltage to be our independent variable, we could write this “matrix slope-intercept form” like this:

Alternatively, it would look like this if we decided that both the voltage source and the current source were our independent variables:

This is just a linear equation in a higher-dimensional space, which means it contains multiple simultaneously-true equations. It may look advanced, but it’s really saying the same thing as the four equations above (although we’ve pulled out the two source values as independent input variables).

No matter how we rearrange it, and no matter what we choose to include as dependent or independent variables, the system itself is linear. Just as we did in Level 1, rearranging between the input-output form and the standard form is possible.

We can rearrange all equations with all the multiplicative terms – regardless of whether they’re dependent or independent variables – on the left hand side, and just a single constant on the right. Here are the four original equations where we’ve just subtracted to move all coefficient-times-variable terms to one side and all constants to the other:

This can be set up as a matrix equation $A\vec{x} = \vec{b}$, where $\vec{x}$ includes both the dependent and independent variables.

This is in a form that’s very easy for a computer to solve.

Notice that this particular problem was chosen to be a really easy toy problem that you could solve by hand, just reading down the list of equations one at a time. This gives the matrix $A$ a special shape: it’s a “lower triangular” matrix, because all the values above and to the right of the main diagonal are 0. $A$ being lower triangular is not true in general, but we picked our problem here intentionally. This shape depends not just on the equations, but on the order we put them in, as well as the order we map variables to columns. We’ll talk more about triangular matrices and how they’re used for solving any system in the Systems of Equations section.
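For illustration, here’s what “reading down the list” looks like as an algorithm – forward substitution on a lower triangular matrix. The system below is the same hypothetical stand-in sketched earlier, not the book’s exact circuit:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L @ x = b one row at a time when L is lower triangular."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # Everything above the diagonal is zero, so row i only involves
        # x[0..i], and x[0..i-1] are already known: solve for x[i] directly.
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

# Same hypothetical lower-triangular system as above.
L = np.array([[1.0,  0.0, 0.0, 0.0],
              [1.0, -2.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  1.0, 1.0, 1.0]])
b = np.array([5.0, 0.0, 2.0, 0.0])

print(forward_substitution(L, b))  # [ 5.   2.5  2.  -4.5]
print(np.allclose(np.linalg.solve(L, b), forward_substitution(L, b)))  # True
```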

From an electronics perspective, look at which conditions generate which equations, especially after you read through Chapter 2. For example, the voltage and current sources generate the 1st and 3rd rows, with nonzero constant terms in $\vec{b}$:

And Kirchhoff’s Current Law summing currents produces the 4th equation, with several coefficients on currents summing to zero:

And finally, the second equation came from Ohm’s Law, also with a zero constant on the right-hand side:

The big idea here is that systems of multiple simultaneous linear equations can be written in many ways, but the standard matrix equation form $A\vec{x} = \vec{b}$ encompasses all of them without rearranging into any particular input-output form. Computers are really good at quickly solving this linear equation (if $A$ is square and certain other conditions are met) even if there are 100 or 1000 simultaneous equations and unknowns.

The full space of possible values (in multiple dimensions) is constrained by each equation to obey some linear relationship between variables. As each constraint between variables (i.e. each equation) is linear, the system as a whole must also be linear, regardless of how big or complicated-looking it gets!

Redefining our system or network to be linear (rather than just a single linear input-output relationship) is important because many analysis techniques we’ll use rely on us redefining inputs and outputs on the fly as we examine pieces of our circuits.


Level 3: Multiple Linear Equations with Changing Input  

Continuing with our example circuit from above, we have an idea of what happens if we change the input voltage from 5 to 6; we can simply rewrite our equations and matrices and solve again.

But what if the input is changing as a function of time? We can introduce another variable, time $t$, which is not an unknown in our matrix, but is a parameter to the input (and therefore will be a parameter to the output too). Let’s choose our input to be a signal, which can be defined as a function of time:

We now write the input with an explicit “$(t)$” instead of just a bare variable name, to remind ourselves that it is a function of time.

Given this input, what is our output? In this particular case, our system is memoryless, which means nothing depends on any previous value of the state of the system (such as an integral or derivative). Additionally, the system is time invariant because nothing in the system depends on the time parameter $t$ itself. (For the future, note that most electronic systems of interest will be time invariant, but will not be memoryless, because of the presence of derivative and/or integral terms from capacitors or inductors. However, a memoryless system responds to reach its new steady-state equilibrium instantly, making it easy to think about for this example.) Because these two properties are true, our original solution:

can be translated directly into treating these variables as functions of time:

Now, if we substitute in our definition of the input signal from above:

Try it in this simulation:

Exercise Click the circuit above, then click “Simulate,” and “Run Time-Domain Simulation.” Note that you can change the voltage signal to be whatever you want. The system is perfectly linear, and nothing within the system itself depends on time as a variable, so it doesn’t matter: the input and output as defined in this way will retain their proportionality.

The big idea here is that if the system obeys certain properties, then we can broaden our thinking: instead of reasoning about our system as a function $y = f(x)$ where $x$ and $y$ are numbers, we can now think about our system as a capital-S System which takes in a signal (a function of time) and outputs another signal, also a function of time.


Level 4: Frequency Domain Addition  

Let’s take a slight detour and hint at some amazing things to come. Instead of having the input be a single sine wave, what if it were the sum of two different sine waves at two different frequencies?

We’ve now added a small-amplitude (0.1) but faster (10 Hz) wave on top of our previous one. Since the system is still linear, so is the output:

Try it in this simulation:

Exercise Click to run the simulation above.

In addition to showing this combined input algebraically, we can show it schematically:

Exercise Click to run the simulation above.

Compare both schematics above. They both do the same thing.

This represents a combination of inputs in the frequency domain. For any linear system, the same frequencies will then be present in the output.

This particular example has a flat frequency response, which means it doesn’t matter whether the input is 1 Hz or 10 Hz – the input is scaled by the same factor in either case. However, even in other cases with non-flat frequency response, the idea of a linear combination of sine waves is a useful one.

If you’ve seen Laplace and/or Fourier Transforms, you may see where this line of thought is going, but we’ll put it away for now.

The big idea here is that if we take a linear system and put in the sum of two sine waves at two different frequencies, our output will also have the sum of two sine waves at those same two frequencies. (The amplitude and/or phase of each may be different depending on each frequency.)
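A quick numerical sketch of this claim, assuming a toy linear system (a pure gain of 0.5): feed in the sum of a 1 Hz and a 10 Hz sine, and check which frequencies appear at the output.

```python
import numpy as np

# A toy linear, memoryless system: the output is a scaled copy of the input.
def system(x):
    return 0.5 * x  # assumed gain; any fixed linear map works

fs = 1000                      # sample rate, Hz
t = np.arange(0, 1, 1 / fs)    # one second of samples
x = np.sin(2 * np.pi * 1 * t) + 0.1 * np.sin(2 * np.pi * 10 * t)

y = system(x)

# The output spectrum contains energy at exactly the same two frequencies.
spectrum = np.abs(np.fft.rfft(y)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spectrum > 1e-6])  # [ 1. 10.] -> only 1 Hz and 10 Hz appear
```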


Level 5: A Single Differential Equation  

Let’s take a closer look at a few definitions:

In the earlier Level 3 example, we considered a memoryless example which had no derivative or integral terms, but in many physical situations there is a differential equation that includes both.

The derivative needs to know what a variable’s own value was a moment $\Delta t$ ago (even for an infinitesimally small $\Delta t$) in order to exist, so the situation is no longer memoryless. The derivative has memory, so prior values of the input will affect the present value of the function.

Taking a derivative is a linear operator. This might surprise you. You might have done some derivatives and know that, for example:

$\frac{d}{dt} \sin(t) = \cos(t)$

Since $\sin(t)$ and $\cos(t)$ aren’t proportional to each other, how is this possibly linear? But that’s not what linear means when we talk about a linear operator.

If $\mathcal{L}$ is a linear operator, then for any functions $f$ and $g$ and any constant $a$:

$\mathcal{L}(a f) = a \, \mathcal{L}(f)$
$\mathcal{L}(f + g) = \mathcal{L}(f) + \mathcal{L}(g)$

A linear operator scales up and down in size with a constant term, and it distributes linearly across a sum.

A derivative follows these properties, so a derivative is a linear operator.

Suppose we go back to our single equation from Level 1, $y = mx + b$, and change it to be:

$y(t) = m \, x(t) + b + k \, \frac{dx}{dt}$

We now have a single equation that describes how a system’s input $x(t)$ is related to its output $y(t)$, but we now have a coefficient $k$ for the time derivative of the input. This is extremely practical in electronics because, as we will see later, capacitors and inductors create time derivative terms in their equations, which we call differential equations.

Is this really linear? For example, suppose we have a particular input signal:

$x_1(t) = \sin(\omega t)$

Then, taking the derivative of the input:

$\frac{dx_1}{dt} = \omega \cos(\omega t)$

And substituting in:

$y_1(t) = m \sin(\omega t) + b + k \, \omega \cos(\omega t)$

Clearly, if $k \neq 0$ then we won’t have a “linear” relationship between the numerical values $x_1(t)$ and $y_1(t)$ at a given time $t$.

But again, that is not what we’re talking about! We’re talking about linearity in terms of the whole signal, the whole function, over all time.

We have changed our perspective in an important way: instead of the input being a single number, now the input is a signal, a function of time, and the output signal is also a function of time.

The important linear aspect here is that if we consider a second input signal, for example $x_2(t)$, we can find its corresponding output $y_2(t)$.

And now, when we create any linear combination of those two input signals:

$x_3(t) = a \, x_1(t) + c \, x_2(t)$

For any values of the two linear combination constants $a$ and $c$, the output will be:

$y_3(t) = a \, y_1(t) + c \, y_2(t)$

That’s the meaning of linearity at an operator level, and taking a derivative is a linear operator.
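We can check this operator-level linearity numerically. This sketch uses a finite-difference derivative (np.gradient) as a stand-in for $\frac{d}{dt}$ and verifies that it distributes over an arbitrary linear combination of two signals:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 10001)
f = np.sin(t)
g = np.cos(3 * t)
a, c = 2.0, -0.5   # arbitrary linear-combination constants

def D(y):
    """Numerical derivative with respect to t (finite differences)."""
    return np.gradient(y, t)

# Linearity: D(a*f + c*g) == a*D(f) + c*D(g), to numerical precision.
lhs = D(a * f + c * g)
rhs = a * D(f) + c * D(g)
print(np.max(np.abs(lhs - rhs)))  # ~1e-12 -> the operator distributes linearly
```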

Note that it was not particularly important to choose a $\sin$ function here. However, $\sin$ and $\cos$ have a nice property, which is that their derivative is always another wave of the same frequency, although it may have different amplitude and phase. (This will be useful later!)

Additionally, $\sin$ and $\cos$ also have a very nice property that summing up any number of $\sin$ and $\cos$ terms – as long as they all have the same frequency – can be collapsed into a single sinusoid with some single overall amplitude $A$ and phase $\phi$. Just as $\sin$ and $\cos$ have a geometric interpretation in terms of tracing the path of the unit circle in the x-y plane, there is a geometric interpretation here. For more on this topic, review the Complex Numbers section.
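Here’s a small numerical check of that collapsing property, using the identity $a\sin(\omega t) + b\cos(\omega t) = R\sin(\omega t + \phi)$ with $R = \sqrt{a^2 + b^2}$ and $\phi = \operatorname{atan2}(b, a)$:

```python
import numpy as np

w = 2 * np.pi * 5          # same angular frequency for both terms
a, b = 3.0, 4.0            # arbitrary coefficients
t = np.linspace(0, 1, 1000)

# a*sin(wt) + b*cos(wt) collapses to R*sin(wt + phi) with:
R = np.hypot(a, b)         # overall amplitude, sqrt(a^2 + b^2) = 5.0
phi = np.arctan2(b, a)     # overall phase

lhs = a * np.sin(w * t) + b * np.cos(w * t)
rhs = R * np.sin(w * t + phi)
print(np.max(np.abs(lhs - rhs)))  # ~1e-15: one sinusoid, same frequency
```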

The big idea here is that when we talk about linear systems, we’re not talking about just mapping an input value (a single number) to an output value (a different number). We’re really talking about something that takes an input signal (a function of time) and gives an output signal (a different function of time). We’ve defined linearity more rigorously, and any linear system as we’ve previously defined it remains a linear system regardless of how we define inputs or outputs within it.


Level 6: Multiple Differential Equations  

If we combine the ideas from Level 2 (Multiple Linear Equations) with Level 5 (A Single Differential Equation), we can take the general matrix equation form and extend it to allow each equation to have a term for the derivative of $\vec{x}$:

$A\vec{x} + B \, \frac{d\vec{x}}{dt} = \vec{b}$

The vector of derivatives simply includes the time derivatives of each of the individual components of $\vec{x}$:

$\frac{d\vec{x}}{dt} = \begin{bmatrix} \frac{dx_1}{dt} \\ \frac{dx_2}{dt} \\ \vdots \end{bmatrix}$

This is the most general form of the derivative available. If we don’t actually need to use all of the derivatives of all of the unknowns, the corresponding cells in the $B$ matrix will just be 0.

This general form, like the one discussed in Level 2, includes both dependent and independent variables in the $\vec{x}$ vector.

As in Level 2, it is possible for us to choose what our dependent and independent variables are and algebraically rearrange the equations so that each dependent variable is equal to some linear combination of the independent variables, their derivatives, and the constant terms. We won’t get into that here. But if it helps you see that it is possible, we can rewrite:

$A\vec{x} + B \, \frac{d\vec{x}}{dt} = \vec{b}$

as a new matrix equation,

$A'\vec{x}\,' = \vec{b}$

where we define $A' = \begin{bmatrix} A & B \end{bmatrix}$ and $\vec{x}\,' = \begin{bmatrix} \vec{x} \\ \frac{d\vec{x}}{dt} \end{bmatrix}$. We’ve just folded the matrices $A$ and $B$ together side-by-side, and pushed $\vec{x}$ and $\frac{d\vec{x}}{dt}$ together into one longer vector. Matrix multiplication to expand $A'\vec{x}\,'$ still produces the same result.

As we’ll talk about more in the Systems of Equations section, we need a square matrix in order to have a unique solution – we need the same number of linearly independent equations as unknowns. However, with differential equations in the mix, we have a problem: $\vec{x}$ is a set of unknowns, and so is $\frac{d\vec{x}}{dt}$. If our original $\vec{x}$ and $\frac{d\vec{x}}{dt}$ were each of size $N \times 1$, then our combined $\vec{x}\,'$ will have size $2N \times 1$ – more unknowns than equations; the matrix is no longer square. This reflects the fact that with a differential equation we need to be given the value of $\vec{x}$ in order to compute $\frac{d\vec{x}}{dt}$, or vice versa; if given neither, we have infinitely many possible solutions related by the differential equation between them.

The additional required equations reflect the fact that in order to solve a $K$th-order linear differential equation (with one equation and one unknown), we need $K$ extra constraints, such as initial values or boundary values, which select a single curve from all the possible ones specified by the same differential equation.

The matrix formulation of a system of simultaneous linear equations:

$A\vec{x} = \vec{b}$

is a linear system because $A$ is a matrix of constant values.

Similarly,

$A\vec{x} + B \, \frac{d\vec{x}}{dt} = \vec{b}$

is a linear system because $B$ is also a matrix of constant values, and $\frac{d\vec{x}}{dt}$ is effectively another set of unknowns.

Overall, since $\vec{x}\,'$ is just a combination of $\vec{x}$ and $\frac{d\vec{x}}{dt}$, the overall system of differential equations is still just a linear system.

So, even when equations have time derivative terms, and even if it isn’t easy to see or write the exact relationship between input and output, there’s still a linearity to the system that is very useful for analysis.

Also, note that we can always unwrap multiple derivatives into a chain of single derivatives. If we want to write a second-order differential equation:

$\frac{d^2x}{dt^2} = f(x, t)$

we can define a new variable $u = \frac{dx}{dt}$ and write two connected first-order equations:

$\frac{dx}{dt} = u$
$\frac{du}{dt} = f(x, t)$
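As a sketch of why this unwrapping is useful, here’s a hypothetical second-order equation, $\frac{d^2x}{dt^2} = -x$ (a harmonic oscillator), integrated numerically as the chained pair of first-order equations using simple forward-Euler steps:

```python
import numpy as np

# Hypothetical example: d^2x/dt^2 = -x, rewritten with u = dx/dt as
#   dx/dt = u
#   du/dt = -x
dt = 1e-4
t = np.arange(0, 2 * np.pi, dt)
x, u = 1.0, 0.0            # initial conditions: the two extra constraints

xs = []
for _ in t:
    x, u = x + u * dt, u - x * dt   # forward-Euler step of both equations
    xs.append(x)

# The numerical solution tracks cos(t), the known closed-form answer.
print(np.max(np.abs(np.array(xs) - np.cos(t + dt))))  # ~1e-3 (Euler error)
```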

The big idea here is that we can easily modify our approach to systems of linear equations to incorporate linear differential equations, which occur in electronics every time there’s a capacitor or inductor.


Level 7: A Single Nonlinear Equation, Linearized  

Let’s look at:

$y = x^2$

Clearly this is a nonlinear equation because there’s a squared term. But we’d still like to think about it and be able to make a statement such as, “if we increase $x$ by a small amount $\Delta x$, what happens to $y$?” This question is just asking for a derivative, and calculus gives us the answer:

$\frac{dy}{dx} = 2x$

This answer itself is not a constant! (If it were, then $y$ would have been a linear function of $x$.) However, if we know what $x_0$ is, we can construct a tangent line at that point $(x_0, y_0)$. It’s simply:

$y \approx y_0 + \left.\frac{dy}{dx}\right|_{x_0} (x - x_0)$

If you’re more comfortable with numbers, let’s say we’re working around $x_0 = 1$, where $y_0 = 1$. Then:

$\left.\frac{dy}{dx}\right|_{x_0 = 1} = 2$

We can combine these two pieces of information to form the local tangent line:

$y \approx 1 + 2(x - 1)$

We might also write it more simply by multiplying through and collecting terms:

$y \approx 2x - 1$

If you’ve studied calculus, you’ll see that this is a first-order Taylor series expansion of $y(x)$ around $x_0$. This is useful because it’s very easy to have intuition about linear functions, and as long as we stay close to $x_0$, our approximation will be pretty good.

Here’s where things get interesting. Let’s say you were asked to invert or solve the function: for example, to find the value of $x$ for which $y = 2$.

In this particular case, we have the closed form function algebraically, and it’s algebraically invertible, so you’d use your algebra skills to find that $x = \sqrt{2} \approx 1.41421$. But in many cases in engineering, we run into a few problems:

  1. We often don’t have access to the closed form at all, and/or
  2. We do have the closed form, but it isn’t easily invertible, and/or
  3. We’ve combined not just one, but maybe tens or hundreds of equations together and don’t have a straightforward input-output relationship.

In those cases, we can use the first-order approximation tangent line to make our guess:

$2x - 1 = 2$

From this we can solve for our unknown to find $x = 1.5$. This is an approximation (not an exact answer!) because it was based on our tangent line constructed near $x_0 = 1$.

And here’s the cool part: we can improve our approximation! How? By re-linearizing to find a new tangent line around this new point $x_1 = 1.5$:

$y_1 = 1.5^2 = 2.25, \quad \left.\frac{dy}{dx}\right|_{x_1 = 1.5} = 3$

Numerically, we get a new tangent line approximation:

$y \approx 2.25 + 3(x - 1.5) = 3x - 2.25$

Since we’re still trying to solve the problem $y = 2$, we set $y = 2$ equal to our new approximation line as before:

$3x - 2.25 = 2$

From which we solve numerically for our unknown $x = \frac{4.25}{3} \approx 1.4167$. This is a new approximate solution: we went from $1$ to $1.5$ to $1.4167$.

We can repeat once more, creating a new tangent line at this point $x_2 \approx 1.4167$:

$y \approx 2.0069 + 2.8333(x - 1.4167)$

Again, we can solve this tangent line for our desired condition $y = 2$:

$2.0069 + 2.8333(x - 1.4167) = 2$

From which we find $x_3 \approx 1.41422$.

We can do this a few times and find that the series quickly converges numerically toward the real answer of $\sqrt{2} \approx 1.41421356$.

This is called the Newton-Raphson Method:

  1. Start with an initial guess.
  2. Linearize around that guess point.
  3. Solve using that tangent line.
  4. Repeat with the new approximate solution.

Repeat as many times as desired to improve accuracy. After just three iterations, we’ve improved our guess to a substantial level of precision:
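Here’s the whole method as a few lines of code, assuming the worked numbers above ($y = x^2$, target $y = 2$, initial guess $x_0 = 1$):

```python
# Newton-Raphson for f(x) = x^2 - 2 = 0, i.e. solving x^2 = 2.
def f(x):
    return x * x - 2.0

def df(x):
    return 2.0 * x   # local slope used to build each tangent line

x = 1.0              # initial guess
for i in range(5):
    x = x - f(x) / df(x)   # solve the tangent line for f = 0
    print(i + 1, x)

# 1 1.5
# 2 1.4166666666666667
# 3 1.4142156862745099
# 4 1.4142135623746899
# 5 1.4142135623730951  (sqrt(2) at double precision)
```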

This process is actually hard to see on a graph because it converges so quickly, but if you wish, you can plot these successive tangent lines:

Exercise Click to plot the tangent lines.

We can repeat the technique as many times as we’d like to get more precision, but note that we’re extremely close even after just 3 iterations. As we continue, the series will converge:

$x_n \to \sqrt{2} = 1.41421356\ldots$
Notice that it matters where we started! $x = -\sqrt{2}$ was also a possible answer to our question, and it’s the answer we would have found if our initial guess had been any negative number.

The big idea here is that even without an invertible closed-form equation for $y(x)$, we can very quickly solve problems as long as we can compute the forward value at any point, and the local first derivative there. This is a remarkable technique that is at the root of much of scientific computing. Linearization is a powerful technique that allows solving even nonlinear problems.


Level 8: Multiple Nonlinear Equations, Linearized  

The Newton-Raphson method presented above may have seemed trivial because we already knew the algebraic form of $y(x)$ and could just directly solve for $x$ if we wanted to. The terminology of dependent and independent variables usually implies that $x$ is an independent variable and $y$ is a dependent variable. However, when we start to build systems of equations as we did in Level 2 above, it becomes clearer and clearer that many or most equations are not written in a form that makes input and output so easily separated. Even with strictly linear systems of equations, it takes work to rearrange them into an input-output relationship. But once we allow those systems of equations to have any nonlinear terms, it can be either a lot of work to express the relationship in input-output format – or it can in fact be impossible to express a closed-form inverse function.

However, the Newton-Raphson method works just as well in multiple dimensions as it works in a single dimension!

Instead of evaluating a single derivative $\frac{dy}{dx}$, we evaluate partial derivatives $\frac{\partial f_i}{\partial x_j}$ of every equation with respect to every unknown. This creates a Jacobian matrix $J$, and we can put our entire linearized equation in the form $J \, \Delta\vec{x} = -\vec{f}(\vec{x}_0)$.

For example, if we have any 3 nonlinear equations and 3 unknowns of the form:

$f_1(x_1, x_2, x_3) = 0$
$f_2(x_1, x_2, x_3) = 0$
$f_3(x_1, x_2, x_3) = 0$
(Note that we can always set the right-hand side to zero without any loss of generality because we can fold any constant into the nonlinear function on the left.)

In order to create our linearization, we can work with one equation at a time. For example, for nonlinear function $f_1$, we can create a simple linearization about our point of interest $\vec{x}_0$:

$f_1(\vec{x}) \approx f_1(\vec{x}_0) + \frac{\partial f_1}{\partial x_1}\Delta x_1 + \frac{\partial f_1}{\partial x_2}\Delta x_2 + \frac{\partial f_1}{\partial x_3}\Delta x_3$

As we are solving for $f_1(\vec{x}) = 0$, we’ll also set the linearization equal to zero:

$f_1(\vec{x}_0) + \frac{\partial f_1}{\partial x_1}\Delta x_1 + \frac{\partial f_1}{\partial x_2}\Delta x_2 + \frac{\partial f_1}{\partial x_3}\Delta x_3 = 0$
If we extend this to also cover our other equations we’ll find:

$\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} \\ \frac{\partial f_3}{\partial x_1} & \frac{\partial f_3}{\partial x_2} & \frac{\partial f_3}{\partial x_3} \end{bmatrix} \begin{bmatrix} \Delta x_1 \\ \Delta x_2 \\ \Delta x_3 \end{bmatrix} = -\begin{bmatrix} f_1(\vec{x}_0) \\ f_2(\vec{x}_0) \\ f_3(\vec{x}_0) \end{bmatrix}$
The matrix of partial derivatives for every equation (row) and every unknown (column) as seen on the left is called the Jacobian matrix.

Because each function is nonlinear, the cells of this matrix are not constant; they depend on the point at which we evaluate all the partial derivatives.

We now have a matrix equation of the form $J \, \Delta\vec{x} = -\vec{f}(\vec{x}_0)$. We could solve this to find the $\Delta\vec{x}$ values to update. However, since $\Delta\vec{x} = \vec{x}_1 - \vec{x}_0$, we can expand our equation:

$J \, \vec{x}_1 = J \, \vec{x}_0 - \vec{f}(\vec{x}_0)$

After this manipulation, we have a standard matrix problem of the form $A\vec{x} = \vec{b}$, where $A = J$ and $\vec{b} = J \, \vec{x}_0 - \vec{f}(\vec{x}_0)$ are known constants (at a particular linearization point), and then our new approximate solution $\vec{x}_1$ is easily solvable.

Again, as with the single equation Newton-Raphson method, we can successively use a “guessed” starting point $\vec{x}_0$, evaluate all the equations and their derivatives at that point to generate $A$ and $\vec{b}$, and then solve $A\vec{x}_1 = \vec{b}$.

This gives us a new “best guess” $\vec{x}_1$. From $\vec{x}_1$ we can regenerate a new $A$ and $\vec{b}$ from the derivatives of the original nonlinear system, and solve $A\vec{x}_2 = \vec{b}$, and we’ll get an even better guess $\vec{x}_2$. And we can do this on and on until our solution $\vec{x}_n$ converges.
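Here’s a minimal multidimensional sketch, assuming a hypothetical two-unknown system (a circle and a line, which intersect at $(\sqrt{2}, \sqrt{2})$ and $(-\sqrt{2}, -\sqrt{2})$): at each step we evaluate the Jacobian at the current guess and solve a linear system for the update.

```python
import numpy as np

# Hypothetical 2-unknown nonlinear system in f(x) = 0 form:
#   f1 = x0^2 + x1^2 - 4  (a circle)
#   f2 = x0 - x1          (a line)
def f(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] - x[1]])

def jacobian(x):
    # Partial derivatives of each equation (row) w.r.t. each unknown (column).
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0,        -1.0      ]])

x = np.array([1.0, 0.5])               # initial guess
for _ in range(6):
    J = jacobian(x)                    # re-linearize at the current point
    x = x + np.linalg.solve(J, -f(x))  # solve J * delta = -f for the update
print(x)  # [1.41421356 1.41421356], i.e. (sqrt(2), sqrt(2))
```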

As we discussed in Level 6 above, it’s possible for our vector of unknowns to include time derivative terms, so now we can work with nonlinear differential equations!

This is what a circuit simulator like CircuitLab does: it takes the schematic provided by the user, it writes down tens or hundreds of simultaneous nonlinear differential equations, and it solves them, again and again.

This Jacobian matrix is also the root of what’s called incremental analysis or small signal analysis, an incredibly powerful circuit analysis tool that relies on using a linearized model of a nonlinear circuit and just looking at small deviations around an operating point. The Jacobian matrix contains all those small-signal relationships. (As shown briefly in Level 2 above, these values are effectively exposed within CircuitLab’s Frequency Domain Simulation tools.)

At some level of physics intuition, you can think of this as being what the circuit itself does: when you read the later section about Thermodynamics, Energy, & Equilibrium, you can think about the mathematical system exploring nearby states with little wiggles in each variable (derivatives), and converging toward a low energy state (one at which all the equations are satisfied)!

From a mechanics perspective, a book resting on a table doesn’t “know” its equations of force balance between gravity and the normal force provided by the table. Instead, the atoms and electrons in each material are always vibrating at random. If a few electrons on the bottom of the book happen to randomly wiggle a nanometer closer to the electrons on the top of the table, they’ll feel a repulsive force pushing them away. This first-order exploration is an automatic part of the universe until equilibrium is established. As we’ll discuss in Steady State & Transient, when we look closely enough at physical systems, equilibrium is in most cases actively maintained through these tiny random interactions, rather than being passively stable.

In general, this microscopic process happens so fast that we can usually ignore it as engineers! But, inspected over the right time or distance scales, it matters. See the Lumped Element Model for more about the assumption of instantaneous equilibrium.

The big idea here is that the Newton-Raphson method can be extended to solving systems of multiple nonlinear differential equations. Even if there are hundreds or thousands of simultaneous unknowns and equations, this numerical method will start from an initial guess and quickly converge toward the true solution point by creating a linearized version of the nonlinear system at each successive approximation point.


Level 9: Higher-Level Mostly-Linear Systems  

We’ll hold off on Systems of Equations until the next section.

Instead, we want to consider systems at an even higher level of abstraction.

Here’s a pulse-width modulation circuit that might not make much sense now, but will in a few chapters:

Exercise Click the circuit above, then click “Simulate,” and finally click “Run Time-Domain Simulation” to see what this circuit does.

This circuit turns an input voltage into a series of pulses of different lengths. Then, it smooths them back out into an output voltage.

Overall, over a reasonably wide range of possible inputs, this does something roughly linear: the output voltage and the input voltage are quite close to each other as long as we don’t zoom in too closely.

You can see the output and input traces aren’t exactly the same: there’s some jaggedness in the output, while the input is smooth; there’s a delay between input and output; and the size or scale aren’t exactly the same either! The jaggedness is an effect of digital on-off switching in the intermediate circuit. The phase and scale issues are effects of filters that we’ll talk about in later chapters.

Yet, despite these mismatches, to a rough approximation, when the input goes up, so does the output.

Why would we have all this complexity? Why turn a nice smooth analog sine wave into a series of digital pulses, and then back into an analog signal that “looks worse” by some metrics? Well, this particular circuit is an example of pulse width modulation (PWM). It turns out that this technique is often the most efficient way to drive a motor or an LED: pulse it on and off very quickly, and let the “inertia” of either the motor load or of our ability to perceive light smooth out the pulses into a continuously variable average, just like this circuit does. There are probably billions of instances of this exact idea produced every year: every class-D audio amplifier (including every smartphone), many RF amplifiers, many LED controls, many motor controllers, etc.
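Here’s a rough numerical sketch of the PWM idea under assumed parameters (a 10 kHz sawtooth carrier and a simple moving average standing in for the smoothing filter): the duty cycle of the on/off pulses tracks the slowly varying input, and averaging recovers it.

```python
import numpy as np

fs = 1_000_000                    # sample rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)    # 20 ms of samples
level = 0.5 + 0.4 * np.sin(2 * np.pi * 100 * t)   # slow "analog" input, 0..1

carrier = (t * 10_000) % 1.0      # 10 kHz sawtooth comparison ramp
pulses = (level > carrier).astype(float)  # on/off pulses; duty tracks level

# Moving average over one carrier period smooths the pulses back out.
window = fs // 10_000             # 100 samples = one carrier period
recovered = np.convolve(pulses, np.ones(window) / window, mode="same")

# Away from the array edges, the recovered signal tracks the input closely.
err = np.max(np.abs(recovered - level)[window:-window])
print(err)  # small (~0.03): the pulse average follows the smooth input
```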

The big idea here is that linearity lets us bundle up all that internal complexity, and for the user or for the engineer integrating this into a larger system, it’s easy to say, “This black box labeled ‘amplifier’ will make the output roughly proportional to the input.” The fact that it does it in a particularly energy-efficient way is a great bonus.


Level 10: Higher-Level Cross-Domain Systems  

In Level 9 we showed an example where an input and output voltage were made (approximately) linear to each other, even though the process in the middle was fairly complicated and nonlinear.

However, it doesn’t have to be linear from voltage to voltage. In fact, we can cross domains entirely and think more abstractly.

In this example, we’ll convert voltage to frequency. An adjustment in input voltage will adjust the frequency of the output:

Exercise Click the circuit, click “Simulate,” and finally “Run Time-Domain Simulation.”

For higher input voltages, the output frequency is lower. For lower input voltages, the output frequency is higher.

If you were to plot the input voltage versus the output frequency, you’d see that this behavior is approximately linear. It’s not perfect, but over a fairly wide range, it’s reasonably linear.

This general concept of a voltage-to-frequency conversion circuit also finds many practical uses: for example, as a voltage-controlled oscillator (VCO) in radio-frequency systems.

In FM radio, and many digital radio systems as well, a signal to transmit is converted from a voltage to a frequency. This frequency is then transmitted wirelessly over an antenna, received, and the detected frequency is converted back into a voltage.

Frequency is not a state variable. Only currents and voltages (and possibly their derivatives) are state variables in the time-domain simulation of this circuit.

Nonetheless, at a higher level of abstraction, this circuit can be examined as a black box where you put a voltage in and get a linearly-related frequency out.

The big idea is that this type of cross-domain thinking is extremely valuable in an engineering mindset.


Linearity in Abstractions  

Obviously, linearity is useful in actual numerical problem solving as shown above. This is not just useful in electronics, but in any engineering field; in mechanical or civil engineering, for example, we can think about the loads on a structure and the displacements that result as a linear system. The linear behavior will only be an approximation to the real nonlinear behavior, but still a very useful approximation, or even a tool for solving the nonlinear behavior as shown above.

However, the concept of linearity is even more broadly useful to engineers because of abstractions.

As shown in Levels 9 and 10, we can hide a lot of complexity underneath the idea that if we “zoom out” a bit, or make certain assumptions or constraints, then the overall behavior is linear, or close enough to linear that we can model it as such. Even if the underlying mechanism is quite complicated, we can still think and talk about a system most easily when it’s somewhat linear.

For small deviations, the first-order (linear) approximation to a function’s behavior is the most significant term, compared to second-order and third-order effects.

The big idea of linearity is as a tool in managing what would otherwise be a quickly unmanageable tangle of nonlinear complexity, so that we can actually wrap our heads around how to analyze and design useful systems.


What’s Next  

In the next section, Systems of Equations, we’ll talk about how to solve math problems with multiple equations and multiple variables.