1.1 Algebraic Approximations

Asymptotic and intermediate approximation techniques for back-of-the-envelope problem solving.

The behavior of an electronic circuit can be described by a system of equations. However, that system is often large and nonlinear, and finding an exact algebraic solution is usually impractical.

One of the most common tools engineers (of all kinds!) use to make this complexity manageable is to make algebraic approximations when it’s safe to do so. While you may have seen these techniques previously, here’s a quick refresher in context.

For the next few sections, we’ll consider this simple function as an example to explore a few approximation techniques:

$$f(x) = \frac{x}{x+1}$$

This is not a linear relationship, but this fraction (or a similar-looking one) comes out of many circuit networks. From this one function, we can make three types of algebraic approximations:


Large Asymptotic Approximation  

For “large” $x$, we really mean $|x| \gg 1$ (read as “the absolute value of $x$ is much greater than 1”) in this case. That’s because the denominator of $f(x)$ is $x + 1$, and:

$$x + 1 \approx x \quad \text{when } |x| \gg 1$$

(You should read the squiggly equals sign as “is approximately equal to.”) This is just saying that, for example, $100 + 1 \approx 100$ and $1{,}000{,}000 + 1 \approx 1{,}000{,}000$, an approximation which seems to get better as $x$ gets bigger. (We can similarly handle the case $x \ll -1$, but will keep $x$ positive herein for simplicity.)

Note that we’ve chosen 1 as our comparison point because it’s the other addend in the denominator, and we’re considering which addend ($x$ or $1$) dominates the denominator and thus the entire expression. If our expression were $\frac{x}{x + a}$ for some constant $a$, we’d use $a$ as our comparison point (i.e. we’d require $|x| \gg |a|$).

Let’s put a “hat” over our original function so that we call our approximation $\hat{f}(x)$. Plugging our approximated denominator back into our formula, we get an estimate for the fraction:

$$\hat{f}(x) = \frac{x}{x} = 1$$

This is only an approximation for the original function $f(x)$, but it’s an increasingly good one as $x$ grows.
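If you’d like to see this numerically, here is a minimal sketch (in plain Python; the sample values are arbitrary illustrative choices, not from the text) evaluating $f(x)$ for increasingly large $x$:

```python
# Evaluate f(x) = x / (x + 1) for increasingly large x to watch it
# approach the large-x approximation f_hat(x) = 1.
def f(x):
    return x / (x + 1)

for x in [2, 10, 100, 1_000_000]:
    print(f"x = {x:>9}   f(x) = {f(x):.9f}   1 - f(x) = {1 - f(x):.2e}")
```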

Error in Large Asymptotic Approximation  

How good is this approximation? We can consider the approximation error $f(x) - \hat{f}(x)$:

$$f(x) - \hat{f}(x) = \frac{x}{x+1} - 1 = \frac{x - (x+1)}{x+1} = \frac{-1}{x+1}$$

It may seem a bit circular, but we can actually use our approximation technique again on the error function. Since we are assuming that $|x| \gg 1$, we have $x + 1 \approx x$. Applying that to the denominator of the error, the error is approximately:

$$f(x) - \hat{f}(x) = \frac{-1}{x+1} \approx -\frac{1}{x}$$

This means that if $x = 10^6$, then our approximation error due to using $\hat{f}(x) = 1$ instead of $f(x)$ is about one part in a million. Whether that’s close enough for any particular situation is for you to determine, but in many practical cases, being off by only one part in a million is much better than many manufacturing tolerances or noise sources, so using the approximation may be a very reasonable choice.
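As a quick sanity check, here’s a small sketch (again plain Python, with illustrative values) comparing the exact error to the simplified $-\frac{1}{x}$ estimate:

```python
# Compare the exact error f(x) - 1 = -1/(x+1) with the estimate -1/x.
def f(x):
    return x / (x + 1)

for x in [10, 1000, 1_000_000]:
    print(f"x = {x:>9}   exact error = {f(x) - 1:.3e}   estimate -1/x = {-1 / x:.3e}")
```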

Limit of Large Asymptotic Approximation  

In fact, depending on the structure of our equations and for really huge values of $x$, it might be appropriate (though it usually is not!) to take the full limit as $x \to \infty$:

$$\lim_{x \to \infty} f(x) = \lim_{x \to \infty} \frac{x}{x+1} = 1$$

Note that the limit of the original $f(x)$ is also the limit of the approximation $\hat{f}(x) = 1$.
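If you happen to have SymPy installed (our assumption, not a tool the text calls for), a two-line sketch confirms the limit symbolically:

```python
# Symbolically confirm that f(x) = x/(x+1) tends to 1 as x -> infinity.
import sympy as sp

x = sp.symbols("x", positive=True)
print(sp.limit(x / (x + 1), x, sp.oo))  # prints 1
```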

However, if you’re not very careful when taking approximations, you can easily remove too much information. Your approximations will quickly mislead you if you always assume that $f(x) = 1$ – be warned!

Approximations Gone Wrong  

If you look closely, the idea above that

$$x + 1 \approx x$$

may seem a bit strange. In fact, the absolute error in this approximation never shrinks: the difference $(x + 1) - x$ is exactly 1, no matter how large $x$ grows. That’s because we’re speaking about the approximation colloquially in the context of the overall problem, rather than strictly in a mathematical sense.

The generally accepted and more strict definition of a good asymptotic approximation involves a limit (in the calculus sense) and a fraction:

$$\lim_{x \to \infty} \frac{f(x)}{\hat{f}(x)} = 1$$

For our example, $\lim_{x \to \infty} \frac{x+1}{x} = 1$, so $x + 1$ and $x$ are asymptotically equivalent even though their difference never vanishes. In using this fractional limit to define approximate equivalence, we’re treating $f(x)$ and $\hat{f}(x)$ as multiplicative terms or divisors. When defining approximations in this way, multiplication and division are generally OK, but you should avoid, or be very careful with, approximations that will be added to or subtracted from other approximations. That’s because additive or subtractive cancellation can make your terms of interest disappear if you’re not careful.
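Here’s a tiny sketch (plain Python, illustrative values) of both halves of that statement: the ratio $(x+1)/x$ tends to 1, but the difference never does, so using $x + 1 \approx x$ inside a subtraction would wrongly predict 0:

```python
# Multiplicatively, (x+1)/x -> 1; additively, (x+1) - x is always 1,
# so the approximation x + 1 ~ x is unsafe inside sums and differences.
for x in [10, 1000, 1_000_000]:
    print(f"x = {x:>9}   ratio = {(x + 1) / x:.7f}   difference = {(x + 1) - x}")
```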

If you’re ever unsure whether it’s safe to make an approximation, try solving with the full, un-simplified function and see if the solution matches the approximation closely enough. Alternatively, create a higher-order approximation, where you keep one or two more terms beyond the most significant one.


Small Asymptotic Approximation  

For “small” $x$, by which we really mean $|x| \ll 1$, a new approximation is needed for our function $f(x)$.

At $x = 0$:

$$f(0) = \frac{0}{0 + 1} = 0$$

But what happens if $x$ is not exactly zero, but is simply small? Suppose $x = 0.01$:

$$f(0.01) = \frac{0.01}{1.01} \approx 0.0099$$

If you have a calculus background, the simplest way to get a linear approximation near $x = 0$ is to take the derivative of $f(x)$ there and use it to construct a tangent line:

$$f'(x) = \frac{(x+1) - x}{(x+1)^2} = \frac{1}{(x+1)^2} \qquad \Rightarrow \qquad f'(0) = 1$$

Now that we have the point value $f(0) = 0$ and the derivative value $f'(0) = 1$, we can construct the tangent line:

$$\hat{f}(x) = f(0) + f'(0)\,(x - 0) = x$$

At the end of the day, we now have the approximation that:

$$\frac{x}{x+1} \approx x \quad \text{for } |x| \ll 1$$
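If you’d like to see the same construction done symbolically, here’s a short sketch that assumes SymPy is installed (our assumption, not something the text requires):

```python
# Reproduce the tangent-line construction at x = 0 for f(x) = x/(x+1).
import sympy as sp

x = sp.symbols("x")
f = x / (x + 1)
f_prime = sp.simplify(sp.diff(f, x))              # (x + 1)**(-2)
tangent = f.subs(x, 0) + f_prime.subs(x, 0) * x   # 0 + 1*x

print("f'(0) =", f_prime.subs(x, 0))   # 1
print("tangent line:", tangent)        # x
```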

This is a very useful approximation, and we might see it in other formats. For one example, consider the ratio of resistances:

$$\frac{R_1}{R_1 + R_2}$$

This fraction comes up in the analysis of every resistor divider. We can factor $R_2$ out of the denominator and find:

$$\frac{R_1}{R_1 + R_2} = \frac{R_1}{R_2 \left( \frac{R_1}{R_2} + 1 \right)} = \frac{R_1 / R_2}{R_1 / R_2 + 1}$$

If we simply define $x = \frac{R_1}{R_2}$, the ratio of resistances, then our function $f(x) = \frac{x}{x+1}$ and all the approximations we’ve developed here apply to the resistor divider problem.

If $R_1 \ll R_2$ (and both $R_1 > 0$ and $R_2 > 0$, as is typical of all real resistors), then $0 < x \ll 1$, and we’re back to our approximation above: the divider ratio is approximately $x = \frac{R_1}{R_2}$.

Suppose, for a practical example, that $R_1$ represents the input impedance of one amplifier stage, and $R_2$ represents the output impedance of the previous stage. In that case, $\frac{R_1}{R_1 + R_2}$ represents the voltage transfer ratio between stages, and we might want to make sure it is as close to 1 as reasonably possible.

For example, if we knew that $R_1 = 100\,R_2$ (i.e. that there was a 100X ratio between the resistances), then $x = 100 \gg 1$, and using the error estimate from the large asymptotic approximation we could quickly estimate the voltage transfer ratio as approximately:

$$f(100) \approx 1 - \frac{1}{100} = 0.99$$

This is just an easier and more intuitive calculation to reason about than using the full exact form:

$$\frac{R_1}{R_1 + R_2} = \frac{100}{101} \approx 0.990099$$

which, for most people, is harder to think about directly!
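To double-check the shortcut, here’s a brief sketch (plain Python; the resistor values are arbitrary, only their 100:1 ratio matters):

```python
# Voltage transfer ratio R1/(R1 + R2) when R1 = 100 * R2:
# exact value vs the quick estimate 1 - R2/R1.
R2 = 1_000.0        # illustrative output impedance, ohms
R1 = 100 * R2       # input impedance, 100x larger

print(f"exact    = {R1 / (R1 + R2):.6f}")   # 0.990099...
print(f"estimate = {1 - R2 / R1:.6f}")      # 0.990000
```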


Intermediate Approximation at a Point  

So far we’ve handled “extreme” cases: making simplifying approximations when $|x| \gg 1$ and when $|x| \ll 1$. But suppose we want to find a simplified approximation at some intermediate point that is neither very large nor very small.

One easy way to do that is by using the same tangent line approach we used for the small asymptotic approximation, but instead of using $x = 0$ as our anchor point, we use the value in the middle of the range we want to approximate.

For example, suppose again we are looking at a resistor divider, but now we have the case where $R_1 \approx R_2$, so $x \approx 1$. Perhaps, due to imperfect manufacturing or biasing tolerances, the resistances are not exactly equal, but are close, and we want a simplified formula to describe the voltage transfer ratio in these cases.

The exact formula is still the same:

$$f(x) = \frac{x}{x+1} = \frac{R_1}{R_1 + R_2}$$

But, for values of $x$ close to 1, we can approximate this fraction with a simpler expression that doesn’t have an $x$ in the denominator. Here’s how we can construct the tangent line around the point $x = 1$:

$$f(1) = \frac{1}{2}, \qquad f'(1) = \frac{1}{(1+1)^2} = \frac{1}{4}$$

$$\hat{f}(x) = f(1) + f'(1)\,(x - 1) = \frac{1}{2} + \frac{x - 1}{4} = \frac{x + 1}{4}$$
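Here’s a small numerical sketch (plain Python, with a few sample points of our choosing) of how this tangent line behaves near and away from $x = 1$:

```python
# Compare f(x) = x/(x+1) with the tangent line at x = 1, (x+1)/4.
def f(x):
    return x / (x + 1)

def tangent(x):
    return (x + 1) / 4

for x in [0.5, 0.9, 1.0, 1.1, 2.0, 5.0]:
    print(f"x = {x:<4}  f(x) = {f(x):.4f}  tangent = {tangent(x):.4f}  "
          f"error = {f(x) - tangent(x):+.4f}")
```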

In fact, we can use CircuitLab just as a plotting engine to generate a graph showing how good this approximation really is.

Exercise: Click the “circuit” shown above, then click “Simulate”, then “Run DC Sweep”. You’ll see that the tangent line is a very good approximation for values of $x$ near 1, and is a bad approximation far away from that point.

As an exercise, modify this “circuit” to also plot the other approximations for $f(x)$ developed above, specifically the small-$x$ tangent line $\hat{f}(x) = x$ and the large-$x$ approximation $\hat{f}(x) = 1 - \frac{1}{x}$.

You may have to adjust the range of values swept for $x$ to make a reasonable-looking plot. Look at how each approximation is mostly good over a particular range, but then diverges when moving further away from that approximation point.
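If you’d rather tabulate than plot, here’s a sketch (plain Python, arbitrary sample points) showing each approximation succeeding in its own region and failing elsewhere:

```python
# f(x) = x/(x+1) vs the small-x, large-x, and x ~ 1 approximations.
def f(x):
    return x / (x + 1)

print(f"{'x':>8}{'f(x)':>10}{'small: x':>12}{'large: 1-1/x':>15}{'mid: (x+1)/4':>15}")
for x in [0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0]:
    print(f"{x:>8}{f(x):>10.4f}{x:>12.4f}{1 - 1/x:>15.4f}{(x + 1) / 4:>15.4f}")
```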

We’ll talk more about linearization later in the Linear & Nonlinear section.


Other Approximation Techniques  

The tangent line approximation makes the approximation error exactly zero at one particular point, but does not necessarily optimize behavior at other points.

If we had to approximate some nonlinear function $g(x)$ (possibly even experimentally derived rather than algebraic) over some bounded range of input values $a \le x \le b$, there are several ways we could imagine doing it:

  1. Compute the tangent line at the midpoint, $x = \frac{a + b}{2}$. This would minimize error in the middle of the range, but does nothing to guarantee good approximation over the rest of the range.
  2. Linear interpolation between the two endpoints. If we have the endpoint values $g(a)$ and $g(b)$, then we can just draw a line between these two points:
    $$\hat{g}(x) = g(a) + \frac{g(b) - g(a)}{b - a}\,(x - a)$$
    This would make the error zero at the two endpoints, but does nothing to guarantee good approximation in the middle of the range.
  3. Some line that is anchored neither at the endpoints nor the midpoint. For example, if we want to minimize the maximum approximation error over the entire range, we may have to construct a line different from either #1 or #2. This could be done numerically (for example, a linear least squares approach might work) or graphically. This approximation may be “better” in a practical way; for example, our approximations in #1 and #2 may be off by 20% at some point along the curve, but our approximation in #3 might be off by only 5% at its worst.

Here’s a quick example: let’s plot three different approximations for our function $f(x) = \frac{x}{x+1}$ over a bounded range: the midpoint tangent line, the endpoint linear interpolation, and a linear least squares fit. Plot these three approximation functions against the original:
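Since the exact fitted lines depend on the range chosen, here’s a sketch that computes all three numerically (this assumes NumPy is installed, and the range $0 \le x \le 4$ is just an illustrative choice):

```python
# Three linear approximations to f(x) = x/(x+1) over an example range.
import numpy as np

f = lambda x: x / (x + 1)
a, b = 0.0, 4.0
xs = np.linspace(a, b, 1001)

# 1. Tangent line at the midpoint; slope is f'(x_m) = 1/(x_m + 1)**2.
x_m = (a + b) / 2
tangent = f(x_m) + (xs - x_m) / (x_m + 1) ** 2

# 2. Straight line through the endpoints (a, f(a)) and (b, f(b)).
interp = f(a) + (f(b) - f(a)) / (b - a) * (xs - a)

# 3. Linear least-squares fit to sampled values of f.
slope, intercept = np.polyfit(xs, f(xs), 1)
lsq = slope * xs + intercept

for name, approx in [("midpoint tangent", tangent),
                     ("endpoint interpolation", interp),
                     ("least squares", lsq)]:
    print(f"{name:>22}: max |error| = {np.max(np.abs(f(xs) - approx)):.4f}")
```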

Exercise: Click the “circuit” shown above, then click “Simulate”, then “Run DC Sweep”. All approximations have some error, but if you had to pick one, which of the three approximations do you think is best?

Additionally, these techniques can be combined with preprocessing the function to make it more linear. For example, $g(x)$ itself may not be very linear, so it may be better to approximate something else, such as $\log g(x)$ or $\frac{1}{g(x)}$, with a straight line. This may dramatically improve the approximation while adding only a little bit of complexity.
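As one illustration of that idea (the exponential curve and the range below are our own illustrative choices, and we again assume NumPy is available), fitting a line to $\log g(x)$ and then undoing the logarithm can be far more accurate than fitting a line to a strongly curved $g(x)$ directly:

```python
# Fit a straight line to g(x) directly vs to log(g(x)), then exp() back.
import numpy as np

g = lambda x: np.exp(x)        # an illustrative, strongly nonlinear curve
xs = np.linspace(0.0, 5.0, 501)

m1, b1 = np.polyfit(xs, g(xs), 1)            # direct linear fit
direct = m1 * xs + b1

m2, b2 = np.polyfit(xs, np.log(g(xs)), 1)    # fit the preprocessed curve...
preprocessed = np.exp(m2 * xs + b2)          # ...then undo the preprocessing

max_rel_err = lambda approx: np.max(np.abs((g(xs) - approx) / g(xs)))
print(f"direct linear fit:      max relative error = {max_rel_err(direct):.2f}")
print(f"fit log, then exp back: max relative error = {max_rel_err(preprocessed):.2e}")
```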


What’s Next  

In the next section, Orders of Magnitude, Logarithmic Scales, and Decibels, we’ll use the approximations we’ve made here and see how they combine with logarithms to form very powerful tools for understanding how a function behaves over a wide range of input and output values.