In the space of real numbers, the negative numbers do not have a defined square root, because any nonzero real number (positive or negative) multiplied by itself gives a positive product.
However, we can arbitrarily define a value called $j$ to represent the square root of -1. Note that in electrical engineering we use $j$ to avoid clashing with the use of the letter $i$ for current, and because it's more distinct when written down.
$$ \begin{align} j^2 = j \cdot j & = -1 \\ j & = \sqrt {-1} \end{align} $$
This is now an "imaginary" number.
These imaginary numbers do not themselves have physical meaning: I can eat 3 slices of pizza, but I can't eat $3 j$ slices of pizza. However, we'll show that complex numbers form a self-consistent area of mathematics, and that their close connection to circles, trigonometry (sines and cosines), and sine waves makes them a powerful and convenient tool for tracking both the magnitude and phase of a sine wave as it propagates through any system.
By adding real and imaginary numbers we can have complex numbers. Instead of imagining the number line as a single line from $-\infty$ to $+\infty$, we can imagine the space of complex numbers as being a two-dimensional plane: on the x-axis are the real numbers, and on the y-axis are the imaginary. Any point on the 2D plane is now a complex number:
$$z = a + b j$$
If you've worked with the idea of unit vectors before (such as $\hat{x} \hat{y} \hat{z} \ \text{or} \ \hat{i} \hat{j} \hat{k}$), then in some sense we can imagine this as a space defined by two unit vectors, $\hat{1}$ and $\hat{j}$. Of course, we don't normally write $\hat{1}$ at all, but we can imagine that it's there.
We can define basic operations on complex numbers $z = a + b j$ by thinking about them as two-dimensional vectors: $$ \begin{equation} \text{Re}(z) = \Re{(z)} = \Re{(a + b j)} = a \\ \text{Im}(z) = \Im{(z)} = \Im{(a + b j)} = b \\ z_1 + z_2 = (a + b j) + (c + d j) = (a+c) + (b+d) j \\ z_1 - z_2 = (a + b j) - (c + d j) = (a-c) + (b-d) j \end{equation} $$
Multiplication is a more complicated case, but we can expand the product and work term by term to get the correct result:
$$ \begin{align} z_1 z_2 & = (a + b j) (c + d j) \\ z_1 z_2 & = a c + a d j + b c j + b d j^2 \\ z_1 z_2 & = a c + a d j + b c j + b d (-1) \\ z_1 z_2 & = (a c - b d) + (a d + b c) j \end{align} $$
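As a quick sanity check, Python's built-in complex type (which also happens to use the letter $j$ in its literals) lets us compare the hand-expanded product against the language's own arithmetic. The sample values here are arbitrary:

```python
# Verify (ac - bd) + (ad + bc)j against Python's built-in complex multiplication.
a, b = 3.0, 4.0   # z1 = 3 + 4j
c, d = 2.0, -1.0  # z2 = 2 - 1j

z1 = complex(a, b)
z2 = complex(c, d)

# The product expanded term by term, as derived above
by_hand = complex(a * c - b * d, a * d + b * c)

print(z1 * z2)    # (10+5j)
print(by_hand)    # (10+5j)
```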
If we consider the special case of multiplying by $j$:
$$ \begin{align} z_1 j = (a + b j) j \\ z_1 j = a j + b j^2 \\ z_1 j = a j + b (-1) \\ z_1 j = (-b) + a j \end{align} $$
Interestingly, we find that when multiplying by $j$, the real and imaginary parts of $z_1$ have swapped, and the new real part has picked up a negative sign.
This maps geometrically to a rotation by 90 degrees counterclockwise around the origin, assuming we've drawn the real numbers on the x-axis and the imaginary numbers on the y-axis. If this is not clear, draw a two-dimensional plane with any point $(a,b)$ labeled. Then, plot the point $(-b, a)$.
Multiplication by $j$ a second time (i.e. multiplying $z_1$ by $j$ twice) maps geometrically to two 90 degree rotations, for a total rotation of 180 degrees around the origin:
$$(z_1 j) j = (-b + a j) j = -b j + a j^2 = -a - b j$$
Similarly, multiplication by $-j$ corresponds to 90 degree clockwise rotation. In either direction, this series of rotations is periodic, because $j^4 = (j^2)^2 = (-1)^2 = 1$.
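We can watch these rotations happen numerically. Here's a small Python sketch using an arbitrary starting point $3 + 4j$:

```python
z = complex(3, 4)  # arbitrary point (3, 4) on the complex plane

print(z * 1j)        # (-4+3j): 90 degrees counterclockwise
print(z * 1j * 1j)   # (-3-4j): 180 degrees
print(z * (-1j))     # (4-3j): 90 degrees clockwise
print(z * 1j**4)     # (3+4j): four rotations return to the start
```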
The complex conjugate $\bar{z}$ of a complex number $z$ is defined as the value with the same real part but the imaginary part negated:
$$ \begin{align} z & = a + b j \\ \bar{z} & = a - b j \\ \Im(\bar{z}) & = - \Im(z) \end{align} $$
The complex conjugate is important because its product with the original complex number is purely real:
$$ \begin{align} z \bar{z} & = (a + b j) (a - b j) \\ & = a^2 + a b j - a b j - b^2 j^2 \\ & = a^2 + (a b j - a b j) - b^2 (-1) \\ & = a^2 + b^2 \end{align} $$
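A quick Python check of this product, again with an arbitrary sample value:

```python
z = complex(3, 4)

print(z.conjugate())      # (3-4j)
print(z * z.conjugate())  # (25+0j): purely real, a^2 + b^2 = 9 + 16
```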
Using the complex conjugate and multiplication definitions, we can define the division of complex numbers as well:
$$ \begin{align} \frac {z_1} {z_2} & = \frac {a + b j} {c + d j} \\ \frac {z_1} {z_2} & = \frac {z_1} {z_2} \big( \frac {\bar{z_2}} {\bar{z_2}} \big) \\ \frac {z_1} {z_2} & = \frac {(a + b j) (c - d j)} {(c + d j) (c - d j)} \\ \frac {z_1} {z_2} & = \frac {(a + b j) (c - d j)} {c^2 + d^2} \\ \frac {z_1} {z_2} & = \frac {(a c + b d) + (b c - a d) j} {c^2 + d^2} \\ \frac {z_1} {z_2} & = \big( \frac {a c + b d} {c^2 + d^2} \big) + \big( \frac {b c - a d} {c^2 + d^2} \big) j \end{align} $$
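We can verify the hand-derived division formula against Python's built-in complex division (sample values arbitrary):

```python
a, b = 3.0, 4.0   # z1 = 3 + 4j
c, d = 2.0, -1.0  # z2 = 2 - 1j

z1, z2 = complex(a, b), complex(c, d)

# Multiply through by the conjugate of the denominator, as derived above
denom = c * c + d * d
by_hand = complex((a * c + b * d) / denom, (b * c - a * d) / denom)

print(z1 / z2)    # (0.4+2.2j)
print(by_hand)    # (0.4+2.2j)
```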
Notice that for two purely real numbers (i.e. $b = 0 \ \text{and} \ d = 0$), this simplifies to ordinary division: $\frac {z_1} {z_2} = \frac {a} {c}$. The real numbers are a subset of the complex numbers.
As previously mentioned, complex numbers can be thought of as part of a two-dimensional vector space, or imagined visually on the x-y (Re-Im) plane. These graphical interpretations give rise to two other geometric properties of a complex number: magnitude and phase angle.
The magnitude of a complex number is defined just like it is in three-dimensional vector spaces, as the overall length of the vector from the origin:
$$|z| = \sqrt{z \bar{z}} = \sqrt{a^2 + b^2}$$
The phase angle is defined graphically from the x-y plane interpretation: it is the counterclockwise angle from the positive x-axis to the vector represented by the complex number. The phase of a positive real number is 0 degrees, and that of a negative real number is 180 degrees or $\pi$ radians. In general, the phase is defined as:
$$\phi(z) = \tan^{-1} \frac {\text{Im}(z)} {\text{Re}(z)}$$
However, the naive $\tan^{-1}$ definition obscures the fact that a complex number with negative real part and negative imaginary part lies in the 3rd quadrant: the range of $\tan^{-1}$ is $\big( -\frac{\pi}{2} , +\frac{\pi}{2} \big)$, which is only $\pi$ radians or 180 degrees wide -- it covers only half of the phase space. (In computer programming, the "atan2" function accounts for these multi-quadrant issues and gives a full $2\pi$ range of output by checking the signs of the numerator and denominator before proceeding. You should do the same when working by hand.)
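Here's a short Python illustration of the quadrant problem, using the standard library's `math.atan2` and `cmath.phase`:

```python
import math
import cmath

z = complex(-1, -1)  # third quadrant: phase should be -3*pi/4 (-135 degrees)

naive = math.atan(z.imag / z.real)    # atan(1) = +pi/4: wrong quadrant!
correct = math.atan2(z.imag, z.real)  # quadrant-aware: -3*pi/4

print(math.degrees(naive))    # about +45 degrees (incorrect)
print(math.degrees(correct))  # about -135 degrees (correct)

# cmath.phase does the same quadrant-aware computation for complex inputs
print(math.degrees(cmath.phase(z)))
```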
Although we have previously defined a complex number in terms of a Cartesian two-dimensional plane with orthogonal real and imaginary parts, an alternative polar interpretation is easy, useful, and suggested by the phase and magnitude. As long as we're careful to have specified the phase angle with a full $2\pi$ range, we can directly map any complex number with Cartesian representation $z = (a, b)$ to a polar representation $(r, \theta)$, where $r=|z|$ and $\theta = \phi(z)$. To map back and forth between the two representations, note that:
$$ \begin{align} z & = a + b j = (r, \theta) \\ a & = r \cos \theta \\ b & = r \sin \theta \end{align} $$
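The Python standard library's `cmath.polar` and `cmath.rect` perform exactly these conversions, which makes the round trip easy to check:

```python
import cmath
import math

z = complex(3, 4)

r, theta = cmath.polar(z)  # (magnitude, phase)
print(r)                   # 5.0
print(math.isclose(theta, math.atan2(4, 3)))  # True

# Map back with a = r cos(theta), b = r sin(theta)
print(cmath.rect(r, theta))  # approximately (3+4j)
```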
While some operations like addition and subtraction are easiest in the Cartesian representation $a + b j$, other operations are actually simpler in the polar representation $(r, \theta)$. In particular, multiplication and division become quite simple:
$$ \begin{align} z_1 z_2 & = (r_1, \theta_1) \cdot (r_2, \theta_2) \\ z_1 z_2 & = (r_1 r_2, \theta_1 + \theta_2) \end{align} $$
In multiplication, the two radius (or magnitude) values multiply, and the two phase angles add. (As an exercise, you may wish to prove this to yourself on paper.)
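A numerical sketch of this rule, building two complex numbers from arbitrary polar coordinates and checking the product's magnitude and phase:

```python
import cmath
import math

z1 = cmath.rect(2.0, math.pi / 6)  # r = 2, 30 degrees
z2 = cmath.rect(3.0, math.pi / 3)  # r = 3, 60 degrees

product = z1 * z2
r, theta = cmath.polar(product)

print(math.isclose(r, 2.0 * 3.0))                      # True: magnitudes multiply
print(math.isclose(theta, math.pi / 6 + math.pi / 3))  # True: phases add
```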
This is useful because many electronic systems like amplifiers and filters can be thought of as working multiplicatively: a filter may reduce the input magnitude by a factor $\frac {1} {10}$ and apply a phase delay of -90 degrees. By tracking both the magnitude and phase, we can consider this filter as effectively being like multiplying the input signal by a complex number $z_{\text{filter}} = (r=\frac{1}{10}, \theta=-\frac{\pi}{2}) = -\frac{1}{10} j$. For a quick hint of what's to come, here's a simple RC low-pass filter that behaves like this complex number (at least, it does at the particular frequency we've specified):
Interactive Exercise Click the circuit, then click "Simulate," and "Run Time-Domain Simulation." Observe that the output signal is about a tenth as tall as the input, and that the output sine wave lags (very roughly) 90 degrees behind the input.
In division, it's quite similar:
$$ \begin{align} \frac {z_1} {z_2} & = \frac {(r_1, \theta_1)} {(r_2, \theta_2)} \\ \frac {z_1} {z_2} & = \big( \frac {r_1} {r_2} , \theta_1 - \theta_2 \big) \\ \end{align} $$
In division, the magnitudes are divided, and the phase angles are subtracted. (Again, you may wish to prove this as an exercise.)
The polar representation of complex numbers is closely connected to one of the most beautiful expressions in mathematics. First, we start by observing that
$$ \begin{align} z & = a + b j \\ z & = (r \cos \theta) + (r \sin \theta) j \\ z & = r \big( \cos \theta + j \sin \theta \big) \end{align} $$
The $r$ is just a scaling factor, but the other term is actually very interesting: the angle $\theta$ traces out a unit circle on the complex number plane. Again, if the complex number notation is unfamiliar, just observe that if we had two axes with unit vectors $\hat{x} \hat{y}$ then the vector equation
$$\vec {p}(\theta) = (\cos \theta) \hat{x} + (\sin \theta) \hat{y}$$
traces out a unit circle on the x-y plane.
It turns out that there is a more compact representation of the complex plane unit circle, and it's an incredible piece of mathematics called Euler's Formula:
$$e^{j \theta} = \cos \theta + j \sin \theta$$
If you haven't seen this before, it may seem cryptic, but it is incredibly powerful. The natural base constant $e$ raised to an imaginary number $j \theta$ produces a complex number with unit magnitude and phase angle $\theta$.
Taking this as a given for the moment, this lets us write the complex number $z$ very simply:
$$z = r e^{j \theta}$$
Any complex number can be written in this form, where $r$ and $\theta$ are real numbers specifying the magnitude and phase of the complex number.
Importantly, from this definition, the rules about multiplication and division of complex numbers are very easy to work with. Since for any exponents $e^A e^B = e^{(A+B)}$ and $\frac{e^A} {e^B} = e^{(A-B)}$:
$$ \begin{align} z_1 z_2 & = \big( r_1 e^{j \theta_1} \big) \big( r_2 e^{j \theta_2} \big) = (r_1 r_2) e^{j(\theta_1 + \theta_2)} \\ \frac {z_1} {z_2} & = \frac {r_1 e^{j \theta_1}} {r_2 e^{j \theta_2}} = \frac {r_1} {r_2} e^{j (\theta_1 - \theta_2)} \end{align} $$
It's reasonable to ask what it even means to raise a number to an imaginary number. Exponentiation of any base $b^n$ is clear enough when $n$ is a positive integer: multiplying $b$ by itself $n$ times. When $n$ is not an integer, it's stranger: $n=\frac {1} {2}$ implies a square root, for example. But any exponentiation can be transformed to one in base $e$, the base of the natural logarithm:
$$ \begin{align} e^y & = b^n \\ \ln {(e^y)} & = \ln {(b^n)} \\ y & = n \ln b \\ e^{(n \ln b)} & = b^n \end{align} $$
So far we've only shown that any real exponentiation can be made into a natural exponent and a natural logarithm. But for a computer to evaluate any (non-integer) exponent or any natural logarithm, we have to find a way to express these operations in terms of basic arithmetic: addition, multiplication, division, etc. Fortunately, we have something called the Taylor series expansion of $e^x$:
$$e^x = 1 + x + \frac {x^2} {2!} + \frac {x^3} {3!} + \frac {x^4} {4!} + \frac {x^5} {5!} + \frac {x^6} {6!} + \frac {x^7} {7!} + \dots = \sum_{n=0}^{\infty} \frac {x^n} {n!}$$
This is an infinitely long series, but each term is simply a polynomial term in $x$: easy enough to calculate by multiplying $x$ by itself an integer number of times directly. The terms eventually get smaller and smaller toward 0 because the factorial function in the denominator grows faster than the polynomial in the numerator, so the infinite series converges to a finite value for all inputs.
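Here's a minimal Python sketch of this idea: a partial sum of the series, compared against the library's `math.exp`. The number of terms (30) is an arbitrary choice that's more than enough for small inputs:

```python
import math

def exp_taylor(x, terms=30):
    """Partial sum of the Taylor series for e^x."""
    total, term = 0.0, 1.0  # term starts at x^0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turn x^n/n! into x^(n+1)/(n+1)!
    return total

print(exp_taylor(1.0))  # close to e = 2.71828...
print(math.exp(1.0))
print(math.isclose(exp_taylor(2.5), math.exp(2.5)))  # True
```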
Other functions like $\sin(x)$ and $\cos(x)$ also have Taylor series expansions as polynomials:
$$ \begin{align} \cos(x) & = 1 - \frac {x^2} {2!} + \frac {x^4} {4!} - \frac {x^6} {6!} + \dots & = \sum_{n=0,2,4,6,8,\dots \ \text{(even} \ n \ \text{only)}}^{\infty} (-1)^{\frac{n}{2}} \frac {x^n} {n!} \\ \sin(x) & = x - \frac {x^3} {3!} + \frac {x^5} {5!} - \frac {x^7} {7!} + \dots & = \sum_{n=1,3,5,7,9,\dots \ \text{(odd} \ n \ \text{only)}}^{\infty} (-1)^{\frac{(n-1)}{2}} \frac {x^n} {n!} \end{align} $$
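These two series are also easy to check numerically. The partial-sum implementations below follow the even-only and odd-only index patterns above (20 terms each, an arbitrary but generous cutoff):

```python
import math

def cos_taylor(x, terms=20):
    # Even powers only, with alternating signs (-1)^(n/2)
    return sum((-1) ** (n // 2) * x ** n / math.factorial(n)
               for n in range(0, 2 * terms, 2))

def sin_taylor(x, terms=20):
    # Odd powers only, with alternating signs (-1)^((n-1)/2)
    return sum((-1) ** ((n - 1) // 2) * x ** n / math.factorial(n)
               for n in range(1, 2 * terms, 2))

print(math.isclose(cos_taylor(1.2), math.cos(1.2)))  # True
print(math.isclose(sin_taylor(1.2), math.sin(1.2)))  # True
```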
When comparing the Taylor series for $\cos(x)$ and $\sin(x)$ with that for $e^x$ above, you can see there's something interesting going on: all the $e^x$ terms with even powers of $x^n$ (including $n=0$) are contained within the series for $\cos(x)$, and all the odd power terms are contained within the series for $\sin(x)$ -- except for a pattern of alternating minus signs!
Now let's take a look at the unit imaginary number $j = \sqrt{-1}$ and see what happens when it's raised to the n-th power:
$$ \begin{align} 1 & = j^0 = j^4 = j^8 = j^{12} = \dots \\ \sqrt{-1} & = j^1 = j^5 = j^9 = j^{13} = \dots \\ -1 & = j^2 = j^6 = j^{10} = j^{14} = \dots \\ -\sqrt{-1} & = j^3 = j^7 = j^{11} = j^{15} = \dots \end{align} $$
The idea that multiplication by $j$ is like rotating 90 degrees counterclockwise in the real-imaginary plane is a useful one here. After four rotations, we're back to where we started.
Now look at just the first and third rows listed here:
$$ \begin{align} j^n & = +1 \quad \text{for} \ n=0,4,8,12,\dots \\ j^n & = -1 \quad \text{for} \ n=2,6,10,14,\dots \end{align} $$
If you look at the Taylor series expansion for $\cos(x)$, it has $(-1)^{\frac{n}{2}}$ in each term, which maps precisely to this sequence for all even values of $n$. Simply by comparing values, for even values of $n$, we can write:
$$(-1)^{\frac{n}{2}} = j^n \quad \text{for} \ n=0,2,4,6,8,\dots$$
Of course we could also have observed that $$(-1)^{\frac{n}{2}} = \big( (-1)^{\frac{1}{2}} \big) ^ n = (\sqrt{-1})^n = j^n$$ directly, but it's more concrete to see the matching of values within each term.
For the odd terms, we find something similar:
$$ \begin{align} j^n & = +j \quad \text{for} \ n=1,5,9,13,\dots \\ j^n & = -j \quad \text{for} \ n=3,7,11,15,\dots \end{align} $$
If you look at the Taylor series expansion for $\sin(x)$, it has $(-1)^{\frac{(n-1)}{2}}$ in each term. This gives us the correct sequence of positive and negative signs, but is missing the $j$. (This missing factor of $j$ is OK -- we'll simply multiply by $j$ shortly to bring it back.)
$$(-1)^{\frac{(n-1)}{2}} = j^{(n-1)} \quad \text{for} \ n=1,3,5,7,9,\dots$$
So finally, let's reassemble everything. Here is the series for $\cos(\theta)$:
$$ \begin{align} \cos(\theta) & = \sum_{n=0,2,4,6,8,\dots \ \text{(even} \ n \ \text{only)}}^{\infty} (-1)^{\frac{n}{2}} \frac {\theta^n} {n!} \\ \cos(\theta) & = \sum_{n=0,2,4,6,8,\dots \ \text{(even} \ n \ \text{only)}}^{\infty} j^n \frac {\theta^n} {n!} \\ \cos(\theta) & = \sum_{n=0,2,4,6,8,\dots \ \text{(even} \ n \ \text{only)}}^{\infty} \frac {(j \theta)^n} {n!} \\ \cos(\theta) & = \color{red} {1 - \frac {\theta^2} {2!} + \frac {\theta^4} {4!} - \frac {\theta^6} {6!} } + \dots \end{align} $$
And here's the series for $j \sin(\theta)$ (we've simply multiplied every term by $j$ to fix the "missing" imaginary value from the sequence above):
$$ \begin{align} j \sin(\theta) & = \sum_{n=1,3,5,7,9,\dots \ \text{(odd} \ n \ \text{only)}}^{\infty} j (-1)^{\frac{(n-1)}{2}} \frac {\theta^n} {n!} \\ j \sin(\theta) & = \sum_{n=1,3,5,7,9,\dots \ \text{(odd} \ n \ \text{only)}}^{\infty} j \cdot j^{n-1} \frac {\theta^n} {n!} \\ j \sin(\theta) & = \sum_{n=1,3,5,7,9,\dots \ \text{(odd} \ n \ \text{only)}}^{\infty} j^n \frac {\theta^n} {n!} \\ j \sin(\theta) & = \sum_{n=1,3,5,7,9,\dots \ \text{(odd} \ n \ \text{only)}}^{\infty} \frac {(j \theta)^n} {n!} \\ j \sin(\theta) & = \color{blue} { j \theta - j \frac {\theta^3} {3!} + j \frac {\theta^5} {5!} - j \frac {\theta^7} {7!} } + \dots \end{align} $$
And the series for $e^{j \theta}$ will proceed similarly: $$ \begin{align} e^{j \theta} & = \sum_{n=0}^{\infty} \frac {(j \theta)^n} {n!} \\ e^{j \theta} & = 1 + j \theta + \frac {(j \theta)^2} {2!} + \frac {(j \theta)^3} {3!} + \frac {(j \theta)^4} {4!} + \frac {(j \theta)^5} {5!} + \frac {(j \theta)^6} {6!} + \frac {(j \theta)^7} {7!} + \dots \\ e^{j \theta} & = \color{red}{1} \color{blue}{+ j \theta} \color{red}{ - \frac {\theta^2} {2!}} \color{blue}{ - j \frac {\theta^3} {3!}} \color{red} {+ \frac {\theta^4} {4!}} \color{blue} { + j \frac {\theta^5} {5!} } \color{red} {- \frac {\theta^6} {6!}} \color{blue} { - j \frac {\theta^7} {7!}} + \dots \end{align} $$
And so, when we combine $\cos(\theta)$ (providing the even terms) and $j \sin(\theta)$ (providing the odd terms), we find precisely:
$$e^{j \theta} = \cos(\theta) + j \sin(\theta)$$
This is known as Euler's formula.
Sometimes this is evaluated at a single point $\theta = \pi$ and written as:
$$e^{j \pi} + 1 = 0$$
This is famously known as Euler's identity, and now hopefully you can understand where it comes from: just a special case of Euler's formula.
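Both the formula and the identity are easy to confirm numerically with Python's `cmath` module:

```python
import cmath
import math

theta = 0.7  # an arbitrary angle

lhs = cmath.exp(1j * theta)                       # e^(j theta)
rhs = complex(math.cos(theta), math.sin(theta))   # cos(theta) + j sin(theta)
print(abs(lhs - rhs))  # essentially zero

# Euler's identity: e^(j pi) + 1 = 0, up to floating-point rounding
print(cmath.exp(1j * math.pi) + 1)  # tiny imaginary residue, ~1e-16
```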
As mentioned earlier, complex numbers can be used to represent the magnitude and phase of a sine wave. These two values -- magnitude and phase -- are all that's needed to specify any sine wave.
Recalling our earlier example:
At the particular frequency of this example, the filter can be thought of as (approximately) multiplying the magnitude of the input sine wave by 0.1, and (approximately) modifying the phase by -90 degrees. (We're saying "approximately" because we've done some rounding to nice values. We'll discuss how to analyze and design such filters in later chapters, but for now, just take it as a given that this design does approximately that.)
If we look at this purely in the time domain, for our single frequency $f=1000 \text{Hz}$, we might consider our input $x(t)$ and output $y(t)$:
$$ \begin{align} x(t) & = \sin(2 \pi \cdot 1000 t) \\ y(t) & \approx \frac{1}{10} \sin(2 \pi \cdot 1000 t - \frac {\pi} {2} ) \end{align} $$
This is a simple, linear filter: if we double the magnitude of $x(t)$, the magnitude of $y(t)$ also doubles. If we change the phase of the input by 20 degrees, we also change the phase of the output by 20 degrees. Yet, perhaps surprisingly, it would be very hard to write a single general algebraic formula that describes the filter transfer function $\frac {y(t)} {x(t)}$. The amplitude part would factor out easily, but the phase part is trapped inside some sine expressions.
Instead, we can think about our input and output signals as phasors. This means that instead of treating $x(t)$ as a function of time, we just treat it as a single complex number $X$ which represents a sine wave at our particular frequency. The magnitude and phase of $X$ represent that of the sine wave. Similarly, our output is just a single complex number $Y$. The advantage of doing so is that we can now represent our entire filter (at this frequency) very compactly:
$$Y = (\frac{1}{10} e^{-j \frac{\pi}{2}}) \cdot X$$
We're making two big assumptions here in order to work with these complex numbers called phasors:
First, we're assuming that putting in a sinusoidal wave means we'll get something shaped like a sinusoidal wave out. (Later we'll show that this is always true for Linear Time-Invariant (LTI) Systems, but assume it's true for now.)
Second, we're assuming that we're only working at one single particular frequency (for now) -- or equivalently we're assuming that if we put in only a sinusoidal signal at frequency $f$, we'll only get a sinusoidal signal at $f$ out. (Again, we'll show this to be true for LTI systems. Additionally, in the future, we'll look at how to combine different sinusoids at different frequencies -- the key insight to Laplace and Fourier Transforms and the Frequency Domain.)
As long as we're comfortable with these assumptions, we can now go back and forth between our two representations.
Suppose our input signal is $x(t) = 5 \cos(2 \pi \cdot 1000 t)$: we can represent this with a phasor of magnitude 5 and phase 0, or
$$ \begin{align} X & = 5 e^{j 0} \\ & = 5 \end{align} $$
We can apply our filter $(\frac{1}{10} e^{-j \frac{\pi}{2}})$ to find the phasor for our output:
$$ \begin{align} Y & = (\frac{1}{10} e^{-j \frac{\pi}{2}}) \cdot X \\ & = (\frac{1}{10} e^{-j \frac{\pi}{2}}) \cdot (5 e^{j 0}) \\ & = 0.5 e^{j (- \frac{\pi}{2} + 0)} \\ & = 0.5 e^{- j \frac{\pi}{2}} \end{align} $$
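In Python, this entire filter computation is just one complex multiplication:

```python
import cmath
import math

X = 5 * cmath.exp(0j)                          # input phasor: magnitude 5, phase 0
H = (1 / 10) * cmath.exp(-1j * math.pi / 2)    # the filter, as a complex number

Y = H * X
print(abs(Y))          # 0.5
print(cmath.phase(Y))  # -pi/2 (to floating-point precision)
```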
How do we go back and forth from the phasor representation to the time-domain representation?
First, note that we can write the frequency $f=1000 \text{Hz}$ as an angular frequency, often denoted by the Greek letter $\omega$, where $\omega = 2 \pi f$; so for this example, $\omega = 2 \pi \cdot 1000$. We do this just because it gets tiring to write $2\pi f$ everywhere!
Now, to go from a phasor $X = 5 e^{j 0}$ to the function $x(t)$, we just multiply $X$ by $e^{j \omega t}$ and then take the real part of the result:
$$ \begin{align} e^{j \theta} & = \cos(\theta) + j \sin(\theta) \\ e^{j \omega t} & = \cos(\omega t) + j \sin(\omega t) \end{align} $$
This complex value $e^{j \omega t}$ is our base complex-valued sinusoid. Note that it keeps track of both a cosine and a sine at the same frequency within its real and imaginary parts, respectively. When we take the real part $\Re{(5 e^{j 0} e^{j \omega t})}$ we find:
$$ \begin{align} x(t) & = \Re { \big( X \cdot e^{j \omega t} \big) } \\ & = \Re { \Big( 5 \big( \cos(\omega t) + j \sin(\omega t) \big) \Big) } \\ & = \Re { \big( 5 \cos(\omega t) + j 5 \sin(\omega t) \big) } \\ & = 5 \cos(\omega t)\\ \end{align} $$
This was a particularly simple example because X was purely real. But now let's find $y(t)$ from our complex phasor Y:
$$ \begin{align} y(t) & = \Re { \big( Y \cdot e^{j \omega t} \big) } \\ & = \Re { \Big( 0.5 e^{- j \frac{\pi}{2}} \big( \cos(\omega t) + j \sin(\omega t) \big) \Big) } \\ & = \Re { \Big( 0.5 (-j) \big( \cos(\omega t) + j \sin(\omega t) \big) \Big) } \\ & = \Re { \big( 0.5 (-j) \cos(\omega t) + j 0.5 (-j) \sin(\omega t) \big) } \\ & = \Re { \big( - j 0.5 \cos(\omega t) + 0.5 \sin(\omega t) \big) } \\ & = 0.5 \sin(\omega t) \\ \end{align} $$
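We can confirm this phasor-to-time-domain round trip numerically, checking $\Re{(Y e^{j \omega t})}$ against the closed form $0.5 \sin(\omega t)$ at a few arbitrary instants:

```python
import cmath
import math

omega = 2 * math.pi * 1000                # angular frequency for f = 1000 Hz
Y = 0.5 * cmath.exp(-1j * math.pi / 2)    # the output phasor from above

def y(t):
    # Time-domain signal: real part of Y * e^(j omega t)
    return (Y * cmath.exp(1j * omega * t)).real

for t in [0.0, 0.0001, 0.00025, 0.0004]:
    print(math.isclose(y(t), 0.5 * math.sin(omega * t), abs_tol=1e-12))  # True
```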
In this case, $Y$ was purely imaginary, and we find that $y(t)$ purely has a $\sin(\omega t)$ term. In general, however, if $Y$ has nonzero real and imaginary parts, then the time-domain $y(t) = \Re { \big( Y \cdot e^{j \omega t} \big) }$ will have both cosine and sine terms.
Note that it's always possible to write a linear combination of $\cos(x)$ and $\sin(x)$ as only $\cos(x + \theta)$ for some phase angle $\theta$. In simple 90-degree cases, we may have a simple expression such as:
$$ \begin{align} \sin(x) & = \cos(x - \frac {\pi} {2}) \\ \cos(x) & = \sin(x + \frac {\pi} {2}) \end{align} $$
In more advanced cases for intermediate values, remember that:
$$ \begin{align} f(t) & = \Re{\big( e^{j \theta} e^{j \omega t} \big)} \\ & = \Re{\big( e^{j \theta + j \omega t} \big)} \\ & = \Re{\big( e^{j ( \theta + \omega t )} \big)} \\ & = \Re{\big( \cos(\theta + \omega t) + j \sin (\theta + \omega t) \big)} \\ & = \cos(\theta + \omega t) \end{align} $$
So for our example above, we could write:
$$y(t) = 0.5 \cos(\omega t - \frac {\pi} {2})$$
Here, we've shown that while it is very hard to write a general algebraic function for this simple filter that transforms our input sine wave $x(t)$ directly into the output sine wave $y(t)$, it's actually quite simple if we think of both input and output as being complex-valued phasors representing the magnitude and phase of the corresponding sinusoid. Applying the filter then becomes simply multiplication by a complex number, which is easy. And we can get back from our complex phasor $Y$ to our time-domain signal $y(t)$ by multiplying by our complex sine wave $e^{j \omega t}$ and then finally taking the real part.
$$y(t) = \Re{ \big( Y e^{j \omega t} \big) }$$
In the next section, Linear & Nonlinear, we'll switch gears and talk about linearity and why it's so useful to understanding and designing systems of all kinds.