Floating Point Representation

It’s exam season again which means my build time is a bit limited. I thought I’d take the opportunity while I’m studying to write some more content for the basics area of this site. To start out I thought I’d build on my post regarding Binary Representation. With an understanding of binary we can begin to explore one of the more confusing data types: floating point. I know this topic is straying a little from audio synthesis, but if you plan on working with microprocessors it’s a good thing to understand.

Binary Fractional Representation

The first piece of this puzzle is understanding how fractional numbers, that is to say non-integer numbers, are represented in binary. You may remember that in binary each bit’s value can be determined by raising 2 to the power of that bit’s position. So the first bit is 2^0 = 1, the second is 2^1 = 2 and so on. For numbers on the other side of the decimal point we continue the same pattern but move into negative exponents. The table below should offer some clarity.
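If you’d rather see that pattern than read it, a couple of lines of plain Python will print the weights on either side of the point (nothing assumed here beyond the rule above):

```python
# Each bit's weight is 2 raised to its position; fractional
# positions simply continue into negative exponents.
for position in range(3, -4, -1):
    print(position, 2.0 ** position)
```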

This means if we wanted to express a decimal number like 10.625, we could do so in binary by writing 1010.101.
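We can check that claim in code. Here’s a small sketch (the helper name `bin_frac_to_dec` is my own) that evaluates a binary string with a fractional part using the weights described above:

```python
def bin_frac_to_dec(s):
    """Evaluate a binary string like '1010.101' as a decimal number."""
    whole, _, frac = s.partition('.')
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -i   # fractional bits carry negative exponents
    return value

print(bin_frac_to_dec('1010.101'))  # → 10.625
```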

Conversions of Fractional Numbers

You may remember that we can convert an integer from decimal to binary by repeatedly halving it and noting the remainders. There is a similar algorithm for converting fractional numbers. This time, however, we repeatedly double the number and note whether the result has reached one (if it has, we note a 1 and subtract the one before doubling again). Taking the example 10.625 from earlier, we start with the whole number part (10).

Reading from the bottom up, this gives us the binary representation of the whole number portion (1010). Next for the fractional portion (0.625) we can repeatedly double.

This time we read from the top down to get the fractional part in binary: .101. Putting it together we get the answer from above, 1010.101.
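Both halves of the algorithm translate directly into code. This is a rough sketch of my own (the name `dec_to_bin_frac` and the 12-bit cap are my choices, since many fractions never terminate in binary):

```python
def dec_to_bin_frac(value, max_bits=12):
    whole = int(value)
    frac = value - whole
    # Whole part: repeatedly halve, noting remainders, then read bottom-up
    bits = ''
    while whole:
        bits = str(whole % 2) + bits
        whole //= 2
    bits = bits or '0'
    # Fractional part: repeatedly double; note a 1 each time we reach one
    frac_bits = ''
    while frac and len(frac_bits) < max_bits:
        frac *= 2
        if frac >= 1:
            frac_bits += '1'
            frac -= 1
        else:
            frac_bits += '0'
    return bits + ('.' + frac_bits if frac_bits else '')

print(dec_to_bin_frac(10.625))  # → 1010.101
```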

Scientific Notation in Base 2

If you’ve ever had to do math involving very large or very small numbers you’ve probably encountered scientific notation. It’s a fairly straightforward concept. If you have a number with a bunch of zeros like 30,000,000, you know that dividing by 10 would remove one of the zeroes. If you divided by 10 seven times you would be left with just 3. That means 3 * 10 * 10 * 10 * 10 * 10 * 10 * 10 would be equivalent to 30,000,000. For brevity we just write 3 * 10^7.

This process also works for zeros on the other side of the decimal point. We simply use a negative exponent in these cases. For instance, 0.000015 can be represented as 1.5 * 10^-5. The key here is that we multiply or divide by ten in a base ten system. So you can guess what we use in base 2.

You may not have encountered scientific notation in binary but it works in much the same way. You can slide the decimal point back and forth by multiplying or dividing by 2. For instance, our example from earlier (1010.101) could be represented by writing 1.010101 * 2^3. This is at the core of floating point representation.
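You can verify that normalization numerically. This sketch recovers the exponent with a base-2 logarithm (assuming a positive input):

```python
import math

x = 10.625
exponent = math.floor(math.log2(x))  # 3: how far we slide the point
significand = x / 2 ** exponent      # 1.328125, which is 1.010101 in binary
print(significand, exponent)         # → 1.328125 3
```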

Floating Point

So that was a lot of preamble… but what actually is floating point? At a high level, floating point is a way to represent fractional numbers digitally. Through some clever design floating point allows us to represent both incredibly large and incredibly small numbers with the same basic architecture.

A floating point number is made up of three parts: the sign, the exponent and the mantissa. The first, the sign bit, is the easiest to understand. If the sign bit is a 1 the number is negative; if it’s a 0 the number is positive.

The Mantissa

The mantissa holds the actual digits of the number. If we go back to our example of 10.625 (1010.101), we would slide the decimal point to just after the first digit, giving us 1.010101. Take note of how many places we move the decimal point, we’ll need that later. Also notice that regardless of the number we do this with, we will always have a 1 on the left of the decimal point. Since we can always assume this 1 will be there we can actually leave it out of our final representation. This leaves us with 010101, which is what you would find in the mantissa segment of a floating point number.

The Exponent

The final piece is the exponent. This represents how many digits the decimal point must be moved from the actual number to reach the mantissa. In our example we’ve moved the decimal point 3 places to the left, but there’s a little more to the story. In order for the exponent to represent both positive and negative values a bias is added. The bias is 2^(k-1) - 1 for a k-bit exponent, roughly half the range of numbers that can be represented. That means if we have 8 bits set aside for the exponent (0-255) a bias of 127 is used. That way even if we have a negative exponent it can still be represented using an unsigned number. Adding 127 to our exponent (3) we end up with a value of 130. We can represent this in binary with 1000 0010.
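The bias arithmetic is easy to check with a few lines of Python (a sketch of the rule above, not a library call):

```python
exponent_bits = 8
bias = 2 ** (exponent_bits - 1) - 1   # 127 for an 8-bit exponent
stored_exponent = 3 + bias            # our exponent of 3 is stored as 130
print(bias, stored_exponent, format(stored_exponent, '08b'))  # → 127 130 10000010
```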

Putting It Together

Now that we’ve determined the values for our sign bit, exponent and mantissa we can put them together into a floating point number. We can do so in one of two ways. Floating point numbers are divided into either singles or doubles which refers to the level of precision each has available. A single is made of 32 bits (4-bytes) while a double takes 64 bits (8-bytes). I will go over each of them with our current example.

Single Precision

In single precision we use 32 bits. Of these 32 bits, 1 is used for the sign, 8 for the exponent and 23 for the mantissa. Using the values we determined previously, the single precision representation of 10.625 would be:

S | Exponent | Mantissa

0 | 1000 0010 | 0101 0100 0000 0000 0000 000

Notice I have added zeros to the tail end of the mantissa to fill the available bits. This won’t affect the number. It’s also pretty clear looking at this example that even with the lower precision of a single we can represent an incredible spectrum of numbers. We can use exponents from 127 down to -126 (the all-zeros and all-ones exponent patterns are reserved for special values like zero and infinity). Just to give an idea of the scale, that approximately translates to a number with 38 zeros before or after the decimal point in base 10.
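You can confirm this layout for yourself with Python’s standard `struct` module, which lets us reinterpret the 32 bits of a single as an unsigned integer (the field-slicing helper is my own sketch):

```python
import struct

def float32_fields(x):
    # Pack as a big-endian 32-bit float, reread the same bytes as an unsigned int
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                 # top bit
    exponent = (bits >> 23) & 0xFF    # next 8 bits
    mantissa = bits & 0x7FFFFF        # bottom 23 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = float32_fields(10.625)
print(sign, format(exponent, '08b'), format(mantissa, '023b'))
# → 0 10000010 01010100000000000000000
```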

Double Precision

In case single precision doesn’t provide the range or precision necessary, another, larger option is available. With double precision we use 64 bits to represent a number. This is divided into 1 sign bit, 11 bits for the exponent and 52 bits for the mantissa.

One important note is that since we now have more than 8 bits for the exponent we need to adjust our bias. 11 bits provides a range from 0-2047, and the same rule as before (2^10 - 1) gives us a bias of 1023. This makes the exponent for our example 1026 (1023+3). In binary 1026 is 1000 0000 010. By putting this together with the other results calculated earlier we get a double precision floating point representation as follows:

S | Exponent | Mantissa

0 | 1000 0000 010 | 0101 0100 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

With 11 bits available for the exponent the available number range has grown exponentially (pun intended). We can now use exponents anywhere from 1023 down to -1022 (again with the extreme patterns reserved). These numbers actually go beyond what my poor Casio calculator is capable of outputting. In base 10 they equate to approximately 307 zeroes on either side of the decimal place!
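The same `struct` trick from the single precision example works for doubles, just with a 64-bit pattern (again, a sketch rather than anything official):

```python
import struct

bits = struct.unpack('>Q', struct.pack('>d', 10.625))[0]
sign = bits >> 63                    # 1 sign bit
exponent = (bits >> 52) & 0x7FF      # 11 exponent bits: 1026, i.e. 1023 + 3
mantissa = bits & ((1 << 52) - 1)    # 52 mantissa bits: 010101 then zeros
print(sign, format(exponent, '011b'))  # → 0 10000000010
```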

In Closing

Floating point representation can seem really intimidating at first. That being said, with some practice it starts to make sense and you’ll begin to develop an intuition while working with it. As you go further working with micro-controllers you will start encountering situations where you have to send and receive data, read or write registers or make bare-metal conversions between data types. In any of these situations a strong grasp of floating point (and other common data types) will serve you well.

Binary Representation

Since I’ve been working so much lately in the digital space I thought it would be pertinent to do a quick review today. I want to spend some time on one of the most fundamental ideas in digital logic, Binary Representation. I know it’s not the most exciting topic but understanding binary numbers intuitively is critical to understanding the inner workings of digital devices.

Why Do We Care?

You’ve undoubtedly run into binary numbers in pop culture. They seem to appear any time a screenwriter wants to convey that a character “speaks computer.” But what do these zeros and ones mean? And more importantly, why do we care?

At its core the answer is that a computer has no idea what a 7 is. Computers are made up of millions (or billions) of transistors. These transistors only have two states, high and low, or if you prefer, zero and one. This means everything you store in memory and any instructions you send into your processor need to be written as a series of these zeroes and ones. That goes for all your video files, pictures, video games and even your operating system itself. As far as your computer is concerned it’s all binary.

We can make it a long way working in high level languages like C or Python, but inevitably there will come a time when you have to write directly to a register or transmit raw data. This is when binary will serve you. These situations are doubly likely to occur if you are working with microprocessors, as both memory and power are limited. Further, in time-sensitive situations (like audio processing) writing straight to a register is typically faster and more efficient than using high level code.

How Does It Work?

The numbers we are familiar with are known as base 10 (or decimal) numbers. This means each digit can be one of 10 possible values (0-9). If I add one to 9 the ones digit resets to zero and the tens digit is incremented to 1 (giving you 10). When you were first learning to add numbers together you may have been taught to write the two numbers one atop the other and add each digit individually, carrying to the next digit when your answer was more than 9. This gets at the core of the base 10 system.

The binary system is no great magic trick. We simply change the base to 2. This means each digit can only hold one of two values (0 or 1). As you count upwards you start with 0. Adding one gives you 1. When you try to add another one you have to carry over to the next digit (just like adding one to 9) giving you 10. To help clarify this process I’ve written out the binary representations of the numbers 0 to 15.

Binary Representation of 0-15

It’s good to note that there are other bases commonly used as well. In computation hexadecimal (base 16) is frequently used to make very large numbers manageable. In Hexadecimal we use the letters A-F to represent 10-15.
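Python’s built-ins make it easy to hop between these bases, which is handy for building intuition (a quick illustration):

```python
n = 42
print(bin(n), hex(n))                    # → 0b101010 0x2a
print(int('101010', 2), int('2A', 16))   # → 42 42
```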

How Many Bits?

Notice in the previous table I used 4 digits and was able to represent numbers from 0 to 15 before I ran out of space. These digits are usually referred to as bits and they govern how large a number you can represent. This shouldn’t be too surprising as it is exactly how decimal works (2 digits can represent numbers up to 99, 4 digits can represent up to 9999).

So how do we know how many bits we need? In decimal each new digit has ten times the value of the previous digit (1, 10, 100 and so on). In binary we can use a similar rule, except since it’s base 2 each new digit is double the previous one. Here I have shown the value of a one in each of the first 8 digits to illustrate this rule.

Values of Binary Digits

Additionally, in decimal you can find the maximum value you can obtain with a given number of digits using the following formula:

N = 10^n - 1

Where n is the number of digits and N is the highest value possible. We can do the same in binary by swapping the 10 for a 2:

N = 2^n - 1

Using this formula we can calculate the range of numbers available given any number of bits:

Maximum Value Based on Quantity Of Bits
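That formula is one line of Python (a sketch; `max_value` is a name of my own choosing):

```python
def max_value(bits):
    # Highest value representable in `bits` binary digits: 2^n - 1
    return 2 ** bits - 1

print(max_value(4), max_value(8), max_value(16))  # → 15 255 65535
```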

Conversions

There are various methods to convert between decimal and binary. The route I have always found easiest though involves repeatedly dividing a number by two. Each time you divide by two you check if there is a remainder and note that remainder (it will always be 1 if it exists). If there is no remainder (ie. the number is even) note a zero. When you finish you can reverse the order of the numbers you have noted to see the binary representation.

Let’s try applying this algorithm to 42:

Binary Conversion of 42

We can see that the binary conversion of 42 is 101010 by reading the remainders from the bottom up. Additionally, you can verify your answer by multiplying each digit by the values determined earlier in this article (1*32 + 1*8 + 1*2). Doing this you should get back your original number.
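The repeated-division method translates almost word for word into code (a sketch; `to_binary` is my own name for it):

```python
def to_binary(n):
    remainders = []
    while n > 0:
        remainders.append(n % 2)  # 1 if there is a remainder, 0 if n is even
        n //= 2                   # halve, discarding the remainder
    # Reverse the noted remainders to read them from the bottom up
    return ''.join(str(r) for r in reversed(remainders)) or '0'

print(to_binary(42))  # → 101010
```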

Closing

Before I finish up for the day I have one final question. How high can you count on your fingers? If you answered 10 you’re not thinking with portals yet. We have 10 fingers, each of which can be either extended or folded. If we use binary counting we can reach 2^10 – 1. That’s 1023! You’ll never need a calculator again!

That’s all for me today. I hope you’ve found this refresher helpful, I’ll be back soon with further updates to my Arduino R2R DAC project.

Filtering PWM: Continued

In an earlier post I spoke in a fairly abstract way about filtering Pulse Width Modulated signals. I explained how when using a basic RC filter we have to compromise between the stability of the DC output and the time needed for the filtered signal to reach the average voltage of the source. Today I’d like to return to this problem with a slightly more analytical approach. My goal in doing this is to show you the process I will use to select the filter values for my Arduino PWM project.

Since this is ultimately for my Arduino based project I will be using a frequency of 980Hz for my calculations. This gives me a period of 0.00102s per cycle.

The Method

Step Response Equation for RC Circuits: V(t) = Vs + (V0 - Vs)e^(-t/RC)

My approach to this problem is fairly straightforward, if a bit computationally expensive. By using the step equation for voltage I introduced in my post regarding the step response of RC circuits, I should be able to move through each step of the PWM signal for a given RC value until the high and low voltages stabilize. Each step I will set V0 as the output of the last step and Vs to 0 or 5 Volts (based on whether the step is up or down). At that point I can note how many steps it’s taken and what the remaining peak to peak voltage is. For my initial calculations I will use a duty cycle of 50%, meaning a step occurs every 0.00051s.

Crunching Some Numbers

I started out attempting to do this project in Excel. I set up a cell formula and began populating results but before I knew it my table had grown well beyond a workable size and I decided to step back and regroup.

My solution was to write a python script that given a frequency, duty cycle, array of RC values and high/low voltage values would run the calculations for me and output a plot of the resulting ripple voltages and stabilization (settling) times. I will add my code at the end of this post so you can play with it yourself.

I started out with a fairly wide range of RC values from 0.0001 to 0.1 and stopped the calculations when the high voltage was within 0.01mV of the previous high voltage. This gave me the following two plots:

RC Values vs Ripple Voltage and Settling Time

Here we can get an idea of the nature of these two values. We can identify that (as expected) the ripple exponentially decreases as the RC value increases. Meanwhile the settling time appears to be a fairly linear function increasing as the RC values increase. It is also clear looking at these graphs that my initial range was much too large. I am going to eliminate all RC values where the peak to peak voltage is far greater than 1V by starting the scale at RC = 0.001. Meanwhile I will eliminate all values with a settling time greater than 0.1s (approximately RC = 0.01). Additionally I modified the voltage resolution to be 10mV.

RC Values vs Ripple Voltage and Settling Time (Modified Range)

This has made the picture significantly clearer. Particularly with the voltage ripple we can now see the slope of the exponential decay.

Sanity Check

Low Pass Filter Simulation Circuit

Before finishing up for today I wanted to make sure my math made sense. To do so I built a simple RC filter in NI Multisim and fed it a steady square wave signal at 980Hz. I chose an RC of 0.006 which should give me a peak to peak voltage after filtering of 0.2V and a settling time of approximately 0.024s (24ms). The output I received from the oscilloscope was as follows:

Ripple Voltage Test

The peak to peak voltage of the ripple appears to be just over 200mV (0.2V) which is exactly what we predicted!
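As a cross-check on that reading: for a 50% duty cycle square wave at steady state, the same step equation gives a closed form for the ripple (this derivation is my own shortcut, not something produced by the simulation):

```python
import math

def ripple(v_high, rc, frequency):
    # Decay factor over one half period (50% duty cycle assumed)
    a = math.exp(-1 / (2 * frequency * rc))
    # At steady state each half-cycle's charge exactly cancels the discharge,
    # which pins the peak-to-peak swing at v_high * (1 - a) / (1 + a)
    return v_high * (1 - a) / (1 + a)

print(ripple(5, 0.006, 980))  # ≈ 0.21 V, matching the oscilloscope
```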

Settling Time Test

The settling time is a little bit more open to interpretation. I found that based on how I set v_resolution in my program (the value that v_high - v_high_last must fall under to be considered stable) I got profoundly different values for this reading. Basically what’s happening is that the output voltage is exponentially approaching the average voltage. This means even though the increases stop being visible on the oscilloscope output they are still proceeding each step in exponentially smaller and smaller amounts. That being said, we can see that within the 25ms timeframe the output has absolutely stabilized. In practice we can likely get by with a much smaller timeframe based on the oscilloscope output above.

That’s probably enough science for today. I’ll be building further on these concepts in my next post where I will be filtering the PWM signal from my Arduino to see what I can create with it.

Code

Finally as promised here is the code I used for these calculations. Full disclosure I am not a software designer by any stretch so this may not be the prettiest program in the world but I hope it is helpful.

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import math
import pandas as pd


#Generate an array of rc values between rc_start and rc_stop with a step size of rc resolution
def create_rc_array(rc_start, rc_stop, rc_resolution):
    array = np.arange(rc_start, rc_stop, rc_resolution, dtype=float)
    return array


#Calculates period, time high and time low based on frequency and duty cycle
def get_period(frequency, duty_cycle):
    period = 1/frequency
    time_high = period*duty_cycle
    time_low = period*(1-duty_cycle)

    return period, time_high, time_low


#Given starting voltage, final voltage, rc and time solves step response equation
def v_calc(v0, vf, rc, time):
    exponent = -(time/rc)
    v1 = vf
    v2 = v0 - vf

    return v1 + v2*math.exp(exponent)


#Iterates through list of rc values
#For each rc value will calculate v_high and v_low for each step until v_high-v_high_last < v_resolution
def generate_values(rc_values, period, time_high, time_low, v_resolution, v_low, v_high):
    delta_v_list = []
    settling_times = []

    for rc in rc_values:
        time = 0
        v_up_old = 5     # seeded above v_resolution so the loop runs at least once
        v_up = 0
        v_down = 0
        delta_v = 5
        while abs(v_up_old - v_up) > v_resolution:
            time = time + period
            v_up_old = v_up
            v_up = v_calc(v_down, v_high, rc, time_high)
            v_down = v_calc(v_up, v_low, rc, time_low)
            delta_v = abs(v_up-v_down)

        delta_v_list.append(delta_v)
        settling_times.append(time)

    return delta_v_list, settling_times


def main():
    #Declare parameters
    frequency = 980
    duty_cycle = 0.5

    rc_start = 0.001
    rc_stop = 0.01
    rc_resolution = 0.00001

    v_low = 0
    v_high = 5
    v_resolution = 0.01

    #create derived values
    rc_values = create_rc_array(rc_start, rc_stop, rc_resolution)
    period, time_high, time_low = get_period(frequency, duty_cycle)
    
    #Run calculations
    delta_v_list, settling_times = generate_values(rc_values, period, time_high, time_low, v_resolution, v_low, v_high)


    #Plotting
    sns.set_theme()

    ripple_frame = pd.DataFrame({'RC Values (1/s)': rc_values, 'Peak to Peak Voltage Ripple (V)': delta_v_list})
    settle_frame = pd.DataFrame({'RC Values (1/s)': rc_values, 'Settling Time (s)': settling_times})

    fig, ax = plt.subplots(1,2)

    sns.lineplot(x='RC Values (1/s)', y='Peak to Peak Voltage Ripple (V)', data=ripple_frame, ax=ax[0])

    sns.lineplot(x='RC Values (1/s)', y='Settling Time (s)', data=settle_frame, ax=ax[1])


    plt.show()



if __name__ == '__main__':
    main()

Filtering PWM Signals

In a recent post I talked about how you can use Pulse Width Modulation to create a simple voltage controller. However PWM is only half the story. Once we have our PWM signal how do we transform this from a malformed square wave into a nice steady DC voltage? There are many different techniques that can be used to do this but today I’d like to introduce you to one of the simplest, Low Pass Filters.

Low Pass Filters

I briefly introduced RC low pass filters in my post on Square Waves in RC Circuits. Put simply, these are circuits which allow low frequency signals to pass through while attenuating high frequency signals. The reason for this has to do with the capacitor’s charging. If you recall from my previous post, the time a capacitor takes to charge in an RC circuit is approximately equal to 5 times the time constant of the circuit (RC). When the frequency is low enough that the capacitor has the chance to fully charge and discharge each cycle the signal will pass through. If however the capacitor cannot fully charge/discharge, a certain amount of the current will always be passing through the capacitor to ground, which attenuates the signal at the output. The higher the frequency the greater the attenuation.

The key understanding here is what happens to the signal when it is being attenuated. You might expect if you pass a 5V square wave (0-5V) through a filter which reduces the amplitude by 50% that the resulting 2.5V signal would be from 0-2.5V. However, this is not the case. Since the capacitor cannot fully charge or discharge the signal will stabilize at the average voltage of the incoming signal. In the case of this example the signal would travel between 1.25V and 3.75V. As we attenuate it further we can get a smaller and smaller peak to peak voltage (always centered around the average voltage). The smaller these peaks, the closer you get to your target DC voltage.
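A quick numeric sketch of that example, just restating the arithmetic above:

```python
v_low, v_high = 0, 5
average = (v_low + v_high) / 2            # 2.5 V: where the output settles
attenuation = 0.5                         # the filter passes 50% of the swing
swing = (v_high - v_low) * attenuation    # 2.5 V peak to peak remaining
print(average - swing / 2, average + swing / 2)  # → 1.25 3.75
```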

Ripple Vs Stabilization Time

So that all sounds great but how do we know what capacitance and resistance to use? This actually gets a bit more complicated. When choosing our resistors and capacitors we often find ourselves balancing two undesirable characteristics of the circuit.

Consider this first filter. Obviously this is a long way from a smooth DC voltage. There is a distinct ripple in the output, with the voltage moving up and down in a sharp triangle pattern. As discussed earlier we should be able to reduce this ripple by increasing the time the capacitor takes to charge (increasing the resistance or capacitance) to further attenuate the signal. Unfortunately, this will introduce a new problem.

Here we can see that by increasing the attenuation of the signal we are able to produce a much smoother output signal. The issue here is on the left side of the simulation output. Since the capacitor is only charging and discharging a small amount the signal takes significantly longer to stabilize at the average voltage.

Where on this spectrum your filter falls depends largely on your application. If you need a very stable signal which will not vary over time you can use a large capacitor and/or resistance. If on the other hand your signal strength needs to change quickly over time and your circuit can handle a bit more ripple you may opt for a smaller capacitor to support this behavior.

As you might imagine there are additions we can make to this circuit to improve both of these behaviors. By adding additional complexity to this filter we can develop a more robust digital to analog converter. I hope though that this has provided something of a starting point to begin generating analog signals from your digital devices.

Pulse Width Modulation

You may have encountered Pulse Width Modulation when working with Arduino or other microcontrollers. PWM is a powerful tool that is often implemented to control motors and dim LEDs in electronics projects. While its typical applications are not always audio-centric, PWM can still be a useful tool in our tool belt. PWM can be used to modify the sound of our humble square wave oscillators and is foundational to many digital to analog converters.

Duty Cycle

The duty cycle of a square wave is the proportion of each cycle that the wave spends in its high state. A perfect square wave has a duty cycle of 50%. If you increase the time the wave spends high the duty cycle increases; if you decrease it the duty cycle decreases. A duty cycle of 100% would represent a DC voltage (at the output voltage of your source), while a duty cycle of 0% would represent a steady 0V.

Below I have graphed some basic PWM signals (blue) along with their duty cycle and average voltage (orange):

50% Duty Cycle
75% Duty Cycle
25% Duty Cycle

Average Voltage

The key to understanding how to use PWM often lies in understanding average voltage. In the graph above you can see the average voltage (orange line) is highest when the duty cycle is high. This should make intuitive sense since with, for example, a 75% duty cycle the voltage is high for 75% of the time and 0V for 25% of the time. You can find the average voltage of any PWM signal by multiplying the duty cycle by the max voltage. In a 5V circuit for instance, with a 20% duty cycle, the average voltage would be 1V (5*0.2=1).
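That relationship is a one-liner worth keeping around (a sketch; the function name is mine):

```python
def pwm_average(v_max, duty_cycle):
    # Average voltage of an ideal PWM signal: duty cycle times supply voltage
    return v_max * duty_cycle

print(pwm_average(5, 0.2))  # → 1.0
```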

A Quick Word On Frequency

As you might guess the frequency of a PWM signal can be critical to your system design. In an audio oscillator the role of frequency is largely unchanged: the frequency will set the tone of your output while the pulse width will allow you to adjust the fullness of your sound.

When you are using PWM to create an analog signal things get a bit more complicated. Typically a specific frequency is chosen and the subsequent filters are designed based on it (I will go further into this process when I discuss filtering PWM signals in a future post). In choosing this frequency we are looking for a balance between speed and control. At faster frequencies it is easier to filter the signal (to obtain a steady voltage) and you can change the output voltage more quickly. However, the faster the signal the harder it becomes to control the exact duty cycle. You want to maintain the fastest frequency possible at which you can still adjust the duty cycle with the precision your application requires.

I hope this has given you a framework to begin working with PWM. Keep an eye out for my next post; I will be looking at filtering these PWM signals to obtain a steady voltage.

Square Waves In RC Circuits

Last week I introduced the Step Response in RC Circuits and we looked at a simple example of turning on a power switch. Today I’d like to extend this intuition to investigate the response of an RC circuit supplied with a square wave signal. This intuition forms the basis of understanding more complex concepts like filters and pulse width modulation.

Square Waves

We’ve used square waves quite a bit when working with simple oscillators. They are easy to understand and easy to create, but how do they relate to the step response?

A square wave can be visualized as nothing more than a series of steps: first stepping up to the high voltage mark, then after some time stepping back down. Consider our example from last week. If instead of turning the power on and leaving it, you turned it on, waited for a period, then turned it back off, repeating this process over and over, you would produce a square wave. It follows that we should be able to apply our step equation to the steps in a square wave.

A quick word about pulse width: one way square waves can be manipulated is by adjusting the “Duty Cycle.” This means changing the ratio of how long the signal is high versus how long the signal is low. To keep things simple today we will work with a 50% duty cycle, meaning the signal will be high for the same period it is low.

Frequency vs Period

When working with synthesizers you’ve probably heard signals described in terms of frequency. The frequency of a sound wave (or any wave for that matter) is the number of times the signal cycles (goes up and back down) in a second and is expressed in Hz. For these calculations we need a slightly different descriptor for the speed of the waves. By taking the inverse of the frequency (1/f) we can find the period. The period describes the time in seconds that the wave takes to cycle. In a square wave with a 50% duty cycle we know that a step will take place twice every period (once up and once down). Further, these steps are evenly spaced: they take place at t = np and t = p(n + 1/2) (where n is any whole number and p is the period).
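For a concrete feel, here are the first few step times for a 2Hz, 50% duty square wave (toy numbers of my own choosing):

```python
frequency = 2
period = 1 / frequency                        # 0.5 s per cycle
# Steps up at t = n*p, steps down at t = p*(n + 1/2)
ups = [n * period for n in range(3)]
downs = [(n + 0.5) * period for n in range(3)]
print(ups, downs)  # → [0.0, 0.5, 1.0] [0.25, 0.75, 1.25]
```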

Time Constant

Relationship Between Capacitor Charging (5V circuit) and Time Constant

In my last post I introduced the time constant. In an RC circuit the time constant is defined as the product of the resistance and capacitance (RC). It is used to determine the speed at which a capacitor will fully charge or discharge. Note the time constant is the same for both charging and discharging.

The graph above shows the relationship between the charging/discharging of a capacitor and the time constant in a 5V circuit. Since this is an exponential equation the capacitor will never fully charge or discharge (in theory), however for calculations we say that the capacitor is fully charged after 5 time constants. This charge represents 99.3% of the maximum value. This means if you had a time constant of 0.2 the capacitor would take 1s to fully charge.
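The 99.3% figure falls straight out of the exponential (a quick check, nothing more):

```python
import math

rc = 0.2
t_full = 5 * rc                        # "fully" charged after 5 time constants
fraction = 1 - math.exp(-t_full / rc)  # share of the supply voltage reached
print(t_full, round(fraction, 3))      # → 1.0 0.993
```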

Filter Circuit

Passive Low Pass Filter Circuit

To illustrate how this all comes together let’s have a look at the low pass filter circuit above. Let’s pass a square wave at 1Hz (50% duty cycle) into this circuit and see what happens. The first thing we need is our time constant for this circuit:

So if our time constant is 0.1 we know that it will take 0.5s for the capacitor to fully charge or discharge (5*0.1). With our frequency of 1Hz we know that the square wave will cycle every 1s (1/1). The steps in the square wave will occur every 0.5s (switching between stepping up and stepping down). This means every step will happen exactly when the capacitor reaches full or zero charge. So what does that look like?

Effect Of Low Pass Filter on 1Hz Square Wave

Here we can see the charging and discharging cycles have turned our square wave into a sawtooth. This is only the beginning of what filters can do for us though.

The important question here is what would happen if we changed the frequency? If we doubled the frequency to 2Hz the period would become 0.5s (1/2). This means the steps would be occurring every 0.25s instead of every 0.5s. If we bring back the equation used in my last post we can see how much voltage would build across the capacitor in this time:

V(t) = Vs(1 - e^(-t/RC)) = 5(1 - e^(-0.25/0.1)) ≈ 4.59V

Here we can see that with a frequency of 2Hz the output voltage will only reach 4.59V before the input steps back down to 0V. This attenuation only gets worse as the frequency increases. At a frequency of 4Hz the peak voltage falls to 3.57V. At 8Hz it falls again to 2.32V.
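Those peak voltages all come from the same charging equation; here’s a sketch that tabulates them (assuming V(t) = Vs(1 - e^(-t/RC)) with RC = 0.1 and a 50% duty cycle, per the text):

```python
import math

def peak_voltage(vs, rc, frequency):
    t = 1 / (2 * frequency)            # charging time: half the period
    return vs * (1 - math.exp(-t / rc))

for f in (1, 2, 4, 8):
    print(f, round(peak_voltage(5, 0.1, f), 2))
# → 1 4.97 / 2 4.59 / 4 3.57 / 8 2.32
```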

Note: This is a slight oversimplification, since we are shortening both the high and low periods. This means not only the charging but also the discharging state will be cut short. The capacitor will be unable to fully charge or discharge so the attenuated signal will stabilize at an offset from zero volts.

It is this attenuation that makes this a low pass filter. Low frequency signals (below a certain critical point) are able to pass at full amplitude while high frequency signals are attenuated or even eliminated from the output. We’ll explore filters further at a later date but this should give you some idea of the mechanisms which allow them to function.

Step Response in RC Circuits

In passing I’ve referred to the fact that capacitors and inductors are time-dependent components, but I never really explained this assertion. I’d like to go over a simple case of time-dependent circuitry to clarify exactly what this means and how it differs from time-independent circuitry (resistors, diodes, etc.).

Step Response?

Time-dependent circuits, in a nutshell, are circuits which respond to changes in voltage or current over time. These changes can come in many forms but the easiest to wrap your head around (and do the math for) is a step. A step simply refers to any time the voltage (and by extension the current) in your circuit immediately changes from one value to another.

The most basic example of this is turning on the power switch of a device. The voltage in the device before the power is connected is typically 0V, and after the switch is flipped it immediately jumps to the device's operating voltage.

When we look at how the circuit responds to this change (i.e. how the current and voltage in different parts of the circuit change after this step occurs) we are looking at the step response. In a simple resistive circuit the change occurs immediately throughout the circuit; however, once you introduce capacitors and inductors the story gets a little more complicated.

RC Circuit?

A Simple RC Series Circuit

An RC circuit is a circuit which contains only resistors and capacitors. These resistors and capacitors can be arranged in series, parallel, or some combination of the two.

Capacitors

Before we get into the math it’s important to understand some of the properties of capacitors to understand how they react in circuits. A capacitor is made up of two conductive plates separated by a non-conductive layer referred to as a dielectric. These plates collect charge as current flows through the capacitor. When there is no charge on the plates current flows freely through the capacitor but as charge builds less and less current can pass through.

Once a capacitor becomes fully charged no current can flow through it and it behaves as an open circuit. Going back to our power switch example, when the power is off there is no charge on the capacitor. Once the switch is flipped, current begins flowing around the circuit, charging the capacitor until it reaches steady state. When the capacitor is fully charged no current flows and the voltage across the capacitor is at its maximum.

Math Time

Step Response Equation for RC Circuits:

V(t) = Vs + (Vo - Vs) * e^(-t/(R*C))

Wait! Don’t run, I promise it’s not as bad as it looks. There is a fairly satisfying calculus derivation for this equation but in the interest of keeping things high level I’ve skipped right to the final formula. That means all we have to do is find values for the unknowns above and we can plug them right in. So how do we find these unknowns?

The Time Constant – The first thing we want to find is a value called the time constant, which represents how quickly the circuit reaches steady state. For an RC circuit the time constant is R*C, and in the exponent the negative of the time is divided by it. We simply plug in the resistance and capacitance values from the circuit and we're all set.

Vo – Vo is the starting voltage of the circuit. This would be the voltage before the step takes place. In our basic switching example this would be equal to 0V since there is no voltage across the capacitor before the switch is flipped.

Vs – Vs represents the steady state voltage. This would be the voltage in the circuit after a long time has passed (Once the capacitor is fully charged). To find this we can replace the capacitor with an open circuit and determine the voltage between the two points.

Now that we’ve defined these variables, let’s have a look at an example to see them in action.

Example

Example Circuit

Above is a basic example of a simple RC circuit. What we want to find out is how the circuit responds when the switch is closed at t=0. If we complete the equation given above for this circuit we can find the voltage at any time after the switch is flipped by entering the desired value of t.

The first thing we can fill in is the time constant. As I said above, the time constant is equal to RC, which for our circuit is 10Ω * 100uF. Converting uF to F gives us 10 * 0.0001 = 0.001, and taking the inverse of this (since time is divided by it) gives us 1000. You can see I have filled in this value in the exponent above.

Next up is Vo. For a simple series circuit like this this step is very easy. Before the switch is closed the battery is disconnected from the circuit so no voltage is present in the circuit. This makes Vo equal to zero and it can be removed from the equation.

Finally we can calculate Vs which represents the steady state voltage of the system. Remember that when the capacitor is fully charged it allows no current to flow through the circuit. Since the voltage drop across a resistor is defined by Ohm’s Law (V=IR) and there is no current flowing through the circuit, there is no voltage drop across the resistor. This means the steady state voltage across the cap is the full battery voltage (9V).

Now we’ve got a fully defined function for the voltage at any time after t=0. The trick here is that the exponential term e^(-1000t) becomes smaller as t grows. This means the more time that passes, the larger the voltage becomes, until it reaches steady state. Graphing this function we get the following:
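The same curve is easy to sample numerically. A small Python sketch of the example circuit's step response (10Ω, 100uF, 9V battery, capacitor starting empty):

```python
import math

R = 10        # resistance in ohms (example circuit)
C = 100e-6    # capacitance: 100uF converted to farads
V_S = 9.0     # steady state voltage (full battery voltage)
V_O = 0.0     # starting voltage (capacitor empty before the switch closes)

def v_cap(t):
    """Capacitor voltage t seconds after the switch closes."""
    tau = R * C  # time constant: 0.001s, so 1/tau = 1000
    return V_S + (V_O - V_S) * math.exp(-t / tau)

# Sample the curve at a few multiples of the time constant
for t in (0.0, 0.001, 0.002, 0.005):
    print(f"t = {t*1000:3.0f} ms -> {v_cap(t):.2f} V")
```

After one time constant (1ms) the capacitor has reached about 63% of the battery voltage, and after five time constants (5ms) it is within 1% of the full 9V.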

Digital Logic

Working with digital logic may seem counterintuitive to the idea of sound synthesis. Audio is an analog phenomenon, and as such we typically steer towards analog electronics to produce or manipulate it. That being said, digital electronics can be incredibly valuable to us as synth enthusiasts. We can use them to breathe new life and depth into humble square wave oscillators, control analog devices, and interface with micro-controllers/processors. For this reason I wanted to provide an introduction to some of the more common digital logic devices you may encounter.

Digital Vs. Analog

Before going too far I’d like to reiterate the difference between analog and digital signals. An analog signal varies continuously across a range of voltages. Think of a sound wave: it is a continuous signal which rises and falls based on the intensity of the sound being captured.

A digital signal on the other hand varies between two discrete states. The signal is either high or low (0 or 1). Electronically speaking these two states are typically (though not always) 0V for low and 5V for high. Any value between these two states will either be assumed to be one of the two or ignored, depending on the components used. The square waves we have produced using 555 or 40106 oscillators could be considered digital signals as they oscillate between two distinct values.

Truth Tables

Truth Table for a Generic Inverter

Truth tables are a tool we use to clarify the function of digital logic. The tables list all possible inputs to the system and the output each would produce. In the inverter example above (and most of the components I will discuss today) these tables are quite straightforward; however, as you get into progressively more complicated digital circuits the truth tables become more complex as well.

Building Blocks

Once you have a digital signal, whether it's from a humble square wave oscillator or something more complex like a microprocessor, there are a number of operations you can carry out on it using digital logic chips. There are hundreds of different chips available but most of them fall into a few basic categories.

NOT Gate:

NOT Gates (commonly called inverters) are probably the most basic digital component you will encounter. As the name suggests they invert whatever signal you send to them: send in a 1 and you will receive a 0 from the output; send in a 0 and you will receive a 1. The 40106 I have used in previous projects is a special type of inverter (a Schmitt trigger) which uses threshold voltages to interpret analog inputs.

AND Gate:

An AND Gate is a digital logic gate which takes two inputs (notice there are now two input columns on the truth table). The output of the gate is only high if both these inputs are high.

NAND Gate:

The NAND gate is actually a combination of the AND gate and the Inverter. Notice the circuit symbol is that of the AND Gate with the small circle from the inverter added. This small circle is used in circuit symbols to indicate that an output is inverted. This means the output of the NAND gate is opposite the output of the AND gate (as seen in the truth table). A NAND gate will output high at all times except when both inputs are high.

OR Gate:

The OR Gate is a gate which will output high when either (or both) of the inputs are high. This can be very useful for combining signals or triggers from multiple sources. There are also a few modified versions of the OR gate which I will look at next.

NOR Gate:

Similar to the NAND gate, this is an OR gate with an added inverter. This means the NOR gate will output high only when neither input is high. It will go low when either (or both) inputs go high.

XOR Gate:

The XOR Gate is another special type of OR Gate. This time the output will go high when either input is high however if both inputs go high the output will go low.
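All of the gates above are simple enough to model in a few lines of Python, which also makes it easy to print their combined truth table and double-check the descriptions:

```python
def not_gate(a):     return 1 - a
def and_gate(a, b):  return a & b
def nand_gate(a, b): return not_gate(and_gate(a, b))
def or_gate(a, b):   return a | b
def nor_gate(a, b):  return not_gate(or_gate(a, b))
def xor_gate(a, b):  return a ^ b

# Print the combined truth table for all the two-input gates
print("A B | AND NAND OR NOR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {and_gate(a, b)}   {nand_gate(a, b)}   "
              f"{or_gate(a, b)}  {nor_gate(a, b)}   {xor_gate(a, b)}")
```

Note how NAND and NOR are literally built by feeding AND and OR through the inverter, just like the small circle on their circuit symbols suggests.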

This is by no means an exhaustive list but should give us a starting place to work from. We can also create progressively more complex gates by combining these different building blocks together. I hope this has given you some ideas about what you can accomplish using digital logic. Keep an eye out for more projects using these gates coming soon!

Voltage Dividers

Voltage Divider Circuit

In many of my past projects I have made use of a circuit component (or set of components) called a voltage divider. I realized however, I’ve never really spent time talking about what these were or how they work. For that reason I wanted to spend some time showing this trick and how you can use it yourself.

So What Are They?

A voltage divider divides voltage! Brilliant analysis I know, but what does that mean? And why is it useful? Simply put, a voltage divider is a way to get a smaller voltage from a larger one. For example, let's say you are trying to power a chip which requires a 5V power source from a 9V battery. In this situation you could use a voltage divider at the power pin to lower your 9V supply to the required 5V. Another important use is signal attenuation. As we'll soon see, the output of a voltage divider is a linear function of the input voltage. That's a fancy way of saying the voltage you get out will always be a specific fraction of the voltage you put in (based on the resistor values). That means if you have a waveform or audio signal you can reduce its amplitude while still maintaining the original wave shape.

Note: Voltage dividers are a cheap and easy way to reduce voltage; however, they are not highly stable or accurate. If you require a stable supply it is better to use a voltage regulator IC or a buck converter.

Math Time!

Voltage dividers are an excellent illustration of Kirchhoff's loop law, and we can use this along with Ohm's Law to calculate the output voltage given the values of R1, R2, and the input voltage.

Voltage Divider Drawn as Loop Circuit

To clarify how these calculations are done I have drawn the divider as a loop with a voltage source V1. We know from Kirchhoff's Laws that the sum of the voltages around any closed loop is zero. This means V1 minus the voltage lost across the two resistors is 0. This also means the voltage V2 will be equal to V1 minus the voltage lost across R1:

V2 = V1 - VR1

To determine the voltage dropped across R1 we can rely on Ohm's Law, which states V = IR. From this we know the voltage dropped across R1 is equal to the product of the resistance and the current. Unfortunately we still need the current for that equation, but looking at the loop as a whole we can solve for it:

I = V1 / (R1 + R2)

Now that we have an equation for current we can substitute it into the equation for the voltage drop across R1:

VR1 = V1 * R1 / (R1 + R2)

Finally we can input this new expression into our original equation to relate V1 and V2:

V2 = V1 - V1 * R1 / (R1 + R2)

We can divide both sides by V1 to clean this up a bit:

V2 / V1 = 1 - R1 / (R1 + R2)

For good measure we can do a little more simplification to come to the final voltage divider equation:

V2 = V1 * R2 / (R1 + R2)

And there you have it! Using this equation you can take any input voltage and any desired output voltage and calculate the values of R1 and R2 you will need to accomplish that reduction.
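The divider equation translates directly into code. A quick Python sketch (the resistor values here are hypothetical, chosen to give the ratios described above):

```python
def divider_out(v_in, r1, r2):
    """Voltage divider output: V2 = V1 * R2 / (R1 + R2)."""
    return v_in * r2 / (r1 + r2)

# The 9V -> 5V example from above; 4k/5k are hypothetical values
# chosen so that R2 / (R1 + R2) = 5/9
print(divider_out(9.0, 4000, 5000))      # -> 5.0

# Attenuation is linear: halving a signal just needs R1 == R2
print(divider_out(2.0, 10000, 10000))    # -> 1.0
```

Because the output is always the same fixed fraction of the input, a waveform fed through the divider keeps its shape and is simply scaled down in amplitude.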