Binary Representation

Since I’ve been working so much lately in the digital space I thought it would be pertinent to do a quick review today. I want to spend some time on one of the most fundamental ideas in digital logic, Binary Representation. I know it’s not the most exciting topic but understanding binary numbers intuitively is critical to understanding the inner workings of digital devices.

Why Do We Care?

You’ve undoubtedly run into binary numbers in pop culture. They seem to appear any time a screenwriter wants to convey that a character “speaks computer.” But what do these zeros and ones mean? And more importantly, why do we care?

At its core the answer is that a computer has no idea what a 7 is. Computers are made up of millions (or billions) of transistors. These transistors have only two states, high and low, or if you prefer, zero and one. This means everything you store in memory and every instruction you send to your processor needs to be written as a series of these zeroes and ones. That goes for all your video files, pictures, video games and even your operating system itself. As far as your computer is concerned it’s all binary.

We can make it a long way working in high level languages like C or Python, but inevitably there will come a time when you have to write directly to a register or transmit raw data. This is when binary will serve you. These situations are doubly likely to occur if you are working with microprocessors, as both memory and power are limited. Further, in time-sensitive situations (like audio processing) writing straight to a register is typically faster and more efficient than using high level code.

How Does It Work?

The numbers we are familiar with are known as base 10 (or decimal) numbers. This means each digit can be one of 10 possible values (0-9). If I add one to 9 the ones digit resets to zero and the tens digit is incremented to 1 (giving you 10). When you were first learning to add numbers together you may have been taught to write the two numbers one atop the other and add each digit individually, carrying to the next digit when your answer was more than 9. This gets at the core of the base 10 system.

The binary system is no great magic trick. We simply change the base to 2. This means each digit can only hold one of two values (0 or 1). As you count upwards you start with 0. Adding one gives you 1. When you try to add another one you have to carry over to the next digit (just like adding one to 9) giving you 10. To help clarify this process I’ve written out the binary representations of the numbers 0 to 15.

Binary Representation of 0-15

It’s good to note that there are other bases commonly used as well. In computation hexadecimal (base 16) is frequently used to make very large numbers manageable. In hexadecimal we use the letters A-F to represent the values 10-15.
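If you'd like to generate a table like this yourself, Python's built-in format specifiers make it a one-liner per row. This is just a quick sketch, not part of the original table:

```python
# Print the numbers 0-15 in decimal, 4-bit binary, and hexadecimal.
for n in range(16):
    print(f"{n:>2}  {n:04b}  {n:X}")
```

The `04b` specifier pads the binary form to 4 bits, and `X` gives the uppercase hexadecimal digit.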

How Many Bits?

Notice in the previous table I used 4 digits and was able to represent numbers from 0 to 15 before I ran out of space. These digits are usually referred to as bits and they govern how large a number you can represent. This shouldn’t be too surprising as this is exactly how decimal works (2 digits can represent numbers up to 99, 4 digits can represent up to 9999).

So how do we know how many bits we need? In decimal each new digit has ten times the value of the previous digit (1, 10, 100, 1000 and so on). In binary we can use a similar rule, except since it’s base 2 each new digit is double the previous one (1, 2, 4, 8, 16...). Here I have shown the value of a one in each of the first 8 digits to illustrate this rule.

Values of Binary Digits

Additionally, in decimal you can find the maximum value you can obtain with a given number of digits using the following formula:

N = 10^n - 1

Where n is the number of digits and N is the highest value possible. We can do the same in binary by swapping the 10 for a 2:

N = 2^n - 1

Using this formula we can calculate the range of numbers available given any number of bits:

Maximum Value Based on Quantity Of Bits
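The formula above is easy to sketch in Python and use to regenerate the table for any number of bits (the function name max_value is my own choice for illustration):

```python
# Maximum value representable with n digits in a given base: base**n - 1
def max_value(n_digits, base=2):
    return base ** n_digits - 1

# Regenerate the table for 1 through 8 bits.
for bits in range(1, 9):
    print(f"{bits} bits -> 0 to {max_value(bits)}")
```

Note the same function covers the decimal case too: `max_value(2, base=10)` gives 99, matching the earlier example.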

Conversions

There are various methods to convert between decimal and binary. The route I have always found easiest involves repeatedly dividing the number by two. Each time you divide by two, check if there is a remainder and note it down (the remainder will always be 1 if it exists). If there is no remainder (i.e. the number is even) note a zero. When you finish, reverse the order of the numbers you have noted to see the binary representation.

Let’s try applying this algorithm to 42:

Binary Conversion of 42

We can see that the binary conversion of 42 is 101010 by reading the remainders from the bottom up. Additionally, you can verify your answer by multiplying each digit by the values determined earlier in this article (1*32 + 1*8 + 1*2 = 42). Doing this you should get back your original number.
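The repeated-division method translates almost directly into code. Here's a rough Python sketch (the helper name to_binary is my own):

```python
def to_binary(n):
    """Convert a non-negative integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # note the remainder (always 0 or 1)
        n //= 2                        # divide by two, discarding the remainder
    # Reading the remainders from the bottom up = reversing the list.
    return "".join(reversed(remainders))

print(to_binary(42))  # -> 101010
```

Python's built-in `bin()` does the same job (with a `0b` prefix), which makes a handy cross-check.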

Closing

Before I finish up for the day I have one final question. How high can you count on your fingers? If you answered 10 you’re not thinking with portals yet. We have 10 fingers, each of which can be either extended or folded. If we use binary counting we can reach 2^10 – 1. That’s 1023! You’ll never need a calculator again!

That’s all for me today. I hope you’ve found this refresher helpful, I’ll be back soon with further updates to my Arduino R2R DAC project.

Filtering PWM Signals

In a recent post I talked about how you can use Pulse Width Modulation to create a simple voltage controller. However PWM is only half the story. Once we have our PWM signal how do we transform this from a malformed square wave into a nice steady DC voltage? There are many different techniques that can be used to do this but today I’d like to introduce you to one of the simplest, Low Pass Filters.

Low Pass Filters

I briefly introduced RC low pass filters in my post on Square Waves in RC Circuits. Put simply, these are circuits which allow low frequency signals to pass through while attenuating high frequency signals. The reason for this has to do with how the capacitor charges. If you recall from my previous post, the time a capacitor takes to charge in an RC circuit is approximately equal to 5 times the time constant of the circuit (RC). When the frequency is low enough that the capacitor has a chance to fully charge and discharge each cycle, the signal will pass through. If however the capacitor cannot fully charge/discharge, a certain amount of the current will always be passing through the capacitor to ground, which attenuates the signal at the output. The higher the frequency, the greater the attenuation.

The key understanding here is what happens to the signal as it is attenuated. You might expect that if you pass a 5V square wave (0-5V) through a filter which reduces the amplitude by 50%, the resulting signal would swing from 0-2.5V. However, this is not the case. Since the capacitor cannot fully charge or discharge, the signal stabilizes around the average voltage of the incoming signal. In the case of this example the signal would travel between 1.25V and 3.75V. As we attenuate it further we can get a smaller and smaller peak to peak voltage (always centered around the average voltage). The smaller these peaks, the closer you get to your target DC voltage.
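If you'd like to play with this behavior without breadboarding anything, here's a rough discrete-time simulation of an RC low pass filter fed a 0-5V square wave. The component values, time step and function name are illustrative assumptions on my part, not values from any of my circuits:

```python
# Crude discrete-time model of an RC low pass filter driven by a square wave.
def simulate_rc(r, c, freq, v_high=5.0, cycles=200, steps_per_cycle=100):
    dt = 1.0 / (freq * steps_per_cycle)
    v_out = 0.0
    outputs = []
    for step in range(cycles * steps_per_cycle):
        # Square wave: high for the first half of each cycle, low for the second.
        v_in = v_high if (step % steps_per_cycle) < steps_per_cycle // 2 else 0.0
        v_out += (v_in - v_out) * dt / (r * c)  # capacitor charges toward v_in
        outputs.append(v_out)
    return outputs

out = simulate_rc(r=10e3, c=1e-6, freq=1e3)  # RC = 10 ms, 1 kHz square wave
last_cycle = out[-100:]
print(f"ripple settles between {min(last_cycle):.2f}V and {max(last_cycle):.2f}V")
```

With these values the output settles to a small ripple centered on 2.5V, the average of the input. Try lowering `r` or `c` and you'll see the ripple grow; raise them and the start of the output takes longer to climb up to the average, which is exactly the trade-off discussed next.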

Ripple Vs Stabilization Time

So that all sounds great but how do we know what capacitance and resistance to use? This actually gets a bit more complicated. When choosing our resistors and capacitors we often find ourselves balancing two undesirable characteristics of the circuit.

Consider this first filter. Obviously this is a long way from a smooth DC voltage. There is a distinct ripple in the output, with the voltage moving up and down in a sharp triangle pattern. As discussed earlier we should be able to reduce this ripple by increasing the time the capacitor takes to charge (increasing the resistance or capacitance) to further attenuate the signal. Unfortunately, this will introduce a new problem.

Here we can see that by increasing the attenuation of the signal we are able to produce a much smoother output signal. The issue here is on the left side of the simulation output. Since the capacitor is only charging and discharging a small amount the signal takes significantly longer to stabilize at the average voltage.

Where on this spectrum your filter falls depends largely on your application. If you need a very stable signal which will not vary over time you can use a large capacitor and/or resistance. If on the other hand your signal strength needs to change quickly over time and your circuit can handle a bit more ripple you may opt for a smaller capacitor to support this behavior.

As you might imagine there are additions we can make to this circuit to improve both of these behaviors. By adding additional complexity to this filter we can develop a more robust digital to analog converter. I hope though that this has provided something of a starting point to begin generating analog signals from your digital devices.

Digital Logic

Working with digital logic may seem counterintuitive to the idea of sound synthesis. Audio is an analog phenomenon, and as such we typically steer towards analog electronics to produce or manipulate it. That being said, digital electronics can be incredibly valuable to us as synth enthusiasts. We can use them to breathe new life and depth into humble square wave oscillators, control analog devices and interface with micro-controllers/processors. For this reason I wanted to provide an introduction to some of the more common digital logic devices you may encounter.

Digital Vs. Analog

Before going too far I’d like to reiterate the difference between analog and digital signals. An analog signal is a signal which varies across a range of voltages. Think of a sound wave, the wave is a continuous signal which rises and falls based on the intensity of the sound being captured.

A digital signal on the other hand varies between two discrete states. The signal is either high or low (0 or 1). Electronically speaking these two states are typically (though not always) 0V for low and 5V for high. Any values between these two states will either be assumed to be one of the two or ignored depending on the components used. The square waves we have produced using 555 or 40106 oscillators could be considered digital signals as they oscillate between two distinct values.

Truth Tables

Truth Table for a Generic Inverter

Truth Tables are a tool we use to clarify the function of digital logic. The tables show a listing of all possible inputs to the system and the output which they would provide. In the inverter example above (and most of the components I will discuss today) these tables are quite straightforward, however as you get into progressively more complicated digital circuits these truth tables will also become more complex.
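Since the inverter has only one input, its truth table takes just a couple of lines to print. This Python snippet is purely illustrative:

```python
# Print the truth table for an inverter (NOT gate): output = 1 - input.
print("IN  OUT")
for a in (0, 1):
    print(f"{a:>2}  {1 - a:>3}")
```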

Building Blocks

Once you have a digital signal, whether it’s from a humble square wave oscillator or something more complex like a microprocessor, there are a number of operations you can carry out on it using digital logic chips. There are hundreds of different chips available but most of them fall into a few basic categories.

NOT Gate:

NOT Gates (commonly called inverters) are probably the most basic digital component you will encounter. As the name suggests, they invert whatever signal you send to them. This means if you send in a 1, you will receive a 0 from the output. Alternately, if you send in a 0 you will receive a 1. The 40106 I have used in previous projects is a special type of inverter (a Schmitt trigger) which uses threshold voltages to interpret analog inputs.

AND Gate:

An AND Gate is a digital logic gate which takes two inputs (notice there are now two input columns on the truth table). The output of the gate is only high if both these inputs are high.

NAND Gate:

The NAND gate is actually a combination of the AND gate and the Inverter. Notice the circuit symbol is that of the AND Gate with the small circle from the inverter added. This small circle is used in circuit symbols to indicate that an output is inverted. This means the output of the NAND gate is opposite the output of the AND gate (as seen in the truth table). A NAND gate will output high at all times except when both inputs are high.

OR Gate:

The OR Gate is a gate which will output high when either (or both) of the inputs are high. This can be very useful for combining signals or triggers from multiple sources. There are also a few modified versions of the OR gate which I will look at next.

NOR Gate:

Similar to the NAND Gate, this is an OR Gate with an added inverter. This means the NOR Gate will output high only when both inputs are low. It will go low when either (or both) inputs go high.

XOR Gate:

The XOR Gate is another special type of OR Gate. This time the output will go high when either input is high however if both inputs go high the output will go low.
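To tie these gates together, here's a small Python sketch modelling each one as a function on 0/1 values, then printing a combined truth table. The function names are my own, chosen to match the gate names:

```python
# Simple software models of the logic gates discussed above (inputs are 0 or 1).
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def NAND(a, b): return NOT(AND(a, b))  # NAND = AND followed by an inverter
def OR(a, b):   return a | b
def NOR(a, b):  return NOT(OR(a, b))   # NOR = OR followed by an inverter
def XOR(a, b):  return a ^ b           # high when exactly one input is high

# Print a combined truth table for the two-input gates.
print(" A B | AND NAND OR NOR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |  {AND(a, b)}   {NAND(a, b)}   {OR(a, b)}  {NOR(a, b)}   {XOR(a, b)}")
```

Notice how NAND and NOR are literally built from AND and OR plus the inverter, just as their circuit symbols suggest.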

This is by no means an exhaustive list but should give us a starting place to work from. We can also create progressively more complex gates by combining these different building blocks together. I hope this has given you some ideas about what you can accomplish using digital logic. Keep an eye out for more projects using these gates coming soon!