Wednesday, September 4, 2013

Madness to the Method – Gain Structure

NOTE: This article was sent to me by the author, Mark Amundson, several years ago to use in my Audio Engineering classes. The article had been on my website as a link, in its original Microsoft Word format, for a decade. Unfortunately, Mark’s article seems to have vanished from FOH Magazine’s article history, which is a huge loss for audio engineers who actually care about optimizing the distortion and noise characteristics of their signal path. A version of this discussion is included in his book, Live Sound Practice and Theory.

Manuscript: Madness to the Method – Gain Structure
Magazine: FOH
Manuscript Type: Technical Article
Issue Assignment: May 2003
Word Doc No: GainStructureMadness.doc
Figures: -
Photos: One
Editor: Bill Evans
Revision: -
Date: 05/12/03
Word Count: 1516

Madness to the Method – Gain Structure

By Mark Amundson

In this dissertation I am going to do a little Q&A, à la the old Audio Cyclopedias, about questions you may have wondered about but just did what you were told. I am going to throw down a generous helping of electronics history as a way of answering each question, and to remind us all where this technology came from.

Q: WHY IS 0dBu THE REFERENCE SIGNAL LEVEL?

The question could easily be re-phrased: why is 0.775 volts RMS (0dBu) the standard and not some other convenient number like 0.1, 1, or 10 volts?

The answer goes way back to Alexander Graham Bell’s era, when no such thing as radio or broadcasting had been thought of. As the “Bell System” and “American Telephone and Telegraph” (AT&T) became the monopoly in the phone service industry, Western Electric Company became the Bell System subsidiary that designed and produced telephone gear for the whole country. After much trial and error, a standard two-wire-pair transmission line was developed with 600-ohm source and load impedances to send carbon microphone signals down the wires with minimal loss. With the right construction materials, voice signals (about –20dBu) could travel five miles with a passable loss of signal amplitude.

When Lee de Forest invented the vacuum tube triode for signal amplification, his “killer app” was re-boosting feeble telephone signals, thus creating long-distance phone service in the second decade of the last century. Western Electric still had a lock on the electronics industry in the 1920s as broadcast radio was just emerging, so naturally it had the highest technology suitable for fulfilling civilian and military requests for standard “Public Address” apparatus. By the early 1930s, Western Electric had the first quality dynamic microphone (requiring no DC power, unlike carbon mics) and combined vacuum tube amplification with the first efficient “loud-speaking apparatus” that we now know as horn-loaded drivers.

As broadcast radio became widespread, and specialized companies like Electro-Voice, Magnavox, and Shure Brothers came to supply (with Western Electric) the needs of public address and broadcast gear, 600-ohm line cabling still held as the lowest-loss method of distributing and processing audio signals. From that era, a one-milliwatt level into 600 ohms became the reference, or 0dBm (zero decibels referenced to one milliwatt). 0dBm works out to about 0.775 volts RMS, but as technology marched on and audio electronics moved from power matching to high-impedance “bridging” inputs, the same 0.775 volts without any specified load impedance came to be described as 0dBu (zero decibels unreferenced).
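For the math-minded, that reference value falls straight out of the power formula. Here is a minimal Python sketch of the arithmetic (the dbu helper is mine, purely for illustration):

    import math

    # 0dBm: one milliwatt dissipated in a 600-ohm line impedance.
    # P = V^2 / R, so V = sqrt(P * R)
    V_REF = math.sqrt(0.001 * 600)     # ~0.7746 V RMS, usually quoted as 0.775 V

    def dbu(volts_rms):
        # dBu keeps the same voltage reference but drops the 600-ohm load requirement.
        return 20 * math.log10(volts_rms / V_REF)

    print(round(V_REF, 3))      # 0.775
    print(round(dbu(1.0), 1))   # a 1 volt RMS signal is about +2.2 dBu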

To answer the lingering question of what became of Western Electric: government anti-monopoly policies in the 1930s forced the breakup of AT&T (the first time) into RCA for broadcast, Bell Labs for telephony, and All Technical Products (Altec) for public address. Altec slowly became Altec Lansing, then split back into Altec and James B. Lansing Sound, and from there on to JBL.

Q: WHY GAIN (TRIM) TO LINE LEVELS AND THEN MIX AFTERWARDS?

This question is more math than history, but we still thank the early broadcast pioneers of the 1930s for the first work on defining signal-to-noise ratio and characterizing noise sources. The question could also be phrased: what is the best method of minimizing hiss in the mixing console?

The answer comes from the invention of radio and the techniques used to maximize signal-to-noise ratio, and thus transmission distance. As a signal is created, processed, and sent to its final destination, there is a signal-to-noise ratio (SNR) degradation. As each stage, or processing block, passes on the signal, the noise gradually encroaches on the signal level. The number of dB of SNR lost per stage is defined as its noise figure (the linear equivalent, for the dB-challenged, is called the noise factor). A noise figure of 6dB or less per amplification (gain) stage is considered a low-noise design for a preamp.

To better visualize this idea, let’s put some example numbers to work. If a typical dynamic mic and voice put out –50dBu signal peaks, and the console’s input-referred noise is –128dBu, you have a 78dB SNR, which is respectable in live sound applications. As the signal proceeds through the channel mic preamp, EQ section, channel fader or VCA, summing amps, master fader, and balanced line driver, there is a noise figure penalty to be paid at each step. The good news is that when two gain stages are cascaded, the noise figure of the first stage dominates, with the second stage’s noise contribution effectively divided by the gain of the first. What this means is that cheaper electronics can be used after the mic preamp, with a high-gain preamp covering for the sins of the rest of the console’s electronics.
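To put rough numbers on that cascade argument, here is a small Python sketch using the Friis noise formula for two stages. The 78dB input SNR is from the example above; the individual stage noise figures and the 50dB of preamp gain are assumptions made up for illustration, not measurements of any real console:

    import math

    def db_to_lin(db):          # dB to linear power ratio
        return 10 ** (db / 10)

    def lin_to_db(lin):
        return 10 * math.log10(lin)

    # Input SNR from the example: -50 dBu mic peaks against -128 dBu input-referred noise.
    input_snr_db = -50 - (-128)              # 78 dB

    # Friis formula for two cascaded stages: F_total = F1 + (F2 - 1) / G1
    F1 = db_to_lin(3)       # assumed 3 dB noise figure for the mic preamp
    F2 = db_to_lin(15)      # assumed 15 dB combined noise figure for the cheaper stages after it
    G1 = db_to_lin(50)      # assumed 50 dB of preamp gain

    nf_total_db = lin_to_db(F1 + (F2 - 1) / G1)

    print(round(nf_total_db, 1))                   # ~3.0 dB: the preamp dominates
    print(round(input_snr_db - nf_total_db, 1))    # ~75.0 dB of SNR left at the output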

One other item to be shared is that attenuation circuits (EQ filters, faders, pots, VCAs, etc.) can generally be assumed to be direct losses in SNR, with every dB of attenuation adding a corresponding dB to the noise figure. So the theoretically perfect (lowest-noise) mixing console setup would be faders maxed, EQ flat, and amplifier gains a perfect match between mic level and power amp full-power sensitivity.

But no realistic scenario exists on a mixer without faders to “mix” with. So the next best answer is to take the loudest channel in your mix, set its preamp gain control (gain, trim, etc.) for about a 0dBu average level, and keep the channel, group, and master faders reasonably high while preserving some headroom for the occasional “louder” demand. This minor compromise yields the best SNR while still giving mix flexibility. The same practice applies to the gain of all the other signal-source channels, with the obvious difference that their faders will sit lower in the mix.
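As a back-of-the-envelope illustration of that procedure, the sketch below works out how much preamp gain brings the earlier –50dBu example up to a 0dBu average and what that leaves for peaks; the +22dBu clip point anticipates the next section, and the fader position is just an assumed working value:

    # Gain staging by the numbers (all levels in dBu).
    mic_avg_level = -50.0     # dynamic mic on a voice, from the earlier example
    target_level = 0.0        # average line level we want after the preamp
    clip_level = 22.0         # typical op-amp console clip point (see the next section)

    preamp_gain = target_level - mic_avg_level    # 50 dB of gain/trim
    peak_headroom = clip_level - target_level     # 22 dB left for the loud moments

    # Every dB a fader sits below maximum adds roughly a dB to that channel's
    # effective noise figure, so the quieter channels in the mix pay a small,
    # usually acceptable, SNR penalty.
    fader_position = -10.0                        # an assumed working position, not a rule
    added_noise_figure = -fader_position          # ~10 dB

    print(preamp_gain, peak_headroom, added_noise_figure)   # 50.0 22.0 10.0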

Q: WHY IS +22dBu THE COMMON MAXIMUM LEVEL?

This answer also comes from electronics history, but only a half-century back. The first mass-produced transistors had typical maximum voltage ratings of 30 to 40 volts. Of these early transistors, many were targeted at industrial controls and “analog” computers for military and aerospace use. The most common analog computer section was the “operational amplifier,” or “op-amp,” designed to be a near-perfect mathematical gain stage with both positive and negative voltage swing capability. Taking the limitations of the transistors plus the need for a bipolar (plus and minus) power supply, the standard of +/-15 volt supply rails was instituted, and it is still used today.

As transistors got grouped on one silicon die, integrated circuits (ICs) were born, with the first standard products being IC op-amps. As IC prices dropped in the late 1960s and early 1970s, more IC op-amps started finding their way into audio equipment, still requiring their +/-15 volt power supplies. Today’s pro-audio signal processing and mixing gear is largely composed of IC op-amps and a few application-specific ICs, plus just a few necessary un-integrated transistors. The common legacy of supplying them with +/-15 volt rails still exists, with op-amps capable of nearly +/-14 volt audio signal swings. That swing translates to about 10 volts RMS, or +22dBu, at which point the circuits clip the signal. Some math trickery may also appear in maximum output specifications, since you can gain another 6dB of level by stating the output as balanced, in which case each balanced output contact swings in opposite polarity to double the level.
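That +22dBu figure, and the extra 6dB for a balanced rating, is easy to check. A minimal sketch assuming the +/-14 volt swing mentioned above:

    import math

    V_REF = 0.775                       # dBu reference voltage, from the first section

    def dbu(volts_rms):
        return 20 * math.log10(volts_rms / V_REF)

    peak_swing = 14.0                   # volts: roughly what an op-amp on +/-15 V rails can deliver
    v_rms = peak_swing / math.sqrt(2)   # ~9.9 volts RMS for a sine wave

    print(round(dbu(v_rms), 1))         # ~22.1 dBu single-ended clip point
    print(round(dbu(2 * v_rms), 1))     # ~28.1 dBu when quoted as a balanced (doubled) output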

Given the above explanations, if we set our levels close to 0dBu and keep away from the clip levels around +22dBu, that leaves plenty of headroom for the peak signals and for whatever is required to get the drive chain (EQs, crossovers, and power amps) to full output. Most power amp manufacturers set their sensitivity values around 0 to +9dBu for full unclipped speaker drive. Keeping the post-preamp levels at or below the power amp sensitivity values mostly assures a clipping-free production. Of course, that relies on keeping the power amp input attenuator controls full up. You do not want to be “low-noise” all the way to the amp and then throw away all that SNR at the last attenuator, do you?
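A quick arithmetic check of that drive-chain advice, treating the amp sensitivity as an assumed example value within the 0 to +9dBu range above:

    # Matching console output level to power amp sensitivity (all levels in dBu).
    console_avg_out = 0.0     # the ~0 dBu average level set earlier
    console_clip = 22.0       # typical console/processor clip point
    amp_sensitivity = 6.0     # assumed input level that drives this particular amp to full power

    # Peaks can rise this far above the average before the amp reaches full output:
    peak_room_to_full_power = amp_sensitivity - console_avg_out   # 6 dB

    # And the console itself still is not close to clipping at that point:
    console_margin = console_clip - amp_sensitivity               # 16 dB

    print(peak_room_to_full_power, console_margin)   # 6.0 16.0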

FINAL WORDS

Our latest generation of audio production personnel deserves to be educated on how we use our gain structure procedures, and why these methods came about. Some may argue about exact levels and ways of forming up mixes, but I am coming at this from an electrical engineering view and attempting to shine light on what the design engineers consider optimum use, rather than operator tactics “that seem to work for me.” We need to appreciate how heavily live sound borrowed from the telephone and radio broadcast pioneering work, plus how electronics achievements shaped our practices. The twentieth century could be termed the “electronics” century, and it looks like the twenty-first century will be the “photonics” century, with fiber optics promising near-unlimited bandwidth for passing analog or digital signals from baseband to radio-frequency to optical-frequency signals.

MDA

1 comment:

T.W. Day said...

Another advantage to obtaining as much gain as possible as early as possible is common-mode rejection, which in transformerless designs is often directly related to input gain. CMRR = 20 log (Ad/Acm), where Ad is the differential (amplifier) gain and Acm is the common-mode gain. Gain in the differential stage gets you free additional common-mode rejection.
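To put numbers on that, here is a small sketch with made-up gains, assuming the common-mode gain of the input stage stays roughly constant as the differential gain is turned up:

    import math

    def cmrr_db(diff_gain, common_mode_gain):
        # CMRR = 20 * log10(Ad / Acm)
        return 20 * math.log10(diff_gain / common_mode_gain)

    Acm = 0.001   # assumed common-mode gain of the input stage (-60 dB)

    print(round(cmrr_db(1.0, Acm)))     # 60 dB of rejection at unity differential gain
    print(round(cmrr_db(100.0, Acm)))   # 100 dB with 40 dB of input gain: the "free" rejection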

Wirebender Audio Rants

Over the dozen years I taught audio engineering at Musictech College and McNally Smith College of Music, I accumulated a lot of material that might be useful to all sorts of budding audio techs and musicians. This site will include comments and questions about professional audio standards, practices, and equipment. I will add occasional product reviews with as many objective and irrational opinions as possible.