All Rights Reserved © 2006 Thomas W. Day
A while back, I read an editorial in Mix Magazine that claimed to “prove once and for all” that MP3 audio compression was low-fi. The author was a Mac user, so maybe he can’t be faulted for the many flaws in his test procedure, but, be that as it may, I read between his lines and found a few claims that I couldn’t believe. His procedure was badly flawed, rigged to make MP3 files test poorly even if the file format and compression were perfect. His built-in bias was so strong that it was obvious how the test would turn out before the second sentence. The author claimed to hear a “robust difference signal,” proving that the MP3 format was defective and high in distortion. I listen to MP3s, mostly in my car, and I was suspicious of that “robust difference signal.”
First, probably because of the limitations of his computer equipment and/or software, the author took an audio sample, inverted the phase of that sample, and played the two (in-phase and out-of-phase) together. The result was a near-perfect cancellation, proving that the two samples were equal, with practically non-existent distortion. Here is where the experiment died a non-scientific death. The author converted the phase-inverted signal to MP3 format and, because his software could only play AIFF files, converted the MP3 back to AIFF and ran the above test again.
Back in my audio manufacturing days, I built an ABX test rig for my employer and did a ton of ABX testing on anyone I could con into submitting to the ABX protocol. Mostly, we learned what the audiologist presenting at the 1985 AES Convention in LA discovered: most people involved in professional audio are “functionally deaf” or, at least, “hearing impaired.” Instead of spending our time figuring out which subtle differences in equipment were audible, we discovered that drastic defects in the signal path went undetected by most of our listeners. I went so far as to install defective active components (ICs producing as much as 10% THD) into the signal path of otherwise identical pieces of equipment and found that an alarming number of audio professionals were unable to hear the difference in a reasonably good listening environment. On the other hand, one of my own employees was able to hear signal differences that my test equipment (Audio Precision’s finest of the time) barely identified as measurable (except for substantial very-low-frequency phase differences that I’m still hard-pressed to believe explained the listening test results). With that history behind me, I decided to replicate the magazine test myself, using Windows-based software (Adobe Audition) that can play multiple file formats simultaneously. This eliminates the multiple conversion errors from the author’s test and makes it more of an apples-to-apples comparison.
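For readers unfamiliar with the protocol: in each ABX trial the listener hears known samples A and B, then an unknown X randomly drawn from the two, and must identify X; enough correct answers rule out guessing. The scoring arithmetic is a simple binomial tail. Here is a minimal sketch of it in Python (my own illustration, not the rig I built):

```python
from math import comb

def abx_p_value(trials, correct):
    """Probability of getting at least `correct` answers right out of
    `trials` ABX presentations by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 of 16 correct: very unlikely to be guessing
print(abx_p_value(16, 14) < 0.01)
# 9 of 16 correct: entirely consistent with guessing
print(abx_p_value(16, 9) > 0.3)
```

A score like 9 of 16 feels like “better than chance” to the listener, which is exactly why the blind protocol and the arithmetic matter.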
I picked three MP3 bitrates, 128Kbps, 192Kbps, and 320Kbps (constant bitrate, with CRC checksums), and a pair of 44.1kHz, 16-bit CD recordings (“Afternoon” from Pat Metheny Group’s Speaking of Now, and L’Adoration de la Terre from Telarc’s Cleveland Orchestra recording of Stravinsky’s The Rite of Spring) for the test. I first copied a section of the music and made an inverse-phase mono copy of that section. In Audition’s Multitrack View, I inserted the two sections into a pair of tracks and compared the resulting signal, or lack of one. The two WAV copies cancelled exactly, both according to my ears and to Audition’s metering. Figure 1 (from the Afternoon recording) displays the section of the original signal that serves as the reference for the signal-canceling comparisons that follow.
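The cancellation step itself is easy to reproduce outside Audition. As a minimal sketch (my own illustration, not part of the original test), the following Python/NumPy snippet mixes a signal with a phase-inverted copy and reports the residual peak relative to the original; a synthetic 16-bit tone stands in for the CD excerpt:

```python
import numpy as np

def residual_db(original, inverted_copy):
    """Mix a signal with a phase-inverted copy and report the residual
    peak level in dB relative to the original's peak."""
    residual = original + inverted_copy
    peak_res = np.max(np.abs(residual))
    peak_orig = np.max(np.abs(original))
    if peak_res == 0:
        return float("-inf")  # perfect cancellation
    return 20 * np.log10(peak_res / peak_orig)

# A 1kHz tone quantized to 16 bits, standing in for a CD excerpt
sr = 44100
t = np.arange(sr) / sr
tone = np.round(32767 * np.sin(2 * np.pi * 1000 * t))

print(residual_db(tone, -tone))  # bit-identical copies null completely
```

Any lossy step in between (an MP3 round trip, for instance) leaves a finite residual, which is exactly what the rest of this test measures.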
Figure 1 (on right): The original signal, with peaks approaching 0dB.
I tried both Windows Media Player v9.0 and dBpowerAMP Music Converter™ to create my MP3 files (all made from the reversed-phase WAV file with an already established signal accuracy). I believe I saw a very slight reduction in distortion in the Music Converter™ 320Kbps version compared with the Media Player output, so I created the rest of my MP3 samples with that program. I inserted the three MP3 inverse-phase signals into a five-track Multitrack file and listened to the resulting output, comparing each MP3 to the original in-phase WAV file.
The first problem you will discover in performing this test is that both MP3 converters added a short leader (approximately 50ms) to the files. Ignoring this anomaly produces the “robust” distortion difference signal that started this investigation for me. Time-aligning my files took some effort, but produced a much more believable result from the comparisons.
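Time alignment amounts to finding the encoder's added delay and trimming it off before the null test. I did it by hand in Audition, but it can be automated; here is a hedged sketch of one way, using a NumPy cross-correlation peak to locate the leader (my own illustration, with a synthetic noise burst standing in for the audio):

```python
import numpy as np

def leader_offset(reference, padded):
    """Estimate how many samples of leader precede `reference` inside
    `padded` by locating the cross-correlation peak."""
    corr = np.correlate(padded, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(1)
ref = rng.standard_normal(4410)              # 100ms of noise-like audio at 44.1kHz
leader = np.zeros(2205)                      # a 50ms silent leader, like the encoders add
padded = np.concatenate([leader, ref])

offset = leader_offset(ref, padded)
print(offset)                                # 2205 samples = 50ms
aligned = padded[offset:offset + len(ref)]   # trimmed copy, ready for the null test
```

With the leader trimmed, summing the aligned file against the phase-inverted original isolates the true encoding residual instead of the whole misaligned signal.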
Figure 2 (on right): 128Kbps distortion result waveform
The 128Kbps compression produced a tinny difference output, with a little low end and a sound that was obviously made up of distortion components. The resulting distortion waveform is pictured in Figure 2. At this point in the original recording, the peaks are touching 0dB, so the peak distortion output was approximately 25dB below the peak recording signal level.
Figures 3 (left) and 4 (right) picture the results of repeating this test with the 192Kbps and 320Kbps MP3 samples.
The distortion components of the 320Kbps sample are 33-36dB below the original signal. Those residual levels roughly translate to 5% THD for 128Kbps, 3% THD for 192Kbps, and around 1% THD for 320Kbps. I’ll agree that these are substantial distortion values, but “robust” is not how I’d describe the resulting signal, and I question the ability of most professionals to clearly hear the difference signal in an ABX environment. After all, I’ve simulated much worse distortion components that appeared to be inaudible.
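The translation from residual level to a rough THD percentage is just the standard amplitude dB-to-ratio conversion. As a sanity check on those figures (my own arithmetic, not from the magazine piece):

```python
def db_to_percent(db):
    """Convert a level in dB relative to a reference peak into a
    percentage of that reference (20*log10 amplitude convention)."""
    return 100 * 10 ** (db / 20)

print(round(db_to_percent(-25), 1))  # ~5.6%, near the 128Kbps residual
print(round(db_to_percent(-36), 1))  # ~1.6%, the best-case 320Kbps residual
```

By the same convention, a residual would have to sit a full 40dB down to reach exactly 1%.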
Regardless of my test results, I recommend that you try a similar test with material of your own choosing and in a controlled listening environment. Personally, I discount the results of any listening test that doesn’t live up to the rigor of the ABX protocol. You can believe any fantasy you like, though. Part of what makes working in audio so entertaining is the delusions under which we labor and the resulting, sometimes silly, products produced to cater to those delusions.
I do, however, intensely distrust the opinions of someone who claims that consumer cassettes were musical and that MP3 reproduction systems are deficient by comparison. Compared to FM radio, a high-resolution MP3 is practically pristine. Pop recordings are often so distorted that the minimal harmonic content even low-fi 64Kbps compression introduces can do no more harm to what’s left of the musical content. If we’re not going to complain about these traditional high-distortion delivery systems, where is our credibility regarding new technology?
Most analog consoles can’t produce 40dB of cancellation when one channel is nulled against another to produce the side component of a Mid-Side microphone signal. Analog tape recording systems are nowhere near capable of this level of signal uniformity.
Dissing our customers’ sonic standards because they are listening to a technology that has an economic impact on our industry is dishonest, unbelievable, and ineffective. They know the MP3 files they listen to are higher fidelity than past and current commercially delivered formats, and once they suspect the industry is lying about quality, what else do we have to offer?