Press "Enter" to skip to content

Fidelity vs Sound Quality: A comparison of digital and analog

In late 2012, I asked a wide array of independent musicians how the shift to digital music has changed their careers. I received an astonishing number of responses, and I’ll be featuring them on the blog over the next few weeks and months. The first comes from Aviv Cohn of The Widest Smiling Faces.

The most fascinating discussion produced by the digital era has been the one regarding the “soul” of art and music. There’s a general sense of continually moving away from authenticity. Paper books, with their familiar textures, rituals of page turning, and folded-corner bookmarks, are being superseded by numbers and screens, much like the boxes we stare into almost every waking moment of our lives. Paintings, with their gloppy textures jutting off the canvas, have been replaced by flat JPEGs. Vocals are being auto-tuned, machines and software increasingly replace real drums, and the dynamic range of audio is being squashed and “dehumanized.” It’s hard to escape the feeling that our means of artistic expression are being quantized. My experience has been that many share these feelings, so the resurgence of analog media comes as no surprise.

Many fans of analog media attempt to substantiate their emotional preference for the medium by seeking to “prove” that vinyl records are of a higher fidelity than CDs. They often cite graphs showcasing the “staircasing” supposedly inherent to digital sampling alongside images of smooth analog curves, as a way of reinforcing their point about the inaccuracy of digital audio. That claim is technically incorrect (a proper digital-to-analog conversion reconstructs a smooth wave, not stairs), but it’s hard to deny that analog audio has a “presence” (and not in the frequency-range sense) that is missing from many digital recordings. On a technical level, this “presence” is euphonic (pleasing) distortion. But there’s nothing wrong with distortion. Distortion is good. I enjoy distortion, and you probably do as well!
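For the curious, here’s a minimal sketch of why the staircase picture misleads, written in Python with numpy. The tone frequency and sample rate are illustrative numbers I’ve picked out of thin air; the point is that textbook Whittaker–Shannon (sinc) reconstruction of a band-limited signal from its samples yields a smooth curve, with only a small error from truncating the ideally infinite sum.

```python
# A minimal sketch, assuming nothing beyond numpy and a made-up 1 kHz tone:
# reconstructing a band-limited signal from its samples gives a smooth
# curve, not steps. A real DAC approximates this ideal reconstruction.
import numpy as np

fs = 8_000          # sample rate (Hz), comfortably above Nyquist for the tone
tone = 1_000        # 1 kHz test tone, well below fs / 2
n = np.arange(64)   # a short run of sample indices
samples = np.sin(2 * np.pi * tone * n / fs)

# Dense "analog" time axis on which to evaluate the reconstruction
t = np.linspace(0, (len(n) - 1) / fs, 10_000)

# Whittaker-Shannon interpolation: one shifted sinc kernel per sample
recon = np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

# Compare against the original continuous tone away from the edges, where
# truncating the (ideally infinite) sinc sum has the least effect
mid = (t > 10 / fs) & (t < 54 / fs)
err = np.max(np.abs(recon[mid] - np.sin(2 * np.pi * tone * t[mid])))
print(f"max interior reconstruction error: {err:.2e}")
# prints a small error relative to the unit-amplitude tone; the curve is
# smooth everywhere, and the error shrinks as more samples are included
```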

To many, however, the more “realistic” and “lifelike” sound of analog audio is evidence of a format superior at accurate audio reproduction. This rests on an erroneous conflation of two terms that should be kept distinct: “fidelity” and “sound quality.” Fidelity describes the degree of accuracy with which a medium recreates a sound. Sound quality, by contrast, is subjective; it’s not a measurement but an indication of preference. A piece of music could be of very low fidelity yet present beautiful sound quality. For example, suppose we’re working with a piece of music with significant harshness in the upper mids. Converting that audio to digital and playing it back would yield extremely accurate (effectively transparent) fidelity, but the sound quality would still be uncomfortable. Transferring that same audio to a medium that softened the harshness in the upper mids would result in sound of technically lower fidelity but much more pleasing sound quality.
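To make the distinction concrete, here’s a toy sketch with invented numbers. It treats fidelity as something measurable (a signal-to-error ratio against the source, in dB) and compares a straight 16-bit digital copy with a hypothetical medium that tames the harsh upper mids. The “harshness” tone and the softening are stand-ins I made up, not real recordings or gear.

```python
# A toy sketch of the fidelity / sound-quality split, with made-up signals:
# fidelity is measurable; "pleasantness" is not.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs                            # one second of audio
fundamental = np.sin(2 * np.pi * 440 * t)         # the "music"
harsh = 0.4 * np.sin(2 * np.pi * 3_500 * t)       # grating upper-mid content
source = fundamental + harsh

def fidelity_db(reproduction, reference):
    """Signal-to-error ratio in dB: higher means truer to the reference."""
    err = reproduction - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

# 16-bit quantization is the only error the digital copy introduces
digital_copy = np.round(source * 32767) / 32767
# a hypothetical "softening" medium that tames the harshness
softened = fundamental + 0.15 * np.sin(2 * np.pi * 3_500 * t)

print(f"digital copy:  {fidelity_db(digital_copy, source):.1f} dB")  # very high fidelity
print(f"softened copy: {fidelity_db(softened, source):.1f} dB")      # far lower, arguably nicer
```

The softened copy measures much worse, yet it’s the one many listeners would prefer; the number captures fidelity, not preference.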

Many listeners experience something akin to the second example but interpret it incorrectly. Not understanding the difference between fidelity and sound quality, they take the more “musical” and “lifelike” presentation of analog audio as evidence of a format that is “truer to the source,” and thus a higher-fidelity medium. This is technically incorrect, though from another perspective it’s not entirely off base.

We must keep in mind that much of the audio equipment used today was developed and popularized during an era in which vinyl records were the dominant format. Engineers were well aware of the distortions vinyl introduced and often acted to compensate for them. To illustrate the effect of this compensation, imagine a scale with “cold” at one pole, “warm” at the other, and “natural” in the middle. Now imagine a microphone; call it “Microphone A.” Suppose Microphone A, a high-quality dynamic mic, was used in the recording of a popular hit in the 1970s. Not only was the song a commercial success, but listeners and engineers alike marveled at its lush, natural, realistic sound as reproduced by their turntables. Because analog media distort the audio they carry, softening and warming it, a recording has to start out relatively “cold” and “clinical” in order to play back as “natural” and “realistic” on vinyl.

So it could be said that Microphone A, known for producing natural and realistic-sounding albums, actually has a somewhat colder, more clinical sound before the vinyl pressing process softens it. The end result is a tone that lands in between: natural and realistic.

Now suppose that, owing to that success, the microphone has remained in use to this day. Digital audio doesn’t impose the softening distortions of analog media, so the same microphone that sounded natural and realistic on vinyl now sounds somewhat “cold” and “clinical.” Technically, the digital recording is higher fidelity; it more accurately captures “the sound” of the microphone. But that doesn’t mean it presents a more pleasing sound quality, nor the microphone’s sound as the engineer intended it.
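As a back-of-the-envelope illustration of that compensation argument, here’s a sketch using a made-up one-dimensional tone scale: negative for cold, zero for natural, positive for warm. The +1 “warmth” assigned to vinyl is an invented figure chosen only to make the arithmetic visible, not a measurement of anything.

```python
# A toy model of medium compensation on an invented "tone" scale:
# negative = cold/clinical, 0 = natural, positive = warm.
VINYL_WARMTH = 1.0    # made-up softening/warming coloration of the analog chain
DIGITAL_WARMTH = 0.0  # digital playback adds (essentially) no coloration

def perceived_tone(source_tone, medium_warmth):
    """What the listener hears: the source's tone plus the medium's coloration."""
    return source_tone + medium_warmth

# "Microphone A": voiced cold so that vinyl's warmth lands it at natural
mic_a = -1.0

print(perceived_tone(mic_a, VINYL_WARMTH))    # 0.0  -> "natural, realistic" on vinyl
print(perceived_tone(mic_a, DIGITAL_WARMTH))  # -1.0 -> "cold, clinical" on digital
```

The arithmetic is trivial, but that’s the point: remove the medium’s coloration, and a source voiced to compensate for it no longer lands at “natural.”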