
Last week we discussed some of the inherent problems with sub-bass frequencies and how to deal with them.  One of the major issues is how sounds in that bandwidth lack specificity.  One instrument’s rumble, boom, and thud sound pretty similar to any other instrument’s.  For the frequency bands above the sub, we have to start talking about fundamentals, overtones, harmonics, and formants in order to properly appreciate some of the roles each portion of the audible spectrum plays in our interpretation of sound.

 

Since most of our clients and readers deal at least some of the time in the digital domain, chances are you’ve seen a complex waveform that looks something like this:

[Figure: a complex audio waveform]

In simple terms, waveforms of this type are the summation of various component frequencies.  In the illustration below, you see how a simple sine wave becomes more complex by the addition of harmonics:

 

[Figure: a simple sine wave growing more complex as harmonics are added]

The waveform starts with the fundamental frequency.  This is the lowest frequency present in the waveform that falls within the harmonic series.  When you play the 440 Hz A on the piano, 440 is really just the frequency of the fundamental, not the only frequency present.  Other frequencies are created when you play notes on almost any instrument in any environment—these additional frequencies beyond the fundamental are what help us distinguish one instrument from another.  Those that are above the fundamental are called overtones or upper partials.  Overtones that are integer multiples of the fundamental are called harmonics.
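
To make this concrete, here is a minimal Python sketch (the function name and amplitude values are my own illustration, not anything standardized): summing sine waves at integer multiples of the fundamental produces exactly the kind of complex waveform shown above.

```python
import math

def harmonic_wave(fundamental_hz, amplitudes, t):
    """Evaluate a waveform at time t (seconds) by summing the fundamental
    and its integer-multiple harmonics; amplitudes[0] scales the fundamental,
    amplitudes[1] the 2nd harmonic (2x the fundamental), and so on."""
    return sum(a * math.sin(2 * math.pi * fundamental_hz * (n + 1) * t)
               for n, a in enumerate(amplitudes))

# A 440 Hz fundamental plus progressively quieter harmonics at 880 and 1320 Hz:
sample = harmonic_wave(440.0, [1.0, 0.5, 0.25], t=0.001)
```

Each amplitude you add changes the shape of the summed wave without changing the perceived pitch, which is why instruments with the same fundamental can look (and sound) so different.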

 

 

There can also be lower partials or undertones, though these are slightly less common.  And there are also sub-harmonics, which follow the pattern (1/n) × fundamental.  That is to say ½ × 440 Hz, ⅓ × 440 Hz, ¼ × 440 Hz, and so on.
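
As a quick sketch (the helper name is my own), the sub-harmonic series of that 440 Hz A works out like this:

```python
def subharmonics(fundamental_hz, count):
    """Sub-harmonics follow (1/n) x fundamental for n = 2, 3, 4, ..."""
    return [fundamental_hz / n for n in range(2, count + 2)]

subharmonics(440.0, 3)  # 220 Hz, ~146.7 Hz, 110 Hz
```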

 

Existing both above and below the fundamental are formants:  acoustical resonances that, on an instrument, will sound no matter what note is played.  For a violin, one formant of the instrument is a frequency whose nodes lie on opposite ends of the length of the violin.  Any vibration from any note stimulates the violin body itself to resonate, and that frequency sounds as well.

 

Formants and overtones are some of the things that allow us to distinguish a 440 Hz A on the piano from a 440 Hz A on a synthesizer, a singer, a violin, or a drum.  They also help us separate a Yamaha from a Stradivarius.

 

So if I were to hit that 440 Hz A on a piano, I would generate several frequencies:  the fundamental at 440 Hz; harmonics at 880, 1320, 1760, etc.; as well as whatever formants are present in that specific instrument.
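
Those harmonic frequencies are just integer multiples of the fundamental, which is trivial to generate (a tiny illustration of my own, not a standard routine):

```python
def harmonics(fundamental_hz, count):
    """First `count` harmonics; the 1st harmonic is the fundamental itself."""
    return [fundamental_hz * n for n in range(1, count + 1)]

harmonics(440.0, 4)  # [440.0, 880.0, 1320.0, 1760.0]
```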

 

The ratio of these frequencies relative to each other is what makes a characteristic sound.  So for instance, a guitar with nickel-wound strings might sound that very same 440 Hz A but have more emphasis on the odd-numbered harmonics, whereas a guitar with nylon strings might hit that same 440 Hz A and have more emphasis on the even-numbered harmonics.  Similarly, the nickel-stringed guitar might have a formant at 900 Hz and the nylon might have a formant at 4200 Hz.

 

You can see that when dealing with overtones and formants, you can very quickly span the entire audio spectrum.  That's why, if you get yourself a spectrum analyzer or even one of the nice plugin digital EQs out there, you'll see that hitting any note on any instrument produces many more frequencies than just the fundamental of the note you hit.

 

When we talk about treating the bass, mid, and upper frequency bands over the next few weeks, you’ll see how important overtones and formants are to audio perception.

More from Phil’s Audible Spectrum series:

Last week we started examining component parts of the audible spectrum.  Of those component parts, perhaps none is more misunderstood and mishandled than the sub.  Perhaps it’s all those cars with bumpin’ sound systems out there, but it seems like everyone wants to cram as much “sub” as they can in the mix.  Just make sure you know what you are asking for!

 

Firstly, I just have to provide a disclaimer:  I think any car with a big subwoofer in the back sounds terrible.  Outside my studio someone was parked blaring some Lady Gaga tune or something like that, and all I could hear was the sub.  I could hear it distinctly, too, despite being three walls and a hundred yards away.  I can't help but think about how badly those people are destroying their ears.  Moreover, it just plain doesn't sound good to me.

 

As I mentioned last week:  for practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by subwoofers and large-format PA/sound reinforcement systems, so even if you have a million-dollar audio setup and can hear all the way down to 20 Hz, realize that 90% of your fans still won't hear that.  As I mentioned in the Limitations article, most of this won't be reproduced by any consumer-grade sound system.

 

Moreover, the sub is for audio content that lacks position specificity.  If you’ve ever seen a surround sound set-up before, you know that there are 5 speakers (LCR and two rears) plus a single sub-woofer.  Sub frequencies are very difficult to locate spatially and will more or less sound like they are coming from the same place no matter the position of the loudspeaker.  This is why surround sound setups don’t also require 5 separate subs.  A single sub placed in the center will suffice for all positions in the surround soundstage.  Because of this, too much sub content will turn into a big muddy bass because there is no real way to separate the rumble of the kick from the rumble of the bass or the rumble of the synth.

 

In order to get a focused sounding sub—the kind that moves you in the club or the kind that is noticeable (in a pleasant way) in home hi-fi systems that can actually reproduce those frequencies—you need to alter your thinking about the sub.  Don’t think of it as a separate frequency band that needs to stand on its own merit or be equal to the other frequency bands.  In fact, it helps even more to think of it as a garnish on the bass.  Something to help emphasize the bass, but not overpower it or stand on its own. 

 

If your bass lives in the bass and mid-range frequencies, adding in the sub should make it stand out all the more.  But the bass should not be expressly confined to the sub regions.

 

Furthermore, a sub bass is more clearly defined by what is NOT in it and for how long.  Imagine a band consisting of a drummer, a bass player, a synth player, and maybe a string orchestra—rocking out 80s arena style.  With all of those instruments you have the OPTION of including all that information in the sub:  the kick drum, the bass formants, the synth sub-harmonics, and the orchestra formants.  There would also be additional room tones and environmental sounds all going into the sub.  Since the sub has no position specificity and because sounds are distinguished from each other predominantly by upper harmonics, the sub sounds will be big and washed out because you won’t easily be able to tell the sub-bass components of each instrument apart from each other.

 

This introduces the problem of muddled bass.  A kick drum that is short in duration might get buried by the longer notes of the orchestra and synthesizer, so you might get more sub overall but lose the clarity of the kick.  In these sub ranges, sounds are really just rumble and boom, so the only way to tell things apart is by relative volume and note duration.  Cramming all that material together as in the example above obscures exactly that, creating an audible but unusable low-frequency noise floor.

 

Instead, it is preferable to be selective about what makes it to the subwoofer.  That's how you really draw emphasis and get the most out of the sub frequencies.  Make the drummer sound like John Bonham by putting a high-pass filter over the mix at 45 Hz and bypassing the filter on the kick drum track, or maybe the entire drum set.  Then the kick drum really gets beefy and the rest of the ensemble doesn't cloud that portion of the spectrum.
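
That routing can be sketched in code.  This is a toy first-order high-pass filter, much gentler than the steep filters you'd use in a real DAW, and all the names here are my own illustration:

```python
import math

def highpass(samples, cutoff_hz, sample_rate=44100.0):
    """First-order RC high-pass: y[i] = a * (y[i-1] + x[i] - x[i-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(a * (out[-1] + samples[i] - samples[i - 1]))
    return out

def mix_with_kick_bypass(tracks, bypass=("kick",)):
    """Sum all tracks, high-passing everything at 45 Hz except bypassed ones."""
    filtered = {name: (s if name in bypass else highpass(s, 45.0))
                for name, s in tracks.items()}
    length = min(len(s) for s in filtered.values())
    return [sum(s[i] for s in filtered.values()) for i in range(length)]
```

The kick track keeps its sub content untouched while every other track's energy below 45 Hz decays away before the tracks are summed.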

 

If you have a ticky kick drum like Metallica, you could instead opt to make the bass guitar or orchestra super fat by sending that to the sub instead.  The point is to be selective about what information makes it to the woofer so as not to obfuscate the sonic image with unnecessary clutter.

 

Additionally, as I’ve mentioned before, it is imperative to understand that low frequencies are extremely power dense.  So if you are actually “hearing” anything below 40 Hz, you are taking up way too much of the power spectrum.  This will blow out speakers, distort channel strips, and otherwise yield bad mixes.  And in a closed system, this extreme bass content (which is barely audible) will steal precious headroom from the more important frequencies.

 

The important take-home lessons:  don't expect your listeners to hear the sub-bass.  For the majority of them, the sub doesn't exist at all.  For the rest, be selective with the sub-bass content to make sure you are actually using the woofers properly.


Over the past two weeks we have been discussing items pertaining to the audio spectrum at large.  In this article we’ll begin breaking down the audio spectrum into its component parts.  Though we disagree a bit on our subdivisions, Jay’s primer has excellent listening examples to hear each section individually.

 

Generally speaking, sounds can be lumped into three basic segments of the audio spectrum:  Bass, Mid, and Treble. 

 

The associated ranges would be approximately:

 

Bass: 25 to 300 Hz

Mids: 300 Hz to 2.4 kHz

Treble: 2.4 to 20 kHz

 

Additionally, they can further be broken down in numerous ways depending on how people want to define sections:

 

Sub: 25 to 45 Hz

Bass: 45 to 300 Hz

Low-Mid: 300 to 600 Hz

Mid: 600 Hz to 1.2 kHz

High-Mid: 1.2 to 2.4 kHz

Treble: 2.4 to 15 kHz

Super Treble: 15 kHz to ~100 kHz
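
These subdivisions are easy to encode.  The sketch below uses the boundaries listed above; the function and constant names are my own:

```python
BANDS = [
    ("Sub", 25, 45), ("Bass", 45, 300), ("Low-Mid", 300, 600),
    ("Mid", 600, 1200), ("High-Mid", 1200, 2400),
    ("Treble", 2400, 15000), ("Super Treble", 15000, 100000),
]

def band_of(freq_hz):
    """Name the band a frequency falls in, or None if it is out of range."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return None
```

Note that by this scheme the 440 Hz A discussed throughout this series lands in the Low-Mid band, not the Bass.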

 

This Interactive Frequency Chart, much like the Carnegie Chart in the earlier article, will help you understand how the frequency ranges match up with practical instrumentation.

 

For practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by subwoofers and large-format PA/sound reinforcement systems.  Some of this is undesirable—if you've ever watched an NFL game on a windy day with a system that has a sub, pretty much everything is a big bass wash because of low-frequency wind noise.  We'll go more in depth on that next week.

 

Bass should be reserved for the fundamental notes of the changes.  That is, the lowest-sounding note of each chord in the progression.  This typically would include all the notes that would normally be played by a bass (Victor Wooten excluded).  It would also include bass-playing synths and, in many instances, the left hand of the piano.

 

The Low-Mids and Mids include fundamental notes for melodic instruments as well as the first few orders of harmonics.  Harmonics help us distinguish sounds from each other and play a very important role in presence and clarity.  More on this when I examine the mid frequencies in two weeks.

 

The High-Mids deserve their own category because these frequencies contain sudden transient content.  For percussion, this would be the sound of sticks or mallets hitting the drum heads and cymbals.  For guitarists, this would be the sound of picks striking strings.  For vocalists, this would be the sound of hard consonants and sibilance.  All of these can be problematic, but they also contribute greatly to the impression of presence.

 

The treble portion of the audio spectrum contains almost nothing but upper harmonics of treble instruments and room tone.  This helps lead instruments and vocals sound present and full, but also adds brightness and clarity to a mix.

 

Over the next few weeks I’ll go into greater detail on problems with each part of the frequency spectrum.


Recording 101 teaches us that the audio spectrum is 20-20,000 Hz and that it is our job as recording engineers to manage those frequencies.  For introductory-level classes, that is a usable definition, but it often leads to misunderstandings.  Do we hear 20 Hz as much as 20,000 Hz?  Do we hear those frequencies as well as 2,000 Hz?  The answer to both is no.  In fact, given contemporary technological limitations, it isn't even possible to reproduce most of that range faithfully.

 

For those of you who read Jay's Primer on Audio Frequency Bands and made it all the way to the bottom, you would have read some interesting things about broadcast standards and encoding algorithms.  Broadcast standards here in the US actually cut off frequencies above 15 kHz.  That is, radio and television broadcasts don't even bother with the top 5000 Hz of the audible spectrum!  If there were such a thing as radio anymore, you'd know to laugh off any audio engineer who promises you "radio quality mixes."  Also, cutoffs are employed in almost all digital encoding algorithms in order to prevent aliasing of upper frequencies.

 

On the other end of the spectrum, most playback systems are not designed to go below 30 Hz.  Currently, the lowest frequency reproducible by any JBL system comes from a live sound reinforcement loudspeaker whose woofer goes down to 25 Hz.  They also have consumer and studio woofers with roughly the same specs.  You'll notice that these are all woofer systems and not standard speakers for desktop and meter-bridge monitoring.  Standard studio monitors without a woofer fall off sharply at ~45 Hz.  With this in mind, you shouldn't expect to hear anything below 40 Hz on a standard system without a woofer.  Furthermore, about 90% of your audience will not be able to physically reproduce anything below 50 Hz given the standard consumer setup.

 

This is not to downplay the psychological impact of low or high frequencies.  These play a very important role in psychoacoustics.  Low-lows, though inaudible, help us perceive lowness partially through feel rather than sound.  High-highs also help us perceive presence and therefore clarity by giving more emphasis to the minutiae of a sound that you’d only hear by being close to it in the real world.

 

Next week, I’ll clearly define the component regions of the audio spectrum and talk about the various ways to treat undesirable maladies afflicting them individually.


The Audible Frequency Spectrum, Part 1 (Producer Speak)

Posted by Fix Your Mix on April 19, 2009

Over the course of hundreds of interactions with clients through Fix Your Mix, both in a mixing and mastering capacity, I have noticed that there is a great disagreement out there on the practical frequencies in audio.  This is strange to me because we have such a vague lexicon for our enterprise (boomy, boxy, tinny, etc.) that you’d think we’d all latch on to terms with such defined parameters as Low, Low-Mid, High, et al.

 

But nevertheless, every couple months I get a client who says “I love the mix, but I’d really like to hear more bass, can you boost 10 Hz by like 5 dB?”  So for all of you loyal readers out there and as a reference for future clients, I have composed a series of articles describing the portions of the frequency spectrum.

 

Here is an excellent primer for discussing frequency ranges. Jay works in post-production (television, film, etc.), so his end goals are different from those of us in the music business. He also neglects to emphasize the importance of upper frequencies for imbuing a recording with presence, clarity, and professional quality.  But other than that it is an excellent breakdown of the frequency bands.  For this week though, we’ll be talking about the audible frequency spectrum at large.

 

The audible frequency range is generally accepted to run from 20 to 20,000 Hz.  Some people hear more, most people hear less.  However, it is important to understand that this broad frequency range is supposed to include the frequencies that the average person is physically able to hear.  For the purposes of experimentation, frequencies outside of the range can be heard, but they have to be amplified to such an extreme that they are not worth measuring.

 

[Figure: Fletcher-Munson Equal Loudness Curves]

This is the Fletcher-Munson Equal Loudness Curve, established in 1937.  It is probably the most cited graph in psychoacoustics (the Robinson-Dadson Equal Loudness Curve of 1956 has been shown to be more accurate, but since Fletcher-Munson is the most widely used, the following commentary will focus on it).  The graph plots sound pressure level (SPL, in dB) against frequency, and each curve is labeled with its loudness level in phons.  The lines indicate equal apparent loudness.  That is, if you were to follow each line from 20 Hz to 20 kHz, you'd see the variation in amplitude necessary to make every frequency sound equally loud.  For example, on the top curve, take 1000 Hz sounding at 120 dB SPL as the baseline.  In order to hear 20 Hz at the same apparent level, you'd have to amplify it to roughly 130 dB SPL.  The same goes for 20 kHz.

 

Another interesting phenomenon in this curve is how exaggerated the differences become at lower amplitudes.  For instance, on the 20-phon curve (the third line from the bottom), 1000 Hz sits at 20 dB SPL, but a low frequency like 20 Hz has to be reproduced at almost 80 dB SPL to sound at the same apparent level.

 

Now bear in mind, this is not to say that you want to go and quadruple your bass content to get a booming mix.  On the contrary, this is to say that you really shouldn't expect to hear anything beyond certain points in the mix.  In almost all instances of music recording, there will be frequency content below easy audibility.  The point of mixing is not necessarily to make it audible.  Sometimes these frequencies are meant to be felt rather than heard.  Other times, these frequencies don't really add much to the mix at all—eating up large portions of the usable power spectrum and overloading your mix with unnecessary content that either will hurt fidelity due to digital encoding or broadcast algorithms, or will be cast off anyway due to physical limitations of sound reproduction systems.

 

[Figure: frequency ranges for common instruments and their notes as shown on a piano]

Here is a graph of all the frequency ranges for common instruments and their notes as shown on a piano.  What you'll notice is that the range for a concert bass is from ~90 Hz to ~350 Hz.  The absolute lowest note on the piano is around ~28 Hz, and that is a note that you will likely never hit.  Practically all the action in musical instruments occurs between 60 and 5000 Hz.  Allowing for formants, harmonics, and other sonic phenomena outside of the fundamental frequency of the note, it is safe to say that practically all usable and desirable sounds fall within 20-20K, and that range could even reasonably be made smaller.

 

In next week’s article I will examine these specific limitations and discuss why the low frequencies are the most problematic.

