
Archive for the ‘Producer Speak’ Category

As previously defined, the low-mid portion of the audible spectrum runs from about 300 Hz to 600 Hz and contains mostly the fundamental frequencies of non-bass instruments.  This is the comfortable middle range for vocalists and the standard range for guitars, horns, strings, and other instruments.

 

It is also the range where the first few harmonics of the lower-frequency instruments sound and give those instruments their character.  In sparser mixes, these upper frequencies can be altered to help separate the bass from the kick and so on.  However, this is also where a lot of buildup occurs due to orchestration, so don’t bank on these frequencies bailing you out of bass problems in a dense mix.  I’ll speak at more length about harmonics and how they can help you in the next article, on the mid frequencies.

 

For the voice, most of the power and audibility comes from this range, since it contains the distinct vowel sounds that vocalists latch on to.  While this is an important range in dialogue and speech, it is also vitally important in music, since vowels are what allow singers to elongate words.  Think about it: when you want to hold out a syllable, it is almost always the vowel sound that gets held.  It’s pretty difficult to lengthen a P or D sound, and holding out an S just sounds sibilant.  So for clear vocals, it is pretty important not to muck up this frequency band.

 

This is easier said than done.  A lot of indie rock musicians have problems with this range.  Being a self-professed indie rock snob, I say this without any intended slight:  most indie rockers are not necessarily the most virtuosic musicians.  You can hear it in Caleb Followill’s vocals and Nick Drake’s guitar playing and Meg White’s drumming.  It isn’t that they are bad or that they don’t write good music.  I love their music and they get the point across.  Let’s just say they aren’t necessarily in the realm of Yo-Yo Ma or Mozart.

 

The truth is that most musicians who don’t perform a bunch of acrobatics like to stay squarely in this “comfortable” range when playing, and that can really cloud the mid-range in a song.  If an untrained keyboard player lays down a keyboard track, chances are they’ll circle middle C.  Weaker vocalists tend to stick to this comfortable range too, as do guitar players, trumpeters, string players, and so on.

 

That’s another reason why solo musicians doing all the tracking themselves at home can struggle with their mixes.  They know that the bass is played way down at that end of the MIDI controller and everything else kind of sits in this middle range.  If you are using MIDI for everything, chances are you’ll play all your MIDI instruments the same way.

 

Spend enough time in studios and you start to develop a knack for feeling out musicians.  Horn players behave like horn players and therefore sound like horn players when they play.  Drummers behave like drummers and usually sound like drummers when they play.  Singers and string players and harmonica players and everybody else roughly act in the same manner and have a certain personality that is evident in their playing.  If a horn player is programming all the different instruments on a MIDI keyboard, he might find himself in a rut because all the instruments are playing parts the way a horn player would, instead of all these different personalities bouncing off each other.

 

The upshot is that if you are doing everything yourself at home and you aren’t well versed in orchestration or how certain instruments sound and play and how they do that in relation to other instruments, you might end up with a big pile of mid-range instrumentation that obscures the vocal as well as the other instruments.

 

So it is important to bear this in mind while writing and try to compartmentalize the various parts into different regions of the frequency range so that they don’t interfere with each other.  Keep the horns high and the guitar low and the vocal all by its lonesome.

 

Of course this isn’t always possible, so to address it you might emphasize certain frequencies in this band in some instruments and not in others.  For instance, if a guitar is playing rhythm chords and a piano is chucking along as well, you might boost the guitar at 450 Hz and make the opposite cut in the piano.  This doesn’t need to be a drastic EQ, just enough to relegate each instrument to its own portion of the range.
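
If you like experimenting outside the DAW, here’s a minimal sketch of that complementary-EQ move in Python, using the well-known RBJ Audio EQ Cookbook peaking-filter formulas.  The 450 Hz center, the ±2 dB gains, and the Q value are illustrative starting points, not prescriptions:

```python
# Complementary peaking EQ: nudge the guitar up and the piano down at the
# same center frequency so each instrument owns its own slice of the low-mids.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.4):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a)."""
    big_a = 10 ** (gain_db / 40.0)            # amplitude from dB gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * big_a, -2 * np.cos(w0), 1 - alpha * big_a])
    a = np.array([1 + alpha / big_a, -2 * np.cos(w0), 1 - alpha / big_a])
    return b / a[0], a / a[0]                 # normalize so a[0] == 1

fs = 44100
b_gtr, a_gtr = peaking_eq(fs, 450.0, +2.0)    # gentle boost on the guitar
b_pno, a_pno = peaking_eq(fs, 450.0, -2.0)    # matching cut on the piano

guitar = np.random.randn(fs)                  # placeholders for real tracks
piano = np.random.randn(fs)
guitar_eq = lfilter(b_gtr, a_gtr, guitar)
piano_eq = lfilter(b_pno, a_pno, piano)
```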

 

Much like the bass register, this is a limited range of frequencies to work with, so you might also want to try treating the upper harmonics, which will give you much more room to play with.  These come into play in the next portion of the audio spectrum:  the mid frequencies.

The audio world can be a frustrating one for many reasons.  From buzzing headphones to crackling pres, our world is rife with little nuisances.  However, the most frustrating thing for me by far is how inexact our nomenclature is.  As a profession, we have really done a disservice to ourselves by not having a standardized and precise language for our trade. 

 

Oh, how easy it would be if someone would walk into one of my mixes and say “Yah, it sounds good, but there is a little too much 2.7 kHz, can you back that down a little?”  Instead, we are left with inexact jargon like “It’s a little harsh, can you do something about that?”  Of course most of us aren’t skilled enough to know exact frequencies without the necessary equipment, present company included.  So it would be ridiculous to say that we should all speak more precisely from now on. 

 

Instead, I will compile a list on this site, over time of course, that enumerates the various inexact terms I encounter in my career and what I would do to remedy them.

 

The first list here is for bass register terms.  Some of this comes with the help of Bruce Bartlett’s Practical Recording Techniques.  Feel free to respond back with more if you can think of them and I’ll try to include them.

 

Ballsy:  Emphasis on frequencies below 300 Hz, but only on mixes with distinct sounds between the bass instruments so as not to be muddy.

 

Bloated:  Emphasis on frequencies below 300 Hz, but with indistinct sounds.  Muddy with low frequency resonances.

 

Boomy:  Too much bass at 125 Hz.  This is often caused by sudden sounds that cause large excursions in the woofer reproducing the sound.

 

Boxy:  Low frequency resonances like being in a box.  Mainly resonances in the upper portion of the bass register from 200-300 Hz since boxes are too thin to adequately hold in low-lows.

 

Chesty:  This obviously refers to recordings of vocalists.  The chest is where the low frequencies reside, especially the native resonances.  It is relatively easy to address because adult chest cavities are roughly the same size, so a simple EQ cut somewhere between 120 and 250 Hz should do the trick.

 

Dark:  This usually is a term used in comparison to the upper frequencies.  As such, either decreasing the lower frequencies including the fundamentals or increasing the upper frequencies with an emphasis on harmonics can remedy the problem by evening out the response across the board.

 

Dull:  Along with dark, this usually means too much low register content in comparison to upper frequencies.  The upper frequencies are where you get words like “lively” and “bright” so again, the problem can be remedied by de-emphasizing fundamentals and low frequencies in comparison to the upper harmonics.

 

Ground Noise:  Constant hum between 50 and 70 Hz, though it can be extremely broad spectrum.  If possible, filter it out, but it is often best addressed in tracking by using a ground lift or isolation transformer.

 

Muddy:  Too much competing low frequency content in the bass register.  Try carving out a portion of the spectrum for each instrument and cutting unnecessary bass-range frequencies in the others.

 

Rumble:  Relatively constant sound between 25 and 40 Hz.  Often caused by AC or other environmental sounds.  Easily addressed with a high-pass filter.
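
As a quick sketch of that high-pass fix in Python with scipy; the 45 Hz corner and fourth-order Butterworth slope are assumptions to tune by ear, not fixed rules:

```python
# Rumble sits roughly at 25-40 Hz, so a high-pass filter with its corner
# just above that band strips it while leaving the musical bass alone.
import numpy as np
from scipy.signal import butter, sosfilt

def remove_rumble(x, fs, corner_hz=45.0, order=4):
    """High-pass a mono track to remove content below corner_hz."""
    sos = butter(order, corner_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

fs = 44100
track = np.random.randn(10 * fs)   # placeholder for a real recording
cleaned = remove_rumble(track, fs)
```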

 

Thumpy:  Similar to boomy, but with the sudden excursions emphasized between 20 and 50 Hz.

 

Tubby:  Low frequency resonances, like boxy, but with more bass collection (bathtubs are more reverberant than boxes and contain low frequencies better due to their density and thickness).  Try EQing out low frequencies or using a high-pass filter.

 

Warm:  As it pertains to bass, having good bass response without overpowering higher frequencies and without being overpowered by them.  On a scale: dull/dark, warm, bright.

As previously mentioned, the bass portion of the audible spectrum runs from 20 Hz to about 300 Hz.  Setting aside the previously discussed sub-bass portion of this frequency band (frequencies 45 Hz and below), we can say that the bass portion of the spectrum should be reserved primarily for the fundamental frequencies of the roots of the chord changes in the song insofar as tonal content is concerned.  Of course this range should also incorporate low frequency sounds such as kick drums, toms, and even room tones.

 

Many of the biggest problems people encounter in tracking, mixing, and mastering occur squarely in this region.  Terms like muddy, boomy, and woofy all deal explicitly with the bass region.  We all want “big bass” with lots of thunderous kick drums and thumpin’ bass lines, but unfortunately the arithmetic is not so simple as “turn them all up.”  As many of you following along at home might have already experienced, turning up all the bass instruments in your mix is a recipe for a muddy, distorted mess.

 

So how do we properly address these issues to get a decent sounding mix?  Well, first we need to take note of the frequency band that encompasses the bass portion and see how it compares to the other bandwidths:

 

Bass: 25 to 300 Hz

Treble: 2.4 to 20 kHz

 

Look at that again.  That says that the bass range has a bandwidth of about 275 Hz while the treble range has a bandwidth of almost 18,000 Hz!  No wonder we run into problems of indistinct bass but not indistinct top end.

 

In composition, there is something called the Lower Interval Limit.  This is a commonly held set of rules stating, based on the frequency of the lower note, how big an interval must be in order to sound clear and distinct.  For instance, if we were to use the 440 Hz A as our base note and play the C above it to form a harmonic interval of a minor third, we’d have a difference in frequency of 83.25 cycles per second (C5 is 523.25 Hz, so 523.25 − 440 = 83.25 Hz).  This is a difference that our ears can hear distinctly without hesitation, and we perceive it as a pleasant albeit sad sonority.

 

Now imagine that we started with the A four octaves down.  This A has a fundamental frequency of 27.5 Hz.  The minor third above it is a C with a fundamental frequency of 32.70 Hz.  That yields a much harder-to-distinguish difference of only 5.2 Hz.

 

Furthermore, the difference between that A and its nearest upper neighbor, A#, is only 1.64 Hz.  So even in a melodic context, it can sometimes be difficult to properly distinguish the two notes.
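
You can check this arithmetic yourself with the equal-temperament formula f(n) = 440 × 2^((n − 69) / 12), where n is the MIDI note number.  A quick sketch:

```python
# The Hz gap for the same musical interval shrinks drastically as you
# descend in octaves, which is the heart of the Lower Interval Limit.
def freq(n):
    """Equal-temperament frequency of MIDI note n (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((n - 69) / 12)

A4, C5 = 69, 72
A0, AS0, C1 = 21, 22, 24

print(freq(C5) - freq(A4))   # minor third at A4: ~83.25 Hz of separation
print(freq(C1) - freq(A0))   # the same minor third at A0: only ~5.20 Hz
print(freq(AS0) - freq(A0))  # a semitone at A0: a mere ~1.64 Hz
```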

 

As an aside, here is a handy-dandy list of the lowest notes generally accepted in order to have a properly sounding interval.  Bear in mind that these are only commonly held compositional standards and are free to be broken at any time:

 

Interval            Lowest Pitch    Second Pitch
Minor Second        E2              F2
Major Second        Eb2             F2
Minor Third         C2              Eb2
Major Third         B1              D#2
Perfect Fourth      A1              D2
Diminished Fifth    B0              F1
Perfect Fifth       C#1             G#1
Minor Sixth         F1              Db2
Major Sixth         F1              D2
Minor Seventh       F1              Eb2
Major Seventh       F1              E2

 

The first column is the desired interval.  The second column is the lowest note from which you can build that interval.  The third column is the corresponding note needed above the lowest pitch to complete it.

 

In my mind, muddiness occurs when too much bass information sounds simultaneously, creating a big mess of sounds too close together in frequency content.  This contributes to a washy, indistinct bass.

 

Generally speaking, the most common problem is figuring out how to separate the kick drum from the bass.  It is important to remember that even though the kick drum is often regarded as an atonal instrument, it still produces tonal frequencies and especially distinct fundamentals.  So if the bass and the kick drum are sounding in roughly the same range, our ears will be unable to distinguish the two sonorities.

 

One way to address this is by making sure that each instrument emphasizes different portions of the bass frequency band.  Ideally, these portions would follow the Lower Interval Limit.  For example, if the kick drum is tuned so that its fundamental sounds at about 60 Hz (which is roughly a B1), the bass should play no lower than D#2.  This way the fundamentals adhere to the lower interval limit theory and are reasonably sure to be clear and distinct sounds.
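
If you’d rather not do the note math by hand, here’s a small hypothetical helper that snaps a kick fundamental to its nearest note and applies the major-third spacing (four semitones) from the B1-to-D#2 example above:

```python
# Illustrative only: convert a kick's fundamental frequency to the nearest
# equal-temperament note, then report the lowest bass note a major third up.
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_midi(f_hz):
    return round(69 + 12 * math.log2(f_hz / 440.0))

def note_name(m):
    return NAMES[m % 12] + str(m // 12 - 1)

kick = nearest_midi(60.0)        # a kick tuned to roughly 60 Hz
print(note_name(kick))           # 'B1'
print(note_name(kick + 4))       # lowest safe bass note: 'D#2'
```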

 

While the Lower Interval Limit is not explicitly intended for this purpose (it is usually applied to harmonic intervals:  notes sounding at the same time, generally on the same instrument but not necessarily), the point is that creating distinguishable sonorities is all about being able to hear distinct differences between sounds.  We want to avoid confusing our ears with sounds so close together that they muddle distinction.

 

This will obviously not solve all the problems.  If you’ve ever looked at the spectrum of a kick drum, you know that its content is very broad and not relegated simply to its fundamental frequency.  As such, it is further beneficial to deal with frequencies beyond the fundamental; however, most of that will be dealt with as we move up the audible spectrum into the mid ranges.  For now the focus is on what we can do specifically in the bass register to prevent problems.

 

That aside, there is always a big collection of frequencies that sound in the bass register on a kick drum.  This is due to the many resonances that aren’t perfectly in tune:  the beater head, the shell, the resonant head, not to mention all the nodes between the lugs on the head, which yield very dense and complex waveforms.  These frequencies can be so broad that they encompass a very large portion of the bass register and make the aforementioned solution pretty much impossible.

 

One way to address this strictly in the bass register is to EQ out whole sections of the kick drum to create space for the bass guitar.  If you know the key of the song, you can determine the lowest note the bass player might play.  Your job then is to carve out a nice chunk of the kick drum sound, not only covering the register the bass plays in but also the area below it, creating enough space that the two sounds are distinguishable from each other by the Lower Interval Limit.

 

Another aside:  doing these things may seem drastic, but bear in mind that your ultimate goal is NOT to create the best kick drum sound possible and the best bass guitar sound possible and add them together.  Instead, your goal is to create the best sounds for each instrument that work together so that they sound good together in the mix as a whole.

 

Another issue crops up when there are a whole bunch of overdubs playing in the same area.  It is difficult in the bass range for a bass guitar to sound distinct from a bass synth, and then to have both of them stand out from the kick, because you only have about 225 Hz to work with.  Layering a bunch of overdubs in the bass region can quickly lead to muddling.  One example that comes immediately to mind is an 808 kick drum, and trying to make it audible against a bed of kick drum and bass guitar.  The easy thing about 808s is that they are basically sine waves.  So it is easy to determine the note the 808 is sounding and game-plan the kick and bass around it.
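
Here’s what that looks like in practice, sketched in Python: because an 808 is nearly a pure sine, the biggest FFT peak gives you its fundamental, which you can then snap to a note name.  The `x808` array below is a stand-in for a real sample:

```python
# Find the dominant frequency of an (assumed) near-sinusoidal 808 sample
# and name the note so the kick and bass can be planned around it.
import numpy as np

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def dominant_note(x, fs):
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f0 = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]
    midi = int(round(69 + 12 * np.log2(f0 / 440.0)))
    return f0, NAMES[midi % 12] + str(midi // 12 - 1)

fs = 44100
t = np.arange(fs) / fs
x808 = np.sin(2 * np.pi * 55.0 * t)   # stand-in: a 55 Hz (A1) sine
print(dominant_note(x808, fs))        # -> (55.0, 'A1')
```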

 

But for more dense things like synthesizers, it can be more problematic.  As I mentioned in the harmonics primer, there are a whole bunch of other frequencies that sound at any given time from any instrument.  Some of these occur in the harmonic series, but many others like formants and native resonances do not.  Sometimes these can occur in clusters around the fundamental note and when that occurs, the extracurricular frequencies from the bass and the synth and kick all roll up together and make a big muddy mess.

 

The best way to address this is to avoid excessive overdubs in the bass register.  Another is to find what you can change easily, like the kick drum (since it is static), and treat it in a way that keeps it out of the way of the bass and other instruments.  For instance, you might EQ to emphasize the fundamental below the key of the song, and then EQ out portions from the root up about an octave and a half to keep it out of the way of the bass and other bass instruments.

 

Next week, I’ll post some common terms associated with bass problems with some quick tips on how to address them.

 

Then, I’ll examine some issues in the mid-range and further delve into how to mitigate problems associated with the bass as well as those unique to the mid-range.

Masking (Producer-Speak)

Posted by Fix Your Mix on May 28, 2009

Psychoacoustics plays a very important role in our everyday lives.  We are not affected by what we hear so much as by how our minds interpret what we hear.  For instance, right now you might think you are sitting in a perfectly silent environment.  But listen closer:  the whirr of your computer fan, the gentle hum of the air conditioner, your neighbors blaring all kinds of intolerable pop songs.  We can notice all kinds of ambient noise when prompted, but often our minds just let it go unperceived.  This is a good thing, because it keeps us from being disturbed by all the frivolous noise out there.  Our minds filter things out for us so that we don’t get bothered by them unnecessarily.

 

As professionals, amateurs, or hobbyists in the audio realm, we have to be more acquainted with psychoacoustic phenomena than the average Joe.  I have been discussing the sub-bass portion of the audible spectrum, which is the most demanding register in terms of its share of the power spectrum, and it brings up an important psychoacoustic phenomenon called masking.

 

From Sweetwater Sound’s wonderful Word For the Day dictionary:

 

When sounds that contain similar frequencies are played simultaneously, the weaker sound tends to have those overlapping frequencies covered – ‘masked’ – by the frequencies from the stronger sound (especially in a dense mix). The frequencies of the weaker sound are still there; they are just not discernable over the more dominant sound with the same frequencies.

 

This is exactly why it is important not to have too much information in the sub-bass region in particular.  The sub-bass is often an unusable portion of the audible spectrum, yet putting too much of it in a mix can cause it to mask neighboring frequencies in the bass register, leading to muddy, indistinct low end.

 

This becomes even more of an issue in digital audio due to encoding algorithms.  The designers of audio codecs, notably MP3s, use masking as a way of excising “unnecessary” portions of audio.  They have processes set up that detect masked frequencies and eliminate them from the mix.  These algorithms are necessarily imperfect since no single metric could feasibly fit all recorded music.

[Image: mp3-waves]

If you look at a spectrum analyzer on a full-spectrum mix, you can see that the sub-bass portion generally reads extremely loud even though you can’t hear most of it.  This means that, to an algorithm searching for masking phenomena, the sub-bass reads as the stronger sound; despite it being largely inaudible and unusable, the algorithm will preserve it at the expense of more important but less spectrally powerful portions of the audio spectrum.

 

As we’ll further explore next week, the entire bass region (including the sub-bass) is a relatively small region in terms of frequency bandwidth, so neighboring frequencies are very dependent on each other down there.  Masking can occur at any portion of the audio spectrum, but it is especially important in the bass region.

Last week we discussed some of the inherent problems with sub-bass frequencies and how to deal with them.  One of the major issues is how sounds in that bandwidth lack specificity.  One instrument’s rumble, boom, and thud sound pretty similar to any other instrument’s.  For the frequency bands above the sub, we have to start talking about fundamentals, overtones, harmonics, and formants in order to properly appreciate some of the roles each portion of the audible spectrum plays in our interpretation of sound.

 

Since most of our clients and readers deal at least some of the time in the digital domain, chances are you’ve seen a complex waveform that looks something like this:

[Image: waveform]

In simple terms, waveforms of this type are the summation of various component frequencies.  In the illustration below, you see how a simple sine wave becomes more complex by the addition of harmonics:

 

[Image: complex waveform (Desktop Music Handbook)]

The waveform starts with the fundamental frequency.  This is the lowest frequency present in the waveform that falls within the harmonic series.  When you play the 440 Hz A on the piano, 440 is really just the frequency of the fundamental, not the only frequency present.  Other frequencies are created when you play notes on almost any instrument in any environment—these additional frequencies beyond the fundamental are what help us distinguish one instrument from another.  Those that are above the fundamental are called overtones or upper partials.  Overtones that are integer multiples of the fundamental are called harmonics.
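
To make this concrete, here’s a short numpy sketch that builds a complex waveform by summing a fundamental and its first few harmonics, much like the illustration above.  The 1/k amplitude rolloff is an arbitrary choice for demonstration:

```python
# Additive synthesis: a 440 Hz fundamental plus four harmonics (880, 1320,
# 1760, 2200 Hz), each quieter than the last, summed into one waveform.
import numpy as np

fs = 44100
t = np.arange(fs) / fs          # one second of time stamps
f0 = 440.0                      # the A above middle C

wave = np.zeros_like(t)
for k in range(1, 6):
    wave += (1.0 / k) * np.sin(2 * np.pi * k * f0 * t)

wave /= np.max(np.abs(wave))    # normalize to avoid clipping
```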

 

 

There can also be lower partials, or undertones, though these are less common.  And there are also sub-harmonics, which follow the pattern (1/n) × fundamental.  That is to say, ½(440 Hz), ¼(440 Hz), etc.

 

Existing both above and below the fundamental are things called formants, which are acoustical resonances that, on an instrument, will sound no matter what.  For a violin, one formant of the instrument is a frequency whose nodes lie on opposite ends of the length of the violin.  Any vibration from any note stimulates the violin body itself to resonate and the aforementioned frequency sounds as well.

 

Formants and overtones are some of the things that allow us to distinguish a 440 A on the piano from a 440 A on a synthesizer, a singer, a violin, or a drum.  They also help us separate a Yamaha from a Stradivarius.

 

So if I were to hit that 440 Hz A on a piano, I would generate several frequencies:  the fundamental at 440 Hz; harmonics at 880, 1320, 1760, etc.; as well as whatever formants are present in that specific instrument.

 

The ratio of these frequencies relative to each other is what makes a characteristic sound.  For instance, a guitar with nickel-wound strings might sound that very same 440 Hz A but place more emphasis on the odd-numbered harmonics, whereas a guitar with nylon strings might hit the same 440 Hz A with more emphasis on the even-numbered harmonics.  Similarly, the nickel-stringed guitar might have a formant at 900 Hz while the nylon one has a formant at 4200 Hz.

 

You can see that when dealing with overtones and formants, you can very quickly span the entire audio spectrum.  That’s why, if you get yourself a spectrum analyzer or even one of the nice plugin digital EQs out there, you’ll see that hitting any note on any instrument produces many more frequencies than just the fundamental of the note you hit.

 

When we talk about treating the bass, mid, and upper frequency bands over the next few weeks, you’ll see how important overtones and formants are to audio perception.


Last week we started examining component parts of the audible spectrum.  Of those component parts, perhaps none is more misunderstood and mishandled than the sub.  Perhaps it’s all those cars with bumpin’ sound systems out there, but it seems like everyone wants to cram as much “sub” as they can in the mix.  Just make sure you know what you are asking for!

 

First, a disclaimer:  any car with a big subwoofer in the back sounds terrible to me.  Outside my studio someone was parked blaring some Lady Gaga tune or something like that, and all I could hear was the sub.  I could hear it distinctly, too, despite being three walls and a hundred yards away.  I can’t help but think about how badly those people are destroying their ears.  Moreover, it just plain doesn’t sound good to me.

 

As I mentioned last week:  for practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by subwoofers and large-format PA/sound reinforcement systems, so even if you have a million dollar audio setup and can hear all the way down to 20 Hz, realize that 90% of your fans still won’t hear that.  As I mentioned in the Limitations article, most of this won’t be reproduced by any consumer grade sound system.

 

Moreover, the sub is for audio content that lacks position specificity.  If you’ve ever seen a surround sound set-up before, you know that there are 5 speakers (LCR and two rears) plus a single sub-woofer.  Sub frequencies are very difficult to locate spatially and will more or less sound like they are coming from the same place no matter the position of the loudspeaker.  This is why surround sound setups don’t also require 5 separate subs.  A single sub placed in the center will suffice for all positions in the surround soundstage.  Because of this, too much sub content will turn into a big muddy bass because there is no real way to separate the rumble of the kick from the rumble of the bass or the rumble of the synth.

 

In order to get a focused sounding sub—the kind that moves you in the club or the kind that is noticeable (in a pleasant way) in home hi-fi systems that can actually reproduce those frequencies—you need to alter your thinking about the sub.  Don’t think of it as a separate frequency band that needs to stand on its own merit or be equal to the other frequency bands.  In fact, it helps even more to think of it as a garnish on the bass.  Something to help emphasize the bass, but not overpower it or stand on its own. 

 

If your bass lives in the bass and mid-range frequencies, adding in the sub should make it stand out all the more.  But the bass should not be confined exclusively to the sub region.

 

Furthermore, a sub bass is more clearly defined by what is NOT in it and for how long.  Imagine a band consisting of a drummer, a bass player, a synth player, and maybe a string orchestra—rocking out 80s arena style.  With all of those instruments you have the OPTION of including all that information in the sub:  the kick drum, the bass formants, the synth sub-harmonics, and the orchestra formants.  There would also be additional room tones and environmental sounds all going into the sub.  Since the sub has no position specificity and because sounds are distinguished from each other predominantly by upper harmonics, the sub sounds will be big and washed out because you won’t easily be able to tell the sub-bass components of each instrument apart from each other.

 

This introduces the problem of muddled bass.  A kick drum that is short in duration might get buried by the longer notes of the orchestra and synthesizer:  you get more sub overall but lose the clarity of the kick.  In these sub ranges, sounds are really just rumble and boom, so the only way to tell things apart is by relative volume and note duration.  Cramming everything together, as in the example above, obscures all of that and just creates an audible but unusable low-frequency noise floor.

 

Instead, it is preferable to be selective about what makes it to the subwoofer.  That’s how you really draw emphasis and get the most out of the sub frequencies.  Make the drummer sound like John Bonham by putting a high-pass filter over the mix at 45 Hz and bypassing the filter on the kick drum track, or maybe the entire drum set.  Then the kick drum gets really beefy and the rest of the ensemble doesn’t cloud that portion of the spectrum.
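
In code terms, that Bonham move might look like the sketch below: every stem except the kick passes through a 45 Hz high-pass before the sum, so only the kick occupies the sub region.  The track arrays are placeholders:

```python
# High-pass the whole band at 45 Hz but bypass the filter on the kick,
# leaving the kick alone to own everything below the corner frequency.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
sos = butter(4, 45.0, btype="highpass", fs=fs, output="sos")

kick = np.random.randn(fs)                        # placeholder stems
others = [np.random.randn(fs) for _ in range(3)]  # bass, synth, strings...

mix = kick + sum(sosfilt(sos, trk) for trk in others)
```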

 

If you have a ticky kick drum like Metallica, you could instead opt to make the bass guitar or orchestra super fat by sending that to the sub instead.  The point is to be selective about what information makes it to the woofer so as not to obfuscate the sonic image with unnecessary clutter.

 

Additionally, as I’ve mentioned before, it is imperative to understand that low frequencies are extremely power dense.  So if you are actually “hearing” anything below 40 Hz, you are taking up way too much of the power spectrum.  This will blow out speakers, distort channel strips, and otherwise yield bad mixes.  And in a closed system, this extreme bass content (which is barely audible) will steal precious headroom from the more important frequencies.

 

The important take-home lessons:  don’t expect any of your listeners to hear the sub-bass.  For the majority of them, the sub doesn’t exist at all.  For the rest, be selective with the sub-bass content to make sure you are actually using their woofers properly.


Over the past two weeks we have been discussing items pertaining to the audio spectrum at large.  In this article we’ll begin breaking down the audio spectrum into its component parts.  Though we disagree a bit on our subdivisions, Jay’s primer has excellent listening examples to hear each section individually.

 

Generally speaking, sounds can be lumped into three basic segments of the audio spectrum:  Bass, Mid, and Treble. 

 

The associated ranges would be approximately:

 

Bass: 25 to 300 Hz

Mids: 300 Hz to 2.4 kHz

Treble: 2.4 to 20 kHz

 

Additionally, they can further be broken down in numerous ways depending on how people want to define sections:

 

Sub: 25 to 45 Hz

Bass: 45 to 300 Hz

Low-Mid: 300 to 600 Hz

Mid: 600 Hz to 1.2 kHz

High-Mid: 1.2 to 2.4 kHz

Treble: 2.4 to 15 kHz

Super Treble: 15 kHz to ~100 kHz

 

This Interactive Frequency Chart, much like the Carnegie Chart in the earlier article, will help you understand how the frequency ranges match up with practical instrumentation.

 

For practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by subwoofers and large-format PA/sound reinforcement systems.  Some of this is undesirable:  if you’ve ever watched an NFL game on a windy day with a system that has a sub, pretty much everything is a big bass wash because of low-frequency wind noise.  We’ll go more in depth on that next week.

 

Bass should be reserved for the fundamental notes of the changes; that is, the lowest sounding note of each chord in the progression.  This typically includes all the notes that would normally be played by a bass (Victor Wooten excluded).  It also includes bass-playing synths and, in many instances, the left hand of the piano.

 

The Low-Mids and Mids include fundamental notes for melodic instruments as well as the first few orders of harmonics.  Harmonics help us distinguish sounds from each other and play a very important role in presence and clarity.  More on this when I examine the mid frequencies in two weeks.

 

The High-Mids deserve their own category because these frequencies contain sudden transient content.  For percussion, this is the sound of sticks or mallets hitting the drum heads and cymbals.  For guitarists, it is the sound of picks striking strings.  For vocalists, it is the sound of hard consonants and sibilance.  All of these can be problematic, but they also contribute greatly to the impression of presence.

 

The treble portion of the audio spectrum contains almost nothing but upper harmonics of treble instruments and room tone.  This helps lead instruments and vocals sound present and full, but also adds brightness and clarity to a mix.

 

Over the next few weeks I’ll go into greater detail on problems with each part of the frequency spectrum.


Recording 101 teaches us that the audio spectrum is 20-20,000 Hz and that it is our job as recording engineers to manage those frequencies.  For introductory level classes, that is a usable definition, but it often leads to misunderstandings.  Do we hear 20 Hz as much as 20,000 Hz?  Do we hear those frequencies as well as 2,000 Hz?  The answer to both is no.  In fact, given contemporary technological limitations, it isn’t even possible to reproduce most of that range.

 

For those of you who read Jay’s Primer on Audio Frequency Bands and made it all the way to the bottom, you would have read some interesting things about broadcast standards and encoding algorithms.  Broadcast standards here in the US actually cut off frequencies above 15 kHz.  That is, radio and television broadcasts don’t even bother with the top 5,000 Hz of the audible spectrum!  If there were such a thing as radio anymore, you’d know to laugh off any audio engineer who promises you “radio quality mixes.”  Also, cutoffs are employed in almost all digital encoding algorithms in order to prevent aliasing of upper frequencies.

 

On the other end of the spectrum, most playback systems are not designed to go below 30 Hz.  Currently, the lowest reproducible frequency from any JBL system is a live sound reinforcement loudspeaker whose woofer goes down to 25 Hz.  They also make consumer and studio woofers with roughly the same specs.  You’ll notice that these are all woofer systems, not standard speakers for desktop and meter-bridge monitoring.  Standard studio monitors without a woofer fall off sharply at ~45 Hz.  With this in mind, you should not expect to hear anything below 40 Hz on a standard system without a woofer.  Furthermore, about 90% of your audience will not be able to physically reproduce anything below 50 Hz given the standard consumer setup.

 

This is not to downplay the psychological impact of low or high frequencies.  These play a very important role in psychoacoustics.  Low-lows, though inaudible, help us perceive lowness partially through feel rather than sound.  High-highs also help us perceive presence and therefore clarity by giving more emphasis to the minutiae of a sound that you’d only hear by being close to it in the real world.

 

Next week, I’ll clearly define the component regions of the audio spectrum and talk about the various ways to treat undesirable maladies afflicting them individually.


The Audible Frequency Spectrum, Part 1 (Producer Speak)

Posted by Fix Your Mix on April 19, 2009

Over the course of hundreds of interactions with clients through Fix Your Mix, both in a mixing and mastering capacity, I have noticed that there is a great disagreement out there on the practical frequencies in audio.  This is strange to me because we have such a vague lexicon for our enterprise (boomy, boxy, tinny, etc.) that you’d think we’d all latch on to terms with such defined parameters as Low, Low-Mid, High, et al.

 

But nevertheless, every couple months I get a client who says “I love the mix, but I’d really like to hear more bass, can you boost 10 Hz by like 5 dB?”  So for all of you loyal readers out there and as a reference for future clients, I have composed a series of articles describing the portions of the frequency spectrum.

 

Here is an excellent primer for discussing frequency ranges. Jay works in post-production (television, film, etc.), so his end goals are different from those of us in the music business. He also neglects to emphasize the importance of upper frequencies for imbuing a recording with presence, clarity, and professional quality.  But other than that it is an excellent breakdown of the frequency bands.  For this week though, we’ll be talking about the audible frequency spectrum at large.

 

The audible frequency range is generally accepted to run from 20 to 20,000 Hz.  Some people hear more, most people hear less.  However, it is important to understand that this broad frequency range is supposed to include the frequencies that the average person is physically able to hear.  For the purposes of experimentation, frequencies outside of the range can be heard, but they have to be amplified to such an extreme that they are not worth measuring.

 

[Image: Fletcher-Munson equal loudness curves]

To the left is the Fletcher-Munson Equal Loudness Curve, established in 1937.  It is probably the most cited graph in psychoacoustics (the Robinson-Dadson curves of 1956 have been shown to be more accurate, but since Fletcher-Munson remains the most widely used, the commentary here will focus on it).  The graph plots sound pressure level (SPL) against frequency, and each line traces equal apparent loudness, measured in phons.  That is, if you were to follow one line from 20 Hz to 20 kHz, you’d see the variation in amplitude necessary to make each frequency sound equal in loudness.  For example, on the top curve, take 1000 Hz sounding at 120 phons as the baseline.  In order to hear 20 Hz at the same apparent level, you’d have to amplify it to about 130 dB SPL.  The same goes for 20 kHz.

 

Another interesting phenomenon in this curve is how exaggerated the differences become at lower amplitudes.  For instance, following the 20-phon contour (the third line from the bottom), a 1000 Hz tone sits at 20 dB SPL, while the lowest frequencies must be pushed to almost 80 dB SPL to sound at the same apparent level.

 

Now bear in mind, this is not to say that you should go and quadruple your bass content to get a booming mix.  On the contrary, it is to say that you really shouldn’t expect to hear anything beyond certain points in the mix.  In almost all instances of music recording, there will be frequency content below easy audibility.  The point of mixing is not necessarily to make it audible.  Sometimes these frequencies are meant to be felt rather than heard.  Other times they don’t really add much to the mix at all, eating up large portions of the usable power spectrum and overloading your mix with unnecessary content that will either hurt fidelity due to digital encoding or broadcast algorithms, or be cast off anyway by the physical limitations of sound reproduction systems.

 

[Image: instrument frequency ranges against the piano keyboard]

Here is a graph of the frequency ranges of common instruments and their notes as shown on a piano.  What you’ll notice is that the range of a concert bass runs from ~90 Hz to ~350 Hz.  The absolute lowest note on the piano is around ~28 Hz, and that is a note you will likely never hit.  Practically all the action in musical instruments occurs between 60 and 5000 Hz.  Allowing for formants, harmonics, and other sonic phenomena outside the fundamental frequency of the note, it is safe to say that practically all usable and desirable sounds fall within 20-20k, and that range could even reasonably be made smaller.

 

In next week’s article I will examine these specific limitations and discuss why the low frequencies are the most problematic.


Yamaha NS-10s (Producer Speak)

Posted by Fix Your Mix on April 16, 2009

[Image: Yamaha NS-10]

In 1978 the Yamaha NS-10 first hit the home audio market. The speakers were originally designed for the consumer rather than the professional sphere. The only problem was that they sounded terrible, and no one wanted them for that purpose. They were often described as overly bright and harsh, and the frequency response was abysmal in the low end (criticisms which are founded and persist to this day). However, despite its audiophilic shortcomings, Fate found other uses for this Little-Speaker-That-Couldn’t.


As New Wave, punk, and other lo-fi genres began to take hold on the world, a DIY spirit took over and smaller, cheaper recording studios were created that catered to a clientele who didn’t necessarily place a premium on fidelity. Near-field monitoring became the fashionable choice for these studios because it minimized the effect of listening environment on the sound of a mix. This allowed bedrooms, basements, strip-malls and other ostensibly acoustically unsound venues to become mixing environments.


In these situations the NS-10’s weaknesses became strengths. Their lack of low-end capability meant that room nodes (standing waves in a listening environment that cause certain frequencies to be accentuated because of the geometry of the room) weren’t much of an issue, since these acoustic phenomena are largely confined to the lower frequencies. Furthermore, their use with the cheaper, lower-output amplifiers common in these smaller studios meant that the program output was lower; these volume levels are generally agreed to be the NS-10s’ most accurate operating range. And of course the price of a previously undesirable commodity was just right for small studios.


Over the course of the 1980s, the NS-10 became a mainstay of the recording studio. Their ubiquity, coupled with the fact that their poor sonic characteristics generally do not excite the individual characteristics of a listening environment, meant that the NS-10 could become a fairly universal reference. By and large, NS-10s were thought to sound reasonably similar in every listening environment, so mixing decisions made on them are adequately portable.


However, the NS-10 is only as useful as your familiarity with its sonic characteristics. A +7 dB peak at around 1500 Hz contributes to the audibility of some mid-range sounds such as the human voice and acoustic guitar. Operating without this knowledge may leave you with a weak vocal or acoustic guitar in the mix when you take your songs to other environs.


It is also very difficult to judge a mix’s low-lows on NS-10s. The speaker simply was not designed to reproduce those frequencies. If you aren’t aware of this, then you may find yourself pumping in a ton of low-end just so that the sub frequencies are audible, but if you took it to the club, you’d probably blow out the speakers with all that 808!


It is now agreed in most professional circles that NS-10s are an excellent reference at low volume levels and for gross judgments that do not involve the sub-frequencies. Armed with this knowledge, you’ll have a better understanding of how to use this omnipresent piece of gear, and knowing how to properly use a tool is the most important part of the audio world.
