© 2024 Fix Your Mix. All rights reserved.

The final section of the audible spectrum is the high-frequency or treble portion. Humans can theoretically hear up to 20 kHz (that is, newborn baby girls can theoretically hear up to 20 kHz at normal listening levels; for the rest of us, the ceiling is considerably lower). So what could be happening up in the 16,500 Hz range if no new instruments can sound there?


It contains almost nothing but the upper harmonics of treble instruments and room tone. These harmonics help solo instruments and vocals sound present and full, and they also add brightness and clarity to a mix.


Most telephones cut off around 3.5 kHz, yet you can still tell whose voice it is on the phone. This tells you that practically everything needed to understand and distinguish audio content lives below that frequency.


Pretty much only dog whistles operate up in this range, so there is no need to worry about fundamentals or lower-order harmonics getting in the way of whatever treatment you decide to apply.


Boosting in this range again helps with upper harmonics, and upper harmonics are important to our brains in calculating proximity. The closer we are to something, the more detail we can hear in its sound. Similarly, the quieter an environment is when a sound is made, the more apparent that sound seems to us. The upper harmonics of a sound are generally very soft and are the first things to go when we are far away from a sound source or when it sounds in a noisy environment. As such, the more upper-frequency detail we can hear, the closer our mind perceives the sound source to be. Furthermore, we perceive upper-harmonic detail as clarity and salience.


Many mastering engineers, as a final polish job, will use a very hi-fi shelving EQ and boost the frequencies from roughly 16 or 18 kHz up by about 3 or 4 dB.  The difference can be quite astonishing.
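For the curious, that final-polish shelf can be prototyped in a few lines. This is only a sketch built on the well-known RBJ Audio EQ Cookbook high-shelf formulas, not any particular mastering EQ; the 96 kHz sample rate and 16 kHz corner are illustrative choices of mine:

```python
import cmath
import math

def high_shelf(fs, f0, gain_db, S=1.0):
    """High-shelf biquad coefficients from the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    c, s = math.cos(w0), math.sin(w0)
    alpha = s / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    b = [A * ((A + 1) + (A - 1) * c + 2 * math.sqrt(A) * alpha),
         -2 * A * ((A - 1) + (A + 1) * c),
         A * ((A + 1) + (A - 1) * c - 2 * math.sqrt(A) * alpha)]
    a = [(A + 1) - (A - 1) * c + 2 * math.sqrt(A) * alpha,
         2 * ((A - 1) - (A + 1) * c),
         (A + 1) - (A - 1) * c - 2 * math.sqrt(A) * alpha]
    return b, a

def gain_db_at(b, a, fs, f):
    """Magnitude of the biquad's frequency response at f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# a 3 dB "air" shelf at 16 kHz, running at a 96 kHz sample rate
b, a = high_shelf(fs=96000, f0=16000, gain_db=3.0)
print(round(gain_db_at(b, a, 96000, 100), 2))    # essentially 0 dB far below the shelf
print(round(gain_db_at(b, a, 96000, 48000), 2))  # the full 3 dB at the top
```

The corner frequency sits mid-slope, so the full boost only arrives well above it, which is part of why these shelves read as subtle "air" rather than an obvious EQ move.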


This range also gives you airiness and some pleasant room sounds.  But boosting it can also have negative effects like hiss, piercing, and sibilance.


Sibilance, an overemphasis on frequencies ranging roughly from 6-8 kHz, is by far the most apparent and troublesome.  The best way to deal with it is with a de-esser rather than an EQ, so as not to sacrifice the harmonic content you like that isn't abrasive.  A de-esser is a frequency-dependent compressor: it compresses only a narrow bandwidth, usually somewhere between 4 and 9 kHz, to tame sibilance.  With the right controls, it can also be adjusted to work on cymbals or even hiss.
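To make the idea concrete, here is a deliberately crude frame-based sketch of a frequency-dependent compressor in Python with NumPy. Real de-essers use smooth envelope followers and crossover filters rather than blockwise FFTs, and the band edges, threshold, and ratio below are arbitrary illustrative values, not settings from any real unit:

```python
import numpy as np

def deess(x, sr, lo=4000.0, hi=9000.0, threshold=0.02, ratio=4.0, frame=512):
    """Toy frame-based de-esser: a compressor acting only on the lo-hi band."""
    out = np.asarray(x, dtype=float).copy()
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    band = (freqs >= lo) & (freqs <= hi)
    for start in range(0, len(out) - frame + 1, frame):
        spec = np.fft.rfft(out[start:start + frame])
        # rough level of the sibilant band in this frame
        level = np.sqrt(np.mean(np.abs(spec[band]) ** 2)) / frame
        if level > threshold:
            # pull the band back toward the threshold, ratio:1 above it
            spec[band] *= (threshold + (level - threshold) / ratio) / level
        out[start:start + frame] = np.fft.irfft(spec, frame)
    return out

# demo: a quiet 200 Hz "voice" plus an exaggerated 6 kHz "ess"
sr = 44100
t = np.arange(sr) / sr
sig = 0.1 * np.sin(2 * np.pi * 200 * t) + 0.8 * np.sin(2 * np.pi * 6000 * t)
tamed = deess(sig, sr)
```

The point of the sketch is the conditional: the 200 Hz content passes through untouched, while gain reduction kicks in only when the 4-9 kHz band gets loud, exactly the behavior that lets a de-esser tame an "ess" without dulling the rest of the vocal.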



Next week, I’ll examine some of the commonly used terms associated with high frequency content and that will wrap up my series on the Audible Spectrum.  I hope you’ve enjoyed it so far!

The high-mids run from 1.2 to 3.5 kHz, so we once again have a nice range of frequencies to play with.  They contain lots of harmonics, especially the lower- to mid-order harmonics of the mid-range instruments.  A span of 2,300 Hz gives you plenty of room to work with in carving out specific places for various instruments to sit.  Only the highest instruments can really play in this range: the piccolo can sound in this band, and it also accounts for the top octave or so of the piano, where, as most people know, not a whole lot happens.  So there are no new instruments sounding fundamentals to get in the way and cover up the harmonic content you are treating in this band.


Additionally, this range matters because it contains much of the sudden transient content.  Attack transients, sibilance, consonants, and more all live in this register, so it is very important for understandability as well as punchiness, presence, and dynamics.


The human voice is the most dynamic instrument on the planet.  When I work on TV and movies, I'm always amazed at how suddenly the voice changes dynamics.  Looking at the waveforms, the T, C, D, B, and other consonant sounds are so short and quiet while the vowel sounds are exponentially louder and longer.  This can be a problem when mixing music, because you might miss out on an initial or final consonant whose absence totally changes the meaning of a song.  I remember working on a Christian rock album where the line was "We know that we can't live without you."  When the mixing was done, they loved the track, but the "t" in "can't" had disappeared.  Of course, in a religious context you don't want to be saying that you know you can live without God, so we had to spend a little extra time making sure that "t" came across without being overbearing.


Just a crazy little factoid: almost all consonant sounds sound the same no matter who says them.  The majority of the time, you can fly in a "t" from one person, paste it in, and no one would know the difference.  You really only know what somebody sounds like from the vowel sounds.  Consonants are just air pushing against your lips, teeth, tongue, and mouth, and we are all roughly equal to each other in body composition, at least enough that the difference isn't immediately audible in most instances.


This is also the range where attack sounds live: picks strumming strings, sticks striking cymbals.  Giving those sounds a boost in this range can lead to a more present sound.  After all, your mind thinks you're closer to something the more detail you can hear of it.  So if you can hear a stick tapping a drumhead, by god you must be close to it.  We'll talk more about psychoacoustics and proximity in the next article.


Also, many big-time producers believe that this band contains the frequency, centered around 2 kHz, that makes digital sound abrasive and therefore worse than tape.  While that may or may not be the case, it can hardly be denied that harshness, edginess, and abrasiveness live in this frequency band.  Raucous, in-your-face sounds like screeching guitars and sailing synths need this range so that they can cut through and make your eardrums bleed.


Next week, I'll look at some common terms for upper-mid range problems and some common solutions.

Earlier I defined the mid frequencies as the ones between 600 and 1200 Hz.  These contain the higher portions of the harmonies, higher melodies, and a whole bunch of harmonics.


For most of music history, solo singers who could sing very high were coveted.  Coloratura sopranos and castrato singers were great assets because their voices could soar audibly above the rest of the orchestration.  Their vocals pierce because they sit above the normal range of the rest of the instruments.  This frequency bandwidth aligns pretty well with the upper reaches of the soprano voice and the high-flying notes of '80s lead guitar.


Now, accompaniment instruments such as guitar and piano might also play in this register in band situations; however, in this range the emphasis tends to be on notes other than the root or melody.  This allows the soloist or lead instrument to have the spotlight in this frequency band.


This is also the register where the frequency bandwidth starts to widen.  Previously, we were dealing with relatively small increments between notes and registers, but here we have a gamut of 600 Hz as opposed to the low-mids' 300 Hz.  This allows much more room to play with sonics using EQs, harmonic exciters, and other effects, which is great because this section houses most of the lower-order harmonics above the fundamental.


As mentioned in the primer, harmonics help us distinguish one instrument from another.  Even harmonics give a warmer, organic, and natural sound while odd harmonics impart a more harsh and metallic sound.  Smooth guitars through tube amplifiers have rich even harmonics while harsh distorted heavy metal guitars have more odd harmonic content.  Brass instruments have more of an emphasis on odd harmonics while strings have more even harmonics.


So now, going back to bass instruments like the kick and bass guitar, another good way to distinguish them from each other is to treat their harmonics in this range differently.  This range is better for that kind of treatment because it keeps the changes intended for emphasis out of a frequency band with a lot of build-up, like the low-mids.  This range contains mostly harmonics and solo instruments, so there isn't a lot to get in the way of hearing these subtle alterations, and the harmonics are still low enough to be significant to the fundamental sound.


So if we have a bass guitar playing mostly root notes down in the key of A, we know the bass is playing notes with fundamentals in the range of 55-110 Hz.  This would mean second harmonics from 110 to 220 Hz and fourth harmonics from 220 to 440 Hz.  These are great to try to treat, especially if you are dealing with sparse mixes, but they aren't really helpful in densely orchestrated tunes because other instruments will be taking up those frequency bands.  The next octave of harmonics runs from 440 to 880 Hz.  Those harmonics fall in this frequency range, so a nice wide EQ centered at 660 Hz with a subtle boost might give the bass the audibility you need, and it would be nice and smooth since it emphasizes even harmonics (the fourth and eighth).  You could also try to emphasize the next batch of harmonics, from 880 to 1760 Hz.  That would put the center right around 1.3 kHz, just past the top of our range, and it would impart a harsher, more aggressive tone.
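The octave-by-octave arithmetic in that example is easy to reproduce. A tiny sketch, assuming as above a bass playing root notes between A1 (55 Hz) and A2 (110 Hz), and tracking only the octave harmonics (2nd, 4th, 8th):

```python
# Hypothetical bass part from the example: root notes between A1 and A2.
low, high = 55.0, 110.0   # fundamental range in Hz

def harmonic_band(n, lo=low, hi=high):
    """Span covered by the n-th harmonic of every note in the fundamental range."""
    return n * lo, n * hi

# each octave harmonic doubles the band
for n in (2, 4, 8):
    band_lo, band_hi = harmonic_band(n)
    center = (band_lo + band_hi) / 2
    print(f"harmonic {n}: {band_lo:.0f}-{band_hi:.0f} Hz, center {center:.0f} Hz")
```

The 440-880 Hz band is the one that reaches into the 600-1200 Hz mid range, and its center lands right at the 660 Hz figure used in the example.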


You might de-emphasize those frequencies in the kick drum, or even choose to emphasize frequencies that fall on the outskirts of the bass guitar's harmonic ranges.  If you find that the bass rarely sounds harmonics around 900 Hz, that would be a perfect place to emphasize the kick drum and maybe carve out the bass.  All you need is a little spot in the mix for your ear to key on, and you've got audibility.


Now, the bass guitar I've used in my example is one to two octaves below the other, non-bass instruments in my hypothetical mix.  That means that treating those instruments in the mid-range will emphasize lower-order harmonics and can really alter the instruments' sounds.  But you do have a nice wide range to work with, so treating each instrument individually with a different portion of the bandwidth for emphasis can benefit audibility.  Plus, this is where many of the sounds intrinsic to specific instruments live, so emphasizing the frequency band that makes a trumpet really sound like a trumpet can help keep it audible while preventing it from overtaking the lead vocal.


One thing I want to emphasize here is that most instruments play a range of notes, not just one note like a kick drum.  In the bass guitar example, you saw how wide a range a bass guitar's frequency content can cover from playing in just one octave.  I didn't give any specifics about the tune other than the key; we don't know how often it plays which notes, we just know the key.  Many experts and magazines like to give you helpful frequencies to try when mixing.  Bear in mind that these are only guidelines and could not possibly be a one-stop fix for all mixing needs.  If somebody tells you to cut 450 Hz in every instance to make a mix better, it would really be a shame for songs in the key of A, whose mid-range instruments would get de-emphasized every time they play the root note…

As previously defined, the low-mid portion of the audible spectrum runs from about 300 Hz to 600 Hz and contains mostly the fundamental frequencies of non-bass instruments.  This is the comfortable middle range for vocalists, the standard range for guitars, horns, strings, and other instruments.


It is also the range where the first few harmonics of the lower-frequency instruments sound and give character to those instruments.  In sparser mixes, these upper frequencies can be altered to help separate the bass from the kick and so on.  However, this is also where a lot of build-up occurs due to orchestration, so don't bank on these frequencies bailing you out of bass problems in a dense mix.  I'll speak more at length about harmonics and how they can help you in the next article, on the mid frequencies.


For the voice, most of the power and audibility comes from this range, since it contains the distinct vowel sounds which vocalists latch on to.  While this is an important range in dialogue and speech, it is also vitally important in music, since vowels are what allow singers to elongate words.  Think about it: when you want to hold out a syllable, it is almost always the vowel sound that gets held.  It's pretty difficult to lengthen a P or D sound, and holding out an S just sounds sibilant.  So for clear vocals, it is pretty important not to muck up this frequency band.


This is easier said than done.  A lot of indie rock musicians have problems with this range.  Being a self-professed indie rock snob, I say this without any intended slight:  most indie rockers are not necessarily the most virtuosic musicians.  You can hear it in Caleb Followill’s vocals and Nick Drake’s guitar playing and Meg White’s drumming.  It isn’t that they are bad or they don’t write good music.  I love their music and they get the point across.  Let’s just say they aren’t necessarily in the realm of Yo Yo Ma or Mozart.


The truth is that most musicians who don't perform a bunch of acrobatics like to stay squarely in this "comfortable" range when playing, and that can really cloud the mid-range of a song.  If an untrained keyboard player lays down a keyboard track, chances are they'll circle middle C.  Weaker vocalists tend to stick to this comfortable range as well, as will guitar players and trumpeters and string players, etc.


That's another reason why solo musicians doing all the tracking themselves at home can struggle with their mixes.  They know that the bass is played way down at that end of the MIDI controller and everything else kind of sits in this middle range.  If you are using MIDI for everything, then, like most people, you will probably play all your MIDI instruments in the same way.


Being in studios for so long, you start to develop a knack for feeling out musicians.  Horn players behave like horn players and therefore sound like horn players when they play.  Drummers behave like drummers and usually sound like drummers when they play.  Singers and string players and harmonica players and everybody else roughly act in the same manner and have a certain personality that is evident in their playing.  If a horn player is programming all the different instruments on a MIDI keyboard, he might find himself in a rut because all the instruments end up playing parts the way a horn player would instead of all these different personalities bouncing off each other.


The upshot is that if you are doing everything yourself at home and you aren’t well versed in orchestration or how certain instruments sound and play and how they do that in relation to other instruments, you might end up with a big pile of mid-range instrumentation that obscures the vocal as well as the other instruments.


So it is important to bear this in mind while writing, and to try to compartmentalize the various parts into different portions of the frequency range so that they don't interfere with each other.  Keep the horns high and the guitar low and the vocal all by its lonesome.


Of course this isn't always possible, so to address it you might emphasize certain frequencies in this band in some instruments and not in others.  For instance, if a guitar is playing rhythm chords and a piano is chugging along as well, you might boost the guitar at 450 Hz and do the opposite to the piano.  This doesn't need to be a drastic EQ, just enough to relegate each instrument to its own portion of the range.
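A complementary boost/cut pair like this is easy to model with the standard RBJ Audio EQ Cookbook peaking filter. This is an illustrative sketch, not a recipe from the article; the 44.1 kHz sample rate and Q of 1 are arbitrary choices of mine:

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Peaking-EQ biquad coefficients from the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    c = math.cos(w0)
    b = [1 + alpha * A, -2 * c, 1 - alpha * A]
    a = [1 + alpha / A, -2 * c, 1 - alpha / A]
    return b, a

def gain_db_at(b, a, fs, f):
    """Magnitude of the biquad's frequency response at f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# boost the guitar +3 dB at 450 Hz, cut the piano -3 dB at the same spot
boost = peaking_eq(44100, 450, +3.0)
cut = peaking_eq(44100, 450, -3.0)
print(round(gain_db_at(*boost, 44100, 450), 2))  # +3.0 at the center
print(round(gain_db_at(*cut, 44100, 450), 2))    # -3.0 at the center
```

With these formulas the cut is the exact reciprocal of the boost, so the two curves mirror each other; applied to different instruments, each one gets its own little pocket around 450 Hz.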


Much like in the bass, we are dealing with a limited range of frequencies here, so you might also want to try treating the upper harmonics, which will give you much more room to play with.  These come into play in the next portion of the audio spectrum: the mid frequencies.

As previously mentioned, the bass portion of the audible spectrum runs from 20 Hz to about 300 Hz.  Setting aside the previously discussed sub-bass portion of this frequency band (frequencies 45 Hz and below), we can say that the bass portion of the spectrum should be reserved primarily for the fundamental frequencies of the roots of the chord changes in the song insofar as tonal content is concerned.  Of course this range should also incorporate low frequency sounds such as kick drums, toms, and even room tones.


Many of the biggest problems people encounter in tracking, mixing, and mastering occur squarely in this region.  Terms like muddy, boomy, and woofy all deal explicitly with the bass region.  We all want “big bass” with lots of thunderous kick drums and thumpin’ bass lines, but unfortunately the arithmetic is not so simple as “turn them all up.”  As many of you following along at home might have already experienced, turning up all the bass instruments in your mix is a recipe for a muddy, distorted mess.


So how do we properly address these issues to get a decent sounding mix?  Well, first we need to take note of the frequency band that encompasses the bass portion and see how it compares to the other bandwidths:


Bass: 25-300 Hz

Treble: 2.4-20 kHz


Look at that again.  That says that the bass range has a bandwidth of about 275 Hz while the treble range has a bandwidth of almost 18,000 Hz!  No wonder we run into problems of indistinct bass but not indistinct top end.


In composition, there is something called the Lower Interval Limit.  This is a commonly held set of rules that says, based on the frequency of the lower note, how big an interval must be in order to sound clear and distinct.  For instance, if we were to use the 440 Hz A as our base note and play the C above it to form a harmonic interval of a minor third, we'd have a difference in frequencies of 83.25 cycles per second (C5 is 523.25 Hz, so 523.25 - 440 = 83.25 Hz).  This is a difference our ears can distinguish without hesitation, and we perceive it as a pleasant, albeit sad, sonority.


Now imagine that we started with the A four octaves down.  This A has a fundamental frequency of 27.5 Hz.  The minor third above it is a C with a fundamental frequency of 32.70 Hz, a much tougher-to-distinguish difference of only 5.2 Hz.


Furthermore, the difference between that A and its next closest upper neighbor, A#, is only 1.64 Hz.  So even in a melodic context, it can sometimes be difficult to properly distinguish the two notes.
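All three gaps follow directly from equal temperament, where each semitone multiplies frequency by 2^(1/12). A quick check, taking A4 = 440 Hz as the reference:

```python
import math

A4 = 440.0  # reference pitch

def note_freq(semitones_from_a4):
    """Equal-tempered frequency a given number of semitones away from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# minor third above A4 (C5 is three semitones up)
print(round(note_freq(3) - note_freq(0), 2))         # the 83.25 Hz gap
# the same minor third built on A0, four octaves down
print(round(note_freq(3 - 48) - note_freq(-48), 2))  # the 5.2 Hz gap
# a single semitone, A0 up to A#0
print(round(note_freq(1 - 48) - note_freq(-48), 2))  # the 1.64 Hz gap
```

Same interval, same ratio, but the absolute gap in hertz shrinks by a factor of sixteen over four octaves, which is exactly why the low end blurs so easily.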


As an aside, here is a handy-dandy list of the lowest notes generally accepted in order to have a properly sounding interval.  Bear in mind that these are only commonly held compositional standards and are free to be broken at any time:



Interval            Lowest Pitch        Second Pitch
Minor Second
Major Second
Minor Third
Major Third
Perfect Fourth
Diminished Fifth
Perfect Fifth
Minor Sixth
Major Sixth
Minor Seventh
Major Seventh


The first column is the desired interval.  The second column is the lowest note from which you can build the desired interval.  The third column is the corresponding note needed above the lowest pitch to complete the desired interval.


In my mind, muddiness occurs when too much bass information sounds simultaneously, creating a big mess of sounds too close together in frequency content.  This contributes to a washy, indistinct bass.


Generally speaking, the most common problem is figuring out how to separate the kick drum from the bass.  It is important to remember that even though the kick drum is often regarded as an atonal instrument, it still produces tonal frequencies and especially distinct fundamentals.  So if the bass and the kick drum are sounding in roughly the same range, our ears will be unable to distinguish the two sonorities.


One way to address this is by making sure that each instrument emphasizes different portions of the bass frequency band.  Ideally, these portions would follow the Lower Interval Limit.  For example, if the kick drum is tuned so that its fundamental sounds at about 60 Hz (which is roughly a B1), the bass should play no lower than D#2.  This way the fundamentals adhere to the lower interval limit theory and are reasonably sure to be clear and distinct sounds.
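Mapping a measured kick fundamental to its nearest note is a quick calculation. A sketch, assuming equal temperament with A4 = 440 Hz (the function name and layout are mine, not from any particular tool):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq, a4=440.0):
    """Name of the equal-tempered note closest to freq (A4 = 440 Hz)."""
    semitones = round(12 * math.log2(freq / a4))
    midi = 69 + semitones              # MIDI numbering: A4 = 69
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

print(nearest_note(60.0))   # a kick tuned near 60 Hz reads as B1
```

Once you know the kick sits on B1, picking the bass's floor a clear interval above it is just a matter of counting semitones up from there.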


While the Lower Interval Limit is not explicitly intended for this purpose, and is usually meant for harmonic intervals (notes sounding at the same time, generally on the same instrument but not necessarily), the point is that creating distinguishable sonorities is all about being able to distinctly hear differences between sounds.  We want to refrain from confusing our ears with sounds so close together that they muddle distinction.


This will obviously not solve every problem.  If you've ever seen a frequency analysis of a kick drum, you know that its spectral content is very broad and not relegated simply to its fundamental frequency.  As such, it is beneficial to deal with frequencies beyond the fundamental as well; however, most of that will be dealt with as we move up the audible spectrum into the mid ranges.  For now the focus is on what we can do specifically in the bass register to prevent problems.


That aside, there is always a big collection of frequencies sounding in the bass register of a kick drum.  This is due to the many resonances that aren't perfectly in tune: the beater head, the shell, the resonant head, not to mention all the nodes between the lugs on the head, which yield very dense and complex waveforms.  These frequencies can be so broad that they encompass a very large portion of the bass register and make the aforementioned solution pretty much impossible.


One way to address this strictly in the bass register is to EQ out whole sections of the kick drum to create space for the bass guitar.  If you know the key of the song, you can determine the lowest note the bass player might play.  Your job then is to carve out a nice chunk of the kick drum sound, covering not only the register where the bass plays but also below it, to create enough space that the two sounds are distinguishable from each other by the Lower Interval Limit.


Another aside:  doing these things may seem drastic, but bear in mind that your ultimate goal is NOT to create the best kick drum sound possible and the best bass guitar sound possible and add them together.  Instead, your goal is to create the best sounds for each instrument that work together so that they sound good together in the mix as a whole.


Another issue crops up when a whole bunch of overdubs play in the same area.  It is difficult in the bass range for a bass guitar to sound distinct from a bass synth, and then to have both of them stand out from the kick, because you only have about 225 Hz to work with.  Layering overdubs can lead to muddling if you do too much of it in the bass region.  One example that comes immediately to mind is an 808 kick drum, and trying to make it audible against a bed of kick drum and bass guitar.  The easy thing about 808s is that they are basically sine waves, so it is easy to determine the note the 808 is sounding and game-plan the kick and bass around it.
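Because an 808 is close to a pure sine, its note can be read straight off an FFT peak. A minimal sketch with NumPy; the 55 Hz tuning and 48 kHz sample rate are hypothetical choices for the demo:

```python
import numpy as np

def dominant_freq(x, sr):
    """Estimate a near-sinusoidal signal's frequency from its largest FFT bin."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1.0 / sr)[np.argmax(spectrum)]

# one second of a hypothetical 808 tuned to A1 (55 Hz)
sr = 48000
t = np.arange(sr) / sr
eight_oh_eight = np.sin(2 * np.pi * 55.0 * t)
print(dominant_freq(eight_oh_eight, sr))   # 55.0
```

With the 808's note in hand, you can plan the kick's tuning and the bass part around it rather than guessing by ear.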


But for more dense things like synthesizers, it can be more problematic.  As I mentioned in the harmonics primer, there are a whole bunch of other frequencies that sound at any given time from any instrument.  Some of these occur in the harmonic series, but many others like formants and native resonances do not.  Sometimes these can occur in clusters around the fundamental note and when that occurs, the extracurricular frequencies from the bass and the synth and kick all roll up together and make a big muddy mess.


The best way to address this is to avoid excessive overdubs in the bass register.  Another way to deal with it is to find what you can change easily, like the kick drum (since it is static), and treat it in a way that keeps it out of the way of the bass and other instruments.  For instance, you might EQ to emphasize a fundamental below the key of the song, and then EQ out portions from the root up about an octave and a half to keep the kick out of the way of the bass and other bass instruments.


Next week, I’ll post some common terms associated with bass problems with some quick tips on how to address them.


Then, I’ll examine some issues in the mid-range and further delve into how to mitigate problems associated with the bass as well as those unique to the mid-range.

