
Archive for the ‘Production’ Category

The final section of the audible spectrum is the high frequency, or treble, portion.  Humans are theoretically able to hear up to 20 kHz (that is, newborn baby girls can theoretically hear up to 20 kHz at normal listening levels; for the rest of us, it’s considerably lower).  So what could be happening up around 16,500 Hz if no new instruments can sound there?

 

It contains almost nothing but the upper harmonics of treble instruments and room tone.  This helps solo instruments and vocals sound present and full, and it also adds brightness and clarity to a mix.

 

Most telephones cut off around 3.5 kHz, yet you can still tell whose voice it is on the phone.  This tells you that practically everything needed to understand and distinguish audio content lives below that range.

 

Pretty much only dog whistles operate in this range, so there is absolutely no need to worry about fundamentals or lower order harmonics getting in the way of any treatment you decide to apply.

 

Boosting in this range again helps with upper harmonics and upper harmonics are important to our brains in calculating proximity.  The closer we are to something, the more detail we can hear in the sound.  Similarly, the quieter an environment is when a sound is made, the more apparent that sound seems to us.  The upper harmonics of a sound are generally very soft and are the first things to go when we are either far away from a sound source or it is sounding in a noisy environment.  As such, the more upper frequency detail we can hear, the closer our mind perceives the sound source to be.  Furthermore, we perceive upper harmonic detail as clarity and salience.

 

Many mastering engineers, as a final polish job, will use a very hi-fi shelving EQ and boost the frequencies from roughly 16 or 18 KHz up about 3 or 4 dB.  The difference can be quite astonishing.
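As a sketch of what that final polish can look like in code, here is the standard RBJ “Audio EQ Cookbook” high-shelf biquad.  The 16 kHz corner, +3 dB gain, and 96 kHz sample rate below are illustrative defaults of mine, not a mastering prescription:

```python
import math

def high_shelf_coeffs(fs, f0=16000.0, gain_db=3.0, slope=1.0):
    """High-shelf biquad coefficients per the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cosw, sinw = math.cos(w0), math.sin(w0)
    alpha = sinw / 2 * math.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    b0 = A * ((A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * cosw)
    b2 = A * ((A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha
    a1 = 2 * ((A - 1) - (A + 1) * cosw)
    a2 = (A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha
    # Normalize so the leading feedback coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]
```

The returned b and a lists can be fed to any biquad or filter routine; the shelf leaves the low end at unity gain and lifts everything above the corner toward the target gain.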

 

This range also gives you airiness and some pleasant room sounds.  But boosting this range can also have negative effects like hissing, piercing, and sibilance.

 

Sibilance, an overemphasis on frequencies ranging roughly from 6-8 kHz, is by far the most apparent and troublesome.  The best way to deal with it is with a de-esser rather than an EQ, so as not to sacrifice the harmonic content you like that isn’t abrasive.  A de-esser is a frequency-dependent compressor:  it compresses only a narrow bandwidth, usually between 4 and 9 kHz, to tame sibilance.  With the right controls, it can also be adjusted to work on cymbals or even hiss.
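To make the idea concrete, here is a toy numpy sketch of that frequency-dependent compression.  The function name, block size, 4-9 kHz band, threshold, and ratio are all my own illustrative choices, not any particular plugin’s:

```python
import numpy as np

def deess(signal, fs, band=(4000, 9000), threshold=0.02, ratio=4.0, block=512):
    """Compress only the sibilance band, block by block, leaving the rest alone."""
    out = np.asarray(signal, dtype=float).copy()
    for start in range(0, len(out) - block + 1, block):
        spectrum = np.fft.rfft(out[start:start + block])
        freqs = np.fft.rfftfreq(block, 1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        # Rough level estimate for just the sibilant band
        band_level = np.sqrt(np.mean(np.abs(spectrum[mask]) ** 2)) / block
        if band_level > threshold:
            over = band_level / threshold
            gain = over ** (1.0 / ratio - 1.0)  # < 1, i.e. gain reduction
            spectrum[mask] *= gain              # tame only that band
            out[start:start + block] = np.fft.irfft(spectrum, block)
    return out
```

A real de-esser uses overlapping windows and smooth attack/release rather than hard blocks, but the core move is the same:  detect the level in a narrow band, and turn down only that band.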

 

 

Next week, I’ll examine some of the commonly used terms associated with high frequency content and that will wrap up my series on the Audible Spectrum.  I hope you’ve enjoyed it so far!

Common terms in the Upper-Mid range, part 12

Posted by Fix Your Mix on July 16, 2009

Bright:  Emphasis on high-frequencies, specifically upper-mids with emphasis on harmonics.

 

Crunchy:  Exists between 2K and 4K, typically distortion based and generally pleasant.  Can lend rhythmic distinction to distorted rhythm parts.

 

Detailed:  Minutiae of the music are easily audible.  Present sounding, intimate and close with lots of articulation and transient response throughout the upper mid range.

 

Forward:  Present, in your face.  Detail present in transients and upper harmonics which lends a feeling of proximity.

 

Glassy:  Brittle sounding, too much upper-mid content especially with regard to harmonics in relation to fundamentals. 

 

Grungy:  Lots of distortion with emphasis on odd harmonics.

 

Hard:  Excellent transient response combined with an overemphasis on upper-midrange frequencies.

 

Harsh:  Peaking in the 2-6 KHz range.

 

Metallic:  Emphasis on upper-mid range frequencies, specifically those that deal with odd order harmonics in this range.

 

Pinched:  Narrow-bandwidth, often relegated to the upper-mid range frequencies.  Try boosting lower frequencies to balance.

Consisting of frequencies from 1.2 to 3.5 kHz, we once again have a nice range of frequencies to play with.  The high-mids contain lots of harmonics, especially the lower to mid order harmonics of the mid-range instruments.  This 2,300 Hz span gives you plenty of room to work with in carving out specific places for various instruments to sit.  Only the highest instruments can really play in this range:  the piccolo can sound in this band, and it also accounts for the top octave or so of the piano, where, as most people know, not a whole lot happens.  So there are no new instruments sounding fundamentals to get in the way and cover up the harmonic content that you are treating in this band.

 

Additionally, this range is very important because it contains much of the sudden transient content.  Attack transients, sibilance, consonants and more all live in this register, so it is very important for understandability as well as punchiness, presence, and dynamics. 

 

The human voice is the most dynamic instrument on the planet.  When I work on TV and movies, I’m always amazed at how suddenly the voice changes dynamics.  Looking at the waveforms, the T, C, D, B and other consonant sounds are so short and quiet while the vowel sounds are exponentially louder and longer.  This can be a problem when mixing music, because you might miss out on an initial or final consonant sound whose absence totally changes the meaning of a song.  I remember working on a Christian rock album where the line was “We know that we can’t live without you.”  When the mixing was done, they loved the track, but the “t” in “can’t” had disappeared.  Of course, in a religious context you don’t want to be saying that you know you can live without God, so we had to spend a little extra time making sure that came across without being overbearing.

 

Just a crazy little factoid:  almost all consonant sounds sound the same no matter who says them.  The majority of the time, you can fly in a “t” from one person, paste it in, and no one would know the difference.  You really only know from the vowel sounds what somebody sounds like.  Consonants are just air pushing against your lips, teeth, tongue, and mouth, and we are all roughly equal to each other in body composition, at least enough that it isn’t immediately audible in most instances.

 

This is also the range where attack sounds live:  picks strumming strings, sticks striking cymbals.  Giving a boost to those sounds in this range can lead to a more present sound.  After all, your mind thinks you’re closer to something the more detail you can hear of it.  So if you can hear a stick tapping a drumhead, by god you must be close to it.  We’ll talk more about psychoacoustics and proximity in the next article.

 

Also, many big time producers believe that this band contains the frequency that makes digital sound abrasive, and therefore worse than tape; it is said to center around 2 kHz.  While this may or may not be the case, it can hardly be disputed that harshness, edginess, and abrasiveness live in this frequency band.  Raucous, in-your-face sounds like screeching guitars and sailing synths need this range so that they can cut and make your eardrums bleed.

 

Next week, I’ll look at some common terms for upper-mid range problems and some common solutions.

Common Mid-Range Terms, part 10

Posted by Fix Your Mix on July 2, 2009

Honky:  When you cup your hands and sing into them, that is pretty much what honkiness is.  This is a frequency buildup around 500-700 Hz, so cut in that area or boost the lows.

 

Nasal:  Like when you pinch your nose and speak.  This is very similar to honky except that it is a bit higher around 800-1000 Hz.

 

Radio-Filter:  The most overused pop cliché out there.  I wish everyone would stop doing this, but to do it properly you should know:  old radios had small speakers, which meant poor bass response and sometimes weak highs as well.  They were also poorly constructed, which meant limited dynamic range.  So use high- and low-pass filters centered around 1 kHz.  Most of the effect will be accomplished by the high-pass filter; the low-pass filter can be adjusted to taste.  Compress heavily to limit dynamic range.
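Here is a quick numpy sketch of that recipe.  Brick-wall FFT filtering stands in for the high/low-pass EQ, and a tanh waveshaper stands in for the heavy compression; the 700 Hz and 3 kHz corners around the 1 kHz center are my own starting points, adjust to taste:

```python
import numpy as np

def radio_filter(signal, fs, low_cut=700.0, high_cut=3000.0, drive=4.0):
    """Crude old-radio effect: narrow mid band plus squashed dynamics."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # High-pass + low-pass: keep only a narrow band around 1 kHz
    spectrum[(freqs < low_cut) | (freqs > high_cut)] = 0.0
    bandlimited = np.fft.irfft(spectrum, len(signal))
    # Heavy "compression": tanh soft clipping squashes the dynamic range
    return np.tanh(drive * bandlimited) / np.tanh(drive)
```

A real chain would use gentler filter slopes and an actual compressor, but this gets the cliché across in a few lines.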

 

Tinny:  Sounds like it’s coming through a tin can.  To me this also indicates peaky mids which would be a significant bump at around 1 KHz.  Perhaps some high-end reverberations of a metallic variety.  Can be remedied by boosting lows.

Earlier I defined the mid frequencies as the ones between 600 and 1200 Hz.  These would contain higher portions of the harmonies, higher melodies, and a whole bunch of harmonics.

 

For most of music history, solo singers who could sing very high were coveted.  Coloratura sopranos and castrati were great assets because their voices could soar audibly above the rest of the orchestration.  Their vocals pierce because they sit above the normal range of the other instruments.  This frequency bandwidth aligns pretty well with the upper reaches of the soprano voice and the high-flying notes of 80s lead guitar.

 

Now accompaniment instruments such as guitar and piano might also play in this register in band situations, however in this range the emphasis tends to be on notes other than the root or melody.  This allows the soloist or lead instrument to have the spotlight in this frequency band.

 

This is also the register where the bandwidth starts to open up.  Previously, we were dealing with relatively small increments between notes and registers, but here we have a gamut of 600 Hz, as opposed to the low-mids, which spanned only 300 Hz.  This allows much more room to play with sonics using EQs, harmonic exciters, and other effects, which is great because this section houses most of the lower order harmonics other than the fundamental.

 

As mentioned in the primer, harmonics help us distinguish one instrument from another.  Even harmonics give a warmer, organic, and natural sound while odd harmonics impart a more harsh and metallic sound.  Smooth guitars through tube amplifiers have rich even harmonics while harsh distorted heavy metal guitars have more odd harmonic content.  Brass instruments have more of an emphasis on odd harmonics while strings have more even harmonics.

 

So now going back to bass instruments like the kick and bass guitar, another good way to distinguish them from each other is by treating their harmonics in this range differently.  This range is better for this kind of treatment because it avoids putting the changes intended for emphasis in the frequency band with a lot of build-up like the low mids.  This range contains mostly harmonics and solo instruments, so there isn’t a lot to get in the way of hearing these subtle alterations and they are still low enough to be significant to the fundamental sound.

 

So if we have a bass guitar playing mostly root notes down low in the key of A, we know the bass is playing notes in the frequency range of 55-110 Hz.  Doubling up by octaves, that puts harmonic content from 110 to 220 Hz and again from 220 to 440 Hz.  These are great to treat, especially if you are dealing with sparse mixes, but they aren’t much help in densely orchestrated tunes because other instruments will be taking up those frequency bands.  The next octave of harmonics runs from 440 to 880 Hz.  Those fall within this frequency range, so a nice wide EQ centered at 660 Hz with a subtle boost might give the bass the audibility you need, and it stays smooth since it emphasizes the even, octave-related harmonics.  You could also try emphasizing the next batch, from 880 to 1760 Hz.  That puts the center right at 1.2 kHz, the top of our range, and imparts a harsher, more aggressive tone.
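The octave-doubling arithmetic above is easy to script.  This little helper (its name is my own, just for illustration) reproduces the bands for a bass playing roots in A:

```python
def doubled_bands(low_hz, high_hz, count=4):
    """List the successive octave-doubled bands above a fundamental range."""
    bands = []
    lo, hi = float(low_hz), float(high_hz)
    for _ in range(count):
        lo, hi = lo * 2, hi * 2
        bands.append((lo, hi))
    return bands

# Bass roots in A span 55-110 Hz; each doubling lands an octave higher
print(doubled_bands(55, 110))
```

The third band, 440-880 Hz, is the one whose center sits at 660 Hz, and the fourth, 880-1760 Hz, centers at 1.32 kHz near the top of the mid range.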

 

You might de-emphasize those frequencies in the kick drum or even choose to emphasize frequencies that fall on the outskirts of bass guitar’s harmonic ranges.  If you find that the bass rarely sounds harmonics in the 900 range, it’d be a perfect place to emphasize the kick drum and maybe carve out the bass there.  All you need is a little spot in the mix for your ear to key on and you’ve got audibility.

 

Now, the bass guitar I’ve used in my example is 1-2 octaves below the other, non-bass instruments in my hypothetical mix.  That means that treating these instruments in the mid-range will emphasize lower order harmonics and can really alter the instruments’ sounds.  But you do have a nice wide range to work with, so treating each instrument individually, with a different portion of the bandwidth emphasized for each, can benefit audibility.  Plus, this is where many of the sounds intrinsic to specific instruments exist, so emphasizing the frequency band that makes a trumpet really sound like a trumpet can help keep it audible while preventing it from overtaking the lead vocal.

 

One thing I want to emphasize here is that most instruments play a range of notes, not just one note like a kick drum.  In the bass guitar example, you saw how wide a frequency range a bass guitar can cover just playing in one octave.  I didn’t give any specifics about the tune other than the key; we don’t know which notes it plays or how often.  Many experts and magazines like to give you helpful frequencies to try when mixing.  Bear in mind that these are only guidelines and could not possibly be a one-stop fix for all mixing needs.  If somebody tells you to cut 450 Hz in every instance to make a mix better, it would really be a shame for songs in the key of A, whose mid-range instruments would be de-emphasized every time they play the root note…

As previously defined, the low-mid portion of the audible spectrum runs from about 300 Hz to 600 Hz and contains mostly the fundamental frequencies of non-bass instruments.  This is the comfortable middle range for vocalists, the standard range for guitars, horns, strings, and other instruments.

 

It is also the range where the first few harmonics of the lower frequency instruments sound and give character to those instruments.  In more sparse mixes, these upper frequencies can be altered to help separate the bass from the kick and so on.  However, this is also where a lot of build-up occurs due to orchestration, so don’t bank on these frequencies bailing you out of bass problems in a dense mix.  I’ll speak more at length about harmonics and how they can help you in the next article, on the mid frequencies.

 

For the voice, most of the power and audibility comes in this range since it is the portion that contains the distinct vowel sounds which vocalists latch on to.  While this is an important range in dialogue and speech, it is also vitally important in music since vowels are what allow singers to elongate words.  Think about it, when you want to hold out a syllable, it is almost always the vowel sound that is held out.  It’s pretty difficult to lengthen a P or D sound.  Holding out an S just sounds sibilant.  So for clear vocals, it is pretty important not to muck up this frequency band.

 

This is easier said than done.  A lot of indie rock musicians have problems with this range.  Being a self-professed indie rock snob, I say this without any intended slight:  most indie rockers are not necessarily the most virtuosic musicians.  You can hear it in Caleb Followill’s vocals and Nick Drake’s guitar playing and Meg White’s drumming.  It isn’t that they are bad or they don’t write good music.  I love their music and they get the point across.  Let’s just say they aren’t necessarily in the realm of Yo Yo Ma or Mozart.

 

The truth is that most musicians who don’t perform a bunch of acrobatics like to stay squarely in this “comfortable” range when playing, and that can really cloud the mid-range in a song.  If an untrained keyboard player lays down a keyboard track, chances are they’ll circle middle C.  Weaker vocalists might also stick to this comfortable range, as will guitar players and trumpeters and string players, etc.

 

That’s another reason why solo musicians doing all the tracking themselves at home can struggle with their mixes.  They know that the bass is played way down on that end of the MIDI controller and everything else kind of sits in this middle range.  If you are using MIDI for everything, you will tend to play all your MIDI instruments the same way.

 

Being in studios for so long, you start to develop a knack for feeling out musicians.  Horn players behave like horn players and therefore sound like horn players when they play.  Drummers behave like drummers and usually sound like drummers when they play.  Singers, string players, harmonica players, and everybody else act in roughly the same manner and have a certain personality that is evident in their playing.  If a horn player is programming all the different instruments on a MIDI keyboard, he might find himself in a rut because all the instruments are playing parts the way a horn player would, instead of all these different personalities bouncing off each other.

 

The upshot is that if you are doing everything yourself at home and you aren’t well versed in orchestration or how certain instruments sound and play and how they do that in relation to other instruments, you might end up with a big pile of mid-range instrumentation that obscures the vocal as well as the other instruments.

 

So it is important to bear this in mind while writing and try to compartmentalize various parts to certain parts of the frequency range so that they don’t interfere with each other.  Keep the horns high and the guitar low and the vocal all by its lonesome.

 

Of course this isn’t always possible, so to address it you might emphasize certain frequencies in this band in some instruments and not in others.  For instance if a guitar is playing rhythm chords and a piano is chucking along as well, you might boost the guitar at 450 and then do the opposite in the piano.  This doesn’t need to be a drastic EQ, just enough to relegate each instrument to a certain portion of the range.

 

Much like the bass, we are dealing with a limited range of frequencies to do that with, so you might also want to try treating the upper harmonics which will give you much more room to play with.  These will come into play in the next portion of the audio spectrum:  the mid-frequencies.

The audio world can be a frustrating one for many reasons.  From buzzing headphones to crackling pres, our world is rife with little nuisances.  However, the most frustrating thing for me by far is how inexact our nomenclature is.  As a profession, we have really done a disservice to ourselves by not having a standardized and precise language for our trade. 

 

Oh, how easy it would be if someone would walk into one of my mixes and say “Yah, it sounds good, but there is a little too much 2.7 kHz, can you back that down a little?”  Instead, we are left with inexact jargon like “It’s a little harsh, can you do something about that?”  Of course most of us aren’t skilled enough to know exact frequencies without the necessary equipment, present company included.  So it would be ridiculous to say that we should all speak more precisely from now on. 

 

Instead, I will compile a list on this site, over time of course, that enumerates the various inexact terms I encounter in my career and what I would do to remedy them.

 

The first list here is for bass register terms.  Some of this comes with the help of Bruce Bartlett’s Practical Recording Techniques.  Feel free to respond back with more if you can think of them and I’ll try to include them.

 

Ballsy:  Emphasis on frequencies below 300 Hz, but only on mixes with distinct sounds between the bass instruments so as not to be muddy.

 

Bloated:  Emphasis on frequencies below 300 Hz, but with indistinct sounds.  Muddy with low frequency resonances.

 

Boomy:  Too much bass at 125 Hz.  This is often caused by sudden sounds that cause large excursions in the woofer reproducing the sound.

 

Boxy:  Low frequency resonances like being in a box.  Mainly resonances in the upper portion of the bass register from 200-300 Hz since boxes are too thin to adequately hold in low-lows.

 

Chesty:  This obviously refers to recordings of vocalists.  The chest is where the low frequencies reside, especially the native resonances.  It is relatively easy to address because humans are roughly the same size on average, so a simple eq trimming the frequencies somewhere between 120 and 250 Hz should do the trick.

 

Dark:  This usually is a term used in comparison to the upper frequencies.  As such, either decreasing the lower frequencies including the fundamentals or increasing the upper frequencies with an emphasis on harmonics can remedy the problem by evening out the response across the board.

 

Dull:  Along with dark, this usually means too much low register content in comparison to upper frequencies.  The upper frequencies are where you get words like “lively” and “bright” so again, the problem can be remedied by de-emphasizing fundamentals and low frequencies in comparison to the upper harmonics.

 

Ground Noise:  Constant hum between 50 and 70 Hz, but can be extremely broad spectrum.  If possible, filter it out, but it is often best addressed in tracking by using a ground lift or isolation transformer.

 

Muddy:  Too much competing low frequency content in the bass register.  Try etching out portions of the spectrum on each instrument and cutting unnecessary frequencies in other instruments in the bass range.

 

Rumble:  Relatively constant sound between 25 and 40 Hz.  Often caused by AC or other environmental sounds.  Easily addressed with a high-pass filter.

 

Thumpy:  Similar to boomy, but with the sudden excursions emphasized between 20 and 50 Hz.

 

Tubby:  Low frequency resonances, like boxy, but with more bass collection (since bathtubs are more reverberant than boxes and contain low frequencies better due to density and thickness).  Try EQing out low frequencies or using a high-pass filter.

 

Warm:  As it pertains to bass, having good bass response without overpowering higher frequencies and without being overpowered by them.  On a scale: dull/dark, warm, bright.

As previously mentioned, the bass portion of the audible spectrum runs from 20 Hz to about 300 Hz.  Setting aside the previously discussed sub-bass portion of this frequency band (frequencies 45 Hz and below), we can say that the bass portion of the spectrum should be reserved primarily for the fundamental frequencies of the roots of the chord changes in the song insofar as tonal content is concerned.  Of course this range should also incorporate low frequency sounds such as kick drums, toms, and even room tones.

 

Many of the biggest problems people encounter in tracking, mixing, and mastering occur squarely in this region.  Terms like muddy, boomy, and woofy all deal explicitly with the bass region.  We all want “big bass” with lots of thunderous kick drums and thumpin’ bass lines, but unfortunately the arithmetic is not so simple as “turn them all up.”  As many of you following along at home might have already experienced, turning up all the bass instruments in your mix is a recipe for a muddy, distorted mess.

 

So how do we properly address these issues to get a decent sounding mix?  Well, first we need to take note of the frequency band that encompasses the bass portion and see how it compares to the other bandwidths:

 

Bass: 25-300 Hz

Treble: 2.4-20 kHz

 

Look at that again.  That says that the bass range has a bandwidth of about 275 Hz while the treble range has a bandwidth of almost 18,000 Hz!  No wonder we run into problems of indistinct bass but not indistinct top end.

 

In composition, there is something called the Lower Interval Limit.  This is a commonly held set of rules that say, based on the frequency of the first note, how big the interval must be in order for that interval to sound clear and distinct.  For instance, if we were to use the 440 Hz A as our base note and play the C above that to form a harmonic interval of a minor third, we’d have a difference in frequencies 83.25 cycles per second (C5 is 523.25 Hz so 523.25-440=83.25 Hz).  This is a difference that our ears can distinctly hear without hesitation and we perceive as a pleasant albeit sad sonority.

 

Now imagine that we started with A four octaves down.  This A has a fundamental frequency of 27.5 Hz.  The minor third above that is a C with a fundamental frequency of 32.70 Hz.  This provides a much tougher to distinguish difference of only 5.2 Hz.

 

Furthermore, the difference between that A and its nearest upper neighbor, A#, is only 1.64 Hz.  So even in a melodic context, it can sometimes be difficult to properly distinguish the two notes.
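You can check these differences with the standard equal-temperament formula, f = 440 × 2^(n/12), where n is the number of semitones away from A4:

```python
def note_freq(semitones_from_a4):
    """Equal-temperament frequency in Hz, relative to A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

low_a = note_freq(-48)        # A four octaves below A4: 27.5 Hz
minor_third = note_freq(-45)  # the C a minor third above it
semitone_up = note_freq(-47)  # the A# a semitone above it
print(round(minor_third - low_a, 2), round(semitone_up - low_a, 2))
```

The printed differences land at 5.2 Hz for the minor third and about 1.64 Hz for the semitone, the same figures quoted above.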

 

As an aside, here is a handy-dandy list of the lowest notes generally accepted in order to have a properly sounding interval.  Bear in mind that these are only commonly held compositional standards and are free to be broken at any time:

 

Interval            Lowest Pitch    Second Pitch
Minor Second        E2              F2
Major Second        Eb2             F2
Minor Third         C2              Eb2
Major Third         B1              D#2
Perfect Fourth      A1              D2
Diminished Fifth    B0              F1
Perfect Fifth       C#1             G#1
Minor Sixth         F1              Db2
Major Sixth         F1              D2
Minor Seventh       F1              Eb2
Major Seventh       F1              E2

 

The first column is the desired interval.  The second column is the lowest note from which you can build that interval.  The third column is the corresponding note needed above the lowest pitch to complete it.
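For programmatic checks, the same table fits in a small lookup (the dictionary name and lowercase keys are my own convention; the major-seventh entry is completed as E2, a major seventh above F1):

```python
# Lower Interval Limit table: interval -> (lowest pitch, second pitch)
LOWER_INTERVAL_LIMITS = {
    "minor second":     ("E2",  "F2"),
    "major second":     ("Eb2", "F2"),
    "minor third":      ("C2",  "Eb2"),
    "major third":      ("B1",  "D#2"),
    "perfect fourth":   ("A1",  "D2"),
    "diminished fifth": ("B0",  "F1"),
    "perfect fifth":    ("C#1", "G#1"),
    "minor sixth":      ("F1",  "Db2"),
    "major sixth":      ("F1",  "D2"),
    "minor seventh":    ("F1",  "Eb2"),
    "major seventh":    ("F1",  "E2"),
}
```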

 

In my mind, muddiness occurs when we have too much bass information going on simultaneously that creates a big mess of sounds too close in frequency content.  This contributes to a washy indistinct bass. 

 

Generally speaking, the most common problem is figuring out how to separate the kick drum from the bass.  It is important to remember that even though the kick drum is often regarded as an atonal instrument, it still produces tonal frequencies and especially distinct fundamentals.  So if the bass and the kick drum are sounding in roughly the same range, our ears will be unable to distinguish the two sonorities.

 

One way to address this is by making sure that each instrument emphasizes different portions of the bass frequency band.  Ideally, these portions would follow the Lower Interval Limit.  For example, if the kick drum is tuned so that its fundamental sounds at about 60 Hz (which is roughly a B1), the bass should play no lower than D#2.  This way the fundamentals adhere to the lower interval limit theory and are reasonably sure to be clear and distinct sounds.
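A quick way to sanity-check pitch estimates like “a 60 Hz kick is roughly B1” is to snap a frequency to its nearest equal-tempered note (the helper name is mine):

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_note(freq_hz):
    """Name the equal-tempered note (A4 = 440 Hz) closest to a frequency."""
    semis = round(12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[semis % 12]
    # Octave numbers increment at C; A4 sits 9 semitones above C4
    octave = (semis + 9 + 48) // 12
    return f"{name}{octave}"
```

For a kick tuned near 60 Hz this lands on B1, which is what the lower-interval-limit reasoning above relies on.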

 

While the Lower Interval Limit theory  is not explicitly intended for this purpose and often it is meant to be used for harmonic intervals (notes sounding at the same time, generally on the same instrument but not necessarily), the point is that creating distinguishable sonorities is all about being able to distinctly hear differences between sounds.  We want to refrain from confusing our ears with sounds that are too close together that muddle distinction.

 

This will obviously not solve all the problems.  If you’ve ever seen a waveform for a kick drum, you know that its spectral content is very broad and not relegated simply to its fundamental frequency.  As such, it is further beneficial to deal with frequencies beyond the fundamental, however most of that will be dealt with as we move up the audible spectrum into the mid ranges.  For now the focus is on what we can do specifically in the bass register to prevent problems.

 

That aside, there is always a big collection of frequencies that sound in the bass register on a kick drum.  This is due to many resonations that aren’t perfectly in tune:  the beater head, the shell, the resonant head, not to mention all the nodes between the lugs on the head that yield very dense and complex waveforms.  These frequencies can be so broad that they can encompass a very large portion of the bass register and make the aforementioned solution pretty impossible. 

 

One way to address this strictly in the bass register is to EQ out whole sections of the kick drum so that it creates space for the bass guitar.  If you know the key of the song, you can determine the lowest note the bass player might play.  Your job then would be to carve out a nice chunk of the kick drum sound, not only covering the register where the bass plays but also below it, creating enough space that the two sounds are distinguishable from each other per the Lower Interval Limit theory.

 

Another aside:  doing these things may seem drastic, but bear in mind that your ultimate goal is NOT to create the best kick drum sound possible and the best bass guitar sound possible and add them together.  Instead, your goal is to create the best sounds for each instrument that work together so that they sound good together in the mix as a whole.

 

Another issue that crops up is when there are a whole bunch of overdubs that play in the same area.  It is difficult in the bass range for a bass guitar to sound distinct from a bass synth, and then to have both of them stand out from the kick, because you only have about 225 Hz to work with.  Layering a bunch of overdubs can lead to muddling if you do too much layering in the bass region.  One example that comes immediately to mind is an 808 kick drum, and trying to make that audible against a bed of kick drum and bass guitar.  The easy thing about 808s is that they are basically sine waves.  So it is easy to determine the note that the 808 is sounding and game-plan the kick and bass around it.

 

But for more dense things like synthesizers, it can be more problematic.  As I mentioned in the harmonics primer, there are a whole bunch of other frequencies that sound at any given time from any instrument.  Some of these occur in the harmonic series, but many others like formants and native resonances do not.  Sometimes these can occur in clusters around the fundamental note and when that occurs, the extracurricular frequencies from the bass and the synth and kick all roll up together and make a big muddy mess.

 

The best way to address this is to avoid excessive overdubs in the bass register.  Another way to deal with it is to find out what you can change easily, like the kick drum since it is static, and treat it in a way that keeps it out of the way of the bass and other instruments.  For instance you might EQ to emphasize the fundamental below the key of the song, and then eq out portions from the root up about an octave and a half to keep it out of the way of the bass and other bass instruments.

 

Next week, I’ll post some common terms associated with bass problems with some quick tips on how to address them.

 

Then, I’ll examine some issues in the mid-range and further delve into how to mitigate problems associated with the bass as well as those unique to the mid-range.

From “ill” to “trill,” buzz words have been a mainstay in hip hop culture since its inception, used to associate oneself with a particular scene or movement. A few years ago, using the word “crunk” in a lyric served as an automatic association with the South, while “hyphy” was code for California (specifically the Bay Area). As a rapper, buzzwords can either earn you street cred or date your work and career.


Snoop Dogg is the perfect case study on the benefits of buzz words. Like T-Pain with his Auto-Tune, “izzle” became Snoop’s brand, one which was so heavily copied and referenced that it elevated his status above and beyond his “Gin N’ Juice” days (via imitation being the highest form of flattery).*


While the term “swagger” is not technically new to hip hop, it has only recently become a movement, turning the game on its head and defining what it means to be cool in 2009. “Swagga Like Us” is a hit collaboration between the four hottest** rappers in the game: Kanye West, TI, Jay-Z, and Lil Wayne. It’s the closest thing to a “super group” rap music has seen thus far. Because of this, everything in the song becomes significant automatically.


There are a number of notable musical devices used in this song***, but what struck me most was the word swagger itself. Dope, fire, fly… those terms have all been more or less meaningless, merely synonyms for “cool.” But swagger calls to mind a very specific brand of cool. Swagger is classy. Sophisticated. Timeless. Those who possess swagger stay in control no matter the situation.

Sinatra: The original Sultan of Swag.


I’ve heard some rappers imply that people can have all different ‘types of swag,’ but this article refers to classic Swag, the real deal, of which TI is the archetype. It’s not hard to imagine that TI had Sinatra in mind when crafting his image.

“A person with swagger is classy, stylish, confident, above the fray, perhaps a bit aloof.”

-TI

Keri Hilson desires a man who has his “swagger right.” Mike Jones isn’t afraid to go pop for a woman with “Swag Through The Roof.” But as with anything that blows up quickly, its popularity could be its downfall…


The Death of Swag?


Several weeks ago, there was an internet uproar when CNN did a segment on Obama’s “swagga” (thank you, CNN, for the ‘authentic’ spelling). Ehow.com now has instructions on how to “turn [one’s] swag on” (see: “Turn My Swag On” by Soulja Boy). But swag started going downhill long before CNN caught wind of it.


I consider myself a connoisseur of pop music. Give me the dirtiest, most superficial, mindless morsels of sugary pop goodness and I’ll devour them in one bite. But every now and then a song comes along that is just so utterly baffling that I have to stop myself. I’m going to go against popular opinion here and put myself out there: “Swagg Surfin”? Really? Is this serious? “I SWAGG WHEN I SURF NOW WATCH ME SURF N SWAGG”? I practically had a heart attack when I heard this song for the first time. Swagg Surfin is beyond me. Maybe it’s the fake horns, maybe it’s the laughable dance, but I will not allow myself to like this song. Swagg Surfin is the new Laffy Taffy. Take a look:



The funny thing is, I’ve now listened to Swagg Surfin so many times (in an attempt to wrap my head around it) that I actually enjoy it. While it’s been all over Atlanta radio for a while, the lack of a Wikipedia page leads me to believe F.L.Y. and their song-dance have yet to make it out of the South.


So what does it all mean? Is swagger signaling a more mature direction for rap, a response to increasing social awareness from the 2008 presidential election? Has the younger generation decided to “turn (their collective swag) on and tune in?” Can you think of more rap buzz words? Comment with your favorites.


Download Swagga Like Us on Amazon MP3.


*Of course, today all but the dorkiest of middle-class white kids are tired of and unamused by izzle references, including Snoop himself, I’m sure.


**Young Jeezy is certainly up there, but his latest album didn’t do so well (though I’m a huge fan of “Put On” and “Vacation”) and he is branded as a cocaine dealer (“the snowman”), which is problematic for him because rap has turned away from gangster rap in favor of party/club music. At this time two years ago, every rap client put down Young Jeezy as a reference on our Fix Your Mix Submission Form but now it’s all Swagg Surf or TI.


***This song also struck me because it was on the iTunes Top 10 at the same time as the song its beat was sampled from, which demonstrates another trend: Sampling Stuff That Isn’t Old. Other songwriting devices used in “Swagga Like Us” include Phrygian mode and a driving kick drum pattern.



Masking (Producer-Speak)

Posted by Fix Your Mix on May 28, 2009

Psychoacoustics plays a very important role in our everyday lives.  We are not affected so much by what we hear as by how our minds interpret what we hear.  For instance, right now you might think you are sitting in a perfectly silent environment.  But listen closer:  the whirr of your computer fan, the gentle hum of the air conditioner, your neighbors blaring all kinds of intolerable pop songs.  We can notice all kinds of ambient noise when prompted, but often our minds just let it go unperceived.  This is a good thing: our minds filter out frivolous noise so that we don’t get bothered by it unnecessarily.

 

As professionals, amateurs, or hobbyists in the audio realm, we have to be more acquainted with psychoacoustic phenomena than the average Joe.  I have been discussing the sub-bass portion of the audible spectrum, which is the most demanding register in terms of its share of the power spectrum, and it brings up an important psychoacoustic phenomenon called masking.

 

From Sweetwater Sound’s wonderful Word For the Day dictionary:

 

When sounds that contain similar frequencies are played simultaneously, the weaker sound tends to have those overlapping frequencies covered – ‘masked’ – by the frequencies from the stronger sound (especially in a dense mix). The frequencies of the weaker sound are still there; they are just not discernable over the more dominant sound with the same frequencies.

 

This is precisely why it is important not to have too much information in the sub-bass region especially.  The sub-bass is often an unusable portion of the audible spectrum, yet putting too much of it in a mix can cause it to mask neighboring frequencies in the bass register, leading to a muddy, indistinct low end.

 

This becomes even more of an issue in digital audio due to encoding algorithms.  The designers of audio codecs, notably MP3s, use masking as a way of excising “unnecessary” portions of audio.  They have processes set up that detect masked frequencies and eliminate them from the mix.  These algorithms are necessarily imperfect since no single metric could feasibly fit all recorded music.
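As a toy illustration only (real codecs use full psychoacoustic masking curves, not a flat rule like this), the basic idea of discarding masked components can be sketched as follows.  The 20 dB threshold and 50 Hz neighborhood are arbitrary placeholder numbers:

```python
import math

def prune_masked(bins, neighborhood_hz=50.0, threshold_db=20.0):
    """Drop any (freq, magnitude) bin that sits close to a much louder bin.

    Toy model: a bin counts as 'masked' if some other bin within
    neighborhood_hz is at least threshold_db louder.  Real encoders
    compute frequency-dependent masking thresholds instead.
    """
    kept = []
    for f, mag in bins:
        masked = any(
            abs(f - f2) <= neighborhood_hz and
            20 * math.log10(mag2 / mag) >= threshold_db
            for f2, mag2 in bins if (f2, mag2) != (f, mag)
        )
        if not masked:
            kept.append((f, mag))
    return kept

spectrum = [(40.0, 1000.0),   # loud sub-bass component
            (60.0, 5.0),      # quiet bass tone right next to it -> discarded
            (2000.0, 5.0)]    # equally quiet but isolated midrange tone -> kept
print(prune_masked(spectrum))
```

Notice that the quiet 60 Hz tone is thrown away while the equally quiet 2 kHz tone survives: loudness alone doesn't get a bin discarded, proximity to a stronger neighbor does.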

[Image: mp3-waves]
If you look at a spectrometer for a full-spectrum mix, you can see that the sub-bass portion generally reads extremely loud even though you can’t hear most of it.  This means that, to an algorithm searching for masking phenomena, the largely inaudible and unusable sub-bass would read as the stronger sound, and the algorithm would preserve it at the expense of other more important, but less spectrally powerful, portions of the audio spectrum.
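You can see the same effect numerically by measuring two bins of a synthetic "mix" in which a sub-bass tone carries ten times the amplitude of a midrange tone.  The signal, frequencies, and sample rate here are made up purely for illustration:

```python
import math

FS, N = 8000, 8000  # one second of audio at an 8 kHz sample rate (illustrative)

# A loud 40 Hz sub-bass sine plus a much quieter 2 kHz tone.
x = [math.sin(2 * math.pi * 40 * n / FS) +
     0.1 * math.sin(2 * math.pi * 2000 * n / FS) for n in range(N)]

def magnitude_at(signal, freq, fs):
    """Magnitude of a single DFT bin, computed by direct correlation."""
    re = sum(s * math.cos(2 * math.pi * freq * n / fs) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / fs) for n, s in enumerate(signal))
    return math.hypot(re, im)

sub = magnitude_at(x, 40, FS)     # dominates the meter...
mid = magnitude_at(x, 2000, FS)   # ...even though 2 kHz is far easier to hear
print(sub / mid)                  # ~10: the sub-bass bin reads ten times stronger
```

A naive strongest-wins rule would treat the 40 Hz component as the one worth keeping, which is exactly the failure mode described above.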

 

As we’ll further explore next week, the entire bass region (including the sub-bass) is a relatively small region in terms of frequency bandwidth, so neighboring frequencies are very dependent on each other down there.  Masking can occur at any portion of the audio spectrum, but it is especially important in the bass region.
