
Archive for the ‘Production’ Category

Last week we discussed some of the inherent problems with sub-bass frequencies and how to deal with them.  One of the major issues is how sounds in that bandwidth lack specificity.  One instrument’s rumble, boom, and thud sound pretty similar to any other instrument’s.  For the frequency bands above the sub, we have to start talking about fundamentals, overtones, harmonics, and formants in order to properly appreciate some of the roles each portion of the audible spectrum plays in our interpretation of sound.

 

Since most of our clients and readers deal at least some of the time in the digital domain, chances are you’ve seen a complex waveform that looks something like this:

[waveform illustration]

In simple terms, waveforms of this type are the summation of various component frequencies.  In the illustration below, you see how a simple sine wave becomes more complex by the addition of harmonics:

 

[illustration: a simple sine wave growing more complex as harmonics are added]

The waveform starts with the fundamental frequency.  This is the lowest frequency present in the waveform that falls within the harmonic series.  When you play the 440 Hz A on the piano, 440 is really just the frequency of the fundamental, not the only frequency present.  Other frequencies are created when you play notes on almost any instrument in any environment—these additional frequencies beyond the fundamental are what help us distinguish one instrument from another.  Those that are above the fundamental are called overtones or upper partials.  Overtones that are integer multiples of the fundamental are called harmonics.
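To make that summation concrete, here is a minimal numpy sketch that builds a complex waveform from a 440 Hz fundamental plus its first few harmonics.  The 440 Hz value comes from the example above; the harmonic amplitudes are arbitrary illustrative choices, not measurements of any real instrument.

```python
import numpy as np

sr = 44100                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of sample times
fundamental = 440.0             # the A above middle C, as in the example

# Illustrative amplitudes for the fundamental and its 2nd, 3rd, and 4th
# harmonics (880, 1320, and 1760 Hz), each quieter than the last.
amplitudes = [1.0, 0.5, 0.33, 0.25]

waveform = sum(a * np.sin(2 * np.pi * fundamental * (i + 1) * t)
               for i, a in enumerate(amplitudes))

# The result is no longer a pure sine wave: the added harmonics give it
# the more complex shape shown in the illustration above.
```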

 

 

There can also be lower partials or undertones, though these are slightly less common.  And there are also sub-harmonics, which follow the pattern (1/n)(fundamental): that is to say ½(440 Hz), ⅓(440 Hz), ¼(440 Hz), etc.

 

Existing both above and below the fundamental are formants, which are acoustical resonances of the instrument itself that sound no matter what note is played.  For a violin, one formant is a frequency whose nodes lie at opposite ends of the length of the violin.  Any vibration from any note stimulates the violin body to resonate, and that frequency sounds as well.

 

Formants and overtones are some of the things that allow us to distinguish a 440 A on the piano from a 440 A on a synthesizer, a singer, a violin, or a drum.  They also help us separate a Yamaha from a Stradivarius.

 

So if I were to hit that 440 Hz A on a piano, I would generate several frequencies:  the fundamental at 440 Hz; harmonics at 880, 1320, 1760, etc.; as well as whatever formants are present in that specific instrument.

 

The ratio of these frequencies relative to each other is what makes a characteristic sound.  So for instance, a guitar with nickel wound strings might sound that very same 440 Hz A but have more emphasis on odd-numbered harmonics, whereas a guitar with nylon strings might hit that same 440 Hz A and have more emphasis on the even-numbered harmonics.  Similarly, the nickel-stringed guitar might have a formant at 900 Hz and the nylon might have a formant at 4200 Hz.

 

You can see that when dealing with overtones and formants, you can very quickly span the entire audio spectrum.  That’s why if you get yourself a spectrum analyzer or even one of the nice plugin digital EQs out there, you’ll see that hitting any note on any instrument produces many more frequencies than just the fundamental of the note you hit.
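If you want to see this for yourself without a dedicated analyzer, a rough equivalent is a quick FFT.  This sketch assumes the `waveform` and `sr` variables from the synthesis example above, but any mono recording of a single note will do.

```python
import numpy as np

# Magnitude spectrum of the note.
spectrum = np.abs(np.fft.rfft(waveform))
freqs = np.fft.rfftfreq(len(waveform), d=1 / sr)

# List the strongest peaks: for a 440 Hz note you should see energy not
# just at the fundamental but at 880, 1320, 1760 Hz and beyond.
for i in np.argsort(spectrum)[-6:][::-1]:
    print(f"{freqs[i]:8.1f} Hz   magnitude {spectrum[i]:,.1f}")
```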

 

When we talk about treating the bass, mid, and upper frequency bands over the next few weeks, you’ll see how important overtones and formants are to audio perception.

More from Phil’s Audible Spectrum series:

Last week we started examining component parts of the audible spectrum.  Of those component parts, perhaps none is more misunderstood and mishandled than the sub.  Perhaps it’s all those cars with bumpin’ sound systems out there, but it seems like everyone wants to cram as much “sub” as they can in the mix.  Just make sure you know what you are asking for!

 

Firstly, I have to provide a disclaimer: any car with a big subwoofer in the back sounds terrible to me.  Outside my studio someone was parked blaring some Lady Gaga tune or something like that and all I could hear was the sub.  I could hear it distinctly too, despite being three walls and a hundred yards away.  I can’t help but think about how badly those people are destroying their ears.  Moreover, it just plain doesn’t sound good to me.

 

As I mentioned last week: for practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by sub-woofers and large format PA/sound reinforcement systems, so even if you have a million-dollar audio setup and can hear all the way down to 20 Hz, realize that 90% of your fans still won’t hear that.  As I mentioned in the Limitations article, most of this won’t be reproduced by any consumer-grade sound system.

 

Moreover, the sub is for audio content that lacks position specificity.  If you’ve ever seen a surround sound set-up before, you know that there are 5 speakers (LCR and two rears) plus a single sub-woofer.  Sub frequencies are very difficult to locate spatially and will more or less sound like they are coming from the same place no matter the position of the loudspeaker.  This is why surround sound setups don’t also require 5 separate subs.  A single sub placed in the center will suffice for all positions in the surround soundstage.  Because of this, too much sub content will turn into a big muddy bass because there is no real way to separate the rumble of the kick from the rumble of the bass or the rumble of the synth.

 

In order to get a focused sounding sub—the kind that moves you in the club or the kind that is noticeable (in a pleasant way) in home hi-fi systems that can actually reproduce those frequencies—you need to alter your thinking about the sub.  Don’t think of it as a separate frequency band that needs to stand on its own merit or be equal to the other frequency bands.  In fact, it helps even more to think of it as a garnish on the bass.  Something to help emphasize the bass, but not overpower it or stand on its own. 

 

If your bass lives in the bass and mid-range frequencies, adding in the sub should make it stand out all the more.  But the bass should not be confined to the sub region.

 

Furthermore, a sub bass is more clearly defined by what is NOT in it and for how long.  Imagine a band consisting of a drummer, a bass player, a synth player, and maybe a string orchestra—rocking out 80s arena style.  With all of those instruments you have the OPTION of including all that information in the sub:  the kick drum, the bass formants, the synth sub-harmonics, and the orchestra formants.  There would also be additional room tones and environmental sounds all going into the sub.  Since the sub has no position specificity and because sounds are distinguished from each other predominantly by upper harmonics, the sub sounds will be big and washed out because you won’t easily be able to tell the sub-bass components of each instrument apart from each other.

 

This introduces the problem of muddled bass.  A kick drum that is short in duration might get buried by the longer notes of the orchestra and synthesizer, so you might get more sub overall but lose the clarity of the kick.  In these sub ranges, sounds are really just rumble and boom, so the only way to tell things apart is by relative volume and note duration.  Cramming all that material together as in the example above obscures both.  It just creates an audible but unusable low-frequency noise floor.

 

Instead, a better choice is to be selective about what makes it to the sub-woofer.  That’s how you really draw emphasis and get the most out of the sub frequencies.  Make the drummer sound like John Bonham by putting a high-pass filter over the mix at 45 Hz and bypassing the filter on the kick drum track, or maybe the entire drum set.  Then the kick drum really gets beefy and the rest of the ensemble doesn’t cloud that portion of the spectrum.
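In a DAW you would simply put a high-pass filter on the mix bus and bypass it on the kick or drum tracks.  For readers working in the box with raw audio, here is a minimal sketch of the same idea using scipy; the 45 Hz corner comes from the paragraph above, while the filter order and the bus names are placeholders.

```python
from scipy.signal import butter, sosfilt

def high_pass(audio, sr, cutoff_hz=45.0, order=4):
    """Butterworth high-pass: rolls off everything below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio, axis=0)

# Hypothetical buses: filter the rest of the ensemble, leave the kick
# untouched, then sum, so only the kick occupies the sub region.
# mix = high_pass(everything_else, sr) + kick
```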

 

If you have a ticky kick drum like Metallica, you could instead opt to make the bass guitar or orchestra super fat by sending that to the sub instead.  The point is to be selective about what information makes it to the woofer so as not to obfuscate the sonic image with unnecessary clutter.

 

Additionally, as I’ve mentioned before, it is imperative to understand that low frequencies are extremely power dense.  So if you are actually “hearing” anything below 40 Hz, you are taking up way too much of the power spectrum.  This will blow out speakers, distort channel strips, and otherwise yield bad mixes.  And in a closed system, this extreme bass content (which is barely audible) will steal precious headroom from the more important frequencies.

 

The important take-home lessons: don’t expect your listeners to hear the sub-bass, because for the majority of them the sub doesn’t exist at all.  For the rest, be selective with the sub-bass content to make sure that you are actually using the woofers properly.

More from Phil’s Audible Spectrum series:

Over the past two weeks we have been discussing items pertaining to the audio spectrum at large.  In this article we’ll begin breaking down the audio spectrum into its component parts.  Though we disagree a bit on our subdivisions, Jay’s primer has excellent listening examples to hear each section individually.

 

Generally speaking, sounds can be lumped into three basic segments of the audio spectrum:  Bass, Mid, and Treble. 

 

The associated ranges would be approximately:

 

Bass: 25 to 300 Hz

Mids: 300 Hz to 2.4 kHz

Treble: 2.4 to 20 kHz

 

Additionally, they can further be broken down in numerous ways depending on how people want to define sections (collected into a quick lookup sketch after the list):

 

Sub: 25 to 45 Hz

Bass: 45 to 300 Hz

Low-Mid: 300 to 600 Hz

Mid: 600 Hz to 1.2 kHz

High-Mid: 1.2 to 2.4 kHz

Treble: 2.4 to 15 kHz

Super Treble: 15 kHz to ~100 kHz
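Here is that lookup sketch, for readers who like to sanity-check a frequency in code.  The boundaries are simply the ones listed above; other engineers draw the lines elsewhere.

```python
# Band boundaries as defined in this article, in Hz.
BANDS = [
    ("Sub",              25,      45),
    ("Bass",             45,     300),
    ("Low-Mid",         300,     600),
    ("Mid",             600,    1200),
    ("High-Mid",       1200,    2400),
    ("Treble",         2400,   15000),
    ("Super Treble",  15000,  100000),
]

def band_of(freq_hz):
    """Name of the band a frequency falls into, per this article's breakdown."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return "out of range"

# band_of(80)   -> "Bass"
# band_of(440)  -> "Low-Mid"
# band_of(5000) -> "Treble"
```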

 

This Interactive Frequency Chart, much like the Carnegie Chart in the earlier article, will help you understand how the frequency ranges match up with practical instrumentation.

 

For practical purposes, Sub-Bass should be anything that sounds below the lowest fundamental note of your song.  This can include percussion and any sub-harmonics, resonances, formants, and room tones.  These are frequencies that would really only be reproduced by sub-woofers and large format PA/sound reinforcement systems.  Some of this is undesirable—if you’ve ever watched an NFL game on a windy day with a system that has a sub, pretty much everything is a big bass wash because of low-frequency wind noise.  We’ll go more in depth on that next week.

 

Bass should be reserved for the fundamental notes of the changes.  That is, the lowest sounding note of each chord in the progression.  This typically would include all the notes that would normally be played by a bass (Victor Wooten excluded).  This would also include bass-playing synths and the left hand of the piano in many instances.

 

The Low-Mids and Mids include fundamental notes for melodic instruments as well as the first few orders of harmonics.  Harmonics help us distinguish sounds from each other and play a very important role in presence and clarity.  More on this when I examine the mid frequencies in two weeks.

 

The High-Mids deserve their own category because these frequencies contain sudden transient content.  For percussion, this would be the sound of sticks or mallets hitting the drum heads and cymbals.  For guitarists, this would be the sound of picks striking strings.  For vocalists, this would be the sound of hard consonants and sibilance.  All of these can be problematic, but they also contribute greatly to the impression of presence.

 

The treble portion of the audio spectrum contains almost nothing but upper harmonics of treble instruments and room tone.  This helps lead instruments and vocals sound present and full, but also adds brightness and clarity to a mix.

 

Over the next few weeks I’ll go into greater detail on problems with each part of the frequency spectrum.

More from Phil’s Audible Spectrum series:

Recording 101 teaches us that the audio spectrum is 20-20,000 Hz and it is our job as recording engineers to manage those frequencies. For introductory level classes, that is a usable definition, but it often leads to misunderstandings. Do we hear 20 Hz as well as 20,000 Hz? Do we hear those frequencies as well as 2,000 Hz? The answer to both is no. In fact, given contemporary technological limitations, it isn’t even possible to reproduce most of that range.

 

For those of you who read Jay’s Primer on Audio Frequency Bands and made it all the way to the bottom, you would have read some interesting things about broadcast standards and encoding algorithms.  Broadcast standards here in the US actually cut off frequencies above 15 kHz.  That is, radio and television broadcasts don’t even bother with the top 5,000 Hz of the audible spectrum!  If there were such a thing as radio anymore, you’d know to laugh off any audio engineer who promises you “radio quality mixes.”  Also, cutoffs are employed in almost all digital encoding algorithms in order to prevent aliasing of upper frequencies.

 

On the other end of the spectrum, most playback systems are not designed to go below 30 Hz.  Currently, the lowest reproducible frequency from any JBL system is a live sound reinforcement loudspeaker whose woofer goes down to 25 Hz.  They also have consumer and studio woofers with roughly the same specs.  You’ll notice that these are all woofer systems and not standard speakers for desktop and meter-bridge monitoring.  Standard studio monitors without a woofer fall off sharply at ~45 Hz.  With this in mind, you should not expect to hear anything below 40 Hz on a standard system without a woofer.  Furthermore, you should know that about 90% of your audience will not be able to physically reproduce anything below 50 Hz given the standard consumer setup.

 

This is not to downplay the psychological impact of low or high frequencies.  These play a very important role in psychoacoustics.  Low-lows, though inaudible, help us perceive lowness partially through feel rather than sound.  High-highs also help us perceive presence and therefore clarity by giving more emphasis to the minutiae of a sound that you’d only hear by being close to it in the real world.

 

Next week, I’ll clearly define the component regions of the audio spectrum and talk about the various ways to treat undesirable maladies afflicting them individually.

More from Phil’s Audible Spectrum series:

The Audible Frequency Spectrum, Part 1 (Producer Speak)

Posted by Fix Your Mix on April 19, 2009

Over the course of hundreds of interactions with clients through Fix Your Mix, both in a mixing and mastering capacity, I have noticed that there is a great disagreement out there on the practical frequencies in audio.  This is strange to me because we have such a vague lexicon for our enterprise (boomy, boxy, tinny, etc.) that you’d think we’d all latch on to terms with such defined parameters as Low, Low-Mid, High, et al.

 

But nevertheless, every couple months I get a client who says “I love the mix, but I’d really like to hear more bass, can you boost 10 Hz by like 5 dB?”  So for all of you loyal readers out there and as a reference for future clients, I have composed a series of articles describing the portions of the frequency spectrum.

 

Here is an excellent primer for discussing frequency ranges. Jay works in post-production (television, film, etc.), so his end goals are different from those of us in the music business. He also neglects to emphasize the importance of upper frequencies for imbuing a recording with presence, clarity, and professional quality.  But other than that it is an excellent breakdown of the frequency bands.  For this week though, we’ll be talking about the audible frequency spectrum at large.

 

The audible frequency range is generally accepted to run from 20 to 20,000 Hz.  Some people hear more, most people hear less.  However, it is important to understand that this broad frequency range is supposed to include the frequencies that the average person is physically able to hear.  For the purposes of experimentation, frequencies outside of the range can be heard, but they have to be amplified to such an extreme that they are not worth measuring.

 

To the left is the Fletcher-Munson Equal Loudness Curve, first published in 1933.  It is probably the most cited graph in psychoacoustics (the Robinson-Dadson Equal Loudness Curve of 1956 has been shown to be more accurate, but since Fletcher-Munson is the most widely used, the following commentary will focus on that).  The graph plots sound pressure level (SPL) against frequency; each curve traces the level each frequency needs in order to sound equally loud, and the curves are labeled in phons.  That is, if you were to follow each line from 20 to 20k, you’d see the variation in amplitude necessary to make each frequency sound equal in loudness.  For example, on the top curve, take 1000 Hz sounding at 120 dB SPL as the baseline.  In order to hear 20 Hz at the same apparent level, you’d have to amplify it to about 130 dB SPL.  The same goes for 20k.

 

Another interesting phenomenon about this curve is how exaggerated the differences become at lower amplitudes.  For instance, when you look at the 20-phon curve (the third line from the bottom), 1000 Hz sits at 20 dB SPL, but 20 Hz has to be pushed to almost 80 dB SPL to sound at the same apparent level.

 

Now bear in mind, this is not to say that you want to go and quadruple your bass content to get a booming mix.  On the contrary, this is to say that you really shouldn’t expect to hear anything beyond certain points in the mix.  In almost all instances of music recording, there will be frequency content below easy audibility.  The point of mixing is not necessarily to make it audible.  Sometimes these frequencies are meant to be felt rather than heard.  Other times, these frequencies don’t really add much to the mix at all—eating up large portions of the usable power spectrum and overloading your mix with unnecessary content that either will hurt fidelity due to digital encoding or broadcast algorithms, or will be cast off anyway due to physical limitations of sound reproduction systems.

 

Here is a graph of all the frequency ranges for common instruments and their notes as shown on a piano.  What you’ll notice is that the range for a concert bass is from ~90 Hz to ~350 Hz.  The absolute lowest note on the piano is around ~28 Hz, and that is a note that you will likely never hit.  Practically all the action in musical instruments occurs between 60 and 5000 Hz.  Allowing for formants, harmonics, and other sonic phenomena outside of the fundamental frequency of the note, it is safe to say that practically all usable and desirable sounds fall within 20-20K and that range could even reasonably be made smaller.
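If you want to check fundamentals like these yourself, the standard equal-temperament formula (A4 = 440 Hz, MIDI note 69) gets you there in a couple of lines; this is a generic conversion, not something specific to the chart above.

```python
def note_freq(midi_note, a4=440.0):
    """Fundamental frequency of an equal-tempered MIDI note (A4 = note 69)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(note_freq(21))   # A0, the lowest key on the piano: 27.5 Hz
print(note_freq(69))   # A4: 440.0 Hz
```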

 

In next week’s article I will examine these specific limitations and discuss why the low frequencies are the most problematic.

More from Phil’s Audible Spectrum series:

Yamaha NS-10s (Producer Speak)

Posted by Fix Your Mix on April 16, 2009

In 1978 the Yamaha NS-10 first hit the home audio market. The speakers were originally designed for the consumer rather than the professional sphere. The only problem was that the speakers sounded terrible and no one wanted them for that purpose. They were often described as overly bright and harsh and the frequency response was abysmal in the low end (criticisms which are founded and still exist to this day). However, despite its audiophilic shortcomings, Fate found other uses for this Little-Speaker-That-Couldn’t.


As New Wave, punk, and other lo-fi genres began to take hold on the world, a DIY spirit took over and smaller, cheaper recording studios were created that catered to a clientele who didn’t necessarily place a premium on fidelity. Near-field monitoring became the fashionable choice for these studios because it minimized the effect of listening environment on the sound of a mix. This allowed bedrooms, basements, strip-malls and other ostensibly acoustically unsound venues to become mixing environments.


In these situations the NS-10’s weaknesses became strengths. Their lack of low-end capability meant that room modes (standing waves in a listening environment which cause certain frequencies to be accentuated because of the geometry of the room) weren’t much of an issue, since these acoustic phenomena are largely confined to the lower frequencies. Furthermore, their use with cheaper, lower-output amplifiers (as was common in these smaller studios) meant that the program output was lower. These volume levels are generally agreed to be the NS-10s’ most accurate operating range. And of course the price, as a previously undesirable commodity, was just right for small studios.


Over the course of the 1980s, the NS-10 became a mainstay of the recording studio, and their ubiquity, coupled with the fact that their poor sonic characteristics generally do not excite the individual characteristics of a listening environment, meant that the NS-10 could become a fairly universal reference. By and large, NS-10s were thought to sound reasonably similar in every listening environment. Thus, most mixing decisions made on them are themselves adequately portable.


However, the NS-10 is only as useful as you are familiar with its sonic characteristics. A +7 dB peak at around 1500 Hz contributes to the audibility of some mid-range sounds such as the human voice and acoustic guitar. Operating without this knowledge may result in a weak vocal or acoustic in the mix when you take your songs to other environs.


It is also very difficult to judge a mix’s low-lows on NS-10s. The speaker simply was not designed to reproduce those frequencies. If you aren’t aware of this, then you may find yourself pumping in a ton of low-end just so that the sub frequencies are audible, but if you took it to the club, you’d probably blow out the speakers with all that 808!


It is now agreed in most professional circles that NS-10s are an excellent reference at low volume levels and for gross judgments that do not involve the sub frequencies. Armed with this knowledge, you’ll have a better understanding of how to use this omnipresent piece of gear, and knowing how to properly use a tool is the most important part of the audio world.

The Decibel (Producer Speak)

Posted by Fix Your Mix on April 9, 2009

There are some instances when a limited amount of knowledge can do a great deal of harm. For instance, you might know that a bit of sun is good for you. If you are not fully versed in the effects of sun exposure on the skin, you might be wondering what those strange, asymmetrical spots are that keep popping up all over your body. Get those checked out; seriously, I worry about you sometimes…

 

Other times, a basic understanding of something might be helpful most of the time. Take Euclidean geometry, for example. If you aren’t an astrophysicist or a nuclear scientist, pretty much everything you need to know falls into Euclidean space.

 

But there are also times when the common sense understanding of something gets you by enough so that you don’t realize all the other times that it is absolutely wrong and leads you astray. This is the case with our friend the decibel.

 

I was working on a record a while back with producer/engineer extraordinaire Paul Kolderie (Radiohead, Pixies, Mighty Mighty Bosstones) and he mentioned something in passing that really caught my attention. I can’t really recall what the situation was, but we were setting up a session and he said to me “I can’t stand it when people ask me to change something by half a dB. A dB is the lowest possible change you can perceive, so saying half a dB is meaningless.”

 

Many nights I woke abruptly from sleep in a cold sweat, tormented by what he had said. Something sounded so right and yet so wrong about that. I mean, if I told you to change something by half a dB twice—both equally insignificant changes by his definition—I would get a change of a full dB, and therefore a significant change. Using some simple extrapolation, you can’t keep considering fractional changes in decibels insignificant, because sure enough they add up.

 

So what exactly is a dB and what change in dBs is significant to our ear and in our mix? Well, without getting overly scientific about it and also restricting the question to audio applications (sorry electrical engineers), a decibel is a convenient unit of measure that expresses very large changes in magnitude against a reference level in a concise manner. Concision was important back in the days of hand calculation.

 

When they were busy wiring up the world for telephone usage, Bell Laboratories thought it’d be really swell if they could measure the amount of degradation in audio level over a mile of telephone cable. They did the calculations but soon found that expressing the quantities in conventional terms meant using insanely large and unwieldy numbers. So they decided to use a logarithmic function to bring the numbers down to more manageable figures for simple calculation. Logarithms are useful because they have some of the same arithmetic properties as regular numbers (for example, adding two logarithms with the same base is equivalent to multiplying the underlying quantities, so big multiplications become simple additions). The unit they came up with became known as a bel in honor of the company and Mr. Alexander Graham Bell. So a decibel is actually 1/10 of a bel.
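In modern terms the definition is just a logarithm of a ratio: 10·log10 for power quantities, and 20·log10 for amplitude quantities such as voltage or sound pressure (because power goes as the square of amplitude). A quick sketch:

```python
import math

def db_from_power(p, p_ref):
    """Decibels for a ratio of powers."""
    return 10 * math.log10(p / p_ref)

def db_from_amplitude(a, a_ref):
    """Decibels for a ratio of amplitudes (voltage, sound pressure)."""
    return 20 * math.log10(a / a_ref)

print(db_from_power(2, 1))       # doubling the power     -> ~+3 dB
print(db_from_amplitude(2, 1))   # doubling the amplitude -> ~+6 dB
```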

 

So why do we talk about tenths of something? After all, we don’t regularly deal in decimeters or decigrams. Well, in the mid-1800s, some very clever psychophysicists began studying something called Just Noticeable Differences (JND) in sensation. A JND is the smallest incremental change in a sensation that is perceptible to the average person. This could be the JND in touch as measured in PSI or the JND in sight as measured in lumens. Someone discovered that a tenth of a bel roughly corresponded to the smallest detectable change in a sound to the human ear. As such, the decibel became a very important measurement in audio because it was simple to express changes that actually meant something with regard to common perception.

It is important to note that JNDs relate to the AVERAGE person. As such, musicians and audio professionals are often able to detect much more minute changes in audio level.

When studying JNDs, another useful but perhaps counterintuitive aspect of the decibel arose—a doubling (or halving) of perceived volume roughly corresponds to a change of +/- 10 dB. This is useful but strange in that the arithmetic is skewed—you’d expect a doubling in the perceived volume of something that sounds at +2 dB to be +4 dB. But then again, what is a doubling of something that measures 0 dB? This exposes some of the fundamental limitations in the simple definition of the decibel—human perception complicates the simple calculations.
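Turned into arithmetic, that rule of thumb looks like the sketch below. This is only the rough ten-dB-per-doubling approximation from the paragraph above, not a real psychoacoustic loudness model.

```python
def perceived_loudness_ratio(db_change):
    """Approximate change in perceived loudness for a level change in dB,
    using the ~10 dB-per-doubling rule of thumb."""
    return 2 ** (db_change / 10)

print(perceived_loudness_ratio(10))    # ~2.0  -> about twice as loud
print(perceived_loudness_ratio(-10))   # ~0.5  -> about half as loud
print(perceived_loudness_ratio(1))     # ~1.07 -> barely noticeable
```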

 

Such problems spurred further investigation into situational applications of JNDs, and Signal Detection Theory was born. In basic terms, the object of Signal Detection Theory is to figure out what extra factors go into our perception of a sound and how it compares against “noise,” or unrelated signals. For instance, does a +1 dB change to a signal still sound like an increase of 1 JND if the sound is played over white noise? What about if the original signal is a 100 Hz sine wave? What about 30 kHz?  What if the original signal is a voice played over a country band?  Or a metal band?

 

It was discovered that the JND of a signal changes based on frequency range and initial level. A JND is around 1 dB for soft sounds at frequencies in the low and mid range—the frequencies we perceive most readily. Really loud sounds can have a JND of 1/3 to 1/2 dB. Really soft sounds on the edge of audibility might have JNDs of a couple dB.
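Those rough figures, gathered in one place (coarse approximations quoted from the paragraph above, not measured data):

```python
def approximate_jnd_db(listening_level):
    """Very rough just-noticeable difference in dB, per the figures above.
    `listening_level` is one of 'loud', 'moderate', or 'faint'."""
    return {
        "loud": 0.4,       # really loud material: roughly 1/3 to 1/2 dB
        "moderate": 1.0,   # soft low/mid-range material: about 1 dB
        "faint": 2.0,      # sounds at the edge of audibility: a couple of dB
    }[listening_level]
```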

 

Furthermore, other things can color sounds in such a way that you can take the same sound, add something to it and suddenly the JND might be more or less than a dB. Perceptual Encoding Theorists look for factors outside the Critical Band of Frequency for a sound (the frequency or frequencies that define a sound) that would alter our perception of it. For instance, adding a slight reverb in some cases might cause the JND to rise (meaning you need to turn the signal up more to get a perceivable change) or adding a harmonic exciter in most cases would cause the JND to lower (meaning you wouldn’t need to turn the signal up as much to get a perceivable change). This is because new nerve endings are being excited and these cause our minds to perceive the sound in a different way than we had previously.

 

As you can see, the decibel is not quite as simple as its common sense understanding in the audio world. So when you need to make something appear twice as loud, you know what to do. When somebody tells you to make their vocals 20 dB louder, you know that that is laughably extreme (for the most part) and you should adjust your corrections appropriately. When someone asks you to turn something down by 1/3 of a dB, you know that it is really only going to be detectable if that sound is already pretty loud.

Recording Techniques in “Kids” by MGMT

Posted by Fix Your Mix on April 1, 2009

In our first time at bat on these Sonic Deconstruction articles, the song choice appears to be a swing and a miss for recording techniques day. A calamitous choice for one simple reason: almost everything is a sample, loop, or synth! As a result, recording methods aren’t immediately intuitive in the way that Kings of Leon or Foo Fighters would be. It also doesn’t help that the one track that undoubtedly existed at one point in the real acoustic world (as opposed to tracks that could have been DI’ed or MIDI-triggered) is the vocal track, and frankly it doesn’t sound very good. But this is our dishwashing liquid and dammit, we’re going to soak in it.


Recorded at Dave Fridmann’s residential studio in upstate New York, MGMT’s Oracular Spectacular is probably the ideal album to record there. In his September 2000 article in Sound on Sound Magazine, Dave intimated that the design of Tarbox Road Studios is somewhat less than ideal:

The design work required to turn the house into a studio was taken on by Dave himself, who felt that the recommendations of a professional studio designer would in any case be beyond his means…

‘When people are normally doing acoustical design they’re worried about a lot of isolation, and worried about floating floors and cement structures to isolate you from each other. And I was worried about it, but I really couldn’t do anything about it, so I didn’t worry too much, just did what I could.’

Like many residential-type facilities—professional, prosumer, or hobbyist—isolation is a concern. So when big bands come in wanting to track everything live, you often get so much bleed that you lose flexibility in your tracks. Your guitars are in your drums, your drums are in your vocals, and you can’t change one without leaving some ghostly artifact somewhere else. Well, with a band like MGMT that consists exclusively of two musicians playing instruments that could very well exist entirely in the box, those issues are no longer a concern.

It is my belief that at least a few of the synthesizers were amped or re-amped for mixing. There is a lot of dirt and grit on the synthesizers, especially when compared with the infantile clarity of the sounds in the EP version, which makes me think that amp gain, color, and distortion are part of the sound. There is an audible grime on the melody synth that is evident when the keyboardist lands on that C# that holds for a measure. It almost sounds like it is coming through a battered old Leslie cabinet.

The vocals are an interesting beast—they are exceedingly sibilant to my ear, which could very well be a combination of mixing and mastering (provided by Greg Calbi). This assaulting high-frequency presence might indicate that Fridmann used a hi-fi mic on a less-than-hi-fi singer. I know that his favorite mic is his tube U-47 (one of my personal favorites as well), so he might’ve used that old standby. On a singer with an unpolished and young voice like in MGMT, I likely would’ve opted for a dynamic microphone with a bigger, heavier diaphragm, like the SM7, to compensate for the vocal character. These mics have the effect of covering up the imperfections that might otherwise be audible when a tube mic is used. Either way, the vocals are heavily processed with filters, fuzz, compressors, and fx, so the original character of the vocal as interpreted through the microphone is likely lost except on the multitrack file.

By and large, the greatest assets to the sounds on the record would be the mixing techniques. Check back on Friday for some in-depth speculation. Dave, if you’re reading, feel free to set us straight!

Claps & Snaps: The Death of the Snare Drum (TrendWatch)

Posted by Keith Freund on March 27, 2009

Not exactly breaking news, but humor me: scan the Billboard Hip Hop chart and you’ll see that it is hard to find a rap song with a snare drum on the back beat. Why?


Let’s go back to the late 90s for a moment. From groups like OutKast, No Limit Soldiers (Master P, Mystikal), and Cash Money Millionaires (Juvenile, Birdman) sprung a new era of southern music which began to seep into America’s collective consciousness. Still, with artists like Eminem, Jay-Z, Kanye West, and 50 Cent (and production teams like Neptunes and Timbaland), it would be another half a decade or so before the South virtually became Top 40 rap.


Growing up in Atlanta, I had a somewhat distorted view of the influence of southern rap. In fact, just the other day I was discussing this very topic with a well-known East Coast rap mixer and discovered that songs like “Back That Azz Up” didn’t have nearly the impact on the national level that they did in Georgia. In fact, he said that Mystikal’s “Shake Ya Ass” was the song that, for him, signaled the entrance of southern music into the mainstream.


The South officially became mainstream with the Crunk movement, which is when all the clapping and snapping started. For me, the turning point was when “Get Low” came out and Lil Jon became The Face of Crunk on the national and international levels.


Which brings us back to the initial question: Why? While the actual reason probably has something to do with tools available, the whims of producers, and the butterfly effect…

A clap or snap provides two distinct advantages over a snare drum: (1) it leaves room for other elements in the mix (it does not compete with the vocal), and (2) it provides a human element.


As Phil pointed out in a previous post, when it comes to a mix, in order for something to be big, something else has to be small. While it may seem that layers upon layers of sounds would lead to a bigger mix, it also leads to a smaller vocal, smaller drums, smaller bass. When you’ve only got a clap, an 808, and a vocal, each of those elements can be huge. Unlike the epic snare drums that typify the rock idiom, claps are humble, unassuming, and fun.


Many people simply do not enjoy instrumental music because there is nothing human to connect with. They need a lead vocal to connect with the song. To a lesser extent, claps and snaps serve as this same kind of human element. (If I really wanted to get academic about this, I could relate this to the call-and-response aesthetic seen in traditional African music… but I’ll abstain.)


Slowly but surely, Crunk has split off into two genres which are in effect today:


“Snap Music”


The first branch is known as snap music. In my mind, snap music is the only authentically southern rap around because it is still exclusively being made in the South (in other words Kanye isn’t stealing it). Here are its signature characteristics, in order of importance:

  • A single-note bassline (no chord progression)
  • Sparse arrangements
  • 808 kick sound
  • Monophonic, short, riff-based melodic elements
  • Snaps on beats 2 and 4
  • Fruity Loops-esque synth patches
  • Syncopated snare-fills

Snap music is a little less produced than everything else on the radio. I’m talking Yung Joc, I’m talking Soulja Boy, hell, I’m talking “Laffy Taffy”:



Perhaps even more importantly, it’s a lot more convenient to snap while dancing than to clap (not to mention cooler-looking).


Mainstream Rap


The other branch is what I would simply call mainstream rap: your Lil Waynes, your T-Pains, your TIs. This style is characterized by the following:

  • Auto-Tune/choruses with singing
  • Claps on beats 2 and 4
  • 808s, either supplementing or serving as the kick sound
  • A melodic bassline (in other words, there is an actual chord progression)
  • Futuristic, techno-like synth patches

This form of southern rap is so far-reaching that virtually every Top 40 artist uses claps, from New York to New Orleans.


Next time you’re composing a track, remember that the samples you use play an enormous role in defining that song’s style and determine the demographic to which your music appeals.


*Note: There are several notable exceptions to this rule right now:


One is Jamie Foxx’s “Blame It,” which utilizes a combination of both a clap and a snare on beats 2 and 4. The tune has reached #6 on iTunes and #1 on the Billboard Hip-Hop chart.


TI’s “Live Your Life” feat. Rihanna uses a snare on the backbeat, but it has a syncopated snare pattern too, which gives the song a kind of majestic, almost military band sound.


Perhaps the most complete exception is “Swagga Like Us,” with only a snare on beats 2 and 4. This choice was undoubtedly very conscious–because of the gravity of this collaboration, they were able to use an unusual instrumental and be perceived as innovative rather than out of touch. I think that it was inspired by the movie Drumline based on the feel established by the kick drum pattern.

How Do I Sound Like The Knife?

Posted by Fix Your Mix on March 26, 2009

In 2006, The Knife’s Silent Shout was received with near universal acclaim. Pitchfork, who honored the Swedish duo with the title Album of the Year, recently hyperbolized that the siblings had created a masterwork that “arguably sounded like nothing before it.”  Indie rock critics’ penchant for overstatement aside, the group does have a distinct sound—one that peculiarly hasn’t been co-opted by imitators at large.

 

Perhaps it speaks to the reverence hipsters have for their perceived groundbreakers, or maybe it just means that the gear they use is too obscure to reproduce.  If voice transformers were as ubiquitous as Auto-Tune, would we be hearing The Knife pull a T-Pain on MTV complaining that they had been swagga-jacked?  Well in honor of the release of Karin Dreijer Andersson’s new solo project, Fever Ray, I’ll demystify some of her and her brother’s sonic magic.

 

Very rarely is a band’s unique sonic character defined by a single effect, but honestly, all songwriting and execution aside, there isn’t much that is wholly distinctive about the group insofar as sounds are concerned.  The beats aren’t revolutionary and could very well have come from any can of prefab loops.  The synth sounds are fairly generic and not treated in any inventive new way.  The album itself is fairly quiet by today’s standards, perhaps attributable to a Swedish mastering job.  The swelling synthesizer in “Silent Shout” actually pops out quite sharply and distinctly from the rest of the mix, showing that the tune is not overly compressed.  The tracks are immersed in several very artificial-sounding reverbs, but that is not uncommon for electronica tracks.

 

Really, the only thing that grabs me about this group from a sonic perspective is the haunting vocal timbre.  Layers of vocals with pitch-shifting, formant-altering effects contribute to this ethereal tone as provided primarily by the TC-Helicon VoiceLive.  This handy little box allows the user to input a source program (instrument, microphone, etc.) and alter the pitch up to an octave in either direction, adjust the formants, loop, and add reverb.

 

Throughout the record, the sub-octave is the most used effect, although parallel 4ths and 5ths are occasionally audible and the super-octave is mixed in for flavor.  In some instances, the “MIX BALANCE” fader is all the way up to 100% effect output such that Karin’s natural tones are inaudible.  Other times they are mixed in tastefully with the dry vocal track.  The sheer prevalence of this effect contributes to its conspicuous absence in the tracks where the vocals are unaffected.  Songs such as “Keep the Streets Empty” sound all the more stark and vulnerable, whereas the effected tracks have more body, presence, and strength.
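You can get a rough, in-the-box approximation of that sub-octave layer without a VoiceLive.  The sketch below uses librosa’s generic pitch shifter (which, unlike the VoiceLive, does not preserve formants); the file name and the 30% wet blend are arbitrary placeholders, not the record’s actual settings.

```python
import librosa
import soundfile as sf

# Load a (hypothetical) dry vocal track.
vocal, sr = librosa.load("dry_vocal.wav", sr=None, mono=True)

# Shift down a full octave (-12 semitones) for the sub-octave layer,
# then tuck it under the dry vocal. Pushing the blend to 100% wet hides
# the natural voice entirely, the way some tracks on the record do.
sub_octave = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=-12)
blend = 0.7 * vocal + 0.3 * sub_octave

sf.write("vocal_with_sub_octave.wav", blend, sr)
```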

 

One of the greatest tools in this gizmo is the formant filter.  Formants are intrinsic resonances from an acoustic sound source.  These in tandem with spectral content are what allow us to distinguish between human voices in the same range singing the same note or tell a Stradivarius from a Yamaha.  The formant filter allows the user to alter the sonic quality of the output, thereby creating the effect of different singers and thickness or various otherworldly sounds.

 

It isn’t immediately clear to me whether or not the looping functions were especially useful to either The Knife or Fever Ray since looping facilities were surely available in their DAW, but the applications are very intriguing for live performance.  An extant device that I like to use to a similar end is the Electroharmonix Microsynth.  It has some of the same facilities although it doesn’t allow you to loop.

 

The use of such a filter is not unprecedented. Apollo 440 used a similar device to similar effect back in 1990.  Brian Eno famously used formant filtering in his 2005 release entitled Another Day on Earth.  He has even accomplished similar ends with the famous suitcase ring-modulator that he has used throughout his storied career. So even though the predominant discriminator for both The Knife and Fever Ray is the vocal effect, and even though there isn’t much that is revolutionary about their instrumentals, something in their approach to songwriting is what led many critics to tout them as some of the most unique-sounding artists of our time.  With all this in mind, it is important to stress again that there is a marked difference between obtaining an artist’s sounds and sounding like that artist.
