Monday, October 22, 2012

A non-technical article on Mixing


Before You Record:

There are many ingredients to a good mix.  The sound of the final product is influenced by everything that produces the sound from the very beginning.  Consider:

  • The sound of the instruments themselves - You wouldn’t pick an Ibanez guitar through a Triple Rectifier for your country band.  You wouldn’t pick a Telecaster and a Twin Reverb for your metal project.  An entry-level guitar through a cheap practice amp will not sound as good as a well-appointed guitar rig.  A drum kit that is not properly tuned or a singer who squeaks notes out from their pinched little throat cannot be fixed by a great mix engineer.  The basic rule of recording is “crap in = crap out.”
  • Mic choice and mic placement - Mics have different designs and utilize different technologies.  Some are accurate.  Some have character.  Some are bright.  Some are warm.  Some are designed to bring out the depth of a kick drum from up close while also accentuating the click of the beater, while others are designed to capture the subtle nuances of instruments from a distance.
  • The recording environment - Some rooms sound spacious.  Some sound dry.  You know that hollow, boxy sound from your video camera when you recorded your friend singing in your basement?  You can’t EQ that out.
  • The performance itself - A great mix of a crappy performance still sounds crappy.  A mediocre mix of a great performance will still sound pretty good.

So, before you have even hit the “record” button, a lot of the sound of your mix is pre-determined by these factors.  Great musicians will give great performances and have great tones.  A great studio will have great rooms.  A great producer will know which microphones to use, where to put them, and how to use the preamps to help get the best possible tone from the microphones.


Okay... now MIX!

Part 1 - What’s the Frequency, Kenneth?

The human ear is designed to hear frequencies from 20 Hz (vibrations per second) to 20,000 Hz (also known as 20 kHz).  A veeery low bass rumble from a kicker box in the Honda Civic that goes booming past your house might be in the range of about 40 Hz.  The mosquito buzzing around your ear clocks in closer to 16 kHz.  We call these low and high frequencies.

Each instrument has a fundamental (the main pitch of the note being played), and then the overtones that help to shape the timbre and character of the note, which occur at much higher frequencies.  The open E string on a bass guitar is about 40 Hz.  If you had a pure tone at 40 Hz, it would not sound like a bass.  If you wanted to bring out the attack of the note, you would have to boost frequencies around 2 kHz.  Although the note that is being played is at a very low frequency, the “pluck” of the note is much higher.  If you want to make the bass guitar “stick out” more in the mix, it is not necessarily about boosting the bottom frequencies where the fundamentals are.  You could bring out the attack of the notes instead, making the bass much more noticeable.
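If you would rather see this than just read about it, here is a minimal sketch in Python (using numpy; the 1/n harmonic amplitudes and the 59-harmonic cutoff are invented purely for illustration, not measured from a real bass).  It builds a pure 40 Hz tone and a rough bass-like tone made of the same fundamental plus overtones, then checks how much energy each has up near 2 kHz:

```python
import numpy as np

fs = 44100                      # sample rate (samples per second)
t = np.arange(0, 1.0, 1 / fs)   # one second of time

# Pure tone: nothing but the 40 Hz fundamental.
pure = np.sin(2 * np.pi * 40 * t)

# Rough "bass-like" tone: the same fundamental plus decaying overtones.
# The 1/n amplitudes and the 59-harmonic cutoff are invented for illustration.
bass = np.zeros_like(t)
for n in range(1, 60):                        # harmonics at 40, 80, 120 ... Hz
    bass += (1.0 / n) * np.sin(2 * np.pi * 40 * n * t)

# Compare the spectra: the pure tone is a single spike at 40 Hz, while the
# bass-like tone still has energy up near 2 kHz (the 50th harmonic = 2000 Hz),
# which is the region you would boost to bring out the attack ("pluck").
freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum_pure = np.abs(np.fft.rfft(pure))
spectrum_bass = np.abs(np.fft.rfft(bass))
band = (freqs > 1900) & (freqs < 2100)
print("energy near 2 kHz, pure tone :", spectrum_pure[band].sum())
print("energy near 2 kHz, bass-like :", spectrum_bass[band].sum())
```

The pure tone has essentially nothing up there to boost; the bass-like tone does, which is why a boost around 2 kHz can bring out the “pluck” without touching the bottom end.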

Imagine an orchestra.  You have a wide range of instruments that, cumulatively, take up an enormous frequency range.  The double-basses, the tuba and the tympani occupy the bottom; the piccolos, cornets and chimes occupy the very top, and the other instruments occupy various ranges within the middle.  Now, imagine the basses and tubas only.  Now throw in a bassoon playing in its low register.  How well do you think you would hear it?  Obviously, not very well.  It is competing for that low-frequency space with two other instruments.  Have it play an octave higher, and it will be much easier to pick out, as it would have more of its own “space” to occupy where there is less “competition.”

The same thing happens in a mix.  If you want to bring out the “whoomp” of a kick drum by boosting the EQ at 40 Hz, and the “thud” of the bass by boosting the EQ at 40 Hz, you’ve got two things competing for that same frequency space.  A better solution would be to bring out the “thud” of the bass by boosting the EQ at 40 Hz, but then cutting out some of the 40 Hz of the kick drum, giving the bass more room to exist down there.  But what about the kick drum?  Instead, boost the click of the beater, which lives somewhere around 2 kHz.  There is no “competition” from the bass guitar there, and each instrument now has its own space in the frequency spectrum.  The bass is no longer hiding behind the kick drum, or vice-versa.  Both instruments can now be heard clearly.  There will also be a bit of an “auditory illusion” at work where, even though you are boosting the click of the beater on the kick drum, listeners will actually attend more to the lower frequency (the fundamental) of the kick drum, and hear it too... or at least they’ll think they will.
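Here is a minimal sketch of that carve in Python (numpy/scipy), using a standard peaking-EQ biquad in the style of the RBJ audio EQ cookbook; the specific boost and cut amounts are invented for illustration, not a recipe:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Peaking-EQ biquad (RBJ cookbook style): boost (gain_db > 0) or cut (< 0) around f0."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
bass_thud  = peaking_eq(fs, 40, +4)     # bass track: give it the "thud" at 40 Hz
kick_cut   = peaking_eq(fs, 40, -4)     # kick track: make room by cutting 40 Hz
kick_click = peaking_eq(fs, 2000, +4)   # kick track: bring out the beater click instead

# Check what each filter does at 40 Hz and at 2 kHz.
for name, (b, a) in [("bass boost ", bass_thud), ("kick cut   ", kick_cut), ("kick click ", kick_click)]:
    w, h = freqz(b, a, worN=[40, 2000], fs=fs)
    print(name, "gain at 40 Hz / 2 kHz (dB):", np.round(20 * np.log10(np.abs(h)), 1))
```

The kick and bass filters are complementary: where one boosts, the other gets out of the way.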

Use the same approach to “notch out” places for other instruments to exist within the frequency spectrum by cutting some of the top end from the guitars to make room for the vocals, or rolling the bottom off the guitars so that they don’t compete with the bass.

Part 2 - Louder than bombs

Imagine you’re in downtown Manhattan, surrounded by a huge density of very tall buildings.  Which one is the tallest?  Well, it can be hard to say.  From where you’re standing, they’re all pretty darn tall.  In fact, with that many of them at the height they are, does it really matter which one is tallest?  In a room full of giants, none of them look especially large.  Now, take one of those buildings and put it in a small town.  All of a sudden, that one building stands out like crazy.  It’s monstrous!  That’s right.  Everything is relative.  Not everything can be louder than everything else.  We judge volume in a mix not on its own merits, but by comparing it to the other elements in the mix.  If one thing is going to be very big, then you need to have things around it that are comparatively very small.

Imagine a metal band.  The drummer wants big, huge, crashing drums.  The guitarist wants a wall of massive guitars.  The bass player wants to rock the house with his crushing rhythm.  The singer just wants to be heard.  Everyone says, “turn me up.”  Obviously this won’t work.  If you listen to most metal recordings with those huge walls of guitars, you might be surprised, when you *really* listen, to find that the drums and bass aren’t nearly as loud as you thought they were.

Why not?   Answer = arrangement.  That big wall of guitars isn’t ear-assaultingly loud all the time.  If it were, it would lose its effect... like that one teacher we all had who did nothing but yell all the time.  After a while, it just becomes part of the landscape and you stop noticing.  Even measured against themselves, the guitars need “periods of smallness” in order to make those huge moments still seem huge.  So, when those guitars are getting small for a bit, there is a perfect opportunity to ramp up those toms and put in a nice bass run to not only remind the listener that the drums and bass are still there, but to convey a sense of how big they are.  The listener focuses on that big drum sound and the fat bass while the guitars are lying low, and then when the wall of guitars comes in, they sound huge again.  The listener just doesn’t notice (until he/she is trained to notice) that the drums and bass are now comparatively small again.

Part 3 - Space

The first of two types of space is the stereo field.  You have the right-left space represented by the right and left speakers.  We’ve all heard the fighter jet in the movies that starts in one speaker, seems to fly over to the other speaker, and finally fades off into the distance.  This is the stereo field.  If you imagine a live performance, you would traditionally have the musicians arranged on the stage in a way that makes sense for the musical presentation.  Often, we mix with that in mind.  Placing the lead singer in the left speaker would be just as strange as having the lead singer standing near the stage entrance by the curtain to sing while the rest of the band played on stage.

Another way of giving instruments their own space is to separate them not only by frequency space, but by stereo space.  Imagine two guitar players on a stage, with their amps stacked on top of each other.  It wouldn’t surprise you to find it difficult to tell which guitarist is playing which notes.  In fact, it might likely just sound like one guitar playing a jumble of notes.  But if you put one guitarist’s amp on the left side of the stage and the other’s on the right, it becomes much more obvious which guitarist is doing what.  The sounds still blend, but the effect of them being two independent parts is made much more distinct.  Because guitars and keyboards are often competing for frequency space with the lead vocals, moving those parts a little further right and left, while keeping the singer in the center, allows each instrument to have its own stereo space, with no competition from the other parts within that right-left spectrum.
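In a DAW this is just the pan knob, but as a minimal sketch (Python with numpy; the pan positions and the placeholder sine-wave “tracks” are made up for illustration), a constant-power pan law looks roughly like this:

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan: position -1 = hard left, 0 = center, +1 = hard right."""
    theta = (position + 1) * np.pi / 4        # map [-1, +1] onto [0, pi/2]
    return mono * np.cos(theta), mono * np.sin(theta)

# Placeholder "tracks" (in practice these would be recorded audio, not sine waves).
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
guitar_1 = np.sin(2 * np.pi * 220 * t)
guitar_2 = np.sin(2 * np.pi * 330 * t)
vocal    = np.sin(2 * np.pi * 440 * t)

# One guitar a bit left, the other a bit right, the singer dead center.
g1_L, g1_R = pan(guitar_1, -0.6)
g2_L, g2_R = pan(guitar_2, +0.6)
v_L,  v_R  = pan(vocal,     0.0)

left  = g1_L + g2_L + v_L
right = g1_R + g2_R + v_R
```

The two guitars still blend, but each now owns its own side of the stereo field while the vocal keeps the center.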


The other kind of space is depth.  Stages have the right/left dimensions, but also the front/back dimensions.   In an orchestra, the tympani and double-basses are in the back, while the flutes and violins are in the front.  How do you create the illusion of depth when mixing a track?  Two things:  reverb and EQ.

The more reverb you add to something, the further back it appears to be in the mix.  Consider the fact that, even in a cavernous gymnasium, if you are right in front of the sound source, you hear very little reverb.  Stand at the far end of that cavernous gymnasium with the instrument at the other far end, and you’ll hear tons of reverb.  The trick here is that, if you want to put the drums at the back of the stage in your mix, and you put reverb on the kick drum, the bottom end of your mix will often start to smear.  The solution is to add reverb to everything except the kick.  This also points out another common mistake in mixing - adding too much reverb to a vocal.  It gives the effect of the singer being placed at the back of the stage... or worse, in an entirely different room!

Consider this basic fact of acoustical physics:  Low-frequency notes have very long sound waves, and high-frequency notes have very short ones.  The farther a sound has to travel, the more of its high frequencies get lost along the way.  So, rolling off the top end of an instrument will help give it the illusion of being farther away.  By adding loads of reverb and rolling off lots of the top end, you can make something sound *really* far away!  Of course, in mixing, a little goes a long way.
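As a minimal sketch of that “push it to the back” trick (Python with numpy/scipy; the 4 kHz roll-off point, the decaying-noise “reverb,” and the blend amounts are all invented for illustration, a crude stand-in for a real reverb):

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

fs = 44100
dry = np.random.randn(fs)        # placeholder for one second of a recorded track

# 1. Lose the highs, as distance would: gentle low-pass around 4 kHz.
b, a = butter(2, 4000, btype="low", fs=fs)
darker = lfilter(b, a, dry)

# 2. Add "reverb": convolve with a 1.5-second exponentially decaying noise tail
#    (a crude stand-in for a real reverb, just to illustrate the idea).
n = int(1.5 * fs)
impulse_response = np.random.randn(n) * np.exp(-np.linspace(0, 6, n))
wet = fftconvolve(darker, impulse_response)[: len(dry)]
wet = wet / np.abs(wet).max()    # keep levels sane before blending

# Mostly wet + rolled-off highs reads as "far away"; mostly dry reads as "up close".
far_away = 0.3 * darker + 0.7 * wet
up_close = dry
```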

Part 4 - Putting it Together

Before you begin, you have a few goals.  First, determine what kind of sound you are looking for in the first place.  You might not be able to pick the performer or their instrument, but you can pick microphones and decide where to put them.  You also want to consider the question of whether you are recording a singer with a band, or a band with a singer.  The answer to this question will determine your approach to mixing, and might even impact your miking choices.

A singer with a band is usually approached from the top down.  You dial in the vocal sound, and then bring up the instruments to that place where they play appropriate supporting roles.  A band with a singer generally suggests the opposite.  You build the band from the bottom up - drums, bass, guitar, etc. - get the band sounding the way you like, and then introduce the vocals to that mix.  The distinction sounds subtle, but the differences in approach can produce radically different mixes.

You next need to decide which instruments and voices are going to be the “focal points” of the mix, and which others are going to play supporting roles.  Remember the “in a room full of giants” quote?  Right.  Not everyone can be a giant.   As a general rule, no matter who you decide gets a supporting role, you will risk getting someone’s nose out of joint.  The trick is to carve out spaces for everyone to shine at least here and there, as I have discussed above.

In any case, your goal is to have all parts heard as much as they need to be through a combination of carving out frequencies, and placing each thing in its own right/left “stereo” space and its own front/rear “depth” space.  You will need to make sacrifices and compromises.  You will need to take advantage of psycho-acoustic illusions.  Just because everything sounds great on its own doesn’t mean everything will sound great together.  You will find that sometimes, a part that fits perfectly into your mix really doesn’t sound that good at all when played by itself.  That’s okay.  It’s not about everyone being a star.  It’s about everything working together for a great production, with you being the director.

Part 5 - Mastering

Mastering has become one of the most misused words in audio.  Back in the day, when songs were cut to vinyl, the music would first be recorded to a tape machine, and then that recording would need to be transferred to a master disc, from which the vinyl records would be manufactured.  Mastering was the process of going from the tape to that “master.”  In order to do this properly, a few things became part of the process:

  • ordering the songs - There are lots of things to think about in determining the best order for your LP.
  • getting a consistent tone from one song to the next - This is done with EQ.
  • tops and tails - Beginnings and endings are edited and faded out so that they flow naturally and sound “right.”
  • compression - It was often found that the cutting stylus was literally bounced out of the groove of the master disc, thereby ruining the cutting process, whenever sudden transients (say, a sudden snare pop, or a cymbal crash, or an extra hard jab on the bass) occurred.  In order to tame these so that the cutter wouldn’t jump out, they used compression.

Nowadays, the word “mastering” is often used as a generic term for compressing the daylights out of a track to make it as loud as other loud-as-bombs commercial recordings.   Whatever floats your boat, or suits your purposes, I suppose.  We’ll save the “volume wars” discussion for another time, but in short, if you listen to an album from the 1970s and an album from the 2000s, you’ll find that the modern record is very, very loud compared to the classic album (for many people and purposes, there is a rough equation that suggests “louder = better”).  You’ll also find that the classic one has MUCH more dynamic range.  Why?  Compression.

Compression basically takes the loudest parts of your mix and the quietest parts of your mix and brings them closer together.  The best way to visualize that is this:  If you play a classic album from the ’60s or ’70s and watch the meters on your audio system, they will bounce up and down along most of the length of the meter along with the music.  The quiet parts are quiet and the loud parts are loud.  If you look at a waveform of a classic recording, you see obvious mountains and valleys.  If you play a modern recording and watch the meters, they will basically just flicker right around the maximum level, just short of distortion.  The loud parts are loud, and the quiet parts are... well... damned near as loud as the loud parts.  If you look at a waveform of a modern song, it looks somewhere between a fuzzy caterpillar and a big long brick.
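As a minimal sketch of that “bring them closer together” idea (Python with numpy; the threshold and ratio are invented for illustration, and a real compressor also has attack, release, and make-up gain stages):

```python
import numpy as np

def compress(level_db, threshold_db=-12.0, ratio=4.0):
    """Static compression curve: anything above the threshold is reduced by the ratio."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

# A mix whose levels swing from quiet (-30 dB) up to loud (-3 dB)...
levels = np.array([-30.0, -20.0, -12.0, -6.0, -3.0])
squashed = compress(levels)

print("before:", levels, "range:", levels.max() - levels.min(), "dB")
print("after: ", np.round(squashed, 2), "range:", round(squashed.max() - squashed.min(), 2), "dB")
# After make-up gain brings the whole thing back up, the quiet parts sit much
# closer to the loud parts - the meters barely move, and the waveform turns
# into that "fuzzy caterpillar".
```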

It’s a trade-off.  Louder songs on the radio or wherever get the listeners’ attention and sell more records.  People like to play their music loud, and the louder they play it, the more their bass slams and their highs soar.  Recordings where the dynamic range is preserved sound more natural, and usually have a lot more “punch” to them.  Take an older recording and turn up the volume so that it seems as loud as the newer recording, and many people will say that the older one now actually sounds much better than the newer one.

One artefact of compression is that it does seem to “level out” the mix a little more.  Because those quieter parts do, indeed, seem louder than they did before compressing the mix, little details like reverbs, delays, and subtle parts that were almost buried in the mix become more evident.  This can be both a blessing and a curse.  On the other hand, not compressing a mix leaves many listeners feeling like they are listening to an inferior product because it isn’t as loud as their other songs.  Balancing the two worlds can be very tricky.

Conclusion

It is almost impossible to tell someone how to EQ a guitar or what the best compression settings are for a snare drum because there are so many variables that any answer would be a vague ballpark at best.  I’ve also skipped over a number of technical things:  microphone polar patterns, limiting, side-chaining, the difference between inserts and aux buses, the difference between time-based effects and modulation effects, what the difference is between phase and polarity, why you really DO need proper monitors and not stereo speakers, and all sorts of other things.  My hope is that, with what I have written about here, no matter what your tools and no matter what your effects, you’ll be able to approach a mix with a practical way of looking at things and of helping each part find its own place in the space you are creating.  In the end, whatever sounds good IS good.  If you don’t get it good from the beginning, damage control is a frustrating inevitability.  If you get it good from the beginning, you can easily help it along and things will almost seem to mix themselves.

