Mixing Part 1: The tools.
There are a number of ways to skin a cat. The same can be said of mixing. Both will leave you with blood on your hands and a very upset girlfriend. But not for the same reason. Mixing leaves you open to accusations of not having any time for your loved one, and always having the thousand yard stare of a war veteran. Skinning your girlfriend’s cat leaves you open to accusations of horrible brutality.
Luckily I’ve only gone down one of those paths. So without further ado, here is my guide to flaying domesticated mammals:
Only kidding. We’re talking about mixing here. More specifically, the tools that get it done. But you’re going to need to pay attention, and if you don’t understand what I’m saying, ask, or you’ll have to repeat the class.
Mixing is literally that: taking more than one sound and mixing them together. But if that were the end of it, I suspect it wouldn’t be a thing at all, any more than opening the drawer to get a teabag is a named part of “making a cup of tea”. As it is, for a mix to sound good it has to fulfil some very difficult criteria. It has to be clear in its intent. It generally has to have balanced spectral content: that is, the relative amounts of low frequencies and high frequencies. Too much or too little of any frequency range makes the mix sound amateurish. Most of all it has to be true to the intention of the song. It has to somehow evoke the dynamic contrasts between sections, the tension and release of parts, and the balance of instruments as the musicians intended. It doesn’t have to be based on reality, but it does have to make you believe it’s happening. We all know a human voice isn’t as loud as a drum kit or cranked guitar amp, but when we hear songs we don’t care, because it’s made to sound believable.
Each part that I’ve recorded gets its own track. Some parts get more than one track if I’ve used more than one mic, as mentioned in previous instalments of “Josh drinks whisky and talks about recording”. Each of these tracks can be raised or lowered in volume, and panned anywhere between the left and right speaker, like so: (turn up your speakers so you can clearly hear what I’m doing!)
Mixes tend to need more than just those volume and panning adjustments to sound good, though, unless it’s a simple mix or has been recorded in the most expert way imaginable. And I promise I’ve not done that. Mixes also have to cater for the deficiencies of the human ear. We think it’s a great organ, and in some ways it is – I’ve read that if it were much more sensitive we’d actually be able to hear the effect of air molecules vibrating against it in Brownian motion. So in some ways it’s as high fidelity as it’s possible to be in air. It can detect an incredible range of frequencies and process them into something we understand as sound rather than just a bunch of vibrations in the fluid that surrounds us (yep, air is a fluid!).
But ears are also totally shit. Like, f*cking blind to sound.
If you’re listening to an instrument that has lots of bass, and there’s another instrument that also has lots of bass but is quieter, you probably won’t hear the bass from the other instrument at all. You’ll just hear a muddy noise that gets in the way of you being able to hear what’s happening. This masking happens in time too. A loud sound will mask a quiet sound even if the quiet sound happens a split second before the loud sound. The result of this masking is audio confusion: Instruments that should by rights sound clear, that sound great on their own, will somehow vanish without a trace into the mix, leaving only a sense of congestion and lack of clarity.
But we have tools to combat this. Clever little tools. The most powerful of these is the EQ (short for equalisation). It’s one of the first effects they ever made, because they needed it. With EQ we can filter out frequencies we don’t want to hear or wouldn’t hear anyway, add frequencies where there are gaps in the mix to help a sound cut through, get rid of bad sounds and emphasise good ones. Sometimes I think of mixing as being like trying to push a bunch of big plasticine shapes onto a little pane of glass, and having to somehow change the shapes to make them all fit while still keeping them recognisable. Sorry if that’s a stupid analogy, I genuinely imagine this when I’m mixing!
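To make that plasticine-squashing idea a bit more concrete, here’s a toy low-cut filter in Python. This is entirely my own illustrative sketch (a textbook first-order high-pass, not code from any real EQ plugin), but it shows the simplest EQ move there is: rolling off everything below a chosen frequency so one instrument’s rumble stops smothering another’s.

```python
# Toy one-pole low-cut (high-pass) filter: attenuates content below
# roughly cutoff_hz while letting higher frequencies through.
# Illustrative sketch only; real EQs chain several fancier filters.
import math

def low_cut(samples, cutoff_hz, sample_rate):
    """Return samples with content below roughly cutoff_hz attenuated."""
    # Coefficient derived from the analogue RC high-pass this mimics.
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_in = prev_out = 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)  # classic first-order high-pass
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant signal (0 Hz "rumble") should be almost entirely removed:
dc = [1.0] * 48000
filtered = low_cut(dc, cutoff_hz=100, sample_rate=48000)
print(abs(filtered[-1]))  # close to 0: the sustained low content is gone
```

Feed the same filter a rapidly alternating signal and it passes through almost untouched, which is exactly the “keep the top, lose the mud” behaviour you reach for when two bass-heavy parts are fighting.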
The next most powerful tool is compression. It’s a mysterious tool that takes years to understand, let alone master. At its simplest, it compresses the volume range of whatever you put into it – the loud and quiet bits come out more even in volume than they were before. This is handy. A good live band can go from whisper quiet to roaring, and that sounds great live, but it wouldn’t work on a recording as you drive along in your car or listen on the bus: make it loud enough to hear the quiet bits and the loud bits would destroy you; make it quiet enough that the loud bits are fine and you wouldn’t hear the quiet bits. Almost all recordings, even classical ones, have compression for this reason. And used sparingly, we don’t even notice, because we expect to hear the loud and quiet bits clearly, and our ears actually compress by themselves at high volumes.
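The volume-squashing itself is just arithmetic. Here’s a sketch of the “gain curve” at the heart of it, using two controls Josh hasn’t mentioned yet but that nearly every compressor has alongside attack and release: a threshold (the level above which it starts working) and a ratio (how hard it squashes). The numbers and function name here are my own example, not any particular unit.

```python
# Toy compressor gain curve: levels above the threshold get scaled down
# by the ratio; anything below the threshold passes through untouched.
# Illustrative sketch only.
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Map an input level (in dB) to an output level (in dB)."""
    if input_db <= threshold_db:
        return input_db                  # quiet bits: unchanged
    over = input_db - threshold_db       # how far above the threshold we are
    return threshold_db + over / ratio   # the excess is squashed by the ratio

# A signal 20 dB over the threshold comes out only 5 dB over it:
print(compressed_level_db(0.0))    # -15.0
print(compressed_level_db(-30.0))  # -30.0: below threshold, untouched
```

So the roaring bits get pulled down towards the whispering bits, and you can then turn the whole thing up without the loud parts destroying anyone.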
Compression has more tricks to reveal. Weird little controls labelled “attack” and “release”. What do they do? Attack tells the compressor how long it should wait after it hears a loud sound before it actually reduces the volume of the loud bit. So if you set it to a second, the first second of any loud sound gets through unaffected before the compressor cottons on to you and ducks the volume down. In practice, a second is too long. Reduce it to between 20 and 60 milliseconds or so and you get this great loud and punchy spike at the beginning of each loud part, but then the compressor kicks in and keeps the rest of the volume manageable. That initial loud spike grabs the attention of your ears and makes them think “Oh! A loud bit! This is cool.” Except it’s not actually loud for the rest of the time. It’s just a trick.
The release knob tells the compressor how long after it’s stopped hearing a loud sound it should wait before it stops clamping down on the volume. This knob is really difficult to get right, mainly because even after years of mixing I often can’t tell the difference. But sometimes I can, and there’s usually a setting that “feels” good even if I couldn’t tell you exactly why I prefer it. But for a simple example, imagine Drummer boy is hitting the bass drum 4 times a second. If I set the release to more than a quarter of a second, the compressor isn’t releasing its grip on the volume by the time the next bass drum hit happens, so it will never give me the punchy attack I want: it’s operating too slowly.
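Josh’s quarter-second example works out neatly as arithmetic: four kick hits a second means 250 ms between hits. Modelling the release as a simple exponential let-go (again my own simplification – compressors specify release in different ways), you can see how much gain reduction is still hanging on when the next hit lands:

```python
# Toy model of a compressor's release: what fraction of the gain
# reduction is still applied when the next drum hit arrives, assuming
# a simple exponential release. Illustrative sketch only.
import math

def leftover_reduction(release_s, gap_s):
    """Fraction of the gain reduction remaining after gap_s seconds."""
    return math.exp(-gap_s / release_s)

gap = 0.25  # kick drum 4 times a second -> 250 ms between hits

# Release longer than the gap: still clamping hard when the next hit lands,
# so every hit after the first loses its punchy attack.
print(round(leftover_reduction(0.4, gap), 2))   # ~0.54
# Much shorter release: nearly fully recovered, punch restored.
print(round(leftover_reduction(0.05, gap), 2))  # ~0.01
```

That’s why a too-slow release flattens a fast kick pattern: the compressor never lets go long enough for the attack trick to work again.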
Compression is hard to get your head around. Again, turn this up and you’ll hear what’s happening better:
Really, EQ and compression are the two most powerful mix tools you’ve got – you can shape the sounds hugely with these two, and mixes have been done without anything else. There are other effects that can be pretty fun. Reverb is one, echo is another. Reverb is important – without it the sound is dead, and has no context. We’re not used to hearing no reverb. The first time I stood in a totally dead-sounding room was the weirdest thing I’ve experienced. People next to me sounded 10 metres away, yet I could hear the slightest whoosh of air from a closing door. Or a sphincter. No hiding in there.
So, those are the most important tools. Next week, I’ll talk through the actual mixing of one of the songs on the EP.
Hold on to your cats, it might be a wild ride.