Equalization brings the best out of the voice and leaves the detritus behind, once and for all.
In the struggle to make recorded music sound natural and pleasant to the human ear, engineers go to great lengths to remove some of the natural sound. Dramatic changes to the signal result in an almost superhuman sound that showcases the essence of the human voice. One of the last steps in this process is equalization.
When it comes to EQ, you can either boost or cut (attenuate) frequencies, thereby altering the overall characteristic of a signal. Before getting deep into the practices of EQing the voice, let’s cover the first rule of EQing.
Why didn’t we do this earlier in the process? The adjustments made with EQ might interact with the compressor and the de-esser. Keep the process as simple as possible and use EQ towards the end of the signal path.
Let’s start with some oft-quoted advice about EQ:
Subtract before you add.
A classic approach is to find a signal wanting in a particular frequency range and to boost that range. But apply boosts to every part of your mix and everything ends up boosted, and the mix gets out of control.
If you find yourself reaching for a boost, consider what can be cut from a signal instead. Often you can achieve the same effect by losing some other frequencies and pulling the fader up a little bit. This is a more transparent adjustment, and it leaves plenty of space for other instruments, processing, and the ears of the listener.
There are plenty of producers who don’t hold with this advice. These are engineers who know their way around a studio and the gear that resides in it. Over years of mixing, they develop an intuitive understanding and their own specific techniques. I think this advice gets thrown around so much because it’s easier to stay out of danger with subtractive EQ. Over time, you might rewrite this rule for yourself.
In your productions, you’ll probably spend a lot of time tweaking EQ. When you do, keep the second rule of EQ in mind.
Make deep cuts and shallow boosts.
Our ears notice boosts more than cuts. You’ll find that you can cut frequencies by 10 decibels or more and not feel a huge loss. On the other hand, big boosts (over 5 decibels) are quite noticeable. Sometimes they are quite offensive.
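To get a feel for why big boosts stand out, it helps to see what decibel changes mean in linear amplitude. Here is a quick sketch in Python (the helper name is my own):

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# A 10 dB cut leaves roughly a third of the original amplitude,
# while a 5 dB boost multiplies it by almost 1.8.
print(round(db_to_gain(-10), 3))  # ~0.316
print(round(db_to_gain(5), 3))    # ~1.778
```

Note that this is amplitude, not perceived loudness, but it gives a sense of how much more signal a 5 dB boost pushes into the mix compared to what a 10 dB cut takes away.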
When you’re changing the character of the voice, you need to make decisions about how the voice sounds on its own. We’ll quickly look at a standard approach to improving the sound of the voice.
Step 1: Reduce the low-end noise plaguing the signal. Just cut everything below about 100 Hz. The first time I did this, I was pretty skeptical, but I soon wondered why I had never noticed all the junk in that range. What you lose is low rumble and any bass frequencies secondary to the voice itself. A high-pass filter like this is crucial to removing mud from the signal. It’s an improvement to the voice and the mix as a whole.
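If you were building this high-pass filter in code instead of reaching for a DAW plugin, it might look like the sketch below. It uses the widely published RBJ Audio EQ Cookbook formulas for a second-order (biquad) high-pass; the function names and the 44100 Hz sample rate are my own assumptions for illustration.

```python
import cmath
import math

def highpass_biquad(f0, fs, q=0.707):
    """RBJ-cookbook high-pass biquad coefficients, normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [(1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2]
    a = [1 + alpha, -2 * cosw, 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_at(b, a, f, fs):
    """Linear gain of the filter at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = highpass_biquad(100, 44100)
# Rumble at 30 Hz is cut hard; the voice at 1 kHz passes essentially untouched.
print(round(magnitude_at(b, a, 30, 44100), 3))
print(round(magnitude_at(b, a, 1000, 44100), 3))
```

The 100 Hz corner here mirrors the cut described above: everything well below it falls away, while the body of the voice is left alone.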
Step 2: Add some air to the voice by slightly boosting the frequencies above 12 kHz. This choice depends strongly on the voice in question and the microphone it was recorded with. For instance, this is actually a good way to make some female voices sound worse; in those cases, rounding off the extreme highs a little may be in order. Male voices, on the other hand, generally benefit from some added high end. Remember: if it doesn’t sound good, don’t do it.
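The "air" boost is typically a high-shelf filter. Here is a sketch of one, again using the RBJ Audio EQ Cookbook formulas; the +3 dB amount, the function names, and the 44100 Hz sample rate are my own illustrative assumptions.

```python
import cmath
import math

def highshelf_biquad(f0, fs, gain_db, s=1.0):
    """RBJ-cookbook high-shelf biquad coefficients, normalized so a[0] == 1."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((amp + 1 / amp) * (1 / s - 1) + 2)
    cosw = math.cos(w0)
    sq = 2 * math.sqrt(amp) * alpha
    b = [amp * ((amp + 1) + (amp - 1) * cosw + sq),
         -2 * amp * ((amp - 1) + (amp + 1) * cosw),
         amp * ((amp + 1) + (amp - 1) * cosw - sq)]
    a = [(amp + 1) - (amp - 1) * cosw + sq,
         2 * ((amp - 1) - (amp + 1) * cosw),
         (amp + 1) - (amp - 1) * cosw - sq]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_db_at(b, a, f, fs):
    """Filter gain in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = highshelf_biquad(12000, 44100, 3.0)  # gentle +3 dB "air" shelf
print(round(gain_db_at(b, a, 1000, 44100), 2))   # midrange barely moves
print(round(gain_db_at(b, a, 20000, 44100), 2))  # near-full boost up top
```

Because it is a shelf rather than a bell, everything above the corner gets lifted together, which is what gives the voice that open, airy top without touching the midrange.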
Step 3: This one is a bit trickier and will test the accuracy of your ears. You’re going to use a narrow notch of EQ to remove an offensive mid-range frequency that sits somewhere in the 1-2 kHz area, depending on the voice. The classic way to do this is to make a dramatic boost in that range and slide it around until you isolate the most nasal sound in the voice. Then change that boost to a sharp cut.
I believe this practice stems from the use of less expensive dynamic microphones that have a mid-range boost in their frequency response. The Shure SM58 is a great example of such a mic. It’s one of the most-used vocal mics in the world, but it can bring out the most nasal frequencies in some singers.
Some engineers object to this technique because it’s a bizarre way to approach EQing. When you make that mid-frequency boost, every single frequency in this area sounds terrible, because you’re boosting it in isolation. I have to admit I have a hard time settling on a frequency with this technique. Each one begs for its own brand of destruction.
The technique is a good exercise in understanding where these frequencies sit but in the end it might be better to just slide a cut around in the mid range until you eliminate the offender. Either way, use your ears until you hear the best results. The cut might not need to be that drastic either. If it sounds good with 5 dB removed, do that.
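The sweep-a-cut version of the technique can be sketched in code as a narrow peaking (bell) filter stepped across candidate center frequencies. This uses the RBJ Audio EQ Cookbook peaking-EQ formulas; the -5 dB depth, Q of 4, candidate frequencies, and function names are my own illustrative assumptions.

```python
import cmath
import math

def peaking_biquad(f0, fs, gain_db, q):
    """RBJ-cookbook peaking EQ biquad coefficients, normalized so a[0] == 1."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [1 + alpha * amp, -2 * cosw, 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * cosw, 1 - alpha / amp]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_db_at(b, a, f, fs):
    """Filter gain in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# "Sweep" a narrow -5 dB cut across candidate nasal frequencies;
# in practice you would listen at each stop and keep the one that helps.
fs = 44100
for f0 in (1000, 1250, 1500, 1750, 2000):
    b, a = peaking_biquad(f0, fs, -5.0, q=4.0)
    print(f0, round(gain_db_at(b, a, f0, fs), 2))  # exactly -5 dB at each center
```

The code only confirms the cut lands where you aimed it; the actual judgment about which center frequency tames the nasal quality still belongs to your ears.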
There is another feature of EQ we haven’t mentioned: the bandwidth of the cut or boost, controlled by the Q (quality factor) setting. Higher Q values mean narrower bands. Lower, wider Qs sound a little more natural but lack surgical precision. Experiment with different Q values with both cuts and boosts.
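To make the effect of Q concrete, here is a sketch comparing how far the same boost spills into neighboring frequencies at two different Q values. It reuses the RBJ Audio EQ Cookbook peaking-EQ formulas; the +4 dB boost, the Q values, and the function names are my own illustrative assumptions.

```python
import cmath
import math

def peaking_biquad(f0, fs, gain_db, q):
    """RBJ-cookbook peaking EQ biquad coefficients, normalized so a[0] == 1."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [1 + alpha * amp, -2 * cosw, 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * cosw, 1 - alpha / amp]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_db_at(b, a, f, fs):
    """Filter gain in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# The same +4 dB boost at 1 kHz, measured one octave below center:
# a wide (low-Q) bell leaks into its neighbors, a narrow (high-Q) one barely does.
fs = 44100
for q in (0.7, 8.0):
    b, a = peaking_biquad(1000, fs, 4.0, q)
    print(q, round(gain_db_at(b, a, 500, fs), 2))
```

This is the trade-off in numbers: the wide bell sounds smoother but touches material an octave away, while the narrow one is surgical but can sound unnatural if pushed hard.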
These three steps are very useful for almost all vocals, but don’t take them as gospel. Do what’s right for each voice. And make sure you listen in context.
The context, of course, is the song. Try to make all EQ adjustments while listening to the rest of the song. A prime purpose of EQ is to make instruments and voices fit together; without hearing the ensemble, you risk making some poor judgments. EQ should enhance the voice and make it sit perfectly in the mix.
I recently relived some of those poor judgments of my own. I took a look at an old mix session and was a little embarrassed at the changes I had made. Every track, except the vocals, was peppered with excessive EQ adjustments and gratuitous use of multiband compression. I wish I could account for those decisions now.
After bypassing all those plugins, I was left with raw tracks that actually sounded better. I can only laugh and marvel at how far my ears have come since that session.
We’ve come a long way with these vocals. Let’s add a little space to them in Part 9.