5 Audio Recording Tips – Part 3: Cool Stuff About EQ


This is the third in a 3-part series sharing the five things I really wish I'd known about audio recording when I was a newbie.  The first one talked about stereo sound.  Last time I told you what EQ means.  This time I have 2 more tips that involve EQ.  It turns out that EQ is a pretty handy thing to know about.  So here are numbers 4 and 5.

4. Certain Sounds Can Always Be Found At the Same Place On The EQ Spectrum

Within that range of 20 Hz to 20 KHz (the “K” meaning “thousand”), all manner of fascinating things happen. For example, do you know where on that spectrum a baby’s cry lives? You got it: at the most annoying frequency there is, around 3KHz. I say “annoying” because it is the frequency we humans are MOST sensitive to. Some think it is a survival thing for our species, being able to hear a baby crying in the woods. But since we don’t usually have a “crying baby” solo in our audio recordings, why is this helpful? It tells us that everything we can hear vibrates around certain predictable frequencies. And that knowledge gives us access to some pretty cool superpowers.

For example, I know just where to find the signal of a bass guitar on the spectrum. It’ll be down around the low frequencies of 80-100 Hz. So will the kick drum. The electric guitar will usually be in the mid-range between 500 Hz and 1 KHz, along with keyboards, violas, acoustic guitars, and human voices. Clarinets, violins, and harmonicas tend to hang out mainly in the upper-mid range around 2-5 KHz, and things like cymbals, castanets, and tambourines like to be in the “highs” up around 6 KHz.
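If it helps to see those ballpark numbers gathered in one place, here’s a tiny Python sketch. The ranges (and the little helper function) are just my rough rules of thumb for illustration, not exact figures:

```python
# Rough "where it lives" map of the main energy of common sound sources.
# Ballpark ranges in Hz; real instruments spill well outside these numbers.
TYPICAL_RANGES_HZ = {
    "kick drum":       (60, 100),
    "bass guitar":     (80, 100),
    "electric guitar": (500, 1000),
    "human voice":     (500, 1000),
    "violin":          (2000, 5000),
    "cymbals":         (6000, 12000),
}

def who_lives_here(freq_hz):
    """List the sources whose typical range contains freq_hz."""
    return [name for name, (lo, hi) in TYPICAL_RANGES_HZ.items() if lo <= freq_hz <= hi]

print(who_lives_here(90))    # ['kick drum', 'bass guitar']
print(who_lives_here(3000))  # ['violin']
```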

Just ignore the fact that, in a range that tops out at 20KHz, we call things at 7 or 8 KHz “highs.” It’s because we hear the spectrum logarithmically rather than linearly: every octave up doubles the frequency, so most of the “musical distance” is packed into the lower numbers. If that makes your brain hurt just ignore it. It’s sometimes better that way.
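If you’re curious why 8 KHz still counts as “high” when the dial goes to 20 KHz, here’s a quick Python sketch that measures frequencies in octaves (one octave = one doubling) instead of plain Hz, using the usual 20 Hz-20 KHz hearing range:

```python
import math

LOW, HIGH = 20, 20000  # rough limits of human hearing, in Hz

def octaves_above_low(freq_hz):
    """How many octaves freq_hz sits above 20 Hz (one octave = one doubling)."""
    return math.log2(freq_hz / LOW)

print(round(octaves_above_low(HIGH), 1))   # ~10.0 octaves in the whole hearing range
print(round(octaves_above_low(8000), 1))   # ~8.6 -> 8 KHz is about 86% of the way up
print(round(8000 / HIGH * 100))            # 40 -> but only 40% of the way up on a linear scale
```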

Now that we know where specific things live in the range of hearing, we can adjust volumes at JUST those frequencies without affecting the rest of the audio.


Knowledge of this “range of human hearing,” and of how to use (or NOT use) an EQ, will come in handy more often than almost anything else you learn about recording. Once we know how to quickly find where a sound sits on the EQ spectrum, we can surgically enhance, remove, or otherwise shape sounds at JUST their own frequencies, without affecting other sounds at other frequencies.

But how in the world can we adjust volume at just one narrow frequency, say 100 Hz, without also changing the volume at all the other frequencies? Hmm, wasn’t there some discussion about a thing called an “EQ” that had a whole bunch of sliders on it? Could it be that those sliders were located at specific frequencies, and could turn the volume up or down just at those frequencies without affecting the rest of the sound? Why, yes. It could be. Now you know.
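In recording software, each of those “sliders” is really just a filter centered on one frequency. Here’s a minimal Python sketch of a single peaking-EQ band (the standard audio-EQ-cookbook biquad), assuming NumPy and SciPy are installed; the function name and its parameters are my own invention, not from any particular plugin:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, freq_hz, gain_db, q=1.0, sr=44100):
    """Boost or cut x by gain_db around freq_hz, leaving other frequencies mostly alone.
    Standard peaking-EQ biquad coefficients (RBJ audio EQ cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq_hz / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Example: knock 6 dB off a track at 100 Hz (where bass and kick fight for space)
# without touching the mids or highs.
sr = 44100
t = np.arange(sr) / sr
track = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
quieter_lows = peaking_eq(track, freq_hz=100, gain_db=-6, sr=sr)
```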

5. Mixing With EQ Instead of Volume Controls

Once you know where the frequencies of certain instruments are likely to live, you can use an EQ to prevent those sounds from stepping all over each other in a mix and sounding like a jumbled mess, with the bass guitar covering the sound of the kick drum, or the keyboard drowning out the guitar. “But you wouldn’t need to use an EQ to do this if you use a ‘multi-track’ recorder, right?” That’s what recording engineers and producers usually use to record bands and other musical acts, so each instrument and voice is recorded onto its very own “track.” Since every sound has its own volume control, it seems obvious what to do if something is too loud or too quiet, right? Don’t you just use the mixer to turn the “too loud” track down, and vice versa? I mean, isn’t that what “mixing” means?

That’s what I used to think too. The answer is… “only sometimes.” For example, even after spending hours mixing a song one day, I simply could NOT hear the harmonies over the other instruments unless I turned them up so loud that they sounded way out of balance with the lead vocal. It was like a bad arcade game. There was simply no volume I could find for the harmonies that was “right.” They were either lost in the crowd of other sounds, or too loud in the mix.

Then I learned about the best use of EQ, which is to “shape” different sounds so that they don’t all have to live in the same over-crowded small car. Let’s say you have one really, really fat guy and one skinny guy trying to fit into the back seat of a Volkswagen Bug. There is only enough room for 2 average-sized people, and the fat guy takes up the space of both of those average people already. Somebody is going to be sitting on TOP of someone else! If the fat guy is sitting on the skinny guy, Jack Sprat disappears almost completely. If Jack sits on top of Fat Albert, he gets shoved into the ceiling with no way to put a seat belt on. It’s just all kinds of ugly no matter which way you shove ‘em in.

But if I had a “People Equalizer” (PE?), I could use it to “shape” Albert’s girth, scooping away fat until he fit nicely into one side of the seat, making plenty of room for Jack. Then if I wanted to, I could shape Jack a bit in the other direction, maybe adding some padding to his bony arse so he could sit more comfortably in his seat.

Jack just played the role of the “harmonies” from my earlier mixing disaster. Albert was the acoustic guitar. Just trying to “mix” the track volumes in my song was like moving Jack and Albert around in the back seat. There was no right answer. But knowing that skinny guys who sing harmony usually take up space primarily between 500 and 3,000 Hz, while fat guitar players can take up a huge space between 100 and 5,000 Hz, I could afford to slim the guitar down by scooping some of it out between, say, 1-2 KHz, and then push the harmonies through the hole I just made by boosting their EQ in the same spot (1-2 KHz). Nobody would be able to tell that there was any piece of the guitar sound missing, because there was so much of it left that it could still be heard just fine. But now, so could the harmonies… because we gave them their own space! And we did all this without even touching the volume controls on the mixer. So it turns out the EQ does have its uses!
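If you’d like to see the Jack-and-Albert trick in rough code, here’s a crude Python illustration that uses an FFT as a stand-in for a real EQ. The “tracks” are made-up sine waves and the band_gain helper is only for illustration; a real mix would use gentler filters like the peaking EQ sketched earlier:

```python
import numpy as np

SR = 44100  # assumed sample rate

def band_gain(x, lo_hz, hi_hz, gain_db, sr=SR):
    """Crudely raise or lower the level of x between lo_hz and hi_hz (FFT 'EQ', illustration only)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    spectrum[band] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spectrum, n=len(x))

# Stand-ins for the wide "Albert" guitar and the narrow "Jack" harmonies.
t = np.arange(SR) / SR
guitar = 0.4 * np.sin(2 * np.pi * 300 * t) + 0.4 * np.sin(2 * np.pi * 1500 * t)
harmonies = 0.3 * np.sin(2 * np.pi * 1500 * t)

# Scoop a little guitar out between 1 and 2 KHz, then push the harmonies into the hole.
guitar_slimmed = band_gain(guitar, 1000, 2000, gain_db=-4)
harmonies_lifted = band_gain(harmonies, 1000, 2000, gain_db=+3)

mix = guitar_slimmed + harmonies_lifted  # the faders never moved; only the EQ did
```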

So those are the “big five” as I see it (#1 started in the first post in this series). If I had read an article like this when I first started down the path of audio recording, my learning curve could probably have been shortened by a decade or so! I hope some young would-be recording engineers out there can benefit from this article the way I could not.

Learn more home recording tips at Home Brew Audio
