In a rock song, you want powerful and up-front electric guitar – that’s what rock is all about, right? Unfortunately, electric guitars and voices usually occupy the same spot on the frequency spectrum, around 3 kHz, so the guitar often overpowers the vocal. Well, when we’re mixing, if we want to turn one thing down in relation to another (that’s what mixing is all about, right?), we can just move a slider on, say, the guitar track until it’s low enough that the vocal can be heard clearly. The only problem is that by the time you get a clear vocal all the way through the song, you’ve turned the guitar down too much for it to be punchy and powerful.
Of course you can use EQ to reduce the guitar’s energy at around 3 kHz, which will help. But what if that isn’t enough? It often isn’t. Well, that is where “ducking” comes in (which can be used for any and all instruments – not just guitar, of course). See my article on ducking here: What is Ducking In Audio Recording?, where I show you how to do ducking in Reaper recording software. This technique uses a compressor to automatically push down the level of the guitar (or any instrument track), but ONLY when the singer is singing. You could do this manually if you are very, very patient. I highlighted an article earlier this week showing you in a video how to do this in Making The Vocal Track Sit Well In The Mix.
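If you’re curious what ducking looks like under the hood, here is a minimal sketch in Python (assuming you have numpy installed; the track names, threshold, and attack/release times are made-up illustrations, not settings from Reaper or from the articles mentioned here). It follows the vocal’s level and pulls the guitar down whenever the singer is singing:

import numpy as np

def duck(guitar, vocal, sr, threshold_db=-30.0, reduction_db=-6.0,
         attack_ms=10.0, release_ms=200.0):
    """Lower the guitar by reduction_db whenever the vocal exceeds threshold_db."""
    # Envelope follower: rectify the vocal, then smooth it with separate
    # attack and release time constants (a simple one-pole filter).
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(vocal)
    level = 0.0
    for i, x in enumerate(np.abs(vocal)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Wherever the vocal envelope is over the threshold, cut the guitar.
    # (A real compressor ramps the gain smoothly; this sketch just switches it.)
    threshold = 10.0 ** (threshold_db / 20.0)
    gain = np.where(env > threshold, 10.0 ** (reduction_db / 20.0), 1.0)
    return guitar * gain

# Toy usage: two seconds of "guitar," with "vocal" present only in the middle.
sr = 44100
t = np.arange(2 * sr) / sr
guitar = 0.5 * np.sin(2 * np.pi * 220 * t)
vocal = np.zeros_like(t)
vocal[sr // 2 : 3 * sr // 2] = 0.3 * np.sin(2 * np.pi * 440 * t[:sr])
ducked_guitar = duck(guitar, vocal, sr)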
Here is an article by Björgvin Benediktsson, complete with audio examples, that talks about “side-chain” compression (the way you make ducking happen) to get the guitar out of the way of the vocal in a rock mix:
http://audio.tutsplus.com/tutorials/production/how-to-use-side-chain-compression-to-make-rock-guitars-stay-out-of-the-vocals-way/
Mixing Tips For Recorded Drums
Here are some tips for getting nice punchy drums in your recordings. In the title, I specifically say recorded drums because these tips by Björgvin Benediktsson talk about treating the overheads, which means the recorded sound of all the drums coming through a pair of microphones set up high and a good distance away from the drum kit, to capture the whole kit rather than any individual drum. If you didn’t record the drums, but rather used MIDI drums or some other form of virtual drumming in your mix, you likely won’t have any overheads to treat.
Of course, some computer drum programs out there are more sophisticated than others. The one that comes to mind for me is my new favorite drum program, Drum Experience (by Centipeak), which not only gives you samples of each drum at multiple velocities (each velocity is actually a different recorded sample), but also gives you separate microphone feeds as well. For example, not only do you get overhead mics for the kit, but you also get the option of turning the different mic feeds on or off for every drum! Simply amazing.
But I digress. This is supposed to be about what to do in your multi-track software (or console, if you’re old-school) to get a tight and punchy drum sound. The tips involve a sequence (such as treating the overheads first) and the types of effects to apply (mostly EQ and compression) to the overheads and the individual drums.
Read Björgvin’s post here: http://www.audio-issues.com/music-mixing/how-to-mix-drums-rockstar/
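I won’t reproduce Björgvin’s specific settings here, but to give you the flavor in code, here’s a minimal Python sketch (assuming numpy and scipy are installed) of one common overhead move: a high-pass (low-cut) filter, so the overheads carry the cymbals and “air” while the close mics supply the low end. The 300 Hz corner is an arbitrary example, not a rule:

from scipy.signal import butter, sosfilt

def highpass(track, sr, cutoff_hz=300.0, order=2):
    """Remove energy below cutoff_hz from a mono track."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, track)

# Hypothetical usage, where "overheads" is a mono numpy array:
# overheads = highpass(overheads, sr=44100)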
5 Audio Recording Tips – Part 3: Cool Stuff About EQ
This is the third in a 3-part series sharing the five things I really wish I’d known about audio recording when I was a newbie. The first one talked about stereo sound. Last time (in part 2) I told you what EQ means. This time I have 2 more tips that involve EQ. It turns out that EQ is a pretty handy thing to know about. So here are numbers 4 and 5 of the 5 things I wish I’d known.
4. Certain Types of Sounds Are Always At The Same Predictable Area of the EQ Spectrum
We humans can only hear sounds in the range of 20 Hz to 20 kHz (kHz = “kilohertz”). For example, a baby’s cry occurs predictably at the frequency we are most sensitive to, around 3 kHz.
This is likely a survival thing for us. It’s pretty important to be able to respond to the cries of our young. How is this relevant to modern audio recording? Well, it tells us that certain sounds – bass guitar, acoustic guitar, vocals, hi-hats, etc. – will pretty much always be in the same frequency areas.
Also, problems such as p-pops, saliva clicks, and sibilance (for vocals), as well as “muddiness” and bass problems (kick drum fighting the bass, etc.), all happen at the same predictable frequencies.
A bass guitar will be down around the low frequencies of 80-100 Hz. So will the kick drum. Knowing that helps you separate them by boosting one and cutting the other at nearby frequencies so they can both be heard.
The electric guitar will usually be in the mid-range, between 500 Hz and 1 kHz. So will keyboards, violas, acoustic guitars, and voices. So you can use certain EQ adjustments to separate those.
Clarinets, violins, and harmonicas tend to generate energy in the upper mid-range, around 2-5 kHz. And stuff like cymbals and tambourines will be in the “highs,” up around the 6 kHz area.
Now that we know where specific things live in the range of hearing, we can adjust volumes at JUST those frequencies without affecting the rest of the audio.
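To put those ballpark numbers in one place, here they are as a little Python dictionary (a rough cheat sheet in Hz; treat these as starting points for your ears, not hard rules, and note the upper ends are my own rounding):

INSTRUMENT_RANGES = {
    "kick drum":       (80, 100),
    "bass guitar":     (80, 100),
    "electric guitar": (500, 1000),
    "acoustic guitar": (500, 1000),
    "vocals":          (500, 3000),
    "clarinet/violin": (2000, 5000),
    "cymbals":         (6000, 16000),   # "~6 kHz and up" – upper end is a guess
}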
How Does This Help?
Knowledge of this “range of human hearing,” and of how to use (or NOT use) an EQ, will come in handy more often than almost any other knowledge. Once we know how to quickly find where a sound sits on the EQ spectrum, we can surgically enhance, remove, or otherwise shape sounds at JUST their own frequencies, without affecting other sounds at other frequencies.
But how in the world can we adjust volume in just one narrow frequency area, say 100 Hz, without also changing the volume at all the other frequencies? Hmm, wasn’t there some discussion about a thing called an “EQ” that had a whole bunch of sliders on it? Could it be that those sliders were located at specific frequencies, and could turn the volume up or down just at those frequencies without affecting the rest of the sound? Why, yes. It could be. Now you know.
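And for the code-minded, here’s what one of those “sliders” looks like as math, in a minimal Python sketch (numpy and scipy assumed). It’s a single peaking EQ band built from the widely used Audio EQ Cookbook biquad formulas; it boosts or cuts ONLY around center_hz and leaves the rest of the sound alone:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(track, sr, center_hz, gain_db, q=1.0):
    """One EQ 'slider': boost or cut by gain_db around center_hz only."""
    a_gain = 10.0 ** (gain_db / 40.0)        # cookbook amplitude term
    w0 = 2.0 * np.pi * center_hz / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], track)

# Hypothetical usage: cut 4 dB at 100 Hz on a mono numpy-array bass track.
# bass = peaking_eq(bass, sr=44100, center_hz=100.0, gain_db=-4.0)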
5. Mixing With EQ Instead of Volume Controls
Once you know where the frequencies of certain instruments are likely to live, you can use an EQ to prevent these sounds from stepping all over each other in a mix and sounding like a jumbled mess, with bass guitar covering the sound of a kick-drum, or the keyboard drowning out the guitar.
Since every different sound has its own volume control, it seems obvious what to do if something is too loud or too quiet, right? ‘With multitrack recording software, can’t you just simply turn the “too loud” track down, and vice versa? I mean, isn’t that what “mixing” means?’
That’s what I used to think too. The answer is… “only sometimes.” For example, even after spending hours mixing a song one day, I simply could NOT hear the harmonies over the other instruments unless I turned them up so loud that they sounded way out of balance with the lead vocal. It was like a bad arcade game. There was simply no volume I could find for the harmonies that was “right.” They were either lost in the crowd of other sounds, or too loud in the mix.
Then I learned about the best use of EQ, which is to “shape” different sounds so that they don’t live in the same over-crowded small car. Let’s say you have one really, really fat guy and one skinny guy trying to fit into the back seat of a Volkswagen Bug. There is only enough room for 2 average-sized people, and the fat guy takes up the space of both of those average people already. Somebody is going to be sitting on TOP of someone else! If the fat guy is sitting on the skinny guy, Jack Spratt disappears almost completely. If Jack sits on top of Fat Albert, he gets shoved into the ceiling and has no way to put a seat belt on. It’s just all kinds of ugly no matter which way you shove ‘em in.
But if I had a “People Equalizer” (PE?), I could use it to “shape” Albert’s girth, scooping away fat until he fit nicely into one side of the seat, making plenty of room for Jack. Then if I wanted to, I could shape Jack a bit in the other direction, maybe adding some padding to his bony arse so he could sit more comfortably in his seat. Jack just played the role of the “harmonies” from my earlier mixing disaster. Albert was the acoustic guitar. Just trying to “mix” the track volumes in my song was like moving Jack and Albert around in the back seat.
There was no right answer. But knowing that skinny guys who sing harmony usually take up space primarily between 500 and 3,000 Hz, while fat guitar players can take up a huge space between 100 and 5,000 Hz, I can afford to slim the guitar down by scooping some of it out between, say, 1-2 kHz, and then push the harmonies through that hole I just made by boosting them with EQ in the same spot (1-2 kHz).
Nobody would be able to tell that there was any piece of the guitar sound missing, because there was so much of it left over that it could still be heard just fine. But now, so can the harmonies… because we gave them their own space! And we did all this without even touching the volume controls on the mixer. So it turns out the EQ does have its uses!
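In code form, tip #5 boils down to a matched cut and boost at the same spot. This little sketch reuses the hypothetical peaking_eq function from the earlier sketch (the 1.5 kHz center and the 4 dB / 3 dB amounts are illustrative guesses, not magic numbers):

def make_room(crowded, buried, sr, center_hz=1500.0, q=0.8):
    """Scoop a hole in the crowded track and push the buried one through it."""
    carved = peaking_eq(crowded, sr, center_hz, gain_db=-4.0, q=q)  # slim Albert
    lifted = peaking_eq(buried, sr, center_hz, gain_db=+3.0, q=q)   # pad Jack
    return carved, lifted

# Hypothetical usage with mono numpy arrays:
# guitar, harmonies = make_room(guitar, harmonies, sr=44100)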
So those are the “big five” as I see them (#1 started in the first post in this series). If I had read an article like this when I first started down the path of audio recording, my learning curve could probably have been shortened by a decade or so! I hope some young would-be recording engineers out there can benefit from this article the way I could not.
Mixing With No EQ?
In an ideal world, you wouldn’t have to use EQ when mixing music. Not only would all the sounds be perfect, but they wouldn’t overlap with any of the other sounds. Of course, this is not a perfect world, and at the very least you’ll just about always need some EQ (removing energy at certain frequencies is typically better than adding it) to help two sounds that overlap heavily in the same space on the frequency spectrum (piano, guitar, and vocals, for instance) come through the mix. Other ways to make this happen include panning and, of course, track volume. Usually the best mixes use all of these techniques and more.
Here is an article by Björgvin Benediktsson that has some tips on how to mix guitar and vocals without using EQ:
http://www.audio-issues.com/music-mixing/how-to-mix-the-guitar-and-vocals-together-without-using-any-eq/
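Since panning is one of those no-EQ tools, here’s a minimal sketch of it in Python (numpy assumed; the pan positions in the usage comments are made up). It uses constant-power panning, which keeps a sound at the same apparent loudness as it moves between the speakers:

import numpy as np

def pan(mono, position):
    """Return (left, right) for a pan position from -1.0 (left) to +1.0 (right)."""
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return mono * np.cos(angle), mono * np.sin(angle)

# Hypothetical usage: nudge the guitar left and the harmonies right so
# they stop fighting for the center of the stereo field.
# gtr_l, gtr_r = pan(guitar, -0.3)
# har_l, har_r = pan(harmonies, +0.3)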
5 Audio Recording Tips – Part 2: Recording Loud and EQ
This is Part 2 in the series 5 Audio Recording Tips For Newbies, where I tell you about the 5 things I really wish I had known about audio when I first started recording it. This way you don’t have to wait for years to find out about them! In the last article (which was also the first :)) I talked about stereo. Yes, even in this day of surround sound and 5.1 or 7.whatever, stereo is still very important.
In this post I want to tell you about tips 2 and 3, recording a good strong signal and understanding the seemingly baffling concept of EQ (short for “equalization” which is just about as confusing a moniker as it can be for what it describes).
2. Record As Loud As You Possibly Can
Noise, like the tools in your garage that you haven’t used since last century, will be with us always. Technically, noise is any sound in your recording OTHER than the thing you tried to record. It could be a bird outside your window, computer fans, lawnmowers, or all the electrical junk that is a fact of life when you use electric stuff. Generally speaking, the less noise, the better your recording. OK, so why do I wish someone had told me this at the beginning? I mean, it’s obvious, isn’t it? As with so many things, “knowing about it” and “knowing what to do about it” do not always go hand in hand.
A LOT of programs are available for reducing noise in audio recordings. But the truth is, once the noise is already in your recording, it’ll be very difficult to get rid of. Allow me to give you an example. Podcaster Pete goes to lots of trade shows and records interviews on his iPod to publish on his website monthly. Before publishing, however, he sends his raw audio to an “audio guy” to have it turned into something publishable. But when said guy opens the audio and sees the signal on the computer, he is amazed by how quiet the recording is! It barely even registers on the playback meters. In order to even hear it at a reasonable volume, he has to increase the volume of the entire file, noise and all. Then he hears the voices talking, but against a backdrop of god-awful hiss and crackle. The only way forward now is to use digital noise-reduction tools to try and filter out the hiss as much as possible, which leads to a passably listenable interview, but one that sounds a little like it was recorded underwater – a typical artifact of noise reduction.
You can only do so much to FIX a noisy recording after the fact. So what could Podcaster Pete have done to make a supremely better recording to start with? Some might say, “dude, just stop recording with that inexpensive mic going into your iPod!” Sure, he could buy a “digital field recorder” for between $200 and $400. Or he could find a way to make the recording LOUDER in the first place by feeding his iPod more signal, which is a faster and cheaper solution. My guess is that Pete wanted to avoid being TOO loud and overloading the input (a good idea!), but in doing so, he erred on the side of recording at too low a volume. THAT is almost as bad, since you have to crank the volume on both the tiny signal AND the noise in order to hear anything.
The bottom line is this: The best way to fight noise is to limit how much of it gets into your recording in the first place. In order to get the best quality possible when recording, make sure you feed the recorder a loud enough signal. But you have to be careful. Too loud, and you’ll get distortion; too quiet, and you’ll get too much noise.
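Here’s the arithmetic behind Pete’s problem as a tiny Python sketch (the dB numbers are made-up examples, not measurements). The point: the gap between signal and noise is locked in the moment you hit record, and turning things up later raises both together:

signal_db = -40.0   # Pete's too-quiet interview level, in dBFS
noise_db = -60.0    # the recorder's noise floor, in dBFS

snr = signal_db - noise_db                   # signal-to-noise ratio: 20 dB
boost = 28.0                                 # turning the whole file up later...
print(signal_db + boost, noise_db + boost)   # -12.0 -32.0: the noise came along
print("SNR after boosting:", snr, "dB")      # still only 20 dB

hot_signal_db = -12.0                        # what recording hotter buys you
print("SNR recorded hot:", hot_signal_db - noise_db, "dB")   # 48 dB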
3. EQ…That Thing You Never Knew Quite How To Use
[Image: EQ Frequency Slider Smiley Face]
Have you ever seen one of these things? They used to be common in “entertainment systems.” Right along with your CD player, cassette player, record player, amplifier, and “tuner” (meaning… “radio”) would be this other big boxy thing with nothing but a row of like 30 vertical (meaning “uppy-downy”) slider controls across the face of it. These sliders had little square button-like thingies that you could slide up or down. They usually started out in the middle (at the “zero” mark). About the only thing they seemed good for was making funny patterns, like smiley faces or mountain ranges, by moving the sliders up or down in the right way. Besides maybe making you feel better by having your stereo “smile” at you, I really don’t think anybody ever knew what to do with one of these things. With a scary name like “graphic equalizer,” it sounded so important. It also sounded like a good name for a rated-R action movie, but that’s another story. Anyway, you had to make everyone think you knew why you had one, so you pretended to know what it did. But in reality, you felt safer just leaving it alone to sit there with its straight row of slider buttons right down the center, the way it was the day you brought it home because it came with all the other stuff.
You are probably familiar with some kinds of EQ without realizing it. You know those controls on your music player labeled “bass” and “treble”? That’s a crude EQ! If your graphic EQ box had only 2 little sliders on it, it would be the same thing. One control makes the sound “bassier” (the low sounds) and the other makes it “treblier” (the high sounds). I always laugh when I see someone turn both controls all the way up or down. They have accomplished absolutely nothing that the volume knob wouldn’t do. If both (all) the sliders are up, you just turned the radio up. Congratulations. An EQ is only useful if you can make shapes OTHER than straight lines with the sliders.
So now you’re wondering what the heck an EQ IS good for, aren’t you? Well for one thing, it turns out that our ears lie to us! Can you believe it? I know! It’s crazy, right? Every human has lying ears. I was floored when I first heard. See, it turns out that there are all kinds of sounds out there in the world that we can’t hear! As a matter of fact, MOST sounds are inaudible to humans. The only sounds humans CAN hear are those in the range between 20 hertz (abbreviated as “Hz”) and 20,000 Hz. A “hertz” is a measure of how often (or “frequently”) something shakes back and forth (or “vibrates”) in one second. Sound is just energy that makes air particles shake back and forth. When these air particles vibrate with a frequency of between 20 times and 20-thousand times per second, it makes a sound that is in the “range of human hearing.” Though if you’re 21 or older, good luck hearing things above 16,000 Hz ;) – that’s where the so-called “teen buzz” or “mosquito frequency” lives. Confused? Ask a teenager.
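If you want to hear a “hertz” for yourself, here’s a tiny Python sketch (assuming numpy and scipy are installed) that writes a one-second tone to a WAV file. Set freq to 440, then try 15,000 or 16,000 and see how your ears do:

import numpy as np
from scipy.io import wavfile

sr = 44100                  # samples per second
freq = 440.0                # vibrations per second, i.e. hertz
t = np.arange(sr) / sr      # one second's worth of sample times
tone = 0.3 * np.sin(2 * np.pi * freq * t)
wavfile.write("tone.wav", sr, tone.astype(np.float32))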
That’s part 2 of the series. In part 3 we’ll reveal some wacky-but-true stuff about EQ.