Delays in audio recording, whether inserted between a sound and the reverberation that follows it, or between two otherwise very similar or identical sounds, are useful for a number of reasons. And all of those reasons come down to the way our brains figure out which direction a sound is coming from, and how far away it is.
Thank Mother Nature for that. These abilities help us survive in the world, and understanding how our brains interpret sound location and distance can also help us shape our audio productions in the studio so the sounds come across as richer and more meaningful to the listener.
The tips in the article referenced below are mainly focused on how we sense the front-to-back placement of sound sources. If you add these tricks to the other ways of separating sounds side-to-side (using the panning controls on a track) and even bottom-to-top (using EQ to separate sounds by their high and low frequency content), you can start to create awesome mixes.
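To make the side-to-side part concrete, here is a minimal sketch of an equal-power pan, assuming Python with NumPy and a mono signal; this is just one common pan law for illustration, not how any particular DAW's pan knob is implemented.

import numpy as np

def equal_power_pan(mono, pan):
    # pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    # Map the pan position onto a quarter circle so the left/right gains
    # always sum to constant power (cos^2 + sin^2 = 1).
    angle = (pan + 1.0) * np.pi / 4.0
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.column_stack([left, right])   # (samples, 2) stereo array

# Example: a one-second 220 Hz tone panned halfway to the right.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
stereo = equal_power_pan(tone, pan=0.5)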
In Des’ article here: http://www.hometracked.com/2008/03/04/using-delays-for-3d-sound-placement/, he talks about these things and even provides a few audio examples. In one comparison, he makes a drum kit appear either closer or farther away by inserting a delay between the direct sound of the drums and the reverb that follows (the longer the gap before the reverb arrives, the closer the drums appear).
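If you like seeing ideas like this in code, here is a rough sketch of the pre-delay trick, assuming Python with NumPy/SciPy and a mono signal. The decaying-noise "reverb" is a crude stand-in for a real reverb plugin, and the 40 ms pre-delay is just an illustration value, not a recommendation from Des' article.

import numpy as np
from scipy.signal import fftconvolve

def add_reverb_with_predelay(dry, sr, predelay_ms=40.0, decay_s=1.2, wet_gain=0.3):
    # Crude stand-in for a reverb tail: exponentially decaying noise.
    tail_len = int(decay_s * sr)
    tail = np.random.randn(tail_len) * np.exp(-np.arange(tail_len) / (0.25 * sr))

    wet = fftconvolve(dry, tail)[: len(dry)]
    wet /= np.max(np.abs(wet)) + 1e-12       # keep the wet signal tame

    # The pre-delay: push the reverb onset later than the direct sound.
    # A longer gap tends to make the source feel closer to the listener.
    out = dry.astype(float).copy()
    offset = int(predelay_ms / 1000.0 * sr)
    out[offset:] += wet_gain * wet[: len(dry) - offset]
    return out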
Have fun fooling other people’s neurons to make audio sound better!
Should You Use A Click Track?
I always use some form of tempo guide, like a click track, when recording music. For one thing, it leaves open the option to add MIDI drums or percussion later in the project. It also makes it practical to copy and paste sections, say a guitar passage that had a buzz or a screw-up in one spot. If the whole song is locked to the same beats per minute (BPM), it’s a lot easier to copy a clean section (the same guitar chord without the buzz or screw-up) and paste it into another part of the song. And if you plan to send your part of a recording to a drummer who lives in another state, it sure helps him or her lay down the drums after the fact if the song has a steady and consistent tempo.
So for me the only question is whether I want to use my recording software’s built-in click track (I use Reaper as my main DAW), which is the tiny beep-bop sound made by the metronome, or put a MIDI drum part onto a separate track. It’s easier and faster to just turn on the metronome, but it’s hard for some people to “groove” to that mechanical clip-clop sound of a click track. So I prefer to load up a MIDI drum kit, enter a kick, snare and hi-hat pattern, and paint that across the track for the duration of the song (or more accurately, my estimate of the duration, since I use the click track or drum track to record the very first thing in the song).
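If you would rather script that kick/snare/hi-hat pattern than paint it in by hand, here is a small sketch using the mido library (my choice for the example; Reaper doesn't require it) that writes four bars of a basic beat to a MIDI file you could drop onto a track. The note numbers follow the General MIDI drum map, and the tempo and length are placeholders.

from mido import Message, MetaMessage, MidiFile, MidiTrack, bpm2tempo

KICK, SNARE, HAT = 36, 38, 42            # General MIDI drum map notes
BPM, BARS, PPQ = 120, 4, 480             # placeholder tempo and length

mid = MidiFile(ticks_per_beat=PPQ)
track = MidiTrack()
mid.tracks.append(track)
track.append(MetaMessage('set_tempo', tempo=bpm2tempo(BPM), time=0))

eighth = PPQ // 2                        # ticks per eighth note
gap = 0                                  # delta time carried into each step
for step in range(BARS * 8):             # eight eighth-notes per bar of 4/4
    beat = (step // 2) % 4               # which quarter note of the bar
    notes = [HAT]                        # hi-hat on every eighth note
    if step % 2 == 0:                    # on the quarter-note beats...
        notes.append(KICK if beat in (0, 2) else SNARE)
    for i, n in enumerate(notes):        # note-ons; the first carries the wait
        track.append(Message('note_on', channel=9, note=n, velocity=100,
                             time=gap if i == 0 else 0))
    for i, n in enumerate(notes):        # short hits, 60 ticks long
        track.append(Message('note_off', channel=9, note=n, velocity=0,
                             time=60 if i == 0 else 0))
    gap = eighth - 60                    # rest of the eighth before the next hit

mid.save('drum_guide.mid')               # import this onto a track in your DAW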
But some folks don’t like to use any kind of rhythm or tempo guide when recording. Some even detest the sound of the built-in click track and simply cannot follow it. Others may say that a song recorded to a click track sounds too mechanical, lacking the human variations in tempo. For some drummers, it may even seem an outright insult to suggest they can’t maintain a steady beat for the duration of a song. That’s OK too. There is no rule that says you have to use a click track. In fact, if you have an entire band recording all at once (as opposed to recording piecemeal, with each band member adding their part at a different time), skipping the click is often preferable. Recording everyone together makes things easier, and it removes the need for two of the advantages I mentioned in the first paragraph (adding drums or percussion later and/or sending a track to a drummer after the fact).
I suspect anyone you ask about whether they prefer to record to a click track (or even a MIDI drum guide track) will have their own opinion, and it’s probably split 50/50. But I’d really like to know how you feel about it! Let me (and the world) know your preference by leaving a comment below. “Click Track: Yes or No?”
3 Audio Mixing Habits To Avoid
I recently read an article with tips on a few very common mindset traps you should avoid. As I read through them, I realized I have fallen into all of them at various times, and it never ends well.
The first bad habit is assuming your tracks will sound better once they are mixed with the other tracks. Now, I don’t actually think this is 100 percent wrong all the time. But in general, you should make each individual track sound as good as possible and not count on flaws being less noticeable once they are in a crowd of other sounds.
One exception that comes to mind, though, is how individual tracks, when soloed out of a mix, may not sound great on their own after you have applied EQ and other effects to let all of the instruments be heard in the mix. For example, it is not uncommon to scoop out a band of frequencies from an acoustic guitar to let a piano poke through the mix better. The parts of the guitar that sound good blended with the piano may be the high parts, while the piano fills the middle frequencies to provide a balanced mix. But when soloed, the guitar might be missing a bunch of the middle frequencies that the piano is now providing, making it sound too high and thin all by itself. In this case, the guitar really WOULD sound better in the mix. There are lots of examples like this all over the frequency spectrum, so keep that in mind when avoiding this particular bad habit.
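For the curious, here is roughly what that mid scoop looks like in code: a sketch assuming Python with SciPy and a mono guitar track already loaded as a NumPy array. It uses a standard peaking-EQ biquad (from the well-known RBJ Audio EQ Cookbook); the 350 Hz center, -6 dB cut, and Q of 1 are arbitrary illustration values, not a recipe.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, sr, f0=350.0, gain_db=-6.0, q=1.0):
    # One peaking-EQ band (RBJ cookbook biquad). A negative gain_db
    # scoops a band around f0, here the low mids of a guitar,
    # leaving room for a piano to sit in that range.
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], signal)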
The next bad habit is assuming your mix will sound better once it’s mastered. While that might turn out to be true, you should never count on it. In fact, don’t think this way at all. Mix your songs as if your mix is exactly what the public will hear. Then all you have to do is leave enough headroom for a mastering engineer to do his or her job and make it even better.
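If you want a quick way to sanity-check that headroom, a tiny sketch like this (assuming Python with NumPy and the soundfile library, and a made-up filename) reports how far your loudest peak sits below 0 dBFS; the usual advice of leaving several dB is a rule of thumb, not a law.

import numpy as np
import soundfile as sf

data, sr = sf.read('my_mix.wav')         # hypothetical exported mix
peak = np.max(np.abs(data))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float('-inf')
print(f"Peak level: {peak_dbfs:.2f} dBFS (headroom: {-peak_dbfs:.2f} dB)")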
The last bad habit is assuming everyone will listen to your mix in stereo. The idea here is that when all your tracks are panned out, they may sound great. But when “folded” to the center, things might not play well with each other, with frequencies cancelling each other out or combining to become too loud, or both. So it is often recommended that you test your mixes in mono to check for these phase problems. On the other hand, I think in this day and age most folks really will be listening in stereo. I also remember reading an article a few years ago in Recording Magazine by a producer who said he didn’t care (and no, his name was not “Honey Badger” :)). He said if someone listens to his mix in mono and it sounds all whacked out, so much the better; it might be interesting to the person who dared not listen in stereo. Of course, I paraphrase.
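Here is a rough way to do that mono check in code, again a sketch assuming Python with NumPy/soundfile and a made-up stereo file name. Losing a few dB when you fold to mono is normal for wide material, but a big drop suggests something in the mix is cancelling itself out.

import numpy as np
import soundfile as sf

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

data, sr = sf.read('my_mix.wav')         # hypothetical stereo mix
left, right = data[:, 0], data[:, 1]
mono = 0.5 * (left + right)              # simple fold to the center

drop = max(rms_db(left), rms_db(right)) - rms_db(mono)
print(f"The mono fold is {drop:.1f} dB quieter than the louder channel.")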
Anyway, read Graham’s article about these three habits here: http://therecordingrevolution.com/2012/08/27/3-dangerous-assumptions-in-the-studio/
Reverb Effect Has Many Uses In A Music Mix
Most people know what reverb sounds like at its most basic. All by itself on a single track it can make a voice sound like it’s in an empty gymnasium, a cavern, a bathroom, or any number of different kinds of spaces. When I was in college I used to play guitar and sing in the stairwell because of the reverb sound in there. Weird Al recorded some of his early pieces (back in the Dr. Demento days) in a bathroom for its natural reverb.
My wife (Lisa Theriot) does not like it when I put too much reverb on her voice in a recording. When I first started recording, I used to put it on everything. I used too much because I liked how it sounded, but I ended up making the mixes sound muddy and, well, reverby. As usual, my wife was right: too much reverb can mess things up. It’s actually a common beginner mistake to overuse just about every effect you can get your hands on, especially reverb and compression.
But there are other uses for reverb than making a voice or instrument sound like it was recorded in a concert hall (or canyon, or whatever). Used in a bunch of different subtle ways, it can help move certain sounds around in a mix, making them appear to come from farther away or closer, or from the left or right. It can give some sounds more space and blend multiple sounds together so they appear more amorphous.
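As one small example of that kind of subtle use, here is a sketch of a convolution-reverb send with two different wet/dry balances, assuming Python with NumPy/SciPy/soundfile, mono files at the same sample rate, and made-up file names; more wet and less dry tends to push a sound farther back, and the reverse pulls it forward.

import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf

dry, sr = sf.read('vocal_dry.wav')       # hypothetical dry vocal (mono)
ir, _ = sf.read('room_ir.wav')           # hypothetical room impulse response (mono, same rate)

wet = fftconvolve(dry, ir)[: len(dry)]   # convolution reverb
wet /= np.max(np.abs(wet)) + 1e-12

far = 0.5 * dry + 0.5 * wet              # more reverb: the voice sits farther back
near = 0.9 * dry + 0.1 * wet             # mostly dry: the voice sits up front
sf.write('vocal_far.wav', far, sr)
sf.write('vocal_near.wav', near, sr)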
In this article, Adrian calls reverb “the most essential” effect. Find out why by reading the entire article here: http://audio.tutsplus.com/articles/general/why-reverb-is-the-most-essential-effect-in-your-toolkit/
Digging Into Digital Recording
In case you didn’t already know this: if we are recording audio into our computers, we’re doing digital recording. This is as opposed to analog audio recording, which is another way of saying the way we did it before computers (and I count a “digital recorder” as a form of computer) came around. Before we had easy access to those things, we had to record audio to tape. And before that, we had to record it directly to wire, vinyl, or wax.
I usually try to keep explanations of such things as digital recording as simple as possible, like recording into a microphone that is plugged into a computer. I might offer metaphors like when the Master Control Program digitized Jeff Bridges in Tron, but I try not to wallow in the technical mire.
Here are a couple of my posts on the topic, which should help explain things in a more understandable way than usual (that’s my goal, anyway; let me know in the comments if I succeed!):
16-Bit Audio Recording – What The Heck Does It Mean?
A Common Misconception About Bit Depth In Digital Audio
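If you want a small numeric taste of what those posts get into, here is a hedged sketch in Python with NumPy: it quantizes a full-scale sine wave to 16 bits, roughly the way a converter would, and compares the measured signal-to-noise ratio against the textbook estimate of about 6 dB per bit.

import numpy as np

bits = 16
levels = 2 ** bits                       # 65,536 possible sample values
sr = 44100
signal = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

# Quantize to 16-bit steps and back: the heart of what "16-bit audio" means.
scale = levels / 2 - 1
quantized = np.round(signal * scale) / scale
noise = signal - quantized

snr_db = 20 * np.log10(np.sqrt(np.mean(signal ** 2)) / np.sqrt(np.mean(noise ** 2)))
print(f"Measured SNR: {snr_db:.1f} dB (textbook: {6.02 * bits + 1.76:.1f} dB)")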
However, there are folks who really like to wallow in the technical mire. For them I offer this article, which digs into the science of digital recording. It’s actually an excerpt from the book “The Science Of Sound Recording” by Jay Kadis.
Check it out here: http://www.prosoundweb.com/article/the_science_of_sound_recording_part_1/
Enjoy!
Ken