When doing multi-track music recording, you obviously must mix all the instruments and voices together at the end so that everything is heard and plays well together. The challenge in mixing is to make sure many different sounds are audible in the right context and in the correct space – both left-to-right as well as front-to-back. One common rule-of-thumb is “never mix with headphones.” More specifically, it means you should not mix with ONLY headphones. You get a much more accurate picture (odd word to use for audio, right?) of what’s going on if you can hear the mix through loudspeakers (monitors), so the sound is coming through the air before it hits your ears.
However, if your space – typically a converted bedroom when we are talking about home recording studios – is less than ideal, mixing with loudspeakers creates its own problems. Reflections off the walls, ceiling, floor, etc. bounce around and create inaccuracies in what you ultimately hear (constructive and destructive wave interference are the culprits, in case you remember your wave mechanics from school). So there are times when it is advisable to do your mixing with both headphones and loudspeakers.
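If you want to see that interference idea in action, here is a toy Python sketch – not real room acoustics, just a sine wave summed with a single delayed copy of itself, standing in for one wall reflection. The sample rate, delay, and test frequencies are made-up illustration values:

```python
import math

SAMPLE_RATE = 48_000  # samples per second (assumed for this demo)

def tone(freq_hz, num_samples):
    """Generate a pure sine tone as a list of float samples."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(num_samples)]

def with_reflection(signal, delay_samples, reflection_gain=1.0):
    """Mix a signal with a delayed copy of itself (one 'wall reflection')."""
    out = list(signal)
    for n in range(delay_samples, len(signal)):
        out[n] += reflection_gain * signal[n - delay_samples]
    return out

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

# A reflection arriving 1 ms late (48 samples at 48 kHz).
delay = 48

# 500 Hz: 1 ms is half a period -> destructive interference (cancellation).
# 1000 Hz: 1 ms is a full period -> constructive interference (boost).
ratios = {}
for freq in (500, 1000):
    dry = tone(freq, SAMPLE_RATE)  # one second of audio
    wet = with_reflection(dry, delay)
    ratios[freq] = rms(wet) / rms(dry)
    print(freq, "Hz level change:", round(ratios[freq], 2))
```

The same delay that nearly erases 500 Hz doubles 1000 Hz – that is exactly the kind of frequency-dependent inaccuracy a bad room bakes into what you hear at the mix position.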
Björgvin Benediktsson has some tips for this in his post over at Audio Issues. Check them out here: http://www.audio-issues.com/music-mixing/mixing-with-headphones-use-this-one-trick-for-better-translation/#axzz2ceYlFcI2
Auxiliary Sends For Effects
I wrote about this topic once before in the post – Using Auxiliary Sends For Effects In Pro Tools. That article references a video from Wink Sound showing you how to use auxiliary sends in Pro Tools to process effects on a bass guitar track – as opposed to simply adding (instantiating is the word usually used here, meaning creating an “instance” of an effect) each effect directly onto the bass track.
Well, I wanted to add some things to what I said in that article. First, I stated that the main reason for using an auxiliary send was to save processing power on your computer. That is still a major benefit if you have multiple tracks all dipping their buckets (my metaphor) into the same effects (say, reverb) trough (a metaphor for the auxiliary track with the effect on it). However, it is not the only benefit of using an auxiliary send/track for processing some effects.
As shown in the example in the video, the main benefit is the extra control you have by processing the effect independently of the dry (meaning with no effects) audio from the bass track. In the video, Mike creates a new track – the auxiliary track – and puts a chorus effect on it. Then he uses something called a “send,” on the bass track, which basically just taps/siphons/splits off the audio signal so you can send it somewhere else, while the main bass audio continues to the master output as normal. The “send” is routed to the auxiliary track to be treated by the chorus effect.
So far, there isn’t much of an advantage to doing this as opposed to just sticking (instantiating) the chorus effect on the bass track, other than from an organization standpoint where you have both a dry and a wet signal to work with. But then Mike talks about adding an EQ to the aux track (which already has a chorus effect on it) in order to filter the chorus so that the chorus only happens in a certain frequency range. You wouldn’t be able to do that if you just slapped both the EQ and the chorus effects onto the bass track.
So to sum up (ha! a little send/bus/aux pun for you), using a “send” from an audio track to process effects has multiple advantages. It allows you to create a common effect track that you can send multiple tracks to in order to share the same effect (BTW, you use a “bus,” as explained in the video, to accept multiple track sends for summing. A track, like an auxiliary track, cannot usually accept inputs from more than one source. So you set that single source as the bus). And it also gives you the flexibility to process effects differently than you could if they were all just plugged into (instantiated on) the audio track.
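To make the routing concrete, here is a minimal Python sketch of the signal flow described above. The sample numbers and the “chorus”/“EQ” functions are made-up stand-ins, not real DSP – the point is only the routing: two tracks tap sends into a shared bus, one aux track processes the effect chain, and the master sums the dry tracks with the aux return:

```python
def apply_chain(signal, effects):
    """Run a signal through a list of effect functions, in order."""
    for fx in effects:
        signal = fx(signal)
    return signal

def mix(*signals):
    """Sum several equal-length signals (what a bus or the master fader does)."""
    return [sum(samples) for samples in zip(*signals)]

def chorus(signal, depth=0.5, delay=3):
    """Toy 'chorus': blend in a slightly delayed copy (a real chorus modulates the delay)."""
    return [s + depth * signal[max(i - delay, 0)] for i, s in enumerate(signal)]

def low_pass_eq(signal):
    """Toy 'EQ': a 2-point moving average, which rolls off highs."""
    return [(s + signal[max(i - 1, 0)]) / 2 for i, s in enumerate(signal)]

# Dry tracks (just toy numbers standing in for audio samples).
bass   = [0.5, 0.3, -0.2, -0.4, 0.1]
guitar = [0.2, -0.1, 0.3, 0.0, -0.2]

SEND_LEVEL = 0.7  # how much of each track is tapped off to the aux

# The bus sums the sends from both tracks...
bus = mix([s * SEND_LEVEL for s in bass],
          [s * SEND_LEVEL for s in guitar])

# ...and the aux track runs the shared effect chain on that summed send,
# independently of the dry audio (e.g. EQ-ing only the chorused signal):
aux_out = apply_chain(bus, [chorus, low_pass_eq])

# Master output: both dry tracks plus the processed aux return.
master = mix(bass, guitar, aux_out)
print([round(s, 3) for s in master])
```

Notice that the dry `bass` and `guitar` lists reach the master untouched – only the tapped-off copy gets effected, which is the extra control the video is demonstrating.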
Below is that video I keep talking about. It uses Pro Tools, but the steps are essentially the same in any digital audio workstation (DAW). However, things can get more flexible, and consequently more complex, in computer mixers like you have in DAWs. For example, in Reaper, tracks can also act as busses! That means tracks CAN accept multiple sends from other tracks. So you can set up a track as an effects bus, putting the effects on that track and sort of combining the aux and bus into one. That way you wouldn’t have to route the send from an audio track first to a bus, and then to the aux track. Yeah, I know. That’s a bit confusing, even to me, to read it. I’ll write an article just on sends, busses, and auxiliary tracks to help it make sense.
Anyway, here is the video. I promise this time. :)
Cheers!
Bit-Crushing: Distorting Digital Audio On Purpose
Usually when I talk about digital audio and things like bit-depth (see our post – 16-Bit Audio Recording – What The Heck Does It Mean?) and sampling frequency (see our post – What Is Sampling Frequency?), the presumed goal is that you want your audio to be as clear and clean (free from noise) as possible. But believe it or not, there are times, especially in modern electronic dance music (EDM), when you may WANT to do the opposite. That is, you want a nasty, distorted audio sound as an effect.
In the analog days, distortion was created when physical devices – amps, tubes, or other components in the signal chain – were overloaded. That is how you get that rock and roll “power chord” sound. You did it on purpose. In the digital world, things are a bit (ha!) different. You can get distortion by lowering things like bit-depth and sampling frequency (counterintuitive, since for “good audio” you’d think higher is always better). By doing that, you are changing the shape of the audio waveform from smoothly curved to “squared off” or jagged. It has more to do with digital audio conversion than with physical gear.
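Here is a rough Python sketch of that idea – a toy bit-crusher that snaps each sample to a coarser grid (bit-depth reduction) and holds each value across several samples (sample-rate reduction). The frequency and parameter values are just illustrations:

```python
import math

def bitcrush(signal, bits=8, downsample=4):
    """Toy bit-crusher: reduce bit depth and sample rate for a lo-fi sound.

    signal: float samples in [-1.0, 1.0].
    bits: target bit depth (fewer bits = coarser, more 'squared off').
    downsample: hold each value for this many samples (lower effective rate).
    """
    levels = 2 ** (bits - 1)  # quantization steps per polarity
    out = []
    held = 0.0
    for i, s in enumerate(signal):
        if i % downsample == 0:
            # Bit-depth reduction: snap the sample to the nearest grid step.
            held = round(s * levels) / levels
        # Sample-rate reduction: keep repeating the held value.
        out.append(held)
    return out

# A 440 Hz sine at 44.1 kHz, crushed down hard (4-bit, 1/8th the sample rate).
sine = [math.sin(2 * math.pi * 440 * n / 44_100) for n in range(64)]
crushed = bitcrush(sine, bits=4, downsample=8)
print([round(v, 3) for v in crushed[:16]])
```

The smooth sine comes out as a staircase of repeated, coarsely rounded values – that jagged shape is the distortion you hear.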
That digital audio distortion and “lo-fi” sound is often sought in electronic music, and one of the common methods is called “bit-crushing.” That basically means, for example, taking something down from 16-bit to 8-bit. Here is an article that explains that in a bit more detail:
http://www.musicradar.com/us/tuition/tech/distortion-saturation-and-bitcrushing-explained-549516
Tips For Recording Virtual Electric Guitar
Virtual instruments are a favorite topic of mine. Sampling and modeling technology makes it possible to play and record instruments that sound real – ARE, in fact, real in many ways – without having to have the actual instrument in your studio. And never mind the advantage – in certain cases – of not having to learn to play that instrument in real life. I put violins into this category. I tried. I really did. But I never got nearly good enough to sound like the virtual violins I can trigger via MIDI and “play” with a keyboard. So there.
Electric guitar is another issue. I CAN play guitar. And I DO have an amp. But I almost never record “the normal way,” meaning with a mic pointing at the amp. I get to have tons of choices of different amps and sounds, with my amp simulator set-up, Pod Farm, from Line 6. I love it.
Here is an article with several tips on how to get the best results when recording an electric guitar without an actual amplifier. Check it out here: http://www.prosoundweb.com/article/six_tips_for_great_electric_guitars_without_amps/
Recording In A Poor Room
I just read an article offering tips on how to record good audio if you are stuck doing it in a poor room. My definition of a poor room is a rectangular room, usually a converted bedroom in your house. By definition a rectangular room is poor because of the way sound bounces around in one; you tend to get dead zones where certain frequencies are cancelled out, and other areas where certain frequencies get artificially boosted. Those things actually make the room worse for listening (important for mixing and mastering) than for recording. Probably the worst thing for recording is to have bare, parallel walls, ceiling, and floor, which make for lots of echoes and reverb (yeah, technically the same thing – shh!), which you don’t want in your recordings.
We already have an article on this topic here – How to Build a Home Recording Studio: Part 2 – Four Tips For Preventing Noise, and actually here as well – Recording Vocals In a Bedroom Studio. Preventing noise is really what it’s all about. And room echo is one category of noise.
Mattress
The tips in the article I mentioned started out by saying to use a mattress BEHIND the person recording (the assumption being that you are recording vocals). The logic was that the voice would bounce off the wall behind the mic, travel back past the vocalist, bounce off THAT wall, and then come back into the sensitive end of the mic. So if the mattress is behind the vocalist, the reflection off the front wall will be trapped by the mattress and not be able to bounce off the back wall to enter the sensitive side of the mic. OK, maybe. But if you put the mattress in front of the vocalist, behind the mic, the voice won’t reach the front wall, so there will be nothing to bounce off the back wall. The real truth is that it will depend on your vocalist, your room, and what kind of mattress you have. So if you decide to try the mattress method, be sure to try it both ways. Heck, why not try both? If you have multiple mattresses, you could build your own mattress vocal booth.
Add Plush Furniture
The next tip from that article is to add soft, plush furniture. This is really an extension of the mattress idea – more things to absorb the sound, the logic being that whatever gets absorbed can’t bounce off the walls. Again, this may or may not work. Sort of related to this is something I did once. We had a closet pole from which hung several wool cloaks (part of our medieval re-creation hobby). I positioned myself completely surrounded – almost covered – by wool cloaks. That eliminated outside noise and echo.
Use Dynamic Microphones
I strongly disagree with this one as a top tip for reducing noise. While it is true that dynamic mics offer less sensitivity than studio condensers, they are also usually (until you get into mics costing well over $300) not good enough for things like voice-over recording or lead vocals for music. The author states that the trade-off is worth it. Trust me on this one. Unless you are using one of the expensive dynamics, no it isn’t. I do voice-over work AND sing lead vocals, and I do it in a bedroom! See my articles above for the countermeasures I use. It is VERY possible to do it in a poor room. No, it isn’t ideal, but it definitely CAN be done.
Now before I start getting hate mail from owners of really good dynamic mics, let me say that it is ALSO possible to get excellent vocal sound from certain dynamic mics, such as the Electro-Voice RE20 and the Shure SM7b. But these mics are $449 and $349, respectively. So at the very least, I would qualify this particular tip by saying “if you can afford it, try a really good dynamic mic to help reduce room noise.” But if you are on a budget, you can get more bang for your buck with a large-diaphragm condenser mic, which can act as a great all-around mic. The two dynamics I mentioned are pretty much designed for vocal broadcast – for radio, podcast, voice-over, etc. And I highly recommend renting or borrowing one of these mics before buying, as it may not help much, if at all, with your particular room.
Use Close-Miking
This one I absolutely agree with. It’s in my list of the top things to do. In fact, I would say it should be the first thing you try. By getting your lips very close to the mic – like 2-to-4 inches – you help to sort of “crowd out” the other types of noise. Also, if you have a mic with a cardioid pick-up pattern, close-miking gives you increased low-frequency response (the “proximity effect”), which can be pleasing for voice-overs.
Like so many things in audio recording, there are very few rules that apply to everyone and every space. Try these things out and see what works best for your voice, your room, and your budget.
Happy recording!