Cakewalk Sonar is music-making software that grew from its humble beginnings as simply “Cakewalk” in the 1980s. Like many other recording programs, Cakewalk developed from MIDI-only software into Sonar, a full-fledged digital audio workstation (DAW). With its ability to record multi-track audio on par with the industry standard, Pro Tools, Sonar is definitely a power player in the audio recording arena. And since it started as a MIDI program, not surprisingly, it does MIDI extremely well too. Interestingly, other current DAWs (Adobe Audition and Reaper among them) developed in the opposite direction, audio first with MIDI added later, and as a result their MIDI functionality is not quite as strong as what Pro Tools and Sonar offer.
A site called The DAW Studio has a piece on Cakewalk Sonar.
To read the original article, click here: http://thedawstudio.com/Gear/DAW-Cakewalk-Sonar.html
Archives for August 2013
Mixing With Headphones
When doing multi-track music recording, you obviously must mix all the instruments and voices together at the end so that everything is audible and plays well together. The challenge in mixing is to place many different sounds in the right context and the correct space, both left-to-right and front-to-back. One common rule of thumb is “never mix with headphones.” More specifically, it means you should not mix with ONLY headphones. You get a much more accurate picture (odd word to use for audio, right?) of what’s going on if you can hear the mix through loudspeakers (monitors), so the sound travels through the air before it hits your ears.
However, if your space (typically a converted bedroom, when we are talking about home recording studios) is less than ideal, mixing with loudspeakers creates its own problems. Reflections off the walls, ceiling, floor, etc., bounce around and create inaccuracies in what you ultimately hear (constructive and destructive wave interference are the culprits, in case you remember your wave mechanics from school ;)). So there are times when it is advisable to do your mixing with both headphones and loudspeakers.
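If you want to see that interference idea in action, here is a tiny Python sketch (plain standard library, no audio gear required). It is purely an illustration, not a room-acoustics simulation: a “direct” tone is summed with a delayed copy of itself, standing in for a wall reflection. With the delay chosen as half a period, the two are out of phase and nearly cancel.

```python
# A tiny illustration of the wave interference mentioned above: a direct
# sound plus a delayed "reflection" can cancel (destructive) or reinforce
# (constructive) depending on the delay. Illustrative only.

import math

def tone(freq_hz, sample_rate, n, delay_samples=0):
    """Sine tone; delay_samples shifts it, like a reflection arriving late."""
    return [math.sin(2 * math.pi * freq_hz * (i - delay_samples) / sample_rate)
            if i >= delay_samples else 0.0
            for i in range(n)]

sr, n = 48000, 480
direct = tone(1000, sr, n)          # 1 kHz tone, direct path
half_cycle = sr // 1000 // 2        # 24 samples = half a period at 1 kHz
reflection = tone(1000, sr, n, delay_samples=half_cycle)

# What your ear (or mic) hears is the sum of direct sound and reflection.
summed = [d + r for d, r in zip(direct, reflection)]

# Once the reflection arrives, the two are out of phase and mostly cancel.
peak = max(abs(s) for s in summed[half_cycle:])
print(peak)  # essentially zero: that frequency "disappears" at this spot
```

Change the delay and some frequencies cancel while others reinforce, which is exactly why the same mix sounds different at different spots in an untreated room.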
Björgvin Benediktsson has some tips for this in his post over at Audio Issues. Check them out here: http://www.audio-issues.com/music-mixing/mixing-with-headphones-use-this-one-trick-for-better-translation/#axzz2ceYlFcI2
Auxiliary Sends For Effects
I wrote about this topic once before in the post – Using Auxiliary Sends For Effects In Pro Tools. That article references a video from Wink Sound showing you how to use auxiliary sends in Pro Tools to process effects on a bass guitar track – as opposed to simply adding (instantiating is the word usually used here, meaning creating an “instance” of an effect) each effect directly onto the bass track.
Well, I wanted to add some things to what I said in that article. First, I stated that the main reason for using an auxiliary send was to save processing power on your computer. That is still a major benefit if you have multiple tracks all dipping their buckets (my metaphor) into the same effects (say, reverb) trough (a metaphor for the auxiliary track with the effect on it). However, it is not the only benefit of using an auxiliary send/track for processing effects.
As shown in the example in the video, the main benefit is the extra control you have by processing the effect independently of the dry (meaning with no effects) audio from the bass track. In the video, Mike creates a new track, the auxiliary track, and puts a chorus effect on it. Then he uses something called a “send” on the bass track, which basically taps/siphons/splits off the audio signal so you can send it somewhere else, while the main bass audio continues to the master output as normal. The send is routed to the auxiliary track to be treated by the chorus effect.
So far, there isn’t much of an advantage to doing this over just sticking (instantiating) the chorus effect on the bass track, other than from an organizational standpoint, since you now have both a dry and a wet signal to work with. But then Mike talks about adding an EQ to the aux track (which already has the chorus effect on it) in order to filter the chorus so that it only happens in a certain frequency range. You wouldn’t be able to do that if you just slapped both the EQ and the chorus effects onto the bass track.
So to sum up (ha! a little send/bus/aux pun for you :)), using a “send” from an audio track to process effects has multiple advantages. It allows you to create a common effect track to which you can send multiple tracks so they all share the same effect (BTW, as explained in the video, you use a “bus” to accept multiple track sends for summing; tracks, like an auxiliary track, usually cannot accept inputs from more than one source, so you set that single bus as the aux track’s input). And it gives you the flexibility to process effects differently than you could if they were all just plugged into (instantiated on) the audio track.
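If it helps to see the routing laid out step by step, here is a minimal Python sketch of that signal flow. To be clear, this is not any DAW’s actual API; the function names (apply_chorus, apply_eq) are placeholders I made up, crude stand-ins for “some effect processing.” The point is the plumbing: dry tracks pass through untouched, sends tap copies onto a shared bus, the aux processes only the bus, and everything sums at the master.

```python
# A minimal sketch of send -> bus -> aux -> master routing.
# apply_chorus/apply_eq are made-up placeholder effects, not a real plugin API.

def apply_chorus(samples):
    """Placeholder 'chorus': mix each sample with a short delayed copy."""
    delay = 3
    return [s + 0.5 * (samples[i - delay] if i >= delay else 0.0)
            for i, s in enumerate(samples)]

def apply_eq(samples):
    """Placeholder 'EQ': a crude one-pole low-pass filter."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + 0.3 * (s - prev)
        out.append(prev)
    return out

# Two dry tracks (say, bass and guitar), as raw sample lists.
bass   = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
guitar = [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]

# Each track "sends" a COPY of its signal to a shared bus, at its own level.
# The dry tracks themselves are untouched by this tap.
send_levels = {"bass": 0.8, "guitar": 0.4}
bus = [send_levels["bass"] * b + send_levels["guitar"] * g
       for b, g in zip(bass, guitar)]

# The aux track processes ONLY the bus signal: chorus first, then EQ
# filtering the chorus (the trick from the video) -- the dry bass is unaffected.
aux = apply_eq(apply_chorus(bus))

# Master output = dry tracks + processed aux, summed sample by sample.
master = [b + g + a for b, g, a in zip(bass, guitar, aux)]
print(master)
```

Notice that putting the EQ after the chorus here filters only the wet signal; if you chained both effects directly on the bass track, the EQ would chew on the dry bass too.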
Below is the video I keep talking about. It uses Pro Tools, but the steps are essentially the same in any digital audio workstation (DAW). However, things can get more flexible, and consequently more complex, in the virtual mixers you have in DAWs. For example, in Reaper, tracks can also act as busses! That means tracks CAN accept multiple sends from other tracks. So you can set up a track as an effects bus, putting the effects on that track and sort of combining the aux and bus into one. That way you wouldn’t have to route the send from an audio track first to a bus, and then to the aux track. Yeah, I know. That’s a bit confusing, even to me, reading it back. I’ll write an article just on sends, busses, and auxiliary tracks to help it make sense.
Anyway, here is the video. I promise this time:).
Cheers!
Bit-Crushing: Distorting Digital Audio On Purpose
Usually when I talk about digital audio and things like bit depth (see our post – 16-Bit Audio Recording – What The Heck Does It Mean?) and sampling frequency (see our post – What Is Sampling Frequency?), the presumed goal is that you want your audio to be as clear and clean (free from noise) as possible. But believe it or not, there are times, especially in modern electronic dance music (EDM), when you may WANT to do the opposite. That is, you want a nasty, distorted sound as an effect.
In the analog days, distortion was created when physical devices, like amps, tubes, or other components in the signal chain, were overloaded. That’s how you get that rock-and-roll “power chord” sound, and you did it on purpose. In the digital world, things are a bit (ha!) different. You can get distortion by lowering things like bit depth and sampling frequency (counterintuitive, since for “good audio” you’d think higher is better). By doing that, you change the shape of the audio waveform from smoothly curved to “squared off” or jagged. It has more to do with digital audio conversion than with physical gear.
That digital audio distortion and “lo-fi” sound is often sought in electronic music, and one of the common methods is called “bit-crushing.” That basically means, for example, taking something down from 16-bit to 8-bit. Here is an article that explains that in a bit more detail:
http://www.musicradar.com/us/tuition/tech/distortion-saturation-and-bitcrushing-explained-549516
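And if you’re curious what bit-crushing looks like under the hood, here is a rough Python sketch of the two “lowering” moves described above: quantizing samples to fewer bits, and holding each sample for several steps to fake a lower sampling frequency. This is just an illustration of the idea, not any particular plugin’s algorithm.

```python
# A rough sketch of bit-crushing: fewer quantization levels (lower bit depth)
# plus sample-and-hold (lower effective sample rate). Illustrative only.

def bitcrush(samples, bits=8, downsample=1):
    """Quantize samples (range -1.0..1.0) to `bits` of resolution,
    and hold every `downsample`-th sample to fake a lower sample rate."""
    levels = 2 ** (bits - 1)   # e.g. 8-bit -> 128 levels per polarity
    out = []
    held = 0.0
    for i, s in enumerate(samples):
        if i % downsample == 0:                # only "look" at every Nth sample
            held = round(s * levels) / levels  # snap to the coarse grid
        out.append(held)                       # jagged, stair-stepped result
    return out

smooth = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
crushed = bitcrush(smooth, bits=3, downsample=2)
print(crushed)  # the smooth ramp becomes coarse stair-steps
```

Those stair-steps are exactly the “squared off,” jagged waveform I mentioned above, and they’re where that nasty lo-fi crunch comes from.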