The Bit Depth Monster Will Eat Your Recordings and Swallow Them Whole


    Who’s ready to get nitty gritty? Okay, me. I’m back doing some studio projects here and there, and it’s reminding me of technical details that are not just valuable, but completely indispensable. So let’s go…and a warning ahead…this one’s gonna be a long one.

    Some of us had the benefit and pure joy of having worked in analog before the digital studio invasion, and you’ll often hear engineers pining over the good old days of quarter inch tape. It’s not uncommon for people who came out of the analog world to completely bad-mouth digital to the point that they seem almost insane. Just open the conversation in any gang of self-respecting engineers and you are eventually going to hear “Digital SUCKS!!!!”
     
    From the standpoint of being a well-rounded medium that was forgiving, naturally well balanced, and a true reproduction of what you wanted to capture in the studio, analog tape was awesome. I often look back fondly at my days in large analog studios carrying in those big badass boxes of 24 track tapes, winding on the reels, and everybody hunkering down for long sessions together playing with the big-buttoned stock of outboard gear that we had come to love and respect.
     
    But analog was dense and clunky and demanding. If you had gain staged your signal properly, you were good to go, mostly because analog tape was notoriously supple. A little oversaturation sounded good. But performances were massively rehearsed and you usually had one track to do your work on. And you relied A LOT on the expertise of your tracking engineer. There would eventually be a section that needed to be redone, and the engineer was going to have to punch you in and out exactly at the right points in the tape pass to grab your performance while leaving the rest of the track intact. Things didn’t always go as planned, and there were lots of white-knuckle moments. And redos. And heavy sighs. And splicing. Mixes required lots of people to hit all the mutes and faders and EQ changes like a well-oiled machine. Sometimes that went great. And sometimes you took the best of 6 very different randomly “performed” mixes. Lots of moving parts in the studio required lots of maintenance and even more knowledge. People had specialized jobs, and every good studio had at least one if not two full-time techs who spent the majority of their time fixing equipment. It was fun, but also VERY time consuming, expensive, and accessible only by those of us lucky enough to have had the means to choose a very specific path in music. And this is the reason that many large studios are now obsolete.
     
    Flip forward 20 years to the digital revolution and all of the above has mostly disappeared. Especially for average working musicians, who are now operating their own affordable digital home studios with more automation than anyone could have ever imagined back in analog land. The tools are amazing, even if we have sacrificed a little natural warmth in the process. We could all probably argue for days about whether the digital revolution has actually enhanced the world of recording or degraded it, and we would get every opinion under the sun in the process, but – we don’t need to. Because it’s here to stay. And if you want to record from here on out, you are going to have to learn how to deal with it. Suck it up, purists.
     
    Digital, unlike analog, is not a naturally occurring state of being for sound. There is a lot of conversion going on, and this is an important concept for digital home studio engineers to wrap their heads around.
     
    I’m pretty certain if you’re interested in this topic, you’ve already got an I/O box and have already recorded some stuff into your DAW (Digital Audio Workstation). You might even be working on a song right now. Or maybe you’re even finishing up your first full-length album.
     
    If you’re planning on sharing that song with the world and you want to put your best foot forward, then you need to know about bit depth. Yes. I know. You’ve never heard of that before. That’s because no one talks about it. Don’t ask me why, I’m not sure. Probably because it’s technical and not fun, and as soon as you start talking about it someone who doesn’t understand it is gonna try to tell you that it’s not important. However – bit depth, to the digital engineer, is very important. Here’s why:
     
    In our current state of digital being, you are probably working on a DAW that offers you a 44.1 kHz or greater sampling rate. Some of you working in video may often use 48 kHz. And people are moving further and further into 96 and 192 kHz recording. The sampling rate is rather simple, even though most people only know that a larger number is supposed to be better. What it actually tells you is how many samples per second of an analog sound wave can be recorded and played back by your DAW. 44.1 kHz translates into 44,100 slices of the wave per second.
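    To make that arithmetic concrete, here’s a tiny Python sketch. It’s purely illustrative – your DAW and its converters do this internally – and the 440 Hz sine wave is just an example signal:

```python
import math

# One second of audio at the CD-quality sampling rate means 44,100
# discrete "slices" (samples) of the waveform per second.
SAMPLE_RATE = 44_100   # samples per second
FREQ = 440.0           # an A440 sine wave, purely as an example signal

# Sample the continuous sine wave 44,100 times over one second
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

print(len(samples))    # 44100 slices of the wave for one second of audio
```

    Double the sampling rate to 88.2 kHz and you get twice as many slices of the same second of sound – that’s all the bigger number means.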
     
    Unlike listening to an analog sound wave, which your ear hears as a continuous piece of sound information, in the world of digital, your DAW grabs and plays back enough slices to trick your ears into believing that they are hearing the entire sound wave. The more samples per second, the closer the digital reproduction comes to the complete sound wave. This is why more samples means supposedly better sound. For an in-depth explanation of the smoothing involved in sampling rate and how it mimics analog, Bruce Bartlett has written an awesome piece called “Digital Recording Does Not Chop Up Your Music.”
     
    So that’s the conversion process going into your computer. But – what’s interesting about all of this obsession in the digital audio community over increased sample rate is that when you get to the point where you are going to dump all that data OUT to a single stereo digital source, our world hasn’t really progressed very much. This is where bit depth comes into play.
     
    If you type “bit depth” into Google, you’re going to get a truckload of different definitions for it, including some that now apply to photo processing and video. I’m not going to go into all the technical numbers and details about the topic, because this is an entry about practical applications. So for our everyday studio work, let’s think of bit depth as a finite tunnel at the end of your recording project. Let’s say you have 24 bits to work with (although you may only have 16 in your system, which is also perfectly fine). This means that only a certain number of the “binary words” made up of 0s and 1s (the alphabet of digital) that now make up your recording will fit into that finite 24-bit tunnel. Binary words past the capacity of the 24-bit tunnel will have to go somewhere else other than inside the tunnel. In audio, they go to that magical place called “away.” Like where we throw all our trash and stuff we should probably be recycling right about now.
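    For a sense of scale, the size of that tunnel is easy to compute. This little Python sketch uses the standard textbook numbers – 2 to the power of N binary words for N bits, and roughly 6 dB of dynamic range per bit – and isn’t specific to any particular DAW:

```python
import math

# Bit depth sets how many distinct binary words fit per sample, and each
# bit buys you roughly 6.02 dB of dynamic range (20 * log10(2) per bit).
for bits in (16, 24):
    levels = 2 ** bits
    dynamic_range_db = 20 * math.log10(levels)
    print(f"{bits}-bit: {levels:,} binary words, ~{dynamic_range_db:.0f} dB")
# 16-bit: 65,536 binary words, ~96 dB
# 24-bit: 16,777,216 binary words, ~144 dB
```

    Those extra 8 bits are the difference between roughly 96 dB and 144 dB of dynamic range – which is exactly what gets squeezed when the tunnel narrows.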
     
    Like a freeway where five lanes have to merge into two at the scene of an accident, the tunnel can easily become overwhelmed with data that wants to get through. But unlike the freeway, where cars simply wait their turn and crawl through slowly (this is how your internet connection also deals with the problem), the digital music tunnel has no choice but to eliminate extra information that wants to get through when it can’t handle any more – so imagine those cars in the extra lanes just flying off the side of the road when they start to merge. In the digital audio world, this is called truncation. Digital information simply gets chopped off the ends of the five lane freeway and goes to “away.” (Someday we should all visit this place just to see all the shit that is there. It’s probably like a museum by now.)
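    Here’s a toy Python illustration of truncation, assuming plain integer samples (real converters are more sophisticated, and the specific sample value is made up). Dropping the lowest 8 bits of a 24-bit word is exactly the “flying off the road” step:

```python
# A very quiet, made-up 24-bit sample -- its information lives in the low bits
sample_24bit = 0b0000_0000_0000_0001_1011_0101   # decimal 437

# Truncating to 16 bits simply chops off the lowest 8 bits: they go "away"
sample_16bit = sample_24bit >> 8

print(sample_24bit)   # 437
print(sample_16bit)   # 1 -- almost all of the low-level detail is gone
```

    Loud material survives this mostly intact; it’s the quiet stuff – reverb tails, room tone, stereo-depth cues – that lives down in those discarded low bits.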
     

    In real audio terms, the dynamic range (what we called headroom in analog days) in your recording becomes smaller. Your stereo field may shorten and narrow. Longer trailing reverbs will disappear, the quality of certain EQs will be diminished, and your overall sound quality will suffer. There may even be distortion. This is obviously a problem when we have spent so much effort and money on having more sampling rate and really trying to get good recordings. There are some people out there who claim that “frequency range” is not affected in truncation. In terms of audio, that is obviously, well…stupid – because audio files are ALL frequency range information. I don’t know what form of crack those people are on, but I’d like some, please.
     
    For a while, the maximum bit depth out there was 16 bits. We went to 24 bit about ten years ago. And oddly – that’s pretty much where we’ve stayed, due mostly to a determination by the computer industry. Even more interesting is that bit-wise (if that’s a word, but if George Bush can make up words, so can I) our methods for delivering music keep getting relatively smaller as our sampling rate numbers go up. CD quality means that even if you have sent your final data out in 24 bits, it will have to be “dithered” (think further truncated) down to 16 bits. Convert your files to mp3s and forget it – from the compression viewpoint, your recording is now very far from its original state of 24 bits. Engineers raised in analog will tell you they hate the sound. They may be the only people out there who can really hear the difference, but that still tells you something.
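    Since dither just got mentioned, here’s a hedged Python sketch of what it actually does. The idea is to add a tiny amount of random noise (here a simple triangular/TPDF dither, scaled to one 16-bit step) before rounding, so quiet material averages out correctly instead of being flatly chopped off. Real mastering dithers add noise shaping on top of this, and the sample value is made up:

```python
import random

# One 16-bit step equals 256 steps at 24-bit resolution
STEP = 256

def dither_to_16bit(sample_24bit: int, rng: random.Random) -> int:
    """Round a 24-bit sample to 16 bits with simple TPDF dither."""
    # TPDF noise: sum of two uniform values, spanning about +/- one step
    noise = (rng.random() + rng.random() - 1.0) * STEP
    return int(round((sample_24bit + noise) / STEP))

rng = random.Random(0)
quiet = 437   # a made-up, very quiet 24-bit sample

print(quiet >> 8)                                     # plain truncation: always 1
print([dither_to_16bit(quiet, rng) for _ in range(8)])
# the dithered values vary (mostly 1s and 2s), averaging ~437/256 = 1.7,
# so the quiet detail survives as an average rather than vanishing
```

    That’s the trade dither makes: a whisper of noise in exchange for keeping low-level detail that straight truncation would throw into “away.”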
     
    And though people will argue all the time that all of these minor differences don’t matter at all, the reality is there are times when they do and times when they don’t. If you have absolutely STELLAR recording skills and some really great equipment, then it might not matter at all. If you are a total newbie with some just okay equipment, then it could matter a whole lot. Because if there is one thing you will learn from spending a lot of time in the studio, it’s that you can’t put something back in that wasn’t there in the first place. And quality is one of those things.
     
    So how do you fend off the bit depth monster and keep him from eating your recordings for lunch? There are real things that you can do in every one of your recording sessions to make sure your quality is not devoured later:
     
    1.)    While for many home studio engineers the limit on the number of tracks you can record has become a question of how much processing power your computer has, remember that when you mix, all of this information has to go somewhere. As does all the information generated by every plug-in on every track. Just because you CAN record 150 tracks and use 10 plug-ins on each track doesn’t mean that you should. Chances are if you have overwhelmed your stereo field in the first place, you are also going to lose some quality during truncation. Sometimes less is more. I usually hate that phrase, but I have to admit that sometimes it is true.
     
    2.)    Use routing and plug-ins efficiently: subgroup like tracks that share plug-ins into the same Aux track whenever possible to reduce the number of instances that a plug-in is running (this will also save you processing power). Are you putting an effect on a track just because, or because someone said you should, rather than because it sounds better? Don’t. Running tons of them for no reason means you essentially cancel out the effects or quality of others – not just because of bit depth, but because they start to over-occupy your stereo field and lay on top of each other. If you don’t need something, don’t use it. Use plug-ins like spices, and go to the excellent sound guy’s primary effect – EQ – before immediately trying to fix an incomplete track with layers of fancy filters or delays. Learn the EQ spectrum and experiment with it often. Very often, if used correctly, you don’t even need other effects.
     
    3.)    Bounce down your MIDI tracks to audio tracks. Yes – this means recording them in real-time internally. I’m a firm believer in this old-fashioned approach for a few reasons. The first is that it saves processing power at your mix down. It requires more processing power to run loads of MIDI and also play back audio at the same time AND run all your plug-ins. Processing glitches and digital hiccups are not your friend at mixdown or bounce to disc time. Keep the MIDI tracks but turn them off so you can change the patches later if you feel like it. But once at mix down, this can simplify your life.
     
    The second is that if you want to further streamline your mix, during your MIDI bounce you can also bounce the plug-in settings you are using on them at the same time. Yes, I know, this means you can’t change them as you go, but if you do this late enough in your mix, you often won’t need to. And even if you did, you just re-engage the MIDI track, make your adjustments, and re-bounce the track in real time. Again, this saves running too many plug-ins at the end of the mix. And it keeps my track mixer window a little tighter, which means I won’t make some stupid mistake that I didn’t mean to, and I don’t have to scroll from here to China to see all my tracks.
     
    The third reason is that it is actually a really great way to archive and backup the integrity of your final mix. It’s all there together in audio files and doesn’t require any MIDI to play back correctly. That’s great if, in two years, you need to do a remix on someone else’s DAW – you can just bump out all the audio tracks as AIFF files and easily import them no matter what system they are using or which virtual instruments their system includes.
     
    4.)    Experiment with not using the Bounce to Disc function in your DAW. Instead, like you did for your MIDI tracks, route a stereo master of all your tracks and record your final mix in REALTIME to a new stereo track. Export that track out instead of a bounced stereo mix. The internal bounce function provides different people with different results on different systems. Truthfully, we have no idea what happens to your audio once that bar starts rolling by. Some computer nerd is going to give you some technical explanation ending in the words, “it doesn’t matter.” You have to be the judge of that – and it just takes a little experimentation to do it, so go ahead and take a Saturday and just try it out on your system. Some people also use a separate mastering/sweetening program like Peak to do the actual dither to 16 bit instead of the one in your DAW. Peak LE offers a free version, which I’ve used on several occasions to at-home “master” a track for film with great results.
     
    Even better – if you’re doing a full album for mass release – run your stereo master out to an external high quality recorder of some kind or directly to your mastering engineer’s system in REALTIME (If they will let you. Some will and some won’t.) This used to be standard practice when digital first entered the fray and bounce to disc didn’t exist yet – and there’s a reason – it guarantees the integrity of your final mix. What happens when your mastering engineer masters truncated file copies of your mixes that you copied to a CD for him/her, or even worse, compressed and sent via e-mail? Anything that went wrong during truncation is now amplified. And since mastering now means your recording is going to be further compressed and get louder, it is possible that those problems will now stick out. Obviously, your mastering engineer can only make things sound better if you give him/her the best mix to work with.
     
    5.)    Take unnecessary obstacles out of your signal path. In one of my mastering experiences with Alan Yoshida at Ocean Way Studios (also known as some of the best ears in the business), I experimented with dropping out the master channel of my DAW altogether, because Alan was hearing some stereo field issues in the prelim mix I took to him. When a genius in a million dollar room tells you that something is not right, it’s good to listen. I had no idea where to look for the issue because my system was set up normally and at home, obviously – on speakers nowhere near the quality in Alan’s room – it was a lot less obvious. But, for some bizarre reason, the master fader channel inside my system was degrading the overall sound of the whole mix and, according to Alan, eliminating the far rear of the stereo field. Since the master wasn’t necessary, I ran all my tracks and auxes into one bus and then assigned that directly to my main outs. And the problem disappeared. Try to remember that your studio is a lot like your instrument signal path. You wouldn’t put extra stuff in the chain that you don’t use, right? If you don’t do this on your pedalboard, don’t do it in your DAW.
     
     
    There is one critical phase in your recordings which provides you with a whole lot of quality control: listening back on many different systems once you have finished a mix. Go ahead and experiment with this A LOT. Bounce down in different ways, burn to a CD and listen to it in your car. Take it to a friend’s house. Listen on an old boom box, on your laptop through the computer speakers, and on an iPod.
     
    Does it sound as good as when you’re leaning into your near-fields or going over your mix on headphones in your studio? What do the vocals sound like in terms of their overall mix? Has it changed? Are they crisper or have they lost some lower mid frequency warmth? Are your reverbs as deep and sparkly as they were in the studio? Can you hear the trails? Stereo field depth and range is a key part of the digital information package. Is that phaser you put on the keys actually sweeping from far left to far right, or is it somehow now just lingering in the middle? Does your mix suddenly sound flat - like everything is sitting on top of each other? Is the overall mix distorted in any way?
     
    Do the necessary experiments for output to find the best possible reproduction of what you’re hearing on playback on your system and you will have found the solution to dealing with the bit-depth monster in your personal studio.
     
    And, egads – by all means – if you are releasing your own indie album, take your time with the final mix and mastering stage. I see musicians all the time who spend months and months on the recording of their record and then rush right through the mix and master as if it barely matters because they are so excited that the process is “done.”
     
    Sadly, when you listen to their records, it’s hard to know where things went wrong. All you notice is they don’t sound as good as they should. I’m listening to one right now where there is noticeable distortion in almost every track. Hard to tell whether that was fader overload during the mix, a problem with bounce down, or the compression of delivery as an mp3. Either way, it’s not pleasant, it’s hurting my ears, and the result is even though the music is great and I really like the artist, I don’t want to listen to it.
     
    So…get in there and get your hands dirty if you want to fight off the bit depth monster.
     
    Nobody ever said recording was for wusses.