Use delay to make a rolling kick drum.

I often use delay on the main kick drum to create a rolling or pumping undercurrent to a song. This technique is sort of the old-school equivalent of sidechaining a bassline. However, the old-school method sounds different enough that it should be a color on anyone’s sound-making palette. It’s a simple trick, and in Ableton it’s just a matter of a few clicks to get the desired effect.

First, create an Impulse and place a 4/4 kick drum in it. Next, add an Ableton Simple Delay to a Return channel. The Simple Delay loads with the preset we want, so you don’t have to tweak anything. Lastly, turn up the Return’s send amount on the Impulse channel and you’ll hear the kick drum start pumping and rolling along.
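If you’d like to see what the send delay actually does to the signal, here is a minimal Python/NumPy sketch of a feedback delay on a one-bar 4/4 kick. The tempo, delay time, and feedback amount are illustrative guesses, not the Simple Delay preset’s actual values, and a decaying click stands in for a real kick sample:

```python
import numpy as np

sr = 44100                                  # sample rate
beat = int(sr * 60 / 128)                   # samples per quarter note at 128 BPM

# One bar of 4/4 "kick": a decaying click on each beat stands in for a sample.
click = np.exp(-np.arange(2000) / 400.0)
bar = np.zeros(beat * 4)
for b in range(4):
    bar[b * beat : b * beat + len(click)] += click

# The send delay, modeled as a simple feedback delay: echoes repeat at a
# dotted-eighth interval, each one quieter than the last.
delay = int(beat * 0.75)                    # dotted eighth
feedback = 0.5
repeats = 6
out = np.zeros(len(bar) + delay * repeats)
out[: len(bar)] += bar                      # dry kick
for i in range(1, repeats + 1):
    out[i * delay : i * delay + len(bar)] += bar * feedback ** i
```

The echoes land between the beats, which is what produces the rolling, pumping feel; raising or lowering the send amount is equivalent to scaling the echo sum against the dry kick.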

Imagine a song where the delay is off during the verse (the send turned to zero) and on when the chorus begins. This builds tension and energy into the chorus. Maybe you have a song and you can’t get any bass sound to fit? Just forget the bass and use a delayed kick drum instead. Many dance records in the 90s used this technique, partly because it was a sure way to get a dance groove and possibly even because there wasn’t enough sample memory available for a bass sound in an Akai S900!

Adding a delay to a bassline which has notes strategically placed off the 4/4 grid can get you an old school EBM sound. Early Front Line Assembly tracks all had basslines treated with delay in this manner. Here’s an example:

But let’s not stay stuck in the 90s. Switch the kick to something tight, increase the shuffle to about 50%, replace the bass sound with a high-end noise sound, add a low pad, and you’re now in this decade:

I know this is an incredibly simple technique, but it’s hundreds of small details like these that add up to a song that’s infinitely interesting.

Related post: 6 steps to Sidechain the Auto Filter in Ableton Live 7

Published by

Oliver Chesler

"Hello my name is Oliver and I'm going to tell you a story." I have been recording music since 1989 under the name The Horrorist. I have released over 60 singles and 4 full length albums. To hear my music please go to:

15 thoughts on “Use delay to make a rolling kick drum.”

  1. Hey Oliver,

    Thanks for the tip…and don’t apologize for sharing the simple stuff. I’m just getting started in all this processing stuff (been in some kind of music forever, but not this part) and this kind of thing is really helpful–especially with the pictures and sound examples. Thank you!

    BTW, if you wanted to share some stuff on different kinds of filtering effects for snare drums, I’m looking for information on that, too.

  2. Hi Oliver,
    man, thanks for always sharing your tips.
    It’s because of artists like you that I will never lose my faith in good music!
    Thanks man!

  3. Sometimes I don’t want my electronic drum sounds to be sequenced or multi-tracked; I want to play it all in real time, yet at the same time I want the drums to sound more complex than my skill level allows. :) I use a technique similar to this.

    What I do on my sampler is take my percussion sample, loop the sample to match the timing I want (an 8th or 16th note or whatever), set the amp env so it has a nice decay while looping (this creates a pseudo-delay effect). Then I map drum pad velocity to amp, and have that play along with a non-looped copy at normal amp.

    Then I don’t mess around with any send knobs or anything, I just hit the pad harder for more “delay”, lighter for less “delay”.
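The looping-sample trick above can be sketched in a few lines of Python/NumPy. The sample, loop length, decay, and repeat count here are stand-ins, not any particular sampler’s parameters:

```python
import numpy as np

def pseudo_delay(sample, loop_len, decay, repeats, velocity):
    """Play the sample once dry, then loop it every loop_len samples with a
    decaying amplitude envelope; velocity (0..1) scales the looped tail,
    like mapping pad velocity to amp."""
    out = np.zeros(loop_len * repeats + len(sample))
    out[: len(sample)] += sample                     # non-looped copy at normal amp
    for i in range(1, repeats + 1):
        start = i * loop_len
        out[start : start + len(sample)] += sample * velocity * decay ** i
    return out

hit = np.exp(-np.arange(1500) / 300.0)               # stand-in percussion sample
sixteenth = int(44100 * 60 / 120 / 4)                # a 16th note at 120 BPM
soft = pseudo_delay(hit, sixteenth, 0.6, 4, velocity=0.2)
hard = pseudo_delay(hit, sixteenth, 0.6, 4, velocity=1.0)
```

Hitting the pad harder (higher velocity) gives a louder looped tail, i.e. more "delay", exactly as described above.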


  4. Comments are closed for this old entry; I just wanted to tell you something, since I’m a coder and audio engineer as well as a certified electronics engineer.

    Your explanation regarding volume faders is totally incorrect on a very fundamental level. Lowering faders in the mixer is bad.

    Let’s say the individual Audio channels in your particular DAW are 32 bit internally:

    If you keep the volume as close to max as possible for a channel (close to clipping but 1-2 dB under, looking at the volume/level meter, NOT the fader; the fader position doesn’t matter, since the fader just adjusts the RELATIVE volume of whatever is input INTO the channel), you may get this for a nearly perfectly peaking channel where the headroom is fully utilized:

    However, once you start to go under the highest possible peak of a signal (with the help of adjusting down with the fader or plugins that output at a lower volume than what went into them) you start losing bits, LITERALLY degrading the sound:

    This bit reduction, or loss of bits, is NOT GOOD. You may have a beautiful-sounding 24 bit audio sample playing back on a channel which supports up to 32 bit of internal headroom, and you reduce the volume by -40 dB (using the fader), which leads to it being played back with roughly 17 bits of resolution (consider that every -6 dB halves the amplitude, discarding about one bit, so the bit depth keeps dropping as you attenuate)! Remember, this is DIGITAL, where the ONE AND ONLY way of reducing volume is that you LITERALLY remove bits off the top end to get a lower total signal strength (volume).

    Therefore you should ALWAYS strive to max out the headroom on channels, having the channels that you want loudest in the mix as close to peak as possible, and having others (that are meant to be quieter) go under that. The method you suggest is totally wrong for this reason. You SHOULDN’T (and will do yourself a disservice if you) adjust EVERY track -12 dB with the faders; that’s insane and NOT the way DAWs are meant to be used. They don’t default faders to 0 dB for nothing; defaulting at 0 dB means no adjustment in volume for that track. Why? Because digital mixers DO NOT NEED to down-adjust audio to be able to sum it.

    The reason for that is that INTERNALLY, the MASTER BUS that sums all the channels before going to the OUTPUT has extremely high resolution, perhaps 128 bit, which allows the audio engine to sum a virtually UNLIMITED number of channels WITHOUT clipping their sums, EVER. You could have 300 tracks all reaching peak volume, and still, INTERNALLY at the OUTPUT SUM BUS, it would not clip.

    This sum is then COMPARED to your specifications for the project, depending on whether it’s set to 16 or 24 bit etc. (which depends on your soundcard). So if the MASTER SUM is, say, 46 bits, that’s clearly larger than 24 bit, and you can’t produce those top 22 bits with your chosen output format/soundcard settings! What do you do? Simple: do what the DAW manufacturers want you to do and adjust the MASTER OUTPUT FADER down, to bring the MASTER SUM within the limits of the output specifications. So you take that 46 bit sum, attenuate it down to 24 bits, and presto, you’re done!

    But having to attenuate this internal sum (today’s current situation, to adjust the master sum to the output specs) is not bad, since this attenuation still stops at a high bit depth of 16 or 24 bits, meaning the result is still totally pristine. Going from 46 bit -> 24 bit = a 24 bit result = perfect audio.

    This is different from adjusting INDIVIDUAL channels, where you have sounds sampled at (or produced at, by softsynths) 24 bits and then you unknowingly bring their fader down, taking them from 24 bit -> 15 bit, and so on.

    See the difference between output sum and channel levels?

    So all that red, ominous, peaking MASTER OUTPUT is showing is that you may have 40 bits of data in the master bus, and it won’t fit in the 24 bits of the final output. That’s why you SHOULD lower the master fader until you don’t “clip” anymore. (I put that in quote marks because, if I haven’t said it enough already, there ISN’T any clipping INTERNALLY; the INTERNAL RESULT is just too big for the output bit depth to handle, so you lower the master volume a bit to reduce the number of bits in the OUTPUT and bring it down to something playable by your studio audio interface, which most often operates at either 16 or 24 bits of output capacity.)

    This is why channel sums will NEVER even be CLOSE to the internal summing engine limits of DAWs, the ONLY limiting factor is what bit depth you are aiming to output at, which will force you to lower the master fader. End of story.

    So to sum it up:

    – Keep individual channels as close to maxing out the meter as possible, to PRESERVE THE AUDIO QUALITY OF EACH INDIVIDUAL TRACK, and down-adjust ONLY the channels that you want more subtle/quiet. Your aim is to have the most prominent channels reaching close-to-peak levels, for the highest bit depth possible, and to adjust other tracks down from there to attain your standard song dynamics, where some sounds are quieter than others. So base your channels around the loudest sound being at peak, with quieter sounds going down from there. Bit depth is literally the one and only factor in digital audio quality, and once you adjust your volume down it’s LOST, never to be recovered, because the bit depth has been reduced by lowering the volume.

    – Adjust the master output until the INTERNAL sum comes within 16/24 bit limits, or whatever bit depth you are mixing at. Again, remember this: your project bit depth has nothing to do with the internal audio engine, which mixes at such a high internal bit depth that you can NEVER make it clip. What the “clip” indicator is telling you is that the final sum uses more bits than the OUTPUT bit depth of your soundcard/project setting can handle. If you have set your soundcard to 24 bit and you have a 26 bit master sum, you’ll lose 2 bits off the top, leading to the clipped peaks you see and the red clipping indicator. Therefore, ALL that is needed is to down-adjust the master fader to get it within 24 bit limits and voila, you have the maximum audio quality attainable. Again, NEVER down-adjust the individual channels the way you described; it’s going to lower audio quality.

    Now you know how audio engines work, and this is how you maximize the digital headroom potential.

    There was something interesting in your post, however: you mentioned PLUGINS “not liking loud signals”. That’s actually possible with some cheap plugins. Say your project is set to 24 bit and a BAD plugin only has a 16 bit process internally; that means it can’t handle PEAKING VOLUME (all 24 bits used), so your channel levels indeed WOULD overload that plugin. BUT that would ONLY apply to those BAD plugins which process audio at low bit depths internally, and NOT AT ALL to the audio engine in your DAW itself. As far as the audio engine goes, MAXIMUM VOLUME = BEST QUALITY; no argument about that. Ask any coder or DAW manufacturer, or just re-read what I’ve written above.

    But HONESTLY, I’d love to know if ANY PRO plugins are actually that stupid to not process audio in something like 64 bit (or AT LEAST 32 bit) internally, to prevent clipping of the input signals. I’d have thought a REQUIREMENT when coding a plugin synth/fx would be to support up to 32 bit audio, and can’t imagine ANY pro plugins not fulfilling the requirement of having the plugin code working at 32 bit internally to preserve all audio resolution.

    Lastly, for a comparison of digital volume vs analog volume and to explain why analog ideas (where summing many loud channels DOES create “hot” high-voltage signal sums that MAY make analog audio processors overload due to overvoltage at the input stage) don’t apply to the digital world:
    Lowering the volume of ANALOG audio vs DIGITAL audio are completely different beasts. Analog is just a fluctuating voltage, and it can easily be lowered without reducing the sampling RESOLUTION (bit depth, i.e. 16/24/32 bit) of the audio, simply by inserting a resistor in the electrical path, lowering the signal voltage and thereby reducing the volume. This is because analog audio is not sample-based and not dependent on converters and bit depths; it’s entirely voltage based, and the voltage is what determines the volume, where higher voltage = louder.

    You lose voltage, but not audio quality. A higher voltage just creates a larger difference between the audio signal and the normal electrical interference inherent in ALL analog signal paths, so as long as you have audio voltages in the hundreds of millivolts you have sufficient distance from that interference and no loss of audio quality (analog loss of audio quality = low-voltage electrical noise, nothing else, and this noise gets more noticeable if your audio signal is low voltage, requiring more amplification, which in turn also amplifies the noise). But with DIGITAL, the ONLY way to lower volume is to start DELETING bits off the top end, which LOWERS THE RESOLUTION (bit depth) = worse audio quality.

    As final words regarding your proposed change: -12 dB is not that much of a reduction in quality, though, so you may only be going from 24 bit audio on each channel to 22 bit, for instance (in an example where your project was set to 24 bit), and if that helps plugins that only process audio in 16 bit get LESS clipped (bit-depth overloaded), then that’s good. As I said, though, I’d be surprised if anything but the WORST homemade plugins processed audio at less than 32 bit depth. Most plugins I’ve seen process the audio data at 32/64/96 or 128 bits.
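To make the arithmetic in this comment concrete, here is a minimal Python sketch under the same fixed-point assumption the comment makes (the replies below note that modern DAWs actually sum in floating point). The -40 dB and 300-track figures come from the examples above; nothing here is taken from any real DAW:

```python
import math

# 1) Attenuating a full-scale 24-bit sample by -40 dB in fixed point:
full_scale = 2 ** 23 - 1                    # max positive 24-bit value
gain = 10 ** (-40 / 20)                     # -40 dB as a linear factor
attenuated = int(full_scale * gain)         # truncation, as fixed point would do
bits_left = math.floor(math.log2(attenuated)) + 1
print(bits_left)                            # 17: each ~6 dB of cut costs about one bit

# 2) Worst-case word growth when summing N full-scale channels:
for n in (2, 16, 300):
    print(n, math.ceil(math.log2(n)))       # 1, 4 and 9 extra bits respectively
# So 300 peaked 24-bit channels need at most 24 + 9 = 33 bits in the
# summing bus, comfortably inside a wide accumulator.
```

Even the 300-track example stays far below a wide summing bus, which is consistent with the claim that the internal sum itself never clips; only the final conversion to the output bit depth has to be brought within range.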

  5. Chris,

    Don’t most DAWs use floating point values for summing? I believe so. In that case, raising or lowering your levels wouldn’t affect your bit resolution as you describe it, since a floating point number has a fixed-precision mantissa, and scaling it mostly just changes the sign and the exponent.
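Rex’s point is easy to check. A quick NumPy sketch in 32-bit float (the sample value and gain are illustrative, not any DAW’s internals):

```python
import numpy as np

x = np.float32(0.123456789)         # an arbitrary sample value
gain = np.float32(2.0 ** -13)       # about -78 dB, a power-of-two gain
attenuated = x * gain               # only the exponent changes; mantissa intact
restored = attenuated / gain        # scaling back up is exact
print(bool(restored == x))          # True: no resolution was lost
```

For gains that aren’t powers of two there is at most half an ulp of rounding per multiply, so the relative precision stays around 24 bits no matter how low the fader sits.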

  6. Rex, you’re onto something there!

    I hadn’t thought about the fact that this might have changed since the early-to-mid 90s, when fixed point was used (it’s faster to calculate). They still use 32 bits for the channels, though, and something like 128 bit for the master summing bus. The 32 bits per channel give 24 bits to the mantissa (23 stored plus an implicit leading bit), which is adequate as we only record up to 24 bit today.

    Therefore the other things I said still stand. The master bus cannot be overloaded. As for the individual channels: keeping their levels maxed the way I described is no longer required to utilize the full bit depth (that would only be the case if the channels were still fixed-point, something I learnt a looong time ago and which, as you see, is outdated). They cannot be overloaded either, and as long as a plugin processes audio using 32 bits of precision, you CAN’T clip any plugins AT ALL.

    So my question and statement still stand: ARE THERE any PRO plugins that are stupid enough not to process at 32 bits internally? And the master bus CANNOT be overloaded, so you have NO reason to lower all the individual channels just to “not clip” the master bus. There IS NO clipping even when the indicator is in the red; as explained above, all that means is that the channel sum would be too large for your 24 bit sound card to handle and has to be brought down a bit.

    Therefore I still question everything in the original article: the master bus cannot be overloaded (as the article erroneously claims), and I seriously doubt that anyone but the dumbest home coders would fail to use at least 32 bit processing internally in their plugins. Most plugins that deal with high fidelity tout their use of 64/96/128 bits to oversample and prevent audio artifacts.

    So ARE THERE any PRO plugins with this defect? That’s the only standing point of this argument, and I am thoroughly interested to hear if there are any so I can avoid them.

  7. Found more than a couple of sites talking about how DAWs now use floating point:

    This is far from the original question, though (regarding whether there are any PRO plugins whose coders are so INCOMPETENT that they don’t use at least 32 bit precision, and where the people OVERSEEING the project are equally careless and don’t see the problem in the coding, thereby using too low a precision and overloading the plugin’s headroom the way Oliver described in his article), but I found these pages interesting nonetheless.

    If you only read one thing here, read this paper by Rane explaining why fixed point is better:

    And here’s an interesting discussion regarding how a 48 bit double precision fixed point value is more accurate audio-wise since it doesn’t have to deal with floating point rounding errors:
    The first post contains an error, saying that “floating point calculations are used because the calculations are cheaper to achieve from ready-made chips”, which is totally wrong; someone corrects him a few replies down, since, as you should all know, floating point chips are much, MUCH more expensive. It’s the answers to his initial question that are interesting.

  8. Hi Everyone. Sorry that comments were closed on older stories… that was an accident caused by one of the plug-ins I am running with this WordPress install. I fixed it now and you should be able to comment on any post no matter how old.

  9. That’s good Oliver, I wish these comments could have been posted to the original story.

    What’s your comment on what I’ve posted here? I clearly explained that distortion (clipping) CAN’T come from the DAW bus, and that in the case of plugins only stupid coders could make the mistake of not writing their plugin code to handle at least 32 bit audio signals. I doubt that ANY major PRO plugins have this defect, and I would be really surprised if even an AMATEUR coder made this oversight, since the first thing you learn when you start coding plugins is that you need to process the audio at sufficient bit depth to allow for any of today’s input signal word lengths (another word for bit depth). This fundamental support for today’s standard audio word lengths (16/24 being the most common) is a requirement that only the dumbest of dumb coders could overlook, hence why I don’t think ANY pro plugins have made the mistake you describe. They’re not idiots; the DAW manufacturers’ plugins can DEFINITELY be trusted, since they coded the engine themselves and know everything in intricate detail. That leaves 3rd-party plugins, and I think ALL pro companies have good enough DSP coders to know this FUNDAMENTAL requirement. That only leaves homebrew plugins by teenagers, and even they would have heard about bit depth when they started writing their plugins; it’s one of the first things you have to tackle.

  10. Disappointing that you don’t confront this technical dissection of the fictitious issue; I was hoping to hear if there really were ANY plugins that had this problem. Also, if ANY plugins (which I doubt) had this problem of processing audio at less than 32 bits, then lowering volumes by just -12 decibels isn’t going to get it within limits.

    Speaking of which, are the faders in Ableton live post-fx or pre-fx? If they are post-fx (bringing down volumes AFTER the effect chain) then lowering volume would have NO way of affecting plugin processing AT ALL.
