Posted on | April 6, 2014 | No Comments
So I recently got a Make Noise Shared System modular rig, and one thing it apparently lacks is the ability to make… noise. White noise.
However, by pushing the Wogglebug and the DPO’s internal modulation routing to the extreme, you can get some decent-sounding white noise. Basically, you turn most of the knobs on both modules all the way clockwise and listen to the DPO final output.
Here’s how it sounds, going through an MMG for filter sweeps and the Echophon for some delay:
Posted on | February 29, 2012 | No Comments
This trick is very simple to do, but it's not obvious that it's even possible. The idea is to sidechain compress the processing on a Return bus by its own input signal, in order to clear out some “empty” space around the dry signal. It’s like making a “breathing fx bus”.
For example, if you have a staccato vocal sample being sent into a reverb or a delay, using this trick the effect tails will “swell in” over time after the dry signal stops. It’s similar to kick sidechaining.
Here’s an example without a halo:
That’s not the most inspiring demo, but this can sound very organic, and helps clear space in a full mix. To set up in Live:
- Send sound from an Audio track to a Return track.
- On the Return track, add a plugin that creates a temporal tail, e.g. a reverb or delay.
- Add a compressor after the fx.
- Enable Sidechain, and set the Audio From dropdown to the same Return track you’re on.
- Set the Audio From position to “Pre FX” in order to sidechain from the dry signal.
- Set up your threshold, release, ratio etc. to get your desired “halo” pumping sound around the input signal.
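What the compressor is doing, conceptually, is ducking the wet signal with an envelope that follows the dry input. Here's a minimal numpy sketch of that ducking (the depth and release constants here are arbitrary assumptions, not Live's):

```python
import numpy as np

def halo_duck(dry, wet, depth=0.8, release=0.9995):
    """Duck `wet` with an envelope that follows |dry|: instant attack,
    slow release. When the dry sound stops, the envelope decays and
    the effect tail 'swells in' -- the halo effect."""
    env = 0.0
    out = np.empty_like(wet)
    for i in range(len(wet)):
        env = max(abs(dry[i]), env * release)     # envelope follower
        out[i] = wet[i] * (1.0 - depth * min(env, 1.0))
    return out

# a burst of dry signal followed by silence: the wet output is
# pushed down during the burst and glides back up afterwards
dry = np.concatenate([np.ones(1000), np.zeros(5000)])
wet = np.ones(6000)
ducked = halo_duck(dry, wet)
```

The slow release is what gives the tail its gradual swell: the gain doesn't snap back, it glides up as the envelope decays.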
This can be a really nice way to get some breathy fluttering organic motion in a network of Return tracks that might even be cross-sending signal to each other in a feedback network…
Posted on | May 11, 2011 | No Comments
So I’ve had a Ruin & Wesen MiniCommand for a little under a year, but haven’t been using it as much as I would like because it didn’t integrate well with my setup — until last night.
The standard way to use the MiniCommand is to connect it in a closed MIDI loop with the device in question — which makes it hard to use in a computer-based MIDI setup with a sequencer. There are ways around this, e.g. daisy-chaining the MiniCommand between the computer’s MIDI interface and the device you want to control, but I have found that this introduces some small timing delays (enough to drive me crazy).
Posted on | January 6, 2011 | 3 Comments
Here’s a cool sound-design trick. If you want to get a vocal-sounding ‘formant filter’ effect out of a synth that only has a normal lowpass filter, you can take advantage of a quirk of sample-rate reduction effects to generate multiple “mirrored” filter sweeps through the wonder of aliasing.
Here’s a sound clip from my machinedrum with a simple sawtooth note and a resonant lowpass filter being modulated down over a quick sweep. It’s played four times, each with increasing amounts of sample-rate reduction applied:
This sample looks like this in a sonogram (I used the Sonogram View plugin that Apple includes with Xcode). Horizontal axis is time, vertical is frequency:
Notice that as the aliasing (reflected frequencies) increases with the sample-rate reduction effect, you begin to see multiple copies of the filter sweep. This creates the lovely, complicated “alien voice” sound. Here’s a short MachineDrum loop I was playing around with when I realized what was going on here:
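You can actually predict where the mirrored copies land without a sonogram: naive sample-rate reduction puts images of a partial at |k·sr ± f|. A quick Python sketch (the frequencies are illustrative, not taken from the MD patch):

```python
# For a partial at frequency f, naive sample-rate reduction to rate sr
# creates alias images at |k*sr - f| and |k*sr + f|. As a filter sweep
# moves f, every image sweeps too -- some of them mirrored -- which is
# exactly the multiple-copies pattern in the sonogram.
def alias_images(f, sr, limit=20000, k_max=3):
    images = set()
    for k in range(k_max + 1):
        for g in (k * sr - f, k * sr + f):
            g = abs(g)
            if g <= limit:
                images.add(round(g, 1))
    return sorted(images)

# a 1 kHz partial, crushed to an 8 kHz sample rate:
print(alias_images(1000, 8000))  # → [1000, 7000, 9000, 15000, 17000]
```

As the dry partial sweeps down, the k·sr − f images sweep *up*: those are the mirrored copies.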
And for the Elektron-heads reading this, here’s the MD sysex for that pattern+kit:
PS: the wikipedia article on aliasing has a good rundown on the details of this phenomenon.
Posted on | June 6, 2010 | 4 Comments
In Part one of this series, I posted tips for getting the Monomachine and Machinedrum synced and recording properly with your Live sessions. The other half of the equation is which operations to avoid that might introduce latency and timing errors during your sessions.
Ableton Prints Recordings Where It Thinks You Heard Them
I guess this design must be intuitive for many users, but it confused me for a while. If you have a setup with anything but a minuscule audio buffer, monitoring through a virtual instrument with a few latency-inducing plugins in the monitoring chain, you will hear a fair amount of monitoring latency when you play a note. The same goes for recording audio.
When recording a MIDI clip, I expected that Live would place the MIDI events where I actually played them — which it doesn’t. It shifts the MIDI notes later in time to match when you actually heard the output sound — accounting for your audio buffer delay, the latency of your virtual instrument, and any audio processing delay from plugins in the downstream signal path. There’s one exception: it doesn’t account for delays you might hear due to any “Sends” your track is using.
So your MIDI notes (and CC’s) are recorded with “baked-in” delays the size of your monitoring chain latency. I’m going to call this baked latency.
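As a simplified model of that baked latency (my own arithmetic, not Ableton's documented formula):

```python
def printed_time_ms(played_ms, buffer_ms, instrument_ms, plugin_delays_ms):
    """Simplified model of where Live 'prints' a recorded MIDI note:
    shifted later by the full monitoring-chain latency (sends excluded)."""
    return played_ms + buffer_ms + instrument_ms + sum(plugin_delays_ms)

# e.g. a buffer adding ~1.5 ms, a virtual instrument reporting 3 ms,
# and two downstream plugins reporting 2 ms each:
baked = printed_time_ms(0.0, 1.5, 3.0, [2.0, 2.0])
print(baked)  # → 8.5, i.e. the note is printed 8.5 ms late
```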
Posted on | June 6, 2010 | 10 Comments
Recently I’ve been (going crazy) getting the timing tight between Ableton and two outboard sequencers — the Elektron Monomachine and Machinedrum. On their own, these silver boxes have amazingly tight timing. They can sync to each other to create a great live setup.
Add a computer DAW into the loop, and you introduce jitter, latency, and general zaniness to the equation. And it’s not trivial — this is obviously-missing-the-downbeat, shoes-in-a-dryer kind of bad. I tested the jitter / latency by ear, as well as by recording audio clips and measuring the millisecond offsets from the expected hit times.
I don’t think this is fundamentally a slow-computer / poor-setup issue either — I’m running a good interface with a tiny 32-sample audio buffer. The rest of the setup is an i7 Intel Mac running OS X 10.6.3, Ableton Live 8.1.3, an Emagic Unitor 8 MIDI interface, and an Elektron TM-1 TurboMidi interface for the Machinedrum.
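For the measurement side, here's a rough sketch of how the offset check can be scripted (the naive threshold onset detection is an assumption on my part; real material would want a proper onset detector):

```python
import numpy as np

def hit_offsets_ms(audio, sr, bpm, threshold=0.5):
    """Find samples where the signal first crosses `threshold` after
    silence, and report each hit's offset (ms) from the nearest
    16th-note grid position."""
    above = np.abs(audio) > threshold
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    grid = 60.0 / bpm / 4.0 * sr                  # 16th note in samples
    # wrap offsets into [-grid/2, +grid/2) so late AND early hits show up
    offsets = ((onsets + grid / 2) % grid - grid / 2) / sr * 1000.0
    return offsets

# a single click 5 ms late on an otherwise perfect 120 BPM grid
sr = 44100
audio = np.zeros(sr)
audio[int(0.125 * sr + 0.005 * sr)] = 1.0        # 16th at 120 BPM = 125 ms
print(hit_offsets_ms(audio, sr, 120.0))          # → [5.]
```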
Below is a journal of what’s working, what isn’t, and my theories on why… Read more
Posted on | November 20, 2009 | 2 Comments
The basic idea is to use a simple OSC library available for Ruby to code interesting music, and have Native Instruments’ Reaktor serve as the sound engine. Tadayoshi Funaba has an excellent site including all sorts of interesting Ruby modules. I grabbed the osc.rb module and had fun with it.
I’m giving a brief presentation at the Bay Area Computer Music Technology Group (BArCMuT) meet-up tomorrow, un-officially as part of RubyConf 2009 here in San Francisco.
Here’s a link with downloads and code from my talk. It should be all you need to get started, if you have a system capable of running Ruby, and a copy of Reaktor 5+ (this should work with the demo version too).
Ruby mono sequence example: reaktorOscMonoSequences-192 MP3
Ruby polyphonic drums example: reaktorOscPolyphonicDrums-192 MP3
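If you're curious what's actually on the wire, here's the gist of OSC message encoding, sketched in Python for illustration (the talk's code uses Ruby's osc.rb; the /note address and port 10000 are assumptions, so match them to your Reaktor ensemble's OSC settings):

```python
import socket
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message: null-terminated, 4-byte-padded
    address string, a type-tag string, then big-endian arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4
        return b + b"\0" * (4 - len(b) % 4)
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            tags += "f"
            payload += struct.pack(">f", a)
    return pad(address.encode()) + pad(tags.encode()) + payload

# send a note (pitch, velocity) to Reaktor listening on UDP port 10000
msg = osc_message("/note", 60, 0.8)
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 10000))
```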
Leave a comment below if you have any questions, or cool discoveries!
Posted on | November 16, 2009 | 4 Comments
So this is another example of using the MD’s internal sampler to create a recursive “feedback loop” of sampling and resampling and resampling… This has a tendency to psychedelically twist the underlying beat. The way this stuff sounds has really surpassed my wildest dreams.
Posted on | November 4, 2009 | 2 Comments
This was a first test at using the Machinedrum’s internal sampler recursively. I was trying to emulate my fractal wavetable sounds in hardware, as closely as the MD could do it.
Posted on | September 3, 2009 | No Comments
I recently slogged through mixdown on my track Super Broken and found the following 5 tips invaluable:
1. Mono Is Awesome
I’ve heard this one a million times, but never actually tried it. This article does a great job describing the hows and whys: The Secret Benefits To Mixing In Mono. Among other great insights — if you sum to mono and listen through a single speaker, you get less room and cross-speaker interference.
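One reason the mono sum is so revealing: phase problems that stereo monitoring hides become either obvious or silent. A quick numpy illustration of a polarity-flipped “wide” pad vanishing in mono:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
pad = np.sin(2 * np.pi * 220 * t)

# a cheap 'wide' trick: same signal, polarity-flipped on the right
left, right = pad, -pad

mono = (left + right) / 2.0      # summing to mono

# sounds huge in stereo, disappears completely in the mono sum
print(np.max(np.abs(mono)))      # → 0.0
```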
2. FX Halos
This is a great trick for time-expanding effects like delays and reverb. In a word, duck your effects sends by the signals feeding them. The gradual release of your ducker / compressor creates a “halo” around the dry sound, as the effected tail glides up into the mix. This article does a great job describing how to set this up in Ableton Live.
3. The Law Of “Common Fate”
Learned this one from John Chowning, the father of FM synthesis, at a BArCMuT talk.
Gestalt psychology turns out to be a goldmine for anyone making abstract works of art (like electronic music). The law of “common fate”, according to Wikipedia, is: “Elements with the same moving direction are perceived as a collective or unit.”
Chowning’s example had to do with applying vibrato to FM string sounds, but it has applicability all over the mixing process.
For example, when “pumping” pads, hi-hats, and basslines in syncopation with the kick drum, the principle of “common fate” suggests your brain will gel them into a unit — providing more contrast between the upbeat and downbeat.
4. Embrace Subtle Delays
This is related to the previous point on “common fate”. I’ve found it’s very useful to use a short “ambience” ‘verb, and send low levels of many parts of the song in order to “seat” everything in an acoustic space. Again, this is an old trick, but I found this article illuminating in knowing what my brain wants to hear.
5. If You Make Dance Music, You Need To Be Able To Monitor Down To 28 Hz
And unless you’re in a really, really well-setup room with no neighbors, that means getting a good pair of ‘cans.
After extensive research into every pair of headphones I could find, I narrowed the field down to the Ultrasone HFI-550’s. Got mine off Amazon for $89. All I have to say is – 50 mm drivers (they don’t make the 550’s anymore, but the HFI-580’s are similar). I feel these come the closest to replicating the sound of your track playing over a nice club system — especially in the bass department. They didn’t sound great when I first got them (compared to a 4-year-old pair I’d borrowed from a friend), but I’ve been burning them in with medium-loud pink noise and the bass extension is loosening up nicely. Update 2011: I don’t love the sound of the HFI-550’s after all. I found my old Sony MDR-7506’s actually seem more faithful in the bass department, despite their smaller (40mm) drivers. The insight still stands — if you want to rock the subs, make sure you can hear the lows with your monitoring setup. A good pair of cans can help you check your mixes: you can hear the bass without the distractions of any room modes or other free-air acoustic problems.
If you can hear the sub-bass, you can mix the sub-bass. Simple as that.