Posted on | April 6, 2014 | No Comments
1) Aeolian Intro
3) C5 (Patchen’s Yellow Fever Mix)
4) Night Groove (Original Mix)
5) Night Groove (Bluetech Remix)
7) Talk Is Cheap
Posted on | November 17, 2014 | No Comments
I got a modular piece accepted on the Make Noise Records “Shared System Series” compilation today! The series is intended to showcase artists using the MakeNoise SharedSystem rig to make a live composition, with no overdubs and no external effects (except an optional reverb).
Here’s my piece:
Here’s the patch:
And check out the whole playlist here:
Posted on | April 6, 2014 | No Comments
So I recently got a MakeNoise SharedSystem modular rig, and one thing it apparently lacks is the ability to make… noise. White noise.
However, by pushing the Wogglebug and the DPO’s internal modulation routing to the extreme, you can get some decent-sounding white noise. Basically, you turn most of the knobs on both modules all the way clockwise and listen to the DPO final output.
Here’s how it sounds, going through an MMG for filter sweeps and the Echophon for some delay:
Posted on | February 29, 2012 | No Comments
This is a very simple trick to do, but not so obvious to figure out that it’s even possible. The idea is to sidechain compress the processing on a Return bus by its own input signal, in order to clear out some “empty” space around the dry signal. It’s like making a “breathing fx bus”.
For example, if you have a staccato vocal sample being sent into a reverb or a delay, using this trick the effect tails will “swell in” over time after the dry signal stops. It’s similar to kick sidechaining.
Here’s an example without a halo:
That’s not the most inspiring demo, but this can sound very organic, and helps clear space in a full mix. To set up in Live:
- Send sound from an Audio track to a Return track.
- On the Return track, add a plugin that creates a temporal tail: i.e. a reverb or delay.
- Add a compressor after the fx.
- Enable Sidechain, and set the Audio From dropdown to the same Return track you’re on.
- Set the Audio From position to “Pre FX” in order to sidechain from the dry signal.
- Set up your threshold, release, ratio etc. to get your desired “halo” pumping sound around the input signal.
This can be a really nice way to get some breathy fluttering organic motion in a network of Return tracks that might even be cross-sending signal to each other in a feedback network…
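The gain behavior the compressor produces can be sketched numerically. This is a toy model, not Live's actual compressor: an envelope follower tracks the dry signal, and the wet bus is ducked while the envelope sits above the threshold, then swells back as it releases. All parameter values are arbitrary.

```ruby
# Toy model of the "breathing fx bus": the wet (reverb/delay) signal is
# ducked by an envelope follower on the dry signal, so the effect tail
# swells back in after the dry sound stops.
def duck(dry, wet, threshold: 0.2, ratio: 4.0, release: 0.95)
  env = 0.0
  wet.each_with_index.map do |w, i|
    level = dry[i].abs
    env = level > env ? level : env * release   # fast attack, slow release
    gain = env > threshold ? (threshold + (env - threshold) / ratio) / env : 1.0
    w * gain
  end
end

# A short dry burst followed by silence, over a constant wet tail:
dry = [1.0] * 10 + [0.0] * 90
wet = [0.5] * 100
out = duck(dry, wet)
# While the dry signal sounds, the wet bus is pushed down; as the
# envelope releases, the tail swells back up to full level.
```

The "halo" size is set by the release time: longer release means a slower swell after the dry signal stops.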
Posted on | May 11, 2011 | No Comments
So I’ve had a Ruin & Wesen MiniCommand for a little under a year, but haven’t been using it as much as I would like because it didn’t integrate well with my setup — until last night.
The standard way to use the MiniCommand is to connect it in a closed MIDI loop with the device in question — which makes it hard to use in a computer-based MIDI setup with a sequencer. There are ways around this, e.g. daisy-chaining the MiniCommand between the computer’s MIDI interface and the device you want to control, but I have found that this introduces small timing delays (enough to drive me crazy).
Posted on | January 6, 2011 | 3 Comments
Here’s a cool sound-design trick. If you want to get a vocal-sounding ‘formant filter’ effect out of a synth that only has a normal lowpass filter, you can take advantage of a quirk of sample-rate reduction effects to generate multiple “mirrored” filter sweeps through the wonder of aliasing.
Here’s a sound clip from my machinedrum with a simple sawtooth note and a resonant lowpass filter being modulated down over a quick sweep. It’s played four times, each with increasing amounts of sample-rate reduction applied:
This sample looks like this in a sonogram (I used the Sonogram View plugin that Apple includes with XCode). Horizontal axis is time, vertical is frequency:
Notice that as the aliasing (reflected frequencies) increases with the sample-rate reduction, you begin to see multiple copies of the filter sweep. This creates the lovely, complicated “alien voice” sound. Here’s a short MachineDrum loop I was playing around with when I realized what was going on here:
And for the Elektron-heads reading this, here’s the MD sysex for that pattern+kit:
PS: the wikipedia article on aliasing has a good rundown on the details of this phenomenon.
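The mirroring can be reproduced numerically. The sketch below (sample rates and frequencies are arbitrary, and sample-and-hold is only a crude stand-in for a real sample-rate reducer) shows a tone above the reduced Nyquist frequency folding back down into the audible band:

```ruby
# Numeric sketch of the mirrored-sweep effect: sample-rate reduction,
# implemented here as sample-and-hold, folds a tone above the reduced
# Nyquist frequency back down as an alias.
SR = 8000.0

def sine(freq, n)
  (0...n).map { |i| Math.sin(2 * Math::PI * freq * i / SR) }
end

# Crude sample-rate reducer: hold each sample for `factor` frames.
def sample_and_hold(x, factor)
  x.each_with_index.map { |_, i| x[i - i % factor] }
end

# Magnitude of a single DFT bin (naive, for demonstration only).
def magnitude_at(x, freq)
  re = im = 0.0
  x.each_with_index do |v, i|
    ang = 2 * Math::PI * freq * i / SR
    re += v * Math.cos(ang)
    im += v * Math.sin(ang)
  end
  Math.hypot(re, im) / x.size
end

clean   = sine(1500.0, 8000)          # a 1500 Hz tone
crushed = sample_and_hold(clean, 4)   # effective rate 2000 Hz, Nyquist 1000 Hz
# The crushed signal now carries a mirrored copy at 2000 - 1500 = 500 Hz,
# which is why a single filter sweep shows up as several in the sonogram.
```

With a swept filter instead of a fixed tone, each instantaneous frequency gets its own mirrored partner, producing the stacked sweeps visible in the sonogram.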
Posted on | September 12, 2010 | No Comments
Out now on Beatport, Wake Up Tech features three original tracks and two dancefloor-friendly remixes by Pointbender (Sean Anderson) and Gift Culture (Michael Hale). This EP was released under my tech-house alias ‘Chakaharta’.
- Superbroken (Pointbender’s Superbender Mix)
- Superbroken (Gift Culture’s Psytech Mix)
Album artwork by Jamie Cameron Northrup.
Available for Download on Beatport. Please pass the word if you like it, and thanks for your support!
Nick Warren – “Excellent” [Hope Recordings]
Noah Pred – “thanks for sending! pointbender mix works best for me but the gift culture mix has a nice groove too” [Thoughtless Music]
Josh Collins – “really like those tracks, nice work!” [Human Life/NRK]
Shur-i-kan – “Very chunky!” [Freerange / NRK / Slip & Slide]
Soul Minority – “Alembe is Superb !! Will support Other tracks are a bit too techy for me, but Alembe in 10/10 !!! Thanks !” [Kolour / Pack Up And Dance / Stratospherik]
Posted on | June 6, 2010 | 4 Comments
In Part one of this series, I posted tips for getting the Monomachine and Machinedrum synced and recording properly with your Live sessions. The other half of the equation is avoiding the operations that can introduce latency and timing errors during your sessions.
Ableton Prints Recordings Where It Thinks You Heard Them
I guess this design must be intuitive for many users, but it confused me for a while. If you have a setup with anything but a minuscule audio buffer, monitoring through a virtual instrument with a few latency-inducing plugins in the monitoring chain, you will hear a fair amount of monitoring latency when you play a note. The same goes for recording audio.
When recording a MIDI clip, I expected Live to place the MIDI events where I actually played them — which it doesn’t. It shifts the MIDI notes later in time to match when you actually heard the output sound — accounting for your audio buffer delay, the latency of your virtual instrument, and any processing delay from plugins in the downstream signal path. There’s one exception — it doesn’t account for delays you might hear via any “Sends” your track is using.
So your MIDI notes (and CC’s) are recorded with “baked-in” delays the size of your monitoring chain latency. I’m going to call this baked latency.
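To make "baked latency" concrete, here's a small sketch: since the recorded timestamps reflect when you *heard* each note, recovering when you *played* it means shifting every event earlier by the total monitoring-chain latency. All the numbers and the event format here are hypothetical, purely for illustration.

```ruby
# Illustration of "baked latency": recorded times = performed times +
# monitoring-chain latency, so we shift events earlier to recover the
# performance. All values below are hypothetical.
BUFFER_MS     = 5.8    # e.g. roughly a 256-sample buffer at 44.1 kHz
INSTRUMENT_MS = 12.0   # virtual instrument + downstream plugin latency

def unbake(events, latency_ms)
  events.map { |e| e.merge(time_ms: e[:time_ms] - latency_ms) }
end

recorded = [{ note: 60, time_ms: 517.8 }, { note: 64, time_ms: 767.8 }]
played   = unbake(recorded, BUFFER_MS + INSTRUMENT_MS)
# Each "played" time sits 17.8 ms earlier than its recorded ("heard") time.
```

The catch, of course, is that this only works if you know the full chain latency — which is exactly what makes the baked-in shift so hard to undo after the fact.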
Posted on | June 6, 2010 | 8 Comments
Recently I’ve been (going crazy) getting the timing tight between Ableton and two outboard sequencers — the Elektron Monomachine and Machinedrum. On their own, these silver boxes have amazingly tight timing. They can sync to each other to create a great live setup.
Add a computer DAW into the loop, and you introduce jitter, latency, and general zaniness to the equation. And it’s not trivial — this is obviously-missing-the-downbeat, shoes-in-a-dryer kind of bad. I tested the jitter / latency by ear, as well as by recording audio clips and measuring the millisecond offsets from the expected hit times.
I don’t think this is fundamentally a slow computer / poor setup issue either — I’m running a good interface, using a tiny 32 sample audio buffer. The rest of the setup is an i7 Intel Mac running OS X 10.6.3, Ableton Live 8.1.3, Emagic Unitor 8 midi interface and an Elektron TM-1 TurboMidi interface for the Machinedrum.
Below is a journal of what’s working, what isn’t, and my theories on why… Read more
Posted on | November 20, 2009 | 2 Comments
The basic idea is to use a simple OSC library available for Ruby to code interesting music, and have Native Instruments’ Reaktor serve as the sound engine. Tadayoshi Funaba has an excellent site including all sorts of interesting Ruby modules. I grabbed the osc.rb module and had fun with it.
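For a sense of how little is needed, here's a self-contained sketch using only Ruby's standard library — it hand-encodes an OSC message and sends it over UDP, rather than using osc.rb's own API (which isn't shown here). The `/note` address pattern and port 10000 are placeholders; they'd need to match whatever your Reaktor ensemble's OSC settings expect.

```ruby
require 'socket'

# OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
def osc_pad(s)
  s + "\x00" * (4 - s.bytesize % 4)
end

# Build a minimal OSC message: address pattern, type tag string,
# then big-endian int32 ('i') and float32 ('f') arguments.
def osc_message(address, *args)
  tags    = ',' + args.map { |a| a.is_a?(Float) ? 'f' : 'i' }.join
  payload = args.map { |a| a.is_a?(Float) ? [a].pack('g') : [a].pack('N') }.join
  osc_pad(address) + osc_pad(tags) + payload
end

# Send a note-style message to a synth listening on localhost:10000
# (address pattern and port are hypothetical — match your receiver).
sock = UDPSocket.new
sock.send(osc_message('/note', 60, 0.8), 0, 'localhost', 10000)
```

From there, sequencing is just Ruby: loop over an array of pitches, `sleep` between sends, and the receiving ensemble turns the messages into sound.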
I’m giving a brief presentation at the Bay Area Computer Music Technology Group (BArCMuT) meet-up tomorrow, unofficially as part of RubyConf 2009 here in San Francisco.
Here’s a link with downloads and code from my talk. It should be all you need to get started, if you have a system capable of running Ruby, and a copy of Reaktor 5+ (this should work with the demo version too).
Ruby mono sequence example: reaktorOscMonoSequences-192 MP3
Ruby polyphonic drums example: reaktorOscPolyphonicDrums-192 MP3
Leave a comment below if you have any questions, or cool discoveries!
keep looking »