Posted on | April 6, 2014 | No Comments
1) Aeolian Intro
3) C5 (Patchen’s Yellow Fever Mix)
4) Night Groove (Original Mix)
5) Night Groove (Bluetech Remix)
7) Talk Is Cheap
Posted on | December 24, 2015 | No Comments
This is a mid-side stereo patch exploring ways of spreading the various outputs from the DPO around the stereo field. The DPO sends 6 waveforms to the RxMx. The lower-numbered RxMx channels are fed fundamentals, and become the “mid” signal. The higher channels have increasingly spectrally-rich oscillators, and become the “side” signal.
The first clip has some reverb, ping-pong delay and drums added. The second clip is the raw patch:
Mid-side decoding is:
L = M + S
R = M - S
Maths is used to invert the “Side” signal and subtract it from “Mid”. The Optomix is used to add these two together. That way, we have the L and R signals.
I’m filtering the Side signal via the MMG, which allows for filtersweeps that only happen in stereo. It’s also a good idea to scoop out the frequency range occupied by the Mid signal with a highpass so the decoded sound isn’t as hollow. One of the many beauties of MS encoding is you can do stereo filtering (of sorts), using mono filters / effects.
One interesting part is calibrating the levels of channels 2 and 3 on Maths to get the balance right. Set the RxMx channel and radiate controls so you only hear channel 1. This should be pure “Mid”. Adjust Maths Ch 2 so that the L and R outputs are the same level. Next, set the RxMx so you only hear channel 6 — this should be pure “Side”. Adjust Ch 3 on Maths in the negative until the L and R channels have a roughly equal level. The sound should be completely phase-inverted from left to right.
Now, setting channel and radiate should mix between mono and stereo imaging, with the higher harmonics appearing mainly in the stereo field. Things can get pretty nuts if you tune Oscillator A and B to different frequencies.
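To make the decoding arithmetic concrete, here is a minimal Python sketch of what the Maths/Optomix section of the patch is computing (the `ms_decode` helper and its `side_gain` parameter are illustrative names of mine, not part of any library):

```python
import numpy as np

def ms_decode(mid, side, side_gain=1.0):
    """Decode mid/side signals into left/right.

    Mirrors the patch: Side is added to Mid for the left channel,
    and an inverted copy of Side is added to Mid for the right.
    side_gain plays the role of the Maths channel-level calibration.
    """
    m = np.asarray(mid, dtype=float)
    s = side_gain * np.asarray(side, dtype=float)
    return m + s, m - s

# A pure "Mid" signal decodes to identical L and R (mono)...
mid = np.array([0.5, -0.5, 0.25])
left, right = ms_decode(mid, np.zeros_like(mid))

# ...while a pure "Side" signal decodes to phase-inverted channels,
# just like the channel-6 calibration step described above.
l2, r2 = ms_decode(np.zeros(3), np.array([0.3, -0.3, 0.1]))
```

The two calibration checks in the patch (pure Mid gives equal L/R, pure Side gives phase-inverted L/R) fall straight out of the two return values.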
Posted on | November 17, 2014 | No Comments
I got a modular piece accepted on the Make Noise Records “Shared System Series” compilation today! The series is intended to showcase artists using the MakeNoise SharedSystem rig to make a live composition, with no overdubs and no external effects (except an optional reverb).
Here’s my piece:
Here’s the patch:
And check out the whole playlist here:
Posted on | April 6, 2014 | No Comments
So I recently got a MakeNoise SharedSystem modular rig, and one thing it was apparently missing was the ability to make… noise. White noise.
However, by pushing the Wogglebug and the DPO’s internal modulation routing to the extreme, you can get some decent-sounding white noise. Basically, you turn most of the knobs on both modules all the way clockwise and listen to the DPO final output.
Here’s how it sounds, going through an MMG for filter sweeps and the Echophon for some delay:
Posted on | February 29, 2012 | No Comments
This is a very simple trick to do, but it's not obvious that it's even possible. The idea is to sidechain-compress the processing on a Return bus with its own input signal, in order to clear out some “empty” space around the dry signal. It’s like making a “breathing fx bus”.
For example, if you have a staccato vocal sample being sent into a reverb or a delay, using this trick the effect tails will “swell in” over time after the dry signal stops. It’s similar to kick sidechaining.
Here’s an example without a halo:
That’s not the most inspiring demo, but this can sound very organic, and helps clear space in a full mix. To set up in Live:
- Send sound from an Audio track to a Return track.
- On the Return track, add a plugin that creates a temporal tail, i.e. a reverb or delay.
- Add a compressor after the fx.
- Enable Sidechain, and set the Audio From dropdown to the same Return track you’re on.
- Set the Audio From position to “Pre FX” in order to sidechain from the dry signal.
- Set up your threshold, release, ratio etc. to get your desired “halo” pumping sound around the input signal.
This can be a really nice way to get some breathy fluttering organic motion in a network of Return tracks that might even be cross-sending signal to each other in a feedback network…
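The steps above can be modeled offline. Here is a rough Python sketch of the ducking behavior; the `sidechain_duck` helper and its crude peak-follower detector are my own simplification, not how Live's Compressor actually works internally:

```python
import numpy as np

def sidechain_duck(wet, key, threshold=0.1, ratio=4.0, release=0.995):
    """Duck the `wet` (fx return) signal using `key` (the dry input).

    A simple peak follower stands in for the compressor's detector.
    Gain reduction is applied while the key envelope is above the
    threshold, then recovers at `release` per sample, so the effect
    tail "swells in" after the dry signal stops.
    """
    env = 0.0
    out = np.empty_like(wet, dtype=float)
    for i, (w, k) in enumerate(zip(wet, key)):
        env = max(abs(k), env * release)        # peak follower with release
        if env > threshold:
            over = env / threshold
            gain = over ** (1.0 / ratio - 1.0)  # compress the overshoot
        else:
            gain = 1.0
        out[i] = w * gain
    return out

# A loud dry burst followed by silence: the return is ducked during
# the burst and recovers to full level afterwards.
wet = np.ones(2100)
key = np.concatenate([np.ones(100), np.zeros(2000)])
out = sidechain_duck(wet, key)
```

The `release` constant here is what shapes the “halo”: a slower release makes the tail swell back in more gradually, just like a long release time on Live's Compressor.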
Posted on | May 11, 2011 | No Comments
So I’ve had a Ruin & Wesen MiniCommand for a little under a year, but haven’t been using it as much as I would like because it didn’t integrate well with my setup — until last night.
The standard way to use the MiniCommand is to connect it in a closed MIDI loop with the device in question, which makes it hard to use in a computer-based MIDI setup with a sequencer. There are ways around this, e.g. daisy-chaining the MiniCommand between the computer’s MIDI interface and the device you want to control, but I have found that this introduces some small timing delays (enough to drive me crazy).
Posted on | January 6, 2011 | 3 Comments
Here’s a cool sound-design trick. If you want to get a vocal-sounding ‘formant filter’ effect out of a synth that only has a normal lowpass filter, you can take advantage of a quirk of sample-rate reduction effects to generate multiple “mirrored” filter sweeps through the wonder of aliasing.
Here’s a sound clip from my machinedrum with a simple sawtooth note and a resonant lowpass filter being modulated down over a quick sweep. It’s played four times, each with increasing amounts of sample-rate reduction applied:
This sample looks like this in a sonogram (I used the Sonogram View plugin that Apple includes with Xcode). Horizontal axis is time, vertical is frequency:
Notice that as the aliasing (reflected frequencies) increases with the sample-rate reduction effect, you begin to see multiple copies of the filter sweep. This creates the lovely, complicated “alien voice” sound. Here’s a short MachineDrum loop I was playing around with when I realized what was going on here:
And for the Elektron-heads reading this, here’s the MD sysex for that pattern+kit:
PS: the wikipedia article on aliasing has a good rundown on the details of this phenomenon.
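The folding itself is easy to compute. This little sketch (the `aliased_freq` helper is my own name for it) shows where a partial lands after sample-rate reduction, which is exactly why a single downward sweep turns into mirrored copies:

```python
def aliased_freq(f, fs):
    """Frequency heard after sampling a tone of frequency f at rate fs.

    Aliasing is periodic in fs, and frequencies above fs/2 reflect
    ("fold") back into the 0..fs/2 band, producing the mirrored
    copies of the filter sweep seen in the sonogram.
    """
    f = f % fs                 # aliasing repeats every fs
    return min(f, fs - f)      # fold around the Nyquist frequency

# With the sample rate reduced to (say) 8 kHz, a partial sweeping
# down from 7 kHz is heard rising from 1 kHz toward Nyquist:
# 7 kHz -> 1 kHz, 5 kHz -> 3 kHz, 3 kHz passes through unchanged.
```

So each harmonic of the sawtooth traces its own reflected sweep, and together they stack into the formant-like bands in the sonogram.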
Posted on | September 12, 2010 | No Comments
Out now on Beatport, Wake Up Tech features three original tracks and two dancefloor-friendly remixes by Pointbender (Sean Anderson) and Gift Culture (Michael Hale). This EP was released under my tech-house alias ‘Chakaharta’.
- Superbroken (Pointbender’s Superbender Mix)
- Superbroken (Gift Culture’s Psytech Mix)
Album artwork by Jamie Cameron Northrup.
Available for Download on Beatport. Please pass the word if you like it, and thanks for your support!
Nick Warren – “Excellent” [Hope Recordings]
Noah Pred – “thanks for sending! pointbender mix works best for me but the gift culture mix has a nice groove too” [Thoughtless Music]
Josh Collins – “really like those tracks, nice work!” [Human Life/NRK]
Shur-i-kan – “Very chunky!” [Freerange / NRK / Slip & Slide]
Soul Minority – “Alembe is Superb !! Will support Other tracks are a bit too techy for me, but Alembe in 10/10 !!! Thanks !” [Kolour / Pack Up And Dance / Stratospherik]
Posted on | June 6, 2010 | 4 Comments
In Part one of this series, I posted tips for getting the Monomachine and Machinedrum synced and recording properly with your Live sessions. The other half of the equation is which operations to avoid that might introduce latency and timing errors during your sessions.
Ableton Prints Recordings Where It Thinks You Heard Them
I guess this design must be intuitive for many users, but it confused me for a while. If you have anything but a minuscule audio buffer and you're monitoring through a virtual instrument with a few latency-inducing plugins in the chain, you will hear a fair amount of monitoring latency when you play a note. The same goes for recording audio.
When recording a MIDI clip, I expected Live to place the MIDI events at the moment I actually played them, but it doesn't. It shifts the notes later in time to match when you heard the output sound, accounting for your audio buffer delay, the latency of your virtual instrument, and any audio processing delay from plugins in the downstream signal path. There’s one exception: it doesn't account for delays you might hear through any “Sends” your track is using.
So your MIDI notes (and CC’s) are recorded with “baked-in” delays the size of your monitoring chain latency. I’m going to call this baked latency.
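If you ever want to undo that shift after the fact, the offset is just the monitoring-chain latency described above. A hypothetical helper (the name, parameters, and units are mine, not anything in Live or its API) might look like:

```python
def unbake_midi(events_ms, buffer_samples, sample_rate, plugin_delay_ms=0.0):
    """Shift recorded MIDI event times earlier to undo 'baked' latency.

    Live prints notes where you *heard* them, so the recorded times
    include the audio-buffer delay plus any plugin processing delay.
    Subtracting that total moves events back to where they were played.
    """
    buffer_ms = 1000.0 * buffer_samples / sample_rate
    baked = buffer_ms + plugin_delay_ms
    return [t - baked for t in events_ms]

# e.g. a 441-sample buffer at 44.1 kHz bakes in exactly 10 ms,
# so events recorded at 100 ms and 200 ms were played at 90 and 190.
shifted = unbake_midi([100.0, 200.0], buffer_samples=441, sample_rate=44100)
```

In practice the plugin delay term is the hard part to pin down, since it varies per plugin chain, which is presumably why Live does this bookkeeping for you.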
Posted on | June 6, 2010 | 10 Comments
Recently I’ve been (going crazy) getting the timing tight between Ableton and two outboard sequencers — the Elektron Monomachine and Machinedrum. On their own, these silver boxes have amazingly tight timing. They can sync to each other to create a great live setup.
Add a computer DAW into the loop, and you introduce jitter, latency, and general zaniness to the equation. And it’s not trivial — this is obviously-missing-the-downbeat, shoes-in-a-dryer kind of bad. I tested the jitter / latency by ear, as well as by recording audio clips and measuring the millisecond offsets from the expected hit times.
I don’t think this is fundamentally a slow computer / poor setup issue either — I’m running a good interface, using a tiny 32 sample audio buffer. The rest of the setup is an i7 Intel Mac running OS X 10.6.3, Ableton Live 8.1.3, Emagic Unitor 8 midi interface and an Elektron TM-1 TurboMidi interface for the Machinedrum.
Below is a journal of what’s working, what isn’t, and my theories on why…