jdn

... purveyor of funky beats and assorted electric treats ...

Labs

Most Common Keys For House and Deep Tech Music

Posted on | March 6, 2018 | No Comments

In an attempt to get my tracks to sound more like the tunes I like to DJ, I ran across an article about how club subs can reproduce frequencies down to about 40 Hz, which is just below the note E1.

Note   Frequency (Hz)
C1     32.7
D1     36.7
E1     41.2
F1     43.7
G1     49.0
A1     55.0

The article (wish I could find it again) mentioned that's why a bunch of tracks are in F and G — because those root bass notes fall comfortably in the range that can be played back. A low F1 note is probably one of those tones that sounds KILLER on a good rig.
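Those frequencies follow from standard equal temperament with A4 = 440 Hz. Here's a quick sketch (Python, just for illustration) that recomputes the table and checks which root notes clear a roughly 40 Hz system:

```python
# Equal-temperament note frequencies, assuming A4 = 440 Hz (MIDI note 69).
# Just a sanity check on the table above and the ~40 Hz club-sub cutoff.

def note_freq(midi_note):
    """Frequency in Hz of a MIDI note number in 12-tone equal temperament."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

NOTES = {"C1": 24, "D1": 26, "E1": 28, "F1": 29, "G1": 31, "A1": 33}

for name, n in NOTES.items():
    f = note_freq(n)
    verdict = "clears a 40 Hz system" if f >= 40 else "below the cutoff"
    print(f"{name}: {f:5.1f} Hz  ({verdict})")
```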

Looking at some of my favorite house, techno, and deep tech tracks from 2017, I noticed that most are in F, Bb, Eb, G, or C. I'm not sure my analysis software (djay Pro 2) is discerning major vs. minor here.

But check out the circle of fifths:

(Circle of fifths diagram by Just plain Bill — own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4463183)

… those keys are all clustered in the upper-left corner, which means they're easy to transition between during a set, because jumps to neighboring keys in the circle of fifths sound good.
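To make that concrete, here's a tiny sketch (Python, illustration only) that builds the circle by stepping up a fifth at a time and prints each key's neighbors; F, Bb, Eb, C, and G all end up adjacent on the stretch Ab, Eb, Bb, F, C, G, D:

```python
# Circle of fifths for major keys, built by stepping up a perfect fifth (7 semitones).
# Sketch only -- shows why F, Bb, Eb, G, and C are easy to mix between.

PITCHES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

circle = []
p = 0  # start at C
for _ in range(12):
    circle.append(PITCHES[p])
    p = (p + 7) % 12  # up a fifth
# circle == ['C', 'G', 'D', 'A', 'E', 'B', 'Gb', 'Db', 'Ab', 'Eb', 'Bb', 'F']

for key in ["F", "Bb", "Eb", "G", "C"]:
    i = circle.index(key)
    print(f"{key}: neighbors {circle[(i - 1) % 12]} and {circle[(i + 1) % 12]}")
```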

Now interestingly, this guy's analysis of the Beatport Top 100 Tracks showed a different set of popular keys — but I don't play tracks as "Pop"-y as the Top 100. Who knows.

 

Wogglebug + DPO as White Noise Source

Posted on | April 6, 2014 | No Comments

So I recently got a MakeNoise SharedSystem modular rig, and one thing it apparently lacked was the ability to make… noise. White noise.

However, by pushing the Wogglebug and the DPO's internal modulation routing to the extreme, you can get some decent-sounding white noise. Basically, you turn most of the knobs on both modules all the way clockwise and listen to the DPO final output.


Here's how it sounds, going through an MMG for filter sweeps and the Echophon for some delay:

FX Halos in Ableton Live

Posted on | February 29, 2012 | No Comments

This is a very simple trick to do, but not so obvious to figure out that it's even possible. The idea is to sidechain-compress the processing on a Return bus using its own input signal, in order to clear out some "empty" space around the dry signal. It's like making a "breathing FX bus".

For example, if you have a staccato vocal sample being sent into a reverb or a delay, using this trick the effect tails will "swell in" over time after the dry signal stops. It's similar to kick sidechaining.

Here's an example without a halo:

fxNoHalo

Now with:

fxHalo

That's not the most inspiring demo, but this can sound very organic, and helps clear space in a full mix. To set up in Live:

  • Send sound from an Audio track to a Return track.
  • On the Return track, add a plugin that creates a temporal tail, e.g. a reverb or delay.
  • Add a compressor after the FX.
  • Enable Sidechain, and set the "Audio From" dropdown to the same Return track you're on.
  • Set the "Audio From" position to "Pre FX" so you sidechain from the dry signal.
  • Set your threshold, release, ratio, etc. to get the desired "halo" pumping sound around the input signal.

This can be a really nice way to get some breathy, fluttering, organic motion in a network of Return tracks that might even be cross-sending signal to each other in a feedback network…
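The Live routing above is the whole trick, but for the curious, here's a rough offline sketch of the same gain behavior in Python with numpy (an illustration, not a model of what Live's Compressor actually does): an envelope follower on the dry signal ducks the wet return, so the tail swells back in once the dry sound stops.

```python
import numpy as np

# Rough offline sketch of the "halo": duck the wet return with an envelope
# follower on the dry signal, so the effect tail swells in after the dry stops.
# Illustration only -- not a model of Live's Compressor.

def envelope(dry, sr, attack_ms=5.0, release_ms=250.0):
    """One-pole attack/release envelope follower on the dry signal."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(dry)
    level = 0.0
    for i, x in enumerate(np.abs(dry)):
        coef = a if x > level else r
        level = coef * level + (1.0 - coef) * x
        env[i] = level
    return env

def halo_duck(wet, dry, sr, depth=0.9):
    """Attenuate the wet signal while the dry signal is loud."""
    env = envelope(dry, sr)
    gain = 1.0 - depth * (env / (env.max() + 1e-9))
    return wet * gain

# Usage (reverb() standing in for whatever effect fills the Return):
# out = dry + halo_duck(reverb(dry), dry, sr=44100)
```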


MiniCommand, Machinedrum, and OS X

Posted on | May 11, 2011 | No Comments

So I've had a Ruin & Wesen MiniCommand for a little under a year, but haven't been using it as much as I would like because it didn't integrate well with my setup — until last night.

The standard way to use the MiniCommand is to connect it in a closed MIDI loop with the device in question — which makes it hard to use in a computer-based MIDI setup with a sequencer. There are ways around this, e.g. daisy-chaining the MiniCommand between the computer's MIDI interface and the device you want to control, but I have found that this introduces some small timing delays (enough to drive me crazy).

Read more

Alien Autopsy Via Sample-Rate Reduction

Posted on | January 6, 2011 | 2 Comments

Here's a cool sound-design trick. If you want to get a vocal-sounding 'formant filter' effect out of a synth that only has a normal lowpass filter, you can take advantage of a quirk of sample-rate reduction effects to generate multiple "mirrored" filter sweeps through the wonder of aliasing.
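Here's a rough Python/numpy sketch of the same trick (not the Machinedrum's actual algorithm, and I'm using a plain non-resonant lowpass for simplicity): sweep a lowpass over a sawtooth, then crush the result with a sample-and-hold style rate reducer so the sweep gets mirrored around the new, lower Nyquist frequency.

```python
import numpy as np
from scipy.signal import sawtooth, butter, sosfilt

# Sketch of the trick (not the Machinedrum's algorithm): sweep a lowpass over
# a sawtooth, then reduce the sample rate by sample-and-hold so aliasing
# mirrors the sweep around the new Nyquist frequency.

sr = 44100
t = np.arange(0, 1.0, 1 / sr)
saw = sawtooth(2 * np.pi * 65.4 * t)            # C2-ish sawtooth

# Crude swept lowpass: filter short blocks with a falling cutoff
# (block edges will click a bit -- good enough for a demo).
swept = np.zeros_like(saw)
block = 512
for i in range(0, len(saw), block):
    frac = i / len(saw)
    cutoff = 8000 * (1 - frac) + 200 * frac     # 8 kHz sweeping down to 200 Hz
    sos = butter(2, cutoff / (sr / 2), output="sos")
    swept[i:i + block] = sosfilt(sos, saw[i:i + block])

# Sample-rate reduction: keep 1 of every N samples and hold it.
N = 8                                           # bigger N = more aliasing
crushed = np.repeat(swept[::N], N)[:len(swept)]
```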

Here's a sound clip from my Machinedrum with a simple sawtooth note and a resonant lowpass filter being modulated down over a quick sweep. It's played four times, each with increasing amounts of sample-rate reduction applied:

increasing srr

This sample looks like this in a sonogram (I used the Sonogram View plugin that Apple includes with Xcode). The horizontal axis is time, the vertical axis is frequency:

Notice that as the aliasing (reflected frequencies) increases with the sample-rate reduction effect, you begin to see multiple copies of the filter sweep. This creates the lovely, complicated "alien voice" sound. Here's a short Machinedrum loop I was playing around with when I realized what was going on here:

alien-autopsy-192

And for the Elektron-heads reading this, here's the MD sysex for that pattern+kit:
alien-autopsy-md.syx

PS: the Wikipedia article on aliasing has a good rundown on the details of this phenomenon.

Ableton Live, The Machinedrum and The Monomachine (Part 2): Minimizing Latency

Posted on | June 6, 2010 | 4 Comments

In Part One of this series, I posted tips for getting the Monomachine and Machinedrum synced and recording properly with your Live sessions. The other half of the equation is which operations to avoid that might introduce latency and timing errors during your sessions.

Ableton Prints Recordings Where It Thinks You Heard Them

I guess this design must be intuitive for many users, but it confused me for a while. If you have a setup with anything but a minuscule audio buffer, and you're monitoring through a virtual instrument with a few latency-inducing plugins in the monitoring chain, you will hear a fair amount of monitoring latency when you play a note. The same goes for recording audio.

When recording a MIDI clip, I expected that Live would put the MIDI events where I actually played them — which it doesn't. It shifts the MIDI notes later in time to match when you actually heard the output sound, trying to account for your audio buffer delay, the latency of your virtual instrument, and any audio processing delay from plugins in the downstream signal path. There's one exception: it doesn't worry about delays you might hear due to any Sends your track is using.

So your MIDI notes (and CCs) are recorded with "baked-in" delays the size of your monitoring-chain latency. I'm going to call this baked latency.
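As a rough illustration of how that offset adds up (the numbers below are made up, not measurements from my rig):

```python
# Back-of-the-envelope "baked latency" arithmetic.
# All numbers are made-up examples, not measurements.

sample_rate = 44100
buffer_size = 256              # audio buffer, in samples
plugin_delay_samples = 1024    # total reported latency of plugins in the monitoring chain

buffer_ms = 1000 * buffer_size / sample_rate
plugin_ms = 1000 * plugin_delay_samples / sample_rate
baked_ms = buffer_ms + plugin_ms   # roughly the shift applied to the recorded MIDI

print(f"buffer: {buffer_ms:.1f} ms, plugins: {plugin_ms:.1f} ms, baked in: {baked_ms:.1f} ms")
```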

Read more

Ableton Live, The Machinedrum and The Monomachine: Midi Sync Notes

Posted on | June 6, 2010 | 9 Comments

Recently I've been (going crazy) getting the timing tight between Ableton and two outboard sequencers — the Elektron Monomachine and Machinedrum. On their own, these silver boxes have amazingly tight timing. They can sync to each other to create a great live setup.

Add a computer DAW into the loop, and you introduce jitter, latency, and general zaniness to the equation. And it's not trivial — this is obviously-missing-the-downbeat, shoes-in-a-dryer kind of bad. I tested the jitter / latency by ear, as well as by recording audio clips and measuring the millisecond offsets from the expected hit times.
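The measurement itself is simple arithmetic; here's the kind of thing I mean (Python sketch with hypothetical onset times, not my actual data):

```python
# Offsets of detected hits from a 16th-note grid at a given BPM.
# The onset times are hypothetical, just to show the arithmetic.

bpm = 125.0
sixteenth_s = 60.0 / bpm / 4                 # length of one 16th note, in seconds

onsets_s = [0.003, 0.1225, 0.239, 0.3641]    # hit times measured from a recorded clip

for i, onset in enumerate(onsets_s):
    expected = i * sixteenth_s
    offset_ms = (onset - expected) * 1000
    print(f"hit {i}: expected {expected * 1000:6.1f} ms, got {onset * 1000:6.1f} ms, off by {offset_ms:+.1f} ms")
```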

I don't think this is fundamentally a slow-computer / poor-setup issue either — I'm running a good interface with a tiny 32-sample audio buffer. The rest of the setup is an i7 Intel Mac running OS X 10.6.3, Ableton Live 8.1.3, an Emagic Unitor 8 MIDI interface, and an Elektron TM-1 TurboMidi interface for the Machinedrum.

Below is a journal of what's working, what isn't, and my theories on why… Read more

How To: Algorithmic Music with Ruby, Reaktor, and OSC

Posted on | November 20, 2009 | 2 Comments

The basic idea is to use a simple OSC library available for Ruby to code interesting music, and have Native Instruments' Reaktor serve as the sound engine. Tadayoshi Funaba has an excellent site including all sorts of interesting Ruby modules. I grabbed the osc.rb module and had fun with it.
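To give a flavor of the idea, here's the same pattern sketched in Python with the python-osc package rather than the Ruby osc.rb module from the talk; the OSC address and port are assumptions, so match them to whatever your Reaktor ensemble is set to listen for:

```python
import time
import random
from pythonosc.udp_client import SimpleUDPClient

# Algorithmic note stream over OSC, sketched with python-osc instead of the
# Ruby osc.rb module. The address pattern and port are assumptions -- set
# them to match your Reaktor ensemble's OSC configuration.

client = SimpleUDPClient("127.0.0.1", 10000)

scale = [48, 51, 53, 55, 58, 60]      # a C minor-ish pitch set, as MIDI note numbers

for step in range(32):
    note = random.choice(scale)
    velocity = random.randint(60, 120)
    client.send_message("/sequencer/note", [note, velocity])
    time.sleep(0.125)                 # 16th notes at 120 BPM
```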

I'm giving a brief presentation at the Bay Area Computer Music Technology Group (BArCMuT) meet-up tomorrow, unofficially as part of RubyConf 2009 here in San Francisco.

Here's a link with downloads and code from my talk. It should be all you need to get started, if you have a system capable of running Ruby and a copy of Reaktor 5+ (this should work with the demo version too).

Ruby mono sequence example:

reaktorOscMonoSequences-192 MP3

Ruby polyphonic drums example:

reaktorOscPolyphonicDrums-192 MP3

Leave a comment below if you have any questions, or cool discoveries!

oscMenu

Machinedrum Recursive Sampling Test 02

Posted on | November 16, 2009 | 4 Comments


So this is another example of using the MD's internal sampler to create a recursive "feedback loop" of sampling and resampling and resampling… This has a tendency to psychedelically twist the underlying beat. The way this stuff sounds has really surpassed my wildest dreams.
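Outside the box, the recursive idea looks roughly like this (a numpy sketch, not what the MD's sampler actually does): each pass plays back the previous result at a new rate and folds it back under the original loop.

```python
import numpy as np

# Sketch of recursive resampling (not the Machinedrum's sampler): each pass
# plays back the previous pass at a new rate and mixes it under the loop.

def resample(x, ratio):
    """Naive playback-rate change via linear interpolation."""
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

def recurse(loop, passes=4, ratio=0.75, mix=0.5):
    out = loop.copy()
    for _ in range(passes):
        layer = resample(out, ratio)          # "resample" the previous pass
        layer = np.resize(layer, len(out))    # wrap it back to the loop length
        out = (1 - mix) * loop + mix * layer  # fold it under the original loop
    return out

# Usage: twisted = recurse(my_loop)  where my_loop is a mono float array
```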

MD Recurse Test 02

Read more

Machinedrum Recursive Sampling Test 01

Posted on | November 4, 2009 | 1 Comment

This was a first test of using the Machinedrum's internal sampler recursively. I was trying to emulate my fractal wavetable sounds in hardware, as closely as the MD could do it.

Read more
