- EQ Filter Slope Myths: Why Steeper Isn’t Always Better
When producers talk about EQ, slope settings are usually the least-understood part of the filter. You’ll often hear: “A 48 dB/octave filter sounds more professional.” “Steeper is always cleaner.” It isn’t true. In this post, we’ll cut through the myths and explain what EQ slopes actually do – and what they don’t – so you can make better mixing decisions without chasing the wrong settings.

What Is a Filter Slope?

A filter slope tells you how quickly an EQ reduces the level of frequencies beyond the cutoff point. It’s measured in decibels per octave.

- 6 dB/oct → gentle roll-off
- 12 dB/oct → standard musical slope
- 24 dB/oct → steeper, more defined
- 48 dB/oct → aggressive, surgical

An octave simply means “double or half the frequency.” If your cutoff is 200 Hz, one octave up is 400 Hz, and one octave down is 100 Hz.

Myth 1: Steeper Slopes Always Sound Better

Producers often reach straight for 24 or 48 dB/oct because they look cleaner. But a steeper slope is not automatically a better choice. In reality:

- Steeper minimum-phase filters introduce more phase shift
- They can introduce ringing or resonance near the cutoff point (depending on the design)
- They often sound unnatural in a musical context

What’s happening underneath: that phase shift is a small timing shift around the cutoff frequency. You won’t always hear it instantly, but overuse can make a mix feel slightly less open or alive. Gentle slopes (6 or 12 dB/oct) usually blend more naturally with the material.

Myth 2: A 48 dB/oct Filter Fixes Mud Instantly

A 48 dB/oct low-cut might look like it cleans a signal quickly… but it often hollows it out. When you remove frequencies too aggressively:

- Transients lose weight
- Instruments lose body
- The track becomes thin without you realising why

A 12 dB/oct cut at the right frequency is often more effective – and far more musical.
Myth 3: Slope = Quality

Some producers believe that higher slope numbers mean “better EQs.” But the truth is:

- Analog-modelled filters behave differently from ultra-clean digital filters
- Some designs introduce resonance, others don’t
- Phase response varies hugely across designs
- Linear-phase filters behave differently again

Linear-phase EQs avoid phase shift – but on steep slopes they introduce pre-ringing, a small echo that occurs just before a transient. On kicks and snares, that pre-ringing can be more damaging than phase shift itself. Steeper is never free. You always pay somewhere. The filter design matters far more than the number.

Myth 4: Slopes Don’t Affect the Sound (Only the Curve)

Another common misunderstanding. Slopes absolutely affect the sound – not just the graph. For example:

- A 6 dB/oct low-cut on vocals can sound warm and natural
- A 24 dB/oct low-cut in the same place can sound clinical
- A 12 dB/oct high-cut on a pad can sound smooth
- A 48 dB/oct high-cut can sound sudden or synthetic

The slope controls how the filter transitions into the cut – not just how steep it looks on screen. That transition is what your ear reacts to.

Choosing the Right Slope

A simple rule of thumb: if you’re unsure which slope to use, start with 12 dB/oct. It’s the most musical and forgiving. Go steeper only when the situation clearly needs it. Here’s how each behaves:

6 dB/oct – Natural & Gentle

Great for:

- Vocals
- Pads
- Acoustic instruments
- Bass cleanup without losing weight

Feels musical and almost invisible.

12 dB/oct – The Workhorse

The most flexible slope. Works on almost anything. Perfect for:

- Removing excess lows
- General tone shaping
- Mastering cleanup
- Subtle top-end smoothing

24 dB/oct – Defined & Clear

Use when you need stronger separation between frequency areas.
Works well for:

- Synths
- FX
- Kick/bass separation
- Removing rumble in electronic music

A useful trick: you can “build” a steeper slope by stacking two gentler filters in series – for example, two 12 dB/oct filters instead of one 24 dB/oct. Stacking gentler filters creates a smoother transition (knee) into the cut, which often sounds more natural than a single aggressive slope.

48 dB/oct – Surgical & Extreme

Good for:

- Eliminating noise
- Cleaning sub rumble
- Special FX
- Sound design
- Hard digital crossovers

Not usually ideal for natural instruments.

A Better Way to Think About Slopes

Instead of asking “Which slope is better?”, ask: “How natural or unnatural do I want this cut to feel?” Because slopes don’t just shape the frequency – they shape the character, phase behaviour, and perception of the sound.

EQ Slope Myths – Quick Recap

- Steeper isn’t automatically better
- Slopes affect sound and feel, not just the curve
- Gentle slopes are often more musical
- Filter design matters more than numbers
- Steeper filters always involve trade-offs
- You can stack gentler filters for finer control

Slopes aren’t about numbers. They’re about feel.
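The octave arithmetic behind those dB/oct figures is simple enough to sketch. This is a toy calculation, assuming the idealised asymptotic roll-off – real filters deviate near the cutoff, where the knee lives:

```python
import math

def rolloff_db(freq_hz, cutoff_hz, slope_db_per_oct):
    """Approximate attenuation (dB) of a high-cut filter well past its
    cutoff: slope multiplied by the number of octaves beyond the cutoff."""
    octaves = math.log2(freq_hz / cutoff_hz)
    return slope_db_per_oct * max(octaves, 0.0)

# High-cut at 5 kHz: 20 kHz sits two octaves above the cutoff.
for slope in (6, 12, 24, 48):
    print(f"{slope} dB/oct -> {rolloff_db(20000, 5000, slope):.0f} dB down at 20 kHz")
```

The same maths shows why stacking works: two 12 dB/oct filters in series sum to 24 dB/oct far from the cutoff, but each contributes its own gentler knee near it.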
- Slap Delay in Electronic Music: The Invisible Space
I’ve always liked slap delay. Not as an obvious effect. More for what happens when you take it away.

A slap is just a short bounce back of the original sound. Usually somewhere around 70–110 ms. One repeat. Low level. No drama. When it’s set right, you don’t hear it as a delay. You just feel that the sound belongs. Mute it, and suddenly the track feels flatter. Slightly disconnected. Like something’s been pulled forward out of the space it was sitting in. That’s what I mean by invisible space.

What It’s Actually Doing

A slap delay is really just a single early reflection. Not a reverb tail. Not an echo. Just a wall. Your ears are wired to interpret reflections as placement. A short reflection suggests proximity. A surface nearby. A physical environment. So even in a completely digital mix, that short repeat tells the brain: this sound exists somewhere. Without it, very clean electronic sounds can feel almost too graphic. Very precise. Very exposed. Sometimes that’s right. Sometimes it isn’t.

Why It Works So Well in Electronic Production

When sounds are very clean and direct, they can start to feel slightly exposed. That clarity is great – until it starts to feel slightly sterile. A slap delay adds:

- A bit of density
- A bit of depth
- A bit of glue

But it doesn’t blur the transient. Reverb can soften edges. Slap delay reinforces them. That’s the difference.

Setting It So It Disappears

There’s nothing complicated about it.

- Usually around 70–110 ms
- No real feedback
- Filtered top and bottom
- Just loud enough that you notice it when it’s gone

Filtering matters. Full-range slaps sound like delays. Filtered slaps sound like reflections. I’ll usually high-pass it so the low end stays clean, and roll a bit of top off so it doesn’t compete with the original. It shouldn’t announce itself. If you can clearly hear “the delay,” it’s probably too loud.

Where I Use It

Vocals. Claps. Short synth stabs. Anything that feels slightly pasted on top of the track instead of inside it.
Sometimes I’ll pan the slap a touch off the source – not wide, just enough to open space. It’s a small move, but it changes how the mix feels.

Slap vs Reverb

Reverb creates atmosphere. Slap creates placement. Reverb spreads time. Slap reinforces time. If reverb is air, slap is architecture. And in electronic music, structure often matters more than atmosphere.

The Real Point

Slap delay isn’t exciting. It won’t sell a plugin. It won’t impress anyone in isolation. But it’s often the difference between something sounding finished and something sounding like a demo. It’s not about effect. It’s about context. Sometimes a sound doesn’t need more processing. It just needs something to bounce off.
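For anyone who thinks in code, the whole effect fits in a few lines. This is a toy illustration, not any particular plugin – the function name, the 90 ms default, and the crude one-pole damping standing in for the top-end filtering are all my own assumptions:

```python
def slap_delay(signal, sample_rate=48000, delay_ms=90.0, level=0.25, damp=0.4):
    """Add a single filtered repeat (no feedback) to a mono signal.
    `level` is the repeat's linear gain; `damp` is a one-pole low-pass
    coefficient that darkens the repeat so it reads as a reflection."""
    offset = int(sample_rate * delay_ms / 1000.0)
    out = list(signal) + [0.0] * offset
    lp = 0.0
    for i, x in enumerate(signal):
        lp += damp * (x - lp)           # roll a bit of top off the repeat
        out[i + offset] += level * lp   # one repeat, low level, no feedback
    return out

# A single click at t=0 gains a quieter, darker repeat 90 ms later.
sr = 48000
wet = slap_delay([1.0] + [0.0] * sr, sr)
```

The key design choice mirrors the text: no feedback term at all, so there is exactly one reflection, and the damping keeps it from competing with the dry signal.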
- The Hidden “Legacy” Folder in Logic Pro
If you hold Option while clicking on an Audio FX slot in Logic Pro, something interesting happens. A 'Legacy' folder appears near the bottom of the plug-in list. It’s Apple’s tucked-away archive of classic Logic plug-ins – the ones that quietly disappeared as new tools replaced them. Inside it are old reverbs, gates, amps, DeEssers, Denoisers… and, to my surprise: Grooveshifter. For anyone who used older versions of Logic, that name might ring a bell.

What Was Logic Pro Grooveshifter?

Grooveshifter was a real-time groove correction plug-in. It wasn’t about hard quantising. It wasn’t about slicing regions. It wasn’t about rewriting timing from scratch. It was about tightening feel. You’d insert it on a drum loop or percussion track and it would subtly pull transients into line with the groove. No manual cutting. No Flex markers. No region editing. Just: insert → adjust → done. In a busy session, that mattered.

Why Logic Pro Grooveshifter Was So Useful

There are moments when a part is 90% there. The rhythm works. The feel is right. It just needs nudging. Grooveshifter was great for:

- Drum loops slightly drifting
- Percussion layers sitting ahead or behind
- Stems from other producers

Instead of destructive editing, it acted like a micro-timing assistant. Over time, tools in other DAWs like Ableton Live introduced powerful groove engines and warping systems. Logic evolved too, with Flex Time and groove templates. But Grooveshifter was fast. And sometimes speed beats precision.

Why Did Apple Hide It?

Apple never fully removed these plug-ins – they just stopped listing them normally. Likely reasons:

- Redundancy (Flex Time and Smart Tempo became more advanced)
- Legacy code architecture
- UI modernisation (older brushed-metal era plug-ins)

Rather than delete them outright, Apple buried them. Which means they’re still usable – for now.

Should You Use Logic Pro Grooveshifter Today?

It works. But it’s technically unsupported.
If you decide to use it:

- Treat it as a workflow shortcut
- Bounce the result once you’re happy
- Avoid building templates that rely on it long-term

That said, for tightening percussion quickly, it still does the job remarkably well.

A Small Reminder About Workflow

There’s something interesting about rediscovering tools like this. Modern DAWs are powerful. But sometimes the older, simpler utilities solved problems faster. Grooveshifter wasn’t flashy – it just fixed timing in seconds. Sometimes, that’s all you need.
- Clipping vs Compression: What Each Tool Really Does in Electronic Music Mixing
A lot of mix problems come down to one quiet mistake: using compression to solve peak problems. Sometimes it works. Often it doesn’t – and the mix starts feeling unstable, flat, or oddly aggressive. That’s not because compression is wrong. It’s because clipping and compression do different jobs – and affect sound in fundamentally different ways. Once that distinction clicks, level control gets simpler – and mixes stop fighting back.

What Compression Is Actually Doing

Compression reshapes level over time. Attack and release aren’t just technical settings – they decide how a sound moves. A compressor listens, reacts, and responds. That response changes the envelope of a sound, which is why compression affects:

- groove
- punch
- sustain
- perceived energy

Compression isn’t just turning things down. It’s re-drawing motion. That’s why compression can feel musical, heavy, soft, aggressive, or lazy depending on how it’s set. It’s also why compression can very easily change the feel of a part – even when it’s only doing a dB or two. If you want movement, compression makes sense.

What Clipping Is Actually Doing

Clipping works very differently. There’s no attack. No release. No time component. Clipping reshapes the waveform instantly, shaving off the tallest peaks the moment they occur. It doesn’t respond – it contains. Used lightly, clipping:

- stabilises transients
- increases perceived density
- creates headroom without softening attack

Clipping isn’t about dynamics. It’s about containment. That’s why clipping often feels invisible when used well. You don’t hear it “working” – you just notice that everything downstream behaves better.

Why Compression Often Feels Like It’s Working Too Hard

This is where things usually go wrong. Fast transients – especially drums and percussive synths – can carry huge peak energy without much musical weight. When those peaks hit a compressor first, the compressor reacts to the problem, not the sound. So what happens?
- The compressor clamps down harder than intended
- Release timing gets pulled around by stray hits
- The body of the sound starts moving when only the peaks needed control

The result is often pumping, dullness, or a mix that feels unstable even though nothing looks extreme on the meters. The compressor isn’t failing. It’s just being asked to fix something that wasn’t its job. If compression feels aggressive, smeary, or unpredictable, the peaks probably needed dealing with first.

Different Jobs, Different Tools

This is the core distinction: compression reshapes movement; clipping contains peaks. They overlap slightly, but they are not interchangeable. Compression is brilliant when you want a sound to move differently. Clipping is useful when you want a sound to behave better. Once peaks are contained, compression suddenly feels easier to set. It stops reacting to surprises and starts doing what you actually want it to do. That’s not a rule – it’s just cause and effect.

What About Very Fast Compressors?

It’s worth saying that very fast compressors – like the 1176-style designs or a Distressor – can catch peaks. With attack times measured in microseconds, they’ll grab transients far earlier than most compressors. But they still behave differently to a clipper. Even at their fastest settings, they’re reacting over time: pulling the attack down, reshaping the envelope, and releasing back into the sound. That changes feel and movement, not just height. Clippers don’t do that. They don’t listen, react, or recover – they simply remove excess peak energy instantly. That’s why fast compressors can sound punchy but different, while light clipping often feels invisible.

Order Matters (Conceptually, Not as a Rule)

There’s no fixed chain that works every time, but conceptually:

- Clipping first stabilises
- Compression after shapes
- Saturation adds character only if needed

Clipping here is structural. Compression is expressive.
Think of clipping as stopping the loudest moments from ruining everything else – not as a way of making things loud. If the sound feels solid before compression, everything that follows gets easier.

When Compression Is the Right Tool

Compression shines when movement is the goal:

- vocals that need levelling
- basslines that need shape
- pads that need controlled sustain
- anything where dynamics are the expression

In these cases, clipping first would miss the point. You’re not trying to contain peaks – you’re trying to sculpt how the sound breathes. That’s compression’s job.

When Clipping Is the Right Tool

Clipping often makes more sense when:

- transients are too tall
- drums feel spiky rather than punchy
- compressors keep reacting unexpectedly
- loudness feels fragile

Used lightly, clipping can tighten a sound without softening it. The attack stays intact, but the energy becomes more predictable. That’s especially useful on drums, percussion, and transient-heavy synths. If you want stability without losing punch, clipping usually wins.

Using Both Without Overdoing Either

The most controlled mixes rarely rely on one heavy process. They use small amounts, spread sensibly:

- light clipping to tame peaks
- gentle compression to shape movement
- restraint everywhere

If you can clearly hear either tool working, it’s probably too much. The goal isn’t effect – it’s control.

Final Thought

Clipping doesn’t replace compression. Compression doesn’t replace clipping. They do different jobs. When each tool is used for what it’s actually good at, mixes stop feeling forced – and loudness becomes a side effect rather than a struggle. Control the peaks. Shape the movement. Let the mix do the rest.
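The time-component difference is easy to demonstrate in code. This is a deliberately crude sketch – a toy level detector and gain law, not any real compressor design – but it shows the key behaviour: the clipper touches only the peak sample, while the compressor's gain change lingers into the samples that follow:

```python
def hard_clip(x, ceiling=0.8):
    """Clipper: no time component - each sample is limited instantly."""
    return [max(-ceiling, min(ceiling, s)) for s in x]

def compress(x, threshold=0.5, ratio=4.0, attack=0.8, release=0.01):
    """Toy feed-forward compressor: the gain follows a smoothed level
    detector over time, so the envelope moves, not just the peaks."""
    env, out = 0.0, []
    for s in x:
        level = abs(s)
        coeff = attack if level > env else release   # fast attack, slow release
        env += coeff * (level - env)                 # smoothed level detector
        over = env / threshold
        gain = over ** (1.0 / ratio - 1.0) if over > 1.0 else 1.0
        out.append(s * gain)
    return out

spike = [0.1, 1.0, 0.1, 0.1]   # one tall transient in quiet material
print(hard_clip(spike))        # only the peak sample is touched
print(compress(spike))         # the gain reduction smears into later samples
```

Notice that in the compressed output the samples after the spike are also pulled down while the detector releases: that is the "body starts moving" effect described above, and exactly what clipping avoids.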
- The 7 Jobs of Delay in Electronic Music Production (Beyond Echo & Reverb)
Delay is often treated as decoration. An echo. A throw. Something you add at the end of a phrase. But in electronic music production, delay rarely exists just to repeat a sound. It usually performs a job inside the mix. It places things. It keeps energy moving. It adds density. It builds tension. When you think about the jobs of delay in electronic music production, it becomes harder to use it casually. You stop adding it out of habit and start asking what it’s actually doing. Here are the roles I notice it performing most often.

1. Delay as Placement

Short delays (slap) act like early reflections. A single repeat around 60–120 ms doesn’t feel like “echo.” It feels like environment. Instead of pushing something back with reverb, a subtle slap can anchor it into the mix while keeping it clear. That’s particularly useful in electronic production, where elements are often dry and direct. A short delay can position a vocal without washing it out, add depth to percussion without softening transients, or replace reverb entirely in minimal arrangements. Reverb spreads. Delay connects. For example, on a dry techno vocal, a 90 ms mono slap can seat it into the track without pushing it back. It feels placed rather than floating.

2. Delay as Rhythm

Delay doesn’t just follow rhythm. It can generate it. A dotted 1/8 delay against straight 1/4 notes creates forward motion. Unsynced delay times introduce subtle push and pull. Higher feedback can turn a single hit into a pattern. Sometimes delay becomes a secondary percussion layer. Instead of adding more drums, repetition creates movement underneath what’s already there. In rhythm-driven electronic music, that movement keeps sections alive even when nothing new has been added. A classic example is a dotted 1/8 delay on a sustained synth in house or melodic techno. The original note holds, but the repeats create forward motion between kicks.

3. Delay as Density

Low-level delays increase information over time.
A quiet repeat thickens a synth line. Filtered feedback adds motion between notes. A subtle 1/16 repeat keeps a stab alive after it hits. This isn’t about stereo width. It’s about preventing drop-off. In sparse arrangements, energy often dips between hits. Delay can stop that dip without making the mix louder or busier.

A darker, degrading delay – tape-style designs, for example – can add density without building up harsh top end. A pristine digital delay behaves differently. The choice affects how weight accumulates in a mix. Sometimes what people describe as “glue” isn’t compression. It’s repetition carrying energy forward just enough that the section doesn’t collapse between notes. Delay adds density without adding weight. And that distinction matters. For instance, a low-level 1/16 delay tucked under a percussive stab can stop the groove from feeling empty between hits – especially in minimal arrangements.

4. Delay as Width

Short delays between left and right channels create separation. Not reverb. Not chorus. Just time offset – often referred to as the Haas effect. The original stays focused in the centre while the repeats expand outward. One side might decay slightly longer. One side might be filtered differently. Width from delay comes from difference. And difference is what makes stereo feel wide. Used carefully, delay can expand a sound without weakening the middle of the mix.

A common approach is keeping the lead vocal mono, but sending only the delay return wide. The centre stays solid while the reflections create width around it. If you’re using the Haas effect for this, the offset usually sits somewhere between 5–20 ms. Below 10 ms it feels subtle. Around 15–20 ms the widening becomes obvious. Push much past 30 ms and it starts to read as an echo rather than space. Used carefully, this lets the vocal stay focused while the delay builds dimension around it.

5. Delay as Tension

Longer feedback changes how we perceive time.
Let a phrase repeat slightly longer than expected and anticipation builds. Automate feedback before a drop and everything feels like it’s stretching. Cut it suddenly and the impact feels sharper. Delay can hold a moment in place just long enough to make you wait. And tension is often just time being stretched. Before a drop, slowly increasing feedback on a vocal throw can make the section feel like it’s pulling upward – even though no new element has been introduced.

6. Delay as Transition

Electronic music relies on movement between sections. Delay often does that work quietly. A vocal throw into a breakdown. A synth tail that bridges into the next bar. A repeat that carries momentum after the drums drop out. It stops arrangements from feeling like blocks placed next to each other. A well-timed delay throw often feels more musical than a riser. Muting the drums for a bar and letting only the delay tail carry through can soften the entry into a breakdown without breaking momentum.

7. Delay as Contrast

Sometimes the job of delay is to disappear. A dry verse followed by a repeating chorus. A stripped-back breakdown with no reflections at all. Then a section where everything trails. Delay changes how exposed a sound feels. Remove it, and the part becomes direct – almost exposed. Bring it back, and it feels supported again. In electronic music, contrast creates impact. Silence against repetition. Stillness against movement. Sometimes the power of delay isn’t in adding it. It’s in deciding exactly when it shouldn’t be there. Try muting all delays in a final chorus. The sudden dryness can feel surprisingly powerful – because the ear has grown used to the reflections.

A Better Way to Think About Delay

Across all of these roles, delay is doing something behavioural rather than decorative. It’s shaping how a section feels over time. Instead of asking “Should I add delay here?”, ask: what’s missing? Is there a gap between phrases? Is the groove losing energy?
Does the section feel flat? Does the transition feel abrupt?

Delay isn’t just an effect. It’s a structural tool. Most of the time, you don’t notice it. You notice when it’s gone.
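If you like dialling delay times by hand rather than syncing them, the tempo maths behind dotted 1/8s and 1/16s is worth keeping around. A small helper – the function name and defaults are my own, not from any DAW:

```python
def delay_ms(bpm, division=8, dotted=False):
    """Tempo-synced delay time in milliseconds: one 1/`division` note
    at `bpm`, optionally dotted (1.5x longer)."""
    quarter = 60000.0 / bpm          # one quarter note in ms
    ms = quarter * 4.0 / division    # scale to the chosen note value
    return ms * 1.5 if dotted else ms

# At 126 BPM (house/techno territory):
print(delay_ms(126, 8, dotted=True))   # dotted 1/8 throw
print(delay_ms(126, 16))               # subtle 1/16 density repeat
```

Nudging the result a few milliseconds off these grid values is one way to get the unsynced push-and-pull mentioned under "Delay as Rhythm".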
- SSL G3 MultiBusComp Review: How I Use It on My 2-Bus for Structure and Glue
Why the SSL G3 MultiBusComp Has Stayed on My Mix Bus

I’ve been using the SSL G3 MultiBusComp for about six months now, and while it wasn’t instant, it’s slowly found a permanent place on my 2-bus. Over time I realised something: it gives the mix a solid, confident hold without sounding forced. There’s a definition to it. The compression feels structured rather than squeezed. Here’s how I’ve been using it.

How I Set Up the SSL G3 MultiBusComp on the 2-Bus

1. I Start with the Mid Band

I solo the mid band (using the headphone icon). I treat this as the anchor of the mix. Then I adjust the crossover frequencies:

- On the left side, I find the body of the kick by setting the low-to-mid crossover.
- On the right side, I find the top of the snare by setting the mid-to-high crossover.

By isolating this area, I’m essentially controlling the main body of the track. In electronic music especially, that low-mid region carries the weight and drive. Once that feels stable, everything else tends to fall into place.

Attack and Release Settings

For the mid band:

- Release: mostly left on Auto
- Attack: usually between 10 ms and 30 ms

For me, 30 ms is the sweet spot. It lets enough of the transient through so the kick and snare still feel round and confident. The compression holds the body rather than flattening the impact. If 30 ms feels too explosive, I’ll drop to 10 ms. That tightens things without killing the energy. Auto release works well here. It breathes naturally and avoids obvious pumping. On a mix bus multiband, that musical movement matters more than clinical precision.

Moving to the High and Low Bands

Once the mid band feels right, I move to the high and low. Most of the time:

- Attack = same as mid
- Release = Auto
- Main change = ratio

The ratio becomes the tone control. It’s less about “clamping down” and more about asking: is the low end moving too much? Are the highs jumping forward unpredictably?
The low band might get slightly more control if the kick and bass are pushing too hard. The high band might get a touch more ratio if the top end feels edgy. After that, I adjust the high and low makeup gains to match the mid and bring everything back into balance. That final gain-matching step is important. It keeps the compression feeling intentional rather than corrective.

Why It Works So Well on the 2-Bus

The SSL G3 MultiBusComp isn’t a surgical mastering multiband. It behaves more like a musical shaping tool. What I’m hearing when it’s set right:

- The mix feels denser without sounding limited
- The low end tightens without losing weight
- The centre feels controlled but not squashed
- The track holds together in a confident way

It’s subtle, but it’s structural. And that’s why I’m reaching for it more often than I expected.

What About the 4K Drive and HQ Mode?

The SSL G3 MultiBusComp includes per-band 4K Drive, and after initially leaving it alone, I recently tested it more seriously on a few masters. The difference was immediate. The strength of it is that Drive can be applied per band. Wherever it’s introduced – low, mid or high – it adds a sense of focus and forward presence to that area. The high band was the most obvious example I found, as it immediately brought clarity and intent to the top end. But the same principle applies across the spectrum. A touch on the mid band adds density. A touch on the low band can give the bottom more authority.

Drive starts at 1 and runs up to 11. There’s no lower setting, and on some masters even 1 was too much. In those cases, I left it off that band. But more often than not, that lowest setting was enough to bring the track into play without sounding exaggerated. It’s not distortion. It’s colour – and it’s a quality one. With HQ mode engaged (oversampling active), the tone felt cleaner and more refined. The overall result had that familiar, professional finish without feeling over-processed.
Used carefully, the Drive and HQ combination can add harmonic density and focus in a very controlled way. It’s a feature worth exploring – especially at the mastering stage, where small tonal shifts matter.

The Logic Behind This Setup

There isn’t one fixed way to use the SSL G3 MultiBusComp, but this approach reflects how multiband compression tends to work best on a mix bus:

- Anchor the core (mid band first)
- Let transients breathe (10–30 ms attack)
- Use auto release for musical movement
- Adjust ratios per band instead of wildly different timing settings
- Rebalance with makeup gain

The key is restraint. On my 2-bus, I’m rarely pushing more than 1–3 dB of gain reduction per band. It’s about stability, not domination.

Final Thoughts on the SSL G3 MultiBusComp

The SSL G3 MultiBusComp has surprised me. I didn’t expect to use a multiband compressor this often on my 2-bus. But when it’s set gently and deliberately, it doesn’t feel like “multiband compression.” It feels like structure.
- Parallel Harmony vs Diatonic Harmony: The Secret Behind Rave Stab Chords
The First Time I Noticed It

The first time I really understood this wasn’t from theory. It was hearing it on early house and techno tracks in the early ’90s. Many of those tracks used parallel harmony – the same chord shape moving up and down the keyboard. There were these sounds that felt different to anything else around at the time. Fresh. You heard it and thought, what is that? Later on, when we were making tracks ourselves, we realised what was happening. You’d sample a piano chord, map it across the keyboard, and play it up and down. Same chord. Different pitch. Looking back, that was parallel harmony in its simplest form. Only later did I learn the formal distinction between that and diatonic movement.

Two Ways Chords Move

In electronic music, chords usually move in one of two ways. They either adapt to the scale – diatonic harmony – or they keep the same shape and slide – parallel harmony. Both are valid. They just create very different results.

Diatonic Harmony: The Adaptive Approach

Diatonic harmony is the traditional system. The chord quality changes depending on where it sits in the scale. In A minor, for example:

- Chord i is minor
- Chord ii° is diminished
- Chord III is major

As the root changes, the system reshapes the spacing between the notes so everything stays inside the scale. On a keyboard, that means the notes remain on the white keys in A minor. That reshaping creates contrast. You get tension and release. Direction. A sense that the track is deliberately moving somewhere. Because the chords move between minor, major and diminished qualities, the harmony gains depth and colour. This approach is strong in melodic techno, progressive house, trance, cinematic work – anything that leans into progression and lift.

Parallel Harmony: The Shape Stays Fixed

Parallel harmony ignores scale correction. You choose a voicing – often a minor 7, minor 9, or some stacked preset shape – and you move it. The internal intervals don’t change. Only the root shifts.
If you sample a chord and pitch it up, the spacing between the notes doesn’t change. You’re not recalculating harmony. You’re preserving the original structure. That’s parallel harmony in its most literal digital form.

You can see the difference clearly in a piano roll. With diatonic movement, the shape subtly bends as it moves – one interval tightens, another widens – so it stays inside the scale. With parallel movement, the MIDI block keeps the same outline. You just drag it up or down. The shape doesn’t adjust. When the voicing stays identical, the colour stays consistent. The chord doesn’t flip between major and minor qualities. If you start with a moody minor 9, it remains that same mood wherever you move it. That consistency protects the atmosphere. It keeps the identity intact.

Why Electronic Music Uses Parallel Movement So Often

Electronic production is often texture-first. Parallel movement keeps the harmonic colour stable. Because the shape isn’t constantly being corrected by the scale, it can feel slightly suspended rather than resolved. That’s part of why it works so well in dub techno, deep house, jungle pads and early rave. A lot of this came from workflow rather than theory. Chord memory buttons. Preset stacks on modules like the E-mu Orbit. Early Akai libraries full of ready-made chord stabs. You’d take one chord and pitch it across the keyboard. Most people weren’t recalculating degrees. They were reacting to impact. Parallel harmony wasn’t a theory. It was a workflow.

When Diatonic Harmony Is Stronger

If you want clear resolution, emotional lift, cinematic movement or a more traditional songwriting arc, diatonic harmony gives you contrast. Parallel harmony gives you cohesion. I don’t tend to switch between the two inside a single track. If I start in parallel, I usually stay there. If I start diatonic, I stay in that lane. Mixing them mid-track can shift the identity more than you expect.
Learning theory helped me understand what was happening. For a while it made everything feel bigger and more complicated. Over time it simplified again. Sometimes you want movement through function. Sometimes you want movement through feel. Both are valid.

Why Rave Stab Chords Work

Many classic house and techno stabs use parallel harmony. A single chord is sampled or programmed and then pitched across the keyboard. Because the voicing stays identical, the sound keeps its character as it moves. That’s why rave stabs often feel so consistent and powerful. The mood of the chord doesn’t change as it shifts position – only the pitch. This approach became common in early house and techno because it was quick, practical and worked well with samplers. The result is the familiar stab sound heard across rave, house and techno records.

Exploring Both Approaches

Understanding these two approaches eventually led me to build the Chord Machine. It generates progressions in two ways: diatonic harmony (Theory Voicing), where chords adapt to the scale, and parallel harmony (Detroit Voicing), where the voicing stays fixed and moves together. Each produces a different character. The tool simply makes it easy to explore both. The important part isn’t choosing the ‘right’ one – it’s knowing which feel you’re committing to, and whether you want your harmony to stay inside the scale or move in parallel shapes.
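The piano-roll difference described above can be made concrete with MIDI note numbers. A minimal sketch, assuming A natural minor and my own function names – this is the distinction, not the Chord Machine itself:

```python
def build_scale(root, octaves=4):
    """MIDI notes of a natural minor scale starting at `root`."""
    steps = [0, 2, 3, 5, 7, 8, 10]
    return [root + 12 * o + s for o in range(octaves) for s in steps]

def parallel_shift(chord, semitones):
    """Parallel harmony: slide the whole shape; intervals preserved."""
    return [n + semitones for n in chord]

def diatonic_shift(chord, degrees, scale):
    """Diatonic harmony: move each note by scale degrees, so the
    shape bends and every note stays inside the scale."""
    return [scale[scale.index(n) + degrees] for n in chord]

scale = build_scale(45)              # A natural minor from A2 (MIDI 45)
am = [57, 60, 64]                    # A minor triad: A3 C4 E4

print(parallel_shift(am, 2))         # B minor shape - the F# leaves A minor
print(diatonic_shift(am, 1, scale))  # B diminished - stays on the "white keys"
```

Moving the triad up one step both ways shows the trade-off in miniature: the parallel version keeps the minor colour but drifts out of key, while the diatonic version stays in key but changes quality to diminished.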
- The Usual Suspects JE-8086: Free JP-8000 Emulation and Why It's Different From Everything Else
Blueplate 2001

There's a Virus sitting in my memory that I can still hear clearly – Blueplate Records Studios, Chicago. I didn't know what it was at the time. The Access Virus was just another piece of gear in a room full of gear that had earned its place. It ended up on quite a few records because the sound was solid. The Nord Lead too – bright, immediate. That one I noticed straight away. I thought about that a lot recently when I came across The Usual Suspects.

MAME for Synths

If you've ever used MAME – the arcade emulator – you'll know the feeling I'm about to describe. I grew up putting 10p coins into Outrun machines. Then one day MAME arrived, and Outrun was just... on my Mac. Not an approximation of it. Not something inspired by it. The actual game, running on an emulated version of the original arcade hardware. Sitting in a folder on a desktop. That idea has only grown since. These days the same philosophy runs across everything from Raspberry Pi setups to palm-sized handhelds to full arcade cabinets running hundreds of original ROMs. The notion that the original hardware experience should be preserved – and made accessible – has become a whole movement. The Usual Suspects are doing exactly that. But for synths.

What They're Actually Doing

Most plugin emulations – even the great ones – are modelling exercises. Developers listen to the hardware, study its behaviour, and build something that sounds and feels as close as possible. Arturia do this brilliantly. U-he do this brilliantly. The results are genuinely excellent tools. The Usual Suspects take a different approach entirely. They reverse-engineer the actual DSP chip inside the hardware and emulate it at the cycle level. The synth's original code runs inside that emulated chip. In practical use, it behaves like having the original synth brain living in your DAW – not an interpretation of how it sounds, but the actual engine running natively in your session.
The difference is the same as the difference between a painting of a place and a photograph of it. Their catalogue so far covers some of the defining machines of the late 90s and early 2000s. The Access Virus A, B and C via Osirus. The Virus TI via OsTIrus. The Waldorf microQ via Vavra. The Waldorf Microwave II/XT via Xenia. The Clavia Nord Lead 2X via Nodal Red 2x. And now the Roland JP-8000 and JP-8080 via JE-8086. All of them free.

The JP-8000 and the SuperSaw

The JP-8000 is the synth behind one of the most recognisable sounds in electronic music – the SuperSaw oscillator. Seven detuned sawtooth waves stacked together, that soaring, wide, relentless sound that defined late 90s trance and still shows up everywhere if you know where to listen. Arturia's Jup-8000 V is a solid plugin. But JE-8086 is a different proposition. It's not modelling the SuperSaw – it's running the original Roland firmware on an emulated Toshiba TC170C140 chip, the custom DSP at the heart of both the JP-8000 and JP-8080. The GUI is faithful to the original hardware. The presets are the originals. The behaviour is the behaviour.

The One Catch

You need to supply the ROM yourself. The Usual Suspects don't provide the firmware – for obvious legal reasons, and they're quite firm about it. Obtaining a ROM without owning the original hardware is a grey area. That said, Roland hosts the JP-8000 firmware update on their own support page – if you own the hardware, that's the obvious route – and that's how many people are legitimately getting hold of it. Once you have it, installation is mostly straightforward – though Mac users will need to run a terminal command to sign the plugins manually, since they're not officially Apple-signed. It's a one-time step, not a big deal, but worth knowing before you dive in. They also ship a benchmarking tool that tells you how many instances your CPU can handle before you commit. Smart move – this is a full chip emulation, and it's not lightweight.
The Studio Without the Zip Code

You can still pick up a JP-8000 or JP-8080 on eBay for reasonable money. The hardware isn’t gone. But to have the actual internal code – the same firmware, the same engine – running inside your DAW? That’s something else. The closest thing to the hardware, without needing the hardware. That’s not a small shift. That’s a change in how these instruments exist.

Download: dsp56300.wordpress.com
- How to Turn Old Samples Into New Ideas with Glitch Lab
If you have thousands of samples but struggle to find new ideas in them, tools like Glitch Lab can completely change how you use your library. Instead of constantly searching for new sounds: Rediscover the ones you already have.

The Problem: Too Many Samples, Too Little Inspiration

You have thousands of samples. Drums. Vocals. Textures. Loops. One-shots. Entire folders of sounds that once felt exciting. But after a while something happens. You scroll through them and nothing really jumps out anymore. Not because the sounds are bad. They just feel familiar. That’s one of the strange things about samples. They don’t stop being useful. They just stop surprising you.

Creative Ways to Transform Samples with Glitch Processing

One way to break that familiarity is by changing how the sample behaves. Instead of playing the sound from start to finish in the same way every time, glitch and granular tools allow you to reinterpret the audio. The recording becomes less like a finished object and more like material you can move through. You can land on tiny fragments, repeat them rhythmically, reshape the pitch, distort them, filter them, or move through the sound continuously. That’s where something like Glitch Lab becomes interesting.

Where Glitch Lab Comes In

Instead of treating a sample like a fixed recording, the system breaks it into small grains and allows those fragments to be retriggered, repositioned, and reshaped. The result is that the same sample can suddenly start behaving differently. A chord might become a rhythmic pattern. A vocal snippet might turn into something melodic. A texture might begin to pulse like a groove. Nothing about the original recording has changed. Only how it is being read.

Instant Inspiration: The Chaos Button

At the centre of Glitch Lab is one control: Chaos. Press it once and the system generates a new configuration of grain sizes, pitch movement, sequencing and modulation. Sometimes the change is subtle.
Other times the sample becomes something completely different. A chord suddenly behaves like a melody. A texture starts outlining a rhythm. A vocal fragment turns into something strange and musical.

Discovering Hidden Moments

One thing you quickly realise when working this way is that most samples contain interesting moments you would never normally hear. Small fragments inside the recording. A harmonic that appears for a split second. A strange transient between notes. When you start scanning through the audio in grains, those tiny details suddenly become usable material. Sometimes the most interesting sound ends up being something that originally lasted less than half a second.

Chaos First, Control After

The Chaos button is really just a way of finding ideas quickly. Once something interesting appears, you can begin shaping it. Glitch Lab gives you a lot of control over how the sample behaves:

Grain Position and Loop Size: Choose which part of the sample becomes the source.
Pitch Step Sequencing: Turn static sounds into melodic movement.
Scan LFO: Create motion across the sample.
Filtering and Resonance: Focus the tonal character.
Distortion and Bit Crushing: Add edge, weight, or digital texture.

The chaos generates the idea. The controls allow you to refine it.

Turning Samples into Rhythms and Melodies

One of the surprising things about this approach is how quickly a sample can start producing musical movement. Even if the original audio contains no rhythm, slicing and retriggering it against tempo can create patterns. Pitch sequencing can introduce melodic shifts. Modulation can keep the sound evolving instead of looping predictably. In many cases a single sample becomes the source of an entire musical idea.

A Different Way to Use Your Library

Glitch Lab doesn’t just process audio; it reinterprets it. When recordings become flexible material instead of fixed objects, a single “old” sample can become the basis of an entirely new idea.
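The grain-retriggering idea is simple enough to sketch. This is not Glitch Lab's actual engine – just a hedged NumPy illustration of what "retrigger a repitched grain" means; the function name and parameters are invented for the example.

```python
import numpy as np

def retrigger_grain(audio, sr, position, grain_ms, repeats, semitones=0.0):
    """Slice one grain out of `audio` (a mono float array) and repeat it,
    optionally repitched by resampling. Purely illustrative."""
    start = int(position * len(audio))            # position as 0..1 through the file
    grain = audio[start:start + int(sr * grain_ms / 1000)]
    rate = 2 ** (semitones / 12)                  # pitch shift via playback rate
    idx = np.arange(0, len(grain) - 1, rate)      # fractional read positions
    repitched = np.interp(idx, np.arange(len(grain)), grain)
    return np.tile(repitched, repeats)            # retrigger it rhythmically

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)   # stand-in for any sample
out = retrigger_grain(source, sr, position=0.25, grain_ms=50,
                      repeats=8, semitones=7.0)
```

Even this toy version shows the point: the source recording never changes – only the read position, grain length, pitch and repeat count do, and every new combination is effectively a new sound.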
Glitch Lab is a free tool on this site.
- How to Create Chord Progressions Without Knowing Music Theory
Epic & Meditative (i - ♭VI - ♭VII - i) → Dm - B♭ - C - Dm

Not knowing music theory doesn’t mean you can’t write great chord progressions. Over the years, I’ve explored multiple ways to generate harmonically rich progressions without having to rely on deep theoretical knowledge. Whether you’re looking for instant inspiration or a way to gradually build your understanding, there are plenty of approaches to creating progressions that sound professional and musical. I ended up building a small browser-based Chord Machine for this exact job: generating a progression quickly, then letting me adjust it by ear. It’s basically a sketchpad – get harmony moving, tweak voicings and rhythm, then export the MIDI when something clicks. Other approaches:

1. Borrow Progressions from Existing Songs

One of the easiest ways to find inspiration is to analyse progressions from your favourite tracks. Many songs across genres use similar progressions, and understanding these can help you craft your own.

HookTheory: A Deep Well of Chord Progressions

HookTheory is a fantastic resource that lets you browse the chord progressions of thousands of popular songs. You can search for a track, see its chords, and analyse how they function within the key.

💡 How to use it:
1. Pick a song you love.
2. Look at the chord progression and see how it moves.
3. Try using a similar sequence in your own track but with a different rhythm or feel.
4. Experiment with transposing the progression into different keys for variety.

This approach is great because it teaches you by ear, letting you absorb theory naturally rather than forcing you to memorise rules.

2. Use MIDI Chord Packs

If you want to work fast, MIDI chord packs are a great shortcut. These are pre-made progressions that you can drag and drop into your DAW, giving you instant access to well-structured harmonic sequences.

Where to Find Great MIDI Packs:
🎹 Unison MIDI Chord Pack – A huge collection of progressions covering multiple genres.
🎵 Cymatics Chord Progressions – Designed for modern electronic music.
📁 Red Sounds MIDI Chords – Packs focused on R&B, pop, and house music.

💡 How to use them effectively:
• Drag a MIDI file into your DAW and assign it to a synth or piano.
• Edit the MIDI notes—adjust the voicings, extend or shorten chords, or change inversions.
• Add your own rhythmic patterns or arpeggios to make it feel unique.

MIDI packs can be a great learning tool because they expose you to different progression styles, allowing you to see how chords flow together.

3. Use a Chord Progression Chart

Chord progression charts give you a structured way to build progressions without needing deep music theory knowledge. They show common sequences that work well together in different keys.

How a Chord Progression Chart Works

A simple chart lists the diatonic chords in a key. For example, in C Major:

Degree – Chord – Function
I – C Major – Root chord (stable)
ii – D Minor – Adds movement
iii – E Minor – Emotional feel
IV – F Major – Prepares for resolution
V – G Major – Builds tension
vi – A Minor – Common in pop & electronic
vii° – B Diminished – Used for tension

💡 How to create a progression:
1. Start with a I chord (C Major).
2. Move to a vi (A Minor) for an emotional shift.
3. Use a IV (F Major) for movement.
4. Resolve with a V (G Major) leading back to I.

Common Progressions to Try:
• I - V - vi - IV (C - G - Am - F) – Used in thousands of hit songs.
• vi - IV - I - V (Am - F - C - G) – Emotional, often found in pop and house music.
• ii - V - I (Dm - G - C) – A classic jazz and deep house progression.

Using charts like this lets you experiment with structure while maintaining musicality.

4. Create Chord Progressions in Your DAW

Modern DAWs now include tools that help you generate and experiment with chord progressions even if you don’t have much theory knowledge.

Create Chord Progressions in Logic Pro

Logic Pro X offers built-in tools to help you craft chord progressions quickly, even if you’re not deep into music theory.
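Incidentally, the chord chart from section 3 maps directly to code. Here's a minimal Python sketch – my own illustration, not any particular tool – that builds the seven diatonic triads of a major key and assembles a progression from Roman-numeral degrees:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # major-scale intervals in semitones

def diatonic_triads(root=60):    # MIDI note 60 = middle C, i.e. the key of C major
    """Stack every other scale note to get the I..vii° triads."""
    scale = [root + 12 * (i // 7) + MAJOR[i % 7] for i in range(14)]
    return [[scale[d], scale[d + 2], scale[d + 4]] for d in range(7)]

def progression(degrees, root=60):
    """`degrees` are 0-based: I=0, ii=1, ... vii°=6."""
    triads = diatonic_triads(root)
    return [triads[d] for d in degrees]

# I - V - vi - IV in C major: C - G - Am - F
chords = progression([0, 4, 5, 3])
```

Swapping the degree list gives you any row of the "Common Progressions to Try" list – vi-IV-I-V is `progression([5, 3, 0, 4])`, ii-V-I is `progression([1, 4, 0])`.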
Chord Track: This feature lets you place chords along a timeline, selecting the root note, chord quality, and inversion. You can tweak each chord’s details and structure as you go.
Chord Progressions Feature: Apply pre-set progressions directly to a MIDI region or a Session Player track, instantly generating harmonic movement.

💡 How to Use It Effectively:
1. Add a Chord Track and set a key to guide your progression.
2. Input chords manually or apply a Chord Progression preset.
3. Experiment with inversions and voicings for richer harmonies.
4. Use a MIDI controller to trigger and test your progression in real time.

This approach keeps composition fluid and intuitive, letting you focus on creativity while maintaining musical coherence.

Ableton Live: Chord & Scale MIDI Effects

Ableton offers Chord and Scale MIDI effects that automatically harmonise notes into proper progressions. This means you can play a single note and let the DAW generate full chords in key.

💡 How to use them effectively:
1. Set your DAW to a key using the Scale feature.
2. Use a Chord plugin to automatically generate chords when playing single notes.
3. Experiment with arpeggiators or rhythmic variations to add movement.

This is a great way to explore harmony creatively without being bogged down by theoretical constraints.

5. Learn the Theory Over Time

If you want more control over your compositions, learning some fundamentals over time can help explain what you’re already hearing. While the previous methods are great for quick results, understanding the why behind chord movements will empower you to experiment freely.

Why Learning Theory is Worth It:
You’ll gain confidence in writing your own progressions from scratch.
You won’t need to rely on external tools to create music.
You’ll recognise common patterns and know how to tweak them for originality.

📚 Where to Start Learning Music Theory:
• Hooktheory I & II – Interactive books that teach harmony in a modern, visual way.
• Musictheory.net – A free online resource with practical lessons.
• “How to Write Songs on Keyboard” by Rikky Rooksby – Covers chord structures in-depth.
• YouTube Channels – Signals Music Studio, 12Tone, and Adam Neely all have fantastic breakdowns of music theory in an easy-to-understand way.

While it takes time to master theory, you don’t need to know everything to start applying it to your productions today.

Final Thoughts

There are many ways to create chord progressions without knowing music theory, from analysing songs and using MIDI packs to leveraging DAW tools and progression charts. The important thing is finding an approach that works for you and helps you stay creative.

Which Approach is Best for You?
🎹 Want instant inspiration? → Try HookTheory or MIDI chord packs.
💡 Prefer structured guidance? → Use a chord progression chart.
🎛 Want hands-on creativity? → Explore DAW chord generators.
🎶 Looking to grow long-term? → Start learning music theory gradually.

For me, tools like the Chord Machine work best when they support listening rather than decision-making. No matter which method you choose, experiment, trust your ears, and don’t be afraid to break the rules. At the end of the day, the best chord progressions are the ones that feel right in your music. 🚀
- Why 0.1 dB Matters in Mixing (The Final 5% That Brings a Mix Into Focus)
Most of the big decisions in a mix are obvious. You move faders. You shape sounds with EQ. You compress things into place. At this stage you’re making bold moves – sometimes 2 dB, sometimes 6 dB or more. You’re still building the structure of the track. But when a mix starts to work, something interesting happens. The movements get smaller. You stop fighting for space and start searching for focus. Suddenly you’re nudging a fader by:

0.3 dB
0.2 dB
0.1 dB

To someone starting out this can look almost obsessive. But when a mix is close, those tiny adjustments can be the difference between something that feels nearly right and something that suddenly locks into place.

When I First Noticed It

I first noticed the importance of these small adjustments in the late ’90s when working in Cubase. Back then most of what we were doing was MIDI, which made timing exploration very easy. If something didn’t feel right, you could slow the track right down and work on the placement with much greater precision. Cubase also had a nudge function for shifting track timing. When you adjusted it, a small boot icon appeared, representing the track being kicked slightly forward or backward. It was simple, but it revealed something important very quickly: Sometimes a sound isn’t wrong. It’s just slightly out of place. Later, when I moved to Logic in the early 2000s, the same idea carried over. Using Alt + Arrow Keys, you can nudge regions forward or backward in tiny increments. The principle is exactly the same. A track doesn’t always need fixing. Sometimes it just needs a small push into the pocket.

Where These Micro Adjustments Happen

These “last 5%” changes tend to happen in a few specific places. Most often:

EQ adjustments
Compressor settings
Track timing

Timing in particular can go extremely deep. If milliseconds aren’t precise enough, I’ll switch from milliseconds to samples and move things even more precisely.
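The units involved are worth pinning down. A minimal sketch, assuming a 44.1 kHz session (substitute your own rate): converting a timing nudge from milliseconds to samples, and a fader move from decibels to the linear gain factor actually applied.

```python
def ms_to_samples(ms, sample_rate=44100):
    """How many samples a timing nudge of `ms` milliseconds covers."""
    return round(sample_rate * ms / 1000)

def db_to_gain(db):
    """Linear amplitude factor for a fader move of `db` decibels."""
    return 10 ** (db / 20)

print(ms_to_samples(1))             # 44 – a 1 ms nudge at 44.1 kHz
print(round(db_to_gain(0.1), 4))    # 1.0116 – a 0.1 dB lift
```

Seen this way, the "obsessive" moves are tiny but real: 0.1 dB changes the level by just over one percent, and a single-sample nudge at 44.1 kHz is about 23 microseconds.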
At that point you’re barely moving anything – just nudging until it sits properly in the groove.

What Actually Changes

When these adjustments are right, the change isn’t dramatic. But you hear it. The groove becomes tighter. A sound suddenly sits properly. The whole image becomes clearer. Nothing suddenly jumps out. It’s more like the mix comes into focus.

Why This Only Works When the Mix Is Already Working

It’s important to say this clearly. Tiny adjustments only make sense once the mix already has shape. You need the mix picture first. If the balance is wrong, the arrangement isn’t working, or sounds are clashing, moving something 0.1 dB won’t solve anything. But once the foundation is there, those small adjustments suddenly become audible. That’s where the last few percent of the mix happens.

A Simple Technique: The Two-Beat Loop

When tightening timing, I often loop very small sections. Sometimes just two beats. This works particularly well with vocals. Loop two beats of the vocal phrase against the drums and the metronome, then nudge the timing until it sits exactly where it should. When the loop is that small, your brain stops focusing on the words and starts focusing on the timing itself. It becomes very clear if something is slightly early or late.

Why Beginners Often Miss This

When teaching production, I’ve noticed beginners tend to make very large adjustments. I’ll say: “Just turn that down a little.” And the fader drops 4 dB. That’s not wrong. It’s just how people hear when they’re learning. Over time you start hearing the relationships between sounds at a much finer level. That’s when subtle adjustments start to matter. The real skill isn’t just making small moves. It’s knowing when the mix is ready.

Conclusion: Simple in its Complexity

Mixing can look complicated from the outside. But sometimes the final stage comes down to something very simple. A tenth of a decibel. A few milliseconds. A tiny timing nudge.
The devil—and the magic—is always in the detail.
- Is the Fletcher-Munson Curve What I'm Seeing on the Totalyser?
When I’m deep into a mix, riding the faders, tweaking EQs, balancing elements by feel – not by numbers – I’ll often glance over at the meter. More often than not, the Totalyser is showing a curve that looks suspiciously familiar: a lift in the lows, a slight dip in the mids, and a rise up top. Almost like a soft smile. And every time, I think: Is that the Fletcher-Munson curve? Here’s the thing – I’m not aiming for it. I don’t treat it like a target. But when the mix feels right – like really right – that curve just seems to be there. Not because I forced it, but because everything has found its place. The energy is balanced. The track is alive. And there it is on the meter, clear as day.

The Curve I'm Not Aiming For… But Often Land On

The Fletcher-Munson curve – also known as equal-loudness contours – is about perception, not measurement. It shows how our hearing responds to frequency at different volumes. At lower volumes, the ear is far less sensitive to lows and highs. The midrange – especially around 2 to 5 kHz – is where we hear most clearly. And the wild thing is: When a mix is balanced and feels right, the visual curve on the Totalyser often echoes that perception. Not because I was chasing it, but because I was trusting my ears. It’s not science—it’s feel. And maybe that’s the point.

What Is the Fletcher-Munson Curve?

Let’s break it down properly. The curves were first documented in the 1930s by Harvey Fletcher and Wilden A. Munson at Bell Labs. They set out to understand how we perceive loudness across the frequency spectrum – and what they found was that equal energy doesn’t mean equal loudness. At low listening levels, bass and treble frequencies are perceived as quieter than mids. You need to crank the low end and the highs to hear them at the same perceived volume as, say, a vocal or snare. Here’s a visual of the curves to give you the full picture: Each line represents the relative levels needed across frequencies for sounds to feel equally loud.
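You can get a feel for how uneven that sensitivity is from the standard A-weighting curve (IEC 61672), which approximates the 40-phon equal-loudness contour. The formula below is the published analytic form; treat the sketch as a rough illustration of the "smile", not a substitute for the full ISO 226 contours.

```python
import math

def a_weight_db(f):
    """A-weighting in dB relative to 1 kHz (IEC 61672 analytic form)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # normalised to 0 dB at 1 kHz

for freq in (50, 100, 1000, 4000, 16000):
    print(f"{freq:>6} Hz: {a_weight_db(freq):+6.1f} dB")
```

At low frequencies the weighting drops steeply – roughly −19 dB at 100 Hz – which is exactly why quiet mixes feel bass-light, and why a low-end lift tends to appear once you balance by ear at realistic listening levels.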
Notice how the lows and highs dip sharply at lower SPL (Sound Pressure Level)? That’s the “smile.” As you turn up the volume, these dips flatten out. Your perception evens out. That’s why a mix can sound dull at low levels and suddenly sparkle when louder. Your ears fill in the bass and top end differently depending on level.

The Meter Reflects the Mix, Not the Other Way Around

I’ve learned to trust my ears first, always. But I’ve also noticed this: When I reach the point in a mix where everything feels tight, present and alive – the Totalyser often shows a curve with a gentle lift in the lows and a dip through the mids. It’s a familiar shape. But here’s the thing: it’s not the full Fletcher-Munson curve. Not quite. That top-end lift you see on the classic equal-loudness contour? I don’t see that on my Totalyser. If anything, the highs often taper off. And yet – it still feels balanced. It still feels right. That’s the clue: I’m not aiming for a curve, Fletcher-Munson or otherwise. I’m aiming for balance, presence and emotional impact. And when I hit that, the visual readout just happens to resemble something close to Fletcher-Munson – up to a point.

So, should you aim for that curve?

No. If you try to force your mix to match a meter shape, you’ll likely end up flattening the personality of your track. But if you mix with your ears – if you trust your instinct – you might see something curve-shaped emerge. Not because you were chasing it, but because balance tends to leave a trace. That curve isn’t the goal. It’s the ghost of a good decision.