How Do Audio Effects Shape the Sound You Create?

Audio effects are to producers what melodies and chord progressions are to composers. They appear in virtually every track, but beginners often don't understand the real role effects play in shaping the overall sound. Raw signals and samples can form a basic track, but it's the audio effects that act as the salt and spices, turning a bland track into a masterpiece.

Audio effects can be created with both digital and analog mixing equipment. Regardless of the method, it's very important for a producer to understand how effects work in order to incorporate them correctly into the final product. A producer who is familiar with the core concepts behind each effect can avoid drowning in the ocean of plugins available today. Just as too much salt can ruin a great dish, too much (or too little) of an audio effect can ruin a great track.

In this post, we'll introduce you to audio effects, how they work, and how they are used to enhance mixes and tracks.

Different Types of Audio Effects

A hardware effects rack of the kind usually found in bigger or older studios.

When experienced producers talk about audio effects, they are talking about a family of signal-processing techniques implemented in either hardware or software. Audio effects can be broken down into five main categories: modulation, time-based, spectral, dynamic, and filter. Each of these effects can be shaped using parameters such as drive and feedback.

Common modulation effects include tremolo, chorus, flanger, and phaser. Echo, delay, and reverb are time-based effects. The ever-popular EQ and panning are spectral effects, while compression and distortion are dynamic effects.

Each of these effects can add depth, emotion, and character to the tracks you produce; a track can take on a life of its own depending on the effects in use. There are many more audio effects beyond the ones listed above, but these are the most commonly used, and every decent producer and mixing engineer should know them. Let's talk a little about each effect and what it does to an audio track.

Spectral Audio Effects

Panning

Panning refers to distributing a single audio signal across a multi-channel or stereo (two-channel) field. Panning makes parts of a track sound as if they are moving from one side of the field to the other. You can visualize it as a camera panning across a scene; in fact, that panoramic camera movement is where audio panning gets its name.

Humans can localize sound, that is, tell where a sound is coming from. This ability is what makes audio illusions possible in movies and music. If you have ever watched an action movie in surround sound, you have heard panning at work. In music, Roger Waters's "Amused to Death" album has a number of excellent songs that use this effect.

Panning is best used to make audio sweep across the stereo field, giving the sense of sound jumping from the left ear to the right. While panning moves elements around, low-frequency sounds such as kick drums and bass are usually kept in the center, as are the lead elements, typically the vocals.

Panning can also add clarity to tracks that use many sounds: it prevents the muddiness that comes from multiple instruments overlapping. And, more interestingly, it brings a music track to life.

Most beginners make the mistake of not panning their mix at all. In a realistic setup, a group of musicians would be standing around in a loose arrangement rather than a straight line, so visualize that and pan the instruments in your mix accordingly.
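To make the idea concrete, here is a minimal sketch of a constant-power pan law in Python with NumPy. The function name and parameters are illustrative, not taken from any particular DAW or plugin:

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan: position -1.0 is hard left, +1.0 is hard right.

    The sin/cos law keeps perceived loudness roughly constant
    as a sound sweeps across the stereo field."""
    theta = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)  # shape (N, 2): L and R channels

signal = np.ones(4)            # a trivial mono "signal"
centered = pan(signal, 0.0)    # equal power in both channels
hard_left = pan(signal, -1.0)  # all energy in the left channel
```

At the center position each channel gets cos(45°) ≈ 0.707 of the signal, so the summed power matches a hard-panned position, which is why this law is preferred over simple linear crossfading.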

Equalization (EQ)

The human ear can perceive sounds in a frequency range between roughly 20 Hz and 20,000 Hz; this is our audible spectrum. EQ is a technique that cuts this spectrum into sections called bands. Sound engineers then boost or subdue these bands to shape a mixed track.

Effective EQ-ing takes considerable skill and experience. It's up to the mixing engineer to define the bands and boost or cut each one as desired.

Keep in mind that EQ doesn't add new frequencies to a track; it alters parts of the existing spectrum to create new-sounding results. EQ largely determines the character of a mixed track: it shapes the tone of the audio and can be used to balance out the existing frequency content. Depending on the producer's intent, EQ can change a track subtly or dramatically, which is why it is the key effect that showcases a mixing engineer's skill.

If there is one effect you should master first, it's the equalizer. Effective EQ is the secret sauce of a successful track, so master it before anything else.

Note that different types of EQ have different sound characters. A modern digital EQ plugin is usually transparent, while a vintage-style EQ, whether an analogue-modelled plugin or actual hardware, normally adds extra colour of its own, which can lend interesting character to your tracks.
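The "cut the spectrum into bands and boost or subdue them" idea can be sketched crudely with an FFT. This is not how real-time EQs are implemented (those use filters), but it makes the band concept concrete; the function and parameter names are illustrative:

```python
import numpy as np

def boost_band(signal, sample_rate, lo_hz, hi_hz, gain_db):
    """Crude FFT-based band boost/cut: pick a slice of the
    spectrum and scale it by a decibel gain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= lo_hz) & (freqs < hi_hz)       # select one "band"
    spectrum[band] *= 10.0 ** (gain_db / 20.0)      # dB -> linear gain
    return np.fft.irfft(spectrum, n=len(signal))

sr = 8000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
cut = boost_band(mix, sr, 500, 2000, -60.0)   # subdue the 1 kHz component
```

After the cut, the 100 Hz component is untouched while the 1 kHz component is 60 dB down, the same move a mixing engineer makes when pulling a harsh band out of a track.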

Time-Based Audio Effects

Delay and Echo

Delay is a highly pronounced effect that can add layers to a track, giving it more complexity and depth. It is prominent in many mixes, and it also provides the foundation for other effects such as reverb and chorus.

Electric guitarists love this effect! You'll even find guitar amps with built-in delay, in hopes of attracting electric guitarists.

Delay occurs when an audio signal is played back after a set time interval (hence the name) alongside the original. Mixing engineers call the held-back signal the "wet" signal, while the original is the "dry" signal. The delay time can be pushed around as the mixer desires to create wholly unique sounds.

Delay also creates another time-based effect: echo. Delay units feature "taps", repeats of the signal heard at spaced intervals, which produce an echoing effect. Master delay and you have mastered echo as well.

Delay is commonly used to "fill out" lead elements like vocals and guitar, giving them a very pleasing, sustained quality. Beyond that, depending on your mastery of delay and echo, you can apply them to other instruments and tracks to create interesting, out-of-this-world sounds.
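The wet/dry and feedback ideas above can be sketched as a simple feedback delay line. This is a minimal illustration, not any particular plugin's algorithm; the names are made up for the example:

```python
import numpy as np

def delay_fx(dry, delay_samples, feedback=0.4, mix=0.5):
    """Feedback delay line: each echo is attenuated and fed back into
    the delay buffer, producing a train of decaying repeats ("taps").
    `mix` blends the dry signal with the wet (delayed) signal."""
    wet = np.zeros(len(dry))
    buf = np.zeros(delay_samples)           # circular delay buffer
    idx = 0
    for n, x in enumerate(dry):
        delayed = buf[idx]                  # signal from delay_samples ago
        wet[n] = delayed
        buf[idx] = x + feedback * delayed   # feed the echo back in
        idx = (idx + 1) % delay_samples
    return (1 - mix) * dry + mix * wet

impulse = np.zeros(16)
impulse[0] = 1.0
out = delay_fx(impulse, delay_samples=4, feedback=0.5, mix=1.0)
```

Feeding in a single impulse shows the echo train directly: a full-strength repeat after 4 samples, then repeats at half and quarter strength, each spaced one delay time apart.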

Reverb

Reverberation is a natural effect that we hardly ever notice because it's part of how we hear. Some sound waves reach our eardrums directly, while others bounce off various surfaces first, arriving slightly later and more quietly. We perceive all of these arrivals as a single sound: that is reverb. It shouldn't be confused with echo, but if you have ever heard what sounds like "echoes" in a cathedral, a large hallway, or a cave, that is reverb.

Reverb occurs in studios and on soundstages when sound bounces off the room. Sound engineers have also created reverb using enclosed echo chambers and vibrating metal plates. Today, however, most producers use software plugins that emulate and customize natural reverb while controlling the frequencies, feedback, and mix.

Adding reverb gives your instruments a sense of space. Imagine a dry recording of a choir made in a well-padded studio: with a good reverb plugin, you can creatively blend wet reverb signal into that recording and make the choir sound as if it were singing in a cathedral.
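Software reverbs simulate those many delayed, quieter reflections. A classic starting point is a bank of parallel feedback comb filters (the Schroeder approach); the toy sketch below is far simpler than a commercial plugin, and the delay lengths and decay value are just illustrative choices:

```python
import numpy as np

def comb(x, delay_samples, decay):
    """One feedback comb filter: a train of echoes that decay over time."""
    y = np.copy(x)
    for n in range(delay_samples, len(x)):
        y[n] += decay * y[n - delay_samples]
    return y

def tiny_reverb(dry, mix=0.3):
    """Toy Schroeder-style reverb: several parallel combs with mutually
    prime delay lengths, summed, then blended with the dry signal.
    (Real reverbs add allpass diffusion stages on top of this.)"""
    wet = sum(comb(dry, d, 0.7) for d in (1687, 1601, 2053, 2251))
    wet /= 4.0
    return (1 - mix) * dry + mix * wet

impulse = np.zeros(8000)
impulse[0] = 1.0
tail = tiny_reverb(impulse, mix=1.0)   # the reverb "tail" of a single click
```

Because the comb delays don't share common factors, their echoes interleave into an increasingly dense tail rather than a single repeating slapback, which is what separates reverb from plain echo.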

Oracle, for example, is a top-rated digital reverb plugin that comes with other built-in effects.

Modulation Audio Effects

Chorus

Chorus refers to similar but slightly varying copies of a sound heard as one. Think of the same note recorded several times and stacked on top of itself, with each layer detuned and delayed slightly relative to the others. The overlapping copies blend into a single, wider sound, much like a choir, where a group of singers voice the same lines at the same time and we hear one unified performance.

Chorus is a central part of many mixed tracks: it adds complexity and depth to audio. Producers usually use chorus to enrich harmonies or fatten up a part so that the effected track sounds layered and fuller. Chorus was used heavily in the music of the 80s.

Beyond mixing in music production, the guitar is the most common instrument to receive a chorus effect. You'll even find chorus built into guitar amps and amp modellers.

However, there's no hard and fast rule for using chorus. It works on bass guitar as well, slightly fattening and beefing up its sound, and producers commonly apply it to vocals, keyboards, and synthesizers.
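The "detuned, slightly delayed copy" described above is produced by reading the signal through a delay line whose length wobbles slowly. Here's a toy sketch (parameter names and values are illustrative, not from any plugin):

```python
import numpy as np

def chorus(dry, sample_rate, depth_ms=3.0, rate_hz=0.8, base_ms=20.0):
    """Toy chorus: mix the dry signal with a copy read through a delay
    whose length is wobbled by a slow LFO, which detunes the copy."""
    n = np.arange(len(dry))
    lfo = np.sin(2 * np.pi * rate_hz * n / sample_rate)
    delay = (base_ms + depth_ms * lfo) * sample_rate / 1000.0  # in samples
    read = n - delay                          # fractional read position
    lo = np.floor(read).astype(int)
    frac = read - np.floor(read)
    lo_c = np.clip(lo, 0, len(dry) - 1)
    hi_c = np.clip(lo + 1, 0, len(dry) - 1)
    wet = (1 - frac) * dry[lo_c] + frac * dry[hi_c]  # linear interpolation
    wet[read < 0] = 0.0                       # silence before buffer fills
    return 0.5 * dry + 0.5 * wet

sr = 8000
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
thick = chorus(tone, sr)
```

Because the delay length keeps changing, the wet copy's pitch drifts a few cents sharp and flat of the dry tone, and mixing the two produces the shimmering, doubled quality associated with chorus.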

Tremolo

Tremolo is Italian for "trembling," and that's exactly what this effect does: it modulates the amplitude of a track so that it sounds like it's trembling. Keep in mind, though, that tremolo is different from vibrato, which is a pitch-modulation effect. Instead of modulating the pitch of the instrument, tremolo works on the volume of an audio signal.

In the past, players created tremolo by rapidly turning an amp's volume up and down. Tremolo can be altered in speed (rate) or volume swing (depth), giving audio a pulsating quality; the fading effects heard in many mixes are achieved with it. In other words, tremolo creates movement in an audio signal and makes it more rhythmic.
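Because tremolo is just amplitude modulation by a slow LFO, it's one of the simplest effects to sketch in code. This is a minimal illustration; the rate and depth parameters mirror the controls described above:

```python
import numpy as np

def tremolo(signal, sample_rate, rate_hz=5.0, depth=0.6):
    """Tremolo = amplitude modulation: multiply the signal by a slow
    sine LFO. depth=0 leaves the signal untouched; depth=1 pulses
    the volume all the way down to silence."""
    n = np.arange(len(signal))
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * n / sample_rate))
    return signal * lfo

sr = 8000
tone = np.ones(sr)   # a constant "signal", to expose the LFO shape
shaped = tremolo(tone, sr, rate_hz=2.0, depth=1.0)
```

With full depth the output swings between silence and full level twice per second, the classic pulsating tremolo throb; dialing depth back gives the subtler shimmer heard on vintage amp tremolo circuits.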

The tremolo effect can even be recreated inside a synth like Massive by Native Instruments.

Flanger and Phaser

Flanging and phasing are effects created using low-frequency oscillators (LFOs). You normally see LFOs inside synth plugins, but if you've ever come across a flanger or phaser and wondered what's going on behind the scenes, it's low-frequency oscillators at work.

Flanger and phaser effects sound similar and are created in similar ways.

Flanging applies very short delays to an audio signal. Once these delays get longer (beyond roughly 10 milliseconds), the effect naturally morphs into a chorus. Phasers are similar, but instead of delays they are built from filters. A phaser's notches come in "stages": the more stages there are, the more intense the effect becomes.

In practical use, both flangers and phasers create windy, "whooshing" sounds. Apply them too heavily and you'll get something like a whale call (not a good thing). Normally used very subtly, flanger and phaser effects add movement to your tracks, and they are commonly heard in the risers and down-risers of electronic music.
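The phaser's "stages" are allpass filters: each one passes every frequency at equal gain but shifts its phase, and mixing the shifted signal back with the dry path carves notches wherever the shifts cancel. Here's a stripped-down static sketch (a real phaser sweeps the coefficient with an LFO to move the notches; all names here are illustrative):

```python
import numpy as np

def allpass1(x, a):
    """First-order allpass stage: y[n] = a*x[n] + x[n-1] - a*y[n-1].
    Equal gain at every frequency, but frequency-dependent phase shift."""
    y = np.zeros(len(x))
    x_prev = y_prev = 0.0
    for n in range(len(x)):
        y[n] = a * x[n] + x_prev - a * y_prev
        x_prev, y_prev = x[n], y[n]
    return y

def phaser(dry, stages=4, a=0.6):
    """Static phaser core: cascade several allpass stages, then mix
    with the dry path. Where the accumulated phase shift reaches 180
    degrees, dry and wet cancel, producing the characteristic notches."""
    wet = dry
    for _ in range(stages):
        wet = allpass1(wet, a)
    return 0.5 * (dry + wet)

noise = np.random.default_rng(0).standard_normal(2048)
notched = phaser(noise)
```

More stages mean more 180-degree crossings and therefore more notches, which matches the "more stages, more intense" behaviour described above.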

Flanger and phaser effects were originally created almost exclusively with electric guitars and synthesizers, and they are characteristic of rock and funk records; the seventies funk sound is distinctive largely thanks to flanging and phasing. For an excellent example of these effects, listen to "Shine On You Crazy Diamond" by Pink Floyd.

Dynamic Audio Effects

Distortion

Don't confuse the dynamic distortion effect with the unwanted audio distortion that can actually ruin a track. Distortion as an effect is applied to instruments like synthesizers and guitars, and exotic distorted sounds can be created with rackmount units, pedals, and VST plugins. Mixing engineers use distortion to make tracks fuller and more complex. Think of all those eighties rock tracks remixed in the nineties: those remix albums are loaded with distortion effects.

Technically speaking, distortion occurs when an audio circuit is overloaded, clipping the signal. In isolation it can be a bit overwhelming, but used creatively it fattens tracks up, giving them extra 'bite' or 'dirt' and making them sound grittier and fuller of energy.

Distortion effects are usually used on electric guitars and also on synth lead anthems in electronic music.
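That overload-and-clip behaviour is easy to see in code. A common digital approximation is soft clipping with a tanh curve, a generic waveshaping sketch rather than any specific pedal or plugin:

```python
import numpy as np

def soft_clip(signal, drive=4.0):
    """Soft clipping via tanh: `drive` pushes the signal into the
    curve's flat region, squashing peaks and adding the harmonics
    we hear as grit or 'dirt'. Output is normalized to stay in [-1, 1]."""
    return np.tanh(drive * signal) / np.tanh(drive)

x = np.linspace(-1, 1, 101)   # an input ramp from full negative to full positive
y = soft_clip(x)              # flattened near the extremes, steeper near zero
```

Plotting y against x shows the classic S-shaped transfer curve: small signals pass almost linearly, while peaks are rounded off instead of being sliced flat, which is why soft clipping sounds warmer than hard digital clipping.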

Compression

Compression is an effect that changes the dynamic range of a signal. In simple terms, compression makes the loud parts of an audio signal quieter and the quiet parts relatively louder, reducing the overall dynamic range of a track.

Compression is used to balance out the overall loudness of a track; it tames peaks that would otherwise clip and makes a track sound tighter. However, there is such a thing as over-compression, which accentuates noise and dulls the original audio. Applied correctly, compression makes tracks sound punchier rather than dull.
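The threshold-and-ratio behaviour at the heart of a compressor can be sketched as a static gain computer. Real compressors smooth the gain over time with attack and release controls, which this deliberately omits; the names and defaults are illustrative:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Static downward compressor: levels above the threshold are
    scaled down so that every `ratio` dB of input above the threshold
    becomes 1 dB of output. Levels below the threshold pass unchanged."""
    level_db = 20.0 * np.log10(np.maximum(np.abs(signal), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)             # gain reduction
    return signal * 10.0 ** (gain_db / 20.0)

loud, quiet = 1.0, 0.01
out_loud = compress(np.array([loud]))[0]    # 0 dB peak pulled down 15 dB
out_quiet = compress(np.array([quiet]))[0]  # -40 dB sample left alone
```

The loud sample comes out 15 dB quieter while the quiet one is untouched, so the gap between them shrinks; that shrinking gap is exactly the "reduced dynamic range" described above, and makeup gain can then raise the whole track.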

This effect can also be used to make audio louder in general. Compression has long been common in dance music: the pumping sound you hear in well-known dance records, such as Lady Starlight's "Untitled," is a result of this effect.

Filter Audio Effects

Audio filters alter the frequency content of a signal: a filter either attenuates (cuts) or amplifies (boosts) a range of frequencies. Filters are categorized as high-pass (HPF), low-pass (LPF), or band-pass (BPF), and each has a cutoff frequency; the category depends on whether the filter passes frequencies above the cutoff, below it, or in a band around it.

An audio filter is an important creative tool for a mixing engineer. Filters can significantly enhance a track as well as correct problems in the original recording; they can carve out space for new instruments in a mix, and they are essential for adding character and creating dramatic effects.
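The simplest possible example of the pass/attenuate behaviour is a one-pole low-pass filter. This is a textbook building block, not any specific plugin's design; the cutoff and test frequencies below are arbitrary illustrations:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sample_rate):
    """One-pole low-pass filter: frequencies well below the cutoff pass
    through; higher frequencies are increasingly attenuated."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)   # analog RC time constant
    alpha = dt / (rc + dt)                  # smoothing coefficient
    y = np.zeros(len(x))
    acc = 0.0
    for n in range(len(x)):
        acc += alpha * (x[n] - acc)         # exponential moving average
        y[n] = acc
    return y

sr = 8000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 50 * t)       # well below the 200 Hz cutoff
high = np.sin(2 * np.pi * 3000 * t)    # far above the cutoff
low_out = one_pole_lowpass(low, 200.0, sr)
high_out = one_pole_lowpass(high, 200.0, sr)
```

Running both tones through the same 200 Hz low-pass leaves the 50 Hz tone nearly untouched while the 3 kHz tone comes out heavily attenuated; a high-pass is the mirror image, and a band-pass combines the two.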

As you might have guessed, DJs love using filter effects to make interesting transitions. You've probably heard a DJ use a filter to momentarily cut the bass frequencies of a playing track before opening the filter back up to bring in the drop.

Where To Go From Here?

Now that you know the most commonly used audio effects and what they do, where do you go from here? You could launch your DAW and start throwing every effect you have onto your tracks, but mastering these effects takes time and experience.


Mixing, and knowing exactly when to reach for which effect, is as much art as science. Without audio effects, your music will sound bland. Of course, it's also important to use quality effect plugins with good audio processing; not all plugins are created equal, so it's best to try them out and trust your ears to make the judgement.

Beyond that, I'd recommend taking a course in music production and mixing to understand how these effects are used in practice.

At the end of the day, though, learning to use effects and to mix comes from real practice. As they say, practice makes perfect. Be creative and don't be afraid to experiment with audio effects, or to break the rules; who knows, you might develop your own signature sound or style as a music producer.

What effects are you currently using in the studio? Let me know in the comment section below as I’d love to hear from you.
