audio · podcasting · normalization · LUFS · audio levels

Audio Normalization for Podcasts: Best Practices

A practical guide to audio normalization for podcasters. Learn about LUFS standards, peak vs loudness normalization, and how to get consistent audio levels across episodes.

FileMuncher Team · March 12, 2026 · 10 min read

If you've ever listened to a podcast where one host is barely audible while the other blows out your eardrums, you've experienced the problem that audio normalization solves. It's one of the most important — and most misunderstood — steps in podcast production.

This guide covers what normalization actually does, the standards that matter, and how to get consistent audio levels across your episodes without destroying your dynamic range.

What Is Audio Normalization?

Audio normalization is the process of adjusting the overall level of an audio signal to a target value. That's the simple version. The nuance lies in how that adjustment is measured and applied.

There are two fundamentally different approaches:

Peak Normalization

Peak normalization finds the loudest single sample in your audio and adjusts the entire file so that sample hits a target level (usually 0 dBFS or just below it). Every other sample is adjusted by the same amount.

Example: Your loudest moment is at -6 dBFS. Peak normalization to -1 dBFS adds +5 dB to the entire file.

The problem: Peak normalization tells you nothing about how loud the audio sounds. A file with one loud spike and mostly quiet content will be normalized based on that spike, leaving the average level low. Two files peak-normalized to the same value can sound dramatically different in loudness.
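The arithmetic is a single uniform gain: the difference between the target and the measured peak, applied to every sample. A minimal sketch in Python (helper names are illustrative, not a library API):

```python
def peak_normalize_gain(peak_dbfs: float, target_dbfs: float = -1.0) -> float:
    """Gain (in dB) that moves the loudest sample to the target level."""
    return target_dbfs - peak_dbfs

def apply_gain(samples: list[float], gain_db: float) -> list[float]:
    """Scale every sample by the same linear factor (dB -> amplitude)."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

# The example from the text: loudest moment at -6 dBFS, target -1 dBFS.
print(peak_normalize_gain(-6.0))  # 5.0 (dB added to the entire file)
```

Note that nothing in this calculation looks at the rest of the file, which is exactly why peak normalization says nothing about perceived loudness.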

Loudness Normalization

Loudness normalization measures the perceived loudness of the entire file (or segments of it) using psychoacoustic models that approximate how human hearing works. It then adjusts the level so the perceived loudness hits a target value.

This is measured in LUFS (Loudness Units relative to Full Scale), and it's the standard that matters for podcasting.

Why it's better: Two files loudness-normalized to the same LUFS value will sound approximately the same volume to a listener, regardless of their peak levels or dynamic range.
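Once the integrated loudness is measured, the correction is again a single gain, because a 1 LU difference corresponds to a 1 dB gain change. A sketch with illustrative values:

```python
def loudness_normalize_gain(measured_lufs: float, target_lufs: float = -16.0) -> float:
    """Gain (in dB) that moves integrated loudness to the target.
    A 1 LU difference corresponds to a 1 dB gain change, so the
    correction is a plain subtraction."""
    return target_lufs - measured_lufs

# Two files with very different peaks, measured at -20 and -12 LUFS:
print(loudness_normalize_gain(-20.0))  # 4.0  (boost the quiet file)
print(loudness_normalize_gain(-12.0))  # -4.0 (cut the loud file)
```

The hard part is the measurement itself, not the correction; the gain math is identical to peak normalization, just anchored to a perceptual number.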

Understanding LUFS

LUFS is an internationally standardized measurement (ITU-R BS.1770) for perceived audio loudness. Unlike dBFS, which measures raw signal amplitude, LUFS accounts for how human ears perceive different frequencies at different volumes.

Key LUFS Measurements

Integrated LUFS — The average loudness across the entire file. This is the primary number you care about for podcast normalization.

Short-term LUFS — Loudness measured over a 3-second sliding window. Useful for identifying sections that are too loud or too quiet.

Momentary LUFS — Loudness measured over a 400ms window. The most granular measurement, showing real-time loudness.

True Peak (dBTP) — The actual peak level accounting for inter-sample peaks (peaks that occur between digital samples during playback). Important for preventing distortion.
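The difference between these measurements is mostly window length. The sketch below scans a file with a sliding RMS window, a rough stand-in for how short-term and momentary meters work; real LUFS meters also apply K-weighting and gating per ITU-R BS.1770, which this toy omits:

```python
import math

def windowed_levels_db(samples: list[float], sample_rate: int,
                       window_s: float) -> list[float]:
    """RMS level per sliding window, in dB: a rough stand-in for how
    short-term (3 s) and momentary (400 ms) meters scan a file. Real
    LUFS meters also apply K-weighting and gating (ITU-R BS.1770)."""
    win = int(sample_rate * window_s)
    hop = max(1, win // 4)  # 75% overlap between windows
    levels = []
    for start in range(0, max(1, len(samples) - win + 1), hop):
        chunk = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        levels.append(20 * math.log10(max(rms, 1e-12)))
    return levels

# One second of a full-scale 50 Hz sine, measured with a 400 ms window:
tone = [math.sin(2 * math.pi * 50 * n / 1000) for n in range(1000)]
print(round(windowed_levels_db(tone, 1000, 0.4)[0], 2))  # -3.01
```

Swapping the 0.4 s window for 3 s gives the short-term view; averaging (with gating) over the whole file gives the integrated value.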

Industry-Standard LUFS Targets

Different platforms have different loudness standards. Here's what matters for podcasters:

| Platform | Target LUFS | True Peak Max | Notes |
| --- | --- | --- | --- |
| Apple Podcasts | -16 LUFS | -1 dBTP | Apple's official recommendation |
| Spotify | -14 LUFS | -1 dBTP | Spotify normalizes to this internally |
| YouTube | -14 LUFS | -1 dBTP | YouTube applies normalization on playback |
| Amazon Music / Audible | -14 LUFS | -2 dBTP | Stricter true peak requirement |
| General podcast standard | -16 LUFS | -1 dBTP | Safe target for cross-platform distribution |

Why -16 LUFS Is the Safe Target

If you publish on multiple platforms, -16 LUFS is the recommended target for several reasons:

  1. Apple Podcasts recommends it. Apple is the largest podcast platform by listener hours, and they explicitly recommend -16 LUFS in their podcaster documentation.

  2. Platforms that target -14 LUFS will turn you down, not up. Spotify and YouTube reduce the volume of audio louder than their target but generally don't boost quieter audio. Publish at -16 LUFS and your episode plays at -16 on Apple and at -16 on Spotify (slightly below Spotify's -14 target, but left unadjusted). Publish at -14 LUFS and Apple listeners hear your show 2 dB louder than shows that follow Apple's -16 LUFS recommendation.

  3. More headroom. -16 LUFS gives you 2 dB more headroom for dynamic range, reducing the risk of clipping on playback.
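That turn-down-only behavior is easy to model. The function below is a simplified sketch (real platforms have per-setting and per-device exceptions), with illustrative values:

```python
def playback_gain(episode_lufs: float, platform_target_lufs: float) -> float:
    """Gain a turn-down-only platform applies at playback: audio louder
    than the target is reduced, quieter audio is left alone. A simplified
    model; real platforms have per-setting exceptions."""
    return min(0.0, platform_target_lufs - episode_lufs)

# Published at -16 LUFS on a -14 LUFS platform: left alone.
print(playback_gain(-16.0, -14.0))  # 0.0  -> plays at -16 LUFS
# Published at -12 LUFS: turned down by 2 dB.
print(playback_gain(-12.0, -14.0))  # -2.0 -> plays at -14 LUFS
```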

The Normalization Workflow for Podcasters

Here's a practical, step-by-step process for normalizing podcast audio. This assumes you've already done basic editing (removing mistakes, ums, long silences).

Step 1: Record at Proper Levels

Normalization fixes level consistency, not level quality. If your recording is too quiet (below -30 dBFS peaks) or clipping (hitting 0 dBFS regularly), normalization can't fully compensate.

Target recording levels: Peaks between -12 dBFS and -6 dBFS during normal speech. This gives you headroom for loud moments without the noise floor being too prominent.

Step 2: Apply Compression (Dynamic Range Control)

Before normalizing, apply gentle compression to reduce the gap between your loudest and quietest moments. Podcast speech typically benefits from:

  • Ratio: 2:1 to 4:1
  • Threshold: Around -18 to -24 dBFS
  • Attack: 10–20ms (fast enough to catch plosives)
  • Release: 100–200ms (natural-sounding recovery)

Compression makes normalization more effective because it brings the overall signal closer to a consistent level. Without compression, normalization might push your average loudness to -16 LUFS while leaving individual sections too quiet or too loud.
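The ratio and threshold above define a static gain curve: above the threshold, each dB of input produces only 1/ratio dB of output. A sketch (attack and release smoothing omitted, helper names illustrative):

```python
def compressor_gain_db(level_db: float, threshold_db: float = -20.0,
                       ratio: float = 3.0) -> float:
    """Static gain curve of a downward compressor: above the threshold,
    each dB of input yields only 1/ratio dB of output. Attack and
    release smoothing are omitted in this sketch."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: signal passes untouched
    excess = level_db - threshold_db
    return -(excess - excess / ratio)  # negative value = gain reduction in dB

# A -8 dBFS moment against a -20 dB threshold at 3:1: the 12 dB of
# excess becomes 4 dB, i.e. 8 dB of gain reduction.
print(compressor_gain_db(-8.0))  # -8.0
```

Note this example applies more reduction than the 3-6 dB recommended later in this guide; in practice you would raise the threshold or lower the ratio until loud peaks get only a few dB of taming.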

Step 3: Normalize to Your Target LUFS

This is where you apply loudness normalization to hit your target — typically -16 LUFS integrated with a true peak ceiling of -1 dBTP.

You can do this with FileMuncher's volume adjustment tool, which lets you adjust audio levels directly in your browser. For podcast episodes, set your target level and the tool applies the gain adjustment needed to reach it. No files are uploaded — processing happens locally on your device.

Step 4: Check True Peak Levels

After normalization, verify that true peaks don't exceed -1 dBTP. If they do, apply a limiter with a ceiling of -1 dBTP to catch any peaks that exceed the threshold.

True peaks above -1 dBTP can cause distortion on some playback systems, particularly when the audio is converted to lossy formats (MP3, AAC) for distribution.
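The ceiling itself is just the linear equivalent of the dBTP value. The hard-clip sketch below shows where -1 dB sits in sample terms; a real brickwall limiter uses lookahead gain reduction and oversampled true-peak detection rather than clipping:

```python
def limit_to_ceiling(samples: list[float], ceiling_db: float = -1.0) -> list[float]:
    """Hard-clip samples to the ceiling. Shows where the -1 dB ceiling
    sits in linear terms; a real brickwall limiter uses lookahead gain
    reduction and oversampled true-peak detection instead of clipping."""
    ceiling = 10 ** (ceiling_db / 20)  # -1 dB is about 0.891 linear
    return [max(-ceiling, min(ceiling, s)) for s in samples]

print([round(s, 3) for s in limit_to_ceiling([0.95, -0.5, 1.2])])
# [0.891, -0.5, 0.891]
```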

Step 5: Export in the Right Format

For podcast distribution, the standard format is:

  • MP3 at 128 kbps CBR (mono) or 192 kbps CBR (stereo) for broad compatibility
  • AAC at 128 kbps for Apple-first distribution (better quality per bitrate)
  • Sample rate: 44.1 kHz
  • Bit depth: 16-bit (for the final export — edit at 24-bit or higher)

Normalizing Multi-Track Recordings

Podcast interviews and multi-host shows present a specific challenge: each track (each microphone) has different recording levels and characteristics.

The Wrong Way

Recording all hosts into a single track, then normalizing the mix. This means one host's level is locked relative to the other — if one is louder, they'll always be louder.

The Right Way

  1. Record each host on a separate track. Every remote recording tool (Riverside, SquadCast, Zencastr) and local recording setup should capture separate tracks per microphone.

  2. Process each track independently. Apply noise reduction, compression, and normalization to each track separately. This ensures each host sounds consistent regardless of their microphone, room acoustics, or speaking volume.

  3. Normalize each track to -16 LUFS. When each track is independently normalized, mixing them together produces balanced audio where every voice is at a comparable level.

  4. Mix and check the final output. After mixing the normalized tracks, check the combined output's integrated LUFS. Mixing multiple -16 LUFS tracks will produce a louder combined signal. Adjust the mix level so the final output is also at -16 LUFS.
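The "louder combined signal" in step 4 follows from power summation: two uncorrelated tracks at equal loudness sum to roughly 3 dB hotter (10 * log10(2)). The estimate below is rough, since co-hosts mostly alternate rather than speak at once, so always measure the real mix:

```python
import math

def summed_loudness_lufs(track_lufs: list[float]) -> float:
    """Estimated loudness of uncorrelated tracks mixed together: convert
    each to linear power, sum, convert back. Only an estimate, since
    co-hosts mostly alternate rather than speak at once; always measure
    the real mix."""
    power = sum(10 ** (lufs / 10) for lufs in track_lufs)
    return 10 * math.log10(power)

# Two hosts, each normalized to -16 LUFS:
print(round(summed_loudness_lufs([-16.0, -16.0]), 2))  # -12.99
```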

If you need to combine processed audio segments, FileMuncher's audio merge tool lets you join multiple audio files into a single file directly in the browser.

Common Normalization Mistakes

Normalizing Before Editing

If you normalize first and then cut sections, add music, or apply effects, your loudness will change. Always normalize as one of the last steps in your workflow.

Using Peak Normalization Instead of Loudness Normalization

Peak normalization (to 0 dBFS or -1 dBFS) is a legacy approach. It doesn't account for perceived loudness and won't give you consistent levels across episodes. Always use loudness normalization (LUFS-based).

Over-Compressing Before Normalizing

Heavy compression makes audio louder but also flattens dynamics, making speech sound unnatural and fatiguing to listen to over long episodes. Aim for 3–6 dB of gain reduction from compression — enough to tame peaks without squashing the life out of the conversation.

Ignoring the Noise Floor

Normalization amplifies everything, including background noise. If your recording has a noticeable noise floor (air conditioning hum, room reverb, computer fan), normalization will make it louder. Apply noise reduction before normalization.

Not Trimming Dead Air

Long silences at the beginning or end of a file can skew integrated LUFS measurements. Trim your audio before measuring and normalizing. FileMuncher's audio trim tool handles this quickly — drop your file in, set start and end points, and export the trimmed version.

Checking Your Levels After Export

After exporting your final episode, verify the loudness meets your target. Several approaches work:

FFmpeg (command line):

ffmpeg -i episode.mp3 -af loudnorm=print_format=json -f null -

This prints the file's integrated LUFS, true peak, and loudness range to stderr as a JSON block.
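If you script your loudness checks, a small parser can pull that stats block out of ffmpeg's output. The values below are illustrative, but the key names (input_i, input_tp, and so on) are the ones the loudnorm filter emits:

```python
import json
import re

def parse_loudnorm_stats(ffmpeg_stderr: str) -> dict:
    """Pull the JSON block that loudnorm's print_format=json writes at
    the end of ffmpeg's stderr output."""
    match = re.search(r"\{[^{}]*\}\s*$", ffmpeg_stderr.strip())
    if not match:
        raise ValueError("no loudnorm JSON found in ffmpeg output")
    return json.loads(match.group(0))

# Illustrative stderr tail; the key names are the ones loudnorm emits.
stderr_tail = """
[Parsed_loudnorm_0 @ 0x55d1]
{
    "input_i" : "-18.43",
    "input_tp" : "-2.10",
    "input_lra" : "6.80",
    "input_thresh" : "-28.90"
}
"""
stats = parse_loudnorm_stats(stderr_tail)
print(float(stats["input_i"]))  # -18.43, the file's integrated LUFS
```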

Dedicated loudness meters: Tools like Youlean Loudness Meter (free) or iZotope Insight provide real-time LUFS monitoring during playback.

Platform-specific validators: Spotify's Loudness Check and Apple's afinfo tool can verify compliance with platform targets.

Loudness Across Episodes: Consistency Matters

Individual episode normalization is important, but consistency across episodes matters just as much. If Episode 12 is at -16 LUFS and Episode 13 is at -19 LUFS, listeners will notice and reach for the volume knob.

Tips for cross-episode consistency:

  1. Use the same signal chain. Apply the same compression settings, EQ, and normalization target to every episode. Save these as presets in your editing software.

  2. Spot-check against previous episodes. Before publishing, compare the loudness of your new episode against the last 2-3 episodes. They should be within 1 LU (one loudness unit) of each other.

  3. Keep a loudness log. Track the integrated LUFS, true peak, and loudness range of each episode. This helps you catch drift over time and identify when your recording setup has changed.
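A loudness log also makes the drift check automatable. The sketch below flags episodes that stray more than 1 LU from the median of the logged values (the log format here is hypothetical):

```python
def check_episode_consistency(log: dict[str, float],
                              tolerance_lu: float = 1.0) -> list[str]:
    """Flag episodes whose integrated loudness strays more than the
    tolerance from the median of the logged values. The log format
    (episode name -> integrated LUFS) is hypothetical."""
    values = sorted(log.values())
    median = values[len(values) // 2]
    return [name for name, lufs in log.items()
            if abs(lufs - median) > tolerance_lu]

log = {"ep11": -16.1, "ep12": -15.8, "ep13": -19.0}
print(check_episode_consistency(log))  # ['ep13']
```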

Normalization for Different Podcast Formats

Solo Shows (Single Microphone)

The simplest case. Record, edit, compress, normalize to -16 LUFS, export. One track, one processing chain.

Interview Shows (Two Tracks)

Process each track independently as described in the multi-track section above. Pay special attention to balancing the two voices — guests on remote connections typically have different audio quality than the host.

Narrative / Produced Shows (Multiple Sources)

Shows with narration, interviews, sound effects, and music require more nuanced mixing. Normalize dialogue tracks to -16 LUFS, then mix music and effects relative to the dialogue (typically 10–20 dB below dialogue level). Check the final mix's integrated LUFS.

Live / Unedited Shows

If you can't do post-production, apply a loudness normalizer and limiter in your recording chain (real-time processing). Hardware or software processors like the dbx 286s or Rode Rodecaster handle this automatically.

Frequently Asked Questions

What happens if I don't normalize my podcast audio?

Your episodes will have inconsistent volume levels — both within episodes and across episodes. Listeners will constantly adjust their volume, which creates a poor listening experience. Podcast apps with built-in normalization (like Apple Podcasts) will attempt to compensate, but the results are less precise than doing it yourself.

Can normalization fix a badly recorded episode?

It can fix level inconsistency, but it can't fix fundamental recording problems like clipping (distortion from levels that were too hot), excessive room reverb, or a high noise floor. These problems need to be addressed at the recording stage or with specialized restoration tools.

Should I normalize in mono or stereo?

Most podcasts should be published in mono. Stereo doubles the file size with no benefit for spoken word content — listeners don't perceive stereo positioning in speech. The exception is shows with music or sound design where stereo enhances the experience.

Does my DAW's export normalization work, or do I need a separate tool?

Most DAWs (Audacity, Reaper, Logic, Adobe Audition) include loudness normalization in their export settings. These work well. The key is ensuring they're using loudness normalization (LUFS-based), not peak normalization.


Adjust your podcast audio levels now — browser-based loudness adjustment with no upload, no account, no watermark.

Try it yourself — free

All FileMuncher tools run in your browser. No signup, no uploads, no file size limits.