Frequently Asked Questions

Other stem splitters support fewer instruments, offer less reliable presets, and give you less flexibility. They fail when you want to extract unusual instruments or FX sounds. SAM Audio lets you describe any sound with text, which means it can handle exotic and borderline alien sounds with surprising accuracy.
Mastering is what makes your music sound good on any listening device. It also makes sure you match your genre's loudness and tone so your audience recognizes your track. Mastering needs to be done smartly to comply with streaming platform rules, and it is usually done by a second pair of ears. Neural Analog speeds up this process using algorithms.
For the best results: Restore first, then Master. Restoration rebuilds the high-end frequencies and removes muddiness. Mastering then takes that clean, full-range signal and optimizes it for commercial loudness.
100%. Neural Analog is a processing tool. If you owned the rights before, you own them after. It's like asking "is this still my shirt?" after picking it up from the dry cleaner.
You can try every feature for free to see how it sounds. These tools use beefy GPUs, so processing full tracks requires a paid subscription plan.
Yep. Your paid plan remains active until the next billing cycle. Payments are handled by Stripe.
Absolutely. You can use DAW exports, recordings, samples, or whatever. Neural Analog is especially good at finishing AI-generated tracks, but the tech works on any audio signal that needs that extra polish.
The SAM Audio playground is limited to 30-second mono clips. To process full songs and stereo audio with SAM Audio, use Neural Analog instead.
That depends on the processing, but most tracks take about a minute. Longer tracks can take more time, even on fast GPUs. You can leave the page and come back later.
With the custom prompt option, you can describe any instrument you want to isolate (e.g., "saxophone", "synthesizer", "electric guitar"). The SAM Audio model uses your description to separate that specific sound from the mix, giving you flexibility beyond standard presets.
LUFS (Loudness Units relative to Full Scale) is the standard measurement for perceived audio loudness. Spotify, Apple Music, and YouTube automatically adjust every song to around -14 LUFS. If your track is louder, they turn it down. If it's quieter, they leave it as is—making it sound weak. Matching this target gives you consistent playback and maximum competitive loudness.
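That normalization step comes down to simple dB arithmetic. Here is a minimal sketch, assuming the platform turns loud tracks down but leaves quiet tracks alone (the function name and the clamping behavior are illustrative, not any platform's documented algorithm):

```python
def normalization_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a streaming platform would apply to hit its loudness target.

    Loud tracks are turned down to the target; quiet tracks are left as is,
    so positive gains are clamped to zero here (an illustrative assumption).
    """
    gain = target_lufs - track_lufs
    return min(gain, 0.0)

# A hot master at -8 LUFS gets turned down by 6 dB:
print(normalization_gain_db(-8.0))   # -6.0
# A quiet -20 LUFS track is left alone and just sounds weaker in a playlist:
print(normalization_gain_db(-20.0))  # 0.0
```

This is why mastering close to the -14 LUFS target matters: anything above it is discarded by normalization, and anything below it plays back quieter than neighboring tracks.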
Free audio editors or basic volume boosts simply turn everything up, which causes digital clipping, ruins your dynamic range, and makes harsh frequencies even harsher. Streaming platforms detect this distortion and will penalize your track's quality. Proper mastering with Neural Analog does much more than just increase volume: it uses smart limiting to reach commercial loudness without clipping, applies dynamic Match EQ to fix muddy or harsh tonal balance, and can even restore missing high frequencies that AI generators leave out.
The mastering process preserves your original creative intent. Your track will sound clearer, more present, and competitively loud without sounding squashed or distorted.
Yes. However, for AI-generated MP3s, you can achieve even better results by first using Neural Analog's Audio Restoration service to rebuild missing frequencies, then mastering the restored file. This two-step process gives you the highest possible quality.
No. Audio Restoration rebuilds missing frequencies from compressed audio (like 16kHz cutoffs). Auto mastering optimizes loudness and dynamics for streaming. For AI-generated music, using both services produces the highest quality results.
Matchering is a reference-based mastering method. You provide a target track and a reference track, then the system aligns tonal balance and loudness so your mix is closer to the reference character.
No. Static EQ presets apply the same curve to every song. Matchering adapts to the specific reference audio, so the resulting EQ move is context-aware.
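The "context-aware" part can be sketched numerically. Assuming per-band RMS levels in dB have already been measured for both tracks, a reference-matched EQ move is just the per-band difference, usually clamped so an odd reference cannot force extreme boosts. The function name and clamp value below are illustrative, not Matchering's actual implementation:

```python
def match_eq_gains_db(track_bands_db, reference_bands_db, max_gain_db=6.0):
    """Per-band gains that nudge the track's tonal balance toward the reference.

    Each gain is the reference level minus the track level, clamped to
    +/- max_gain_db so the move stays musical.
    """
    gains = []
    for track_level, ref_level in zip(track_bands_db, reference_bands_db):
        gain = ref_level - track_level
        gains.append(max(-max_gain_db, min(max_gain_db, gain)))
    return gains

# Track is 3 dB muddier in the low-mids and 2 dB duller on top than the reference:
print(match_eq_gains_db([-12, -9, -18], [-12, -12, -16]))  # [0, -3, 2]
```

Swap in a different reference and the same track gets a different curve, which is the difference from a static preset.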
No. You can run Match EQ in the browser on Neural Analog by uploading your track and optional reference track, then exporting the mastered file.
Use a commercially released song in the same genre, tempo range, and vibe as your target. A bad reference can force the wrong tone onto your track.
Yes. Humans hear frequencies between 20 Hz and 20 kHz, and even low-quality audio hardware, like smartphone speakers, can easily play these frequencies.
Unfortunately, hearing degrades with age and with exposure to loud sounds. For example, the famous high-frequency "Mosquitone" (17.8 kHz) is clearly audible to younger people but notoriously difficult for adults to hear.
However, even if not directly heard, high-frequency content is crucial for the transient response and "spatial feel" of audio, which impacts how you perceive quality even if you don't consciously hear a sine wave at that pitch.
Simple WAV conversion does not add missing detail. Conversion in tools like Audacity mostly repackages existing samples through interpolation, while True Audio Restoration uses generative AI (similar to image super-resolution) to predict and insert missing detail, recovering dynamic range and "air" lost to compression.

Restoration and mastering solve different problems. Many mastering services rely on multiband compression to boost or compress existing frequencies, so mastering without restoration can amplify artifacts instead of fixing the root quality issue.

Once your audio is restored, Automatic Mastering can polish it for professional release, with intelligent loudness optimization tailored to your track.

Audio restoration analyzes spectral content and removes lossy-compression "chirps" and "warbles", replacing them with coherent harmonic content.

That's okay! Restoration is not a silver bullet. Neural Analog offers other tools such as stem splitting and mastering to power your music creation. This enables a hybrid production workflow where you can replace low-quality AI elements entirely.
No, it does not just add noise. Unlike enhancers that layer white noise, the generative model reconstructs the clean signal that should be there by separating useful harmonic and transient structure from compression artifacts and other degradations.
You can Restore Audio for free on a sample. The models run on beefy GPUs, so for longer audio, you'll need a paid subscription.
Neural MP3 upscaling reconstructs coherent signal, not noise. It removes compression artifacts while rebuilding harmonic content. The result is cleaner than the original MP3, with measurable improvement in SNR and spectral flatness.
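Spectral flatness is a standard measure you can compute yourself: the ratio of the geometric mean to the arithmetic mean of a power spectrum. Values near 1.0 indicate noise-like content, values near 0.0 indicate tonal content. A pure-Python sketch (the interpretation comment reflects the common reading of this metric, not a Neural Analog internal):

```python
import math

def spectral_flatness(power_spectrum):
    """Geometric mean over arithmetic mean of a power spectrum (0..1).

    Broadband hiss and compression artifacts push flatness toward 1.0,
    so a drop after processing suggests noise was replaced with
    coherent harmonic content.
    """
    n = len(power_spectrum)
    log_sum = sum(math.log(p) for p in power_spectrum)
    geometric_mean = math.exp(log_sum / n)
    arithmetic_mean = sum(power_spectrum) / n
    return geometric_mean / arithmetic_mean

print(spectral_flatness([1.0, 1.0, 1.0, 1.0]))       # 1.0 (flat, noise-like)
print(spectral_flatness([100.0, 0.01, 0.01, 0.01]))  # near 0 (one tonal peak)
```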
Test MP3 upscaling on short samples for free. For full tracks and stems, paid subscriptions cover GPU processing costs.
UniverSR is the most compute-heavy restoration model in Neural Analog. It uses much more GPU time than the standard upscalers, so it is reserved for Pro users and above for longer music, foley, and sound-effect restoration jobs.
UniverSR is a heavier and more capable restoration model. It is better suited to difficult degraded audio, including foley, sound effects, music, voice, and other narrowband material. AudioSR is lighter and still useful for classic high-frequency regeneration workflows, but UniverSR generally gives stronger results on harder inputs at a much higher compute cost.
UniverSR works on more than music. It can help with foley, retro game sounds, sound effects, voice, archival recordings, and other degraded or low-bandwidth audio. Results still depend on the source, so the best way to evaluate a difficult file is to try a representative excerpt.
Paste your Udio song URL into the Udio Downloader input field. After processing, you get a direct download to the file generated by Udio's AI.
Paste a public Udio playlist link in the Udio Downloader to import the tracks. You can download Udio playlist songs one by one, or use batch download to grab them all at once (Subscribers only).
Paste a public Udio creator profile link to download your Udio library. This imports the creator page and prepares your tracks for download, up to 100 at a time.
Paste your Suno song URL into the Suno Downloader input field. After processing, you get a direct download to the file generated by Suno's AI.
Paste a public Suno playlist link in the Suno Downloader to import the tracks. You can download Suno playlist songs one by one, or use batch download to grab them all at once (Subscribers only).
Paste a public Suno creator profile link to download all your Suno songs. This imports the creator page and prepares your Suno library for download, up to 100 tracks at a time.
Use a Suno playlist or creator page link to batch download Suno songs. Batches of up to 100 tracks at a time are currently supported.
Paste your Mureka song URL into the Mureka Downloader input field. After processing, you get a direct download to the file generated by Mureka's AI.
Paste a public Mureka playlist link in the Mureka Downloader to import the tracks. You can download Mureka playlist songs one by one, or use batch download to grab them all at once (Subscribers only).
Paste a public Mureka creator profile link to download all your Mureka songs. This imports the creator page and prepares your Mureka library for download, up to 100 tracks at a time.
Use a Mureka playlist or creator page link to batch download Mureka songs. Batches of up to 100 tracks at a time are currently supported.
Paste your Producer.ai song URL into the Producer.ai Downloader input field. After processing, you get a direct download to the file generated by Producer.ai.
Paste your Sonauto song URL into the Sonauto Downloader input field. After processing, you get a direct download to the file generated by Sonauto.
Not yet. Sonauto import currently supports public song links only. Paste one song URL at a time in the Sonauto Downloader.
Paste a public Producer.ai playlist link in the Producer.ai Downloader to import the tracks. You can download Producer.ai playlist songs one by one, or use batch download to grab them all at once (Subscribers only).
Currently, Producer.ai creator profiles do not list songs, which makes automatic imports impossible. To download your tracks, use individual song links or add them to a playlist, then paste that playlist link in the Producer.ai Downloader.
Use a Producer.ai playlist link to batch download Producer.ai songs. Batches of up to 100 tracks at a time are currently supported.
You can download audio from Udio, Suno, Mureka, Producer.ai, and Sonauto. Need another platform? Join Discord and ask.
Yes. By default, you get the standard MP3. Neural Analog includes an Audio Restoration feature that uses AI to reconstruct frequencies lost during compression, turning your 16kHz MP3 into a high-fidelity 20kHz+ WAV file. Open the Audio Restoration page.
Yes. By default, you get the standard MP3. Neural Analog includes an Audio Restoration feature that uses AI to reconstruct frequencies lost during compression, turning your 16kHz MP3 into a high-fidelity 20kHz+ WAV file. Open the Audio Restoration page.
Udio Downloader allows you to split your downloaded track into multiple stems (Vocals, Drums, Bass, etc.) using the stem extraction engine. Open the Stem Separation Guides.
Suno Downloader allows you to split your downloaded track into multiple stems (Vocals, Drums, Bass, etc.) using the stem extraction engine. Open the Stem Separation Guides.
The Udio Downloader automatically embeds the Song Title, Artist Name (your username), and the generated Cover Art directly into the file. This means when you put the file on your phone or iPod, it looks like a real released track.
The Suno Downloader automatically embeds the Song Title, Artist Name (your username), and the generated Cover Art directly into the file. This means when you put the file on your phone or iPod, it looks like a real released track.
Yes. By default, you get the standard MP3. Neural Analog includes an Audio Restoration feature that uses AI to reconstruct frequencies lost during compression, turning your 16kHz MP3 into a high-fidelity 20kHz+ WAV file. Open the Audio Restoration page.
Mureka Downloader allows you to split your downloaded track into multiple stems (Vocals, Drums, Bass, etc.) using the stem extraction engine. Open the Stem Separation Guides.
The Mureka Downloader automatically embeds the Song Title, Artist Name (your username), and the generated Cover Art directly into the file. This means when you put the file on your phone or iPod, it looks like a real released track.
Yes. By default, you get the standard MP3. Neural Analog includes an Audio Restoration feature that uses AI to reconstruct frequencies lost during compression, turning your 16kHz MP3 into a high-fidelity 20kHz+ WAV file. Open the Audio Restoration page.
Producer.ai Downloader allows you to split your downloaded track into multiple stems (Vocals, Drums, Bass, etc.) using the stem extraction engine. Open the Stem Separation Guides.
The Producer.ai Downloader automatically embeds the Song Title, Artist Name (your username), and the generated Cover Art directly into the file. This means when you put the file on your phone or iPod, it looks like a real released track.
The data requirements vary depending on the model you choose:
  • DDSP: Approximately 10-15 minutes of clean, monophonic recordings, ideally with MIDI transcription (or audio that is easy to transcribe). Best for single instrument timbres.
  • RAVE: 2-3 hours of clean, high-quality coherent audio (single style). Works with diverse sound types and can handle more complex timbres.
  • AFTER: Typically more than 1 hour of audio samples for good results, ideally with MIDI transcription (or audio that is easy to transcribe). Supports polyphonic content.

Make sure you own the rights to use the audio as training data. You agree to be solely responsible for copyright compliance and any resulting damages.

For detailed best practices, check the documentation for each model: RAVE, AFTER, DDSP.
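Before uploading, it can help to verify how much audio you actually have. Here is a minimal sketch using only Python's standard library; the minimum-duration thresholds simply restate the figures above, and the function names are illustrative:

```python
import wave
from pathlib import Path

# Rough minimums restated from the guidance above, in minutes.
MIN_MINUTES = {"DDSP": 10, "AFTER": 60, "RAVE": 120}

def total_wav_minutes(folder: str) -> float:
    """Sum the duration of every .wav file in a folder, in minutes."""
    total_seconds = 0.0
    for path in Path(folder).glob("*.wav"):
        with wave.open(str(path), "rb") as wav_file:
            total_seconds += wav_file.getnframes() / wav_file.getframerate()
    return total_seconds / 60.0

def models_with_enough_data(folder: str):
    """List the model types whose rough data minimum is met."""
    minutes = total_wav_minutes(folder)
    return [model for model, need in sorted(MIN_MINUTES.items()) if minutes >= need]
```

Run it on your dataset folder before starting a training job to avoid paying GPU time for a model that is starved of data.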

You do! Any model you train using the platform is 100% yours, as long as you own the training data (be responsible).
The models are trained using high-performance GPU pipelines. The training process includes:
  • Preprocessing your audio data to extract relevant features
  • Data augmentation using advanced AI models to improve robustness
  • Multi-stage training with optimized hyperparameters for each model type
  • Quality validation to ensure your model meets performance standards

All of this happens automatically in the cloud—you just upload your data, monitor with the available tools, and wait for the results.

Each model type requires its own VST/AU plugin to run in your DAW:
  • RAVE: Use Neutone FX VST, IRCAM RAVE VST, or nn~ for Max/MSP and PureData
  • AFTER: Max for Live devices (Ableton only) or nn~ for Max/MSP
  • DDSP: Neutone FX VST or the Magenta DDSP VST

For detailed setup instructions, check the documentation for each model: RAVE, AFTER, DDSP.

Training time depends on the model complexity and size. On a single high-end GPU:
  • DDSP: Approximately 6 hours
  • RAVE: Approximately 12 hours
  • AFTER: Approximately 24 hours

You'll receive email notifications when your model is ready for download. Training happens in the background, so you can close your browser and come back later.

Suno's output loudness varies based on track dynamics and generation randomness, and many tracks come out around -18 to -22 LUFS while streaming platforms normalize near -14 LUFS. You can use auto mastering to raise loudness for consistent playback while maintaining audio quality.
First, get your track sounding professional: restore audio quality and master it to at least -14 LUFS using Neural Analog. Then use a distributor like DistroKid, TuneCore, or CD Baby to upload to Spotify. Make sure you have the rights to the music before distributing.
Suno outputs compressed MP3s with frequencies often cut off at 16kHz. Neural Analog's restoration uses AI to rebuild missing frequencies up to 20kHz and remove compression artifacts. After restoration, auto-mastering optimizes loudness. The combo gives you release-ready quality.
Suno provides basic stem extraction (drums, guitars, bass, vocals, background vocals, keys, synth). Neural Analog goes further by using SAM Audio AI to let you describe and isolate ANY specific instrument, like "saxophone," "electric piano," or "synth lead."
Simply describe the instrument you want to isolate (e.g., "acoustic guitar," "violin," "synthesizer"). SAM Audio analyzes your Suno track and intelligently separates that specific sound from the mix - something impossible with standard stem splitters.
For standard instruments, yes, thanks to best-in-class models trained on thousands of references. Aside from very limited artifacts and low bleed, the results are complete. For exotic, weird, alien sounds, SAM Audio is the best model available. Nothing will be as good as recording the instruments live yourself, but it will let you make remixes and push your creative boundaries further.
Yes. Cancel your subscription at any time from your account settings. Payments and billing are handled by Stripe. If you cancel, your current plan access will continue until the end of your current billing period. Note that payments are not refundable. Try the features first with the Free plan and use monthly billing for maximum flexibility.
Yes. You can try all features for free on a sample of your audio. This lets you hear exactly how your audio would sound with these tools.
Yes.
  • Upgrades (e.g. Plus to Pro, Pro to Max): Instant. You pay only the prorated difference. You get access right away to increased quotas.
  • Downgrades & Interval Changes: Take effect at the end of your current billing cycle. You stay on your current plan until then.
Dereverb removes room echo so your audio sounds drier and clearer. It reduces room reflections and reverb tails so the direct signal stands out.
AI dereverb separates room echo from the source so you hear a cleaner recording. It uses deconvolution and time-based analysis to estimate the original anechoic signal and un-mix room reflections from direct sound.
Yes, it can remove slapback and similar echoes. The models detect different spatial reflections, including long reverb tails and distinct single-bounce slapback delays.
A noise gate cuts audio, while AI dereverb cleans echo. A gate mutes below a threshold and can chop natural decay, while dereverb attenuates reverb components and preserves vocal decay and transients.
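The difference is easy to see with a toy gate. This naive per-sample version (real gates use envelope followers and attack/release times, so this is a deliberate simplification) shows how a hard threshold truncates a natural decay:

```python
def noise_gate(samples, threshold=0.1):
    """A naive gate: mute any sample whose magnitude is below the threshold.

    This hard cutoff is what chops a natural reverb tail; dereverb instead
    attenuates the reverb component while keeping the direct signal's decay.
    """
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A decaying tail gets truncated the moment it falls under the threshold:
print(noise_gate([0.8, 0.4, 0.2, 0.09, 0.05]))  # [0.8, 0.4, 0.2, 0.0, 0.0]
```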
Yes, you can keep only the reverb trail. The 'Dereverb' preset outputs two stems: a dry stem and a reverb-only stem containing room reflections and delays.
When you extract the stem with Neural Analog, you'll receive both the separated music track and the isolated crowd noise track. You can keep just the crowd noise if that's what you need.
It can clean up loud live recordings. The model separates audience noise (cheering, clapping, talking) from musical content to isolate a cleaner music track.
Using SAM Audio, you get both tracks: the track with the bird sound, and the track without. Bird-noise extraction returns both outputs: a cleaned music stem and an isolated bird-only ambience stem, so you can keep either one.
Yes. SAM Audio is prompt-based, so you can target sounds like 'animal noises', 'bird noises', 'cuckoo calls', insects, or similar ambience layers. Results depend on how much those sounds overlap with vocals and instruments.
Mid/Sides processing helps preserve stereo width while isolating centered and side information more cleanly. This usually keeps instruments natural and avoids collapsing the stereo field during denoising.
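The underlying encode/decode is simple arithmetic: the mid channel is the average of left and right, the sides channel is half their difference. A per-sample sketch in pure Python (a real pipeline processes whole buffers, and the function names are illustrative):

```python
def ms_encode(left: float, right: float):
    """Split a stereo sample pair into mid (center) and sides (width) parts."""
    mid = (left + right) / 2.0
    sides = (left - right) / 2.0
    return mid, sides

def ms_decode(mid: float, sides: float):
    """Reconstruct left/right from mid/sides; the round trip is lossless."""
    return mid + sides, mid - sides

# A centered vocal (identical in both channels) lands entirely in the mid channel:
print(ms_encode(0.5, 0.5))   # (0.5, 0.0)
# Denoising the mid channel alone therefore leaves the stereo sides untouched:
print(ms_decode(0.25, 0.0))  # (0.25, 0.25)
```

Because centered content isolates into the mid channel, processing it there avoids collapsing the stereo field.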
Yes. Extract with a bird/ambience-removal prompt to get a cleaner stem with only instruments and vocals emphasized, while the ambience is routed to a separate stem.
Prompting alone usually does not remove drums fully because Suno's generation objective prioritizes overall song quality. You can use AI drum removal when you need dedicated percussion isolation.
Yes, there is a model for that. Start by splitting out the drums track, then split that track again with the '5 drums tracks' model to get hihats, snare, kick, drums, and cymbals. Mute or change the volume of individual tracks for precise edits.
Yes, it can address that issue. The model is tuned for AI-generated audio from platforms like Suno and Producer.ai, improving percussion separation.
Neural Analog uses modern models built for high-quality stem separation. Architectures include BS-Roformer, MelBand-Roformer, MDX23, and SAM Audio. These models outperform older tools like Spleeter or Demucs by using advanced signal processing and deep learning. Learn more in the Stem Separation Guides.
Dolby Atmos is an immersive audio format that places sounds in 3D space instead of only left and right stereo. This adds width, depth, and height so a mix feels bigger and more detailed.
ADM BW64 is the main exchange format for Dolby Atmos projects. It stores both the audio and the metadata needed for immersive playback, so sessions move cleanly between tools and delivery steps.
No. Mapping is automatic. Stems and reverb layers are detected, then routed to appropriate Atmos positions without manual channel assignment.
Yes, when the source is compressed or artifacted. Restoring first can recover clarity and high-end detail, giving cleaner stems and better spatial results in the final Atmos upmix.
Binaural tags control how Atmos elements translate on headphones, so people with headsets like Apple AirPods can listen to it and still hear a clear immersive image. Good tagging keeps vocals focused while ambience and reverb feel wider.
The goal is to preserve artistic intent while opening the mix in 3D. Stem-first processing and vocal-fidelity pipelines help keep lead vocals stable, protect punch, and avoid washed-out results.
Most upmixing services do not split stems. They mainly add reverb and widen the stereo image, which can make a mix sound muddy and reduce punch.

This workflow first splits stems, separates reverb, then automatically places each part in the right Atmos space. The result is cleaner, more defined, and more faithful to the original artistic intent.

Delivery stays aligned with Dolby Atmos standards through ADM BW64 export, binaural tags, and mastering-friendly defaults.

Specialized vocal-fidelity pipelines help keep lead vocals clear and stable instead of washed out.

Many alternatives are built for audio professionals and come with complex setup. This process stays simple: upload, process, export.

Many AI-generated tracks sound capped and artifacted because the training data is often lossy MP3. Models learn 16kHz limits and compression artifacts from that data, and MP3 upscaling restores toward a full 20kHz spectrum while reducing learned artifacts.
Yes, upscaling can make muffled stems sound more open. Many stem splitters use a "neural audio encoder" capped at 16kHz for speed, and MP3 upscaling restores frequencies up to 20kHz before you apply effects or layering.
Yes, upscale first when your source is MP3. Upscaling before time-stretching, pitch-shifting, or heavy processing reduces artifact amplification and preserves headroom for manipulation.
Upscaling and mastering do different jobs. Upscaling reconstructs missing data and reduces MP3 artifacts, while mastering applies EQ, compression, and loudness optimization.

Always upscale MP3 files first, then master the restored WAV for professional release.

Once your audio is restored, the Automatic Mastering tool can polish it for professional release, with intelligent loudness optimization tailored to your track.

Yes, it can work on old codecs and low bit-depth audio. Processing is based on raw signal rather than a specific codec, so trying a sample is the best way to evaluate performance.

Free users' imports from AI music platforms can get featured in the "Latest Uploads" section of the homepage. Pro users can choose to keep their tracks private.

Note that only tracks already public on AI music platforms can be imported (otherwise the robot can't scrape them).

"Latest uploads" help promote the service and create visibility for artists. If privacy is a concern, get the Pro plan.

Yes. Support depends on the model. UniverSR, AudioSR, and FlashSR upscale low-resolution audio to 48 kHz (super resolution). The 'Music Upscaler' restoration algorithm keeps 48 kHz sources at 48 kHz, and restores 44.1 kHz-or-lower sources at 44.1 kHz.
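The 'Music Upscaler' rule can be restated as a small helper. The function name is illustrative, and the behavior for source rates between 44.1 and 48 kHz is an assumption on our part, not documented behavior:

```python
def music_upscaler_output_rate(source_rate_hz: int) -> int:
    """Output sample rate of the 'Music Upscaler' algorithm, per the rule above:
    48 kHz sources stay at 48 kHz; 44.1 kHz-or-lower sources come out at 44.1 kHz.
    (Rates between the two thresholds are mapped down here as an assumption.)
    """
    return 48_000 if source_rate_hz >= 48_000 else 44_100

print(music_upscaler_output_rate(48_000))  # 48000
print(music_upscaler_output_rate(22_050))  # 44100
```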
Yes, you can run restoration more than once. You can change the selected Source to run another pass on already restored audio, and batch imports support an 'Iterative restoration' toggle that selects the latest restored version of each file.