NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Hi everyone,
I'm dealing with a massive NZB backup folder issue and looking for solutions or workarounds.
## The Problem
SABnzbd uses the NZB backup folder for duplicate detection, which is great - it prevents re-downloading content I already have. However, my backup folder has gotten completely out of control:
- Recently had to delete over 20,000 old NZB files just to make it manageable
- The directory got so large that even `ls` would hang, so I had to write a script just to delete files older than a certain date (a rough sketch follows this list)
- File system operations are painfully slow
- The flat directory structure makes it impossible to organize
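For the curious, the cleanup script was along these lines. A minimal sketch: the backup path and the 365-day cutoff are placeholders, and it streams the directory with `os.scandir()` so it never builds the one giant listing that made `ls` hang.

```
#!/usr/bin/env python3
"""Delete NZBs older than a cutoff without building one huge directory listing."""
import os
import time

BACKUP_DIR = "/data/sabnzbd/nzb_backup"  # placeholder path
MAX_AGE_DAYS = 365                       # placeholder cutoff

cutoff = time.time() - MAX_AGE_DAYS * 86400

# os.scandir() yields entries lazily, unlike ls, which stats and sorts everything
with os.scandir(BACKUP_DIR) as entries:
    for entry in entries:
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            os.remove(entry.path)
```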
## The Core Issue
SABnzbd's duplicate detection **only works with a flat directory structure** - it can't recursively search subdirectories. This means I can't simply organize old NZBs into yearly/monthly folders without losing duplicate detection for those files.
## What Would Solve This
If SABnzbd could recursively search subdirectories under the nzb_backup_dir, I could easily write a script to run monthly or yearly that organizes NZBs like:
```
nzb_backup_dir/
├── 2023/
│   ├── 01/
│   ├── 02/
│   └── ...
├── 2024/
│   ├── 01/
│   ├── 02/
│   └── ...
└── current/    (recent files)
```
This would keep the directory manageable while maintaining full duplicate detection.
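If recursive scanning ever lands, the monthly organizer could be as simple as this sketch. Paths and the 30-day "current" window are placeholders, and files are bucketed by modification time; it snapshots the listing first so moving files doesn't disturb the iteration.

```
#!/usr/bin/env python3
"""Move NZBs older than KEEP_DAYS from the flat backup dir into YYYY/MM subfolders."""
import os
import shutil
import time

BACKUP_DIR = "/data/sabnzbd/nzb_backup"  # placeholder path
KEEP_DAYS = 30                           # recent files stay in the flat top level

cutoff = time.time() - KEEP_DAYS * 86400

# Snapshot the directory first, then move, so iteration isn't disturbed
with os.scandir(BACKUP_DIR) as it:
    entries = [(e.path, e.name, e.stat().st_mtime) for e in it if e.is_file()]

for path, name, mtime in entries:
    if mtime >= cutoff:
        continue
    t = time.localtime(mtime)
    dest = os.path.join(BACKUP_DIR, f"{t.tm_year:04d}", f"{t.tm_mon:02d}")
    os.makedirs(dest, exist_ok=True)  # create 2023/01 etc. on demand
    shutil.move(path, os.path.join(dest, name))
```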
## Alternative Solutions I'm Considering
**1. Database Integration**
- Store NZB metadata in MySQL for fast duplicate checking
- Keep actual files organized however needed
- Would require a plugin or modification to SABnzbd (rough sketch of the idea below)
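Rough shape of what I mean, with SQLite standing in for MySQL so it runs as-is; the table name and fields are my own invention:

```
"""Index NZB names in a database so duplicate checks don't touch the filesystem."""
import hashlib
import sqlite3

con = sqlite3.connect("nzb_index.db")  # hypothetical index file
con.execute("CREATE TABLE IF NOT EXISTS nzbs (name_hash TEXT PRIMARY KEY, name TEXT, path TEXT)")

def _key(nzb_name: str) -> str:
    # hash the normalized filename so lookups are O(1) regardless of folder layout
    return hashlib.sha1(nzb_name.lower().encode()).hexdigest()

def is_duplicate(nzb_name: str) -> bool:
    """Check the index instead of listing the backup folder."""
    return con.execute("SELECT 1 FROM nzbs WHERE name_hash = ?",
                       (_key(nzb_name),)).fetchone() is not None

def register(nzb_name: str, path: str) -> None:
    """Record a new NZB after it's been queued."""
    con.execute("INSERT OR IGNORE INTO nzbs VALUES (?, ?, ?)",
                (_key(nzb_name), nzb_name, path))
    con.commit()
```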
**2. Periodic Archival (Loss of Duplicate Detection)**
- Archive old NZBs and accept that duplicates might happen for old content
- Not ideal but might be necessary
**3. Custom Indexing Solution**
- Build a tool that indexes all NZBs in a database (sketched after this list)
- Keep files in flat structure for SABnzbd
- Use database for my own searching/management
- Still doesn't solve the filesystem performance issue
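The indexer half of option 3 is just a directory walk; a sketch reusing the same hypothetical SQLite schema as above. `os.walk` recurses, so the same pass covers today's flat layout or a YYYY/MM layout if that ever becomes viable:

```
"""One-shot indexing pass over the backup dir into the nzbs table."""
import hashlib
import os
import sqlite3

BACKUP_DIR = "/data/sabnzbd/nzb_backup"  # placeholder path

con = sqlite3.connect("nzb_index.db")
con.execute("CREATE TABLE IF NOT EXISTS nzbs (name_hash TEXT PRIMARY KEY, name TEXT, path TEXT)")

# os.walk descends into subdirectories, unlike a flat listdir
for root, _dirs, files in os.walk(BACKUP_DIR):
    for name in files:
        if name.lower().endswith(".nzb"):
            key = hashlib.sha1(name.lower().encode()).hexdigest()
            con.execute("INSERT OR IGNORE INTO nzbs VALUES (?, ?, ?)",
                        (key, name, os.path.join(root, name)))
con.commit()
```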
## Questions for the Community
1. **Has anyone successfully patched SABnzbd to support recursive subdirectory scanning for duplicates?**
2. **Are there any existing plugins that add database support for duplicate detection?**
3. **How do others handle backup folders with tens of thousands of NZBs?**
4. **Would the SABnzbd developers consider adding subdirectory support as a feature?**
I'm comfortable with Python and could potentially contribute a patch if someone can point me in the right direction. Running on Solaris/RHEL if that matters.
Any suggestions appreciated - even if it's just how you handle large NZB collections!
Thanks!
---
*Environment: SABnzbd [version] on Solaris/Red Hat Linux*
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
Your summary doesn't include what you want to achieve with it.
Why do you even need every job backed up?
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
> SABnzbd uses the NZB backup folder for duplicate detection
https://sabnzbd.org/wiki/extra/duplicate-detection says "You can prevent checking against the .nzb Backup Folder by disabling the Special setting backup_for_duplicates."
So: is that on or off for you?
If you turn it off, there's still duplicate checking, and you can clean up your .nzb Backup Folder?
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
But you don't need the backup directory for duplicate detection.
A few versions ago we made quite some improvements to it.
https://sabnzbd.org/wiki/extra/duplicate-detection
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
> safihre wrote: ↑September 29th, 2025, 12:35 pm
> But you don't need the backup directory for duplicate detection.
> A few versions ago we made quite some improvements to it.
> https://sabnzbd.org/wiki/extra/duplicate-detection

Ahh, so as long as I never delete my archived history, that's good enough then?
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
Actually, smart duplicate detection could interfere with how Sonarr and Radarr grab releases. If "Any" quality is checked in a profile, they may grab an SD version first, then maybe a 720p HD version, then 1080p, until the target quality in the profile is reached.
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
Yes.
But the regular duplicate detection works for your needs.
Since you only used the nzb backup folder, which only worked on exact matches.
If you already use Sonarr and Radarr, you shouldn't need duplicate detection, right?
Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative
I don't run into problems with duplicates on Sonarr and Radarr; it's mainly pron I have a problem with, and ebooks, appz, stuff like that. I live in the south where pron is blocked, so don't give me crap about downloading it lmao.
But smart duplicate detection would be cool if you could apply it per category.

