NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 10:51 am
by mongolc
Hi everyone,

I'm dealing with a massive NZB backup folder issue and looking for solutions or workarounds.

## The Problem

SABnzbd uses the NZB backup folder for duplicate detection, which is great - it prevents re-downloading content I already have. However, my backup folder has gotten completely out of control:

- Recently had to delete over 20,000 old NZB files just to make it manageable
- The directory got so large that even `ls` would hang - I had to write a script just to delete files older than a certain date
- File system operations are painfully slow
- The flat directory structure makes it impossible to organize
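The cleanup script mentioned above can be sketched in a few lines. This is a hypothetical example (the directory path and retention period are placeholders): `os.scandir` streams entries lazily instead of building one giant listing the way `ls` does, which is what keeps it usable on directories this size.

```python
#!/usr/bin/env python3
"""Prune NZBs older than a cutoff from a huge flat backup directory."""
import os
import time

BACKUP_DIR = "/var/sabnzbd/nzb_backup"  # assumption: adjust to your nzb_backup_dir
MAX_AGE_DAYS = 365                      # assumption: keep one year of NZBs


def prune_old_nzbs(backup_dir: str, max_age_days: int) -> int:
    """Delete .nzb/.nzb.gz files older than max_age_days; return count removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    # scandir iterates lazily and caches stat info, so it stays fast
    # even when the directory holds tens of thousands of entries
    with os.scandir(backup_dir) as entries:
        for entry in entries:
            if not entry.is_file():
                continue
            if not entry.name.endswith((".nzb", ".nzb.gz")):
                continue
            if entry.stat().st_mtime < cutoff:
                os.unlink(entry.path)
                removed += 1
    return removed


if __name__ == "__main__":
    if os.path.isdir(BACKUP_DIR):
        print(f"Removed {prune_old_nzbs(BACKUP_DIR, MAX_AGE_DAYS)} old NZBs")
```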

## The Core Issue

SABnzbd's duplicate detection **only works with a flat directory structure** - it can't recursively search subdirectories. This means I can't simply organize old NZBs into yearly/monthly folders without losing duplicate detection for those files.

## What Would Solve This

If SABnzbd could recursively search subdirectories under the nzb_backup_dir, I could easily write a script to run monthly or yearly that organizes NZBs like:
```
nzb_backup_dir/
├── 2023/
│   ├── 01/
│   ├── 02/
│   └── ...
├── 2024/
│   ├── 01/
│   ├── 02/
│   └── ...
└── current/   (recent files)
```
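The monthly archiving script would be straightforward to write. A minimal sketch (paths and the 30-day threshold are assumptions; note that SABnzbd would not actually see these subfolders for duplicate detection today, so this only becomes useful if recursive scanning were added):

```python
#!/usr/bin/env python3
"""Move aging NZBs from a flat backup dir into YYYY/MM subfolders."""
import os
import shutil
import time


def archive_by_month(backup_dir: str, keep_days: int = 30) -> int:
    """Move NZBs older than keep_days into backup_dir/YYYY/MM/; return count moved."""
    cutoff = time.time() - keep_days * 86400
    moved = 0
    with os.scandir(backup_dir) as entries:
        for entry in list(entries):  # snapshot before moving files around
            if not (entry.is_file() and entry.name.endswith((".nzb", ".nzb.gz"))):
                continue
            mtime = entry.stat().st_mtime
            if mtime >= cutoff:
                continue  # still "current", leave it at the top level
            t = time.localtime(mtime)
            dest = os.path.join(backup_dir, f"{t.tm_year:04d}", f"{t.tm_mon:02d}")
            os.makedirs(dest, exist_ok=True)
            shutil.move(entry.path, os.path.join(dest, entry.name))
            moved += 1
    return moved
```

Run from cron once a month, this keeps the top-level directory small while older NZBs stay on disk, sorted by date.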

This would keep the directory manageable while maintaining full duplicate detection.

## Alternative Solutions I'm Considering

**1. Database Integration**
- Store NZB metadata in MySQL for fast duplicate checking
- Keep actual files organized however needed
- Would require a plugin or modification to SABnzbd

**2. Periodic Archival (Loss of Duplicate Detection)**
- Archive old NZBs and accept that duplicates might happen for old content
- Not ideal but might be necessary

**3. Custom Indexing Solution**
- Build a tool that indexes all NZBs in a database
- Keep files in flat structure for SABnzbd
- Use database for my own searching/management
- Still doesn't solve the filesystem performance issue
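Option 3 could be prototyped quickly. A sketch using SQLite (stdlib) instead of MySQL so it stays self-contained; the schema and helper names are made up for illustration:

```python
"""Index NZB filenames from the backup dir in a database for fast lookup."""
import os
import sqlite3


def build_index(db_path: str, backup_dir: str) -> sqlite3.Connection:
    """Scan backup_dir once and (re)populate a name -> mtime table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS nzbs (name TEXT PRIMARY KEY, mtime REAL)"
    )
    with os.scandir(backup_dir) as entries:
        rows = [
            (e.name, e.stat().st_mtime)
            for e in entries
            if e.is_file() and e.name.endswith((".nzb", ".nzb.gz"))
        ]
    con.executemany("INSERT OR REPLACE INTO nzbs VALUES (?, ?)", rows)
    con.commit()
    return con


def is_duplicate(con: sqlite3.Connection, name: str) -> bool:
    """Indexed PRIMARY KEY lookup: fast even with millions of rows."""
    row = con.execute("SELECT 1 FROM nzbs WHERE name = ?", (name,)).fetchone()
    return row is not None
```

As noted above, this gives fast searching/management on my side but doesn't change what SABnzbd itself checks against.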

## Questions for the Community

1. **Has anyone successfully patched SABnzbd to support recursive subdirectory scanning for duplicates?**
2. **Are there any existing plugins that add database support for duplicate detection?**
3. **How do others handle backup folders with tens of thousands of NZBs?**
4. **Would the SABnzbd developers consider adding subdirectory support as a feature?**

I'm comfortable with Python and could potentially contribute a patch if someone can point me in the right direction. Running on Solaris/RHEL if that matters.

Any suggestions appreciated - even if it's just how you handle large NZB collections!

Thanks!

---
*Environment: SABnzbd [version] on Solaris/Red Hat Linux*

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 11:20 am
by safihre
Your summary doesn't include what you want to achieve with it.
Why do you even need every job backed up?

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 11:31 am
by mongolc
safihre wrote: September 29th, 2025, 11:20 am Your summary doesn't include what you want to achieve with it.
Why do you even need every job backed up?
Thanks, updated the post.

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 12:34 pm
by sander
> SABnzbd uses the NZB backup folder for duplicate detection

https://sabnzbd.org/wiki/extra/duplicate-detection says "You can prevent checking against the .nzb Backup Folder by disabling the Special setting backup_for_duplicates."

So: is that on or off for you?

If you turn it off, there's still duplicate checking, and you can cleanup your .nzb Backup Folder?
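For reference, Special settings like this live in `sabnzbd.ini`; a minimal fragment, assuming the default `[misc]` section layout (edit only while SABnzbd is stopped, or toggle it from Config → Special in the web UI):

```ini
[misc]
# When 0, the .nzb Backup Folder is no longer scanned for duplicate
# detection; history-based duplicate checking still applies.
backup_for_duplicates = 0
```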

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 12:35 pm
by safihre
But you don't need the backup directory for duplicate detection..

A few versions ago we made quite some improvements to it.
https://sabnzbd.org/wiki/extra/duplicate-detection

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 7:11 pm
by mongolc
safihre wrote: September 29th, 2025, 12:35 pm But you don't need the backup directory for duplicate detection..

A few versions ago we made quite some improvements to it.
https://sabnzbd.org/wiki/extra/duplicate-detection
ahh so as long as I never delete my archived history that's good enough then?

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 29th, 2025, 7:26 pm
by mongolc
Actually, smart duplicate detection would interfere with how Sonarr and Radarr grab releases. If "Any" quality is checked in a profile, they might download an SD version first, then maybe a 720p HD version, then a 1080p version, until the target quality in the profile is reached.

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 30th, 2025, 5:02 am
by safihre
Yes.
But the regular duplicate detection works for your need.
Since you only used the nzb backup folder, which only worked on exact matches.

If you already use Sonarr and Radarr, you shouldn't need duplicate detection, right?

Re: NZB Backup Folder Getting Huge - Need Database Solution or Alternative

Posted: September 30th, 2025, 8:38 am
by mongolc
safihre wrote: September 30th, 2025, 5:02 am Yes.
But the regular duplicate detection works for your need.
Since you only used the nzb backup folder, which only worked on exact matches.

If you already use Sonarr and Radarr, you shouldn't need duplicate detection, right?
I don't run into problems with duplicates on Sonarr and Radarr; it's mainly pron I have a problem with, and ebooks, appz, stuff like that. I live in the south where pron is blocked, so don't give me crap about downloading it lmao.

But smart detection would be cool if you could apply smart duplicate protection to categories.