Running SABnzbd on Solaris 11.4 — Performance Findings & Fixes

mongolc
Newbie
Posts: 30
Joined: March 14th, 2018, 1:07 pm

Running SABnzbd on Solaris 11.4 — Performance Findings & Fixes

Post by mongolc »

Hey all,
Wanted to share some findings from getting SABnzbd 4.5.5 running on Solaris 11.4 (x86, 8 cores, 64GB RAM, ZFS storage). The motivation was eliminating network transfers: previously SABnzbd ran on an Arch Linux VM and had to copy completed downloads over CIFS to the Solaris file server. With 88GB+ 4K remux files that was slow and occasionally caused byte corruption, so I wanted to move it back onto the Solaris box.

SABnzbd installed and ran fine under Python 3.11 in a venv, but the web UI was noticeably sluggish compared to the same version on Linux, and I was getting intermittent "lost connection" flashes in the Glitter interface. Sonarr polling the API would occasionally fail too. Here's what I found and fixed:

1. Memory allocator contention (biggest win)

Solaris's default malloc implementation uses a global lock. Python's allocation patterns — tons of small, frequent allocations across threads — hammer this lock hard. Using prstat -mL I could see CherryPy's main thread spending 87% of its time in lock contention (LCK state).
Fix: export LD_PRELOAD=libumem.so before starting SABnzbd. libumem is Solaris's slab allocator — per-CPU caches, no global lock. UI responsiveness went from sluggish to identical to Linux immediately.
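In case it's useful, here's the sort of launch wrapper this ends up as. A minimal sketch: the /opt/sabnzbd paths are placeholders for wherever your venv and install actually live.

#!/bin/sh
# Launch SABnzbd with libumem preloaded. libumem's per-CPU magazine
# caches sidestep the global lock in the default libc malloc.
LD_PRELOAD=libumem.so
export LD_PRELOAD
exec /opt/sabnzbd/venv/bin/python /opt/sabnzbd/SABnzbd.py --daemon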

2. Scheduler preemption

Solaris's default TS (timeshare) scheduler was preempting CherryPy threads at bad moments, causing intermittent hangs of 1-3 seconds. These were enough to trigger connection timeouts.
Fix: priocntl -s -c FX -m 60 -p 60 -i pid <sabnzbd_pid> — switches the process to fixed-priority scheduling class. This prevents the scheduler from dynamically lowering SABnzbd's priority when other processes are active.
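To avoid hunting for the PID by hand, and to confirm the change actually stuck, something like this works (sketch; the pgrep pattern assumes the process was launched as SABnzbd.py):

# move the running SABnzbd into the FX class (user priority 60, limit 60)
priocntl -s -c FX -m 60 -p 60 -i pid $(pgrep -f SABnzbd.py)
# print the scheduling class and priority back to verify
priocntl -d -i pid $(pgrep -f SABnzbd.py)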

3. CherryPy server timeout

CherryPy's default timeout in cheroot/server.py is 10 seconds. On Solaris, between GC pauses and scheduler behavior, idle connections would occasionally get reaped. This would cause Sonarr/Radarr to get disconnected mid-poll.
Fix: Changed timeout = 10 to timeout = 300 in cheroot/server.py (line 1561 in my install). This file lives in the venv's site-packages, so the edit only lasts until pip reinstalls or upgrades cheroot.
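To avoid redoing it by hand after every upgrade, a one-liner to reapply it (sketch: the venv path is an example, and it blindly rewrites the literal "timeout = 10", so eyeball the file after a major cheroot bump):

# re-apply the timeout bump after cheroot is reinstalled/upgraded
VENV=/opt/sabnzbd/venv
perl -pi -e 's/^(\s*)timeout = 10$/${1}timeout = 300/' \
    "$VENV"/lib/python3.*/site-packages/cheroot/server.py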

4. Glitter frontend AJAX timeout (potentially relevant to all platforms)

This one isn't Solaris-specific. In interfaces/Glitter/templates/static/javascripts/glitter.basic.js, the callAPI function has a default timeout of 10000ms (10 seconds). Any API response that takes longer than 10 seconds triggers jQuery's .fail() handler, which sets isRestarting(1) and flashes the "lost connection" overlay.
10 seconds is tight. On any system under heavy load (par2 repair, large post-processing jobs, low-powered hardware) this could trigger false positives. I bumped it to 30000ms. The only cost is that a genuinely dead connection takes up to 30 seconds to get flagged; successful responses still return as fast as they arrive.
This seems like it could be worth bumping upstream. 20-30 seconds would eliminate false "lost connection" flashes on slower systems without any impact on normal operation.
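Same caveat as the cheroot change, since a SABnzbd upgrade overwrites the interface files. A reapply sketch (the install path is an example, and it assumes the stock 10000 literal is unchanged):

# re-apply the Glitter AJAX timeout bump after a SABnzbd upgrade
perl -pi -e 's/timeout: 10000/timeout: 30000/' \
    /opt/sabnzbd/interfaces/Glitter/templates/static/javascripts/glitter.basic.js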

Summary

SABnzbd runs great on Solaris 11.4 once you account for the platform differences. The core issue is that Python's GIL + CherryPy's threading model interacts poorly with Solaris's default memory allocator and thread scheduler. libumem alone fixes 90% of it.
Items 1, 2, and 3 are Solaris-specific. Item 4 (the 10s frontend AJAX timeout) seems like it could affect any platform under load and might be worth considering as a default change.
Happy to answer questions if anyone else is running SABnzbd on Solaris/illumos; I've run it on both for years.

Thanks
mongolc
Newbie
Posts: 30
Joined: March 14th, 2018, 1:07 pm

Re: Running SABnzbd on Solaris 11.4 — Performance Findings & Fixes

Post by mongolc »

After running this setup for a bit, the UI was still sluggish after idle periods despite the earlier fixes. The issue is Solaris's aggressive memory reclamation: when SABnzbd sits idle for a few minutes, Solaris deprioritizes the process and pages its memory out under ZFS ARC pressure. Next time you touch the UI, everything has to get faulted back in.

This is more pronounced in my setup because I run Solaris 11.4 as a VM in ESXi with an LSI HBA passed through directly to a 15-drive SAS array (~66TB raidz2 pool). I deliberately keep the Solaris VM's RAM allocation as low as practical (currently 64GB) because with a pool that size, ZFS ARC will happily eat every byte you give it — and ESXi can't reclaim it for other VMs. So there's constant pressure between ZFS wanting cache and SABnzbd wanting its pages to stay resident.
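If you'd rather bound that fight from the ZFS side, the ARC can be capped explicitly in /etc/system instead of letting it autosize. A sketch (the 48GB value is just an example for a 64GB guest, and a reboot is needed for it to take effect):

* Cap the ZFS ARC so some RAM stays available for processes.
* 0xC00000000 bytes = 48 * 2^30 (48GB); tune to taste, reboot required.
set zfs:zfs_arc_max=0xC00000000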
The fix for the idle-wake itself is dead simple: a cron job that hits the API once a minute to keep the process warm:

* * * * * curl -s "http://localhost:8085/api?mode=version&apikey=XXXX" > /dev/null 2>&1

This isn't just a UI convenience thing either. If you're running Sonarr or Radarr pointed at SABnzbd's API, the same idle-wake problem applies. Sonarr polls SABnzbd every two minutes when downloads are active, but when nothing's queued it backs off to longer intervals. If SABnzbd has gone cold during that gap, the first API response can take several seconds — long enough for Sonarr/Radarr to log connection errors or even mark SABnzbd as unavailable. The keepalive cron eliminates that entirely since the process never goes cold in the first place.

This keeps the main API path's pages resident and stops Solaris from treating SABnzbd as idle. The config pages still load slowly on first access since they're separate routes the cron doesn't touch, but you're rarely in there anyway.
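You can see the cold-start penalty directly with curl's timing output: run this once after a long idle stretch, then again right away, and compare (same placeholder API key as the cron line):

# print total request time; run once cold, then again immediately
curl -s -o /dev/null -w '%{time_total}s\n' \
    "http://localhost:8085/api?mode=version&apikey=XXXX"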

Linux doesn't have this problem because it takes a lazy approach to memory — it leaves idle process pages alone until it's actually running low. Solaris takes the opposite approach: if you're not actively using it, someone else gets it. Great design for 24/7 database servers, less ideal for bursty web apps.
elenaqywec
Newbie
Posts: 1
Joined: March 21st, 2026, 10:46 am

Re: Running SABnzbd on Solaris 11.4 — Performance Findings & Fixes

Post by elenaqywec »

Preloading libumem alone is a huge win since it eliminates the global lock contention. Switching SABnzbd to fixed-priority scheduling helps avoid those random pauses, and bumping CherryPy’s timeout plus Glitter’s AJAX timeout solves the API disconnects. Your breakdown shows that with a few Solaris-specific tweaks, SABnzbd can run as smoothly as on Linux, even under heavy loads with large files.