Running SABnzbd on Solaris 11.4 — Performance Findings & Fixes
Posted: February 5th, 2026, 7:51 pm
Hey all,
Wanted to share some findings from getting SABnzbd 4.5.5 running on Solaris 11.4 (x86, 8 cores, 64GB RAM, ZFS storage). The motivation was eliminating network transfers — previously SABnzbd ran on an Arch Linux VM and had to copy completed downloads over CIFS to the Solaris file server. With 88GB+ 4K remux files that was slow and occasionally caused byte corruption, so I wanted to move it back onto the Solaris box.
SABnzbd installed and ran fine under Python 3.11 in a venv, but the web UI was noticeably sluggish compared to the same version on Linux, and I was getting intermittent "lost connection" flashes in the Glitter interface. Sonarr polling the API would occasionally fail too. Here's what I found and fixed:
1. Memory allocator contention (biggest win)
Solaris's default malloc implementation uses a global lock. Python's allocation patterns — tons of small, frequent allocations across threads — hammer this lock hard. Using prstat -mL I could see CherryPy's main thread spending 87% of its time in lock contention (LCK state).
Fix: export LD_PRELOAD=libumem.so before starting SABnzbd. libumem is Solaris's slab allocator — per-CPU caches, no global lock. UI responsiveness went from sluggish to identical to Linux immediately.
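To make the preload stick across restarts, I put it in a small start wrapper. A sketch only — the venv and SABnzbd.py paths are assumptions, adjust for your install:

```shell
# Write a start wrapper that preloads libumem before Python starts.
# Paths below are assumptions for illustration; adjust to your layout.
cat > sab-start.sh <<'EOF'
#!/bin/sh
# Preload Solaris's slab allocator: per-CPU magazine caches replace
# the single global lock in the default libc malloc.
LD_PRELOAD=libumem.so
export LD_PRELOAD
exec /opt/sabnzbd/venv/bin/python /opt/sabnzbd/SABnzbd.py --daemon
EOF
chmod +x sab-start.sh
```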
2. Scheduler preemption
Solaris's default TS (timeshare) scheduler was preempting CherryPy threads at bad moments, causing intermittent hangs of 1-3 seconds. These were enough to trigger connection timeouts.
Fix: priocntl -s -c FX -m 60 -p 60 -i pid <sabnzbd_pid> — switches the process to fixed-priority scheduling class. This prevents the scheduler from dynamically lowering SABnzbd's priority when other processes are active.
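Since priocntl targets a live PID, I apply it right after startup and then verify the class actually changed. A sketch (Solaris-only commands, obviously):

```shell
# Write a helper that moves a running SABnzbd into the FX class and
# verifies it. Takes the PID as its only argument.
cat > sab-fx.sh <<'EOF'
#!/bin/sh
pid="$1"
# Fixed-priority class, priority 60, ceiling 60 — no dynamic demotion.
priocntl -s -c FX -m 60 -p 60 -i pid "$pid"
# The CLS column should now read FX instead of TS.
ps -o class,pri,args -p "$pid"
EOF
chmod +x sab-fx.sh
```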
3. CherryPy server timeout
CherryPy's default timeout in cheroot/server.py is 10 seconds. On Solaris, between GC pauses and scheduler behavior, idle connections would occasionally get reaped. This would cause Sonarr/Radarr to get disconnected mid-poll.
Fix: Changed timeout = 10 to timeout = 300 in cheroot/server.py line 1561. Note this file lives in the venv's site-packages, so the edit is lost whenever cheroot gets upgraded or reinstalled.
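A less fragile alternative is to override the value from a sitecustomize.py in the venv instead of editing cheroot itself — Python imports sitecustomize automatically at interpreter startup if it's on sys.path, so the override survives a cheroot reinstall. Sketch only: the site-packages path is an assumption, and if SABnzbd/CherryPy ever sets the timeout explicitly at bind time, this class-level default won't win:

```shell
# SAB_SITE should be the venv's site-packages directory, e.g.
# /opt/sabnzbd/venv/lib/python3.11/site-packages (an assumption — adjust
# to your layout). Defaults to the current dir so the file can be
# inspected before deploying it.
SAB_SITE="${SAB_SITE:-.}"
cat > "$SAB_SITE/sitecustomize.py" <<'EOF'
# Imported automatically at interpreter startup by the site module.
# Raise cheroot's idle-connection timeout from 10s to 300s.
try:
    import cheroot.server
    cheroot.server.HTTPServer.timeout = 300
except ImportError:
    pass  # cheroot not installed in this environment
EOF
```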
4. Glitter frontend AJAX timeout (potentially relevant to all platforms)
This one isn't Solaris-specific. In interfaces/Glitter/templates/static/javascripts/glitter.basic.js, the callAPI function has a default timeout of 10000ms (10 seconds). Any API response that takes longer than 10 seconds triggers jQuery's .fail() handler, which sets isRestarting(1) and flashes the "lost connection" overlay.
10 seconds is tight. On any system under heavy load — par2 repair, large post-processing jobs, low-powered hardware — this can trigger false positives. I bumped it to 30000ms. There's no real downside: the timeout only determines how long the UI waits before declaring a request dead, and successful responses still arrive as fast as they ever did.
This seems like it could be worth bumping upstream. 20-30 seconds would eliminate false "lost connection" flashes on slower systems without any impact on normal operation.
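For anyone who wants to apply the same bump without hand-editing: a hypothetical one-liner, assuming the option appears literally as "timeout: 10000" in the file — grep first to confirm, since the real spelling in your version may differ (GNU sed syntax for -i):

```shell
# Assumptions: run from the SABnzbd install root, and the literal string
# "timeout: 10000" appears in the callAPI options. Verify before editing.
JS="interfaces/Glitter/templates/static/javascripts/glitter.basic.js"
if [ -f "$JS" ]; then
    # Keep a .bak copy of the original alongside the edited file.
    sed -i.bak 's/timeout: 10000/timeout: 30000/' "$JS"
else
    echo "not found: $JS (adjust the path for your install)" >&2
fi
```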
Summary
SABnzbd runs great on Solaris 11.4 once you account for the platform differences. The core issue is that Python's GIL + CherryPy's threading model interacts poorly with Solaris's default memory allocator and thread scheduler. libumem alone fixes 90% of it.
Items 1-3 are Solaris-specific. Item 4 (the 10s frontend AJAX timeout) seems like it could affect any platform under load and might be worth considering as a default change.
Happy to answer questions if anyone else is running SABnzbd on Solaris/illumos — I've run it on both for years.
Thanks