I am having an issue where my SABnzbd frequently shows this disconnect message in the web interface. It also lines up with the app not accepting API requests. It can last 15-90 seconds, with no rhyme or reason that I can see.
I usually run this workload in a k3s Kubernetes cluster. The container can burst to 4 full CPU cores and has 6GB of memory, and I never see it CPU- or memory-constrained. Downloads go to a host temp disk, which is a new NVMe SSD. Once a download completes it is moved to my TrueNAS NFS share. I only have an aggregated dual 1 Gb link (2 Gb theoretical total) to the TrueNAS server, but it should be sufficient, if slower than I'd like. The disconnect happens on almost every download, and it seems to happen while unpacking or moving files.
Thinking this was some sort of issue with Kubernetes or Docker, I also set up SABnzbd on a bare-metal Ubuntu install with an Intel Core i9, 32GB of memory, and a 1TB NVMe SSD. I have the same issue there: disconnects in the web interface, and the API not accepting calls for several seconds.
This also seems to line up with significantly reduced download performance. Could the upload/download traffic be maxing out the bandwidth on the 1 Gb link? How can I shape the traffic if that is the case?
Finally, I have tried pausing downloads during post-processing; this does not seem to help or affect the behavior at all.
I enabled +Debug logs and I don't see any errors, warnings, or odd messages.
Here are the Status screen stats for the Kubernetes workload:
Used cache 0 B (0 articles)
System load 3.21 | 2.27 | 2.54 | V=145M R=90M
System performance (Pystone) 666243 Linux-6.14.6-400.asahi.fc42.aarch64+16k-aarch64-with NEON
Download folder speed 2198.8 MB/s /downloads/incomplete
Complete folder speed 43.3 MB/s /completed
Internet Bandwidth 0 MB/s 0 Mbps
Platform Alpine Linux v3.21
Here are the Status screen stats for the bare-metal Ubuntu box:
Used cache 0 B (0 articles)
System load 2.50 | 2.12 | 1.83 | V=7682M R=184M
Download speed limited by Disk speed (57x)
System performance (Pystone) 632676 Intel(R) Core(TM) i9-10900KF CPU @ 3.70GHz AVX2
Download folder speed 1197.1 MB/s /opt/config/sabnzbdplus/Downloads/incomplete
Complete folder speed 41.9 MB/s /media/sabnzbd/complete
Internet Bandwidth 94.86 MB/s 758.88 Mbps
Platform Ubuntu 24.04.2 LTS
I'm not 100% sure of the procedure for attaching logs, but I'm happy to attach or send the debug logs.
Lost Connection to SABnzbd - Unable to Determine the Cause [k3s Kubernetes, NFS]
thatmacadmin
Brahiewahiewa
Re: Lost Connection to SABnzbd - Unable to Determine the Cause [k3s Kubernetes, NFS]
Consider locating your "complete" folder on a local disk.
Right now you're downloading at 95 MB/s while your NFS share only sustains 42 MB/s, so things are going to stall.
Or fix your NFS issue; NFS should be able to perform at line speed.
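To confirm the bottleneck, you can measure sustained write throughput to the NFS mount directly. A minimal sketch, where the `TARGET` path is an example and should be pointed at your actual NFS-mounted complete folder:

```shell
# Point TARGET at the NFS-mounted complete folder you want to benchmark
# (defaults to /tmp here only so the command runs anywhere).
TARGET="${TARGET:-/tmp}"
# Write 256 MiB and fsync at the end so the figure reflects real
# network/disk throughput rather than the page cache; dd prints the
# achieved MB/s when it finishes.
dd if=/dev/zero of="$TARGET/dd-test.bin" bs=1M count=256 conv=fsync
rm "$TARGET/dd-test.bin"
```

If the reported rate is close to SABnzbd's "Complete folder speed", the share itself is the limit rather than anything SABnzbd is doing.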
thatmacadmin
Re: Lost Connection to SABnzbd - Unable to Determine the Cause [k3s Kubernetes, NFS]
Hi Brahiewahiewa,
Thanks for the reply. I will for sure look into my NFS; I am using TrueNAS Scale, so I would expect it to be configured reasonably out of the box. The reason I have avoided putting the complete folder on a local disk is that Sonarr/Radarr and my other media apps are not able to see that local disk, so they aren't able to move the media once it's complete. Is there a better flow I could look into, putting completed items on the fast local SSD and then moving them after the fact?
Ed
thatmacadmin
Re: Lost Connection to SABnzbd - Unable to Determine the Cause [k3s Kubernetes, NFS]
For anyone who finds this post in the future, here is how I fixed this using the suggestions above.
Because I am running the Docker container in Kubernetes, sharing a local disk between multiple pods such as SABnzbd and Sonarr/Radarr is difficult. While there are RWX volumes in Longhorn and other such providers, they suffer from the same network/NFS constraints, which seem to be the issue during unpack/move.
What I ended up doing was:
- Created a pod with both SABnzbd and an NFS server sidecar container to share the completed directory over NFSv4.
- Created and mounted the download and completed directories using an emptyDir, which uses the host's SSD directly.
- Made this NFS share available inside the cluster, leveraging the existing SABnzbd Service.
- I did have to convert the Service to a LoadBalancer to make it work, but I suspect I can eventually convert it to a ClusterIP.
- Set up Cilium network policies to secure and lock down the traffic.
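The steps above can be sketched roughly as the pod spec below. The pod name, image references (especially the NFS server sidecar image), and mount paths are all placeholders, not the exact manifest I used:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sabnzbd
spec:
  containers:
    - name: sabnzbd
      image: ghcr.io/linuxserver/sabnzbd   # example image
      volumeMounts:
        - name: scratch                    # incomplete + completed dirs
          mountPath: /downloads
    - name: nfs-server                     # sidecar exporting completed over NFSv4
      image: example/nfs-server            # placeholder; any NFS-server image
      securityContext:
        privileged: true                   # most in-container NFS servers need this
      volumeMounts:
        - name: scratch
          mountPath: /export/completed
          subPath: completed               # export only the completed dir
  volumes:
    - name: scratch
      emptyDir: {}                         # backed by the node's local SSD
```

Because both containers share the same emptyDir, the unpack/move step stays entirely on local SSD, and only the final handoff crosses the network.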
This meant I was able to take advantage of the fast SSD for both the download and complete folders. I then used Kubernetes to create an NFS PV and PVC for the SABnzbd completed directory and mounted that directory inside the *arr and other media service pods. Sonarr and Radarr can now see the completed downloads from SABnzbd, and they take care of moving the files to the final NFS share over the network.
I also did some tuning on the NFS share and am now getting about 90% of line speed, which is good considering these are spinning disks rather than SSDs, and the NFSv4 protocol adds some overhead, as does ZFS. The biggest difference came from increasing the record size to 4M and editing rsize and wsize on the client side.
Now I no longer have the disconnect issue, and transfers are much more performant. I hope this is helpful to someone!
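For the PV/PVC side, a minimal static-binding sketch looks like the following. The Service DNS name, namespace, export path, and sizes are illustrative only:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sabnzbd-completed
spec:
  capacity:
    storage: 100Gi
  accessModes: [ReadWriteMany]             # many pods can mount NFS read-write
  nfs:
    server: sabnzbd.media.svc.cluster.local  # example Service address
    path: /completed                         # example export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sabnzbd-completed
spec:
  accessModes: [ReadWriteMany]
  storageClassName: ""                     # empty string = bind to a static PV
  volumeName: sabnzbd-completed            # pin to the PV above
  resources:
    requests:
      storage: 100Gi
```

The *arr pods then mount the `sabnzbd-completed` claim like any other volume.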
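For reference, the tuning amounted to something like the commands below. The dataset name, hostnames, and mount point are examples, and note that ZFS record sizes above 1M generally require raising the `zfs_max_recordsize` module parameter on the server first:

```shell
# Server side (TrueNAS/ZFS): larger records suit big sequential media files.
zfs set recordsize=4M tank/media          # example dataset name

# Client side: request 1 MiB NFS read/write transfer sizes; the server
# caps the values it will actually grant.
mount -t nfs -o nfsvers=4.2,rsize=1048576,wsize=1048576 \
  truenas.local:/mnt/tank/media /media/sabnzbd
```

Check the negotiated rsize/wsize afterwards with `nfsstat -m` on the client, since the mount can silently fall back to smaller values.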
Thanks all!
Re: Lost Connection to SABnzbd - Unable to Determine the Cause [k3s Kubernetes, NFS]
Building on this thread since it helped me troubleshoot a similar issue.
I was experiencing the same "lost connection to SABnzbd" error in the web UI during active downloads. My setup uses the arr stack deployed via Docker Compose, all configured around an NFS share.
After extensive troubleshooting, I discovered a network bottleneck was the culprit. The server hosting the NFS share has a 1 Gbps NIC, but a firmware bug was throttling it to 100 Mbps. Once I upgraded the firmware and restored full gigabit speeds, the "lost connection to SABnzbd" errors disappeared completely.
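If you want to rule this out quickly, the negotiated link speed is visible from the shell (the interface name `eth0` is an example):

```shell
# Show the negotiated speed and duplex for the NIC; a gigabit link stuck
# at "Speed: 100Mb/s" points at exactly this kind of problem.
ethtool eth0 | grep -E 'Speed|Duplex'
# Without ethtool, the kernel exposes the same figure (in Mb/s):
cat /sys/class/net/eth0/speed
```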
Posting this for anyone else who might encounter the same issue. Don't forget to check for network bottlenecks!