2.0.0 Issues
Posted: April 24th, 2017, 10:53 am
Following up on my post prior to the ransomware...
At the time, I had an issue with failed SQL writes and loss of connection to the web UI. The base system is an older i7 machine running Server 2016. SABNZBD was running on a Windows Server 2016 VM alongside CouchPotato, Sonarr, and Plex. Per your suggestion, I addressed the CPU utilization by moving Plex to its own VM. All three of the remaining VMs share a single virtual disk housed on a fast SSD. This has worked very nicely to solve the originally observed issue. However, I believe there is another underlying issue that should be addressed:
1. The PAR2 process is incredibly "loud" in terms of CPU load and disk "thrashing". Compared to similar tools (e.g., QuickPar), it requires significantly more resources to repair downloads that contain lost articles. This system is running the 64-bit variant. I can pull together NZBs for you to use for testing, if you're interested.
2. For folks like me who use a "small" SSD to download files before depositing them on "large" long-term storage, the current method of handling post-download processing (e.g., "History" in the UI) is very suboptimal. When multiple files require repair, the entire decoding queue slows to a crawl and a tremendous amount of disk space is consumed on (in my case) fast, limited-capacity media. I'd suggest splitting the pipeline into two separate queues: one for repair and one for decoding error-free content. This would significantly decrease the storage required for SABNZBD "temporary" files and improve system throughput.
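To make the suggestion concrete, here is a minimal sketch of the split I have in mind, written in Python with the standard queue module. This is purely illustrative: the job dicts, the dispatch/worker names, and the "missing_articles" flag are my own stand-ins, not SABNZBD internals.

```python
import queue
import threading

# Two separate post-processing queues: damaged downloads wait for PAR2
# repair while error-free downloads proceed straight to decoding, so a
# slow repair never blocks clean content or piles up temp files.
repair_q = queue.Queue()
decode_q = queue.Queue()
processed = []  # names of jobs that finished decoding (for demonstration)

def dispatch(job):
    """Route a finished download based on its verification result."""
    if job["missing_articles"]:
        repair_q.put(job)   # needs repair before it can be decoded
    else:
        decode_q.put(job)   # clean: decode immediately

def repair_worker():
    """Stand-in for the PAR2 repair stage."""
    while True:
        job = repair_q.get()
        job["missing_articles"] = 0  # pretend the repair succeeded
        decode_q.put(job)            # hand off to the decode queue
        repair_q.task_done()

def decode_worker():
    """Stand-in for the unpack/move-to-storage stage."""
    while True:
        job = decode_q.get()
        processed.append(job["name"])
        decode_q.task_done()
```

With both workers running as daemon threads, clean jobs flow through the decode queue immediately, while damaged ones only re-enter it after repair; joining repair_q and then decode_q waits for everything to drain.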
Thanks for your consideration.