SABnzbd does a reasonable job handling nzb files with multiple par sets. However, it defers processing of the par sets until the entire nzb has been downloaded. Wouldn't it be better to try to process the par2 files right away?
Yes, I know one can (should? though I'm not sure I agree with "should") create a separate nzb file for each parity set, but that can be annoying if one is creating an nzb to download multiple items from a search engine (for instance).
One way to attack this problem would be to preprocess the nzb file. If its contents can be fully assigned to individual parity sets, create individual nzb files corresponding to those sets and process them normally; otherwise, continue as usual. (By "fully assign" I mean: get the set of parity "header" files, get the set of "data" files and "parity" file blocks, and see if they can all be assigned to those "header" files. If they can, I'd think you are pretty safe in automatically dividing the large nzb file into multiple smaller nzb files.)
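To make the idea concrete, here is a minimal sketch of the grouping step. It is a hypothetical illustration, not SABnzbd's actual code: the filename patterns are assumptions about common Usenet naming conventions (`name.par2`, `name.vol00+01.par2`, `name.part1.rar`, `name.r01`), and `split_into_sets` is a made-up helper name.

```python
import re
from collections import defaultdict

# Assumed naming conventions for parity and archive files; real posts
# can deviate from these, which is exactly the hard "fringe case" part.
PAR2_RE = re.compile(r'^(?P<base>.+?)(?:\.vol\d+\+\d+)?\.par2$', re.IGNORECASE)
RAR_RE = re.compile(r'^(?P<base>.+?)(?:\.part\d+)?\.r(?:ar|\d+)$', re.IGNORECASE)

def split_into_sets(filenames):
    """Assign each filename to a parity set keyed by its base name.

    Returns (sets, leftovers). A preprocessor would only split the nzb
    when leftovers is empty; otherwise it falls back to the current
    whole-nzb behavior, as proposed above.
    """
    sets = defaultdict(list)
    leftovers = []
    for name in filenames:
        m = PAR2_RE.match(name) or RAR_RE.match(name)
        if m:
            sets[m.group('base').lower()].append(name)
        else:
            leftovers.append(name)
    return dict(sets), leftovers

files = [
    "show.s01e01.part1.rar", "show.s01e01.part2.rar",
    "show.s01e01.par2", "show.s01e01.vol00+01.par2",
    "show.s01e02.part1.rar", "show.s01e02.par2",
    "readme.txt",
]
sets, leftovers = split_into_sets(files)
```

With the sample list above, the two episodes land in two separate sets and `readme.txt` ends up in `leftovers`, so a cautious preprocessor would decline to split and process the nzb as one unit.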
handling nzb files with multiple parsets
Re: handling nzb files with multiple parsets
How is this a bug report?
It's a feature request which is already in the queue for future releases.
(Although quite a while because it's a rather complex issue).
Re: handling nzb files with multiple parsets
You're right (I didn't see the feature request forum at the bottom). That said, I think I proposed a solution that wouldn't require the queue to be fundamentally changed: it would just require the nzb to be preprocessed, and it would handle a large number of nzbs.
Re: handling nzb files with multiple parsets
The splitting up is complicated.
Not for the average case where you just lump together a bunch of episodes that use a regular naming convention.
However, often even single posts already consist of multiple rar/par sets which also might have deviating naming conventions.
What do you do with spurious extra files?
So, while in principle splitting up an NZB is easy, the many fringe cases make it difficult.
We'll get there someday, but it won't be perfect.
Re: handling nzb files with multiple parsets
I agree; my point was, in the case where you can split it up easily, do it. In the case where you can't, don't, and just continue with the standard behavior.
If one can handle the average and very common case better by preprocessing, why not do that? Especially as (I'm guessing, I can always be wrong) it isn't really a fundamental change to any part of SABnzbd.
Just my 2 cents. I think it could help a lot, but whatever you guys decide is fine.

