How easy: More than one active download at a time.
Posted: April 3rd, 2013, 6:10 am
Hi,
I'm not posting this as a bug or feature request, etc, as I'm well aware sabnzbd doesn't support this, and I'm not implying I want this implemented. My question/requirement is about an edge case. I merely want to ask some questions and think out loud. I am a senior software dev with python experience but haven't taken a look at the sabnzbd+ code quite yet, so I want to ask some naive questions to give me an indication of feasibility.
My ISP allows 4 concurrent connections, and I am also not in a "well connected" country, so the speed difference between locally cached and non-cached (by the ISP's news server) content is huge. Locally cached files max out my connection; ones that are not cached (older content) come down at about 1/4 of the speed. This means I can max my pipe with 1 thread downloading a cached file, but 4 threads on a non-cached one go very slowly, increasing the total time for my entire queue.
So in my situation I would want to automatically have more than one download active at a time, with, say, a limit on connections per active download. If I set that limit to 1, I would have 4 downloads running concurrently with 1 connection each. This would allow the "slow" ones to come down at their own pace whilst still allowing, to a degree, the "fast" ones to clear out in the meantime, with TCP hopefully enforcing fairness. I could (and do) just reorder the queue manually to put newer stuff first, but that takes guesswork and is onerous: I use sabnzbd+ exactly because it's hands-off.
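To think out loud a bit more concretely: the allocation I have in mind is trivial on its own. A minimal sketch in Python, with all names hypothetical (nothing here is taken from the SABnzbd codebase):

```python
# Hypothetical sketch: spread a fixed connection budget across the
# queue, capped per active download. With a budget of 4 and a cap of 1,
# the first 4 jobs each get one connection and run concurrently.

def allocate_connections(queue, total_connections=4, max_per_download=1):
    """Return a dict mapping each job to the number of connections it
    gets, handing out up to max_per_download connections per job until
    the total budget is exhausted."""
    allocation = {}
    remaining = total_connections
    for job in queue:
        if remaining <= 0:
            break
        share = min(max_per_download, remaining)
        allocation[job] = share
        remaining -= share
    return allocation

print(allocate_connections(["a", "b", "c", "d", "e"]))
# four jobs get 1 connection each; "e" waits for a free slot
```

The hard part, of course, isn't this loop but whatever assumptions the downloader makes about there being exactly one active job.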
So, the question becomes: is the code structured in a way that the above could be done without ripping up large sections of the codebase? I guess it really comes down to how deep the single-active-download assumption runs. Is it all nicely segregated, such that I could achieve this without extremely deep knowledge of the code/design or rewriting half the application?
If it is not that bad a thing, I could do it myself on a private branch, or merge it at some point, who knows... or, of course, just request it as a feature if I think I'd be messing with your chi too much.