<div dir="ltr">Signals would not work in my case, as I am writing this for Windows users.</div><div class="gmail_extra"><br><div class="gmail_quote">On 26 July 2017 at 16:18, Peter Cock <span dir="ltr"><<a href="mailto:p.j.a.cock@googlemail.com" target="_blank">p.j.a.cock@googlemail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Jul 26, 2017 at 1:37 PM, Nabeel Ahmed<br>
<<a href="mailto:chaudhrynabeelahmed@gmail.com">chaudhrynabeelahmed@gmail.com</a><wbr>> wrote:<br>
> Hi,
>
> Disclaimer: I haven't used the NCBIWWW module.
>
> Suggestion 1: if you're using a *NIX system, you can make use of signals.
> Wrap your call with the signal and define the signal handler:

I think that approach would work here - thanks!
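
For anyone following the thread, a minimal sketch of that signal-based idea
might look like this (POSIX only, assuming Python 3 and the usual
Bio.Blast.NCBIWWW.qblast call; the toy query and the ten minute alarm are
arbitrary choices):

    import signal
    from Bio.Blast import NCBIWWW

    def handler(signum, frame):
        # Interrupt whatever the main thread is currently blocked on
        raise TimeoutError("NCBI BLAST search took too long")

    # POSIX only: SIGALRM and signal.alarm are not available on Windows
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(600)  # allow the whole qblast call ten minutes
    try:
        result_handle = NCBIWWW.qblast("blastn", "nt", "ACGTACGTACGT")  # toy query
    finally:
        signal.alarm(0)  # cancel the pending alarm whether it finished or not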
<span class=""><br>
> Suggestion 2: using Multiprocessing or multithreading - for it, kindly share<br>
> your script/snippet.<br>
<br>

Again that would likely work, but will be more complicated.
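
A rough sketch of the multiprocessing variant, which also has the advantage
of working on Windows, might be (again the toy query and the ten minute
limit are arbitrary):

    import multiprocessing
    from queue import Empty
    from Bio.Blast import NCBIWWW

    def run_blast(result_queue):
        # The child process makes the blocking qblast call and ships back the raw XML
        handle = NCBIWWW.qblast("blastn", "nt", "ACGTACGTACGT")  # toy query
        result_queue.put(handle.read())

    if __name__ == "__main__":
        result_queue = multiprocessing.Queue()
        worker = multiprocessing.Process(target=run_blast, args=(result_queue,))
        worker.start()
        try:
            xml_results = result_queue.get(timeout=600)  # wait up to ten minutes
        except Empty:
            worker.terminate()  # give up on the stuck search
            print("BLAST search timed out")
        worker.join()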
<span class=""><br>
> Suggestion 3: Make direct API calls using 'requests' package.<br>
> In case the API calls are simple (you can easily do so) use request to make<br>
> a call, with timeout flag, once the HTTP request will timeout it'll raise<br>
> Timeout exception, which you can catch and in that block make the second<br>
> call (which as per you, works perfectly fine):<br>
<br>

This is essentially the idea I was initially suggesting, but the problem
isn't actually in the online request (currently done by urlopen).
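
For completeness, the per-request timeout idea from suggestion 3 would look
roughly like this (the URL and payload below are placeholders, not the real
NCBI BLAST endpoint), but note it only bounds a single HTTP call, not the
whole search:

    import requests

    # Placeholder endpoint and payload - not the actual NCBI BLAST parameters
    url = "https://blast.example.org/submit"
    try:
        # timeout limits the connect/read of this one request only
        response = requests.post(url, data={"QUERY": "ACGTACGTACGT"}, timeout=30)
        response.raise_for_status()
    except requests.Timeout:
        print("HTTP request timed out - make the fallback call here")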

With the NCBI BLAST you typically submit a query, wait, check for
progress, wait (repeat), and then download the results. This loop in
Biopython has no timeout - it relies on the NCBI returning results
eventually - or giving an error.
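
In other words, any overall timeout would have to wrap that whole
submit/poll/download cycle, along these lines (the two callables below are
placeholders for the NCBI status check and the result download, and the
interval and deadline values are arbitrary):

    import time

    def poll_with_deadline(check_ready, fetch_result, interval=20, deadline=600):
        """Run a submit/poll/download style loop with an overall deadline.

        check_ready and fetch_result stand in for the status check and the
        result download; raise TimeoutError once the deadline (in seconds)
        has passed.
        """
        start = time.time()
        while not check_ready():
            if time.time() - start > deadline:
                raise TimeoutError("Gave up waiting for the BLAST results")
            time.sleep(interval)
        return fetch_result()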
<span class="HOEnZb"><font color="#888888"><br>
Peter<br>
</font></span></blockquote></div><br></div>