[Bioperl-l] looks like a Bio::SeqIO error
hlapp at gmx.net
Mon May 18 13:25:45 UTC 2009
Yep, as Chris wrote, there was a bug where the cleanup was not
complete. Thanks for your report and for sticking with it; it helped us
identify and fix that problem.
On May 15, 2009, at 4:55 PM, fungazid wrote:
> Hilmar, I believe your suspicions are wrong. The proof: changing
> -format to 'Fasta' instead of 'largefasta' in:
> Bio::SeqIO->new(-file => $fileIn, -format => 'Fasta')
> solved my problem (as was suggested, this is probably not the right
> module to use, but it works).
> Hilmar Lapp wrote:
>> I think you're running up against an OS limit on the number of open
>> files, or the number of files in a directory. You can check (and
>> change) your limits with ulimit.
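For example, a quick way to inspect and raise the per-process open-file
limit in a POSIX shell (the exact limits depend on your OS configuration):

```shell
# Show the current soft limit on open file descriptors
ulimit -n

# Show the hard limit (the ceiling the soft limit may be raised to
# without privileges)
ulimit -Hn

# Raise the soft limit for this shell and its children, e.g. to 4096
# (must not exceed the hard limit)
ulimit -n 4096
```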
>> The largefasta module is designed for reading in and handling large
>> (like, really large - whole-chromosome scale) sequences which, if all
>> held in memory, would exhaust the memory either immediately or pretty
>> quickly. So it stores them in temporary files. Most unix systems will
>> limit the number of files you can have open at any one time.
>> If your sequences in that file aren't huge, largefasta isn't the
>> module you want to use - just use the fasta parser, or if you need
>> random access to sequences in the file (do you?) then Bio::DB::Fasta.
>> Writing sequences to temporary files is a waste of time if they fit
>> into memory just fine.
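A minimal sketch of the two alternatives; the filename and the sequence
id here are placeholders, not from the original script:

```perl
use strict;
use warnings;

# Streaming parse with the plain fasta parser: one sequence in memory
# at a time, no temporary files.
use Bio::SeqIO;
my $in = Bio::SeqIO->new(-file => 'contigs.fa', -format => 'fasta');
while (my $seq = $in->next_seq) {
    printf "%s: %d bp\n", $seq->id, $seq->length;
}

# Random access by id: Bio::DB::Fasta builds an on-disk index of the
# file and retrieves subsequences without loading everything.
use Bio::DB::Fasta;
my $db  = Bio::DB::Fasta->new('contigs.fa');
my $sub = $db->seq('contig00001', 1 => 100);  # first 100 bp of that entry
```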
>> The odd thing is that you actually run up against the limit. Normally
>> the temporary files should be closed and deleted when the sequence
>> objects go out of scope (I think - should verify in the code of
>> course ...), so the fact that they don't makes me suspect that the
>> code snippet you presented isn't all that there is to it - are you
>> storing the sequences somewhere in a variable, such as in an array
>> or a hash?
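The scoping point can be illustrated with core Perl's File::Temp, which
unlinks its file when the last reference to the handle goes away - so
stashing the objects in an array keeps every temp file alive:

```perl
use strict;
use warnings;
use File::Temp ();

my @keep;
{
    my $gone = File::Temp->new;   # deleted when $gone leaves this block
    push @keep, File::Temp->new;  # survives: the array holds a reference
}
# Here $gone's file has been unlinked, but $keep[0]'s file still exists.
print -e $keep[0]->filename ? "kept\n" : "removed\n";
```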
>> On May 15, 2009, at 9:05 AM, fungazid wrote:
>>> I hope this is the right address for bioperl programming issues.
>>> Bioperl saves me a lot of time (no need to re-invent the wheel), but
>>> there are extremely irritating problems (I would change the code
>>> myself if I could).
>>> I am trying to read a file (~20MB) containing multiple fasta
>>> sequences with the following lines:
>>> my $seqin = Bio::SeqIO->new('-format'=>'largefasta','-file'=>$fileIn);
>>> LOOP1: while ( my $seqobj1 = $seqin->next_seq() ) {
>>>     my $seq = $seqobj1->subseq(1, $seqobj1->length);
>>>     ...
>>> }
>>> This works correctly for the first ~30000 contig sequences, but then
>>> the following message appears:
>>> Error in tempdir() using /tmp/XXXXXXXXXX: Could not create directory
>>> /tmp/6eS92VzVjm: Too many links at /usr/share/perl5/Bio/Root/IO.pm
>>> line 744
>>> DESTROY() mysql_insert obj
>>> destroying HANDLE
>>> What to do??? (This is only one of several different Bioperl-related
>>> bugs that I'm experiencing.)
> View this message in context: http://www.nabble.com/looks-like-a-Bio%3A%3ASeqIO-error-tp23559474p23567169.html
> Sent from the Perl - Bioperl-L mailing list archive at Nabble.com.
> Bioperl-l mailing list
> Bioperl-l at lists.open-bio.org
: Hilmar Lapp -:- Durham, NC -:- hlapp at gmx dot net :
More information about the Bioperl-l mailing list