[Bioperl-l] Memory in Perl
Maido Remm
mremm@ebc.ee
Thu, 31 Aug 2000 11:35:49 +0200
Hi bioperlers!
Has anybody in the Bioperl community found a workaround for perl's
inefficient memory handling?
I am trying to write some genome-scale applications in perl, but I am
having the following problems:
1. Reading more than 1,000,000 BLAST scores into hashes takes > 100 MB of
memory - roughly 100 bytes per hash entry. Is there a way to make working
with large hashes more efficient?
A numerical 2D array is not a good solution either, because the full array
would be enormous (50,000 genes x 50,000 genes, for example) and most of
its slots would stay empty and unused.
2. Huge memory consumption wouldn't be a problem for our hardware - there
is still plenty of RAM left - but perl reports "Out of Memory" after
growing to about 125 MB.
Is there a perl install-time option to change the limit of memory usage?
Is this a system-dependent feature? We have Alphas with about 1 GB RAM +
3 GB swap.
Is perl able to use swap memory?
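In case it helps anyone reproduce this: the per-process data-segment limit
is one place I have been looking (sh/ksh syntax below). Whether this is
actually what caps perl on our Alphas is only a guess on my part:

```shell
# show the per-process data segment limit; on many systems it
# defaults to far less than physical RAM ("unlimited" = no cap)
ulimit -d
# try to raise the soft limit to the hard limit for this shell
# and its children (may fail without sufficient privileges)
ulimit -d unlimited 2>/dev/null || echo "could not raise limit"
ulimit -d
```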
Thanks,
Maido