[Bioperl-l] Memory not sufficient when storing human chromosome 1 in BioSQL
George Hartzell
hartzell at alerce.com
Fri Jul 4 19:46:09 UTC 2008
Chris Fields writes:
> Have you tried just loading the sequence into memory using
> Bio::SeqIO? The problem may be the size of the file itself.
>
> chris
>
> On Jul 3, 2008, at 6:48 AM, Andreas Dräger wrote:
>
> > Hi all,
> >
> > Recently I successfully installed the latest versions of BioPerl
> > and BioSQL on my computer, which has 2 GB of RAM. Both work fine, but
> > when I try to insert the GenBank file for human chromosome 1,
> > which I downloaded from the NCBI website (ftp://ftp.ncbi.nih.gov/genomes/H_sapiens/CHR_01/hs_ref_chr1.gbk.gz
> > ), I receive the error message 'Out of memory'. This takes about one
> > hour. My question is: how can I insert large GenBank files into my
> > BioSQL database using BioPerl? I do not know what to do. Thank you
> > for your help!
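(To put Chris's suggestion in concrete terms, a first test might be a
one-liner along these lines -- just a sketch, assuming the file has been
gunzipped to hs_ref_chr1.gbk in the current directory:

    perl -MBio::SeqIO -e 'my $in = Bio::SeqIO->new(-file => shift, -format => "genbank");
        while (my $s = $in->next_seq) { printf("%s\t%d\n", $s->display_id, $s->length) }' hs_ref_chr1.gbk

If that alone runs out of memory, the parse itself is the problem rather
than anything on the BioSQL side.)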
I didn't catch the original question, so I don't know whether you provided
any specifics about your configuration. It's possible you really
are running out of memory; is there a lot of other activity on the box,
or does e.g. top show plenty of free memory?
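For example, something along these lines gives a quick read on overall
memory pressure (Linux-style commands; the flags vary a bit by platform):

    # overall memory and swap usage, in megabytes
    free -m
    # one batch iteration of top; the header lines summarize free vs. used memory
    top -b -n 1 | head -n 5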
My first guess is that you're running up against per-process limits.
Check out the man pages for csh and/or bash and read about limit
and/or ulimit. In the [t]csh world, run 'limit' and reflect on its
output. In the bash world, try 'ulimit -a' and think about what it
has to say.
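Something like this (bash syntax; tcsh's 'limit' and 'unlimit' are the
analogous commands) shows where you stand and, where the hard limits
allow it, raises the ceilings for the current session:

    # show all per-process limits for this shell
    ulimit -a
    # raise the data-segment and virtual-memory limits for this session
    # (only possible up to the hard limit unless you're root)
    ulimit -d unlimited
    ulimit -v unlimited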
If you're in the *BSD world you may be constrained by limits imposed by
your login class; I remember that the various Linux flavors have similar
mechanisms for configuring limits.
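On Linux, for instance, persistent per-user limits usually live in
/etc/security/limits.conf (applied via pam_limits); a hypothetical pair of
entries lifting the address-space limit for one user might look like:

    # /etc/security/limits.conf -- 'youruser' is a placeholder
    youruser  soft  as  unlimited
    youruser  hard  as  unlimited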
Finally, it'd be nice to know how large the perl process gets before
it goes blooey. You could just watch the output from top, or you
could wrap up a little scriptlet that uses ps and grep to stash the
data to a file somewhere.
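Something along these lines would do -- a rough sketch, logging to an
arbitrary file under /tmp; adjust the pattern and interval to taste:

    #!/bin/sh
    # append a timestamp plus the ps line(s) for any perl process every 30 seconds
    # the '[p]erl' pattern keeps grep from matching itself
    while true; do
        date                     >> /tmp/perl-mem.log
        ps auxww | grep '[p]erl' >> /tmp/perl-mem.log
        sleep 30
    done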
g.