[Bioperl-l] storing/retrieving a large hash on file system?
Spiros Denaxas
s.denaxas at gmail.com
Tue May 18 15:41:01 UTC 2010
Hello,
It really depends on your definition of readable. YAML is readable
but requires a parser; XML is readable but bloated, and requires both
code and a parser.
You can dump the hash directly with Data::Dumper and then eval() the
output back into a hash. I would say this is the cleanest way if you
specifically want to dump a hash and regenerate it with no additional
parsing code.
You can set the $Data::Dumper::Indent variable to control how readable the dump is.
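A minimal sketch of that round trip (the hash contents and the file name are made up for illustration; setting $Data::Dumper::Terse drops the "$VAR1 =" prefix so the dump can be eval()'d directly as an expression):

```perl
use strict;
use warnings;
use Data::Dumper;

# Hypothetical stand-in for the large mutation hash.
my %mutations = (
    chr1 => { pos => 12345, ref => 'A', alt => 'G' },
    chr2 => { pos => 67890, ref => 'C', alt => 'T' },
);

# Indent=1 keeps the dump reasonably human-readable;
# Terse=1 omits '$VAR1 =' so the text eval()s straight back to a hashref.
$Data::Dumper::Indent = 1;
$Data::Dumper::Terse  = 1;

# Step 1: dump the hash to disk once.
open my $out, '>', 'mutations.dump' or die "write: $!";
print $out Dumper(\%mutations);
close $out;

# Step 2 (e.g. in the downstream script): slurp the file and eval it back.
my $text = do {
    open my $in, '<', 'mutations.dump' or die "read: $!";
    local $/;    # slurp mode
    <$in>;
};
my $restored = eval $text;
die "eval failed: $@" if $@;

print $restored->{chr1}{alt}, "\n";    # prints "G"
```

The usual caveat applies: eval() will execute whatever is in the file, so only do this with dump files you generated yourself.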
hope this helps,
Spiros
On Tue, May 18, 2010 at 4:28 PM, Ben Bimber <bimber at wisc.edu> wrote:
> This question is more of a general Perl one than BioPerl-specific,
> so I hope it is appropriate for this list:
>
> I am writing code that has two steps. The first generates a large,
> complex hash describing mutations, and takes a fair amount of time
> to run. The second step uses this data to perform downstream
> calculations. For the purposes of writing/debugging this downstream
> code, it would save me a lot of time if I could run the first step
> once and store the hash on something like the file system. That
> way I could quickly load it when debugging the downstream code,
> without waiting for the hash to be recreated.
>
> Is there a 'best practice' way to do something like this? I could
> save a tab-delimited file, which is human-readable but does not
> represent the structure of the hash, so I would need code to re-parse
> it. I assume I could probably do something along the lines of dumping
> a JSON string, then reading/decoding it. That is easy, but not very
> human-readable. Is there another option I'm not thinking of? What do
> others do in this sort of situation?
>
> thanks in advance.
>
> -Ben
> _______________________________________________
> Bioperl-l mailing list
> Bioperl-l at lists.open-bio.org
> http://lists.open-bio.org/mailman/listinfo/bioperl-l
>