From dehneg at labri.fr Tue Feb 10 12:31:42 2004
From: dehneg at labri.fr (Alexandre Dehne)
Date: Tue Feb 10 12:37:46 2004
Subject: [Bioperl-pipeline] creating jobs, job_setup
Message-ID: <1076434302.1593.276.camel@houat.labri.fr>

Hi,

First, I would like to congratulate the Biopipe team for having created such a useful tool.

The context:
For several reasons (some good and some not so good), some of my runnables take nothing as input and return nothing.
The problem:
This type of runnable does not match the Biopipe "spirit", so it is a problem to create jobs for these runnables via the "create_job" function, which needs an array input.

The "temporary" unrighteous solution:
I have created an InputCreate module named setup_nothing which creates a void input like the following:

    my @input = $self->create_input("nothing",'',"infile");
    my $job   = $self->create_job($next_anal,\@input);
    $self->dbadaptor->get_JobAdaptor->store($job);

This way, I launch one job on my analysis as well as on the following ones by placing "COPY_ID_FILE" in their respective rules in the XML file.

The questions:
Is there a clean way to create jobs without any input (a just_do_it function?)?
Perhaps the job_setup tag in the XML file?
Also, could someone tell me more about this tag?

Thank you in advance

Alexandre

From kumark at cshl.org Tue Feb 10 18:30:46 2004
From: kumark at cshl.org (Kiran Kumar)
Date: Tue Feb 10 18:36:46 2004
Subject: [Bioperl-pipeline] creating jobs, job_setup
In-Reply-To: <1076434302.1593.276.camel@houat.labri.fr>
Message-ID: 

Hi Alexandre,

It's nice to know that it fits into your work. In short, you can create a job without inputs. The direct way is not to pass any inputs to the "create_job" function:

    my $job = $self->create_job($next_anal);
    $self->dbadaptor->get_JobAdaptor->store($job);

That should make it 'righteous' :-)

Since you are following the Biopipe spirit, let me go on to explain the other aspects too.

On the XML level, you are right that the job_setup tag could be used for this purpose. The job_setup tag provides for specifying jobs directly inside the XML file without using a DataMonger/InputCreate. Of course, this is convenient if the number of jobs is a handful; otherwise it would make the XML file very lengthy. The feature is still there, but it has not been tested for a long time. We have stopped using it because of a drawback it poses to the Biopipe spirit, which is as follows.
If the job needs inputs and they are specified using job_setup options, the XML file becomes too specific, and anyone else trying to re-use it would have to change all the input_ids each time they need to run on a different set of inputs. The DataMonger/InputCreate, on the other hand, provides a clean separation of input names from the XML pipeline specification. The InputCreates are expected to read the input_names for the jobs they are going to create from a file, a directory or somewhere else (this location for the input_names is specified as the input_create's parameters in the XML file).

Hope I haven't left you more confused than before!

Cheers,
Kiran

>Hi,
>
>First, I would like to congratulate the Biopipe team for having created such a useful tool.
>
>
>The context :
>For several reasons (some goods and some not so good), some of my runnables take nothing in input and return nothing.
>
>The problem :
>This type of runnables does not match the biopipe "spirit", so it is a problem to create jobs for these runnables via the "create_job" function which needs a array input.
>
>The "temporary" unrighteous solution :
>I have created an InputCreate module named setup_nothing which creates a void input like the following :
> my @input=$self->create_input("nothing",'',"infile");
> my $job = $self->create_job($next_anal,\@input);
> $self->dbadaptor->get_JobAdaptor->store($job);
>This way, I launch one job on my analysis as well as on the following ones by placing "COPY_ID_FILE" in their respective rules in the XML file.
>
>
>The questions :
>Is there a clean way to create jobs without any input (a just_do_it function ?) ?
>Perhaps the mark in the XML file ?
>Also, could someone tell me more about this mark ???
>
>
>Thank you in advance
>
>
>Alexandre
>
>
>
>
>_______________________________________________
>bioperl-pipeline mailing list
>bioperl-pipeline@bioperl.org
>http://bioperl.org/mailman/listinfo/bioperl-pipeline
>

From dehneg at labri.fr Thu Feb 12 08:48:52 2004
From: dehneg at labri.fr (Alexandre Dehne)
Date: Thu Feb 12 08:55:00 2004
Subject: [Bioperl-pipeline] creating jobs, job_setup
In-Reply-To: 
References: 
Message-ID: <1076593731.1593.358.camel@houat.labri.fr>

Hi Kiran,

Thank you for answering me. Actually, your solution is very clean, but by using it other problems came up.

Here is the current situation: I start a job on my first analysis with your suggestion. Then, more jobs on other analyses are created by placing "COPY_ID_FILE" or "COPY_ID" in their respective rules in the XML file. (Remember that for now, none of my analyses takes any input or gives any output, so this way everything works well.)

The problem comes when I want to use an analysis that needs an input. For that, I am using the DataMonger. Since the DataMonger itself needs an input, it does not work. So, I am trying to create this input by using the following tag:

...
$input_description
1
....

My initial DataMonger (analysis N.1) and the one previously described (analysis N.4) are now both called at the beginning of the pipeline. But analysis N.4 has to be called after the third one, as I specified in the rules.

Do you have any suggestion on how to solve my problem, and on why the rules are not followed? Please let me know if I am not clear.

Thank you in advance,

Alexandre

On Wed, 2004-02-11 at 00:30, Kiran Kumar wrote:
> Hi Alexandre,
> It's nice to know that it fits into your work.
>
> In short, you would be able to create job without inputs. The direct way
> would be 'not to pass' any inputs to "create_job" function.
>
> my $job = $self->create_job($next_anal);
> $self->dbadaptor->get_JobAdaptor->store($job);
> That should make it 'righteous' :-)..
>
> Since you are following the Biopipe spirit, let me go on to explain the
> other aspects too.
>
>
> On the xml level, you are right that the tag could be used for
> this purpose.
> The provides for specifying jobs directly inside the XML file
> without using a Datamonger/InputCreate. Ofcourse, this is convinient if
> the number of jobs are handful which otherwise would make the XML file
> very lengthy. This feature is still there but has not been tested for long
> time. We have stopped using this feature for a drawback it poses towards
> the biopipe spirit which is as follows.
>
> If the job needs inputs, and it is specified using job_setup options,
> the xml file becomes too specific and anyone else trying to re-use it
> would have to change all the input_ids each time they need to run for
> different sets of inputs. The datamonger/InputCreate on the other hand,
> provides for the clean separation of input names from the xml pipeline
> specification. The InputCreates are expected to read the input_names for
> the the jobs they are gonna create from a file or directory or somewhere
> (this location for the input_names is specified as the input_create's
> parameters in the xml file).
>
> Hope I havent left you more confused than before!
>
> Cheers,
> Kiran
>
>
> >Hi,
> >
> >First, I would like to congratulate the Biopipe team for having created such a useful tool.
> >
> >
> >The context :
> >For several reasons (some goods and some not so good), some of my runnables take nothing in input and return nothing.
> >
> >The problem :
> >This type of runnables does not match the biopipe "spirit", so it is a problem to create jobs for these runnables via the "create_job" function which needs a array input.
> >
> >The "temporary" unrighteous solution :
> >I have created an InputCreate module named setup_nothing which
> >creates a void input like the following :
> > my @input=$self->create_input("nothing",'',"infile");
> > my $job = $self->create_job($next_anal,\@input);
> > $self->dbadaptor->get_JobAdaptor->store($job);
> >This way, I launch one job on my analysis as well as on the
> >following ones by placing "COPY_ID_FILE" in their
> >respective rules in the XML file.
> >
> >
> >The questions :
> >Is there a clean way to create jobs without any input (a just_do_it
> >function ?) ?
> >Perhaps the mark in the XML file ?
> >Also, could someone tell me more about this mark ???
> >
> >
> >Thank you in advance
> >
> >
> >Alexandre
> >
> >
> >_______________________________________________
> >bioperl-pipeline mailing list
> >bioperl-pipeline@bioperl.org
> >http://bioperl.org/mailman/listinfo/bioperl-pipeline
>

From shawnh at fugu-sg.org Fri Feb 13 03:18:58 2004
From: shawnh at fugu-sg.org (Shawn Hoon)
Date: Fri Feb 13 03:24:54 2004
Subject: [Bioperl-pipeline] creating jobs, job_setup
In-Reply-To: <1076593731.1593.358.camel@houat.labri.fr>
References: <1076593731.1593.358.camel@houat.labri.fr>
Message-ID: <45CB6AED-5DFD-11D8-A4FD-000A95783436@fugu-sg.org>

Hi Alexandre,

Okay, this is what I gather you are trying to do:

Run analyses 1 -> 2 -> 3 -> 4

The question is: what are your inputs? Are you running the four analyses on the same input type? For example, do you have four blast analyses that you run on sequences? If so, you would use an input create/data monger to create inputs for analysis 1. Then, in your rules, you would specify COPY_ID for analysis 1 -> 2, 2 -> 3 and 3 -> 4, and the input id will be transferred between analyses.

If your input for analysis 2, for example, is different from that of analysis 1, then you need to do something different.
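For readers following along, the COPY_ID chaining described above lives in the rule section of the pipeline XML. The fragment below is only an illustrative sketch: the tag names and analysis ids are assumed from the Biopipe XML templates shipped with the distribution, not taken from this thread.

```xml
<!-- Hypothetical sketch: pass each finished job's input id on to the
     next analysis in the chain 1 -> 2 -> 3 -> 4. -->
<rule_group id="1">
  <rule>
    <current_analysis_id>1</current_analysis_id>
    <next_analysis_id>2</next_analysis_id>
    <action>COPY_ID</action>
  </rule>
  <rule>
    <current_analysis_id>2</current_analysis_id>
    <next_analysis_id>3</next_analysis_id>
    <action>COPY_ID</action>
  </rule>
  <rule>
    <current_analysis_id>3</current_analysis_id>
    <next_analysis_id>4</next_analysis_id>
    <action>COPY_ID</action>
  </rule>
</rule_group>
```

With rules like these, once a job of analysis 1 finishes, a job of analysis 2 is created with the same input id, and so on down the chain.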
For this, there are 2 options:

1) If you require that analysis 1 is completed before analysis 2 starts, then you need an analysis in between 1 and 2 (so, as a result, 2 becomes 3). Analysis 2 would now be an input_create which knows how to create inputs for the old analysis 2 (now analysis 3). (Basically, we are assuming that this input creation is linked to input 1 of analysis 1.)

2) If you require that all of the inputs from analysis 1 are completed before any analysis 2 jobs are started, you can use a WAITFORALL rule, which would then launch a job of analysis 2 (which may or may not be an input create).

For your definition below, I don't see why analysis 4 should be executed at startup. Can you provide the xml file?

shawn

On Feb 12, 2004, at 5:48 AM, Alexandre Dehne wrote:

> Hi Kiran,
>
> Thank you for answering me.
> Actually, your solution is very clean but, by using it, other problems
> came up.
>
> Here is the current situation:
> So, I start a job on my first analysis with your suggestion. Then, more
> jobs on other analysis are created by placing
> "COPY_ID_FILE" or "COPY_ID" in their
> respective rules in the XML file.
> (Remember that for now, all of my analysis do not take any input and do
> not give any output. So, this way, everything is fine and work well.)
>
> Here comes the problem when I want to use an analysis that needs an
> input. For that, I am using the data monger. Since the data monger
> needs
> an input, it therefore does not work. So, I am trying to create this
> input by using the following mark:
>
> ...
>
> $input_description
> 1
>
> ....
>
> My initial data monger (analysis N.1) and the one previously described
> (analysis N.4) are now called at the beginning of the pipeline.
> But, the analysis N.4 has to be called after the third one as I
> specified it in the rules.
>
> Do you have any suggestion on how to solve my problem and why the rules
> are not followed ?
> Please let me know if I am not clear.
>
> Thank you in advance,
>
> Alexandre
>
>
>
> On Wed, 2004-02-11 at 00:30, Kiran Kumar wrote:
>> Hi Alexandre,
>> It's nice to know that it fits into your work.
>>
>> In short, you would be able to create job without inputs. The direct
>> way
>> would be 'not to pass' any inputs to "create_job" function.
>>
>> my $job = $self->create_job($next_anal);
>> $self->dbadaptor->get_JobAdaptor->store($job);
>> That should make it 'righteous' :-)..
>>
>> Since you are following the Biopipe spirit, let me go on to explain
>> the
>> other aspects too.
>>
>>
>> On the xml level, you are right that the tag could be
>> used for
>> this purpose.
>> The provides for specifying jobs directly inside the XML
>> file
>> without using a Datamonger/InputCreate. Ofcourse, this is convinient
>> if
>> the number of jobs are handful which otherwise would make the XML file
>> very lengthy. This feature is still there but has not been tested for
>> long
>> time. We have stopped using this feature for a drawback it poses
>> towards
>> the biopipe spirit which is as follows.
>>
>> If the job needs inputs, and it is specified using job_setup options,
>> the xml file becomes too specific and anyone else trying to re-use it
>> would have to change all the input_ids each time they need to run for
>> different sets of inputs. The datamonger/InputCreate on the other
>> hand,
>> provides for the clean separation of input names from the xml pipeline
>> specification. The InputCreates are expected to read the input_names
>> for
>> the the jobs they are gonna create from a file or directory or
>> somewhere
>> (this location for the input_names is specified as the input_create's
>> parameters in the xml file).
>>
>> Hope I havent left you more confused than before!
>>
>> Cheers,
>> Kiran
>>
>>
>>> Hi,
>>>
>>> First, I would like to congratulate the Biopipe team for having
>>> created such a useful tool.
>>>
>>> The context :
>>> For several reasons (some goods and some not so good), some of my
>>> runnables take nothing in input and return nothing.
>>>
>>> The problem :
>>> This type of runnables does not match the biopipe "spirit", so it is
>>> a problem to create jobs for these runnables via the "create_job"
>>> function which needs a array input.
>>>
>>> The "temporary" unrighteous solution :
>>> I have created an InputCreate module named setup_nothing which
>>> creates a void input like the following :
>>> my @input=$self->create_input("nothing",'',"infile");
>>> my $job = $self->create_job($next_anal,\@input);
>>> $self->dbadaptor->get_JobAdaptor->store($job);
>>> This way, I launch one job on my analysis as well as on the
>>> following ones by placing "COPY_ID_FILE" in their
>>> respective rules in the XML file.
>>>
>>>
>>> The questions :
>>> Is there a clean way to create jobs without any input (a just_do_it
>>> function ?) ?
>>> Perhaps the mark in the XML file ?
>>> Also, could someone tell me more about this mark ???
>>>
>>>
>>> Thank you in advance
>>>
>>>
>>> Alexandre
>>>
>>>
>>> _______________________________________________
>>> bioperl-pipeline mailing list
>>> bioperl-pipeline@bioperl.org
>>> http://bioperl.org/mailman/listinfo/bioperl-pipeline
>>>
>>
>
> _______________________________________________
> bioperl-pipeline mailing list
> bioperl-pipeline@bioperl.org
> http://bioperl.org/mailman/listinfo/bioperl-pipeline
>

From m_conte at hotmail.com Wed Feb 18 12:23:17 2004
From: m_conte at hotmail.com (matthieu CONTE)
Date: Wed Feb 18 12:29:34 2004
Subject: [Bioperl-pipeline] develop new module
Message-ID: 

Hi,

I'm trying to develop a module for a Java program. Are there other wrappers in bioperl-run that work on Java apps besides Eponine and Vista?

-----------------------------------------------------------
Matthieu CONTE
M. Sc. in Bioinformatics from SIB

CIRAD-Biotrop TA40/03
Avenue Agropolis
34398 Montpellier Cedex 5
FRANCE

m_conte@hotmail.com
tel: (33) 04 67 61 60 21
fax: (33) 4 67 61 56 05

-----------------------------------------------------------

_________________________________________________________________
MSN Search, le moteur de recherche qui pense comme vous !
http://search.msn.fr/worldwide.asp

From shawnh at stanford.edu Wed Feb 18 13:13:18 2004
From: shawnh at stanford.edu (Shawn Hoon)
Date: Wed Feb 18 13:19:29 2004
Subject: [Bioperl-pipeline] develop new module
In-Reply-To: 
References: 
Message-ID: <20DEEEC4-623E-11D8-A4FD-000A95783436@stanford.edu>

I think these two are the only ones.

cheers,

shawn

On Feb 18, 2004, at 9:23 AM, matthieu CONTE wrote:

> Hi,
> I'm trying to develop a module for a java program.
> Is there others wrappers in bioperl-run that work on java apps that
> Eponine and Vista?
>
>
> -----------------------------------------------------------
> Matthieu CONTE
> M. Sc. in Bioinformatics from SIB
>
> CIRAD-Biotrop TA40/03
> Avenue Agropolis
> 34398 Montpellier Cedex 5
> FRANCE
>
> m_conte@hotmail.com
> tel: (33)04 67 61 60 21
> fax :(33) 4 67 61 56 05
>
> -----------------------------------------------------------
>
> _________________________________________________________________
> MSN Search, le moteur de recherche qui pense comme vous !
> http://search.msn.fr/worldwide.asp
>
> _______________________________________________
> bioperl-pipeline mailing list
> bioperl-pipeline@bioperl.org
> http://bioperl.org/mailman/listinfo/bioperl-pipeline

From juguang at tll.org.sg Wed Feb 18 21:11:55 2004
From: juguang at tll.org.sg (Juguang Xiao)
Date: Wed Feb 18 21:18:08 2004
Subject: [Bioperl-pipeline] develop new module
In-Reply-To: 
Message-ID: 

Let us know your Java program and we would be glad to compose the wrapper for you.

Juguang

On Thursday, February 19, 2004, at 01:23 am, matthieu CONTE wrote:

> Hi,
> I'm trying to develop a module for a java program.
> Is there others wrappers in bioperl-run that work on java apps that
> Eponine and Vista?
>
>
> -----------------------------------------------------------
> Matthieu CONTE
> M. Sc. in Bioinformatics from SIB
>
> CIRAD-Biotrop TA40/03
> Avenue Agropolis
> 34398 Montpellier Cedex 5
> FRANCE
>
> m_conte@hotmail.com
> tel: (33)04 67 61 60 21
> fax :(33) 4 67 61 56 05
>
> -----------------------------------------------------------
>
> _________________________________________________________________
> MSN Search, le moteur de recherche qui pense comme vous !
> http://search.msn.fr/worldwide.asp
>
> _______________________________________________
> bioperl-pipeline mailing list
> bioperl-pipeline@bioperl.org
> http://bioperl.org/mailman/listinfo/bioperl-pipeline

Juguang Xiao

From dehneg at labri.fr Wed Feb 18 09:36:14 2004
From: dehneg at labri.fr (Alexandre Dehne)
Date: Fri Feb 27 10:07:01 2004
Subject: [Bioperl-pipeline] creating jobs, job_setup
In-Reply-To: <45CB6AED-5DFD-11D8-A4FD-000A95783436@fugu-sg.org>
References: <1076593731.1593.358.camel@houat.labri.fr> <45CB6AED-5DFD-11D8-A4FD-000A95783436@fugu-sg.org>
Message-ID: <1077114973.3020.339.camel@houat.labri.fr>

Hi,

Please excuse me for not being clear enough in my last mail, so I am going to *try* to be more explicit. I am now introducing three new pipelines (A, B and C), derived from the one I mentioned in my last email, to explain my problems (you can forget the pipeline in my last email; I am introducing the new ones entirely; the xml files are attached). Pipelines A and B allow me to show you the problem that came up using Kiran's solution. Pipelines A and C help me to explain the way the rules were not followed.

Let's start with the descriptions of pipelines A, B and C.

First, the common structure of A, B and C:

run analyses: 1 -> 2 -> 3 -> 4 -> 5

analysis 1: datamonger
analyses 2 and 3: analyses that take nothing as input and output nothing (using the CAAT-Box program in my case)
analysis 4: datamonger
analysis 5: an analysis that needs an input (from analysis 4), using blast for example
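Since the archive has stripped the XML markup out of these messages, a reconstructed sketch of a datamonger analysis such as analysis 4 may help. The tag names below are assumed from the Biopipe example templates (e.g. blast_db_file.xml), not taken from the attached A/B/C files, so they may differ in detail:

```xml
<!-- Hypothetical sketch of a DataMonger analysis whose InputCreate
     module (setup_initial here) creates the jobs and inputs for the
     downstream blast analysis. -->
<analysis id="4">
  <data_monger>
    <input>
      <name>$input_description</name>
      <iohandler>1</iohandler>
    </input>
    <input_create>
      <module>setup_initial</module>
      <rank>1</rank>
    </input_create>
  </data_monger>
</analysis>
```

The A-versus-C difference discussed below appears to come down to whether the input element is placed outside or inside the input_create element.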
Then, the differences between the analyses from A, B and C:

# analysis 1 from A and C (my old solution): a datamonger that uses an InputCreate module named setup_nothing, which creates a "void" input like the following:

    my @input=$self->create_input("nothing",'',"infile");
    my $job = $self->create_job($next_anal,\@input);
    $self->dbadaptor->get_JobAdaptor->store($job);

# analysis 1 from B (Kiran's solution): a datamonger that uses an InputCreate module named setup_nothing_kiran, which creates a job with no input, like the following:

    my $job = $self->create_job($next_anal);
    $self->dbadaptor->get_JobAdaptor->store($job);

# analyses 2 and 3 (from A, B and C): nothing special, except that the tag has to be "COPY_ID_FILE" in A and B, but can be indifferently "COPY_ID_FILE" or "COPY_ID" in C.

# analysis 4 from A and B: a datamonger using an InputCreate module which creates jobs with an input (for example the module setup_initial). Please note that, contrary to analysis 4 from C, the tag is outside the tag:

...
$input_description
1
setup_initial
1
....

# analysis 4 from C: a datamonger using an InputCreate module which creates jobs with an input (for example the module setup_initial). Please note that, contrary to analysis 4 from A and B, the tag is inside the tag:

...
$input_description
1
setup_initial
1
....

# analysis 5 (from A, B and C): nothing special, the same blast analysis as in the example blast_db_file.xml.

Okay, now you know all about pipelines A, B and C. Pipeline A is the one I previously used to create void inputs (via the module setup_nothing), and pipeline B is Kiran's way (via the module setup_nothing_kiran). I really like Kiran's solution, but without any input, analysis 4 returns the following error:

========
" READING: Lost the will to live Error.
Problems with runnableDB fetching input
[
------------- EXCEPTION: Bio::Root::Exception -------------
MSG: Runnable Bio::Pipeline::Runnable::DataMonger=HASH(0x89f1b98) cannot call
STACK: Error::throw
STACK: Bio::Root::Root::throw /usr/lib/perl5/site_perl/5.8.0/Bio/Root/Root.pm:342
STACK: Bio::Pipeline::RunnableDB::setup_runnable_inputs /var/opt/Genolevures/src/biopipe-bundle-0.1/bioperl-pipeline/Bio/Pipeline/RunnableDB.pm:244
STACK: Bio::Pipeline::RunnableDB::fetch_input /var/opt/Genolevures/src/biopipe-bundle-0.1/bioperl-pipeline/Bio/Pipeline/RunnableDB.pm:485
......
"
============

My questions concerning Kiran's solution are:
- How can I manage this problem (using the data monger after an analysis which outputs nothing)?
- Is there another tag (different from "COPY_ID" or "COPY_ID_FILE") which does not use the concept of copying an input (which does not exist in my case)?

Those were the questions about pipelines A and B. Now, I focus on pipelines A and C to show how the rules are not followed. Indeed, when running pipeline C, analysis 4 starts just after analysis 1, which contradicts the rules. Do you have an explanation? Anyway, I worked around this problem by using pipeline A, which consists in writing the tag outside the tag for analysis 4.

I hope I am clear enough. Thanks in advance.

Alexandre

On Fri, 2004-02-13 at 09:18, Shawn Hoon wrote:
> Hi Alexandre,
> okay this is what I gather you are trying to do:
>
> Run analysis 1 -> 2 -> 3 -> 4
>
> The question is what are your inputs? are u running the four analysis
> on the same input type? for example, you
> have four blast analysis that you do on sequences? If so, then what
> you would do is use a input create/data monger to
> create inputs for analysis 1. Then in your rules you would specify
> COPY_ID for analysis 1 -> 2 and 2->3 and 3->4
> then the input id will be transferred between analysis.
>
> If your input for analysis 2 for example is different from that of
> analysis 1, then you need to do something different.
> For this, there are 2 options:
>
> 1) If you require that the analysis 1 is completed before 2 is
> completed, then you need an analysis in between 1 and 2 ( so as a
> result 2 becomes 3)
> Analysis 2 would now be an input_create which knows how to create
> inputs for analysis 2. (Basically we are assuming the this input
> creation is linked to
> the input 1 of analysis 1.
> 2) If you require that all of the inputs from analysis 1 is completed
> before any analysis 2 jobs are started, you can do a rule WAITFORALL
> which would then launch
> a job of analysis 2 (which may or may not be a input create).
>
> for your definition below, I don't see why analysis 4 should be
> executed at startup. Can you provide the xml file?
> shawn
>
>
> On Feb 12, 2004, at 5:48 AM, Alexandre Dehne wrote:
>
> > Hi Kiran,
> >
> > Thank you for answering me.
> > Actually, your solution is very clean but, by using it, other problems
> > came up.
> >
> > Here is the current situation:
> > So, I start a job on my first analysis with your suggestion. Then, more
> > jobs on other analysis are created by placing
> > "COPY_ID_FILE" or "COPY_ID" in their
> > respective rules in the XML file.
> > (Remember that for now, all of my analysis do not take any input and do
> > not give any output. So, this way, everything is fine and work well.)
> >
> > Here comes the problem when I want to use an analysis that needs an
> > input. For that, I am using the data monger. Since the data monger
> > needs
> > an input, it therefore does not work. So, I am trying to create this
> > input by using the following mark:
> >
> > ...
> >
> > $input_description
> > 1
> >
> > ....
> >
> > My initial data monger (analysis N.1) and the one previously described
> > (analysis N.4) are now called at the beginning of the pipeline.
> > But, the analysis N.4 has to be called after the third one as I
> > specified it in the rules.
> >
> > Do you have any suggestion on how to solve my problem and why the rules
> > are not followed ?
> > Please let me know if I am not clear.
> >
> > Thank you in advance,
> >
> > Alexandre
> >
> >
> >
> > On Wed, 2004-02-11 at 00:30, Kiran Kumar wrote:
> >> Hi Alexandre,
> >> It's nice to know that it fits into your work.
> >>
> >> In short, you would be able to create job without inputs. The direct
> >> way
> >> would be 'not to pass' any inputs to "create_job" function.
> >>
> >> my $job = $self->create_job($next_anal);
> >> $self->dbadaptor->get_JobAdaptor->store($job);
> >> That should make it 'righteous' :-)..
> >>
> >> Since you are following the Biopipe spirit, let me go on to explain
> >> the
> >> other aspects too.
> >>
> >>
> >> On the xml level, you are right that the tag could be
> >> used for
> >> this purpose.
> >> The provides for specifying jobs directly inside the XML
> >> file
> >> without using a Datamonger/InputCreate. Ofcourse, this is convinient
> >> if
> >> the number of jobs are handful which otherwise would make the XML file
> >> very lengthy. This feature is still there but has not been tested for
> >> long
> >> time. We have stopped using this feature for a drawback it poses
> >> towards
> >> the biopipe spirit which is as follows.
> >>
> >> If the job needs inputs, and it is specified using job_setup options,
> >> the xml file becomes too specific and anyone else trying to re-use it
> >> would have to change all the input_ids each time they need to run for
> >> different sets of inputs. The datamonger/InputCreate on the other
> >> hand,
> >> provides for the clean separation of input names from the xml pipeline
> >> specification. The InputCreates are expected to read the input_names
> >> for
> >> the the jobs they are gonna create from a file or directory or
> >> somewhere
> >> (this location for the input_names is specified as the input_create's
> >> parameters in the xml file).
> >>
> >> Hope I havent left you more confused than before!
> >>
> >> Cheers,
> >> Kiran
> >>
> >>
> >>> Hi,
> >>>
> >>> First, I would like to congratulate the Biopipe team for having
> >>> created such a useful tool.
> >>>
> >>>
> >>> The context :
> >>> For several reasons (some goods and some not so good), some of my
> >>> runnables take nothing in input and return nothing.
> >>>
> >>> The problem :
> >>> This type of runnables does not match the biopipe "spirit", so it is
> >>> a problem to create jobs for these runnables via the "create_job"
> >>> function which needs a array input.
> >>>
> >>> The "temporary" unrighteous solution :
> >>> I have created an InputCreate module named setup_nothing which
> >>> creates a void input like the following :
> >>> my @input=$self->create_input("nothing",'',"infile");
> >>> my $job = $self->create_job($next_anal,\@input);
> >>> $self->dbadaptor->get_JobAdaptor->store($job);
> >>> This way, I launch one job on my analysis as well as on the
> >>> following ones by placing "COPY_ID_FILE" in their
> >>> respective rules in the XML file.
> >>>
> >>>
> >>> The questions :
> >>> Is there a clean way to create jobs without any input (a just_do_it
> >>> function ?) ?
> >>> Perhaps the mark in the XML file ?
> >>> Also, could someone tell me more about this mark ???
> >>>
> >>> Thank you in advance
> >>>
> >>> Alexandre
> >>>
> >>>
> >>> _______________________________________________
> >>> bioperl-pipeline mailing list
> >>> bioperl-pipeline@bioperl.org
> >>> http://bioperl.org/mailman/listinfo/bioperl-pipeline
> >>>
> >>
> >
> > _______________________________________________
> > bioperl-pipeline mailing list
> > bioperl-pipeline@bioperl.org
> > http://bioperl.org/mailman/listinfo/bioperl-pipeline
> >

-------------- next part --------------
A non-text attachment was scrubbed...
Name: A.xml
Type: text/xml
Size: 5858 bytes
Desc: not available
Url : http://portal.open-bio.org/pipermail/bioperl-pipeline/attachments/20040218/ebda9df9/A-0001.xml
-------------- next part --------------
A non-text attachment was scrubbed...
Name: B.xml
Type: text/xml
Size: 5936 bytes
Desc: not available
Url : http://portal.open-bio.org/pipermail/bioperl-pipeline/attachments/20040218/ebda9df9/B-0001.xml
-------------- next part --------------
A non-text attachment was scrubbed...
Name: C.xml
Type: text/xml
Size: 5881 bytes
Desc: not available
Url : http://portal.open-bio.org/pipermail/bioperl-pipeline/attachments/20040218/ebda9df9/C-0001.xml

From faga at cshl.org Fri Feb 27 10:16:59 2004
From: faga at cshl.org (Ben Faga)
Date: Fri Feb 27 10:24:23 2004
Subject: [Bioperl-pipeline] typo in XML_README.html
Message-ID: <1077895019.6069.18.camel@ricotta>

Hi,

I hope this doesn't go to a mailing list because I'm just reporting a typo. In the description of at http://www.biopipe.org/docs/html/XML_README.html, the sentence "This makes path definitions and centralized and users of the XML template should only need to modify things here" has an extra "and" in it, I think.

Ben