
opal - Re: [Opal] IPPL/OPAL changes require rebuilding both packages



Re: [Opal] IPPL/OPAL changes require rebuilding both packages


  • From: Yves Ineichen <yves.ineichen AT psi.ch>
  • To: opal AT lists.psi.ch
  • Subject: Re: [Opal] IPPL/OPAL changes require rebuilding both packages
  • Date: Mon, 5 Sep 2011 22:15:56 +0200
  • List-archive: <https://lists.web.psi.ch/pipermail/opal/>
  • List-id: The OPAL Discussion Forum <opal.lists.psi.ch>

I attached a basic example in which the wrapper is used to start one
simulation per processor. The library wrapper is by no means stable:
it works, but the API will most probably change in the future.

Regards,

2011/9/2 jjyang <jianjun.yang AT psi.ch>:
>
>
> Jianjun Yang
> ---------------------------------------------
> Post-doctoral
> Massachusetts Institute of Technology
>
> Work Address:
> Paul Scherrer Institut WBGB/125
> CH-5232 Villigen PSI Switzerland
> ----------------------------------------------
>
>
>
> On 2011-09-02, at 11:09 AM, Yves Ineichen wrote:
>
>> Hi all,
>>
>> I made some changes[1] with respect to the MPI Communicator in IPPL.
>> Basically you can now run OPAL (any application using IPPL) on a
>> subset of processors by passing a user defined MPI group to the IPPL
>> constructor. This parameter defaults to MPI_COMM_WORLD so you don't
>> have to change anything if you want to use IPPL as before.
>>
>> To adapt OPAL to the new MPI communicator mechanism I had to adapt
>> some pure MPI_* and H5OpenFile calls (by using Ippl::getComm() to get
>> the currently used communicator). Long story short: When you update
>> OPAL to r12707[1] you NEED TO UPDATE AND REBUILD IPPL as well!
>>
>> Sidenote: [2] also introduces a crude OPAL library wrapper (see
>> src/opal.h/cpp). This means OPAL can now be called as a library from
>> other projects/code/languages.
>
> OPAL as a library, that is cool!
> Maybe it would be clearer to provide a sample example or some instructions.
>
>>
>> Best regards,
>>
>>
>> [1] https://amas.psi.ch/IPPL/changeset/12699
>> [2] https://amas.psi.ch/OPAL/changeset/12707
>>
>> --
>> Yves Ineichen
>> Paul Scherrer Institut WLGB/125 CH-5232 Villigen PSI
>> Phone Office: +41 56 310 37 63
>> http://amas.web.psi.ch
>> ::p = "This statement cannot be proven"::
>> _______________________________________________
>> Opal mailing list
>> Opal AT lists.psi.ch
>> https://lists.web.psi.ch/mailman/listinfo/opal
>
>



--
Yves Ineichen
Paul Scherrer Institut WLGB/125 CH-5232 Villigen PSI
Phone Office: +41 56 310 37 63
http://amas.web.psi.ch
::p = "This statement cannot be proven"::
#include <cstdlib>   // getenv
#include <iostream>
#include <string>

#include <mpi.h>
#include <unistd.h>  // chdir

#include "opal.h"

int main(int argc, char** argv) {

    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // here we assume that we have a 'FinPhase3' directory
    // in PWD containing the input file of the simulation to run
    const char *env = getenv("PWD");            // may be unset
    std::string pwd = std::string(env ? env : ".") + "/";
    std::string simd = pwd + "FinPhase3/";
    if (chdir(simd.c_str()) != 0)
        return 1;

    // arguments we would pass to OPAL on the command line..
    // (casts needed because string literals are const in C++)
    char *arg[] = { (char*)"opal", (char*)"FinPhase3.in", (char*)"--commlib mpi",
                    (char*)"--info 0", (char*)"--warn 0" };

    // every processor runs the same opal simulation..
    // with the last argument you steer which processor group is 
    // running the simulation
    run_opal(arg, "FinPhase3.in", -1, MPI_COMM_SELF);

    // restore old PWD
    if (chdir(pwd.c_str()) != 0)
        return 1;

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}


