opal AT lists.psi.ch
Subject: The OPAL Discussion Forum
List archive
- From: Philippe Piot <philippe.piot AT gmail.com>
- To: "Adelmann Andreas (PSI)" <andreas.adelmann AT psi.ch>
- Cc: "opal AT lists.psi.ch" <opal AT lists.psi.ch>
- Subject: Re: [Opal] optimizer sometime gets stuck
- Date: Thu, 27 May 2021 07:45:42 -0500
Andreas,
Did you ever encounter this type of problem on Bebop? That is the cluster I am using; my job submission script is below in case you have a good suggestion. Thank you! -- Philippe.
#!/bin/bash -l
#SBATCH -A Bright-Beams
#SBATCH --job-name=awa_optim
#SBATCH -o optim.%j.%N.out
#SBATCH -e optim.%j.%N.error
#SBATCH --time=18:00:00
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=36
#SBATCH --partition=bdwall
#
#export I_MPI_SLURM_EXT=0
#export I_MPI_FABRICS=shm:tmi
ulimit -s unlimited
export OPAL_EXE_PATH=/lcrc/project/Bright-Beams/software/opal/build_gcc/src
#
# cd $SLURM_SUBMIT_DIR
#
rm -rf *.0 tmp *_0
#
# mkdir results tmp
#
# Setup My Environment
module load gcc/7.1.0-4bgguyp
module load boost # needs Boost > 1.66
module load mpich
module load hdf5/1.10.5-fuzylbv # needs the parallel HDF5 build
module load libszip
module load gsl #/2.4
# Run My Program
mpirun -n $SLURM_NTASKS $OPAL_EXE_PATH/opal awaDrive_optimEmit.in --info 5
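One possible tweak, sketched below under the assumption that GNU coreutils `timeout` is available on the compute nodes and that mpirun tears down its ranks on SIGTERM: wrap the mpirun call so a hung optimizer run is killed before it consumes the full 18 h allocation, and the job exits with a status that can be checked afterwards. The 17 h budget and the opal_run.log file name are illustrative choices, not part of the original script.

# Sketch (assumes GNU coreutils `timeout` on the node): kill the run if it
# exceeds 17 h, leaving slack before the 18 h SLURM walltime limit.
timeout --signal=TERM 17h \
    mpirun -n $SLURM_NTASKS $OPAL_EXE_PATH/opal awaDrive_optimEmit.in --info 5 \
    > opal_run.log 2>&1
status=$?
if [ $status -eq 124 ]; then
    # exit code 124 means `timeout` had to terminate the run, i.e. it was stuck
    echo "OPAL run exceeded 17 h and was terminated -- likely hung" >&2
fi
exit $status

With this in place, a stuck run should show up in the SLURM accounting as exit code 124 rather than as a walltime kill, which makes the hangs easier to spot after the fact.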
On Thu, May 27, 2021 at 7:41 AM Adelmann Andreas (PSI) <andreas.adelmann AT psi.ch> wrote:
Hi Philippe, I tend to agree with Jochem (I misinterpreted the output snippet in your original email). Cheers, A
------
Dr. sc. math. Andreas (Andy) Adelmann
Head a.i., Laboratory for Scientific Computing and Modelling
Paul Scherrer Institut, OHSA / CH-5232 Villigen PSI
Phone Office: xx41 56 310 42 33 Fax: xx41 56 310 31 91
Zoom ID: 470-582-4086 Password: AdAZoom Link: https://ethz.zoom.us/j/4705824086?pwd=dFcvT1pMMGY0bHg0dTNncUNZZTJkZz09
-------------------------------------------------------
Friday: ETH HPK G 28 +41 44 633 3076
============================================
The more exotic, the more abstract the knowledge,
the more profound will be its consequences.
Leon Lederman
============================================
On 27 May 2021, at 14:32, Philippe Piot <philippe.piot AT gmail.com> wrote:
<pilot.trace.0>
- [Opal] optimizer sometime gets stuck, Philippe Piot, 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Adelmann Andreas (PSI), 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Philippe Piot, 05/27/2021
- [Opal] Fwd: optimizer sometime gets stuck | output part I, Philippe Piot, 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Adelmann Andreas (PSI), 05/27/2021
- [Opal] Fwd: optimizer sometime gets stuck | output part II, Philippe Piot, 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Adelmann Andreas (PSI), 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Philippe Piot, 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Adelmann Andreas (PSI), 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Philippe Piot, 05/27/2021
- Re: [Opal] optimizer sometime gets stuck, Adelmann Andreas (PSI), 05/27/2021