
Re: [Opal] Increase in External Field Evaluation Time


  • From: Nicole R Neveu <nneveu AT stanford.edu>
  • To: Christof Metzger-Kraus <christof.j.kraus AT gmail.com>, Chris Hall <chall AT radiasoft.net>
  • Cc: opal <opal AT lists.psi.ch>
  • Subject: Re: [Opal] Increase in External Field Evaluation Time
  • Date: Sun, 17 Jan 2021 06:09:35 +0000

Hi guys,

 

I needed to do a rebuild today on a computer I hadn't used for a few months.

I saw similar behavior to what Chris reported, plus one more difference.

When I re-ran an LCLS-II test with 2.4, the run time was 9 minutes vs. 6 minutes on my old 2.0 build.

 

I went back and built off the 2.2 branch, and the file would not run.

The error was related to interpolation when loading the last field map (a standing-wave RF cavity).

See file attached.

 

Then I went back to a simpler test (the AWA gun from the regression tests).

That file has only one RF cavity (the gun) plus solenoids, and there was no time difference or error when switching between 2.2 and 2.4.

Could this point to something field map related?
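
One quick check might be worthwhile (just a sketch, not verified): the "interpolation error" reported from interp.c looks like the message GSL prints when a lookup falls outside the tabulated range, so it could help to confirm that L0B_9cell.txt actually covers the z extent OPAL reports for the cavities. Something like the following, assuming a plain whitespace-separated table with z in the first column; real OPAL 1D maps carry extra header lines, so the parsing is only illustrative:

# Hypothetical check: does the tabulated z range of a 1D field map cover
# the element extent OPAL reports?  File name and z values are copied from
# the attached log; the simple multi-column format is an assumption.
import numpy as np

def check_map_range(path, z_start, z_end):
    # Skip comment-style lines; real maps may need their header stripped first.
    data = np.loadtxt(path, comments=("#", "!"))
    z = data[:, 0]
    print(f"{path}: tabulated z = [{z.min():.6f}, {z.max():.6f}] m")
    if z.min() > z_start or z.max() < z_end:
        print("  -> element extent exceeds the tabulated range;"
              " an out-of-range lookup would trigger an interpolation error")

# zini/zfinal for cavity C8 as printed in the attached log
check_map_range("L0B_9cell.txt", -0.659401, 0.659397)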

 

Thanks,

 

Nicole

 

From: <opal-request AT lists.psi.ch> on behalf of Christof Metzger-Kraus <christof.j.kraus AT gmail.com>
Reply-To: Christof Metzger-Kraus <christof.j.kraus AT gmail.com>
Date: Saturday, January 16, 2021 at 7:18 AM
To: Chris Hall <chall AT radiasoft.net>
Cc: opal <opal AT lists.psi.ch>
Subject: Re: [Opal] Increase in External Field Evaluation Time

 

Hi Chris,

 

I ran our regression tests with versions 2.2 and 2.4 of OPAL, but I didn't notice any dramatic increase in compute time for the evaluation of external fields in any test. Some ran a bit slower, others a bit faster. Can you share the input file and the field maps? If not: what kind of field maps do you use? Do you use other features such as wake fields or particle-matter interaction?

 

Christof

 

On Tue, Jan 12, 2021 at 6:21 AM Christof Metzger-Kraus <christof.j.kraus AT gmail.com> wrote:

Hi Chris,

 

I can't find anything in the source code changes that could have such an influence on the compute time for the external fields. I'll investigate this.

 

Christof

 

On Mon, Jan 11, 2021 at 9:44 PM Chris Hall <chall AT radiasoft.net> wrote:

Hi All,

 

I recently noticed that some OPAL simulations were taking much longer than expected. We upgraded from OPAL 2.2.0 to 2.4.0 not long ago, so I reran an old simulation I still had outputs for. From this I see that the average wall and CPU times for "External field eval." have increased by almost a factor of 10 when I compare 2.4 to 2.2. This leads to a doubling of the total wall-clock run time. None of the other individual timing results shows a significant difference between the two runs.

 

Is anyone aware of changes between the two versions that might have caused this, or any other factors that should be investigated that would lead to this?

 

I've attached the run logs (with step updates cut out) for the runs on versions 2.2 and 2.4 for reference.
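
For anyone who wants to compare the two attached logs programmatically, here is a rough sketch; the file names are hypothetical and the layout of OPAL's timing summary (timer name followed by CPU and wall time columns) is an assumption, not verified:

# Rough comparison of the "External field eval." timer between two OPAL run logs.
import re

NUM = r"([0-9]+(?:\.[0-9]*)?(?:[eE][+-]?[0-9]+)?)"

def timer_values(logfile, name="External field eval."):
    # Assumed layout: the timer name, then the CPU and wall times as the
    # first two numbers on the same line.
    pattern = re.compile(re.escape(name) + r"\D*" + NUM + r"\s+" + NUM)
    with open(logfile) as fh:
        for line in fh:
            match = pattern.search(line)
            if match:
                return float(match.group(1)), float(match.group(2))
    return None  # timer not found in this log

for log in ("run_2.2.log", "run_2.4.log"):  # hypothetical file names
    print(log, "->", timer_values(log))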

 

Thanks!

--

Chris Hall

Research Scientist | RadiaSoft

720-502-3928 x709 | chall AT radiasoft.net

radiasoft.net | sirepo.com

Ippl> CommMPI: Parent process waiting for children ...
Ippl> CommMPI: Initialization complete.
> [OPAL ASCII-art banner]
OPAL>
OPAL> This is OPAL (Object Oriented Parallel Accelerator Library) Version 2.2.0
OPAL> git rev. f9be6c676d0a5b7d07a49cc1d5ff0e5ae62060f8
OPAL>
OPAL>
OPAL> (c) PSI, http://amas.web.psi.ch
OPAL>
OPAL>
OPAL> The optimiser (former opt-Pilot) is integrated
OPAL>
OPAL> Please send cookies, goodies or other motivations (wine and beer ... )
OPAL> to the OPAL developers opal AT lists.psi.ch
OPAL>
OPAL> Time: 23:13:54 date: 16/01/2021
OPAL>
OPAL> Couldn't find startup file "/home/ac.nneveu/init.opal".
OPAL> Note: this is not mandatory for an OPAL simulation!
OPAL>
OPAL> * Reading input stream "sc_inj_C1.in".
OPAL>
OPAL> value: {EDES,P0}={1.4e-09,1.19616e-06}
OPAL>
OPAL> * **********************************************************************************
OPAL> * Selected Tracking Method == PARALLEL-T, NEW TRACK
OPAL> * **********************************************************************************

OPAL[2]> * Generation of distribution with seed = 123456789
OPAL[2]> * isn't scalable with number of particles and cores.
OPAL[3]>
OPAL[3]> ------------------------------------------------------------------------------------
OPAL[3]> READ INITIAL DISTRIBUTION FROM FILE "opal_emitted.txt"
OPAL[3]> ------------------------------------------------------------------------------------
OPAL[3]>
OPAL>
OPAL> * ************* D I S T R I B U T I O N ********************************************
OPAL> *
OPAL> * Number of particles: 50000
OPAL> *
OPAL> * Distribution type: FROMFILE
OPAL> *
OPAL> * Input file: opal_emitted.txt
OPAL> *
OPAL> * Number of energy bins = 1
OPAL> * Distribution is emitted.
OPAL> * Emission time = 6.539901e-11 [sec]
OPAL> * Time per bin = 6.539901e-11 [sec]
OPAL> * Delta t during emission = 6.539901e-13 [sec]
OPAL> *
OPAL> * ------------- THERMAL EMITTANCE MODEL --------------------------------------------
OPAL> * THERMAL EMITTANCE in NONE MODE
OPAL> * Kinetic energy added to longitudinal momentum = 0.000000e+00 [eV]
OPAL> * ----------------------------------------------------------------------------------
OPAL> *
OPAL> * **********************************************************************************
OPAL>
OPAL> * ************* B E A M ************************************************************
OPAL> * BEAM BEAM1
OPAL> * PARTICLE ELECTRON
OPAL> * CURRENT 1.870000e+04 A
OPAL> * FREQUENCY 1.870000e+08 MHz
OPAL> * CHARGE -e * 1.000000e+00
OPAL> * REST MASS 5.109990e-04 GeV
OPAL> * MOMENTUM 1.196160e-06
OPAL> * NPART 5.000000e+04
OPAL> * **********************************************************************************

OPAL>
OPAL> * ************* F I E L D S O L V E R **********************************************
OPAL> * FIELDSOLVER FS_SC
OPAL> * TYPE FFT
OPAL> * N-PROCESSORS 1
OPAL> * MX 3.200000e+01
OPAL> * MY 3.200000e+01
OPAL> * MT 3.200000e+01
OPAL> * BBOXINCR 1.000000e+00
OPAL> * GRRENSF INTEGRATED
OPAL> * XDIM serial
OPAL> * YDIM serial
OPAL> * Z(T)DIM parallel
OPAL>
OPAL> * **********************************************************************************

OPAL>
OPAL[2]> Phase space dump frequency 300000000 and statistics dump frequency 10 w.r.t. the time step.
RFCavity [2]> GUN using file rfgunb_187MHz.txt (1D dynamic); zini= 0 m; zfinal= 0.199 m;
Solenoid [2]> SOL1 using file rfgunb_solenoid.txt (1D magnetostatic); zini= -0.24 m; zfinal= 0.24 m;
RFCavity [2]> BUNCHER using file rfgunb_buncher.txt (1D dynamic); zini= -0.179066 m; zfinal= 0.179066 m;
Solenoid [2]> SOL2 using file rfgunb_solenoid.txt (1D magnetostatic); zini= -0.24 m; zfinal= 0.24 m;
RFCavity [2]> C1 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C2 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C3 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C4 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C5 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C6 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C7 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
RFCavity [2]> C8 using file L0B_9cell.txt (1D dynamic); zini= -0.659401 m; zfinal= 0.659397 m;
Error>
Error> *** User error detected by function "interp.c"
Error> interpolation error
Error> interpolation error
application called MPI_Abort(MPI_COMM_WORLD, -100) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=-100
:
system msg for write_line failure : Bad file descriptor


