[Opal] OPAL Monitor Memory Usage

  • From: Christopher Pierce <cmpierce AT>
  • To: "opal AT" <opal AT>
  • Subject: [Opal] OPAL Monitor Memory Usage
  • Date: Mon, 27 Nov 2023 16:00:00 +0000

Hey folks,

I have been using OPAL for space-charge particle tracking on a recent project and have had a great experience with it so far. Thanks for the nice code!

One thing I ran into recently is an out-of-memory issue when running with many "monitor" elements. What seems to happen is that the memory allocated while saving the particles to disk is never freed after the file is written. In my case, with 2 million particles, I could watch the memory usage rise by about 200 MB as the first monitor output was saved, by another 200 MB at the second, and so on. Since I was producing video-like output with 48 monitors, it did not take long to exceed our cluster's 4 GB-per-core limit.

I did some digging in the source code, and on line 367 of `src/Classic/Structure/LossDataSink.cpp` I saw that the vectors storing particle information in the monitor objects do get cleared. Reading online, though (I'm not a C++ expert), I found people pointing out that `clear()` may not actually release the memory. I implemented their suggestion, i.e., the following code to replace all of the `clear()` calls.


After recompiling the code, the memory growth seems to have gone away and I can run it on the cluster again without going over our RAM limits.

I wanted to pass this along to ask whether this is the right way to deal with the memory problem I was having, and also in case anyone else finds it useful.


