h5part - [H5part] MPI_IO on NFS

h5part AT lists.psi.ch

Subject: H5Part development and discussion

[H5part] MPI_IO on NFS


  • From: John Biddiscombe <biddisco AT cscs.ch>
  • To: h5part AT lists.psi.ch
  • Subject: [H5part] MPI_IO on NFS
  • Date: Thu, 20 Sep 2007 09:46:22 +0200
  • List-archive: <https://lists.web.psi.ch/pipermail/h5part/>
  • List-id: H5Part development and discussion <h5part.lists.psi.ch>

This is a little off topic, but if you'll forgive me, I have an MPI/H5Part question...

Writing in parallel to NFS-mounted drives is causing me trouble, and I am aware that NFS needs to be configured without attribute caching (the client-side 'noac' mount option, as far as I remember). Unfortunately, I'm assisting some people in commercial environments who have access to company clusters but are not at liberty to muck about with the filesystem configuration, etc.

The users are running SPH simulations in parallel on the clusters and we'd like to write in parallel from each node into a single H5Part file.
When the file is opened with H5PartOpenFile (serial), each process writes its data, but the nodes overwrite one another and we end up with a munged/truncated dataset.
When opening with H5PartOpenFileParallel, we get the expected behaviour, but unfortunately the NFS locking problem makes the system unusable.
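
For concreteness, this is roughly what our open path looks like (a sketch only: PARALLEL_IO is just an illustrative compile-time switch, and I am going from memory on the exact H5Part signatures):

    #include <mpi.h>
    #include <H5Part.h>

    /* Open the output file either serially or in parallel,
     * selected at compile time. */
    H5PartFile *open_output(const char *name, MPI_Comm comm)
    {
    #ifdef PARALLEL_IO
        /* all ranks open collectively; MPI-IO coordinates access */
        return H5PartOpenFileParallel(name, H5PART_WRITE, comm);
    #else
        /* serial open: every rank gets its own handle, and on a
         * shared mount they overwrite one another -- hence the
         * munged/truncated dataset described above */
        (void) comm;
        return H5PartOpenFile(name, H5PART_WRITE);
    #endif
    }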

Can anyone advise whether there is a mode within MPI-IO (or indeed exposed via the H5Part API) that will allow me to open the file in parallel and write in parallel, but actually have the data transferred automatically to a single IO node, with all writes performed by that one node, without my needing to change the main write code significantly? I'd like to be able to set a few compiler flags and recompile the same code for either 'true' parallel IO or not. Independent and collective IO look almost identical at the call level, so I suspect this capability already exists, but I can't seem to find it.
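
Writing this out, I wonder whether ROMIO's collective-buffering hints are what I am after: as far as I know, setting the "cb_nodes" hint to 1 routes all physical writes through a single aggregator process during collective transfers. H5Part does not seem to expose an MPI_Info argument, but the underlying HDF5 file-access property list does, so something like this sketch (hint names are ROMIO-specific, and I have not verified any of this on NFS):

    #include <mpi.h>
    #include <hdf5.h>

    /* Build a file-access property list that asks ROMIO to funnel
     * all physical writes through one aggregator process. Other
     * MPI-IO implementations may silently ignore these hints. */
    hid_t fapl_single_writer(MPI_Comm comm)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "cb_nodes", "1");            /* one aggregator      */
        MPI_Info_set(info, "romio_cb_write", "enable"); /* force collective
                                                           buffering on writes */

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, comm, info);
        MPI_Info_free(&info); /* HDF5 keeps its own copy */
        return fapl;
    }

If I understand it correctly this only takes effect for collective writes, and it means opening the file through HDF5 rather than through H5PartOpenFileParallel, which is why I am hoping someone knows of something more direct in the H5Part API.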

(I apologise if I seem confused; I am returning to this problem after some time away on other work and may be remembering things incorrectly.)

Many thanks

JB

--
John Biddiscombe, email:biddisco @ cscs.ch
http://www.cscs.ch/about/BJohn.php
CSCS, Swiss National Supercomputing Centre | Tel: +41 (91) 610.82.07
Via Cantonale, 6928 Manno, Switzerland | Fax: +41 (91) 610.82.82




