h5part - Re: [H5part] FYI: H5Part-1.3.2 has been released

Re: [H5part] FYI: H5Part-1.3.2 has been released

  • From: Kurt Stockinger <kstockinger AT>
  • To: h5part AT
  • Subject: Re: [H5part] FYI: H5Part-1.3.2 has been released
  • Date: Thu, 19 Apr 2007 18:47:07 -0700

Achim Gsell wrote:
> On Thursday 19 April 2007 20:18, John Shalf wrote:
>> On Apr 19, 2007, at 10:45 AM, Achim Gsell wrote:
>>> On Thursday 19 April 2007 17:53, John Biddiscombe wrote:
>>>> 2) Added an option to use INDEPENDENT_IO instead of
>>>> COLLECTIVE_IO. I need this when using it on a system with a
>>>> non-parallel file system; using collective I/O causes too
>>>> large a performance drop on our cluster.
>>> INDEPENDENT_IO vs. COLLECTIVE_IO in H5Part, good point. A
>>> couple of weeks ago a colleague found a bug in H5Part
>>> concerning COLLECTIVE_IO (or we are misunderstanding
>>> something completely): the H5FD_MPIO_COLLECTIVE property is
>>> added to the property list f->xfer_prop, but this property
>>> list is never used in the I/O calls. Thus we are actually
>>> using independent I/O. It's the same in version 1.0! John
>>> (Shalf), any ideas/comments about this?
>> Either it is a mistake or a misunderstanding about the
>> operation of parallel I/O with HDF5. I was under the impression
>> that if you open a file with the COLLECTIVE_IO property, that
>> the implicit mode of operation for the files is collective
>> unless you tell it otherwise. If this is not the case, then we
>> should make sure the collective property is added to all of
>> the xfer lists (but I assume it is implicit if the file was
>> opened in collective mode).
> The xfer_prop list is neither used in the open/create calls nor
> in the read/write calls!
>> Let's do some regression testing with the benchmark (Bench.c)
>> to see if there is in fact a difference or not.
> We already ran some regression tests on our XT3 system at CSCS
> with another test program and compared the results with the
> results we got from Bench.c. Our conclusion is that H5Part uses
> independent I/O. If you use collective I/O, you must set the
> right MPI hints; otherwise the performance is very, very poor.
> With Bench.c you can set the MPI hints to whatever you want
> without any impact on the I/O performance.

I've updated my local copy of H5Part.c and switched on collective I/O
for writing data sets, i.e. I'm now passing xfer_prop to H5Dwrite(). I've
also run a small benchmark with 10M particles on Jacquard using 8
processors (no additional "large block" optimization yet). The results
confirm what Achim says: independent I/O is currently performing better
than collective I/O. I will look into this more closely and see what
happens for larger files.
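For reference, the change amounts to something like the following sketch. It assumes an MPI-enabled HDF5 build; the function name write_collective and the surrounding dataset/dataspace setup are my own illustration, while f->xfer_prop follows the snippets quoted above:

```c
/* Sketch of the fix discussed above: creating the transfer property
 * list is not enough -- it must actually be passed to H5Dwrite()
 * instead of H5P_DEFAULT, otherwise HDF5 falls back to independent
 * I/O. Error checking omitted for brevity. */
#include <hdf5.h>
#include <mpi.h>

static herr_t
write_collective(hid_t dset, hid_t memspace, hid_t filespace,
                 const double *buf)
{
    /* Dataset-transfer property list requesting collective MPI-IO. */
    hid_t xfer_prop = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(xfer_prop, H5FD_MPIO_COLLECTIVE);

    /* Passing xfer_prop here (not H5P_DEFAULT) is what makes the
     * collective property take effect. */
    herr_t status = H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace,
                             filespace, xfer_prop, buf);
    H5Pclose(xfer_prop);
    return status;
}
```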


H5PartBench/run7> grep "H5Part Effective" Bench-10M-8-dual-INDEP-IO.out
H5Part Effective Data Rate = 151.021296 Megabytes/sec global and
18.877662 Megabytes/sec per task Nprocs= 8
H5Part Effective Data Rate = 148.915825 Megabytes/sec global and
18.614478 Megabytes/sec per task Nprocs= 8
H5Part Effective Data Rate = 146.372345 Megabytes/sec global and
18.296543 Megabytes/sec per task Nprocs= 8

H5PartBench/run7> grep "H5Part Effective" Bench-10M-8-dual-COLL-IO.out
H5Part Effective Data Rate = 73.353497 Megabytes/sec global and 9.169187
Megabytes/sec per task Nprocs= 8
H5Part Effective Data Rate = 70.338839 Megabytes/sec global and 8.792355
Megabytes/sec per task Nprocs= 8
H5Part Effective Data Rate = 70.400440 Megabytes/sec global and 8.800055
Megabytes/sec per task Nprocs= 8
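As a quick sanity check on the numbers above (a small script of my own, not part of Bench.c): the per-task figures are simply the global rate divided by Nprocs, and collective I/O comes out roughly 2x slower in this run:

```python
# Global data rates (MB/s) copied from the two benchmark runs above.
nprocs = 8
indep = [151.021296, 148.915825, 146.372345]
coll = [73.353497, 70.338839, 70.400440]

# Per-task rate is just global / Nprocs, matching the printed output.
per_task_indep = [round(r / nprocs, 6) for r in indep]
print(per_task_indep)  # [18.877662, 18.614478, 18.296543]

# Ratio of mean independent to mean collective rate: roughly 2x.
slowdown = (sum(indep) / len(indep)) / (sum(coll) / len(coll))
print(round(slowdown, 2))
```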

> Achim

Kurt Stockinger
Computational Research Division
Lawrence Berkeley National Laboratory
Mail Stop 50F-1650, 1 Cyclotron Road
Berkeley, California 94720, USA

Tel: +1 (510) 486 5519, Fax: +1 (510) 486 5812
email: KStockinger AT
