
h5part - Re: [H5part] H5Block

  • From: Achim Gsell <achim AT>
  • To: h5part AT
  • Subject: Re: [H5part] H5Block
  • Date: Sat, 27 Jan 2007 17:58:18 +0100
  • List-archive: <>
  • List-id: H5Part development and discussion <>

On Saturday 27 January 2007 02:02, Kurt Stockinger wrote:

> I've started a brief writeup for the H5Block that covers the
> main methods. See what you think:
> In H5Part all datasets are 1-dimensional arrays. H5Block
> stores the datasets as 3-dimensional arrays that can be scalar
> fields (one dataset) or vector fields (3 datasets). The
> methods for reading and writing scalar fields are
> H5Block3dReadScalarField and H5Block3dWriteScalarField,
> respectively. These 3-dimensional datasets can be accessed by
> multiple processors concurrently, with each processor reading
> its own part of the dataset. The assignment of parts of the
> data to the processors is defined via the structure
> H5BlockPartition. For instance, assume a 3-dimensional scalar
> field with the rank 4x5x6 is partitioned onto two processors,
> i.e. the rank of dimension i is 4, the rank of dimension j is
> 5, and the rank of dimension k is 6. The specification of
> H5BlockPartition for the processors P1 and P2 could be:
> P1: 0,3, 0,5, 0,2
> P2: 0,3, 0,5, 3,5

The rank of j is 5, so j_end := 4:

P1: 0,3, 0,4, 0,2
P2: 0,3, 0,4, 3,5

> where each of the partitions is defined as follows:
> (i-start, i-end, j-start, j-end, k-start, k-end). In this
> example the data is partitioned along dimension k. Finally,
> the partitioning of blocks to the various processors is used
> to define the field layout via H5BlockDefine3DFieldLayout.
> In other words, for reading and writing the data in parallel,
> the processor assignment needs to be explicitly specified.
> The following two methods provide information about which data
> is accessed by which processor. The method H5Block3dGetProcOf
> determines which processor reads a particular part of the
> dataset. Alternatively, the method H5Block3dGetPartitionOfProc
> returns the data partition for a particular processor.

I introduced these functions mainly for writing tests. They are
useful, and the answer is unique, only if you *write* data. When
*reading* a block we may have "ghost zones", and a node of the
mesh may then be read by more than one processor!

