[svn-r15666] Bug fix: (ID 1157)

Description:
The program would crash, complaining that MPI calls were invoked after
MPI_Finalize() had occurred. Previously, calling H5close() before
MPI_Finalize() would avoid the crash.

Solution:
It turned out that two HDF5 property objects (mpio_pl and acc_tpl)
were not closed before MPI_Finalize(). In the at_exit code, the HDF5
library attempted to close them, which also released the MPI
communicators stored in them. That was the error, since MPI had
already shut down. Adding code to close both property lists properly
before MPI_Finalize() took care of the problem.

Tested:
Only on kagiso, in parallel mode. Did not run h5committest since
kagiso would have been the machine running the parallel test, and this
part of the code is not compiled at all in non-phdf5 mode.
Author: Albert Cheng
Date:   2008-09-19 18:19:53 -05:00
parent  8b5ced23dc
commit  50f38c5e59


@@ -176,6 +176,8 @@ int main(int argc, char **argv)
 	VRFY((acc_tpl >= 0), "", H5FATAL);
 	ret = H5Pset_fapl_split(acc_tpl, meta_ext, mpio_pl, raw_ext, mpio_pl);
 	VRFY((ret >= 0), "H5Pset_fapl_split succeeded", H5FATAL);
+	ret = H5Pclose(mpio_pl);
+	VRFY((ret >= 0), "H5Pclose mpio_pl succeeded", H5FATAL);
     }else{
 	/* setup file access template */
 	acc_tpl = H5Pcreate (H5P_FILE_ACCESS);
@@ -314,6 +316,8 @@ int main(int argc, char **argv)
 	VRFY((ret >= 0), "H5Dclose succeeded", H5FATAL);
 	ret = H5Fclose(fid);
 	VRFY((ret >= 0), "H5Fclose succeeded", H5FATAL);
+	ret = H5Pclose(acc_tpl);
+	VRFY((ret >= 0), "H5Pclose succeeded", H5FATAL);

 	/* compute the read and write times */
 	MPI_Allreduce(&read_tim, &max_read_tim, 1, MPI_DOUBLE, MPI_MAX,
@@ -372,8 +376,6 @@ die_jar_jar_die:
     free(tmp);
     if (opt_correct) free(tmp2);
-    /* Close down the HDF5 library before MPI_Finalize. */
-    H5close();
     MPI_Finalize();
     return(0);
 }