Modified page buffer to split entries only where necessary -- specifically,
when handling an I/O request on a metadata entry that has been sub-allocated
from a larger file space allocation (i.e. fixed or extensible array) and
that crosses at least one page boundary.

This required modifying the metadata cache to provide the type of the
metadata cache entry in the current I/O request.  For now, this is done
with a function call.  Once we are sure this works, it may be appropriate
to convert this to a macro, or to add a flags parameter to the H5F block
read/write calls.

Also updated the metadata cache to report whether a read request is
speculative -- again via a function call.  This allowed me to remove
the last-address static variable in H5PB_read(), a removal that is
necessary to support multiple files opened in VFD SWMR mode.

Also re-wrote the H5PB_remove_entries() call to handle release
of large metadata file space allocations that have been sub-allocated
into multiple metadata entries.  Also modified the call to
H5PB_remove_entries() in H5MF__xfree_impl() to invoke it whenever
the page buffer is enabled and the size of the space to be freed is
of page size or larger.

Tested serial / debug on charis and Jelly.

Found a bug in H5MF__xfree_impl(), in which the call to H5PB_remove_entries()
is skipped due to HGOTO_DONE calls earlier in the function.  While the
obvious fix is to move the call earlier in the function, it is best to
consult with Vailin first, as there is much going on there and we should
avoid making the situation worse.  If nothing else, there are some
error management issues.
This commit is contained in:
mainzer, 2020-04-29 11:34:46 -05:00
committed by David Young
parent d5ad503cfe, commit 18dab4e576
11 changed files with 2902 additions and 326 deletions


@@ -670,19 +670,19 @@ if ( ( (entry_ptr) == NULL ) || \
 #define H5PB__UPDATE_STATS_FOR_ACCESS(pb_ptr, type, size) \
 { \
-    int i; \
+    int ii; \
 \
     HDassert(pb_ptr); \
     HDassert((pb_ptr)->magic == H5PB__H5PB_T_MAGIC); \
 \
     if ( H5FD_MEM_DRAW == (type) ) { \
-        i = H5PB__STATS_RD; \
+        ii = H5PB__STATS_RD; \
     } else if ( (size) > (pb_ptr)->page_size ) { \
-        i = H5PB__STATS_MPMDE; \
+        ii = H5PB__STATS_MPMDE; \
     } else { \
-        i = H5PB__STATS_MD; \
+        ii = H5PB__STATS_MD; \
     } \
-    ((pb_ptr)->accesses[i])++; \
+    ((pb_ptr)->accesses[ii])++; \
 } /* H5PB__UPDATE_STATS_FOR_ACCESS */
@@ -812,6 +812,20 @@ if ( ( (entry_ptr) == NULL ) || \
     ((pb_ptr)->loads[i])++; \
 } /* H5PB__UPDATE_STATS_FOR_LOAD */
 
+#define H5PB__UPDATE_STATS_FOR_READ_SPLIT(pb_ptr) \
+{ \
+    HDassert(pb_ptr); \
+    HDassert((pb_ptr)->magic == H5PB__H5PB_T_MAGIC); \
+    ((pb_ptr)->md_read_splits)++; \
+} /* H5PB__UPDATE_STATS_FOR_READ_SPLIT */
+
+#define H5PB__UPDATE_STATS_FOR_WRITE_SPLIT(pb_ptr) \
+{ \
+    HDassert(pb_ptr); \
+    HDassert((pb_ptr)->magic == H5PB__H5PB_T_MAGIC); \
+    ((pb_ptr)->md_write_splits)++; \
+} /* H5PB__UPDATE_STATS_FOR_WRITE_SPLIT */
+
 #else /* H5PB__COLLECT_PAGE_BUFFER_STATS */
 
 #define H5PB__UPDATE_PB_HIT_RATE_STATS(pb_ptr, hit, is_metadata, is_mpmde)
@@ -834,6 +848,8 @@ if ( ( (entry_ptr) == NULL ) || \
 #define H5PB__UPDATE_STATS_FOR_CLEAR(pb_ptr, entry_ptr)
 #define H5PB__UPDATE_STATS_FOR_INSERTION(pb_ptr, entry_ptr)
 #define H5PB__UPDATE_STATS_FOR_LOAD(pb_ptr, entry_ptr)
+#define H5PB__UPDATE_STATS_FOR_READ_SPLIT(pb_ptr)
+#define H5PB__UPDATE_STATS_FOR_WRITE_SPLIT(pb_ptr)
 #endif /* H5PB__COLLECT_PAGE_BUFFER_STATS */