Name: mvapich2-doc
Version: 2.2
Release: 15.1
Group: Development/Libraries/Parallel
Size: 1524115
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Thu May 2 22:36:34 2019
Build host: armbuild17
Source RPM: mvapich2-2.2-15.1.src.rpm
Packager: http://bugs.opensuse.org
Url: http://mvapich.cse.ohio-state.edu/overview/mvapich2/
Summary: OSU MVAPICH2 MPI package - Documentation
This is an MPI-3 implementation which includes all MPI-1 and MPI-2 features. It is based on MPICH2 and MVICH. This package contains the documentation.
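A minimal usage sketch (not part of this package): a hello-world MPI program in C, built with the mpicc wrapper shipped by the runtime packages. The file name hello.c and the launch command below are illustrative assumptions only.

    /* hello.c - print the rank and size of MPI_COMM_WORLD */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

Assuming a working installation, this would typically be compiled and run with something like "mpicc hello.c -o hello" followed by "mpiexec -np 4 ./hello".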
BSD-3-Clause
* Thu May 02 2019 Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
- Add mvapich2-fix-double-free.patch to fix a segfault
when running on a machine with no RDMA hardware (bsc#1133797)
* Wed Mar 20 2019 Ana Guerrero Lopez <aguerrero@suse.com>
- Add patch to remove obsolete GCC check (bnc#1129421). It also patches
autogen.sh to get the autotools working in SLE12SP4.
* 0001-Drop-GCC-check.patch
- Force a re-run of autotools to properly generate the files after
patching src/binding/cxx/buildiface
* Sun Nov 18 2018 eich@suse.com
- Add macro _hpc_mvapich2_modules for modules support (bsc#1116458).
* Mon Sep 10 2018 nmoreychaisemartin@suse.com
- Remove bashism in postun scriptlet
* Wed Sep 05 2018 nmoreychaisemartin@suse.com
- Fix handling of mpi-selector during updates (bsc#1098653)
* Sun Aug 19 2018 eich@suse.com
- macros.hpc-mvapich2:
replace %%compiler_family by %%hpc_compiler_family
* Mon Jul 16 2018 msuchanek@suse.com
- Use sched_yield instead of pthread_yield (boo#1102421).
- drop mvapich2-pthread_yield.patch
* Mon Jun 18 2018 nmoreychaisemartin@suse.com
- Add missing bsc and fate references to changelog
* Tue Jun 12 2018 nmoreychaisemartin@suse.com
- Disable HPC builds for SLE12 (fate#323655)
* Sun Mar 25 2018 kasimir_@outlook.de
- Change mvapich2-arm-support.patch to provide missing functions for
armv6hl
* Fri Feb 09 2018 cgoll@suse.com
- Fix summary in module files (bnc#1080259)
* Tue Jan 30 2018 eich@suse.com
- Use a macro in mpivars.(c)sh to be independent of changes to the module
setup for the compiler (boo#1078364).
* Fri Jan 05 2018 eich@suse.com
- Switch from gcc6 to gcc7 as additional compiler flavor for HPC on SLES.
- Fix library package requires - use HPC macro (boo#1074890).
* Fri Oct 06 2017 nmoreychaisemartin@suse.com
- Add conflicts between the macros-devel packages
* Thu Oct 05 2017 nmoreychaisemartin@suse.com
- Add BuildRequires on libibmad-devel for older releases (SLE <= 12.2, Leap <= 42.2)
* Tue Sep 12 2017 eich@suse.com
- Add HPC specific build targets using environment modules
(FATE#321712).
* Tue Sep 12 2017 nmoreychaisemartin@suse.com
- Drop unnecessary dependency on xorg-x11-devel
* Mon Sep 11 2017 nmoreychaisemartin@suse.com
- Only require verbs libraries for verbs builds.
  libibverbs-devel causes a SEGV when run in a chroot using the
  psm or psm2 conduits
- Add testsuite packages for all build flavours
* Thu Jul 13 2017 nmoreychaisemartin@suse.com
- Add LD_LIBRARY_PATH to mpivars.sh and mpivars.csh
* Thu Jul 13 2017 nmoreychaisemartin@suse.com
- Disable rpath in pkgconfig files
* Wed Jul 05 2017 nmoreychaisemartin@suse.com
- Remove redundant configure options already passed by %configure
* Mon Jun 26 2017 nmoreychaisemartin@suse.com
- Change install dir to allow multiple flavors to be installed
at the same time (bsc#934090)
- Fix bsc#1045955
- Fix mvapich2-psm package to use libpsm (TrueScale)
- Add mvapich2-psm2 package using libpsm2 (OmniPath)
* Mon Jun 26 2017 nmoreychaisemartin@suse.com
- Use _multibuild to build the various mvapich2-flavours
* Fri Jun 23 2017 nmoreychaisemartin@suse.com
- Replace dependency on libibmad-devel with infiniband-diags-devel
* Wed Jun 14 2017 nmoreychaisemartin@suse.com
- Have mvapich2 and mvapich2-psm conflict with each other
- Cleanup spec file
- Remove mvapich2-testsuite RPM
* Thu Jun 08 2017 nmoreychaisemartin@suse.com
- Re-enable ARM compilation
- Rename and clean up mvapich-s390_get_cycles.patch to
  mvapich2-s390_get_cycles.patch for consistency
- Cleanup mvapich2-pthread_yield.patch
- Add mvapich2-arm-support.patch to provide missing functions for
armv7hl and aarch64
* Thu Jun 08 2017 nmoreychaisemartin@suse.com
- Remove versioned dependencies on libibumad, libibverbs and librdmacm
* Tue May 16 2017 nmoreychaisemartin@suse.com
- Fix mvapich2-testsuite packaging
- Disable build on armv7
* Wed Mar 29 2017 pth@suse.de
- Make dependencies on libraries that now come from rdma-core versioned.
* Tue Nov 29 2016 pth@suse.de
- Create environment module (bsc#1004628).
* Wed Nov 23 2016 pth@suse.de
- Fix URL.
- Update to mvapich 2.2 GA. Changes since rc1:
MVAPICH2 2.2 (09/07/2016)
* Features and Enhancements (since 2.2rc2):
- Single node collective tuning for Bridges@PSC, Stampede@TACC and other
architectures
- Enable PSM builds when both PSM and PSM2 libraries are present
- Add support for HCAs that return result of atomics in big endian notation
- Establish loopback connections by default if HCA supports atomics
* Bug Fixes (since 2.2rc2):
- Fix minor error in use of communicator object in collectives
- Fix missing u_int64_t declaration with PGI compilers
- Fix memory leak in RMA rendezvous code path
MVAPICH2 2.2rc2 (08/08/2016)
* Features and Enhancements (since 2.2rc1):
- Enhanced performance for MPI_Comm_split through new bitonic algorithm
- Enable graceful fallback to Shared Memory if LiMIC2 or CMA transfer fails
- Enable support for multiple MPI initializations
- Unify process affinity support in Gen2, PSM and PSM2 channels
- Remove verbs dependency when building the PSM and PSM2 channels
- Allow processes to request MPI_THREAD_MULTIPLE when socket or NUMA node
level affinity is specified
- Point-to-point and collective performance optimization for Intel Knights
Landing
- Automatic detection and tuning for InfiniBand EDR HCAs
- Warn user to reconfigure library if rank type is not large enough to
represent all ranks in job
- Collective tuning for Opal@LLNL, Bridges@PSC, and Stampede-1.5@TACC
- Tuning and architecture detection for Intel Broadwell processors
- Add ability to avoid using --enable-new-dtags with ld
- Add LIBTVMPICH specific CFLAGS and LDFLAGS
* Bug Fixes (since 2.2rc1):
- Disable optimization that removes use of calloc in ptmalloc hook
detection code
- Fix weak alias typos (allows successful compilation with CLANG compiler)
- Fix issues in PSM large message gather operations
- Enhance error checking in collective tuning code
- Fix issues with UD based communication in RoCE mode
- Fix issues with PMI2 support in singleton mode
- Fix default binding bug in hydra launcher
- Fix issues with Checkpoint Restart when launched with mpirun_rsh
- Fix fortran binding issues with Intel 2016 compilers
- Fix issues with socket/NUMA node level binding
- Disable atomics when using Connect-IB with RDMA_CM
- Fix hang in MPI_Finalize when using hybrid channel
- Fix memory leaks
* Tue Nov 15 2016 pth@suse.de
- Update to version 2.2rc1 (fate#319240). Changes since 2.1:
MVAPICH2 2.2rc1 (03/29/2016)
* Features and Enhancements (since 2.2b):
- Support for OpenPower architecture
- Optimized inter-node and intra-node communication
- Support for Intel Omni-Path architecture
- Thanks to Intel for contributing the patch
- Introduction of a new PSM2 channel for Omni-Path
- Support for RoCEv2
- Architecture detection for PSC Bridges system with Omni-Path
- Enhanced startup performance and reduced memory footprint for storing
InfiniBand end-point information with SLURM
- Support for shared memory based PMI operations
- Availability of an updated patch from the MVAPICH project website
with this support for SLURM installations
- Optimized pt-to-pt and collective tuning for Chameleon InfiniBand
systems at TACC/UoC
- Enable affinity by default for TrueScale(PSM) and Omni-Path(PSM2)
channels
- Enhanced tuning for shared-memory based MPI_Bcast
- Enhanced debugging support and error messages
- Update to hwloc version 1.11.2
* Bug Fixes (since 2.2b):
- Fix issue in some of the internal algorithms used for MPI_Bcast,
MPI_Alltoall and MPI_Reduce
- Fix hang in one of the internal algorithms used for MPI_Scatter
- Thanks to Ivan Raikov@Stanford for reporting this issue
- Fix issue with rdma_connect operation
- Fix issue with Dynamic Process Management feature
- Fix issue with de-allocating InfiniBand resources in blocking mode
- Fix build errors caused by improper compile-time guards
- Thanks to Adam Moody@LLNL for the report
- Fix finalize hang when running in hybrid or UD-only mode
- Thanks to Jerome Vienne@TACC for reporting this issue
- Fix issue in MPI_Win_flush operation
- Thanks to Nenad Vukicevic for reporting this issue
- Fix out of memory issues with non-blocking collectives code
- Thanks to Phanisri Pradeep Pratapa and Fang Liu@GaTech for
reporting this issue
- Fix fall-through bug in external32 pack
- Thanks to Adam Moody@LLNL for the report and patch
- Fix issue with on-demand connection establishment and blocking mode
- Thanks to Maksym Planeta@TU Dresden for the report
- Fix memory leaks in hardware multicast based broadcast code
- Fix memory leaks in TrueScale(PSM) channel
- Fix compilation warnings
MVAPICH2 2.2b (11/12/2015)
* Features and Enhancements (since 2.2a):
- Enhanced performance for small messages
- Enhanced startup performance with SLURM
- Support for PMIX_Iallgather and PMIX_Ifence
- Support to enable affinity with asynchronous progress thread
- Enhanced support for MPIT based performance variables
- Tuned VBUF size for performance
- Improved startup performance for QLogic PSM-CH3 channel
- Thanks to Maksym Planeta@TU Dresden for the patch
* Bug Fixes (since 2.2a):
- Fix issue with MPI_Get_count in QLogic PSM-CH3 channel with very large
messages (>2GB)
- Fix issues with shared memory collectives and checkpoint-restart
- Fix hang with checkpoint-restart
- Fix issue with unlinking shared memory files
- Fix memory leak with MPIT
- Fix minor typos and usage of inline and static keywords
- Thanks to Maksym Planeta@TU Dresden for the patch and suggestions
- Fix missing MPIDI_FUNC_EXIT
- Thanks to Maksym Planeta@TU Dresden for the patch
- Remove unused code
- Thanks to Maksym Planeta@TU Dresden for the patch
- Continue with warning if user asks to enable XRC when the system does not
support XRC
MVAPICH2 2.2a (08/17/2015)
* Features and Enhancements (since 2.1 GA):
- Based on MPICH 3.1.4
- Support for backing on-demand UD CM information with shared memory
for minimizing memory footprint
- Reorganized HCA-aware process mapping
- Dynamic identification of maximum read/atomic operations supported by HCA
- Enabling support for intra-node communications in RoCE mode without
shared memory
- Updated to hwloc 1.11.0
- Updated to sm_20 kernel optimizations for MPI Datatypes
- Automatic detection and tuning for 24-core Haswell architecture
* Bug Fixes (since 2.1 GA):
- Fix for error with multi-vbuf design for GPU based communication
- Fix bugs with hybrid UD/RC/XRC communications
- Fix for MPICH putfence/getfence for large messages
- Fix for error in collective tuning framework
- Fix validation failure with Alltoall with IN_PLACE option
- Thanks to Mahidhar Tatineni @SDSC for the report
- Fix bug with MPI_Reduce with IN_PLACE option
- Thanks to Markus Geimer for the report
- Fix for compilation failures with multicast disabled
- Thanks to Devesh Sharma @Emulex for the report
- Fix bug with MPI_Bcast
- Fix IPC selection for shared GPU mode systems
- Fix for build time warnings and memory leaks
- Fix issues with Dynamic Process Management
- Thanks to Neil Spruit for the report
- Fix bug in architecture detection code
- Thanks to Adam Moody @LLNL for the report
* Fri Oct 14 2016 pth@suse.de
- Create and include modules file for Mvapich2 (bsc#1004628).
- Remove mvapich2-fix-implicit-decl.patch as the fix is upstream.
- Adapt spec file to the changed micro benchmark install directory.
* Sun Jul 24 2016 p.drouand@gmail.com
- Update to version 2.1
* Features and Enhancements (since 2.1rc2):
- Tuning for EDR adapters
- Optimization of collectives for SDSC Comet system
- Based on MPICH-3.1.4
- Enhanced startup performance with mpirun_rsh
- Checkpoint-Restart Support with DMTCP (Distributed MultiThreaded
CheckPointing)
- Thanks to the DMTCP project team (http://dmtcp.sourceforge.net/)
- Support for handling very large messages in RMA
- Optimize size of buffer requested for control messages in large message
transfer
- Enhanced automatic detection of atomic support
- Optimized collectives (bcast, reduce, and allreduce) for 4K processes
- Introduce support to sleep for user specified period before aborting
- Disable PSM from setting CPU affinity
- Install PSM error handler to print more verbose error messages
- Introduce retry mechanism to perform psm_ep_open in PSM channel
* Bug-Fixes (since 2.1rc2):
- Relocate reading environment variables in PSM
- Fix issue with automatic process mapping
- Fix issue with checkpoint restart when full path is not given
- Fix issue with Dynamic Process Management
- Fix issue in CUDA IPC code path
- Fix corner case in CMA runtime detection
* Features and Enhancements (since 2.1rc1):
- Based on MPICH-3.1.4
- Enhanced startup performance with mpirun_rsh
- Checkpoint-Restart Support with DMTCP (Distributed MultiThreaded
CheckPointing)
- Support for handling very large messages in RMA
- Optimize size of buffer requested for control messages in large message
transfer
- Enhanced automatic detection of atomic support
- Optimized collectives (bcast, reduce, and allreduce) for 4K processes
- Introduce support to sleep for user specified period before aborting
- Disable PSM from setting CPU affinity
- Install PSM error handler to print more verbose error messages
- Introduce retry mechanism to perform psm_ep_open in PSM channel
* Bug-Fixes (since 2.1rc1):
- Fix failures with shared memory collectives with checkpoint-restart
- Fix failures with checkpoint-restart when using internal communication
buffers of different size
- Fix undeclared variable error when --disable-cxx is specified with
configure
- Fix segfault seen during connect/accept with dynamic processes
- Fix errors with large messages pack/unpack operations in PSM channel
- Fix for bcast collective tuning
- Fix assertion errors in one-sided put operations in PSM channel
- Fix issue with code getting stuck in infinite loop inside ptmalloc
- Fix assertion error in shared memory large message transfers
- Fix compilation warnings
* Features and Enhancements (since 2.1a):
- Based on MPICH-3.1.3
- Flexibility to use internal communication buffers of different size for
improved performance and memory footprint
- Improve communication performance by removing locks from critical path
- Enhanced communication performance for small/medium message sizes
- Support for linking Intel Trace Analyzer and Collector
- Increase the number of connect retry attempts with RDMA_CM
- Automatic detection and tuning for Haswell architecture
* Bug-Fixes (since 2.1a):
- Fix automatic detection of support for atomics
- Fix issue with void pointer arithmetic with PGI
- Fix deadlock in ctxidup MPICH test in PSM channel
- Fix compile warnings
* Features and Enhancements (since 2.0):
- Based on MPICH-3.1.2
- Support for PMI-2 based startup with SLURM
- Enhanced startup performance for Gen2/UD-Hybrid channel
- GPU support for MPI_Scan and MPI_Exscan collective operations
- Optimize creation of 2-level communicator
- Collective optimization for PSM-CH3 channel
- Tuning for IvyBridge architecture
- Add -export-all option to mpirun_rsh
- Support for additional MPI-T performance variables (PVARs)
in the CH3 channel
- Link with libstdc++ when building with GPU support
(required by CUDA 6.5)
* Bug-Fixes (since 2.0):
- Fix error in large message (>2GB) transfers in CMA code path
- Fix memory leaks in OFA-IB-CH3 and OFA-IB-Nemesis channels
- Fix issues with optimizations for broadcast and reduce collectives
- Fix hang at finalize with Gen2-Hybrid/UD channel
- Fix issues for collectives with non power-of-two process counts
- Make ring startup use HCA selected by user
- Increase counter length for shared-memory collectives
- Use download URL as source
- Some other minor improvements
- Add mvapich2-fix-implicit-decl.patch
/usr/share/doc/mvapich2 /usr/share/doc/mvapich2/install.pdf /usr/share/doc/mvapich2/logging.pdf /usr/share/doc/mvapich2/user.pdf /usr/share/doc/mvapich2/www1 /usr/share/doc/mvapich2/www1/index.htm /usr/share/doc/mvapich2/www1/mpicc.html /usr/share/doc/mvapich2/www1/mpicxx.html /usr/share/doc/mvapich2/www1/mpiexec.html /usr/share/doc/mvapich2/www1/mpif77.html /usr/share/doc/mvapich2/www1/mpifort.html /usr/share/doc/mvapich2/www3 /usr/share/doc/mvapich2/www3/MPIX_Comm_agree.html /usr/share/doc/mvapich2/www3/MPIX_Comm_failure_ack.html /usr/share/doc/mvapich2/www3/MPIX_Comm_failure_get_acked.html /usr/share/doc/mvapich2/www3/MPIX_Comm_revoke.html /usr/share/doc/mvapich2/www3/MPIX_Comm_shrink.html /usr/share/doc/mvapich2/www3/MPI_Abort.html /usr/share/doc/mvapich2/www3/MPI_Accumulate.html /usr/share/doc/mvapich2/www3/MPI_Add_error_class.html /usr/share/doc/mvapich2/www3/MPI_Add_error_code.html /usr/share/doc/mvapich2/www3/MPI_Add_error_string.html /usr/share/doc/mvapich2/www3/MPI_Address.html /usr/share/doc/mvapich2/www3/MPI_Allgather.html /usr/share/doc/mvapich2/www3/MPI_Allgatherv.html /usr/share/doc/mvapich2/www3/MPI_Alloc_mem.html /usr/share/doc/mvapich2/www3/MPI_Allreduce.html /usr/share/doc/mvapich2/www3/MPI_Alltoall.html /usr/share/doc/mvapich2/www3/MPI_Alltoallv.html /usr/share/doc/mvapich2/www3/MPI_Alltoallw.html /usr/share/doc/mvapich2/www3/MPI_Attr_delete.html /usr/share/doc/mvapich2/www3/MPI_Attr_get.html /usr/share/doc/mvapich2/www3/MPI_Attr_put.html /usr/share/doc/mvapich2/www3/MPI_Barrier.html /usr/share/doc/mvapich2/www3/MPI_Bcast.html /usr/share/doc/mvapich2/www3/MPI_Bsend.html /usr/share/doc/mvapich2/www3/MPI_Bsend_init.html /usr/share/doc/mvapich2/www3/MPI_Buffer_attach.html /usr/share/doc/mvapich2/www3/MPI_Buffer_detach.html /usr/share/doc/mvapich2/www3/MPI_Cancel.html /usr/share/doc/mvapich2/www3/MPI_Cart_coords.html /usr/share/doc/mvapich2/www3/MPI_Cart_create.html /usr/share/doc/mvapich2/www3/MPI_Cart_get.html /usr/share/doc/mvapich2/www3/MPI_Cart_map.html /usr/share/doc/mvapich2/www3/MPI_Cart_rank.html /usr/share/doc/mvapich2/www3/MPI_Cart_shift.html /usr/share/doc/mvapich2/www3/MPI_Cart_sub.html /usr/share/doc/mvapich2/www3/MPI_Cartdim_get.html /usr/share/doc/mvapich2/www3/MPI_Close_port.html /usr/share/doc/mvapich2/www3/MPI_Comm_accept.html /usr/share/doc/mvapich2/www3/MPI_Comm_call_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Comm_compare.html /usr/share/doc/mvapich2/www3/MPI_Comm_connect.html /usr/share/doc/mvapich2/www3/MPI_Comm_create.html /usr/share/doc/mvapich2/www3/MPI_Comm_create_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Comm_create_group.html /usr/share/doc/mvapich2/www3/MPI_Comm_create_keyval.html /usr/share/doc/mvapich2/www3/MPI_Comm_delete_attr.html /usr/share/doc/mvapich2/www3/MPI_Comm_disconnect.html /usr/share/doc/mvapich2/www3/MPI_Comm_dup.html /usr/share/doc/mvapich2/www3/MPI_Comm_dup_with_info.html /usr/share/doc/mvapich2/www3/MPI_Comm_free.html /usr/share/doc/mvapich2/www3/MPI_Comm_free_keyval.html /usr/share/doc/mvapich2/www3/MPI_Comm_get_attr.html /usr/share/doc/mvapich2/www3/MPI_Comm_get_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Comm_get_info.html /usr/share/doc/mvapich2/www3/MPI_Comm_get_name.html /usr/share/doc/mvapich2/www3/MPI_Comm_get_parent.html /usr/share/doc/mvapich2/www3/MPI_Comm_group.html /usr/share/doc/mvapich2/www3/MPI_Comm_idup.html /usr/share/doc/mvapich2/www3/MPI_Comm_join.html /usr/share/doc/mvapich2/www3/MPI_Comm_rank.html /usr/share/doc/mvapich2/www3/MPI_Comm_remote_group.html 
/usr/share/doc/mvapich2/www3/MPI_Comm_remote_size.html /usr/share/doc/mvapich2/www3/MPI_Comm_set_attr.html /usr/share/doc/mvapich2/www3/MPI_Comm_set_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Comm_set_info.html /usr/share/doc/mvapich2/www3/MPI_Comm_set_name.html /usr/share/doc/mvapich2/www3/MPI_Comm_size.html /usr/share/doc/mvapich2/www3/MPI_Comm_spawn.html /usr/share/doc/mvapich2/www3/MPI_Comm_spawn_multiple.html /usr/share/doc/mvapich2/www3/MPI_Comm_split.html /usr/share/doc/mvapich2/www3/MPI_Comm_split_type.html /usr/share/doc/mvapich2/www3/MPI_Comm_test_inter.html /usr/share/doc/mvapich2/www3/MPI_Compare_and_swap.html /usr/share/doc/mvapich2/www3/MPI_Dims_create.html /usr/share/doc/mvapich2/www3/MPI_Dist_graph_create.html /usr/share/doc/mvapich2/www3/MPI_Dist_graph_create_adjacent.html /usr/share/doc/mvapich2/www3/MPI_Dist_graph_neighbors.html /usr/share/doc/mvapich2/www3/MPI_Dist_graph_neighbors_count.html /usr/share/doc/mvapich2/www3/MPI_Errhandler_create.html /usr/share/doc/mvapich2/www3/MPI_Errhandler_free.html /usr/share/doc/mvapich2/www3/MPI_Errhandler_get.html /usr/share/doc/mvapich2/www3/MPI_Errhandler_set.html /usr/share/doc/mvapich2/www3/MPI_Error_class.html /usr/share/doc/mvapich2/www3/MPI_Error_string.html /usr/share/doc/mvapich2/www3/MPI_Exscan.html /usr/share/doc/mvapich2/www3/MPI_Fetch_and_op.html /usr/share/doc/mvapich2/www3/MPI_File_c2f.html /usr/share/doc/mvapich2/www3/MPI_File_call_errhandler.html /usr/share/doc/mvapich2/www3/MPI_File_close.html /usr/share/doc/mvapich2/www3/MPI_File_create_errhandler.html /usr/share/doc/mvapich2/www3/MPI_File_delete.html /usr/share/doc/mvapich2/www3/MPI_File_f2c.html /usr/share/doc/mvapich2/www3/MPI_File_get_amode.html /usr/share/doc/mvapich2/www3/MPI_File_get_atomicity.html /usr/share/doc/mvapich2/www3/MPI_File_get_byte_offset.html /usr/share/doc/mvapich2/www3/MPI_File_get_errhandler.html /usr/share/doc/mvapich2/www3/MPI_File_get_group.html /usr/share/doc/mvapich2/www3/MPI_File_get_info.html /usr/share/doc/mvapich2/www3/MPI_File_get_position.html /usr/share/doc/mvapich2/www3/MPI_File_get_position_shared.html /usr/share/doc/mvapich2/www3/MPI_File_get_size.html /usr/share/doc/mvapich2/www3/MPI_File_get_type_extent.html /usr/share/doc/mvapich2/www3/MPI_File_get_view.html /usr/share/doc/mvapich2/www3/MPI_File_iread.html /usr/share/doc/mvapich2/www3/MPI_File_iread_at.html /usr/share/doc/mvapich2/www3/MPI_File_iread_shared.html /usr/share/doc/mvapich2/www3/MPI_File_iwrite.html /usr/share/doc/mvapich2/www3/MPI_File_iwrite_at.html /usr/share/doc/mvapich2/www3/MPI_File_iwrite_shared.html /usr/share/doc/mvapich2/www3/MPI_File_open.html /usr/share/doc/mvapich2/www3/MPI_File_preallocate.html /usr/share/doc/mvapich2/www3/MPI_File_read.html /usr/share/doc/mvapich2/www3/MPI_File_read_all.html /usr/share/doc/mvapich2/www3/MPI_File_read_all_begin.html /usr/share/doc/mvapich2/www3/MPI_File_read_all_end.html /usr/share/doc/mvapich2/www3/MPI_File_read_at.html /usr/share/doc/mvapich2/www3/MPI_File_read_at_all.html /usr/share/doc/mvapich2/www3/MPI_File_read_at_all_begin.html /usr/share/doc/mvapich2/www3/MPI_File_read_at_all_end.html /usr/share/doc/mvapich2/www3/MPI_File_read_ordered.html /usr/share/doc/mvapich2/www3/MPI_File_read_ordered_begin.html /usr/share/doc/mvapich2/www3/MPI_File_read_ordered_end.html /usr/share/doc/mvapich2/www3/MPI_File_read_shared.html /usr/share/doc/mvapich2/www3/MPI_File_seek.html /usr/share/doc/mvapich2/www3/MPI_File_seek_shared.html /usr/share/doc/mvapich2/www3/MPI_File_set_atomicity.html 
/usr/share/doc/mvapich2/www3/MPI_File_set_errhandler.html /usr/share/doc/mvapich2/www3/MPI_File_set_info.html /usr/share/doc/mvapich2/www3/MPI_File_set_size.html /usr/share/doc/mvapich2/www3/MPI_File_set_view.html /usr/share/doc/mvapich2/www3/MPI_File_sync.html /usr/share/doc/mvapich2/www3/MPI_File_write.html /usr/share/doc/mvapich2/www3/MPI_File_write_all.html /usr/share/doc/mvapich2/www3/MPI_File_write_all_begin.html /usr/share/doc/mvapich2/www3/MPI_File_write_all_end.html /usr/share/doc/mvapich2/www3/MPI_File_write_at.html /usr/share/doc/mvapich2/www3/MPI_File_write_at_all.html /usr/share/doc/mvapich2/www3/MPI_File_write_at_all_begin.html /usr/share/doc/mvapich2/www3/MPI_File_write_at_all_end.html /usr/share/doc/mvapich2/www3/MPI_File_write_ordered.html /usr/share/doc/mvapich2/www3/MPI_File_write_ordered_begin.html /usr/share/doc/mvapich2/www3/MPI_File_write_ordered_end.html /usr/share/doc/mvapich2/www3/MPI_File_write_shared.html /usr/share/doc/mvapich2/www3/MPI_Finalize.html /usr/share/doc/mvapich2/www3/MPI_Finalized.html /usr/share/doc/mvapich2/www3/MPI_Free_mem.html /usr/share/doc/mvapich2/www3/MPI_Gather.html /usr/share/doc/mvapich2/www3/MPI_Gatherv.html /usr/share/doc/mvapich2/www3/MPI_Get.html /usr/share/doc/mvapich2/www3/MPI_Get_accumulate.html /usr/share/doc/mvapich2/www3/MPI_Get_address.html /usr/share/doc/mvapich2/www3/MPI_Get_count.html /usr/share/doc/mvapich2/www3/MPI_Get_elements.html /usr/share/doc/mvapich2/www3/MPI_Get_elements_x.html /usr/share/doc/mvapich2/www3/MPI_Get_library_version.html /usr/share/doc/mvapich2/www3/MPI_Get_processor_name.html /usr/share/doc/mvapich2/www3/MPI_Get_version.html /usr/share/doc/mvapich2/www3/MPI_Graph_create.html /usr/share/doc/mvapich2/www3/MPI_Graph_get.html /usr/share/doc/mvapich2/www3/MPI_Graph_map.html /usr/share/doc/mvapich2/www3/MPI_Graph_neighbors.html /usr/share/doc/mvapich2/www3/MPI_Graph_neighbors_count.html /usr/share/doc/mvapich2/www3/MPI_Graphdims_get.html /usr/share/doc/mvapich2/www3/MPI_Grequest_complete.html /usr/share/doc/mvapich2/www3/MPI_Grequest_start.html /usr/share/doc/mvapich2/www3/MPI_Group_compare.html /usr/share/doc/mvapich2/www3/MPI_Group_difference.html /usr/share/doc/mvapich2/www3/MPI_Group_excl.html /usr/share/doc/mvapich2/www3/MPI_Group_free.html /usr/share/doc/mvapich2/www3/MPI_Group_incl.html /usr/share/doc/mvapich2/www3/MPI_Group_intersection.html /usr/share/doc/mvapich2/www3/MPI_Group_range_excl.html /usr/share/doc/mvapich2/www3/MPI_Group_range_incl.html /usr/share/doc/mvapich2/www3/MPI_Group_rank.html /usr/share/doc/mvapich2/www3/MPI_Group_size.html /usr/share/doc/mvapich2/www3/MPI_Group_translate_ranks.html /usr/share/doc/mvapich2/www3/MPI_Group_union.html /usr/share/doc/mvapich2/www3/MPI_Iallgather.html /usr/share/doc/mvapich2/www3/MPI_Iallgatherv.html /usr/share/doc/mvapich2/www3/MPI_Iallreduce.html /usr/share/doc/mvapich2/www3/MPI_Ialltoall.html /usr/share/doc/mvapich2/www3/MPI_Ialltoallv.html /usr/share/doc/mvapich2/www3/MPI_Ialltoallw.html /usr/share/doc/mvapich2/www3/MPI_Ibarrier.html /usr/share/doc/mvapich2/www3/MPI_Ibcast.html /usr/share/doc/mvapich2/www3/MPI_Ibsend.html /usr/share/doc/mvapich2/www3/MPI_Iexscan.html /usr/share/doc/mvapich2/www3/MPI_Igather.html /usr/share/doc/mvapich2/www3/MPI_Igatherv.html /usr/share/doc/mvapich2/www3/MPI_Improbe.html /usr/share/doc/mvapich2/www3/MPI_Imrecv.html /usr/share/doc/mvapich2/www3/MPI_Ineighbor_allgather.html /usr/share/doc/mvapich2/www3/MPI_Ineighbor_allgatherv.html /usr/share/doc/mvapich2/www3/MPI_Ineighbor_alltoall.html 
/usr/share/doc/mvapich2/www3/MPI_Ineighbor_alltoallv.html /usr/share/doc/mvapich2/www3/MPI_Ineighbor_alltoallw.html /usr/share/doc/mvapich2/www3/MPI_Info_create.html /usr/share/doc/mvapich2/www3/MPI_Info_delete.html /usr/share/doc/mvapich2/www3/MPI_Info_dup.html /usr/share/doc/mvapich2/www3/MPI_Info_free.html /usr/share/doc/mvapich2/www3/MPI_Info_get.html /usr/share/doc/mvapich2/www3/MPI_Info_get_nkeys.html /usr/share/doc/mvapich2/www3/MPI_Info_get_nthkey.html /usr/share/doc/mvapich2/www3/MPI_Info_get_valuelen.html /usr/share/doc/mvapich2/www3/MPI_Info_set.html /usr/share/doc/mvapich2/www3/MPI_Init.html /usr/share/doc/mvapich2/www3/MPI_Init_thread.html /usr/share/doc/mvapich2/www3/MPI_Initialized.html /usr/share/doc/mvapich2/www3/MPI_Intercomm_create.html /usr/share/doc/mvapich2/www3/MPI_Intercomm_merge.html /usr/share/doc/mvapich2/www3/MPI_Iprobe.html /usr/share/doc/mvapich2/www3/MPI_Irecv.html /usr/share/doc/mvapich2/www3/MPI_Ireduce.html /usr/share/doc/mvapich2/www3/MPI_Ireduce_scatter.html /usr/share/doc/mvapich2/www3/MPI_Ireduce_scatter_block.html /usr/share/doc/mvapich2/www3/MPI_Irsend.html /usr/share/doc/mvapich2/www3/MPI_Is_thread_main.html /usr/share/doc/mvapich2/www3/MPI_Iscan.html /usr/share/doc/mvapich2/www3/MPI_Iscatter.html /usr/share/doc/mvapich2/www3/MPI_Iscatterv.html /usr/share/doc/mvapich2/www3/MPI_Isend.html /usr/share/doc/mvapich2/www3/MPI_Issend.html /usr/share/doc/mvapich2/www3/MPI_Keyval_create.html /usr/share/doc/mvapich2/www3/MPI_Keyval_free.html /usr/share/doc/mvapich2/www3/MPI_Lookup_name.html /usr/share/doc/mvapich2/www3/MPI_Mprobe.html /usr/share/doc/mvapich2/www3/MPI_Mrecv.html /usr/share/doc/mvapich2/www3/MPI_Neighbor_allgather.html /usr/share/doc/mvapich2/www3/MPI_Neighbor_allgatherv.html /usr/share/doc/mvapich2/www3/MPI_Neighbor_alltoall.html /usr/share/doc/mvapich2/www3/MPI_Neighbor_alltoallv.html /usr/share/doc/mvapich2/www3/MPI_Neighbor_alltoallw.html /usr/share/doc/mvapich2/www3/MPI_Op_commute.html /usr/share/doc/mvapich2/www3/MPI_Op_create.html /usr/share/doc/mvapich2/www3/MPI_Op_free.html /usr/share/doc/mvapich2/www3/MPI_Open_port.html /usr/share/doc/mvapich2/www3/MPI_Pack.html /usr/share/doc/mvapich2/www3/MPI_Pack_external.html /usr/share/doc/mvapich2/www3/MPI_Pack_external_size.html /usr/share/doc/mvapich2/www3/MPI_Pack_size.html /usr/share/doc/mvapich2/www3/MPI_Pcontrol.html /usr/share/doc/mvapich2/www3/MPI_Probe.html /usr/share/doc/mvapich2/www3/MPI_Publish_name.html /usr/share/doc/mvapich2/www3/MPI_Put.html /usr/share/doc/mvapich2/www3/MPI_Query_thread.html /usr/share/doc/mvapich2/www3/MPI_Raccumulate.html /usr/share/doc/mvapich2/www3/MPI_Recv.html /usr/share/doc/mvapich2/www3/MPI_Recv_init.html /usr/share/doc/mvapich2/www3/MPI_Reduce.html /usr/share/doc/mvapich2/www3/MPI_Reduce_local.html /usr/share/doc/mvapich2/www3/MPI_Reduce_scatter.html /usr/share/doc/mvapich2/www3/MPI_Reduce_scatter_block.html /usr/share/doc/mvapich2/www3/MPI_Register_datarep.html /usr/share/doc/mvapich2/www3/MPI_Request_free.html /usr/share/doc/mvapich2/www3/MPI_Request_get_status.html /usr/share/doc/mvapich2/www3/MPI_Rget.html /usr/share/doc/mvapich2/www3/MPI_Rget_accumulate.html /usr/share/doc/mvapich2/www3/MPI_Rput.html /usr/share/doc/mvapich2/www3/MPI_Rsend.html /usr/share/doc/mvapich2/www3/MPI_Rsend_init.html /usr/share/doc/mvapich2/www3/MPI_Scan.html /usr/share/doc/mvapich2/www3/MPI_Scatter.html /usr/share/doc/mvapich2/www3/MPI_Scatterv.html /usr/share/doc/mvapich2/www3/MPI_Send.html /usr/share/doc/mvapich2/www3/MPI_Send_init.html 
/usr/share/doc/mvapich2/www3/MPI_Sendrecv.html /usr/share/doc/mvapich2/www3/MPI_Sendrecv_replace.html /usr/share/doc/mvapich2/www3/MPI_Ssend.html /usr/share/doc/mvapich2/www3/MPI_Ssend_init.html /usr/share/doc/mvapich2/www3/MPI_Start.html /usr/share/doc/mvapich2/www3/MPI_Startall.html /usr/share/doc/mvapich2/www3/MPI_Status_set_cancelled.html /usr/share/doc/mvapich2/www3/MPI_Status_set_elements.html /usr/share/doc/mvapich2/www3/MPI_Status_set_elements_x.html /usr/share/doc/mvapich2/www3/MPI_T_category_changed.html /usr/share/doc/mvapich2/www3/MPI_T_category_get_categories.html /usr/share/doc/mvapich2/www3/MPI_T_category_get_cvars.html /usr/share/doc/mvapich2/www3/MPI_T_category_get_info.html /usr/share/doc/mvapich2/www3/MPI_T_category_get_num.html /usr/share/doc/mvapich2/www3/MPI_T_category_get_pvars.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_get_info.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_get_num.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_handle_alloc.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_handle_free.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_read.html /usr/share/doc/mvapich2/www3/MPI_T_cvar_write.html /usr/share/doc/mvapich2/www3/MPI_T_enum_get_info.html /usr/share/doc/mvapich2/www3/MPI_T_enum_get_item.html /usr/share/doc/mvapich2/www3/MPI_T_finalize.html /usr/share/doc/mvapich2/www3/MPI_T_init_thread.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_get_info.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_get_num.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_handle_alloc.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_handle_free.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_read.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_readreset.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_reset.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_session_create.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_session_free.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_start.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_stop.html /usr/share/doc/mvapich2/www3/MPI_T_pvar_write.html /usr/share/doc/mvapich2/www3/MPI_Test.html /usr/share/doc/mvapich2/www3/MPI_Test_cancelled.html /usr/share/doc/mvapich2/www3/MPI_Testall.html /usr/share/doc/mvapich2/www3/MPI_Testany.html /usr/share/doc/mvapich2/www3/MPI_Testsome.html /usr/share/doc/mvapich2/www3/MPI_Topo_test.html /usr/share/doc/mvapich2/www3/MPI_Type_commit.html /usr/share/doc/mvapich2/www3/MPI_Type_contiguous.html /usr/share/doc/mvapich2/www3/MPI_Type_create_darray.html /usr/share/doc/mvapich2/www3/MPI_Type_create_hindexed.html /usr/share/doc/mvapich2/www3/MPI_Type_create_hindexed_block.html /usr/share/doc/mvapich2/www3/MPI_Type_create_hvector.html /usr/share/doc/mvapich2/www3/MPI_Type_create_indexed_block.html /usr/share/doc/mvapich2/www3/MPI_Type_create_keyval.html /usr/share/doc/mvapich2/www3/MPI_Type_create_resized.html /usr/share/doc/mvapich2/www3/MPI_Type_create_struct.html /usr/share/doc/mvapich2/www3/MPI_Type_create_subarray.html /usr/share/doc/mvapich2/www3/MPI_Type_delete_attr.html /usr/share/doc/mvapich2/www3/MPI_Type_dup.html /usr/share/doc/mvapich2/www3/MPI_Type_extent.html /usr/share/doc/mvapich2/www3/MPI_Type_free.html /usr/share/doc/mvapich2/www3/MPI_Type_free_keyval.html /usr/share/doc/mvapich2/www3/MPI_Type_get_attr.html /usr/share/doc/mvapich2/www3/MPI_Type_get_contents.html /usr/share/doc/mvapich2/www3/MPI_Type_get_envelope.html /usr/share/doc/mvapich2/www3/MPI_Type_get_extent.html /usr/share/doc/mvapich2/www3/MPI_Type_get_extent_x.html /usr/share/doc/mvapich2/www3/MPI_Type_get_name.html 
/usr/share/doc/mvapich2/www3/MPI_Type_get_true_extent.html /usr/share/doc/mvapich2/www3/MPI_Type_get_true_extent_x.html /usr/share/doc/mvapich2/www3/MPI_Type_hindexed.html /usr/share/doc/mvapich2/www3/MPI_Type_hvector.html /usr/share/doc/mvapich2/www3/MPI_Type_indexed.html /usr/share/doc/mvapich2/www3/MPI_Type_lb.html /usr/share/doc/mvapich2/www3/MPI_Type_match_size.html /usr/share/doc/mvapich2/www3/MPI_Type_set_attr.html /usr/share/doc/mvapich2/www3/MPI_Type_set_name.html /usr/share/doc/mvapich2/www3/MPI_Type_size.html /usr/share/doc/mvapich2/www3/MPI_Type_size_x.html /usr/share/doc/mvapich2/www3/MPI_Type_struct.html /usr/share/doc/mvapich2/www3/MPI_Type_ub.html /usr/share/doc/mvapich2/www3/MPI_Type_vector.html /usr/share/doc/mvapich2/www3/MPI_Unpack.html /usr/share/doc/mvapich2/www3/MPI_Unpack_external.html /usr/share/doc/mvapich2/www3/MPI_Unpublish_name.html /usr/share/doc/mvapich2/www3/MPI_Wait.html /usr/share/doc/mvapich2/www3/MPI_Waitall.html /usr/share/doc/mvapich2/www3/MPI_Waitany.html /usr/share/doc/mvapich2/www3/MPI_Waitsome.html /usr/share/doc/mvapich2/www3/MPI_Win_allocate.html /usr/share/doc/mvapich2/www3/MPI_Win_allocate_shared.html /usr/share/doc/mvapich2/www3/MPI_Win_attach.html /usr/share/doc/mvapich2/www3/MPI_Win_call_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Win_complete.html /usr/share/doc/mvapich2/www3/MPI_Win_create.html /usr/share/doc/mvapich2/www3/MPI_Win_create_dynamic.html /usr/share/doc/mvapich2/www3/MPI_Win_create_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Win_create_keyval.html /usr/share/doc/mvapich2/www3/MPI_Win_delete_attr.html /usr/share/doc/mvapich2/www3/MPI_Win_detach.html /usr/share/doc/mvapich2/www3/MPI_Win_fence.html /usr/share/doc/mvapich2/www3/MPI_Win_flush.html /usr/share/doc/mvapich2/www3/MPI_Win_flush_all.html /usr/share/doc/mvapich2/www3/MPI_Win_flush_local.html /usr/share/doc/mvapich2/www3/MPI_Win_flush_local_all.html /usr/share/doc/mvapich2/www3/MPI_Win_free.html /usr/share/doc/mvapich2/www3/MPI_Win_free_keyval.html /usr/share/doc/mvapich2/www3/MPI_Win_get_attr.html /usr/share/doc/mvapich2/www3/MPI_Win_get_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Win_get_group.html /usr/share/doc/mvapich2/www3/MPI_Win_get_info.html /usr/share/doc/mvapich2/www3/MPI_Win_get_name.html /usr/share/doc/mvapich2/www3/MPI_Win_lock.html /usr/share/doc/mvapich2/www3/MPI_Win_lock_all.html /usr/share/doc/mvapich2/www3/MPI_Win_post.html /usr/share/doc/mvapich2/www3/MPI_Win_set_attr.html /usr/share/doc/mvapich2/www3/MPI_Win_set_errhandler.html /usr/share/doc/mvapich2/www3/MPI_Win_set_info.html /usr/share/doc/mvapich2/www3/MPI_Win_set_name.html /usr/share/doc/mvapich2/www3/MPI_Win_shared_query.html /usr/share/doc/mvapich2/www3/MPI_Win_start.html /usr/share/doc/mvapich2/www3/MPI_Win_sync.html /usr/share/doc/mvapich2/www3/MPI_Win_test.html /usr/share/doc/mvapich2/www3/MPI_Win_unlock.html /usr/share/doc/mvapich2/www3/MPI_Win_unlock_all.html /usr/share/doc/mvapich2/www3/MPI_Win_wait.html /usr/share/doc/mvapich2/www3/MPI_Wtick.html /usr/share/doc/mvapich2/www3/MPI_Wtime.html /usr/share/doc/mvapich2/www3/index.htm /usr/share/doc/mvapich2/www3/mpi.cit