vayesta.mpi

Submodules

vayesta.mpi.interface

class vayesta.mpi.interface.NdArrayMetadata(shape, dtype)

Bases: tuple

count(value, /)

Return number of occurrences of value.

dtype

Alias for field number 1

index(value, start=0, stop=9223372036854775807, /)

Return first index of value.

Raises ValueError if the value is not present.

shape

Alias for field number 0
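The entry above describes a two-field named tuple: `shape` is field 0 and `dtype` is field 1. A minimal sketch of how such metadata can be built and used to pre-allocate a receive buffer on another rank (the example array is an assumption):

```python
from collections import namedtuple

import numpy as np

# Re-creating the two-field tuple described above:
# field 0 is the shape, field 1 is the dtype.
NdArrayMetadata = namedtuple("NdArrayMetadata", ["shape", "dtype"])

arr = np.zeros((3, 4), dtype=np.float64)      # example array (assumption)
meta = NdArrayMetadata(arr.shape, arr.dtype)

# The metadata alone is enough to allocate a matching receive buffer
# on another rank before the array data arrives:
buf = np.empty(meta.shape, dtype=meta.dtype)
```

Because it is a plain tuple subclass, the fields are also reachable by index (`meta[0]` is the shape), which is why the inherited `count` and `index` methods appear above.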

class vayesta.mpi.interface.MPI_Interface(mpi, required=False, log=None)[source]

Bases: object

property enabled
property disabled
property is_master
get_new_tag()[source]
nreduce(*args, target=None, logfunc=None, **kwargs)[source]

(All)reduce multiple arguments.

TODO:
  • Use Allreduce/Reduce for NumPy types.
  • Combine multiple *args of the same dtype into a single array, to reduce communication overhead.
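A hypothetical re-implementation of the reduce-many pattern, run here with a single-rank stand-in communicator; the sketch name `nreduce_sketch` and the stand-in class are assumptions, not the library's code:

```python
import numpy as np

def nreduce_sketch(comm, *args, target=None):
    """Sketch of (all)reducing several arguments in one call.

    With target=None every rank receives the reduced values (allreduce);
    otherwise only rank `target` does (reduce). Name and signature are
    assumptions based on the nreduce entry above.
    """
    if target is None:
        return tuple(comm.allreduce(x) for x in args)
    return tuple(comm.reduce(x, root=target) for x in args)

class _OneRankComm:
    """Stand-in communicator: with one rank, (all)reduce is the identity."""
    def allreduce(self, x):
        return x
    def reduce(self, x, root=0):
        return x

a, b = np.ones(2), np.arange(3)
ra, rb = nreduce_sketch(_OneRankComm(), a, b)
```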

bcast(obj, root=0)[source]

Common function to broadcast NumPy arrays or general objects.

Parameters:
  • obj (ndarray or Any) – Array or object to broadcast.

  • root (int) – Root MPI process.

Returns:

obj – Broadcasted array or object.

Return type:

ndarray or Any
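The array/object dispatch can be sketched as follows: NumPy arrays use the fast buffer-based `Bcast` after sharing their (shape, dtype) metadata, while anything else falls back to the slower pickle-based `bcast`. The communicator interface mirrors mpi4py, and the single-process stand-in class is an assumption so the sketch runs without an MPI launcher:

```python
import numpy as np

def bcast_sketch(comm, obj, root=0):
    """Sketch of broadcasting a NumPy array or a general object."""
    if isinstance(obj, np.ndarray):
        # Share metadata first, so non-root ranks can allocate a buffer:
        shape, dtype = comm.bcast((obj.shape, obj.dtype), root=root)
        buf = obj if comm.Get_rank() == root else np.empty(shape, dtype=dtype)
        comm.Bcast(buf, root=root)
        return buf
    # Generic objects go through the pickle-based lowercase bcast:
    return comm.bcast(obj, root=root)

class _OneRankComm:
    """Single-process stand-in for an mpi4py communicator (assumption)."""
    def Get_rank(self):
        return 0
    def bcast(self, obj, root=0):
        return obj
    def Bcast(self, buf, root=0):
        pass

arr = bcast_sketch(_OneRankComm(), np.arange(4.0))
obj = bcast_sketch(_OneRankComm(), {"energy": -1.0})
```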

with_reduce(**mpi_kwargs)[source]
with_allreduce(**mpi_kwargs)[source]
only_master()[source]
with_send(source, dest=0, tag=None, **mpi_kwargs)[source]
create_rma_dict(dictionary)[source]
scf(mf, mpi_rank=0, log=None)[source]
gdf(df, mpi_rank=0, log=None)[source]

vayesta.mpi.rma

class vayesta.mpi.rma.RMA_Dict(mpi)[source]

Bases: object

classmethod from_dict(mpi, dictionary)[source]
class RMA_DictElement(collection, location, data=None, shape=None, dtype=None)[source]

Bases: object

local_init(data)[source]
remote_init()[source]
property size
get(shared_lock=True)[source]
property mpi
rma_lock(shared_lock=False, **kwargs)[source]
rma_unlock(**kwargs)[source]
rma_put(data, **kwargs)[source]
rma_get(buf, **kwargs)[source]
free()[source]
property readable
clear()[source]
writable()[source]
keys()[source]
values()[source]
get_location(key)[source]
get_shape(key)[source]
get_dtype(key)[source]
synchronize()[source]

Synchronize keys and metadata over all MPI ranks.
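The synchronization step can be sketched as an allgather of each rank's local keys and array metadata, after which every rank knows the owning rank, shape, and dtype of every entry (cf. get_location, get_shape, and get_dtype above). The function name, the metadata layout, and the mpi4py-style `allgather` are assumptions:

```python
def synchronize_keys(comm, local_items):
    """Sketch: share which keys each rank owns, plus (shape, dtype) metadata,
    so all ranks can later fetch any entry from its owner via RMA."""
    gathered = comm.allgather(local_items)   # one dict per rank
    table = {}
    for rank, items in enumerate(gathered):
        for key, meta in items.items():
            table[key] = (rank,) + meta      # (location, shape, dtype)
    return table

class _OneRankComm:
    """Single-process stand-in: allgather returns only the local value."""
    def allgather(self, x):
        return [x]

table = synchronize_keys(_OneRankComm(), {"fock": ((10, 10), "float64")})
```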

vayesta.mpi.scf

vayesta.mpi.scf.scf_with_mpi(mpi, mf, mpi_rank=0, log=None)[source]

Run the SCF calculation only on the master node and broadcast the result afterwards.
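The master-only pattern behind this helper can be sketched as follows: only the master rank does the expensive work, and the result is broadcast to all others. `mpi` is assumed to expose `is_master` and `bcast` as in MPI_Interface above; the function name, the stand-in class, and the toy result value are made up for illustration:

```python
def run_on_master_and_bcast(mpi, func, *args, **kwargs):
    """Sketch: evaluate func on the master rank only, then broadcast."""
    result = func(*args, **kwargs) if mpi.is_master else None
    return mpi.bcast(result, root=0)

class _FakeMPI:
    """Single-process stand-in for the MPI interface (assumption)."""
    is_master = True
    def bcast(self, obj, root=0):
        return obj

# Toy stand-in for an expensive SCF run; the value is arbitrary.
energy = run_on_master_and_bcast(_FakeMPI(), lambda: -76.0267)
```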

vayesta.mpi.scf.gdf_with_mpi(mpi, df, mpi_rank=0, log=None)[source]

Module contents

vayesta.mpi.init_mpi(use_mpi, required=True)[source]