MPI is a message-passing API for distributed-memory parallelization, with many implementations such as Intel MPI and Open MPI. One of the advantages of MPI.jl is that it flexibly supports these various MPI implementations, and it makes MPI easy to use from Julia by providing functions with almost the same names as the C functions, such as MPI.Send. Additionally, MPI.jl offers a simplified lowercase version, MPI.send, which internally serializes the Julia object.
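For illustration, here is a minimal point-to-point sketch contrasting the two styles. It assumes the keyword-argument form of the MPI.jl v0.20 API (the exact signatures differ between versions) and that the script is launched with mpiexec on at least two ranks:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
if rank == 0
    buf = [1.0, 2.0, 3.0]
    MPI.Send(buf, comm; dest=1, tag=0)            # C-like API: sends a buffer of bits-type elements
    MPI.send(Dict(:a => 1), comm; dest=1, tag=1)  # lowercase API: serializes an arbitrary Julia object
elseif rank == 1
    buf = zeros(3)
    MPI.Recv!(buf, comm; source=0, tag=0)
    obj = MPI.recv(comm; source=0, tag=1)
end
MPI.Finalize()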
MPI.jl also supports hybrid parallelization: you can request the thread-safety level during initialization, so you can write code that combines MPI parallelization with Julia's own thread parallelism. MPI.jl also supports CUDA-aware MPI.
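As a rough sketch of the hybrid style (assuming the threadlevel keyword of MPI.Init in MPI.jl v0.20 and a launch like mpiexec -n 4 julia -t 8 script.jl; the computation itself is just a placeholder):

using MPI
MPI.Init(threadlevel=:multiple)  # request MPI_THREAD_MULTIPLE during initialization
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
# thread-parallel work inside each rank, using Julia's own threading
a = zeros(1000)
Threads.@threads for i in eachindex(a)
    a[i] = i * rank  # placeholder computation
end
# combine the per-rank partial results over MPI
total = MPI.Allreduce(sum(a), +, comm)
rank == 0 && println("total = ", total)
MPI.Finalize()

With a CUDA-aware MPI build, device arrays (e.g. CuArray from CUDA.jl) can in principle be passed directly to the communication routines, although whether this actually works depends on the underlying MPI library.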
Sometimes a conditional branch on the rank causes unavoidable type instability, for example when each rank builds a differently typed object. In principle, you can always solve this problem by lifting the rank into the type domain with Val, in the following way:
using MPI

function func(::Val{rank}) where rank
    if rank == 0
        # work specific to rank 0
    elseif rank == 1
        # work specific to rank 1
    elseif rank == 2
        # work specific to rank 2
    elseif rank == 3
        # work specific to rank 3
    end
end

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
func(Val(rank))  # the rank becomes a compile-time constant inside func
MPI.Finalize()
Alternatively, as the official documentation suggests, it is better to define a separate method for each rank:
using MPI

function func(::Val{0})
    # work specific to rank 0
end
function func(::Val{1})
    # work specific to rank 1
end
function func(::Val{2})
    # work specific to rank 2
end
function func(::Val{3})
    # work specific to rank 3
end

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
func(Val(rank))  # dispatches to the method for this rank
MPI.Finalize()
Either way works around the problem: in both cases the JIT compiler compiles only the part of your code needed for the given rank.