What controls MPI_Barrier execution time

Time: 2023-01-14 21:03:37

This code:

```c
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    for (unsigned int iter = 0; iter < 1000; iter++)
        MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();

    return 0;
}
```

takes a very long time to run with MPICH 3.1.4. Here are the wall-clock times (in seconds) for different MPI implementations.

On a laptop with 4 logical processors (2 physical CPU cores):

| MPI size | MPICH 1.4.1p1 | openmpi 1.8.4 | MPICH 3.1.4 |
|----------|---------------|---------------|-------------|
|  2       | 0.01          | 0.39          | 0.01        |
|  4       | 0.02          | 0.39          | 0.01        |
|  8       | 0.14          | 0.45          | 27.28       |
| 16       | 0.34          | 0.53          | 71.56       |

On a desktop with 8 logical processors (4 physical CPU cores):

| MPI size | MPICH 1.4.1p1 | openmpi 1.8.4 | MPICH 3.1.4 |
|----------|---------------|---------------|-------------|
|  2       | 0.00          | 0.41          | 0.00        |
|  4       | 0.01          | 0.41          | 0.01        |
|  8       | 0.07          | 0.45          | 2.57        |
| 16       | 0.36          | 0.54          | 61.76       |

What explains such a difference, and how can it be controlled?

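The wall-clock numbers above were presumably taken externally (e.g. with `time mpiexec -n <size> ./a.out`). As an alternative, here is a minimal sketch that times the same 1000-barrier loop from inside the program with MPI_Wtime and has rank 0 print the result; the timing code is an illustrative addition, not part of the original program:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Illustrative: time the same 1000-barrier loop from inside the program. */
    double start = MPI_Wtime();

    for (unsigned int iter = 0; iter < 1000; iter++)
        MPI_Barrier(MPI_COMM_WORLD);

    double elapsed = MPI_Wtime() - start;

    /* Rank 0 reports the elapsed wall-clock time. */
    if (rank == 0)
        printf("1000 barriers: %.2f s\n", elapsed);

    MPI_Finalize();

    return 0;
}
```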

1 Solution

#1


You are running with an MPI size greater than the number of processors available. Because MPI programs are spawned so that each process is handled by a single processor, when you run with MPI size == 16 on your 8-processor machine, each processor is responsible for two processes. This does not make the program any faster and, as you have seen, actually makes it slower. The way around it is either to use a machine with more processors available, or to make sure you run your code with MPI size <= the number of processors available.

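As a sketch of how this oversubscription could be detected at run time on a single node: the `sysconf(_SC_NPROCESSORS_ONLN)` check and the warning message below are illustrative additions, not part of the original program, and the check only makes sense when all ranks run on one machine (as in the laptop and desktop cases above).

```c
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Logical processors visible on this node (POSIX; single-node case only). */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

    /* Illustrative warning when more ranks than processors are launched. */
    if (rank == 0 && size > ncpus)
        fprintf(stderr, "Warning: %d ranks on %ld processors -- oversubscribed.\n",
                size, ncpus);

    for (unsigned int iter = 0; iter < 1000; iter++)
        MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();

    return 0;
}
```

Launching with MPI size at or below the number of available processors (e.g. `mpiexec -n 4 ./a.out` on the laptop) corresponds to the rows of the tables where all three implementations stay fast.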
