
MPI Matrix Multiplication with Dynamic Allocation: Segmentation Fault

I'm writing a matrix multiplication program in Open MPI, and I get this error message:

[Mecha Liberta:12337] *** Process received signal ***
[Mecha Liberta:12337] Signal: Segmentation fault (11)
[Mecha Liberta:12337] Signal code: Address not mapped (1)
[Mecha Liberta:12337] Failing at address: 0xbfe4f000
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 12337 on node Mecha Liberta exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

That's how I define the matrices:

  int **a, **b, **r;

  a = (int **)calloc(l, sizeof(int));
  b = (int **)calloc(l, sizeof(int));
  r = (int **)calloc(l, sizeof(int));

  for (i = 0; i < l; i++)
      a[i] = (int *)calloc(c, sizeof(int));

  for (i = 0; i < l; i++)
      b[i] = (int *)calloc(c, sizeof(int));

  for (i = 0; i < l; i++)
      r[i] = (int *)calloc(c, sizeof(int));

And here are my Send/Recv calls (I'm pretty sure the problem is here):

  MPI_Send(&sent, 1, MPI_INT, dest, tag, MPI_COMM_WORLD); 
  MPI_Send(&lines, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);   
  MPI_Send(&(a[sent][0]), lines*NCA, MPI_INT, dest, tag, MPI_COMM_WORLD); 
  MPI_Send(&b, NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);

and:

MPI_Recv(&sent, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);  
MPI_Recv(&lines, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&a, lines*NCA, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Recv(&b, NCA*NCB, MPI_INT, 0, tag, MPI_COMM_WORLD, &status); 

Can anyone see where the problem is?


This is a common problem with C and multidimensional arrays and MPI.

In this line, say:

MPI_Send(&b, NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);

you're telling MPI to send NCAxNCB integers starting at the address &b to dest in MPI_COMM_WORLD with tag tag. But b isn't a pointer to NCAxNCB contiguous integers; it's a pointer to NCA pointers, each pointing to a separately allocated row of NCB integers. So MPI marches through memory that was never part of the matrix data and eventually hits an unmapped address, which is exactly the segfault you're seeing.
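If you want to see this concretely, here's a small standalone sketch (no MPI required; the dimensions are just placeholders) that prints the addresses involved. With row-by-row allocation, b itself, the row pointers, and the row data all live in different places, and consecutive rows are generally not adjacent:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int rows = 3, cols = 4;
    /* row-by-row allocation, like in the question */
    int **b = (int **)calloc(rows, sizeof(int *));
    for (int i = 0; i < rows; i++)
        b[i] = (int *)calloc(cols, sizeof(int));

    printf("&b   = %p  (where MPI_Send(&b, ...) starts reading)\n", (void *)&b);
    printf("b[0] = %p  (first row's data)\n", (void *)b[0]);
    printf("b[1] = %p  (second row; usually not b[0] + cols)\n", (void *)b[1]);
    return 0;
}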

So what you want to do is ensure your arrays are contiguous in memory (probably better for performance anyway), using something like this:

int **alloc_2d_int(int rows, int cols) {
    /* one contiguous block for all the data... */
    int *data = (int *)malloc(rows*cols*sizeof(int));
    /* ...plus an array of row pointers into that block */
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);

    return array;
}
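
One thing to keep in mind with this allocator: since the data and the row pointers come from two separate malloc calls, a matching deallocator only needs two frees. A minimal sketch (the helper name free_2d_int is my own):

void free_2d_int(int **array) {
    free(array[0]);  /* array[0] points at the start of the contiguous data block */
    free(array);     /* then free the row-pointer array itself */
}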

  /* .... */

  int **a, **b, **r;

  a = alloc_2d_int(l, c);
  b = alloc_2d_int(l, c);
  r = alloc_2d_int(l, c);

and then

  MPI_Send(&sent, 1, MPI_INT, dest, tag, MPI_COMM_WORLD); 
  MPI_Send(&lines, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);  
  MPI_Send(&(a[sent][0]), lines*NCA, MPI_INT, dest, tag, MPI_COMM_WORLD); 
  MPI_Send(&(b[0][0]), NCA*NCB, MPI_INT, dest, tag, MPI_COMM_WORLD);

and on the receiving side:

  MPI_Recv(&sent, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
  MPI_Recv(&lines, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
  MPI_Recv(&(a[0][0]), lines*NCA, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
  MPI_Recv(&(b[0][0]), NCA*NCB, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);

should work more as expected.
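
For reference, here's a minimal self-contained sketch of the whole pattern under the same assumptions (two ranks, fixed NCA/NCB, and just the transfer of b; the actual multiplication is omitted):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NCA 4
#define NCB 4

int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i = 0; i < rows; i++)
        array[i] = &(data[cols*i]);
    return array;
}

int main(int argc, char **argv) {
    int rank, tag = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int **b = alloc_2d_int(NCA, NCB);

    if (rank == 0) {
        for (int i = 0; i < NCA; i++)
            for (int j = 0; j < NCB; j++)
                b[i][j] = i*NCB + j;
        /* one send moves the whole contiguous block */
        MPI_Send(&(b[0][0]), NCA*NCB, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&(b[0][0]), NCA*NCB, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("rank 1 got b[%d][%d] = %d\n", NCA-1, NCB-1, b[NCA-1][NCB-1]);
    }

    free(b[0]);
    free(b);
    MPI_Finalize();
    return 0;
}

Compile and run with something like mpicc example.c && mpirun -np 2 ./a.out.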
