
My code below attempts to map a set of integers in an array across multiple processes in parallel. I am confused about why it keeps getting a segmentation fault. I am using Ubuntu 17.10. Any help would be greatly appreciated.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

#define IN 16   //input size

int main(int argc, char** argv){
   // Initialize the MPI environment
   MPI_Init(&argc, &argv);
   MPI_Win win;
   // Find out rank, size
   int id;  //process id
   MPI_Comm_rank(MPI_COMM_WORLD, &id);
   int p;   //number of processes
   MPI_Comm_size(MPI_COMM_WORLD, &p);

   srand(time(0));
   int mapper[IN];
   int toMap[IN];
   int result[IN];
   if(id==0){
       for(int n=0; n<IN; n++){   //predecided map values
           toMap[n] = rand()%IN;
           mapper[n] = rand()%101;
           printf("[%d, %d]", n, mapper[n]);
       }
       printf("\n");
   }

   int d = IN/p;
   int i = id*d;
   while(i<id*d+d && i<IN){
        result[i] = mapper[toMap[i]];
        i++;
   }
   MPI_Barrier(MPI_COMM_WORLD);
   if(id == 0){
       for(int n=0; n<IN; n++){   //map results
           printf("[%d -> %d]\n", toMap[n], result[n]);
       }
   }
   MPI_Finalize();
   return 0;
}

When I execute the program using:

mpiexec -np 2 parallelMap

I get the error:

[sanjiv-Inspiron-5558:00943]     *** Process received signal ***
[sanjiv-Inspiron-5558:00943] Signal: Segmentation fault (11)
[sanjiv-Inspiron-5558:00943] Signal code: Address not mapped (1)
[sanjiv-Inspiron-5558:00943] Failing at address: 0x7ffecfc33a90
[sanjiv-Inspiron-5558:00943] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x13150)[0x7f8c74400150]
[sanjiv-Inspiron-5558:00943] [ 1] parallelMap(+0xbf2)[0x5652d5561bf2]
[sanjiv-Inspiron-5558:00943] [ 2] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f8c7402e1c1]
[sanjiv-Inspiron-5558:00943] [ 3] parallelMap(+0x99a)[0x5652d556199a]
[sanjiv-Inspiron-5558:00943] *** End of error message ***
--------------------------------------------------------------------------
mpiexec noticed that process rank 1 with PID 0 on node sanjiv-Inspiron-5558 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

1 Answer

In an MPI program, every process executes the same code, but in a separate memory space.

In your code, every MPI process has its own int mapper[IN]; the copies have no relation to each other. Here you are using

while(i<id*d+d && i<IN){
    result[i] = mapper[toMap[i]];
    i++;
}

for all processes, but only the id == 0 process has initialized those arrays. For the other processes the values in these arrays are garbage; in particular, toMap[i] can be an arbitrary integer, so mapper[toMap[i]] reads far outside the array, which leads to your segmentation fault.

You haven't even called any MPI communication routine. MPI communication only happens when you explicitly call one, for example MPI_Send() or MPI_Bcast(). Process id=1 doesn't know the arrays' values in process id=0; nothing is shared automatically.
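Here is a minimal sketch of that pattern, assuming the same IN and array layout as in your code: rank 0 generates the data, MPI_Bcast() distributes it, and then every rank works on valid values.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define IN 16   // input size

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int id, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int mapper[IN], toMap[IN], result[IN];

    // Only rank 0 generates the data...
    if (id == 0) {
        srand(time(0));
        for (int n = 0; n < IN; n++) {
            toMap[n] = rand() % IN;
            mapper[n] = rand() % 101;
        }
    }
    // ...then it is broadcast, so every rank holds identical copies.
    MPI_Bcast(toMap, IN, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(mapper, IN, MPI_INT, 0, MPI_COMM_WORLD);

    // Each rank maps its own chunk; the indices are now valid everywhere.
    int d = IN / p;
    for (int i = id * d; i < id * d + d && i < IN; i++) {
        result[i] = mapper[toMap[i]];
    }

    // Note: each rank has only filled its own slice of result[];
    // collecting the slices on rank 0 needs another communication step
    // (e.g. MPI_Gather, see the comments below).

    MPI_Finalize();
    return 0;
}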


Comments

Thanks, that removed the segmentation faults. I would like to ask one more question: since I am trying to use the same mapping values in every process, how do I have all processes end up with the same values in the mapper array? This is why I was trying to run that while loop in only one process, so that the array stays consistent among all processes. Or I guess my question is: in what pattern do I send and receive values among them? Again, thank you for giving your time to this.
One process generates it, then broadcasts it to all. See the 2 links I wrote in the answer.
I went down that road before, and it ended up with the result array being overwritten by whichever process finished last. Am I right about this? That the result array is different for every process? Since each process works on some part of the array, I am desperately trying to return sections of the result array from each process and merge them into one.
It just seems so difficult.
No. If you use MPI_Bcast() correctly, then after the call the passed array will contain the same values in all processes in MPI_COMM_WORLD. You need to familiarize yourself with MPI's API. And as a side note, parallel programming is indeed not easy.
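For the result-merging part discussed above, a common pattern is for each rank to compute its chunk into a local buffer and then collect the chunks on rank 0. This is only a sketch, assuming IN is divisible by the number of processes and that mapper and toMap have already been broadcast as shown earlier:

// Each rank fills a local chunk, then rank 0 gathers them in rank order.
int d = IN / p;        // chunk size (assumes IN % p == 0)
int local[IN];         // local chunk; only the first d entries are used
for (int i = 0; i < d; i++) {
    local[i] = mapper[toMap[id * d + i]];
}
// After this call, result[] on rank 0 contains all chunks in rank order.
MPI_Gather(local, d, MPI_INT, result, d, MPI_INT, 0, MPI_COMM_WORLD);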