Hi Mikael:
After installing PETSc, I ran the bundled test examples and they all passed:
Running test examples to verify correct installation
Using PETSC_DIR=/home/hchen/petsc-3.4.4 and PETSC_ARCH=arch-linux2-c-debug
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes
Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process
Completed test examples
However, when I installed PETSc, I used the recommended configure command:
./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich
Does this mean PETSc was built with MPICH? When I compiled oofem in parallel, I used openmpi-1.8 instead, because I could not find the MPICH libraries. Could this mismatch of MPI implementations be the cause of the problem?
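To check which MPI implementation the oofem binary is actually linked against, I believe one can inspect it with ldd. The sketch below compiles a trivial placeholder program (the /tmp paths and hello.c are just for demonstration); on the real machine the same ldd command would be run on ./oofem instead:

```shell
# Placeholder program, only so the ldd step below has a binary to inspect.
cat > /tmp/hello.c <<'EOF'
int main(void) { return 0; }
EOF
cc /tmp/hello.c -o /tmp/hello

# On the real machine, run this on the oofem binary instead:
#   ldd ./oofem | grep -i mpi
# A line naming libmpi.so (as in the backtrace, /usr/local/lib/libmpi.so.1)
# indicates Open MPI; libmpich.so would indicate MPICH.
ldd /tmp/hello | grep -i mpi || echo "no MPI library linked"
```

In the backtrace below, the crash occurs inside /usr/local/lib/libmpi.so.1, which suggests the binary is running against Open MPI rather than the MPICH that PETSc downloaded.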
When I run the oofem executable, it shows the same error as before:
[hchen:04737] *** Process received signal ***
[hchen:04737] Signal: Segmentation fault (11)
[hchen:04737] Signal code: Address not mapped (1)
[hchen:04737] Failing at address: 0x44000098
[hchen:04737] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36ff0)[0x7f829b915ff0]
[hchen:04737] [ 1] /usr/local/lib/libmpi.so.1(MPI_Comm_rank+0x46)[0x7f829c21a5e6]
[hchen:04737] [ 2] ./oofem(main+0x130)[0x403369]
[hchen:04737] [ 3] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f829b900de5]
[hchen:04737] [ 4] ./oofem[0x403139]
[hchen:04737] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 4737 on node hchen exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
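If the two MPI implementations are indeed mixed, one possible fix, assuming the openmpi-1.8 compiler wrappers (mpicc/mpif90) are on the PATH, might be to reconfigure PETSc against that same Open MPI instead of letting it download MPICH. This is only a sketch of the configure line, not a verified command:

```shell
# Sketch: build PETSc with the same Open MPI that oofem is linked against,
# by using the Open MPI compiler wrappers and dropping --download-mpich.
./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-f-blas-lapack
```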
Hao