there is no internal limit on problem size; the size of problems you can solve depends mainly on your hardware (memory, number of CPUs, etc.).
- In the case of the distributed memory model, you can tackle really large problems. The efficiency depends on the quality of your partitioning and on the problem type. The PETSc solver is recommended; its performance depends especially on the sparsity of the system, the choice of preconditioner, and the problem itself. In short, the performance is problem dependent and it is very hard to draw any general conclusion.
- On shared memory systems, oofem also supports the OpenMP programming model. In this case the scalability is rather limited: you can think of using up to roughly 20-30 cores efficiently, after which the memory bus becomes the limiting factor. This requires the SuperLU_MT library to be configured and compiled with oofem.
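For concreteness, a build-and-run sketch for the two models might look roughly as below. Note this is a hedged illustration only: the exact CMake option names (`USE_PETSC`, `USE_PARALLEL`, `USE_OPENMP`, `USE_SUPERLU_MT`), the `-p` parallel flag, and the input file name are assumptions that may differ between oofem versions, so check the oofem build documentation or `cmake -LA` output for the options your version actually provides.

```shell
# --- Distributed memory (MPI + PETSc) ---
# Option names below are illustrative; verify against your oofem version.
cmake -DUSE_PARALLEL=ON -DUSE_PETSC=ON ..
make -j4

# Run a partitioned job on 4 MPI ranks; "job.in" is a placeholder name
# for a per-rank partitioned input file.
mpirun -np 4 ./oofem -p -f job.in

# --- Shared memory (OpenMP + SuperLU_MT) ---
# Again, option names are assumptions; SuperLU_MT must be installed first.
cmake -DUSE_OPENMP=ON -DUSE_SUPERLU_MT=ON ..
make -j4

# Limit threads to stay below the memory-bus saturation point.
OMP_NUM_THREADS=16 ./oofem -f job.in
```

The practical takeaway is that the distributed model scales with partition quality, while the shared memory model is capped by memory bandwidth regardless of core count.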
Some results on parallel scalability can be found in B. Patzák and D. Rypl: Object-oriented, parallel finite element framework with dynamic load balancing. Advances in Engineering Software, 47(1):35–50, 2012. Note, however, that the benchmark there is a relatively small problem whose complexity comes from its nonlinear nature.
Hope this helps,