Description:
============
Presentation
------------
MUMPS (MUltifrontal Massively Parallel sparse direct Solver): a parallel sparse direct solver for systems of linear equations.
Technical information:
----------------------
* Website: http://mumps.enseeiht.fr/index.php?page=home
* Scientific domain: sparse matrices
* Language: Fortran
* Parallelism: MPI + OpenMP
* GPU acceleration: no
* Scalability: high
* Vectorization: high
Compilation and simulation:
===========================
Download:
---------
Sources can be requested here: http://mumps.enseeiht.fr/index.php?page=dwnld#form
For this test, a specific release is used. To download it, run:
```
./download.sh
```
Compile:
--------
Compile the code, for instance with:
```
./compile.sh occigen-bdw
```
`machines/occigen-bdw/env` contains the environment required for compilation (e.g. `module load gcc openmpi lapack hdf5 ...`).
You can create your own machine directory under `machines` to define the appropriate environment, as in the sketch below.
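As a minimal sketch, a custom environment file could look like the following (the `machines/my-cluster` directory, the module names, and the exported variable names are assumptions; adapt them to your site):
```
# machines/my-cluster/env -- hypothetical example, adapt to your site
module purge
module load gcc openmpi lapack hdf5

# Compilers picked up at build time (assumed variable names)
export CC=mpicc
export FC=mpif90
```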
Run and validate the simulation:
--------------------------------
For each test case, given in a separate folder (e.g. `testcase_small`), three scripts are provided:
* prepare.sh: prepares the simulation (moves data to the right location, recompiles minor changes, ...)
* run.sh: runs the application and prints out the evaluated metric
* validate.sh: validates the simulation from a scientific point of view
To run and validate a simulation, one should be able to do:
```
cd testcase_XXX
./prepare.sh occigen-bdw
./run.sh
./validate.sh
```
All three scripts should return a zero exit code.
These steps can also be placed in a batch file to run the simulation through a job scheduler, as in the sketch below.
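For example, a minimal batch file might look like the following, assuming a Slurm scheduler (the job name, node count, and walltime are placeholders):
```
#!/bin/bash
#SBATCH --job-name=mumps-bench   # placeholder name
#SBATCH --nodes=200              # match the test case requirements
#SBATCH --ntasks-per-node=1      # one MPI process per node
#SBATCH --time=01:00:00          # placeholder walltime

cd testcase_XXX
./prepare.sh occigen-bdw
./run.sh
./validate.sh
```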
Test cases presentation
=======================
Charge test case
----------------
The charge test case runs 32 system resolutions on 200 nodes.
Each instance of system resolution is distributed over the 200 nodes, using one MPI process per node per instance and as many OpenMP threads as possible.
Scale test case
---------------
The scale test case runs from 1 to 32 system resolutions on 200 nodes.
Each instance of system resolution is distributed over the 200 nodes, using one MPI process per node per instance and as many OpenMP threads as possible.
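The actual launch commands are contained in each test case's run.sh. Purely as an illustration, and assuming Open MPI, the placement described above (one MPI rank per node, all cores given to OpenMP) could be expressed as follows; the thread count and binary name are placeholders, not taken from run.sh:
```
# Hypothetical launch line for one instance of system resolution:
# 200 MPI ranks mapped one per node, each rank using all cores via OpenMP.
export OMP_NUM_THREADS=28                      # assumption: cores per node
mpirun -np 200 --map-by ppr:1:node ./solver    # placeholder binary name
```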