The Gysela 5D code models the electrostatic branch of the Ion Temperature Gradient turbulence in tokamak plasmas.  
Gysela is a 5D full-f and flux-driven gyrokinetic code. As such, it evolves in time the five-dimensional (3D in configuration space, 2D in velocity space) ion distribution function. For now, Gysela assumes electrons to be adiabatic and considers a global simplified magnetic geometry (concentric toroidal magnetic flux surfaces with circular cross-sections). The code simulates the full ion distribution function without any scale separation between equilibrium and fluctuations ("full-f").  
From the physics point of view, the other peculiarities of the Gysela code are the presence of an ion-ion collision operator accounting for neoclassical transport, and the existence of versatile sources (heat, momentum, …) which sustain the mean profiles on confinement times (“flux-driven”).  
From the numerical point of view, Gysela is based on a semi-Lagrangian scheme, hence its name: GYSELA 5D is an acronym for GYrokinetic SEmi-LAgrangian in 5 Dimensions. Two solvers are at the heart of Gysela: a Vlasov solver for computing the ion advection and a Poisson (quasi-neutrality) solver for computing the electrostatic potential.
**Technical information**
* Website: http://gyseladoc.gforge.inria.fr/
* Scientific domain: Fusion
* Language: Fortran
* GPU acceleration: No
* Scalability: good
* Vectorization: good
Compilation and simulation
--------------------------

Here we describe the different phases from the download to the validation of the simulation.

**Download**
The sources are available in a tarball and correspond to a stable release. To un-tar this release, first set the path to the downloaded archive:
```
export TARBALL_PATH=path/to/gysela/tarball
```
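The extraction step itself can then be sketched as below. This is a self-contained demonstration: it builds a tiny stand-in tarball so the commands run anywhere; in practice `TARBALL_PATH` points at the release you downloaded, and the extracted directory name may differ.

```shell
# Stand-in for the downloaded release (illustration only).
mkdir -p demo_release && echo demo > demo_release/README
tar -czf demo.tar.gz demo_release && rm -r demo_release

# The actual un-tar step: point TARBALL_PATH at the archive and extract it.
export TARBALL_PATH=demo.tar.gz
tar -xzf "$TARBALL_PATH"     # extracts the release into the current directory
ls demo_release/README
```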
**Compile**
```
./compile.sh machine_name
```

For example:
```
./compile.sh occigen-bdw
```

**Run and validate the simulation**
For each test case, given in a separate folder (e.g. testcase\_small), you can find three scripts:

*  prepare.sh: prepares the simulation (moves data to the right location, recompiles minor changes, ...)
*  run.sh: runs the application and prints out the evaluated metric
*  validate.sh: validates the simulation from a scientific point of view

For running and validating the simulation, one should be able to do:
```
cd testcase_XXX
./prepare.sh machine_name
./run.sh machine_name
./validate.sh
```
Those steps can also be used in a batch file for running the simulation using a job scheduler.
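For example, a minimal Slurm batch file wrapping the same three steps might look like the following sketch; the job name, node count, and time limit are placeholders to adapt to your testcase and machine:

```shell
#!/bin/bash
#SBATCH --job-name=gysela_small     # placeholder job name
#SBATCH --nodes=4                   # adapt to your testcase
#SBATCH --time=01:00:00             # placeholder wall-time limit

# Same three steps as in the interactive run above.
cd testcase_small
./prepare.sh machine_name
./run.sh machine_name
./validate.sh
```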

Create a new machine environment
--------------------------------

First download gyselax.  

Then go to ```./machines```. Create a directory with the machine name:

```
mkdir $machine_name
cd $machine_name
```

Then, copy/paste an existing env_bench file. For example:
```cp ../occigen-bdw/env_bench ./```

Edit the env\_bench file. Replace the environment variable ARCH with your $machine_name. Then, change the loaded libraries if needed.
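This edit can also be scripted. The snippet below is a self-contained illustration: the one-line `env_bench_demo` file and the machine name `mycluster` are stand-ins (the real env_bench also loads modules and sets other variables, and may define ARCH with a different syntax):

```shell
machine_name=mycluster                              # example machine name
printf 'export ARCH=occigen2\n' > env_bench_demo    # stand-in for env_bench
# Point ARCH at the new machine.
sed -i "s/^export ARCH=.*/export ARCH=$machine_name/" env_bench_demo
cat env_bench_demo
```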

Then, copy/paste an existing cmake file. For example:
```cp ../occigen-bdw/occigen2.cmake ./$machine_name.cmake```
Edit the cmake file so that the compilation options fit your machine.

From this point, you can test if the compilation succeeds by running these commands:
```
cd ../../
./compile.sh $machine_name
```

If the compilation succeeds, you need to create the submission file **subgys**.  
Go to ```./machines/$machine_name/```.  
Copy an existing subgys file: ```cp ../occigen-bdw/subgys ./```  
Edit it: change the machine name **occigen2** to $machine_name. Then, configure your machine. Here are some key parameters:
- THPNODE: the number of threads available on one node (equal to the number of cores, or 2x the number of cores if hyperthreading is enabled). You need to hard-code it.
- NTHREAD: the number of threads you ask for per MPI task. This variable is an input and comes from your input testcase.
- MPIPROCS: the number of MPI tasks per node. It is computed from THPNODE and NTHREAD (THPNODE/NTHREAD).
- NPES: the total number of MPI tasks of the simulation. Computed from the input file.
- CPUTASK: the number of CPUs per MPI task. Computed as NPES/MPIPROCS, or NPES/(2*MPIPROCS) if hyperthreading is activated.
- NCPUS: the total number of threads used for the simulation. Computed as NPES*NTHREAD.
- NNODES: the number of nodes used for the simulation. Computed from the previous variables.
In summary, you need to change the name of the machine and set THPNODE to the proper value for your machine. Then, you need to adapt MPIPROCS (depending on whether hyperthreading is activated).
Then, adapt the test on the number of threads to match your machine. You can adapt CMD\_MPIRUN and the generation of the batch file if you do not use Slurm.
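The bookkeeping above can be checked with a small worked example. The numbers below are illustrative (48-core nodes, no hyperthreading), not from any particular machine, and the ceiling division for NNODES is an assumption about how "computed from the previous variables" is meant:

```shell
THPNODE=48                          # threads available on one node (hard-coded)
NTHREAD=8                           # threads per MPI task (from the input testcase)
NPES=24                             # total MPI tasks (from the input file)
MPIPROCS=$(( THPNODE / NTHREAD ))   # MPI tasks per node: 48/8 = 6
NCPUS=$(( NPES * NTHREAD ))         # total threads for the run: 24*8 = 192
NNODES=$(( (NPES + MPIPROCS - 1) / MPIPROCS ))  # nodes needed: ceil(24/6) = 4
echo "MPIPROCS=$MPIPROCS NCPUS=$NCPUS NNODES=$NNODES"
```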

You are ready to test your subgys. You can do it on the testcase_small:
```
cd ../../testcase_small/
./prepare.sh $machine_name
./run.sh
```