Implementation and organisation
===============================

Since the code is written in C/C++, we organise every aspect of the integrator using object-oriented design. We also keep one class per file, which makes the code easier to understand and navigate. Global constants and structure declarations are placed in a **common** header, to avoid multiple definitions and to maintain coherence.

Because we provide a command-line interface, we use the ``boost`` module ``program_options``, which gives us a stable and robust base for parsing and handling our options. This functionality is encapsulated in the **OptionParser** class.

Every *N*-body code shares the same general information: per-particle data (``r, v, a, dt, ...``) and system properties (``T, E, K, ...``). Since this information is always used, by every implementation and independently of the environment, we store it in a class called **NbodySystem**.

In addition to the system state, we need the functions that compute the system properties, such as the *core radius*, the *lagrange radii* and the *crossing time*, among others; these live in a separate class that we call **NbodyUtils**.

To keep the user informed about the inner workings of the integration, we created a separate class called **Logger**, in charge of all the standard output procedures of the code.

The current version of the integrator is based on the Hermite 4th-order integration scheme, which consists of a few well-defined steps, described in the class **Hermite4**. This class contains a few virtual methods covering the main procedures, such as the *prediction*, the *correction* and the *force interaction*, among others.
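As a rough illustration of the split between **NbodySystem** (the shared state) and **NbodyUtils** (the derived quantities), the sketch below computes the kinetic and potential energy of a particle set. The ``Particle`` layout and the function names are assumptions made for this example, not the integrator's real API; ``G = 1`` in standard *N*-body units.

```cpp
#include <cmath>
#include <vector>

// Hypothetical per-particle state, following the text's r, v, ... fields;
// the real NbodySystem layout may differ.
struct Particle {
    double m;     // mass
    double r[3];  // position
    double v[3];  // velocity
};

// NbodyUtils-style helper: total kinetic energy K = 1/2 * sum_i m_i v_i^2.
double kinetic_energy(const std::vector<Particle>& p) {
    double k = 0.0;
    for (const auto& pi : p) {
        double v2 = pi.v[0] * pi.v[0] + pi.v[1] * pi.v[1] + pi.v[2] * pi.v[2];
        k += 0.5 * pi.m * v2;
    }
    return k;
}

// Pairwise potential energy U = - sum_{i<j} m_i m_j / |r_i - r_j|  (G = 1).
double potential_energy(const std::vector<Particle>& p) {
    double u = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        for (std::size_t j = i + 1; j < p.size(); ++j) {
            double dx = p[i].r[0] - p[j].r[0];
            double dy = p[i].r[1] - p[j].r[1];
            double dz = p[i].r[2] - p[j].r[2];
            u -= p[i].m * p[j].m / std::sqrt(dx * dx + dy * dy + dz * dz);
        }
    }
    return u;
}
```

The total energy ``E = K + U`` computed this way is the quantity the integrator can monitor to check energy conservation over a run.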
The idea behind the virtual methods is to allow this class to be inherited, so that the most computationally expensive methods can be implemented with different techniques or parallelised. This is the case for **Hermite4CPU**, **Hermite4MPI** and **Hermite4GPU**, which implement these virtual methods using OpenMP on a single node, MPI on a single node or a cluster, and a GPU on a single GPU node, respectively. A diagram showing the distribution of these classes and files is displayed below.

-----

.. image:: ../_static/files_structure.png
   :scale: 80 %
   :alt: Code organisation
   :align: center

-----

This structure allows us to adopt a different integration scheme, for example the Hermite 6th order, in an easy way: we only need to include the same classes used by the current Hermite 4th order, and then follow the same pattern of leaving the performance-critical methods virtual, ready for future implementations in different environments.