         ************************************
         * PARALLEL TESTS FOR SMD WITH ORAC *
         ************************************

These tests run in parallel. The main prerequisite is:

* MPI (Message Passing Interface) libraries and
  implementation/environment:
  MPICH2    [http://www.mcs.anl.gov/research/projects/mpich2/] 
  -or-
  OPENMPI   [http://www.open-mpi.org/]

Other prerequisites depend on the MPI environment used. For example,
for MPICH2 one must provide:

* mpd.hosts file listing hosts to run in parallel 
* .mpd.conf file in user dir with secret password 
* ssh access to hosts listed in mpd.hosts
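  The MPICH2 setup above can be sketched as follows; the host names and
  the secret word are placeholders to replace with your own:

```shell
# Hypothetical host names -- list the machines that will run in parallel.
cat > mpd.hosts <<'EOF'
node01
node02
EOF

# ~/.mpd.conf holds the shared secret; mpd refuses to start unless the
# file is readable by its owner only.
echo "MPD_SECRETWORD=change_me" > "$HOME/.mpd.conf"
chmod 600 "$HOME/.mpd.conf"
```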


------------------------------------------------------------------------
BEFORE RUNNING THE TESTS
------------------------------------------------------------------------

* Build a parallel version of the ORAC program, e.g.:

   cd <orac6-root-dir>
   ./configure -INTEL -FFTW -MPI
   make   ! creates an MPI executable in the INTEL-FFTW-MPI dir

* Adjust the variables in the first section of ./Makefile:

  O_BIN_P
  ORAC_P

* Adjust the variables in the "init and defaults" section of
  ./run_parallel_tests.bash to your MPI environment:

  MPIRUN
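
  As a sketch only (the exact meaning of these variables is defined in
  the Makefile itself; the paths below are hypothetical), the adjusted
  section might read:

```make
# Hypothetical paths -- point these at your MPI build:
O_BIN_P = ./INTEL-FFTW-MPI
ORAC_P  = $(O_BIN_P)/orac
```

  Similarly, in run_parallel_tests.bash the MPIRUN variable would
  typically name the launcher of your MPI environment, e.g. mpiexec
  (MPICH2) or mpirun (OPENMPI), possibly with its host/process options.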

* Start your MPI environment, e.g. for MPICH2:

  mpdallexit
  mpdboot -n <no_of_hosts>

------------------------------------------------------------------------
TO RUN THE TESTS
------------------------------------------------------------------------

Tests are preferably run through `make'.

* To run all the tests:

  `make parallel'

  This runs the command `./run_parallel_tests.bash 4', which in turn
  calls the ancillary program

    ./fes


The tests will create NP work directories PAR0000, PAR0001, ..., with
output files from each instance of the program, and a collection of
selected output lines in OUT_PARALLEL_TEST; this file is to be
compared to the reference file OUT_PARALLEL.
NOTE that even small differences of numerical origin will soon drive a
trajectory away from the reference path. Thus, the work distribution
may differ significantly from the reference data.
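
The comparison itself can be done with diff. The snippet below uses
tiny stand-in files purely to illustrate the step and the expected
outcome; in the test directory you would diff the real
OUT_PARALLEL_TEST against OUT_PARALLEL:

```shell
# Stand-in files for illustration only; the real ones are produced by
# the test run and shipped with the distribution, respectively.
printf 'work value:  10.013\n' > OUT_PARALLEL_TEST
printf 'work value:  10.027\n' > OUT_PARALLEL

# diff exits with status 1 when the files differ; small numerical
# drift between runs is normal, so do not treat this as a failure.
diff OUT_PARALLEL_TEST OUT_PARALLEL || true
```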

In this example, parallel execution takes place on the MPI layer only:
each NE trajectory is executed serially on a single core, which
produces NCORES independent NE trajectories. The hybrid OpenMP/MPI
version of the code can use more than one core to speed up the
execution of each NE trajectory, hence producing a maximum number of
MPI instances given by NP=NCORES/NTHREADS.
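
For example, with hypothetical counts of 16 cores and 4 OpenMP threads
per MPI instance:

```shell
NCORES=16      # hypothetical: total cores available
NTHREADS=4     # hypothetical: OpenMP threads per MPI instance
NP=$((NCORES / NTHREADS))
echo "NP=$NP"  # prints NP=4: at most 4 MPI instances run concurrently
```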

------------------------------------------------------------------------
SHORT DESCRIPTION OF THE TESTS
------------------------------------------------------------------------


- Please read the manual ( ${ORAC_DIR}/doc/orac-manual.pdf ) for a
  detailed description of the program and its input.



# Tests 1Px.in and 2Px.in do not actually run in parallel. They run
#  one instance of the program in the MPI environment

1Pa.in    Run a single NVT steered molecular dynamics simulation:
          stretching of helix 10-ala in vacuo
1Pb.in    Run a single NVT steered molecular dynamics simulation:
          bending of helix decaalanine in vacuo
1Pc.in    Run a single NVT steered molecular dynamics simulation:
          torque of helix decaalanine in vacuo
1Pd.in    Run a single NVT steered molecular dynamics simulation:
          simultaneous stretching, bending and torque of helix
          decaalanine in vacuo

#    The following tests are designed to produce bidirectional
#    non-equilibrium work distributions for the case of the
#    folding/unfolding of decaalanine.

2Pa.in produces canonically distributed restart configurations
       in the a-helix state in the RESTART_A dir.

2Pb.in produces canonically distributed restart configurations
       in the unfolded state in the RESTART_B dir.

#
# Tests 3x.in run NP  instances of the program in parallel
#
3a.in  (parallel job only) produces the work distribution (i.e. many
       trajectories in parallel) for the forward process

3b.in  (parallel job only) produces the work distribution (i.e. many
       trajectories in parallel) for the backward process

# Finally, program ./bin/fes calculates the Potential of Mean Force in
# bi-directional SMD using the trajectories produced in tests 3x.in

