This part continues the discussion, started in the previous part, of how to optimize a neural network. There I presented the mathematical and algorithmic improvements; here I discuss the parallelization possibilities offered by an ANN. Used together, the two approaches can lead to improved performance in the training of ANNs.
This part has two main sections:
The first section is theoretical, starting with the fundamentals of parallelism: definitions and types, evaluation parameters, etc. Here I also discuss and exemplify the two main classes of parallel platforms: dedicated parallel computers (MPPs) and computer networks. After these introductory remarks I turn to the topic "Parallelism in ANNs", where I propose an ideal environment for ANN parallelization and discuss how the ANN can be mapped onto the machine.
The second section deals with the computer simulations: the experiments, the results, and a discussion of implementing parallel ANNs. I first describe the applications I have implemented. Since the environment I used is a UNIX environment, and thus not the ideal environment described in detail at the end of the first section, I explain how the (in this case simulated) parallelism is mapped onto UNIX. This section also contains some implementation details of the parallelism simulated on the UNIX system. Finally, the results obtained through simulation are discussed and evaluated; some time gain proves possible even under UNIX with simulated parallelism.