First, we describe, analyze, and present the theoretical derivations and source codes for several (modified and well-known) nonlinear neural network training algorithms based on unconstrained optimization theory and applied to supervised network training. In addition to indicating the relative efficiency of these algorithms in an application, we analyze their main characteristics and present the MATLAB source codes. The algorithms in this part depend on modified variable metric updates; for the purpose of comparison, we specify the default parameter values for each algorithm and illustrate them on a simple nonlinear test problem. Furthermore, this thesis also emphasizes conjugate gradient (CG) algorithms, which are commonly used for minimizing nonlinear test functions; these are combined with the modified back-propagation (BP) algorithm, yielding a few new fast training algorithms for multilayer neural networks. This study deals with the determination of new search directions by exploiting the information computed by gradient descent as well as the previous search directions.
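The modified update formulas themselves are derived later in the thesis; purely as an illustration of how a CG-type method combines the current gradient with the previous search direction, the following MATLAB sketch shows one weight update using the classical Fletcher-Reeves coefficient. The names gradfun, g_old, d_old, and the fixed step length eta are hypothetical placeholders, not the thesis's code; in practice a line search would determine the step length.

    % Minimal sketch (assumed names): one CG-type weight update for network training.
    % gradfun returns the gradient of the training error at the weight vector w;
    % eta is a fixed step length standing in for a proper line search.
    g_new = gradfun(w);                           % current error gradient
    beta  = (g_new' * g_new) / (g_old' * g_old);  % Fletcher-Reeves coefficient
    d_new = -g_new + beta * d_old;                % new direction from gradient and previous direction
    w     = w + eta * d_new;                      % move the weights along the new direction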