Neurocomputing

Volume 214, 19 November 2016, Pages 307-316

Multi-agent architecture for Multi-objective optimization of Flexible Neural Tree

https://doi.org/10.1016/j.neucom.2016.06.019

Abstract

In this paper, a multi-agent system is introduced to parallelize the training of the Flexible Beta Basis Function Neural Network (FBBFNT) in response to the challenge of its time cost. Different agents are formed: a Structure Agent is designed for the FBBFNT structure optimization, and a variable set of Parameter Agents is used for the FBBFNT parameter optimization. The main objectives of the FBBFNT learning process are accuracy and structure complexity, and the purpose of the proposed multi-agent system is to reach a good balance between them. To that end, a multi-objective formulation based on Pareto dominance is adopted. The agents use two algorithms, the Pareto dominance Extended Genetic Programming (PEGP) algorithm and the Pareto Multi-Dimensional Particle Swarm Optimization (PMD_PSO) algorithm, for the structure and parameter optimization, respectively. The proposed system is called the Pareto Multi-Agent Flexible Neural Tree (PMA_FNT).

To assess the effectiveness of PMA_FNT, four real benchmark classification datasets are tested. The results are compared with those of classifiers published in the literature.

Introduction

In the Artificial Intelligence field, scientists have long looked to natural life and behaviors to invent and evolve intelligent systems. We can, for instance, mention the Artificial Neural Network (ANN), inspired by the behavior of the human brain, and Evolutionary Computation (EC) algorithms, inspired by various natural phenomena [1].

Thanks to its efficiency, the multi-layer ANN, and in particular the well-known Multi-Layer Perceptron (MLP) topology, has attracted much attention and is now considered a powerful system for complex search problems such as prediction [2], pattern recognition [3] and classification [4], [5].

In fact, the main weakness of neural network training is slow convergence to the desired output, especially on large problems. To circumvent this weakness, many works aim to accelerate NN training while improving its performance. Some researchers focus on training only the output weights of a very large neural network [6], [7]. Others have tried to find alternatives that improve MLP performance. In this context, EC has been considered a good candidate for ANN evolution [8]. EC includes swarm intelligence methods such as Particle Swarm Optimization (PSO) [9], evolutionary algorithms such as the Genetic Algorithm (GA) [10], as well as other metaheuristics such as Harmony Search (HS) [11].

In addition, it is not possible to predict an ideal ANN structure that would solve every problem, so structure adaptation is required to improve neural network performance. The aim of decreasing the complexity of the NN structure while increasing its flexibility led to a new ANN encoding based on a tree representation, called the Flexible Neural Tree (FNT) [3]. In our work, the FNT topology is adapted by using the Beta function as the transfer function (FBBFNT). Although FBBFNTs can solve complex problems [12], the model suffers from a high time cost; in addition, real problems involve large sets of features/inputs, which further increases the time complexity. We therefore parallelize the FBBFNT learning process so that real problems can be handled at a reasonable time cost.
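To make the representation concrete, the tree encoding can be pictured as follows. This is a minimal, hypothetical sketch (the node and field names are ours, not the paper's notation, whose exact genotype is defined in Section 3): internal nodes carry weights and a transfer function, leaves index input variables, and the node count gives a natural complexity measure.

```python
# Minimal sketch of a tree-encoded flexible neural network. Node and field
# names are illustrative, not the paper's notation.
import math
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class TerminalNode:
    """Leaf: forwards one input variable x[input_index]."""
    input_index: int

@dataclass
class FunctionalNode:
    """Internal node: weighted sum of its children passed through a
    transfer function (the Beta basis function in the FBBFNT model)."""
    weights: List[float]
    children: List[Union["FunctionalNode", TerminalNode]] = field(default_factory=list)

def evaluate(node, x):
    """Recursively evaluate the tree on an input vector x."""
    if isinstance(node, TerminalNode):
        return x[node.input_index]
    s = sum(w * evaluate(child, x)
            for w, child in zip(node.weights, node.children))
    return math.tanh(s)  # placeholder transfer; FBBFNT uses the Beta function

def size(node):
    """Node count: a simple measure of structure complexity."""
    if isinstance(node, TerminalNode):
        return 1
    return 1 + sum(size(child) for child in node.children)
```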

On the other hand, the Multi-Agent System (MAS) can be viewed as an intelligent system inspired by social systems. Indeed, as an intelligent being, a human is part of a social system in which he or she operates and interacts with other humans distributed in the same environment. In much the same way, a MAS distributes and coordinates a set of jobs, tasks and decisions among different entities, called agents, to build coherent and interactive systems [13].

In this work, a Multi-Agent System is used to optimize and parallelize the learning process of the evolving Flexible Beta Basis Function Neural Tree (FBBFNT) model. It uses a set of interacting agents: a Structure Agent for the optimization of the NN architecture/structure and a set of Parameter Agents for the optimization of the NN parameters.

Indeed, an agent is an independent and autonomous entity, and agents have the ability to interact, cooperate, coordinate and negotiate with one another. A communication process is therefore put in place to ensure good negotiation between agents. In our model, a negotiation protocol is implemented as the communication protocol: it ensures the exchange of information between agents, which compete to find the optimal ANN for the treated problem. The negotiation strategy is based on two factors, the Agent Dominance Rate (ADR) and the Agent Trust Value (ATV), which evaluate the performance of each agent within the multi-agent system.
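As a rough illustration of this bookkeeping, the sketch below ranks agents by a dominance rate and a trust value. The update rules are our assumptions for illustration only; the paper's actual ADR and ATV definitions are given in Section 3.3.

```python
# Hypothetical sketch of negotiation bookkeeping. The ADR/ATV update rules
# below are illustrative assumptions, not the published formulas.
class AgentRecord:
    def __init__(self, name):
        self.name = name
        self.proposals = 0   # solutions proposed so far
        self.dominant = 0    # proposals that survived Pareto filtering
        self.trust = 0.5     # ATV-like score in [0, 1]

    def dominance_rate(self):
        """ADR-like quantity: fraction of non-dominated proposals."""
        return self.dominant / self.proposals if self.proposals else 0.0

    def update_trust(self, accepted, lr=0.1):
        """ATV-like quantity: exponentially smoothed acceptance history."""
        self.trust += lr * ((1.0 if accepted else 0.0) - self.trust)

def rank_agents(records):
    """Order agents for negotiation by combined dominance and trust."""
    return sorted(records,
                  key=lambda r: 0.5 * (r.dominance_rate() + r.trust),
                  reverse=True)
```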

To attain the optimal solution, the model takes two main objectives into consideration: the accuracy and the structure complexity of the neural network. These objectives are in conflict because of their opposite influences on the convergence and effectiveness of a given solution, a conflict implicitly pointed out in [14], [2]. A simple aggregation of the two objective functions lets a system avoid confronting this conflict; however, the objectives impact the NN differently, and a good balance between them should be ensured. Consequently, we developed a multi-objective optimization to resolve this trade-off: the learning process adopts a multi-objective optimization based on Pareto dominance (a minimal dominance test is sketched after this paragraph). The learning agents use multi-objective evolutionary algorithms, the Pareto dominance Extended Genetic Programming (PEGP) algorithm and a Pareto Multi-Dimensional Particle Swarm Optimization (PMD_PSO) algorithm, to optimize the FBBFNT structure and parameters, respectively. Accordingly, the model is called the Pareto Multi-Agent Flexible Neural Tree (PMA_FNT); it is described in Section 3. The functionalities and training methods of the Structure and Parameter agents are detailed in Sections 3.1 and 3.2, respectively, and the communication process, including the negotiation strategy and protocol, is described in Section 3.3. Section 4 presents the experimental results on real classification problems together with a comparative study against other classifiers. The final section gives some concluding remarks.
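Since the whole learning process filters solutions through Pareto dominance, the minimal dominance test announced above is worth spelling out. The following is standard textbook dominance for the two minimized objectives, prediction error and structure complexity, not code from the paper:

```python
# Pareto dominance for two minimized objectives:
# a candidate is a pair (error, complexity).
def dominates(a, b):
    """True if a is no worse than b in both objectives and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Keep only the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# A small, less accurate tree survives alongside a large, accurate one;
# a tree worse in both objectives is discarded.
print(pareto_front([(0.05, 40), (0.08, 12), (0.09, 45)]))
# -> [(0.05, 40), (0.08, 12)]
```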

Section snippets

Related works on parallel ANN training

Among the first attempts at parallelizing neural network training, the model presented in [15], [16] distributes neurons across cluster nodes working in parallel; this method relies on expensive and complex hardware. Moving from network-parallel training to pattern-parallel training, Suri et al. [17] and Dahl et al. [18] duplicated the ANN at each node to train a subset of patterns from the training set in parallel. In [19], Quteishat et al. presented an ANN based on multi-agent

Pareto multi-agent flexible neural tree PMA_FNT

To begin with, the fundamental characteristics of the neural network used should be described. Our model belongs to the Multi-Layer Perceptron (MLP) family of ANNs. It has two hidden layers, which contain a set of functional nodes that use the Beta function [21] as their transfer function, as well as a set of terminal nodes. While the output layer contains one linear functional node, the input layer is composed of a set of terminal nodes. The model also adopts a tree encoding for flexible handling of the ANN
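For reference, the one-dimensional Beta function of Alimi [21], in the form commonly used in FBBFNT-related works, can be sketched as below. Treat this as an illustrative reconstruction under our parameter naming (support bounds x0, x1 and shape exponents p, q), not as the paper's exact definition:

```python
# Sketch of the one-dimensional Beta basis function (after Alimi [21]);
# illustrative reconstruction, not the paper's code.
def beta_basis(x, x0=-1.0, x1=1.0, p=2.0, q=2.0):
    """Bell-shaped function on (x0, x1) with shape exponents p, q > 0;
    zero outside its support."""
    if not (x0 < x < x1):
        return 0.0
    c = (p * x1 + q * x0) / (p + q)  # center, where the function peaks at 1
    return ((x - x0) / (c - x0)) ** p * ((x1 - x) / (x1 - c)) ** q
```

Varying p and q skews the bell, which is what makes the Beta function a more flexible transfer function than a fixed, symmetric Gaussian.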

Experimentation

Our system was implemented on the Matlab platform, using the Distributed Computing Toolbox and the Parallel Computing Toolbox to build the multi-agent system. In this section, the proposed model is applied to real classification datasets to evaluate its performance. These well-known datasets (such as Leukemia, Colon Cancer and Lymphoma) are high-dimensional; for this reason, a feature selection phase was introduced to reduce the number of features and prevent our system from misleading
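The snippet breaks off before naming the selection method, so purely as a neutral placeholder, a univariate filter of the kind routinely applied to such gene-expression data might look like this (scikit-learn is used only for illustration; the paper's actual pipeline is described in Section 4):

```python
# Illustrative univariate feature filter for high-dimensional data
# (e.g. gene-expression sets); NOT the paper's feature-selection method.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2000))   # e.g. 62 samples x 2000 genes
y = rng.integers(0, 2, size=62)   # binary class labels

selector = SelectKBest(score_func=f_classif, k=50)  # keep 50 best features
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)            # (62, 50)
```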

Conclusion

In this paper, we proposed a parallelized learning process for the Flexible Beta Basis Function Neural Network using a multi-agent architecture. This system is called the Pareto Multi-Agent Flexible Neural Tree (PMA_FNT). It searches for the optimal neural network with respect to two main objectives, accuracy and structure complexity; for that, PMA_FNT applies a multi-objective optimization based on Pareto dominance. It distributes the learning process to a Structure agent and a variable set of

Acknowledgement

The authors would like to acknowledge the financial support of this work by grants from the General Direction of Scientific Research and Technological Renovation (DGRSRT), Tunisia, under the ARUB program 01/UR/11/02. Ajith Abraham acknowledges the support of the IT4Innovations Centre of Excellence project, reg. no. CZ.1.05/1.1.00/02.0070, supported by the Operational Program 'Research and Development for Innovations' funded by the Structural Funds of the European Union and state


References (44)

  • P.K. Simpson, Fuzzy min-max neural networks. I. Classification, IEEE Trans. Neural Netw. (1992)
  • M. Ammar, S. Bouaziz, A.M. Alimi, A. Abraham, Negotiation process for bi-objective multi-agent flexible neural tree...
  • J. Kennedy, R. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks,...
  • J.H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology,...
  • M. Ammar, S. Bouaziz, A.M. Alimi, A. Abraham, Hybrid harmony search algorithm for global optimization, in: 2013 World...
  • S. Bouaziz et al., Evolving flexible beta basis function neural tree using extended genetic programming & hybrid artificial bee colony, Appl. Soft Comput. (2016)
  • G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence (1999)
  • P. Farber, K. Asanovic, Parallel neural network training on multi-spert, in: 1997 3rd International Conference on...
  • N. Mache, P. Levi, Parallel Neural Network Training and Cross Validation on a Cray T3E System and Application to Splice...
  • N.R. Suri, D. Deodhare, P. Nagabhushan, Parallel Levenberg–Marquardt-based neural network training on Linux clusters—a...
  • G. Dahl, A. McAvinney, T. Newhall, et al., Parallelizing neural network training for cluster systems, in: Proceedings...
  • A.M. Alimi, The beta fuzzy system: approximation of standard membership functions, Proc. 17eme J. Tunis. d'Electrotech....

Marwa Ammar was born in Sfax (Tunisia) in 1987. She graduated in Computer Engineering in 2011. She is currently working toward the Ph.D. degree at the University of Sfax. Her research interests include computational intelligence: artificial neural networks, evolutionary computation and multi-agent systems.

Souhir Bouaziz was born in Sfax (Tunisia) in 1984. She graduated in Computer Engineering in 2008 and obtained a Ph.D. degree in Computing System Engineering in 2013 at the University of Sfax. She is currently an Assistant Professor at the University of Gabes. Her research interests include computational intelligence: neural networks, evolutionary computation and swarm intelligence.

Adel M. Alimi was born in Sfax (Tunisia) in 1966. He graduated in Electrical Engineering in 1990 and obtained a Ph.D. and then an HDR, both in Electrical & Computer Engineering, in 1995 and 2000 respectively. He is now a Professor in Electrical & Computer Engineering at the University of Sfax. His research interests include applications of intelligent methods (neural networks, fuzzy logic, evolutionary algorithms) to pattern recognition, robotic systems, vision systems, and industrial processes. He focuses his research on intelligent pattern recognition, learning, analysis and intelligent control of large-scale complex systems. He is an Associate Editor and member of the editorial board of many international scientific journals (e.g. "Pattern Recognition Letters", "Neurocomputing", "Neural Processing Letters", "International Journal of Image and Graphics", "Neural Computing and Applications", "International Journal of Robotics and Automation", "International Journal of Systems Science", etc.). He was a Guest Editor of several special issues of international journals (e.g. Fuzzy Sets & Systems, Soft Computing, Journal of Decision Systems, Integrated Computer-Aided Engineering, Systems Analysis Modelling and Simulations). He was the General Chairman of the International Conference on Machine Intelligence ACIDCA-ICMI in 2000 and 2005. He is an IEEE Senior Member and a member of IAPR, INNS and PRS. He was the 2009–2010 IEEE Tunisia Section Treasurer, the 2009–2010 IEEE Computational Intelligence Society Tunisia Chapter Chair, the 2011 IEEE Sfax Subsection Chair, the 2010–2011 IEEE Computer Society Tunisia Chair, the 2011 IEEE Systems, Man, and Cybernetics Tunisia Chapter Chair, the SMCS corresponding member of the IEEE Committee on Earth Observation, and the IEEE Counselor of the ENIS Student Branch.

Ajith Abraham received the Ph.D. degree in Computer Science from Monash University, Melbourne, Australia. He is currently the Director of Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence, USA, which has members from more than 85 countries. He has worldwide academic and industrial experience of over 20 years. He works in a multi-disciplinary environment involving machine intelligence, network security, various aspects of networks, e-commerce, Web intelligence, Web services, computational grids, data mining, and their applications to various real-world problems. He has numerous publications and citations (h-index 40) and has given more than 50 plenary lectures and conference tutorials in these areas. Since 2008, he has been the Chair of the IEEE Systems, Man and Cybernetics Society Technical Committee on Soft Computing, and since 2011 a Distinguished Lecturer of the IEEE Computer Society representing Europe. Dr. Abraham is a Senior Member of the IEEE, the Institution of Engineering and Technology (UK) and the Institution of Engineers Australia, etc. He is the founder of several IEEE-sponsored conferences, which are now annual events. More information at: 〈http://www.softcomputing.net〉.
