Plamen P. Angelov (F'16, SM'04, M'99) holds a Personal Chair (full Professorship) in Intelligent Systems with the School of Computing and Communications, Lancaster University, UK. He obtained his Ph.D. (1993) and his D.Sc. (2015) from the Bulgarian Academy of Sciences. He is the Vice President of the International Neural Networks Society, a member of the Board of Governors of the IEEE Systems, Man and Cybernetics Society, and a Distinguished Lecturer of the IEEE. He is Editor-in-Chief of the Evolving Systems journal (Springer) and Associate Editor of the IEEE Transactions on Fuzzy Systems, the IEEE Transactions on Cybernetics, and several other journals. He has received various awards and is internationally recognized for pioneering results in on-line and evolving methodologies and algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems and autonomous machine learning. He holds a wide portfolio of research projects and leads the Data Science group at Lancaster.

In this paper, a novel on-line evolving fuzzy clustering method, called EFCM, is presented; it extends the evolving clustering method (ECM) of Kasabov and Song (2002). Since it is an on-line algorithm, the fuzzy membership matrix of the data is updated whenever an existing cluster expands or a new cluster is formed. EFCM does not require the number of clusters to be pre-defined. The algorithm is tested on several benchmark data sets, such as Iris, Wine, Glass, E-Coli, Yeast and Italian olive oils. EFCM achieves the lowest objective function value compared to ECM and Fuzzy C-Means, and it is significantly faster (by several orders of magnitude) than any of the off-line batch-mode clustering algorithms. A methodology is also proposed for using the Xie-Beni cluster validity measure to optimize the number of clusters.
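
The update cycle described here (expand the nearest cluster when a sample falls within a distance threshold, otherwise spawn a new cluster, and refresh the fuzzy membership matrix whenever the structure changes) can be sketched as follows. This is an illustrative reconstruction with assumed names (`dthr`, `fuzzy_memberships`), not the authors' EFCM code:

```python
import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    """Standard FCM-style membership matrix U (n_samples x n_clusters)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                      # avoid division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def efcm_online(stream, dthr=1.0, m=2.0):
    """Sketch of online evolving fuzzy clustering: expand the nearest
    cluster if the sample falls within dthr, otherwise create a new
    cluster; memberships are refreshed on every structural change.
    Assumes a non-empty stream of float vectors."""
    centers, counts, seen = [], [], []
    for x in stream:
        seen.append(x)
        if not centers:
            centers.append(x.copy()); counts.append(1)
        else:
            d = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(d))
            if d[j] <= dthr:                   # expand existing cluster
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]
            else:                              # form a new cluster
                centers.append(x.copy()); counts.append(1)
        U = fuzzy_memberships(np.asarray(seen), np.asarray(centers), m)
    return np.asarray(centers), U
```

Each sample is processed exactly once, which is what makes the scheme orders of magnitude faster than batch clustering that revisits the whole data set per iteration.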

[14,43] and the more general and philosophical concept of knowledge generation or discovery from the data [26]. Both refer to the non-trivial process of identifying valid and understandable/interpretable structure in the data. In this respect, system identification is meant in this paper mostly as model structure identification rather than the more limited, and in practice more common, parameter identification. One can note that parameter identification under a fixed model structure is nothing more than adjustment or tuning, and thus has obvious limitations related first of all to the choice of the model structure. Since data streams are often non-stationary, it is logical to assume that the structure of the data is also dynamic, that is, that it evolves. An e-ntelligent system continuously learns from new data and integrates this data with the existing models. It develops its structure and functionality continuously, always adapting and modifying its knowledge representation. The e-ntelligent system approach is demonstrated here through two system modelling techniques that the authors have introduced recently and continue to develop, namely evolving connectionist systems [11] (ECOS) and evolving fuzzy systems [12,13] (EFS).

Clustering is a young field in which work and study are ongoing, because it is widely used across different sciences as a solution technique. In recent years the method has been optimized, and the results of these optimizations have been published. The goal of optimization is to obtain the minimum number of iterations and clusters with the most similar members. In this paper, two common algorithms are compared for recognizing DoS attacks. The results show that the K-means algorithm identifies these kinds of attacks better. We should note, however, that these results are not conclusive: with different fields chosen for study, Fuzzy k-means can act better than k-means.

Once the local regions (clusters) are elicited, they are projected from the high-dimensional space onto the one-dimensional axes to form the fuzzy sets serving as antecedent parts of the rules. Hereby, one cluster is associated with one rule. A visualization of this projection concept is shown in Figure 5 (a three-dimensional example, visualized as a ground plan), where three two-dimensional clusters are projected onto the two input axes (the output axis is the third dimension), forming the antecedent parts. The (linear) consequent parameters are estimated by a local learning approach, that is, for each rule separately. This is also because in [28] it is reported that local learning has some favourable advantages over global learning (estimating the parameters of all rules in one sweep), such as smaller matrices to be inverted (hence more stable and faster), a better interpretation of the consequent functions (local piecewise hyper-planes snuggling along the real trend of the non-linear surface), and higher flexibility when adjoining new rules on demand (e.g. during an incremental learning phase). The underlying optimisation function is a weighted least squares problem, defined by
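
The formula itself is missing from this excerpt; in the local-learning literature for Takagi-Sugeno fuzzy systems the per-rule weighted least-squares objective is commonly written as follows (a reconstruction using standard notation, not necessarily the authors' exact symbols):

```latex
J_i \;=\; \sum_{k=1}^{N} \Psi_i\bigl(\vec{x}(k)\bigr)\,
          \bigl(y(k) - \hat{y}_i(k)\bigr)^2
\;\longrightarrow\; \min_{\vec{w}_i}
```

where $\Psi_i$ is the normalized membership degree of rule $i$ for the $k$-th sample, $\hat{y}_i$ the output of the rule's local linear model, and $\vec{w}_i$ its consequent parameter vector; each rule is fitted separately using its own weights, which is exactly the local-learning property discussed above.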

This thesis first addressed engineering a complex network by controlling its nodes. The goal was to identify the best drivers, which facilitate synchronisation of the network over the widest range of coupling strengths. Central nodes could be good candidates, and heuristic centrality measures such as degree, betweenness or closeness centrality can be considered, although they are not related to the dynamics of the network. We proposed a new controllability centrality to find the best driver node(s). In order to engineer a network for better collective behaviour, this metric was proposed to identify the set of driver nodes most influential on the controllability of a dynamical network. The metric is based on a single eigendecomposition of the Laplacian matrix of the graph; thus, it is computationally efficient and applicable to large-scale networks. Simulation results confirm the precision of this metric in networks with scale-free, Watts-Strogatz and random topologies. Interestingly, controllability centrality exhibits sub-modularity: with only one eigendecomposition of the Laplacian matrix, the best subset of nodes of any desired size can be identified. As an application, this metric successfully predicted the best frequency leader in secondary frequency control of distributed generation systems. This is one of the real-time requirements of future power management systems, where there are many small-capacity generators. The metric was also applied to identify brain areas whose activation may prevent disease from propagating in dementia networks. Results for these applications should prove of interest to the network community.
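
The single-eigendecomposition workflow can be illustrated as below. The exact controllability-centrality formula is defined in the thesis and is not reproduced here; the score used in this sketch (node weight in the slowest Laplacian modes, which are the modes limiting synchronisation) is only an illustrative stand-in:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A for a symmetric adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def rank_driver_nodes(A, k=1):
    """Illustrative driver ranking: one eigendecomposition of L, then
    score each node by its weight in the k slowest non-trivial modes.
    NOT the thesis metric, just a sketch of the same workflow."""
    L = laplacian(A)
    vals, vecs = np.linalg.eigh(L)          # single eigendecomposition
    slow = vecs[:, 1:k + 1]                 # skip the trivial zero mode
    score = (slow ** 2).sum(axis=1)
    return np.argsort(score)[::-1]          # best candidates first
```

Because the eigendecomposition is computed once, evaluating a candidate driver set of any size is essentially free afterwards, which is the sub-modularity-style property the text refers to.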

Abstract—Gene recruitment, or co-option, is defined as the placement of a gene under a foreign regulatory system. Such rearrangement of pre-existing regulatory networks can lead to an increase in genomic complexity, and this reorganization is recognized as a major driving force in evolution. We simulated the evolution of gene networks by means of the Genetic Algorithms (GA) technique. We used standard GA methods of (point) mutation and multi-point crossover, as well as our own operators for introducing or withdrawing genes in the network. The starting point for our computer evolutionary experiments was a minimal 4-gene dynamic model representing the real genetic network controlling segmentation in the fruit fly Drosophila. Model output was fit to experimentally observed gene expression patterns in the early fly embryo. We found that the mutation operator, together with the gene introduction procedure, was sufficient for recruiting new genes into pre-existing networks. Reinforcement of the evolutionary search by crossover operators facilitates this recruitment. Gene recruitment causes outgrowth of an evolving network, resulting in structural and functional redundancy. Such redundancies can affect the robustness and evolvability of networks.
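
The two kinds of operators mentioned, standard multi-point crossover and a gene-introduction operator that grows the network, can be sketched as below. This is an illustrative analogue on a regulatory-weight-matrix genome, not the authors' implementation:

```python
import random

def multipoint_crossover(a, b, points=2, rng=random):
    """Standard multi-point crossover on two equal-length genomes:
    alternate segments of a and b between the cut points."""
    cuts = sorted(rng.sample(range(1, len(a)), points))
    child, take_a, prev = [], True, 0
    for cut in cuts + [len(a)]:
        child.extend((a if take_a else b)[prev:cut])
        take_a = not take_a
        prev = cut
    return child

def introduce_gene(network, new_row, new_col):
    """Grow an n-by-n regulatory-matrix genome by one gene: append a
    column of weights onto the new gene and a row of weights from it
    (illustrative analogue of the paper's gene-introduction operator)."""
    grown = [row + [new_col[i]] for i, row in enumerate(network)]
    grown.append(list(new_row) + [0.0])        # new gene, no self-loop
    return grown
```

A withdrawal operator would be the mirror image: delete one row/column pair, shrinking the matrix by one gene.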

ABSTRACT: In recent years, advances in technology have led to the generation of large volumes of data, mainly numerical, raising interest in processing them to extract knowledge and information. The main objective is to make the systems from which these data were obtained more efficient and to support decision making. The information in a database is implicit in the values that represent the different states of the systems, whereas the knowledge is implicit in the relations between the values of the different attributes or features present. These relationships are identified by groups yet to be discovered, and they describe the relationships between input and output states. One of the main human faculties is to classify, differentiate and group different objects according to their attributes. This article investigates how to apply fuzzy clustering algorithms, which allow an element to belong to more than one group with a degree of membership, in order to obtain relevant characteristics or recognize patterns in a data set. We discuss a study of four main fuzzy algorithms, explaining each algorithm, how they are related, and how each new algorithm solves problems that the previous one did not solve efficiently.

node (cluster center, rule node) by less than a certain threshold are allocated to the same cluster. Samples that do not fit into existing clusters form new clusters. Cluster centers are continuously adjusted according to new data samples, and new clusters are created incrementally. ECOS learn from data and automatically create or update a local fuzzy model/function, e.g.:
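
The example after "e.g." is missing from this excerpt. In the ECOS/DENFIS literature such a local fuzzy model typically takes a Takagi-Sugeno-like form; a reconstructed, hedged example (standard notation, not necessarily the authors' exact rule):

```latex
\text{IF } \vec{x} \text{ is close to cluster center } \vec{c}_j
\quad \text{THEN} \quad
\hat{y} \;=\; \beta_{j0} + \vec{\beta}_j^{\top}\,\vec{x}
```

with one such local function per rule node, updated incrementally as its cluster center moves.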

models assign equal importance to all the samples seen so far. On-line model identification is advantageous, especially when convergence to an optimality criterion or a stable state of the model structure can be achieved [34]. However, this only holds for data streams which are generated from the same underlying data distribution and do not show any drift or shift to other parts of the input/output space [44]. Drift (respectively, shift) indicates the necessity of (gradually) out-dating previously learned relationships (in terms of structure and parameters) during the incremental learning process, as they are no longer valid and should hence be eliminated from the model (consider, for instance, completely new types of images in a surface inspection system). An alternative to gradual out-dating is the concept of re-learning, which can be done either based on all samples seen so far, providing lower weights for older samples in the learning process, or based on the latest data blocks only. The first variant slows down the learning process significantly over time, such that on-line real-time demands are hardly met. The second variant has the problem that older data is usually completely forgotten when extracting the models from scratch on the new data blocks, causing a crisp switch between two models (from the old to the new). With gradual forgetting, a smooth transition from an old model to a new one can be achieved instead of an abrupt switch. Drift handling (in connection with gradual forgetting) has already been applied in other machine learning techniques, e.g. in connection with Support Vector Machines (SVMs) [25,26], ensemble classifiers [39], and instance-based (lazy) learning approaches [14,18]. However, to the best of our knowledge, this concept has not yet been applied to fuzzy systems (nor has the concept of re-learning).
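
The classic mechanism for gradual forgetting in incremental parameter estimation is recursive least squares with a forgetting factor, which exponentially down-weights old samples instead of dropping them abruptly. A minimal sketch of that standard technique (not the paper's exact drift-handling rule):

```python
import numpy as np

class RLSForgetting:
    """Recursive least squares with forgetting factor lambda_ < 1:
    the influence of a sample decays like lambda_**age, giving a
    smooth transition between old and new models rather than a
    crisp switch. Illustrative sketch of 'gradual forgetting'."""
    def __init__(self, dim, lambda_=0.99, p0=1e3):
        self.w = np.zeros(dim)            # parameter estimate
        self.P = np.eye(dim) * p0         # inverse covariance (scaled)
        self.lam = lambda_

    def update(self, x, y):
        Px = self.P @ x
        g = Px / (self.lam + x @ Px)          # gain vector
        self.w += g * (y - x @ self.w)        # correct the estimate
        self.P = (self.P - np.outer(g, Px)) / self.lam
        return self.w
```

With lambda_ = 1 this degenerates to plain RLS (equal weight on all samples, the very behavior criticized above); lambda_ around 0.95-0.99 gives an effective memory of roughly 1/(1 - lambda_) samples.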

Figure caption (beginning truncated): … before regulation. (c) First cluster formed after regulation. (B) Clustering. (a) Introduction of a novel data point with no left neighbor. (b) Creation of a new cluster before regulation. (c) Final appearance of the fuzzy partitioning after regulation. (d) Introduction of a novel data point with both left and right neighbors. (e) Creation of a new cluster before regulation. (f) Final appearance of the fuzzy partitioning after regulation (Tung et al., 2011).

Abstract—Evolving fuzzy systems (EFSs) are now well developed and widely used thanks to their ability to self-adapt both their structures and parameters online. Since the concept was first introduced two decades ago, many different types of EFSs have been successfully implemented. However, very few works consider the stability of EFSs, and these studies were limited to certain types of membership functions with specifically pre-defined parameters, which largely increases the complexity of the learning process. At the same time, stability analysis is of paramount importance for control applications and provides theoretical guarantees for the convergence of the learning algorithms. In this paper, we introduce the stability proof of a class of EFSs based on data clouds, which are grounded in AnYa-type fuzzy systems and the recently introduced empirical data analysis (EDA) methodological framework. By employing data clouds, the class of AnYa-type EFSs considered in this work avoids the traditional way of explicitly defining membership functions for each input variable, and its learning process is entirely data-driven. The stability of the considered AnYa-type EFS is proven through Lyapunov theory, and the proof shows that the average identification error converges to a small neighborhood of zero. Although the stability proof presented in this paper is elaborated specifically for the considered EFS, it is also applicable to general EFSs. The proposed method is illustrated with the Box-Jenkins gas furnace problem, a nonlinear system identification problem, the Mackey-Glass time series prediction problem, eight real-world benchmark regression problems, and a high-frequency trading prediction problem. Compared with other EFSs, the numerical examples show that the EFS considered in this paper provides guaranteed stability as well as better approximation accuracy.

Covariance provides a measure of the strength of the correlation between two or more sets of random variables [13]. In general, when a mathematical model of the system can be obtained, the covariance matrices are chosen based on experience or through experiments. However, this can be a daunting process. It is very difficult (if not impossible) to derive accurate mathematical models for many complex systems in mechanical engineering, which leaves the system to be approximated by a Kalman filter (KF). As a consequence, the covariance matrices must also be approximated. This process leaves room for significant error, making the training technique somewhat unreliable. Therefore, a new method to update the process noise and observation error covariance matrices is proposed in this section to improve the robustness of the training technique, which is defined as:
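
The paper's own update rule is cut off in this excerpt. For orientation, a widely used alternative is innovation-based adaptive estimation, which sets Q and R from the sample covariance of recent innovations; the sketch below shows that standard recipe, not the method proposed here (all names are illustrative):

```python
import numpy as np

def adapt_covariances(innovations, H, P_prior, K, window=30):
    """Innovation-based covariance adaptation (standard recipe, shown
    only as an illustration -- the section's own definition is not
    reproduced here). C is the sample covariance of the last `window`
    innovations e = z - H x_prior; then R ~= C - H P H^T and
    Q ~= K C K^T."""
    E = np.asarray(innovations[-window:])
    C = (E.T @ E) / len(E)                # sample innovation covariance
    R_new = C - H @ P_prior @ H.T         # adapted measurement noise
    Q_new = K @ C @ K.T                   # adapted process noise
    return Q_new, R_new
```

In practice the window length trades responsiveness against noise in the covariance estimates, and C - H P H^T must be guarded against losing positive definiteness.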

It is well known that fuzzy rule-based systems are universal function approximators [35]; they are suitable for extracting interpretable knowledge. Therefore, they are viewed as a promising framework for designing effective and powerful classifiers. The type of classifiers that can be built using the recently introduced evolving fuzzy rule-based systems [10],[11] can be called evolving [12], which differs from 'evolutionary'. Evolving fuzzy rule-based classifiers develop and adapt the non-linear classification surface in on-line mode. Evolutionary/genetic algorithms have recently been used for the design of fuzzy rule-based systems in general [13] and classifiers in particular [6],[39]. They are based on the off-line optimization of one or more criteria in designing the fuzzy rule-base (classifier), using paradigms that stem from Nature such as mutation, crossover, and reproduction. Evolving, in the sense that we use it in this paper and related works, includes self-organising and self-developing in terms of the classifier (rule-base) structure. In this sense, this paradigm can be considered a higher level of adaptation (adaptation is usually related to parameters, not to the structure of the system [15]). Note that similar principles were used by the authors in developing evolving classifiers also in [14] and [23]. The concept is taken further in this paper compared to [14] by analysing different possible architectures of eClass. Compared to [23], the backbone of the approach is different: here and in [14] we use the evolving fuzzy Takagi-Sugeno (eTS) approach, while in [23] we extended FLEXFIS [27] and its modification FLEXFIS-Mod [36] to the classification case (called FLEXFIS-Class), both families originally designed for fuzzy regression modelling tasks. The eTS family of evolving TS models (eTS, MIMO-eTS, exTS) has been recently

Abstract. This paper describes the results of the working group investigating the issues of empirical studies for evolving systems. The group found that many issues are central to successful evolution and concluded that this is a very important area within software engineering. Nine main areas were selected for consideration. For each of these areas, the central issues were identified, as well as the success factors. In some cases, success stories were also described and the critical factors accounting for the success analysed. It was later found that a number of areas were so tightly coupled that it was important to discuss them together.

Based on the analysis, the fuzzy clustering algorithms, including FCM, SFCM and PCM, are highly dependent on the features used. For example, FCM using PI is a suitable feature for segmenting objects in one type of image, while using PL produces better results for another. In some cases, FCM using CIL shows good segmentation performance [61, 72-76]. This raises an open question: which feature set produces the best segmentation results for which type of image [61]? Addressing this issue, Ameer et al. proposed a new algorithm, merging initially segmented regions (MISR) [61], which merges similar regions initially segmented by the clustering algorithm run separately with a pair of feature sets from PI, PL, and CIL. A detailed description of the MISR algorithm is given in Algorithm 4 below.
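
For reference, the FCM step that all three variants build on can be written compactly as below. This is a minimal sketch of plain fuzzy c-means only; the PI/PL/CIL feature extraction and the MISR merging stage are outside its scope:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-6):
    """Plain fuzzy c-means: alternate membership and centre updates
    until the centres stop moving. Deterministic spread initialisation
    (first/last samples) is used here for reproducibility."""
    init = np.linspace(0, len(X) - 1, c).astype(int)
    centers = X[init].astype(float)
    for _ in range(iters):
        d = np.fmax(np.linalg.norm(X[:, None] - centers[None], axis=2), 1e-12)
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
        Um = U.T ** m
        new = (Um @ X) / Um.sum(axis=1, keepdims=True)
        moved = np.linalg.norm(new - centers)
        centers = new
        if moved < tol:
            break
    return centers, U
```

SFCM adds a spatial regularisation term to the membership update, and PCM relaxes the sum-to-one constraint into a possibilistic one; both reuse this alternating scheme.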

The evolution of the coefficients of the monetary rule in the structural VAR accords well with narrative accounts of post-WWII U.S. economic history, with (e.g.) significant increases in the long-run coefficients on inflation and money growth around the time of the Volcker disinflation. Overall, however, our evidence points towards a dominant role played by good luck in fostering the more stable macroeconomic environment of the last two decades. First, the Great Inflation was due, to a dominant extent, to large non-policy demand shocks, and to a lesser extent to supply shocks. Second, bringing either Paul Volcker or Alan Greenspan back in time would only have had a limited impact on the Great Inflation episode. Although the systematic component of monetary policy clearly appears to have improved over the sample period, this does not appear to have been the dominant influence in post-WWII U.S. macroeconomic dynamics.
