Ebook: Advancing Technology Industrialization Through Intelligent Software Methodologies, Tools and Techniques
Software has become ever more crucial as an enabler, from daily routines to important national decisions. But from time to time, as society adapts to frequent and rapid changes in technology, software development fails to live up to expectations due to issues with efficiency, reliability and security, and because the robustness of methodologies, tools and techniques has not kept pace with the rapidly evolving market.
This book presents the proceedings of SoMeT_19, the 18th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques, held in Kuching, Malaysia, from 23–25 September 2019. The book explores new trends and theories that highlight the direction and development of software methodologies, tools and techniques, and aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. The 56 papers included here are divided into 5 chapters: Intelligent software systems design and techniques in software engineering; Machine learning techniques for software systems; Requirements engineering, software design and development techniques; Software methodologies, tools and techniques for industry; and Knowledge science and intelligent computing.
This comprehensive overview of information systems and research projects will be invaluable to all those whose work involves the assessment and solution of real-world software problems.
Over the rapid industrialization of the 20th century, software became an increasingly crucial enabler of everyday life. From daily routines to important national decisions, software is now a global essential. While creating new markets, opportunities, directions and aspirations for a more reliable, flexible and robust society, the pursuit of perfection and scrutiny has become more feasible through the use of software. Despite this advancement, software development will from time to time disappoint new expectations as society adapts to frequent changes in technology. This reflects issues of efficiency, reliability, security and robustness in present-day software methodologies, tools and techniques, which do not go hand in hand with the rapidly evolving market.
This book explores new trends and theories that highlight the direction and development of software methodologies, tools and techniques, which we hope will provide knowledgeable insights into transforming the role of software sciences within the expansion of global industrialization.
Through intellectual discourse on state-of-the-art research practices, newly developed techniques, enhanced methodologies, software-related solutions and recently developed tools, the conference offered opportunities that reflect the current intellectual landscape as well as resolutions for future directions.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 18th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_19), held in Kuching, Malaysia, from September 23–25, 2019, with the collaboration of the IEEE Malaysia Computer Chapter, the Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, and Iwate Prefectural University (https://ieeecomputer.my/somet2019/).
With SoMeT_19, the series celebrates its 18th anniversary. The SoMeT [Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia); SoMeT_12 (Genoa, Italy); SoMeT_13 (Budapest, Hungary); SoMeT_14 (Langkawi, Malaysia); SoMeT_15 (Naples, Italy); SoMeT_16 (Larnaca, Cyprus); SoMeT_17 (Kitakyushu, Japan); SoMeT_18 (Granada, Spain).] conference series is ranked B+ among high-ranking Computer Science conferences worldwide.
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies. This volume contributes to the conference and the SoMeT series of which it forms a part, by providing an opportunity for exchanging ideas and experiences in the field of software technology; opening up new avenues for software development, methodologies, tools, and techniques, especially with regard to intelligent software by applying artificial intelligence techniques in software development, and tackling human interaction in the development process for a better high-level interface. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonized performance between the design tool and the user.
Intelligence in software systems reflects the need to apply machine learning methods and data mining techniques to software design for high-level system applications in decision support systems, data streaming, health-care prediction, and other data-driven systems.
A major goal of this work was to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user’s mental state. Tools and techniques related to this aspect form part of the contribution of this book. Another subject raised at the conference was intelligent software design in software ontology and conceptual software design in practical human-centric information system application.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology, addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of carefully selected refereed papers by the reviewing committee, covering (but not limited to):
1. Requirement engineering, especially for high-assurance systems, and requirement elicitation.
2. Software methodologies and tools for robust, reliable, non-fragile software design.
3. Software development techniques and legacy systems.
4. Automatic software generation versus reuse, and legacy systems.
5. Software quality and process assessment for business enterprise.
6. Intelligent software systems design and software evolution techniques.
7. Agile software and lean methods.
8. Software optimization and formal methods for software design.
9. Static and dynamic analysis of software performance models, and software maintenance.
10. Software security tools and techniques, and related software engineering models.
11. Formal techniques for software representation, software testing and validation.
12. Software reliability and software diagnosis systems.
13. Mobile code security tools and techniques.
14. End-user programming environments, user-centered adoption-centric reengineering techniques.
15. Ontology, cognitive models and philosophical aspects of software design.
16. Medical informatics, software methods and applications for biomedicine.
17. Artificial intelligence techniques in software engineering.
18. Software design through interaction, and precognitive software techniques for interactive software entertainment applications.
19. Creativity and art in software design principles.
20. Axiomatic-based principles of software design.
21. Model-driven development (MDD), from code-centric to model-centric software engineering.
22. Software methods for medical informatics and bioinformatics.
23. Emergency management informatics, software methods for supporting civil protection, first response and disaster recovery.
24. Software methods for decision support systems and recommender systems.
We received many high-quality submissions, from which 56 articles were selected for publication in this book. Referees on the program committee carefully reviewed all submissions and selected these 56 papers on the basis of technical soundness, relevance, originality, significance, and clarity. The papers were then revised in line with the review reports before being accepted by the SoMeT_19 international reviewing committee. It is worth noting that each paper published in this book had three to four reviewers. The book is organized into 5 chapters based on the following themes:
CHAPTER 1 Intelligent Software Systems Design and Techniques in Software Engineering
CHAPTER 2 Machine Learning Techniques for Software Systems
CHAPTER 3 Requirements Engineering, Software Design and Development Techniques
CHAPTER 4 Software Methodologies, Tools and Techniques for Industry
CHAPTER 5 Knowledge Science and Intelligent Computing
This book is the result of a collective effort from many industrial partners and colleagues throughout the world. We especially would like to acknowledge our gratitude to the IEEE Malaysia Computer Chapter, Malaysia-Japan International Institute of Technology, Universiti Teknologi Malaysia, Iwate Prefectural University, and all authors who have contributed their invaluable support to this work. We also thank the SoMeT_19 keynote speakers: Professor Volker Gruhn, Software Technology, Universitat Duisburg-Essen, Germany, and Professor Dr. Enrique Herrera-Viedma, Vice President of Research and Knowledge Transfer, University of Granada, Spain. Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers that appear in this book. Last but not least, we would like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all the phases of SoMeT_19.
Skin recognition is a topic that has been studied for some years using machine learning and artificial vision, and it now has many applications in the medical industry, for example cancer detection, injury assessment, mood recognition and telemedicine, among others. In this industry, if we can classify skin tonalities, we can narrow down the diseases that affect each type of skin tonality. Many papers have studied skin recognition, where the goal is to recognize the skin in a picture or video; this requires a good database and powerful machine learning algorithms. This paper proposes a system able to segment the skin through a map and to recognize the skin in an image. The results show that it is possible to generate a geographic distribution of skin, which gives the opportunity to classify skin tonalities; in addition, we tested the proposed system for skin recognition, with interesting results for different skin tonalities.
At present, the cloud marketplace is more and more widely used for delivering cloud applications to consumers. The diversity of IaaS and PaaS services from many cloud providers gives customers many choices from which to pick the one that benefits them most. If a customer is not satisfied with an existing cloud resource service (IaaS or PaaS), he will stop using it and consider other cloud service providers, and he will also want cloud software bought on the marketplace to be hosted on the new cloud platform. However, changing cloud resource services for a multi-cloud application is not trivial. In this paper, we propose an approach that uses a Composable Application Model (CAM) to construct the topology of a multi-cloud application in a Blueprint. Thereby, all changes of cloud platform services are reflected in the Blueprint, and in this way the cloud application is managed. Cloud application operation is thus guaranteed after one or several of its software components are re-deployed on new cloud platform services and the application interconnections are re-established, so that the cloud application operates as in its initial state. For updating the Blueprint, we built a bidirectional transformation system whose core is a bidirectional transformation program. We show how the Blueprint, described in a TOSCA-based specification, is automatically and correctly updated.
Increasing requirements for the quality of functioning of complex autonomous technical objects (bodynets, robotic complexes, unmanned cars and aerial vehicles, etc.), as well as for their security and reliability, have made the problem of assessing their state particularly relevant, given the impact of various types of attacks and destabilizing factors, aging, and technological dispersion of parameters. The paper proposes a new approach to the intelligent evaluation of the state of such objects, based on interval assessment of parameters, the use of a knowledge base about critical states and conditions, and the application of wavelet analysis. The architecture and realization of an intelligent system for evaluating the state of complex autonomous technical objects is considered. The experimental assessment of the proposed approach showed that the use of wavelet analysis when forming areas of objects' operability allows accurate differentiation of the classes of their technical states, which increases the accuracy and reliability of state identification and also expands the possibilities of technical means of control and diagnostics.
The purpose of this paper is to present a clear description of an approach to automate the conception process, and to implement and deploy an open and adaptive multiagent system for personal assistive applications. This kind of application increasingly relies on a network of connected objects present in the environment of elderly or sick people, and deals with exploiting the connected objects of this environment to offer a service (e.g., fall detection, localization) to these persons. Since the environment is dynamic, owing to the availability or lack of connected objects and to the diversity of situations, cooperation mechanisms among connected objects must be designed dynamically and adaptively. The proposed approach emphasizes the cooperation between these different connected objects. This work is based on cooperative multiagent systems in which each agent models a connected object. We leverage interaction protocols in multiagent systems in order to design a platform capable of automatically generating an adaptive multiagent system. The adaptation takes into consideration the context of the person as well as the existing interaction protocols. This paper presents an infrastructure which allows the use of ontologies for agent-oriented software engineering. The first results are very promising, since they show several advantages of the approach in terms of the adaptability of the generated multiagent system, as indicated by the experiments we have carried out.
This paper presents a new approach to the payment scheduling problem, which seeks a schedule that maximizes the benefit of all parties in a project. In a project, both the sponsor and the contractor seek a good payment strategy of their own. The timing of payments and the completion times of project activities are determined simultaneously in order to achieve an equitable schedule between the sponsor and the development team. In previous research, we developed a Unified Game-Based Model for conflicts in project management. In this paper, we apply this model to this problem, implemented in an open-source evolutionary computation library named the MOEA Framework. The Unified Game-Based Model enables us to find a suitable schedule for the problem, and within the tool we conducted an experimental test of the model using several multi-objective optimization algorithms. The experimental results demonstrate that the presented approach is effective and promising, so that both parties could use this model to choose the proper tactics for scheduling payments.
One way to recognize human emotions is to use physiological signals. EEG in particular attracts attention because it is non-invasive and inexpensive. However, it is difficult to achieve high recognition accuracy because of a number of problems, such as the large amount of noise in EEG signals; high-accuracy analysis of EEG is the subject of much research. In this paper, we propose converting EEG signals into images and performing emotion classification using a CNN. In the experiments, we use the DEAP dataset, which is often used in EEG-based emotion recognition. The EEG signal is divided into short segments based on a predetermined time window and plotted as time series data to generate images. For the plotting method, images are generated either with 32 classes or with 4 classes. The generated images are classified into emotions using a convolutional neural network along two axes, arousal and valence. The best results differ by gender. For men, the best results are obtained with a 1.0-second time window and 4-class images: 63.75% accuracy for arousal and 63.36% for valence. For women, arousal reaches 65.37% with a 1.5-second window and 4-class images, while valence reaches 59.96% with a 1.5-second window and a 32-class image. We also find that arousal accuracy tends to be higher for women and valence accuracy tends to be higher for men. The experimental results show that the proposed method outperforms some related work. The proposed method does not depend on the dataset, so it can be applied to research using various data.
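The windowing step described in the abstract can be sketched as follows. This is an illustrative helper, not the authors' code; the 128 Hz sampling rate and the synthetic flat signal are assumptions for the example.

```python
def segment_signal(signal, sample_rate, window_sec):
    """Split a 1-D EEG channel into non-overlapping fixed-length segments."""
    window = int(sample_rate * window_sec)
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, window)]

# Example: 8 seconds of a synthetic 128 Hz channel, 1.0 s windows
channel = [0.0] * (128 * 8)
segments = segment_signal(channel, sample_rate=128, window_sec=1.0)
print(len(segments), len(segments[0]))  # 8 128
```

Each segment would then be plotted as a small time-series image and fed to the CNN.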
This paper compares two metaheuristic artificial neural network (ANN) models, the Bat algorithm neural network (BANN) and the Bat optimisation neural network (BatNN), for spatial downscaling of long-term precipitation. For BANN, the model parameters pulse rate (R) and loudness (A) are both fixed at 0.5, whilst for BatNN, R and A dynamically self-adapt in searching for the optimal configuration during training. The hidden nodes (HN), iteration number (IN) and learning rate (LR) for both models are predetermined at 100, 1000 and 1 respectively for comparison. Investigations were carried out with different populations (b), maximum pulse frequencies (fmax) and velocity factors (α). Model performance is measured with the square root of the coefficient of determination (r), root mean square error (RMSE), mean absolute error (MAE) and the Nash-Sutcliffe coefficient (E). Data from 1961 to 1990 are used for training, whilst validation data are from 1991 to 2010. Predictors from three climate models, HadCM3, ECHAM5 and HadGEM3-RA, together with precipitation data collected from Kuching Airport Rainfall Station, are input into the models; the model output is the forecast precipitation. Results showed that BatNN is more robust than BANN, with average r = 0.96, average RMSE = 1.69, average MAE = 1.4 and average E = 0.84 across the three climate models, while BANN achieved average r = 0.95, average RMSE = 1.91, average MAE = 1.75 and average E = 0.82. The higher accuracy of BatNN can be attributed to the modification whereby dynamic parameters R and A replace static parameters, allowing BatNN to self-adapt during training.
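The error metrics used in the comparison (RMSE, MAE and the Nash-Sutcliffe coefficient E) follow their standard textbook definitions, which can be sketched as:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit, 0 means no better
    than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

obs = [2.0, 4.0, 6.0, 8.0]  # toy data, not from the study
sim = [2.5, 3.5, 6.5, 7.5]
print(rmse(obs, sim), mae(obs, sim), nash_sutcliffe(obs, sim))  # 0.5 0.5 0.95
```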
System-operated complex engineering structures (ES), constituting smart industrial or commercial products, prototypes, or experimental configurations, require a new way of model-based engineering. A new style of engineering model system (EMS) is developed and applied in the course of continuous engineering to achieve a system-level, generic, and contextual model object structure that possesses the self-modification capability to change itself in response to changed inside and outside contexts. A comprehensive software platform provides modeling capabilities during the whole innovation and life cycle of an ES in industries engaged with smart products and production. Introducing work on engineering modeling for smart ES, this paper focuses on the problem of multiple-context driving of model objects. The purpose of the reported work is to develop a concept and methodology for an intellectual driving structure for objects included in an EMS and in the cyber units of cyber-physical systems. The paper starts with a scenario of multiple outside- and inside-context driving of object parameters and an outline of a multiple-context-driven EMS. Following this, a general process for managing the contextual object structure of an EMS which represents an ES is introduced. The main contributions of this paper are a structured model of driving smart content (DSC) to drive objects in an EMS, and the development of DSC as an extension of EMS on an appropriate engineering platform. Finally, the role of DSC and issues in its implementation are discussed, considering the 3DEXPERIENCE platform by Dassault Systèmes as the cloud-based laboratory background at the Laboratory of Intelligent Engineering Systems, Óbuda University.
Imbalanced data classification is an important task in data mining and machine learning. Imbalanced data consist of a majority class and a minority class, where the majority class leads to misclassification of minority samples. Various approaches have been proposed in recent years to address this problem. Sampling, which focuses on balancing the classes, is one method of solving the class imbalance problem. In our previous research, we proposed Multivariate Normal Distribution based Over-Sampling (MNDO), which uses correlations between attributes and statistical methods, to tackle this problem. In this paper, we propose Multivariate Normal Distribution based Over-sampling for Numerical and Categorical features (MNDO-NC) to sample a dataset that contains both numerical and categorical data. First, MNDO-NC generates numerical data using correlation coefficients and a multivariate distribution. Next, it calculates the distance between the generated data and the original data and identifies the 5 nearest neighbors; the categorical data are then sampled by applying a voting strategy over the neighborhood samples. Some existing methods generate new samples using a distance function, but our method uses positive-class statistics, so it can be applied even when the number of training samples is very small. In addition, outliers can be reproduced stochastically, so more realistic samples can be generated. In the experiment, we used 17 imbalanced datasets consisting of numerical and categorical data. For comparison with existing methods, 6 sampling methods, 2 scalings and 3 learning methods were used. As a result of the experiment, the proposed method showed results comparable to other methods.
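The categorical voting step of MNDO-NC, as described in the abstract, can be sketched roughly as follows. The helper and toy data are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def euclid(a, b):
    """Euclidean distance between two numeric feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def vote_categorical(new_num, minority_num, minority_cat, k=5):
    """Assign a categorical value to a synthetic sample by majority vote
    among its k nearest minority-class neighbours (by numeric distance)."""
    order = sorted(range(len(minority_num)),
                   key=lambda i: euclid(new_num, minority_num[i]))
    neighbours = [minority_cat[i] for i in order[:k]]
    return Counter(neighbours).most_common(1)[0][0]

# Toy minority class: two numeric features plus one categorical feature
nums = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
        [0.9, 0.8], [0.85, 0.9], [0.2, 0.2]]
cats = ["A", "A", "A", "B", "B", "A"]
label = vote_categorical([0.18, 0.22], nums, cats)
print(label)  # "A" dominates among the nearby points
```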
Debugging is a laborious part of the software development process, as well as of programming education. Although existing editors and IDEs support the identification of syntax errors, their functions for detecting logical errors in compilable program code are very limited. Algorithms have been developed taking either a static code analysis approach or a deep learning approach; however, although overall experimental results are positive in terms of detecting logic errors, they have limitations. We should take advantage of each algorithm's capacity while avoiding mismatches caused by weaknesses in the implementation of the corresponding intelligent coding editors. In the present paper, we analyze the two approaches using source codes accumulated for a programming task in an online judge system. Experimental results reveal the strengths and weaknesses of these approaches, and we conclude that they are an appropriate basis for developing a hybrid algorithm to enhance the accuracy of logic error detection.
Handling missing values is an important step in the preprocessing phase of hydrological modeling analysis. One of the challenges in this phase is to deal with missing data while giving good consideration to the pattern of missingness and the imputation approach. Hence, this paper presents a study of a feedforward neural network (FFNN) algorithm and an Elman neural network (ENN) imputation algorithm for estimating missing rainfall data at different percentages of missingness. Reliable rainfall data series from the nearest neighboring gauging stations were used as inputs to predict the missing rainfall data at an output station. The selected study area is Sungai Merang, East Malaysia. The study revealed that the ENN method demonstrated superior prediction of missing daily rainfall data compared to the FFNN method. It was also observed that the ENN model-infilling method could be highly beneficial in reducing data gaps for continuous hydrological modelling analysis.
Data augmentation is widely used to enrich datasets and enhance the performance of neural networks for classification and detection. However, most recent works focus only on augmentation for classification. A technique for detection augmentation by template blending has been introduced in the literature; its limitation is that an extra polygon shape of each object is needed to blend with the scene. In this paper we investigate the effect of geometric transformations for detection augmentation on the Malaysian Traffic Sign Detection (MTSD) dataset. We propose and investigate a new augmentation framework for object detection datasets and train using Faster R-CNN with a ZF network as the backbone. We measure the Average Precision (AP) as defined in the PASCAL VOC paper and show the correlation matrix for each class. Our findings show that data augmentation improves true-positive performance. Many false positives also occur, but they decreased by 19.7% after augmentation.
The unmanned surface vehicle (USV) has been widely used to accomplish tasks that cannot be completed by ships with human drivers in certain sea areas. It is not only necessary but essential to obtain a robust strategy in order to ensure that multiple USVs accomplish collaborative tasks successfully and efficiently. To meet this challenge, a deep reinforcement learning method is proposed, combined with an improved A-star algorithm. A statistically promising collaborative strategy is achieved by the proposed method under guidance from unmanned aerial vehicles (UAVs). After the collaborative strategy is generated, the improved A-star algorithm is used to navigate the USVs. To verify the proposed algorithm, several tasks are tested on a simulation platform. Experimental results demonstrate that the proposed method outperforms state-of-the-art reinforcement learning methods such as DQN and DeepSarsa.
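As a rough illustration of the navigation component, here is a minimal textbook A-star search on a 4-connected grid. This is a generic sketch, not the paper's improved variant; the toy "sea" map is an assumption for the example.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with value 1 are obstacles.
    Returns the shortest path length in moves, or -1 if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return -1

sea = [[0, 0, 0],
       [1, 1, 0],  # a wall the USV must route around
       [0, 0, 0]]
steps = a_star(sea, (0, 0), (2, 0))
print(steps)  # 6 moves around the obstacle
```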
Recently, another cloud service taxonomy has been added to the IaaS, PaaS and SaaS services: Business Process as a Service (BPaaS). A BPaaS is any business process delivered as a cloud service via the internet, with access through web interfaces. Process models are therefore developed by providers for discovery and use by tenants. In previous work, we designed an e-learning process as a business process in the cloud. This paper examines the problem of discovering the similarity between e-learning processes. Given a pair of e-learning process models, the consumer launches a request and the provider presents a target process. The query is then compared to the target process to check whether it answers the user's needs. To achieve this goal, we use graph-based structural matching and apply the graph edit distance to measure the similarity between the two processes. We demonstrate the feasibility of our approach by testing the greedy and A-Star algorithms. The results obtained show the efficiency and precision of the A-Star algorithm compared to the greedy algorithm.
This paper presents and discusses empirical work using the machine learning K-means clustering algorithm to analyze and process Mobile Augmented Reality (MAR) learning usability data. The paper first discusses the issues within the usability and machine learning spectrum, then explains in detail the proposed methodology behind the experiments conducted in this research. This contributes empirical evidence on the feasibility of the K-means algorithm through a discreet display of preliminary outcomes and performance results. The paper also proposes a new usability prioritization technique that can be quantified objectively through the calculation of negative differences between cluster centroids. Towards the end, the paper discusses important research insights, impartial discussions and future work.
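One possible reading of the centroid-difference prioritization can be sketched as follows. The attribute names, the two-cluster setup and the exact formula are assumptions for illustration; the paper's own computation may differ.

```python
def centroid_differences(centroid_good, centroid_poor, attributes):
    """Per-attribute differences between two K-means centroids; the most
    negative gaps are candidates for priority fixes (an interpretation,
    not the paper's exact formula)."""
    diffs = {a: centroid_poor[i] - centroid_good[i]
             for i, a in enumerate(attributes)}
    return sorted(diffs.items(), key=lambda kv: kv[1])  # most negative first

attrs = ["learnability", "efficiency", "satisfaction"]
good = [4.2, 4.0, 4.5]  # hypothetical centroid of the better-rated cluster
poor = [3.9, 2.8, 4.4]  # hypothetical centroid of the poorer-rated cluster
ranked = centroid_differences(good, poor, attrs)
print(ranked)  # "efficiency" shows the largest negative gap
```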
Recently, Business Email Compromise (BEC) has become a big issue. Some security companies and organizations warn about BEC and say that we must defend against it. Although we have many SPAM filters, we have very few BEC filters, so we have to find BEC ourselves. One of the features of BEC is that its wording and style differ from the usual; if software detects this, it can help us defend against BEC. Based on this idea, we propose a method to identify an email's author using machine learning algorithms. In this approach, we build identification models from emails received in the past. We define a target person in advance and use machine learning algorithms to build models that identify whether an email was sent by this person or not. We translate an email into a feature vector consisting of the similarity between the subject and the body, the distribution of parts of speech, and the occurrence of terms in the beginning part of the body. We build models from these feature vectors using the machine learning algorithms KNN, SVM, NBC and Decision Tree, and we try to identify whether a target person wrote a new email or not with these models. We evaluated these approaches using an open dataset and tools. The best accuracy is about 0.84 and the best Kappa statistic is 0.68; our approach therefore shows good agreement. However, a better Kappa statistic can be obtained with a simple method; that is, we could not show the advantage of our approach. Overfitting is one of the reasons why our approach could not outperform an existing approach. In future work, we will address this weakness using literature resources and other approaches, and evaluate our new approach on a bigger dataset.
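The feature vector described above can be sketched in a simplified form. The Jaccard measure and the opener-term list are illustrative choices (the abstract does not specify the similarity function), and the part-of-speech distribution is omitted here since it would require an NLP toolkit.

```python
def jaccard(a, b):
    """Set-overlap similarity between two token lists."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def features(subject, body, opener_terms):
    """Toy feature vector: subject/body similarity plus presence of
    characteristic opening terms in the first 20 words of the body."""
    body_tokens = body.lower().split()
    vec = [jaccard(subject.lower().split(), body_tokens)]
    head = set(body_tokens[:20])
    vec += [1.0 if t in head else 0.0 for t in opener_terms]
    return vec

v = features("Invoice payment due",
             "Dear team, the invoice payment is due on Friday.",
             opener_terms=["dear", "hi", "hello"])
print(v)  # subject/body overlap, then one flag per opener term
```

Such vectors would then feed a classifier (e.g., KNN or SVM) trained on the target person's past emails.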
In higher education institutions, the most significant issue is improving students' performance and retention rates. Massive volumes of student data are used to uncover hidden knowledge about students' learning behaviour, particularly to discover the initial symptoms of at-risk students, using Educational Data Mining techniques. However, data containing noise, outliers and irrelevant information may produce inaccurate results. This study aims to develop a robust student performance prediction model for higher education institutions by identifying features of student data with the potential to improve prediction results, and by comparing and identifying the most suitable ensemble learning technique after preprocessing the data and optimizing the hyperparameters. Data are collected from two systems, the student information system and the e-learning system, for undergraduate students of the Faculty of Engineering at a Malaysian public university; 4413 student records are used in this study. The process follows six data mining phases: data collection, data integration, data pre-processing (cleaning, normalization and transformation), feature selection, pattern extraction and, finally, model optimization and evaluation. The machine learning techniques used to build the prediction model are Decision Tree, Support Vector Machine and Artificial Neural Network, while the ensemble learning techniques are Random Forest, Bagging, Stacking, Majority Vote and two Boosting variants, AdaBoost and XGBoost. Hyperparameters for the ensemble learning techniques are optimized to gain better performance and an optimal result. The results show that combining features of student behaviour from the e-learning and student information systems using Majority Vote produces better results than the other ensemble methods.
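The Majority Vote combiner that performed best can be sketched in a few lines: each base classifier predicts a label per student, and the most frequent label wins. The base predictions below are invented placeholders for the Decision Tree, SVM and ANN outputs described above.

```python
# Minimal hard majority-vote ensemble: combine aligned label lists from
# several base models by taking the most frequent vote per instance.
from collections import Counter

def majority_vote(*model_predictions):
    combined = []
    for votes in zip(*model_predictions):  # one tuple of votes per student
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

dt  = ["pass", "fail", "pass", "fail"]   # hypothetical Decision Tree output
svm = ["pass", "pass", "pass", "fail"]   # hypothetical SVM output
ann = ["fail", "fail", "pass", "fail"]   # hypothetical ANN output
print(majority_vote(dt, svm, ann))  # -> ['pass', 'fail', 'pass', 'fail']
```

With an odd number of voters on a binary label there are no ties; with an even number, `Counter.most_common` breaks ties by insertion order, so a real implementation would want an explicit tie rule.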
Recently, deep learning has been studied as one of the most effective methods in the machine learning field, and many results have been reported. However, the most effective way to construct neural networks has not yet been determined, and interpreting an obtained network is difficult. In a previous study, we proposed a novel method to construct a neural network using a support vector machine, called SVM-NN. However, SVM-NN is hard to apply to nonlinear problems and suffers from a model-size problem. In this study, we first propose a new network structure, called AND/OR layers, to solve the nonlinear problem of SVM-NN. AND/OR layers improve identification effectiveness by grouping support vectors based on training data results. We call this novel variant SVM-NN(AND/OR). We also utilize a genetic algorithm to pair and reduce support vectors, compressing the model size of SVM-NN and SVM-NN(AND/OR). To confirm the effectiveness of the proposed methods, computational experiments were carried out on typical benchmark problems, and the results of these computer simulations confirm their effectiveness.
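The genetic-algorithm reduction step can be illustrated generically. This is a sketch of the general idea only, not the paper's method: individuals are bit masks over a set of labeled vectors, and a fitness function rewards keeping 1-NN accuracy on the training set while penalizing mask size. The data, fitness weights and GA settings are all invented.

```python
# Generic GA sketch for reducing a set of (support) vectors: evolve bit
# masks that keep classification quality while shrinking the model.
import random

random.seed(0)  # make the run reproducible

vectors = [(0.0, "a"), (0.2, "a"), (1.0, "b"), (1.2, "b"), (0.1, "a")]

def accuracy(mask):
    kept = [v for m, v in zip(mask, vectors) if m]
    if not kept:
        return 0.0
    correct = 0
    for x, label in vectors:  # 1-NN over the kept vectors
        nearest = min(kept, key=lambda v: abs(v[0] - x))
        correct += nearest[1] == label
    return correct / len(vectors)

def fitness(mask):
    return accuracy(mask) - 0.05 * sum(mask)  # size penalty weight is arbitrary

def evolve(generations=30, pop_size=8):
    n = len(vectors)
    # seed the population with the full mask plus random masks
    pop = [[1] * n] + [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                              # elitist selection
        children = [[bit ^ (random.random() < 0.1) for bit in p]    # bit-flip mutation
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the full mask is in the initial population and parents survive each generation, the best mask found is never worse than keeping every vector.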
Botnets are among the deadliest threats in a network because of their capability to exploit resources within the network as an army to launch massive attacks, such as Distributed Denial-of-Service (DDoS) attacks or spam email campaigns. A Network Intrusion Detection System (NIDS) designed around the behaviour of botnets in network traffic is a promising technique for detecting botnets that hide using encryption or other concealment techniques. This paper proposes a K-means clustering algorithm as the first phase of a botnet behaviour detection model that extracts data from network traffic. The criteria for our behaviour detection model are that it should detect botnets in encrypted packets (hiding techniques), be structure-independent (centralized and peer-to-peer), and require minimal computing resources and processing time. In addition, to reflect real network traffic, the detection model must be resistant to noise and able to identify anomalous botnet behaviour among a huge volume of normal traffic. We use the botnet benchmark dataset and normal traffic from the Malware Capture Facility Project, and compare our K-means approach with the Expectation Maximization algorithm proposed by previous researchers for clustering similar patterns of botnet behaviour. The results show that the K-means algorithm produces much higher accuracy (94%) and a lower false negative rate (0.1413), while the average accuracy of the Expectation Maximization algorithm is 88% with a false negative rate of 0.2245 after the insertion of uncertain data from real network traffic.
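The comparison above hinges on accuracy and the false negative rate (botnet flows wrongly assigned to the normal cluster). This sketch computes both from ground-truth labels and predicted labels; the label vectors are invented for illustration, not taken from the paper's dataset.

```python
# Compute accuracy and false negative rate for a botnet/normal labeling.
def evaluate(truth, predicted, positive="botnet"):
    tp = sum(t == positive and p == positive for t, p in zip(truth, predicted))
    tn = sum(t != positive and p != positive for t, p in zip(truth, predicted))
    fn = sum(t == positive and p != positive for t, p in zip(truth, predicted))
    accuracy = (tp + tn) / len(truth)
    fnr = fn / (fn + tp) if fn + tp else 0.0   # missed botnets / actual botnets
    return accuracy, fnr

truth     = ["botnet", "botnet", "normal", "normal", "botnet"]
predicted = ["botnet", "normal", "normal", "normal", "botnet"]
acc, fnr = evaluate(truth, predicted)
print(acc, fnr)  # -> 0.8 0.3333333333333333
```

In the detection model itself, `predicted` would come from mapping each K-means cluster to the botnet or normal class before scoring.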
This article presents experimental work comparing the performance of two machine learning approaches, namely Hierarchical Agglomerative clustering and K-means clustering, on Mobile Augmented Reality usability datasets. The datasets comprise two separate categories of data, performance and self-reported, which are completely different in nature, techniques and affiliated biases. The article first presents the background and related literature, followed by initial findings on the identified problems and objectives. It then presents the proposed methodology in detail before presenting the evidence and a discussion comparing these two widely used machine learning approaches on usability data. The paper contributes evidence showing K-means to be the better-performing clustering algorithm when compared with Hierarchical Agglomerative clustering on the usability datasets. The results contradict some recent studies claiming otherwise, and the findings open further research gaps pertaining to the combined use of machine learning and usability analysis.
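For readers unfamiliar with the hierarchical side of the comparison, here is a minimal single-linkage agglomerative clustering sketch: start from singleton clusters and repeatedly merge the closest pair until k clusters remain. The 1-D "usability scores" are invented; the paper's datasets are multi-dimensional performance and self-reported measures.

```python
# Single-linkage agglomerative clustering: merge the closest pair of
# clusters (by minimum pairwise distance) until k clusters remain.
def agglomerative(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

scores = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
print(agglomerative(scores, 2))
```

Unlike K-means, no centroids or random initialization are involved, which is one reason the two families can rank differently on different datasets.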
Thermal infrared (TIR) tracking is able to track objects in dark environments, such as at night, and is used mainly for surveillance and rescue with night-time surveillance cameras. As the development of automated driving progresses, we believe that thermal infrared tracking can also contribute to improved safety in places with few streetlights. However, unlike ordinary visual object tracking, thermal infrared tracking has its own problems. In this paper, we propose an algorithm that improves accuracy by selecting the optimal feature map for each sequence using the Kullback-Leibler divergence (KLD), for ensemble tracking built on the powerful representational ability of convolutional neural networks (CNNs). Using KLDs computed from the response maps of an ensemble tracker with multi-layer convolutional features in thermal infrared tracking (MCFTS), we determine the CNN filters most involved in creating the response map. By adjusting the bias values corresponding to these filters and retraining them, a tracker adapted to each sequence can be created. To evaluate the tracker using the proposed algorithm against the conventional tracker, we experimented with the VOT-TIR2016 thermal infrared tracking benchmark, and also compared against the 24 trackers evaluated in that benchmark. The experimental results demonstrate that the proposed tracker achieves effective and promising performance on some sequences.
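The selection criterion can be sketched as follows: treat each filter's response map as a probability distribution and compare it to the ensemble response via Kullback-Leibler divergence. The 1-D response maps and filter names are invented stand-ins for 2-D CNN feature maps, and "most involved" is interpreted here as lowest divergence from the ensemble response.

```python
# KL divergence between normalized response maps, used to rank filters
# by how closely their individual response matches the ensemble response.
import math

def kld(p, q, eps=1e-12):
    # normalize both maps to distributions and compute KL(p || q)
    sp, sq = sum(p), sum(q)
    return sum((a / sp) * math.log((a / sp + eps) / (b / sq + eps))
               for a, b in zip(p, q) if a > 0)

ensemble_response = [0.1, 0.7, 0.2]
filter_responses = {
    "filter_0": [0.1, 0.6, 0.3],   # hypothetical per-filter response maps
    "filter_1": [0.5, 0.2, 0.3],
}
klds = {name: kld(r, ensemble_response) for name, r in filter_responses.items()}
most_involved = min(klds, key=klds.get)  # closest to the ensemble response
print(most_involved)  # -> filter_0
```

In the proposed tracker, the bias of the selected filters would then be adjusted and the filters retrained per sequence.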
Online media are well known as a vehicle for hate speech: communications that unlawfully demean a group or person based on characteristics such as colour, race, gender, ethnicity, sexual orientation, religion or nationality. The continuing rise of internet social platforms, including microblogging services like Twitter, has created a need for more immediate analyses of hatred and other antagonistic responses to various trigger events. This study investigates such content using aspect-based sentiment analysis; content analysis of the tweets, along with the associations between them, is key. Nevertheless, given the large data volumes involved, it is often burdensome, if not infeasible, to conduct these analyses manually. The main problems of prior methods involve data sparsity, classification accuracy and the identification of sarcastic content, which such techniques incorrectly categorise as neutral. For content analysis, three distinct schemes are proposed, all aiming to surmount the above-mentioned problems. The research results show that the proposed strategies achieve increased accuracies of approximately 75%, 71.43% and 92.86% respectively.
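A deliberately simple lexicon-based sketch of the kind of tweet classification this analysis automates, including the neutral category that prior methods were said to over-assign. The word lists, tweets and labels are invented for illustration; the paper's three schemes are far richer than a keyword lookup.

```python
# Toy lexicon-based tweet classifier with hateful / positive / neutral
# outputs, plus an accuracy calculation over labeled examples.
HATE_TERMS = {"hate", "vile"}        # hypothetical lexicon
POSITIVE_TERMS = {"love", "great"}

def classify(tweet):
    words = set(tweet.lower().split())
    if words & HATE_TERMS:
        return "hateful"
    if words & POSITIVE_TERMS:
        return "positive"
    return "neutral"

tweets = [
    ("i hate that group", "hateful"),
    ("what a great day", "positive"),
    ("the bus is late", "neutral"),
    ("such a vile comment", "hateful"),
]
correct = sum(classify(t) == label for t, label in tweets)
print(f"accuracy: {correct / len(tweets):.2%}")  # -> accuracy: 100.00%
```

A lexicon approach of this kind fails exactly where the abstract says prior methods fail: sparse vocabularies and sarcasm both fall through to "neutral", which motivates the study's learned schemes.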