Ebook: New Trends in Software Methodologies, Tools and Techniques
Software is the essential enabling means for science and the new economy. It helps us to create a more reliable, flexible and robust society. But software often falls short of our expectations. Current methodologies, tools, and techniques remain expensive and are not yet sufficiently reliable, while many promising approaches have proved to be no more than case-by-case oriented methods.
This book contains extensively reviewed papers from the thirteenth International Conference on New Trends in Software Methodologies, Tools and Techniques (SoMeT_14), held in Langkawi, Malaysia, in September 2014. The conference provided an opportunity for scholars from the international research community to discuss and share research experiences of new software methodologies and techniques, and the contributions presented here address issues ranging from research practices, techniques and methodologies to proposed and reported solutions for global business. The emphasis is on human-centric software methodologies, end-user development techniques and emotional reasoning, for an optimally harmonized performance between the design tool and the user. Topics covered include the handling of cognitive issues in software development, so that software can adapt to the user's mental state, and intelligent software design exploiting new aspects of conceptual ontology and semantics reflected in knowledge-base system models.
This book provides an opportunity for the software science community to show where we are today and where the future may take us.
Software is the essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society. It empowers the exploration of our world in ever more depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques remain expensive and are not yet sufficiently reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-by-case oriented methods.
This book explores new trends and theories which illuminate the direction of developments in this field, developments which we believe will lead to a transformation of the role of software and science integration in tomorrow's global information society. By discussing issues ranging from research practices, techniques and methodologies to the proposal and reporting of solutions needed for global business, it offers an opportunity for the software science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 13th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_14), held in Langkawi, Malaysia, from September 22–24, 2014, in collaboration with Universiti Teknologi Malaysia (Johor Baharu, Malaysia) (http://seminar.spaceutm.edu.my/somet2014/). This edition of SoMeT marks the thirteenth anniversary of the series.
Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia, 2011); SoMeT_12 (Genoa, Italy, 2012); and SoMeT_13 (Budapest, Hungary, 2013).
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume, like the conference and the SoMeT series of which it forms a part, provides an opportunity for exchanging ideas and experiences in the field of software technology; it opens up new avenues for software development, methodologies, tools, and techniques, especially with regard to intelligent software, by applying artificial intelligence techniques to software development and by tackling human interaction in the development process for better high-level interfaces. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonised performance between the design tool and the user.
The word “intelligent” in the SoMeT title emphasises the need to apply artificial intelligence to software design, for example in disaster recovery and other systems supporting civil protection, and in other applications that require human intelligence as a requirement in systems engineering.
A major goal of this work was to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user's mental state; tools and techniques related to this aspect form part of the contribution to this book. Another subject raised at the conference was intelligent software design based on software ontologies, and conceptual software design in practical, human-centric information system applications. The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and for assessing their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology, addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of refereed papers, carefully selected by the reviewing committee, covering:
• Software engineering aspects of software security programmes, diagnosis and maintenance
• Static and dynamic analysis of software performance models
• Software security aspects and networking
• Agile software and lean methods
• Practical artefacts of software security, software validation and diagnosis
• Software optimization and formal methods
• Requirements engineering and requirements elicitation
• Software methodologies and related techniques
• Automatic software generation, re-coding and legacy systems
• Software quality and process assessment
• Intelligent software systems design and evolution
• Artificial intelligence techniques in software engineering and requirements engineering
• End-user requirement engineering, programming environment for Web applications
• Ontology, cognitive models and philosophical aspects of software design
• Business-oriented software application models
• Emergency Management Informatics, software methods and applications for supporting Civil Protection, First Response and Disaster Recovery
• Model-Driven Development (MDD), from code-centric to model-centric software engineering
• Cognitive Software and human behavioural analysis in software design.
The 79 papers published in this book, selected from 192 submissions, were carefully reviewed by up to four reviewers on the basis of technical soundness, relevance, originality, significance, and clarity. They were then revised on the basis of the review reports before being accepted by the SoMeT_14 international reviewing committee.
This book is the result of a collective effort by many industrial partners and colleagues throughout the world. In particular, we would like to acknowledge our gratitude to Universiti Teknologi Malaysia, especially the Vice Chancellor, Dr. Wahid Omar; Iwate Prefectural University; SANGIKYO Co., Japan; and all the others who have contributed their invaluable support to this work. Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers which appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all phases of SoMeT_14.
The editors
Detecting emotion features in a song remains a challenge in various areas of research, especially in music emotion classification (MEC). In order to classify a selected song with a certain mood or emotion, the machine learning algorithms must be intelligent enough to learn the data features and to match those features to the correct emotion. Until now, there have been only a few studies on MEC that exploit timbre features from the vocal part of a song together with the instrumental part. Most existing work in MEC looks at audio, lyrics, social tags, or a combination of two or more of these classes. The question is: does exploiting timbre features from both vocal and instrumental sounds help produce positive results in MEC? This research therefore presents work on detecting emotion features in Malay popular music using an artificial neural network, extracting timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental sound timbre features, as well as contributing to the literature of music information retrieval, affective computing and psychology.
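The pipeline described here, timbre features from separated vocal and instrumental clips feeding an artificial neural network, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the librosa and scikit-learn libraries, uses MFCC means as a stand-in timbre descriptor, and substitutes synthetic audio for real Malay song clips.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def timbre_features(y, sr=22050):
    """Compact timbre descriptor: per-coefficient MFCC means."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Synthetic stand-ins for the separated vocal and instrumental clips;
# in practice each pair would be loaded with librosa.load("clip.wav").
sr = 22050
rng = np.random.default_rng(0)
clips = [(rng.normal(size=sr), rng.normal(size=sr), label)
         for label in ("happy", "sad", "angry", "calm")]

# Concatenate vocal and instrumental timbre vectors per song.
X = np.array([np.concatenate([timbre_features(v, sr), timbre_features(i, sr)])
              for v, i, _ in clips])
y = [label for _, _, label in clips]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
```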
Recommendation systems have emerged as a significant response to the information overload problem caused by the enormous growth of internet usage; that is, they provide users with adapted information services. User preferences play a key role in preparing recommendations in the search for required information over the web. User feedback, both explicit and implicit, has proven to be vital for recommendation systems, since the similarity between users can then be computed. In this paper we propose that the traditional reliance on user similarity may be overstated. There are many problems to be faced, specifically sparseness, cold start, prediction accuracy, and scalability, all of which can challenge trust in recommendation systems. A sparsity rate of 95% has been experienced in CF-based commercial recommendation applications. We discuss the manner in which other factors have a vital role in managing recommendations. Specifically, we propose that the issue of user satisfaction must be considered and incorporated with explicit feedback for improved recommendations.
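As background for the user-similarity computation this abstract questions, here is a minimal sketch of user-based collaborative filtering on a sparse rating matrix. The ratings are invented, and zeros denote unrated items, i.e. the sparsity problem mentioned above.

```python
import numpy as np

# Rows are users, columns are items; 0 means "not yet rated".
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

# Predict user 0's rating for item 2 as a similarity-weighted average
# over the users who did rate item 2.
sims = np.array([cosine_sim(ratings[0], ratings[u]) for u in range(1, 4)])
rated = ratings[1:, 2] > 0
pred = (sims[rated] @ ratings[1:, 2][rated]) / (sims[rated].sum() + 1e-9)
```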
Classification techniques using neural networks have been developed recently. We apply a competitive neural network, Learning Vector Quantization (LVQ), to classify remote sensing data from microwave and optical sensors for the estimation of a rice field. The method provides a nonlinear discrimination function which is determined by learning. The satellite data were observed before and after rice planting in 1999. Three RADARSAT and one SPOT/HRV data sets covering Higashi-Hiroshima City, Japan, are used. A RADARSAT image has only one band of data, which makes it difficult to extract a rice field; however, SAR backscattering intensity in a rice field decreases from April to May and increases from May to June. Thus, three RADARSAT images from April to June are used in this study. The LVQ classification was applied to the RADARSAT and SPOT data in order to evaluate rice field estimation. The results show that the true production rate of rice field estimation for RADARSAT data using LVQ was approximately 60% when compared with SPOT data. The present method performs much better than SAR image classification by the maximum likelihood (MLH) method.
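For readers unfamiliar with LVQ, a minimal LVQ1 sketch in NumPy follows; it is not the authors' code. The three-feature inputs stand in for per-pixel backscatter from the April–June RADARSAT images, and the labels, learning rate and epoch count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g. backscatter for Apr/May/Jun
y = (X[:, 2] - X[:, 1] > 0).astype(int)  # synthetic rice / non-rice labels

# One prototype per class, initialised from the class means.
protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])

alpha = 0.05
for epoch in range(20):
    for x, t in zip(X, y):
        k = np.argmin(np.linalg.norm(protos - x, axis=1))  # winning prototype
        sign = 1.0 if labels[k] == t else -1.0             # attract or repel
        protos[k] += sign * alpha * (x - protos[k])
    alpha *= 0.9  # decay the learning rate each epoch

predict = lambda x: labels[np.argmin(np.linalg.norm(protos - x, axis=1))]
```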
A new intelligent synchronization tool is developed for multiple robot manipulators handling an object along a desired trajectory. The tool is based on the ANFIS structure, which compensates for the synchronization error between robot manipulators. To overcome the lumped uncertainty of the robot manipulator, a neural-network-based prediction model and modified sliding mode control are used. The illustrated examples show that the approach guarantees robustness, accuracy and a decentralized structure for the multi-robot manipulator system.
Owing to the rumpled nature of the human ear, with its wonderful hills and valleys, biometrics researchers have turned to the ear in the quest to establish it alongside fingerprint-, iris-, and face-based biometric systems. This paper provides a brief survey of ear biometrics to date and proposes the use of gradient histogram features for recognizing the ear. The ridge-like structure of the human ear provides vital information which can aid recognition with little or no enhancement. After the image is cropped to reduce background noise from the hair, it is further enhanced to remove spikes of noise that could cause the proposed method to fail, since allowing undesired information into the processing stage would be costly. The gradient of the enhanced image is then computed and its Gradient Histogram (GH) built. A dimensionality reduction technique, Principal Component Analysis (PCA), is used to reduce the feature vector space presented by the gradient histogram. Using a Multilayer Feed Forward (MLFF) neural network classifier, the experimental results showed 100% recognition accuracy on the USTB ear database.
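The described pipeline (gradient → histogram → PCA → MLFF network) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: random arrays stand in for cropped USTB ear images, and the bin count, PCA dimension and network size are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def gradient_histogram(img, bins=64):
    """Histogram of gradient magnitudes as a fixed-length feature vector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-9                 # normalise so bins are comparable
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-9)

# Hypothetical data: a stack of enhanced grayscale ear images with labels.
images = np.random.rand(40, 64, 48)         # stand-in for USTB images
labels = np.repeat(np.arange(10), 4)        # 10 subjects, 4 images each

X = np.array([gradient_histogram(im) for im in images])
X = PCA(n_components=20).fit_transform(X)   # dimensionality reduction
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000).fit(X, labels)
```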
In the past few years, numerous studies have examined the accuracy of time series forecasting, which provides the foundation for decision models on foreign exchange data. This study proposes a novel approach combining a Hidden Markov Model (HMM) and Case-Based Reasoning for time series forecasting. The paper compares the proposed method with technical models, namely the moving average convergence/divergence (MACD) model, Williams' Percent Range, and a naïve strategy, for short-term trading decisions. The HMM is trained using the forward-backward (Baum-Welch) algorithm, and the likelihood value is used to predict the future exchange rate price. Forecasting accuracy is measured by the Root Mean Square Error (RMSE). The statistical performance of all techniques is investigated on the EUR/USD exchange rate time series over the period October 2010 to November 2013. The preliminary results indicate that the new HMM approach produces the lowest RMSE compared to the benchmark models. A further step is to adopt Case-Based Reasoning to improve the forecasting results.
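A rough sketch of likelihood-based HMM forecasting in the spirit of this approach, assuming the hmmlearn library: the synthetic series, window length and state count are illustrative, and the matching rule (reusing the next-day move of the historical window with the closest log-likelihood) is one common variant, not necessarily the authors' exact procedure.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
rates = np.cumsum(rng.normal(0, 0.005, size=500)) + 1.30  # stand-in EUR/USD

window = 20
train = rates[:-window].reshape(-1, 1)
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=100)
model.fit(train)  # Baum-Welch (forward-backward) training

# Score the latest window, find the past window with the closest
# log-likelihood, and reuse its next-day move as the forecast.
target = model.score(rates[-window:].reshape(-1, 1))
best, forecast = np.inf, None
for t in range(len(rates) - 2 * window):
    ll = model.score(rates[t:t + window].reshape(-1, 1))
    if abs(ll - target) < best:
        best = abs(ll - target)
        forecast = rates[-1] + (rates[t + window] - rates[t + window - 1])
```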
The functioning of a container terminal depends on the interaction of numerous subsystems, which, in turn, are related to other systems. Indeed, the performance of a container terminal is affected by many factors, such as the component subsystems of the whole system, equipment, resources and procedures. To represent the relations between these factors, a System Dynamics (SD) approach is used; it shows the effect of any change in these factors on the performance of the whole system and on the performance of each specific factor. The goal of this paper is to model terminal container operations through a system dynamics approach and to propose a new scenario based on the reassignment of resources. The study aims to provide a simulation tool which is able to monitor the performance of each subsystem and of the whole system, and to predict the effect of performance improvements in subsystems on the whole system. The choice of the SD paradigm derives from the fact that, for complex systems such as a container terminal, it is very difficult to collect real data about their functioning.
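A system dynamics model of this kind reduces to stocks, flows and feedback integrated over time. The toy sketch below is an assumption-laden illustration rather than the paper's model: it tracks a single stock (containers in the yard) under arrival and handling flows, with crane count as the reassignable resource.

```python
yard = 500.0          # stock: containers currently in the yard
arrival_rate = 120.0  # flow in: containers per day
cranes = 4            # resource whose reassignment we want to test
per_crane = 35.0      # handling capacity per crane per day

dt, history = 0.25, []
for step in range(int(30 / dt)):                        # simulate 30 days
    handling_rate = min(yard / dt, cranes * per_crane)  # flow out
    yard += (arrival_rate - handling_rate) * dt         # Euler integration
    history.append(yard)

# Re-running with cranes = 5 shows how reassigning a resource to this
# subsystem propagates to the whole-terminal measure (the yard level).
```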
This paper presents a network intrusion detection technique based on the Synthetic Minority Over-Sampling Technique (SMOTE) and a Deep Belief Network (DBN), applied to the class-imbalanced KDD-99 dataset. SMOTE is used to eliminate the class imbalance problem, while intrusion classification is performed using the DBN. The proposed technique first resolves the class imbalance in the KDD-99 dataset and then uses the DBN to estimate the initial model. Accuracy is further enhanced by using multilayer perceptron networks. The obtained results are compared with the best existing technique, based on a reduced-size recurrent neural network. The study shows that our approach is competitive and efficient in classifying both intrusion and normal patterns in the KDD-99 dataset.
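The SMOTE stage can be sketched as below, assuming the imbalanced-learn library. A synthetic imbalanced set replaces KDD-99, and an MLP stands in for the DBN stage, since scikit-learn has no DBN and the abstract itself notes a final multilayer perceptron refinement.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
# 950 "normal" samples vs 50 "intrusion" samples: a heavy class imbalance.
X = np.vstack([rng.normal(0, 1, (950, 10)), rng.normal(2, 1, (50, 10))])
y = np.array([0] * 950 + [1] * 50)

# SMOTE synthesises minority-class samples until the classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
clf.fit(X_bal, y_bal)
```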
Text steganography was never a threat until it was manipulated by cyber criminals and terrorists for their own benefit. Secret messages sent through steganography with bad intent can cause harm, even jeopardizing the security of a company or country. Text steganalysis is therefore critically needed for forensic investigation, given the tremendous use of digital media for accessing and disseminating information. There has not been much research on text steganalysis, and most existing research is based on statistical approaches. The aim of this research is to propose a new text steganalysis technique for detecting format-based steganography using a simple and effective approach. This paper introduces a novel text steganalysis technique based on color-coded text visualization. An encoding scheme for the text visualization is designed by analyzing text features with respect to colors in order to detect whitespace patterns. The aim is to distinguish between natural and stego text through color-coded visualization. Experiments show that the detection accuracy reaches 96.67%, with remarkably high precision and recall. These findings show that the text visualization technique is capable of detecting text steganography effectively, simply by inspection of the text visualization image, demonstrating that a simple and feasible text steganalysis technique has been found.
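A minimal illustration of the color-coding idea, assuming the Pillow library: render one character per pixel and give whitespace characters distinctive colors, so that whitespace-based stego patterns become visible at a glance. The palette and layout are invented and are not the paper's encoding scheme.

```python
from PIL import Image

COLORS = {" ": (255, 0, 0), "\t": (0, 255, 0), "\n": (0, 0, 255)}
DEFAULT = (230, 230, 230)  # non-whitespace characters

def visualize(text, width=80):
    """Map each character to one pixel; whitespace gets a distinct color."""
    rows = (len(text) + width - 1) // width
    img = Image.new("RGB", (width, rows), DEFAULT)
    for i, ch in enumerate(text):
        img.putpixel((i % width, i // width), COLORS.get(ch, DEFAULT))
    return img

# Stego text hiding bits in trailing spaces/tabs produces a visibly
# different right-edge pattern from natural text.
visualize("A natural sentence.\nAnother line.\t\n" * 10).save("viz.png")
```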
This paper demonstrates InSAR phase unwrapping using a Hybrid Genetic Algorithm (HGA). Three-dimensional phase unwrapping is performed using the three-dimensional best-path avoiding singularity loops (3DBPASL) algorithm; phase matching is then implemented on the 3DBPASL output using the HGA. Further, the combination of the HGA and 3DBPASL is used to eliminate the impact of phase decorrelation on the interferograms. The study shows that InSAR produces a discontinuous interferogram pattern because of the high decorrelation. In contrast, the three-dimensional sorting reliabilities algorithm (3D-SRA) generated 3-D coastline deformation with a bias of −0.06 m relative to ground measurements, lower than the InSAR method, and the 3DBPASL algorithm has a standard error of the mean of ±0.03 m, again lower than the InSAR method. Consequently, the 3D-SRA is used to eliminate the phase decorrelation impact from the interferograms. The study also shows that the performance of the InSAR method using the combination of the HGA and 3DBPASL is better than the standard InSAR procedure, as validated by a lower error range (0.04±0.22 m) at the 90% confidence interval. In conclusion, the HGA can be used to solve the decorrelation problem and to produce accurate 3-D coastline deformation from ENVISAT ASAR data.
In this paper, we develop a probabilistic approach to formally represent and reason about the interaction between knowledge and social commitments in Multi-Agent Systems (MASs). Unlike existing approaches that address social commitments only within the scope of agent-to-agent interaction, our approach also considers commitments among multiple agents. In particular, we introduce a new logic called the probabilistic logic of knowledge and commitment (PCTLkc+). The logic combines a logic of knowledge and commitment (CTLKC+) with a probabilistic branching-time logic (PCTL). We then extend the resulting logic by adding further operators for group knowledge and group commitment, so that different flavors of the interaction between the two concepts can be captured and reasoned about. A major contribution of this paper is the semantics of group social commitment, which has not previously been considered in the literature. Target systems are modeled using a new, extended version of interpreted systems, a popular formal framework that models the communication between interacting agents and accounts for uncertainty in MASs. The advance of the proposed logic over existing logics lies in its expressive power, which allows one not only to express knowledge and social commitments independently, but also to express combinations of them in the presence of uncertainty when the scope of interacting agents goes beyond two.
In the robotics world, SLAM (Simultaneous Localization And Mapping) is a well-known and difficult problem. Many solutions have been presented, generally based on two methods: the EKF (Extended Kalman Filter) and the particle filter. Each of these methods has drawbacks, so researchers are looking for other ways to solve these problems. One major line of work on the SLAM problem is based on evolutionary algorithms, and the algorithm proposed in this study belongs to this category. Our final algorithm is a hybrid of a particle filter and a genetic algorithm for solving the SLAM problem. Since one of the most important steps in a genetic algorithm, and in our hybrid solution, is the fitness function, we introduce this step of our algorithm and show some results in a simulated environment.
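Since the abstract centres on the fitness function, here is a hypothetical example of one, under the common occupancy-grid formulation: a candidate robot pose scores higher when the current range scan's endpoints land on occupied map cells. Everything here (the grid, scan model and scoring rule) is an assumption for illustration, not the authors' function.

```python
import numpy as np

def fitness(pose, scan, grid, resolution=0.1):
    """Higher is better: count scan endpoints landing on occupied cells."""
    x, y, theta = pose
    score = 0
    for bearing, rng_m in scan:                   # (angle, range) pairs
        ex = x + rng_m * np.cos(theta + bearing)  # endpoint in world frame
        ey = y + rng_m * np.sin(theta + bearing)
        i, j = int(ey / resolution), int(ex / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            score += grid[i, j]                   # 1 if the cell is occupied
    return score

grid = np.zeros((100, 100), dtype=int)
grid[50, 60:70] = 1                               # a wall segment
scan = [(0.0, 1.0), (0.1, 1.05)]
print(fitness((5.0, 5.0, 0.0), scan, grid))

# The GA would evolve candidate poses (particles), using this score to
# select, cross over and mutate them before the particle filter resamples.
```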
The gravitational search algorithm (GSA) is a new member of the family of swarm intelligence algorithms. It stems from the Newtonian laws of gravity and motion. The performance of synchronous GSA (S-GSA) and asynchronous GSA (A-GSA) is studied here using statistical analysis. The agents in S-GSA are updated synchronously: the whole population is updated only after each member's performance has been evaluated. In contrast, an agent in A-GSA is updated immediately after its own performance evaluation, without the need to synchronize with the entire population. Asynchronous update is more attractive from the perspective of parallelization. The results show that both implementations have similar performance.
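The distinction can be made concrete with a schematic in which GSA's mass/force/velocity equations are collapsed into a placeholder move() step; only the update ordering, which is what separates S-GSA from A-GSA, is faithful here.

```python
import numpy as np

def move(agent, pop, fits):
    """Placeholder for GSA's mass/force/velocity update of one agent."""
    best = pop[np.argmin(fits)]
    return agent + 0.1 * (best - agent)

def s_gsa_step(pop, f):
    fits = np.array([f(a) for a in pop])   # evaluate the whole population,
    return np.array([move(a, pop, fits)    # then update every agent at once
                     for a in pop])

def a_gsa_step(pop, f):
    fits = np.array([f(a) for a in pop])
    for i in range(len(pop)):              # update each agent immediately;
        pop[i] = move(pop[i], pop, fits)   # later agents see earlier moves
        fits[i] = f(pop[i])                # refresh its fitness right away
    return pop

f = lambda a: np.sum(a ** 2)               # example objective
pop = np.random.default_rng(3).normal(size=(10, 2))
pop = s_gsa_step(pop, f)
pop = a_gsa_step(pop, f)
```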
This paper presents a performance evaluation of a novel Vector Evaluated Gravitational Search Algorithm II (VEGSAII) for multi-objective optimization problems. The VEGSAII algorithm uses a number of populations of particles, where each population corresponds to one objective function to be minimized or maximized. Simultaneous minimization or maximization of all objective functions is realized by exchanging a variable between populations. The results show that the VEGSA is outperformed by other multi-objective optimization algorithms, and further enhancements are needed before it can be employed in any application.
Assembly sequence planning (ASP), known as a hard combinatorial optimization problem, is an important part of assembly process planning. Determining the sequence of assembly is arguably the most challenging aspect of the ASP problem; a good assembly sequence can help reduce the cost and time of the manufacturing process. This paper presents an implementation of the binary Gravitational Search Algorithm (BGSA) for solving an ASP problem. Initially, each agent is represented by a feasible assembly sequence according to a precedence matrix. BGSA is then used to update the current assembly sequence to a new feasible one. Using an ASP case study, the results show that the proposed BGSA-based approach is more efficient at solving the ASP problem than approaches based on simulated annealing (SA), a genetic algorithm (GA), and binary particle swarm optimization (BPSO).
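The feasibility constraint every BGSA agent must respect can be sketched as a simple precedence-matrix check: a sequence is feasible if no part is assembled before all of its predecessors. The five-part matrix below is invented for illustration.

```python
import numpy as np

# prec[i, j] == 1 means part i must be assembled before part j.
prec = np.array([[0, 1, 1, 0, 0],
                 [0, 0, 0, 1, 0],
                 [0, 0, 0, 1, 0],
                 [0, 0, 0, 0, 1],
                 [0, 0, 0, 0, 0]])

def is_feasible(seq, prec):
    """Check that every precedence pair appears in the right order."""
    pos = {p: k for k, p in enumerate(seq)}
    return all(pos[i] < pos[j]
               for i in range(len(seq))
               for j in range(len(seq)) if prec[i, j])

assert is_feasible([0, 1, 2, 3, 4], prec)
assert not is_feasible([3, 0, 1, 2, 4], prec)  # part 3 placed too early
```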
An important issue to bear in mind in group decision making situations is that of consistency. However, the expression of consistent preferences is often a very difficult task for decision makers, especially in decision problems with a high number of alternatives and when decision makers use fuzzy preference relations to provide their opinions. This leads to situations where a decision maker may not be able to express all his/her preferences properly and without contradiction. To overcome this problem, we propose treating information granularity as an important and useful asset supporting the goal of reaching consistent fuzzy preference relations. To do so, we develop the concept of a granular fuzzy preference relation, where each pairwise comparison is formed as a certain information granule instead of a single numeric value. Being more abstract, the granular format of the preference model offers the flexibility required to increase the level of consistency.
In this paper, we propose a modified decision tree learning algorithm. Whereas existing approaches improve the traditional decision tree learning algorithm by modifying the learning phase, we modify the prediction phase. Our approach builds a decision tree with a traditional decision tree learning algorithm but predicts a new data item's class label using K-NN. The traditional decision tree learning algorithm predicts a class label based on the ratio of class labels in a leaf node; when a data set is not easy to classify by class label, leaf nodes include many data items and class labels, which decreases the accuracy rate. However, it is difficult to prepare a good training data set, so we use K-NN to predict a class label from the data items in a leaf node. To evaluate our approach, we performed an experiment using part of the open data sets from the UCI machine learning repository, comparing our approach with ID3, a traditional decision tree learning algorithm, and with K-NN. The experimental results show that our approach is better than ID3 when the leaf nodes include many data items, and that it performs like ID3 when the leaf nodes include only a few. We can therefore say that our approach is a useful modification of decision tree learning. Since we do not change the learning process, our approach does not change the readability of the decision tree. In addition, our approach is better than plain K-NN; we think the decision tree acts as a form of data cleaning for K-NN, which suggests that our approach is also useful for K-NN. Although the experiment shows the advantage of our approach, there are some data items we cannot predict correctly. In future work, we will evaluate the experimental results and process in detail, ascertain the causes of error, and consider how to modify our approach to correct them; normalization is likely to be one useful method. We also have to evaluate our new approach using additional open data sets.
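The hybrid prediction phase can be sketched with scikit-learn, whose decision tree stands in for ID3: train a tree normally, then classify a new item with K-NN restricted to the training items that fall into the same leaf. The dataset, tree depth and k are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)  # shallow => big leaves
leaf_of = tree.apply(X)                               # leaf id per train item

def predict(x, k=3):
    """Route x to its leaf, then run K-NN among that leaf's training items."""
    leaf = tree.apply(x.reshape(1, -1))[0]
    in_leaf = leaf_of == leaf
    if len(np.unique(y[in_leaf])) == 1:      # pure leaf: behave like the tree
        return y[in_leaf][0]
    k = min(k, int(in_leaf.sum()))
    knn = KNeighborsClassifier(n_neighbors=k).fit(X[in_leaf], y[in_leaf])
    return knn.predict(x.reshape(1, -1))[0]

print(predict(X[100]))
```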
Opportunities for decision making to improve work efficiency often arise in the work environment. Appropriate task assignment for improving work efficiency is typically treated as an optimization problem, and general optimization approaches improve task assignment based on quantitative profits, such as reducing labor cost or shortening operating time. However, a superior decision maker, such as a manager, also expects the nurturing of workers, achieved by assigning tasks to inexperienced workers, to improve the efficiency of forthcoming work as a qualitative profit. Therefore, decision making for appropriate task selection by a worker should include qualitative profit among its criteria. In order to provide appropriate alternatives in decision making for improving work efficiency, this paper proposes the aggregation of criteria for quantitative and qualitative profit by reducing the uncertainty in the context of the worker's situation.
In the era of globalisation, which acknowledges differences and diversity, multiple languages have been increasingly used to capture requirements. This practice is particularly prevalent in Malaysia, where both the Malay and English languages are used as media of communication. Nevertheless, capturing requirements in multiple languages is often error-prone, due to natural language imprecision compounded by language differences. Considering that two languages may be used to describe requirements for the same system in different ways, we were motivated to develop MEReq, a tool which uses Essential Use Case (EUC) models to support capturing and checking the inconsistencies occurring in English and Malay multi-lingual requirements. MEReq is tablet-compatible, to minimise the time needed for on-site capture and validation of multi-lingual requirements. This paper describes the MEReq approach and demonstrates its use to capture and validate English and Malay requirements.
Requirements validation is a crucial process for determining whether client-stakeholders' needs and expectations of a product are sufficiently correct and complete. Various requirements validation techniques have been used to evaluate the correctness and quality of requirements, but most of these techniques are tedious, expensive and time consuming. Accordingly, most project members are reluctant to invest their time and effort in the requirements validation process. Moreover, automated tool support that promotes effective collaboration between client-stakeholders and engineers is still lacking. In this paper, we describe a novel approach that combines prototyping and test-based requirements techniques to improve the requirements validation process and to promote better communication and collaboration between requirements engineers and client-stakeholders. To assess the potential of the prototype tool, we also present three types of evaluation conducted on it: a usability survey, a three-tool comparison analysis and expert reviews.
Incomplete and incorrect requirements may cause safety-related software systems to fail to achieve their safety goals. It is crucial to ensure software safety by identifying proper software safety requirements during the requirements elicitation activity. Practitioners apply various Safety Risk Assessment Techniques (SRATs) to identify, analyze and assess safety risk. Nevertheless, there is a lack of guidance on how appropriate SRATs and safety processes can be integrated into the requirements elicitation activity to bridge the gap between safety and requirements engineering practices. In this research, we propose an Integration Framework that integrates safety activities and techniques into the existing requirements elicitation activity.
Requirements elicitation is a critical and error-prone stage in software development, where user requirements must be defined accurately to ensure the success of the software system. In a highly competitive market, businesses focus on satisfying customer needs, which largely affect customers' decisions to buy the software product and thus determine its potential success in the market. This study investigates whether eliciting, and thus fulfilling, most of the individual software requirements implies a high level of customer satisfaction, and which types of requirements define the perceived product quality and, as a result, customer satisfaction. To achieve this goal, a questionnaire based on Kano's model of customer satisfaction was conducted in an academic environment. The results show the priorities that should be followed in the implementation of user requirements, which may lead to higher customer satisfaction and, as a consequence, to the success of the software.