Ebook: Use of Risk Analysis in Computer-Aided Persuasion
Defense systems, computer networks and financial systems are increasingly subject to attack by terrorists, fraudsters and saboteurs. These new challenges require concerted action from researchers to develop successful designs for computer networks that are resilient to deliberate attacks, with capacities built into the network to allow quick and automatic recovery with little or no interruption or loss. This book contains contributions to the NATO Advanced Research Workshop on the Use of Risk Analysis in Computer-Aided Persuasion, held in Antalya, Turkey in May 2011. The goal has been to further research and knowledge in the areas of combating threats and risks to entities, systems, and networks. The papers presented here fall into three categories. The first deals with fraud and financial systems. The second considers threats to security and economic entities. The third group of papers considers computer network, IT, and software threats. Cutting-edge methods based on neural networks, social network analytics, Bayesian networks, decision trees, and risk hedging are proposed in these works to deal with these challenging problems.
Use of Risk Analysis in Computer-Aided Persuasion: Threats to the security of defense systems, computer networks, and financial systems posed by terrorists, fraudsters, and saboteurs are an ever-increasing danger. These threats pose new challenges that require concerted action from researchers to develop new technologies for confronting them. Security threats have also taken novel forms in recent times, for example attacks on computer networks or on the financial system network (through hacking or a computer virus), with the aim of crippling or paralyzing these systems. This calls for protective action that identifies weak points in the system and manages and minimizes the risk involved. A resilient financial system should withstand the collapse of a financial institution (due to economic or malicious activity) with minor side effects or reverberations. In addition, terrorists attempt to use these same venues to further their activities, whether using the internet for recruiting or messaging purposes, or using the financial system to fund their activities. Defensive strategies on computer networks call for successful designs that withstand deliberate attacks by intelligent agents. Designing computer networks that are resilient to attacks consists of building capacities into the network that allow for quick and automatic recovery with little or no interruption or loss.
As part of the NATO Science for Peace and Security Programme, the NATO Workshop on the Use of Risk Analysis in Computer-Aided Persuasion was held during May 24-26, 2011 in Antalya, Turkey. The goal has been to further scientific knowledge about ways to detect, counter, and protect against intelligent threats that target computer networks, financial systems, and defense and economic entities. This book is a compilation of the papers presented at this workshop.
The book's papers fall into three major categories. The first group of papers considers fraud and financial systems. Topics such as fraud in insurance claims and in credit cards are considered, and effective approaches based on genetic algorithms, neural networks, and social network analytics are developed. Approaches for the prevention of money laundering and the mitigation of credit risk are presented. Papers in this category also analyze how the interconnectivity of financial institutions influences the depth of defaults, how cross-holdings affect shareholder networks, and how fraud and manipulation in financial trading can lead to market collapse. The second category of papers considers threats to security and economic entities. It covers topics such as the use of immune-system-inspired approaches, submarine swarms, text mining, and combined intelligence/surveillance methods to counter and protect against threats from terrorists and adversaries. New techniques based on fuzzy systems, Bayesian networks, belief theory, and risk hedging are proposed to handle threats and risks to major economic entities. The third and last category of papers considers computer network, IT, and software threats. Effective approaches based on information theory are proposed for protection against computer network intrusion. Neural networks and decision trees are developed for software fault detection. Proactive procedures to avoid or mitigate crippling downtime risks and to protect electronic voting systems are also proposed.
Overall, the workshop covered many diverse topics related to intelligent threats. We hope this workshop will be one contribution (among many made by other researchers) toward making our world safer and our protection investments more cost-effective.
All talks were videotaped and are posted on the workshop website (www.dogus.edu.tr/NatoARW-Risk).
The workshop co-directors are Ekrem Duman (Dogus University, Turkey) and Mohamed Naceur Azaiez (Tunis Business School, Tunisia), and the workshop organizers are Bart Baesens (Katholieke Universiteit Leuven, Belgium) and Amir Atiya (Veros Systems, USA). We would like to acknowledge Yeliz Ekinci (Dogus University, Turkey) and Yusuf Sahin (Marmara University, Turkey), whose help in organizing the workshop was immense and led to its success.
Ekrem Duman (Dogus University, Turkey)
Amir Atiya (Veros Systems, USA)
The complexity of the financial system is increasing at an accelerating pace. This evolution is driven not only by technology but also by business practices. For example, as risk managers reach out for different exposures in the pursuit of diversification, the financial network becomes more tangled. The ensuing uncertainty has been indicated as one of the causes that aggravated the global financial crisis. In this talk, we will cover some recent results on the econometrics of networks and describe an empirical framework for quantifying network risk.
This paper aims to understand the role of the most widely used centrality measures on the specific topic of shareholding networks, and to propose new measures for describing the emergence of market leaders. Data on companies and shareholdings are embedded into a weighted network. Both operational research and complex networks approaches are used. The former provides tools for unveiling paths of ownership and control that might not be evident at first glance. The latter offers the concept of a centrality measure and a wide set of such measures that add knowledge in understanding the relevance of nodes in the network. The combination of both approaches gives hints for understanding phenomena like tunnelling, which is relevant for detecting the response to market fluctuations due to financial instabilities. The present analysis also considers the response of the system under random and targeted attacks. The Italian Stock Market is examined as a case study.
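As a rough illustration of the kind of analysis described above (not the paper's data or methodology), the sketch below builds a toy weighted shareholding network, computes simple weighted centrality measures, and simulates a targeted attack on the most central node. The company names, ownership fractions, and the choice of PageRank as the centrality measure are assumptions.

```python
# A minimal sketch, not the paper's code: centrality and a targeted attack on a
# toy shareholding network. Edge (u, v, w): u holds a fraction w of v's shares.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("HoldingA", "BankB", 0.40),
    ("HoldingA", "InsurerC", 0.25),
    ("BankB", "InsurerC", 0.30),
    ("InsurerC", "UtilityD", 0.55),
])

in_strength = dict(G.in_degree(weight="weight"))   # how strongly each firm is held
pagerank = nx.pagerank(G, weight="weight")         # one weighted centrality measure
print("in-strength:", in_strength)

# Targeted attack: remove the most central node and compare connectivity.
target = max(pagerank, key=pagerank.get)
H = G.copy()
H.remove_node(target)
print("removed:", target,
      "weak components before/after:",
      nx.number_weakly_connected_components(G),
      nx.number_weakly_connected_components(H))
```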
We aim to identify potential vulnerabilities in investment management and financial trading structures that malicious agents can exploit. The opportunities for malicious trading have always existed. In the past, threats have been market “directional”, but today they can be disguised as market “noise” and distributed. The “anomalies” in market structure can be systematically triggered.
We consider the most significant potential threat as arising from the prolonged loss of market liquidity leading to forced disruption of trading across many asset markets. We focus on a single specific objective, “the most probable” destabilisation of price discovery, which can lead to difficulties in asset valuation, widespread loss of confidence, and forced asset redemptions. In the extreme, this could evolve to the point of endangering financial stability.
Given the vast range of issues, we can only hope to establish cross-disciplinary links and make a modest contribution to the future research agenda, but an outline is presented of a possible threat that is organisationally structured to mimic bona fide entities, where the agents can be disguised as cross-financed “proprietary trading” operations.
Portfolio selection for strategic asset management is a crucial activity in many organizations. This activity is a rather complex process that involves a variety of decision-making situations in a very volatile and unpredictable environment. The main objective of asset management is to maximize the organization's profit through trading in financial instruments. With the ongoing globalization process and the rapid development of networked computer information systems, the platform on which financial markets operate has become a large-scale system of interconnected entities. Therefore, many factors, identified as risks, should be considered in the decision-making process, because they affect the financial asset dynamics, especially those that simultaneously exhibit randomness, uncertainty, and vagueness. These factors do not stem only from the nature of the system; deliberate actions that can disrupt the behavior of the system, or even the functioning of the whole organization, must also be taken seriously into consideration. The focus of this paper is on deliberate actions (threats) and how to diminish their influence. An approach is suggested whose primary basis is an adaptive model for the key part of the system that is prone to threats, in this case a model for portfolio selection. As the input, i.e., the adaptive variable, we propose using the output of a fuzzy inference system. The fuzzy inference system makes it possible to involve in the decision-making process the risks arising both from the nature of the system and from deliberate actions, whether by human actors or through attacks on the computer networks on which the asset management system runs.
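The sketch below is only our illustration of the general idea, not the paper's model: a small hand-rolled fuzzy inference system maps a perceived threat level to a factor that scales the risky part of a portfolio. The membership functions, rules, and the 0-1 threat-level input are all assumptions.

```python
# A minimal sketch (our illustration): a tiny fuzzy inference system whose
# output scales the risky-asset weight of a portfolio under a perceived threat.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def exposure_factor(threat):
    """Map a threat level in [0, 1] to a scaling factor for the risky exposure."""
    u = np.linspace(0, 1, 101)                       # output universe: exposure factor
    low = tri(threat, -0.1, 0.0, 0.5)                # input memberships
    med = tri(threat, 0.0, 0.5, 1.0)
    high = tri(threat, 0.5, 1.0, 1.1)
    # Rules: low threat -> keep exposure high, medium -> moderate, high -> cut exposure.
    agg = np.maximum.reduce([np.minimum(low,  tri(u, 0.6, 0.9, 1.0)),
                             np.minimum(med,  tri(u, 0.3, 0.5, 0.7)),
                             np.minimum(high, tri(u, 0.0, 0.1, 0.4))])
    return float((u * agg).sum() / (agg.sum() + 1e-12))  # centroid defuzzification

risky_weight = 0.6                                   # hypothetical baseline allocation
factor = exposure_factor(threat=0.8)
adjusted = np.array([risky_weight * factor, 1 - risky_weight * factor])
print("exposure factor:", round(factor, 3), "adjusted weights:", adjusted.round(3))
```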
Internet sales are growing very fast, thus becoming a major target for fraudsters. This fraud is mostly perpetrated by international organized crime rings. Fighting fraud is thus critical for our societies' security. Classically, fraud detection has been implemented through data mining techniques; however, social networks techniques have recently emerged in the security domain. We present here a methodology to use social networks together with data mining for fraud analysis and illustrate the approach through results recently obtained in an ongoing project, with transaction data provided by a major national network.
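As one possible illustration of combining social network analysis with data mining for fraud detection, the sketch below links transactions that share a card or shipping address and derives a simple "linked to known fraud" feature. The field names, toy records, and guilt-by-association rule are our assumptions, not the project's methodology or data.

```python
# A minimal sketch (ours): a bipartite transaction graph and a relational
# feature that can be fed into a conventional data-mining classifier.
import networkx as nx

transactions = [
    {"id": "t1", "card": "c1", "addr": "a1", "fraud": True},
    {"id": "t2", "card": "c1", "addr": "a2", "fraud": False},
    {"id": "t3", "card": "c2", "addr": "a2", "fraud": False},
    {"id": "t4", "card": "c3", "addr": "a3", "fraud": False},
]

G = nx.Graph()
for t in transactions:
    G.add_node(t["id"], fraud=t["fraud"])            # transaction nodes
    G.add_edge(t["id"], ("card", t["card"]))         # shared-card links
    G.add_edge(t["id"], ("addr", t["addr"]))         # shared-address links

# Relational feature: does a transaction sit in the same connected component
# as a known fraudulent one?
for comp in nx.connected_components(G):
    tainted = any(G.nodes[n].get("fraud") for n in comp)
    for n in comp:
        if isinstance(n, str):                       # only the transaction nodes
            print(n, "linked_to_fraud =", tainted)
```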
Standard linear models are very easily readable, but have limited model flexibility. Advanced neural network models and kernel-based learning techniques are less straightforward to interpret but can capture more complex multivariate non-linear relations. Whereas more flexible models may be appealing because of their higher learning capacity, it is more challenging to control their generalization capacity and avoid overfitting, using, e.g., Bayesian inference or model complexity criteria. In financial practice, it is important to consider the prediction capacity together with the element of model risk inherent in so-called black-box models. Combinations of linear and kernel models, together with rule extraction techniques, are used to obtain the higher performance of kernel models while limiting model risk. The approach is illustrated with practical case studies.
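A minimal sketch of the general idea of combining a readable linear model with a more flexible kernel model is given below, using synthetic data and simple probability averaging; the specific models, data, and combination rule are assumptions, not the case studies described in the paper.

```python
# A minimal sketch (ours): averaging an interpretable linear model with a
# kernel model on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)    # readable baseline
kernel = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)  # flexible black box

p_lin = linear.predict_proba(X_te)[:, 1]
p_ker = kernel.predict_proba(X_te)[:, 1]
p_mix = 0.5 * p_lin + 0.5 * p_ker                             # simple combination

for name, p in [("linear", p_lin), ("kernel", p_ker), ("combined", p_mix)]:
    print(name, round(roc_auc_score(y_te, p), 3))
```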
With developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN have been developed for credit cards, these mechanisms do not prevent the most common fraud types, such as fraudulent credit card usage over virtual POS terminals or mail orders, the so-called online credit card fraud. As a result, fraud detection is the essential tool, and probably the best way, to stop such fraud types. In this study, classification models based on well-known data mining algorithms such as decision trees, Artificial Neural Networks (ANN), Logistic Regression (LR), and Support Vector Machines (SVM) are developed and applied to the credit card fraud detection problem. Furthermore, a new cost-sensitive decision tree algorithm is developed that minimizes the sum of misclassification costs while selecting the splitting attribute at each non-terminal node of the tree. The performances of the developed models are compared with respect not only to the well-known performance metric True Positive Rate (TPR) but also to newly defined cost-based performance metrics specific to the problem domain. The performance of the model built using this cost-sensitive decision tree algorithm in identifying frauds is compared with the pre-built models. The results show that this cost-based decision tree algorithm outperforms the existing well-known methods in terms of the fraudulent transactions identified and the amount of losses recovered. This method can be readily applied to real-world credit card fraud detection tasks.
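The following sketch is our simplified illustration of a cost-sensitive splitting criterion, not the chapter's algorithm: a candidate split is scored by the total misclassification cost left in its children, where a missed fraud costs the transaction amount and a false alarm costs a fixed review fee (both assumptions).

```python
# A minimal sketch (ours): choosing a split threshold by the total
# misclassification cost it leaves behind.
import numpy as np

REVIEW_COST = 5.0  # hypothetical cost of flagging a legitimate transaction

def leaf_cost(y, amount):
    """Cost of the cheaper of the two possible leaf labels (0 = legit, 1 = fraud)."""
    cost_if_legit = amount[y == 1].sum()          # predict 'legit': pay every missed fraud
    cost_if_fraud = REVIEW_COST * (y == 0).sum()  # predict 'fraud': pay every false alarm
    return min(cost_if_legit, cost_if_fraud)

def split_cost(x, y, amount, threshold):
    left = x <= threshold
    return leaf_cost(y[left], amount[left]) + leaf_cost(y[~left], amount[~left])

# Toy data: feature x, label y (1 = fraud), transaction amount.
x = np.array([10, 50, 200, 400, 900, 950], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
amount = np.array([10, 50, 200, 400, 900, 950], dtype=float)

best = min(((split_cost(x, y, amount, t), t) for t in x[:-1]), key=lambda p: p[0])
print("best threshold:", best[1], "remaining cost:", best[0])
```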
Recently, a large variety of models have been developed for the protection of critical infrastructures against intentional attacks, including defense-attack strategies. The current chapter presents some of these models, emphasizing the problem statements and briefly presenting the solution methodologies. Special focus is placed on intentional attacks and on reliability problems with and without full information, as well as on network problems. The defender and attacker problems are each explained, together with the alternative objectives of each agent. The use of game theory is discussed. Moreover, protection tools such as redundancy, deterrence, and the employment of false targets are also outlined.
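As a toy illustration of the defender-attacker setting (not any specific model from the chapter), the sketch below considers a defender who hardens one of two targets and an attacker who then strikes the more damaging one; the defender picks the allocation that minimizes this worst case. All damage values and the protection effectiveness are hypothetical.

```python
# A minimal sketch (ours): a sequential defender-attacker game with full information.
import numpy as np

base_damage = np.array([100.0, 60.0])      # hypothetical value of target 0 and target 1
protection = 0.7                           # fraction of damage removed by defending
# damage[d][a]: expected damage when target d is defended and target a is attacked.
damage = np.array([[base_damage[a] * ((1 - protection) if a == d else 1.0)
                    for a in range(2)] for d in range(2)])

attacker_best = damage.argmax(axis=1)      # attacker's best response to each defense
worst_case = damage.max(axis=1)            # damage the defender then suffers
defender_choice = int(worst_case.argmin()) # minimax defense allocation
print("defend target", defender_choice,
      "-> attacker hits target", int(attacker_best[defender_choice]),
      "expected damage", worst_case[defender_choice])
```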
Automated and accurate detection of anomalous, terrorist documents is a desired capability among numerous nations all over the world. In this study, we have improved an algorithm [1] that deals with this detection challenge. The training part includes two stages: the first builds fuzzy lexicons, and the second constructs clusters of labeled documents using cluster analysis methodology. We assume that we have two collections of labeled normal and terrorist documents, downloaded from normal and terrorist websites, respectively. Three separate, disjoint fuzzy lexicons and two separate sets of clusters are induced from these two collections. In this chapter, we propose a new approach for constructing the lexicons based on the ratio between keyphrase appearances in terrorist and normal documents. The keyphrases are divided into the following subsets: fuzzy normal – keyphrases that appear mainly in the normal documents, fuzzy terrorist – keyphrases that appear mainly in the terrorist documents, and common – keyphrases that appear in both types of documents with similar frequency. In the detection stage, we combine an existing clustering-based classification method with the fuzzy lexicons. When classifying a new, incoming document, we count the number of keyphrases in each fuzzy subset (fuzzy normal, fuzzy terrorist, and common). If the fuzzy normal subset is nonempty and the terrorist subset is empty, or vice versa, the document is labeled as normal or terrorist, respectively. Otherwise, the sizes of the two fuzzy subsets are compared using threshold criteria. If a definite conclusion cannot be derived, the distances of the whole document vector from both sets of centroids are calculated to reach a final decision.
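A much simplified sketch of the lexicon-construction and labeling steps is given below; the toy corpora, the ratio threshold, and the fallback rule are our assumptions, and the clustering/centroid stage is only indicated in a comment.

```python
# A minimal sketch (ours): ratio-based keyphrase lexicons and a simple labeling rule.
from collections import Counter

normal_docs = [["weather", "report", "sports"], ["sports", "news", "report"]]
bad_docs    = [["attack", "target", "weapons"], ["weapons", "attack", "news"]]

def frequencies(docs):
    """Fraction of documents in which each keyphrase appears."""
    c = Counter(w for d in docs for w in set(d))
    return {w: c[w] / len(docs) for w in c}

f_norm, f_bad = frequencies(normal_docs), frequencies(bad_docs)
RATIO = 3.0                                   # hypothetical dominance threshold
vocab = set(f_norm) | set(f_bad)
lex_norm = {w for w in vocab if f_norm.get(w, 0) > RATIO * f_bad.get(w, 0)}
lex_bad  = {w for w in vocab if f_bad.get(w, 0) > RATIO * f_norm.get(w, 0)}
lex_common = vocab - lex_norm - lex_bad

def classify(doc):
    n = sum(w in lex_norm for w in doc)
    b = sum(w in lex_bad for w in doc)
    if n and not b:
        return "normal"
    if b and not n:
        return "terrorist"
    # Otherwise fall back, e.g., to threshold comparison of the counts or to
    # distances from the cluster centroids of the labeled collections (omitted).
    return "normal" if n >= b else "terrorist"

print(classify(["weapons", "attack", "plan"]))
```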
With the growing complexity of civilian and defence systems and structures, intelligent security attacks have become more difficult to detect, analyse, and respond to in real time. Attacks in the information and communication technologies environment are usually defended against by the individual elements targeted, or according to the type of attack. This approach is difficult, costly, and always a step behind attackers, who have a strategic view. Attackers may also target the defender's finances by driving its defence expenditures to unsustainable levels. A general macro vision of defence as a whole is proposed here, with two major aspects. First, the defence system is composed of independent layers of detection, starting from the cheapest and most available and progressing to the expert level with high costs, based on autonomous agents, swarm behaviour, and the immune system metaphor. Secondly, the whole defence should be considered as a single system, in which cyber-attacks, financial fraud, money laundering, organised crime, border trafficking, terrorism, guerrilla warfare, stealth attacks, conventional war, and other threats are treated as parts of the same defence, feeding information to each other in a real-time environment.
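The sketch below is only our illustration of the layered-detection idea, not the chapter's system: detection layers are ordered from cheapest to most expensive, and an event is escalated only while the cheaper layers remain unsure. The layer names, costs, and rules are hypothetical.

```python
# A minimal sketch (ours): escalating an event through detection layers of increasing cost.
def signature_layer(event):
    return "malicious" if "known_bad_hash" in event else "unsure"

def anomaly_layer(event):
    return "malicious" if event.get("rate", 0) > 1000 else "unsure"

def expert_layer(event):
    # Placeholder for the costly, expert-level analysis (human or heavyweight model).
    return "benign"

LAYERS = [("signatures", 1, signature_layer),
          ("anomaly detection", 10, anomaly_layer),
          ("expert analysis", 1000, expert_layer)]

def classify(event):
    spent = 0
    for name, cost, layer in LAYERS:
        spent += cost
        verdict = layer(event)
        if verdict != "unsure":
            return verdict, name, spent
    return "unsure", "none", spent

print(classify({"rate": 5000}))             # caught by the mid-cost layer
print(classify({"known_bad_hash": True}))   # caught by the cheapest layer
```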
The increasing number of information security incidents, organized crimes, and intelligent threats deserves much closer attention. It is important to bear in mind that effective security cannot be achieved by relying on technology alone. Tight coordination between people and technology is required to achieve effective security. Tracking the effects of the Network Society and the Information Revolution on the intelligence agencies of highly developed states will, it is hoped, help solve the problems of organized crime and of intelligent and terrorist threats. Of course, the analysis of intelligence activities is clearly vital. This study discusses information security, cryptography, cyber-attacks, and cyber threats. The study also describes types and forms of intelligence.
There is ample evidence that the demand for products held in an inventory system is often correlated with the returns of securities in financial markets. Therefore, the risks associated with the profit or cash flow in the inventory system can be hedged by investing in a portfolio of instruments in the financial system. In order to get insights, we take this idea to the extreme by supposing that random demand, as well as random supply, both depend “perfectly” on the price of a security in an almost arbitrary fashion. This allows one to represent the cash flow by a replicating portfolio of derivative securities and bonds. Thus, the value of the cash flow needs to be determined in terms of the prices of these financial instruments. The decisions of the inventory manager are therefore based on this pricing mechanism. In particular, in a complete market with some risk-neutral martingale measure that yields no arbitrage opportunities, the expected value of the cash flow should be determined using this measure. We discuss these issues in the context of a single period newsvendor model with random demand and supply.
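A minimal numerical sketch of this valuation idea is shown below (our illustration only): terminal security prices are simulated under a risk-neutral measure, demand is taken as a hypothetical deterministic function of that price, and the expected discounted newsvendor cash flow is maximized over the order quantity. The functional form of demand, all parameters, and the Monte Carlo approach are assumptions.

```python
# A minimal sketch (ours): a single-period newsvendor valued under a risk-neutral measure.
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma, T = 100.0, 0.03, 0.2, 1.0          # security price dynamics (hypothetical)
price, cost = 12.0, 7.0                          # selling price and unit cost

# Risk-neutral terminal prices of the security (geometric Brownian motion).
Z = rng.standard_normal(100_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

demand = np.maximum(200.0 - ST, 0.0)             # hypothetical "perfect" dependence on S_T

def value(q):
    cash_flow = price * np.minimum(q, demand) - cost * q
    return np.exp(-r * T) * cash_flow.mean()     # expectation under the risk-neutral measure

qs = np.arange(0, 201, 5)
best_q = qs[np.argmax([value(q) for q in qs])]
print("order quantity:", best_q, "value:", round(value(best_q), 2))
```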
In this paper we examine the present state of underwater systems, including both machines and humans, identify problems associated with these systems, and show how fuzzy logic can be used to extend their functionality and safety through risk analysis. Specifically, we examine autonomous underwater vehicles, investigating navigational issues in a myriad of locations and conditions. We also discuss issues associated with human divers, propose a new sensor acquisition technique to help make diving safer, and review techniques for implementing a new decompression algorithm in hardware.
The evolution of the current industrial context and the increase in competitive pressure have led companies to adopt new concepts of management. In this context, we have recently proposed an integrated management system including Quality, Environment, and Safety (QSE) management systems [1], using risk management as an integration factor. This paper proposes an implementation of the most important part of the plan phase, which consists of analyzing and selecting the most critical risks with regard to all QSE objectives. To this end, we propose to adapt the well-known risk management approach of Fuzzy Failure Mode and Effects Analysis (FMEA) in order to define for each risk a multi-level Risk Priority Number relative to the different QSE objectives. Then, to select the most critical risks, we propose to use a multicriteria approach, the Analytic Hierarchy Process (AHP), in order to take into consideration the different values relative to each risk.
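The sketch below illustrates, under our own assumptions rather than the paper's implementation, how a per-objective Risk Priority Number (severity × occurrence × detection) can be computed for each risk and then weighted by AHP priorities over the Quality, Safety, and Environment objectives to rank the risks. All scores and the comparison matrix are hypothetical.

```python
# A minimal sketch (ours): multi-level RPNs combined with AHP objective weights.
import numpy as np

# Severity, Occurrence, Detection scores (1-10) per risk and per QSE objective.
risks = {
    "supplier_failure": {"Q": (7, 4, 5), "S": (3, 4, 6), "E": (2, 4, 7)},
    "chemical_leak":    {"Q": (4, 2, 3), "S": (9, 2, 4), "E": (9, 2, 5)},
}

# AHP pairwise comparison of objectives (Q vs S vs E); weights = principal eigenvector.
A = np.array([[1.0, 1/3, 2.0],
              [3.0, 1.0, 4.0],
              [1/2, 1/4, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # weights for (Q, S, E)

for name, scores in risks.items():
    rpn = {obj: s * o * d for obj, (s, o, d) in scores.items()}
    overall = sum(w[i] * rpn[obj] for i, obj in enumerate(("Q", "S", "E")))
    print(name, rpn, "weighted criticality:", round(overall, 1))
```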
Risk analysis has become very important, especially with the increase of accidents in industrial fields. In this context, we present in this paper a new approach based on belief functions theory for determining the safety integrity level of a safety instrumented system. This approach consists of collecting data from expert opinions by eliciting judgements using a qualitative method, dividing them into groups using the k-means algorithm, and aggregating them by applying a hierarchical method. The output of the data collection process is then integrated into a risk evaluation model in order to obtain the safety integrity level. As the evaluation method, we propose a new generalized risk graph, named the Evidential Risk Graph, which is able to deal with imperfect data modeled with belief functions theory.
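As a small illustration of the belief-function machinery involved (not the paper's full pipeline, which also clusters experts with k-means and aggregates hierarchically), the sketch below combines two experts' mass functions over candidate safety integrity levels with Dempster's rule. The frame and the masses are assumptions.

```python
# A minimal sketch (ours): Dempster's rule of combination for two expert opinions.
from itertools import product

# Mass functions over subsets of the frame {SIL1, SIL2, SIL3}.
m1 = {frozenset({"SIL2"}): 0.6,
      frozenset({"SIL2", "SIL3"}): 0.3,
      frozenset({"SIL1", "SIL2", "SIL3"}): 0.1}
m2 = {frozenset({"SIL3"}): 0.5,
      frozenset({"SIL2", "SIL3"}): 0.4,
      frozenset({"SIL1", "SIL2", "SIL3"}): 0.1}

def dempster(ma, mb):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(ma.items(), mb.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                   # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

for subset, mass in sorted(dempster(m1, m2).items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```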
In this work, eigenvalue problems are considered for elliptic operators with a variable domain. The eigenvalues of these operators are regarded as functionals of the domain. Using the one-to-one correspondence between bounded convex domains and their support functions, the variation of the domain is expressed through the variation of its support function, and the first variation of this functional is calculated. Using the obtained formulas, the behavior of the eigenvalues is investigated as the domain varies. Shape optimization problems are then considered for the eigenvalues, the necessary conditions of optimality with respect to the domain are proved, and an algorithm is offered for the numerical solution of the considered problems.
As some characteristics of certain mechanical systems are described by the eigenvalues of the corresponding operators, extremal problems for the eigenvalues with respect to the domain (shape optimization problems) are in fact the mathematical formulation of the problem of finding the optimal shape that defines the critical value of the corresponding physical characteristic. Solving such problems makes it possible to avoid certain risk situations in applications.
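As a hedged illustration only (the abstract does not specify the operator, so we take the simplest case of the Dirichlet Laplacian on a bounded domain Ω), the first eigenvalue viewed as a functional of the domain can be written via the Rayleigh quotient:

```latex
\lambda_1(\Omega) \;=\; \min_{\substack{u \in H_0^1(\Omega),\; u \neq 0}}
\frac{\int_\Omega |\nabla u|^2 \, dx}{\int_\Omega u^2 \, dx}
```

so that minimizing or maximizing λ₁(Ω) over admissible domains (for example, under a volume constraint) is precisely a shape optimization problem for this functional of the domain.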
Network monitoring and protection often involve large-scale networks and many organizations. Data for observing and inferring large-scale events, however, may be collected locally by monitors. Organizations that own monitors, and thus data, are often reluctant to share information due to security and privacy concerns. Communities form when organizations voluntarily join in good will and contribute data to share. Questions arise: Under a given privacy constraint, what types of shared information would be effective for network monitoring and protection? How can such information be quantified from large community-based data repositories? This work describes a large-scale community network and the corresponding data sets. A metric from information theory, the Renyi information entropy, is then introduced to measure the effectiveness of shared information under a common privacy constraint. In particular, two types of shared information are studied, one for centralized and the other for decentralized sharing. Real data from DShield is used as an example to show how effective the shared information is for inference through a large-scale community network.
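A minimal sketch of the entropy computation (ours, not the DShield analysis) is given below: the Renyi entropy of order α for an empirical distribution of alert counts, with the Shannon entropy recovered in the limit α → 1. The counts and the choice of α are hypothetical.

```python
# A minimal sketch (ours): Renyi entropy of an empirical alert distribution.
import numpy as np

def renyi_entropy(counts, alpha=2.0):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    if np.isclose(alpha, 1.0):                    # limit alpha -> 1 is Shannon entropy
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

# Alerts per source reported by a community of monitors (toy counts).
alert_counts = [120, 30, 25, 10, 5, 5, 3, 2]
print("Renyi H_2:", round(renyi_entropy(alert_counts, alpha=2.0), 3))
print("Shannon H:", round(renyi_entropy(alert_counts, alpha=1.0), 3))
```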
The new era in technology has brought new risks in computer security. Cyber security has become as important as security in the conventional sense. That is why in recent years we have been witnessing rapid investments in the security of computer systems. As a special type of computer system, SCADA systems play their own role in defining security systems. Modern SCADA systems used in infrastructure are threatened by cyber-attacks, as a result of their higher visibility in recent years and the conversion of legacy stovepipe implementations to modern information technology (IT) systems. Nevertheless, modern SCADA systems are very often crucial for the normal operation of everyday life and must be fully operable at all times. That is why the existence of effective security governance for SCADA is essential for future computer/SCADA systems.
Several continuous computing technologies can be applied in order to mitigate the IT-related risks that cause business discontinuance. An IT infrastructure in the form of an “always-on enterprise information system” must be able to meet the requirements for continuous or “24×7×365” computing – an operating platform that represents the main prerequisite for business continuance. The paper explores the most common IT business risks and presents a model for the implementation of several continuous computing technologies in order to mitigate the risks and enhance business continuity.
A Metropolis-criterion-based fuzzy Markov game flow controller has been designed for coping with congestion in high-speed networks and networked systems. For such networks, complete and accurate information is not easy to obtain in real time because of uncertainties and highly time-varying delays. A viable alternative is to employ Q-learning, which is independent of a mathematical model and prior knowledge and can be used in a game-theoretic framework. It enables the needed parameters to be learned from the operating network environment. The fuzzy Markov game offers a promising platform for robust control in the presence of external disturbances and unknown, but bounded, parameter variations. The Metropolis criterion handles the balance between exploration and exploitation in action selection. In a similar game-theoretic framework, and by employing SOM-based virtual sensors, one recent intrusion detection solution from the literature, which is compatible with our network flow control design, is also highlighted and outlined. Simulation experiments demonstrate that the proposed controller can learn to take the best action in order to regulate source flows. Thus it can guarantee high throughput and a low packet loss ratio while efficiently avoiding congestion.
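The sketch below illustrates only the Metropolis-criterion action selection inside tabular Q-learning, not the paper's fuzzy Markov game controller; the toy "congestion" environment, temperature schedule, and learning parameters are assumptions.

```python
# A minimal sketch (ours): Metropolis-criterion exploration in tabular Q-learning.
import math, random

random.seed(0)
n_states, actions = 5, [0, 1, 2]          # e.g., decrease / hold / increase source rate
Q = [[0.0] * len(actions) for _ in range(n_states)]

def metropolis_action(s, T):
    greedy = max(actions, key=lambda a: Q[s][a])
    candidate = random.choice(actions)
    delta = Q[s][candidate] - Q[s][greedy]
    # Accept a worse (exploratory) action with probability exp(delta / T).
    if delta >= 0 or random.random() < math.exp(delta / T):
        return candidate
    return greedy

def step(s, a):
    """Toy dynamics: reward is high when the 'queue length' state stays mid-range."""
    s2 = max(0, min(n_states - 1, s + a - 1))
    return s2, 1.0 if s2 == 2 else -abs(s2 - 2)

alpha, gamma, T = 0.1, 0.9, 1.0
s = 0
for t in range(5000):
    a = metropolis_action(s, T)
    s2, r = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s, T = s2, max(0.05, T * 0.999)        # cool the temperature over time

print([max(actions, key=lambda a: Q[st][a]) for st in range(n_states)])
```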
Financial fraud is one of the important security issues for banks. Money laundering, in turn, is closely related to such fraud. Banks should score every customer and every transaction for the detection of such financial crimes. These scoring systems need calibration of their parameters if they are based on expert opinions. We have compared the money laundering risk scores of customers, using previously identified money laundering data and classification techniques. We have also introduced new metrics for evaluating the performance of the classifiers.
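As a rough sketch of the comparison described (our illustration, on synthetic data rather than real money-laundering records), the code below contrasts a hand-weighted, expert-style risk score with a classifier trained on labeled cases, using AUC and a top-k hit rate as example metrics.

```python
# A minimal sketch (ours): expert-style risk score vs. a trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
X = rng.random((n, 3))                     # e.g., cash intensity, cross-border ratio, velocity
y = (X @ np.array([2.0, 3.0, 1.0]) + rng.normal(0, 0.8, n) > 3.8).astype(int)

expert_score = X @ np.array([1.0, 1.0, 1.0])      # uncalibrated expert weighting
clf = LogisticRegression().fit(X, y)
model_score = clf.predict_proba(X)[:, 1]

def hit_rate_at_k(score, labels, k=100):
    """Share of true cases among the k highest-scored customers."""
    top = np.argsort(score)[::-1][:k]
    return labels[top].mean()

for name, s in [("expert", expert_score), ("classifier", model_score)]:
    print(name, "AUC:", round(roc_auc_score(y, s), 3),
          "hit@100:", round(hit_rate_at_k(s, y), 3))
```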