Ebook: Intelligent Autonomous Systems 10
This volume contains the proceedings of the tenth International Conference on Intelligent Autonomous Systems (IAS-10), held in Baden-Baden, Germany. The IAS conference brings together leading researchers interested in all aspects of autonomy and adaptivity of artificial systems. One of the driving forces of this conference is the observation that intelligence and autonomy are best studied and demonstrated using mobile robots acting autonomously in real-world environments and under challenging conditions. The papers contained in the final program of the conference cover a wide spectrum of research in autonomous intelligent systems, including agent technology, walking robots, motion planning, robot control, multi-robot systems, navigation, perception, applications, learning and adaptation, and humanoid robots, to mention just a few. These proceedings aim to provide the reader with new ideas and to foster the exchange of knowledge on research in autonomous systems. Previous IAS proceedings are available through IOS Press as well.
Welcome to the 10th International Conference on Intelligent Autonomous Systems (IAS-10). The International Conference on Intelligent Autonomous Systems is one of the longest-established robotics events, and we are proud to host it in Baden-Baden, Germany, this year. The goal of the IAS conference is to bring together leading researchers interested in all aspects of autonomy and adaptivity of artificial systems. One of the driving forces of this conference is the observation that intelligence and autonomy are best studied and demonstrated using mobile robots acting autonomously in real-world environments and under challenging conditions.
This year, 80 papers were submitted to IAS-10. Each paper was evaluated by between two and six reviewers, and 49 papers were accepted for presentation at the conference. IAS-10 features technical presentations of papers of high scientific quality, invited talks, demonstrations, and workshops. The papers contained in the final program cover a wide spectrum of research in autonomous intelligent systems, including agent technology, walking robots, motion planning, robot control, multi-robot systems, navigation, perception, applications, learning and adaptation, and humanoid robots, to mention just a few.
We are especially proud that the keynote presentation will be given by Professor Roland Siegwart from ETH Zurich. He is one of the most outstanding European researchers in the area of autonomous intelligent robots.
The proceedings include all accepted papers and reflect the variety of topics concerning intelligent autonomous systems. The organizers would like to express their gratitude to all contributors, both in the preparation phase and during the meeting. Without this additional assistance, IAS-10 would never have been a success. We would especially like to thank the program committee members for their valuable support and for preparing the reviews, which allowed us to make a proper selection of high-quality papers. Many thanks also go to the additional reviewers.
The staff at the Autonomous Intelligent Systems lab of the University of Freiburg and at the Institute for Computer Science and Engineering of the University of Karlsruhe played a major part in planning and organizing the conference.
We wish all participants an enjoyable IAS-10 and a pleasant stay in the beautiful area of Baden-Baden. We hope that IAS-10 will provide you with new ideas, allow you to exchange knowledge, and be a prosperous event for you.
Enjoy IAS-10.
Wolfram Burgard, Rüdiger Dillmann, Christian Plagemann, Nikolaus Vahrenkamp
A practical solution based on multi-agent protocols for the development of real-world multi-robot applications is presented. FIPA standard protocols implemented by the JADE library provide the standard functionality for a number of tasks, while robot behaviors are built upon the Player middleware. These components provide off-the-shelf tools that allow a straightforward implementation of indoor localization and navigation tasks for a team of mobile robots. This integration combines proven mobile-robot algorithms with a distributed infrastructure and extends the capabilities from a single robot to a whole team, thus allowing the development of cooperative applications. As a proof of concept, an auction-like goal assignment task is presented: the robot team is given a goal, each robot proposes an estimated cost for achieving it, and the best proposal is selected. Most of the control flow is automated by the standard interaction protocols. Experimental evaluation demonstrates the advantages of combining both frameworks for a practical yet sound development of multi-robot applications.
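To make the auction-like assignment concrete, the following Python sketch mimics one call-for-proposals round in the style of a contract-net interaction. The class names, the straight-line cost model, and the selection rule are illustrative assumptions; they do not reproduce the actual JADE/Player interfaces used by the authors.

```python
import math

class RobotAgent:
    """Illustrative bidder: estimates the cost of reaching a goal from its current pose."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def propose(self, goal):
        # Assumed cost model: straight-line distance to the goal.
        gx, gy = goal
        return math.hypot(gx - self.x, gy - self.y)

def contract_net_round(goal, bidders):
    """One call-for-proposals round: collect bids, accept the cheapest one."""
    proposals = {robot.name: robot.propose(goal) for robot in bidders}
    winner = min(proposals, key=proposals.get)
    return winner, proposals

if __name__ == "__main__":
    team = [RobotAgent("r1", 0.0, 0.0), RobotAgent("r2", 4.0, 1.0), RobotAgent("r3", 1.0, 2.0)]
    winner, bids = contract_net_round((3.0, 3.0), team)
    print("bids:", bids, "-> goal assigned to", winner)
```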
This paper introduces and describes a new type of wheeled locomotor, which we refer to as a “trident steering walker.” The wheeled locomotor is a nonholonomic mechanical system consisting of an equilateral triangular base, three joints, three links, and four steering systems. The equilateral triangular base has a steering system at its center of mass. At each apex of the base is a joint which connects the base and a link. The link has a steering system at its midpoint. The wheeled locomotor transforms driving of the three joints into its movement by operating the four steering systems. This means that the wheeled locomotor achieves undulatory locomotion in which changes in its own shape are transformed into its net displacement. We assume that there is a virtual joint at the end of the first link. The virtual joint connects the first link and a virtual link which has a virtual axle at its midpoint and a virtual steering system at its end. We prove that, by assuming the presence of such virtual mechanical elements, it is possible to convert the kinematic equation of the trident steering walker into five-chain, single-generator chained form within the mathematical framework of differential geometry. Based on the chained form, we derive a path-following feedback control method which causes the trident steering walker to follow a straight path. The validity of the mechanical design of the trident steering walker, the conversion of its kinematic equation into chained form, and the straight-path-following feedback control method has been verified by computer simulation.
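As a reference point only, a single-generator chained system with one generator input $u_1$ and $m$ chains (the abstract refers to a five-chain instance, $m = 5$) has the standard generic form

\[
\dot{z}_0 = u_1, \qquad \dot{z}_1^{\,j} = u_{j+1}, \qquad \dot{z}_i^{\,j} = z_{i-1}^{\,j}\,u_1, \qquad i = 2,\dots,n_j,\ \ j = 1,\dots,m,
\]

in which $u_1$ drives every chain and the remaining inputs feed the heads of the individual chains. The walker's specific coordinate transformation is developed in the paper and is not reproduced here.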
In outdoor environments, a great variety of ground surfaces exists. To ensure safe navigation, a mobile robot should be able to identify the current terrain so that it can adapt its driving style. If the robot navigates in known environments, a terrain classification method can be trained on the expected terrain classes in advance. However, if the robot is to explore previously unseen areas, it may face terrain types that it has not been trained to recognize. In this paper, we present a vibration-based terrain classification system that uses novelty detection based on Gaussian mixture models to detect if the robot traverses an unknown terrain class. If the robot has collected a sufficient number of examples of the unknown class, the new terrain class is added to the classification model online. Our experiments show that the classification performance of the automatically learned model is only slightly worse than the performance of a classifier that knows all classes beforehand.
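As an illustration of the kind of GMM-based novelty test the abstract refers to, the sketch below flags vibration feature vectors whose log-likelihood under the trained mixture falls below a threshold. The feature dimensionality, the threshold choice, and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Train a mixture on vibration features of the known terrain classes (assumed 2-D features here).
rng = np.random.default_rng(0)
known_features = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
gmm = GaussianMixture(n_components=3, random_state=0).fit(known_features)

# Novelty threshold: e.g. the 1st percentile of the training log-likelihoods.
threshold = np.percentile(gmm.score_samples(known_features), 1.0)

def is_novel_terrain(feature_vector):
    """Return True if the sample is unlikely under all known terrain classes."""
    return gmm.score_samples(feature_vector.reshape(1, -1))[0] < threshold

# Samples far from the training distribution would be collected as a candidate new class.
print(is_novel_terrain(np.array([8.0, 8.0])))   # True  (novel terrain)
print(is_novel_terrain(np.array([0.1, -0.2])))  # False (known terrain)
```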
This paper deals with the problem of how to implement a software controller for a robot whose on-board computer has only a small amount of random access memory (RAM). This problem is essentially different from that of computing a controller with a small amount of RAM; the paper purely examines the trade-off between the memory use and the performance of a controller. We find that policies compressed by vector quantization yield an efficient representation.
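One way to picture a vector-quantized policy representation of this kind: cluster the state vectors into a small codebook and store one action per codeword, so memory grows with the codebook size rather than with the state space. The sketch below is a hedged illustration under that assumption, not the authors' encoder.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume a dense tabular policy: a state feature vector maps to an action index.
rng = np.random.default_rng(1)
states = rng.uniform(size=(10_000, 4))                 # 10k states, 4-D features (toy data)
actions = (states[:, 0] > states[:, 1]).astype(int)    # toy policy to be compressed

# Vector quantization: keep only K codewords and one action per codeword.
K = 64
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(states)
code_actions = np.array([np.bincount(actions[codebook.labels_ == k]).argmax() for k in range(K)])

def compressed_policy(state):
    """Look up the action of the nearest codeword (memory O(K) instead of O(#states))."""
    return code_actions[codebook.predict(state.reshape(1, -1))[0]]

print(compressed_policy(np.array([0.9, 0.1, 0.5, 0.5])))  # likely 1
```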
The presented algorithm keeps track of people walking within the field of view of a laser rangefinder while constructing a short-term model of the static environment. Background detection does not require a learning phase and is based on the extraction of simple geometric shapes and on the notion of their vitality introduced in this study. Object tracking is based on Multiple Hypothesis Tracking and Kalman filtering, with separate Kalman filters following individual people's trajectories. The proposed method is experimentally evaluated.
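For readers unfamiliar with the filtering step, here is a minimal constant-velocity Kalman predict/update cycle of the kind each per-person filter would run on laser detections; the state layout, matrices, and noise levels are illustrative assumptions.

```python
import numpy as np

dt = 0.1  # scan period (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # the laser measures position only
Q = 0.05 * np.eye(4)                         # process noise (assumed)
R = 0.02 * np.eye(2)                         # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for state x = [px, py, vx, vy] with covariance P."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the laser detection z = [px, py]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([1.0, 2.0]))
print(x[:2])  # position estimate pulled toward the measurement
```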
In this paper we focus on solving a path following problem while keeping a geometrical formation. The problem of formation control is divided into a leader-agent subproblem and a follower-agent subproblem, such that the leader agent follows a given path while each follower agent tracks a trajectory estimated using the leader's information. We exploit nonlinear model predictive control (NMPC) as a local control law because of its ability to take robot constraints and future information into account. For the leader agent, we propose to explicitly integrate the rate of progression of a virtual vehicle into the local cost function of the NMPC. This strategy can overcome stringent initial constraints in a path following problem. Our approach was validated by experiments using three omnidirectional mobile robots.
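A rough sketch of how a virtual vehicle's rate of progression can enter the leader's local NMPC cost (the weights, horizon, and timing law shown here are illustrative assumptions, not the paper's exact formulation):

\[
\min_{u(\cdot),\,v_s(\cdot)} \int_{t}^{t+T} \Big( \big\| p(\tau) - p_d\big(s(\tau)\big) \big\|_Q^2 + \big\| u(\tau) \big\|_R^2 + w\,\big(v_s(\tau) - v_d\big)^2 \Big)\, d\tau
\quad \text{s.t.}\quad \dot{x} = f(x,u),\ \ \dot{s} = v_s,\ \ u \in \mathcal{U},
\]

where $s$ is the path parameter of the virtual vehicle, $p_d(s)$ the corresponding reference point on the path, and $v_d$ a desired rate of progression. Because $s(t)$ evolves as a decision variable, the leader need not start exactly on the path, which is one way stringent initial constraints can be relaxed.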
Looking at the development of non-industrial robotics over the last decade, the growing impact of service and entertainment robots on daily life has turned them from pure science fiction into a serious scientific subject. Still, many questions remain open concerning how to solve everyday tasks such as laying the table, or even the “simpler” task of detecting objects in unstructured areas under varying lighting conditions. Hence, the need to evaluate and exchange different approaches and abilities of multiple robotic demonstrators under real-world conditions is a crucial aspect in the development of system architectures. In this paper, an architecture is described that provides strong support for the simple exchange and integration of new robot abilities.
In our research, we are concerned with sensing the environment using mobile robots. Mobile sensors enable the selection of optimal sampling locations in order to produce maximum information about the environment. The selection of sampling locations plays a key role, for example, in hospital environments, where humidity and temperature levels or carbon dioxide concentration may require regular monitoring. Moreover, the accuracy of a regression model depends on the sampling locations, which is significant, for example, in planetary exploration.
In geostatistics, optimal spatial sampling strategies search for sensor locations that produce estimates of minimal variance with a restricted number of sensors. Minimal variance is achieved, for example, by minimizing the conditional entropy of unobserved locations, where the environment is modelled using Gaussian processes.
In this paper, we propose an experimental environment for optimal spatial sampling using mobile sensors. Mobility reduces the reliability of the sampling locations due to odometry failures, which in turn reduce the likelihood of the model. We have resolved this problem by building an experimental environment in which a ceiling-camera vision system provides multi-robot localization in an area measuring approximately 240×160 cm. Preliminary experiments compare optimal spatial sampling for both stationary and nonstationary models using scalar measurements of ambient light and magnetic flux density.
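For background, under a Gaussian-process model the conditional entropy that such sampling strategies minimize has a closed form in the posterior variance. With kernel $k$, observed locations $A$, and noise variance $\sigma_n^2$,

\[
H\big(y_x \mid y_A\big) = \tfrac{1}{2}\,\log\!\big(2\pi e\,\sigma^2_{x\mid A}\big),
\qquad
\sigma^2_{x\mid A} = k(x,x) - k(x,A)\big(K_{AA} + \sigma_n^2 I\big)^{-1} k(A,x),
\]

so a greedy sampler repeatedly visits the yet-unobserved location with the largest posterior variance (equivalently, the largest conditional entropy). This is the standard formulation and is given here only as context for the abstract above.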
The NDT [3] was presented as an alternative to ICP-based scan matching algorithms. The problems of ICP-based algorithms that derive from the establishment of correspondences do not appear in the NDT approach. However, the NDT requires accurate and dense sets of readings to work with. Consequently, the NDT is not well suited to perform scan matching with sensors that produce sparse sets of noisy readings, such as ultrasonic range finders. In this paper, an extension of the NDT to deal with sparse sets of noisy readings, the sNDT, is presented. The processes necessary to overcome the sparsity of sonar readings, as well as those involved in the scan filtering, are described in the paper. Experimental results compare the new approach to other well-known scan matching methods.
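To recall the idea that the sNDT extends: NDT matching summarizes the reference scan by per-cell Gaussians with means $q_i$ and covariances $\Sigma_i$, and the pose parameters $p$ are chosen to maximize a score of roughly the form

\[
\mathrm{score}(p) = \sum_j \exp\!\Big(-\tfrac{1}{2}\,\big(T_p(x_j) - q_{c(j)}\big)^{\!\top}\,\Sigma_{c(j)}^{-1}\,\big(T_p(x_j) - q_{c(j)}\big)\Big),
\]

where $T_p(x_j)$ is the $j$-th reading of the current scan transformed by $p$ and $c(j)$ the cell it falls into. The modifications needed for sparse, noisy sonar readings are the subject of the paper itself.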
We present a domestic robot assistant system designed around the objective of enabling flexible and robust behavior in human-centered environments. Emphasis is put on the integration of different domains of abilities into a common framework in order to study techniques for robust and flexible human-robot interaction and general proactive decision making under uncertainty. The robotic system is designed to be fully autonomous and combines capabilities from the domains of navigation, object manipulation, and multi-modal human-robot interaction. It is able to perform some typical missions in real-world environments.
Edge cost measures for topological maps should be consistent with the actual difficulties experienced upon traversal of a map edge. This is a non-trivial problem for robots steered by a behaviour-based control subsystem, as little a priori information is available about the real trajectory emerging during motion. As a solution, the paper proposes to learn consistent estimates of the major cost factors for topological edges through a posteriori observation of situation assessments produced by the low-level control layer. These observations are then subjected to a spatial or temporal integration, yielding a multi-dimensional cost vector. Simulation results show that increasingly accurate cost estimates can indeed be derived using this strategy. This allows the use of the appealing abstract topological map representation for high-level navigation planning even in cluttered terrain, where consistent edge traversal cost estimates are indispensable for efficient path computation.
This paper focuses on motion control problems of an omnidirectional robot based on the Nonlinear Model Predictive Control (NMPC) method. The main contributions of this paper are not only to analyze and design NMPC controllers with guaranteed stability for nonlinear kinematic models, but also to show the feasibility of NMPC on a real, fast-moving omnidirectional robot.
Distributed heterogeneous robotic systems are often organized in component-based software architectures. The strong added value of these systems comes from their potential ability to dynamically self-configure the interactions of their components, in order to adapt to new tasks and unforeseen situations. However, no satisfactory solutions exist to the problem of automatic self-configuration. We propose a self-configuration mechanism where a special component generates, establishes and monitors the system configurations. We illustrate our approach on a distributed robotic system, and show an experiment in which the configuration component dynamically changes the configuration in response to a component failure.
Object recognition has traditionally been approached using primarily vision-based strategies. Recent research suggests, however, that intelligent agents use more than vision in order to comprehend and classify their environment. In this work we investigate an agent's ability to recognize objects on the basis of nonvisual proprioceptive information generated by its body. An experiment is presented in which an industrial robot collects and structures information about various objects in terms of its physical configuration. This information is then analyzed using a Bayesian model, which is used subsequently for classifying objects.
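A minimal sketch of the kind of Bayesian classification that can be applied to such proprioceptive data, where each sample is a joint configuration recorded while the manipulator contacts an object. The Gaussian naive Bayes model, the three-joint feature layout, and all numbers are assumptions for illustration, not the paper's model.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Assumed data: each row is a vector of joint angles recorded at contact with an object;
# the label is the object class the configuration was collected on.
rng = np.random.default_rng(2)
cube_contacts   = rng.normal([0.2, 0.8, -0.3], 0.05, size=(50, 3))
sphere_contacts = rng.normal([0.4, 0.5,  0.1], 0.05, size=(50, 3))
X = np.vstack([cube_contacts, sphere_contacts])
y = np.array(["cube"] * 50 + ["sphere"] * 50)

clf = GaussianNB().fit(X, y)                      # p(class | configuration) via Bayes' rule
print(clf.predict([[0.21, 0.79, -0.28]]))         # -> ['cube']
print(clf.predict_proba([[0.30, 0.65, -0.10]]))   # posterior probabilities over object classes
```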
This paper proposes a novel way of characterizing the local geometry of 3D points, using persistent feature histograms. The relationships between the neighbors of a point are analyzed and the resulting values are stored in a 16-bin histogram. The histograms are pose and point-cloud-density invariant and cope well with noisy datasets. We show that geometric primitives have unique signatures in this feature space, preserved even in the presence of additive noise. To extract a compact subset of points which characterizes a point cloud dataset, we perform an in-depth analysis of all point feature histograms using different distance metrics. Preliminary results show that point clouds can be roughly segmented based on the uniqueness of the feature histograms of geometric primitives. We validate our approach on datasets acquired from laser sensors in indoor (kitchen) environments.
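In the commonly used point-feature-histogram formulation, the analyzed relationships are angular relations expressed in a local frame built from each point pair and its surface normals; the sketch below computes them for a single pair (binning into the 16-bin histogram is omitted). This follows the standard formulation and may differ in detail from the paper's exact feature set; all inputs are toy values.

```python
import numpy as np

def pair_features(p_s, n_s, p_t, n_t):
    """Angular features of a source/target point pair with unit normals, in a local frame."""
    d = p_t - p_s
    dist = np.linalg.norm(d)
    u = n_s                                   # frame axis 1: source normal
    v = np.cross(d / dist, u)
    v /= np.linalg.norm(v)                    # frame axis 2
    w = np.cross(u, v)                        # frame axis 3
    alpha = np.dot(v, n_t)                    # relation between v and the target normal
    phi   = np.dot(u, d / dist)               # relation between u and the pair direction
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))
    return alpha, phi, theta, dist

p_s, n_s = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
n_t = np.array([0.0, 0.1, 0.99]); n_t /= np.linalg.norm(n_t)
p_t = np.array([0.1, 0.0, 0.02])
print(pair_features(p_s, n_s, p_t, n_t))
```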
We present an autonomous multi-robot system that can collect objects from indoor environments and load them into a dishwasher rack. We discuss each component of the system in detail and highlight the perception, navigation, and manipulation algorithms employed. We present results from several public demonstrations, including one in which the system was run for several hours and interacted with several hundred people.
The work presented in this paper is our first step toward the development of an exoskeleton for human gait support. The device we foresee should be suitable for assisting walking in paralyzed subjects and should be based on myoelectrical muscular signals (EMGs) as a communication channel between the human and the machine. This paper concentrates on the design of a biomechanical model of the human lower extremity. The system predicts the subject's intentions from the analysis of his or her electromyographical activity. Our model takes into account three main factors: first, the main muscles spanning the knee articulation; second, the gravity affecting the leg during its movement; and finally, the limits within which the leg swings. Furthermore, it is capable of estimating several knee parameters such as joint moment, angular acceleration, angular velocity, and angular position. In order to have visual feedback of the predicted movements, we have implemented a three-dimensional graphical simulation of a human leg which moves in response to the commands computed by the model.
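As a hedged illustration of the kind of relation such a biomechanical model captures (the paper's muscle model is more detailed than this), the knee can be treated as a gravity-loaded pendulum driven by an EMG-derived muscle moment and restricted to its anatomical range:

\[
I\,\ddot{\theta} = \tau_{\mathrm{muscle}}(\mathrm{EMG}) - m\,g\,l_c \sin\theta - b\,\dot{\theta},
\qquad \theta_{\min} \le \theta \le \theta_{\max},
\]

where $I$ is the inertia of the shank about the knee, $l_c$ the distance from the joint to its centre of mass, and $b$ a viscous damping coefficient. Integrating such an equation yields the angular acceleration, velocity, and position estimates mentioned above.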
This paper describes an autonomous learning method used with real robots in order to acquire ball passing skills in the RoboCup standard platform league. These skills involve precisely moving a ball to a certain objective area and stopping it there, and are essential to realizing a sophisticated cooperative strategy. Moreover, we propose a hybrid method using “thinning-out” and “surrogate functions” in order to reduce the number of actual trials regarded as unnecessary or unpromising. We verify the performance of our method on the minimization of several test functions, and then we address the learning problem of ball passing skills on real robots, which is also the first application of thinning-out in real environments.
We describe a multi-modal software system for executing navigation missions in an urban environment, focusing on the robust treatment of anomalous situations such as blocked roads, stalled vehicles and tight maneuvering. Various recovery mechanisms are described relative to the nominal mode of operation, and results are discussed from the system's deployment in the DARPA Urban Challenge.
This study focuses on efficient management of a large-scale logistics center and proposes a new method for assigning stocked products to racks in a warehouse so as to minimize the round-up time (the maximum makespan) needed to fetch ordered products. A self-organizing map (SOM), a type of artificial neural network, is employed to determine the allocation of stocked products to the racks. In applying the SOM, a number of order forms for rounding up the products are treated as a set of input signals, and the rack positions are regarded as the SOM topology. Numerical simulation shows that the proposed method allows us to determine an efficient assignment of the stocked products to the racks.
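To illustrate the mapping described above, the sketch below trains a tiny SOM whose nodes are rack positions and whose inputs are binary order vectors (which products appear together on an order form), so that frequently co-ordered products tend to end up on nearby racks. The grid size, training schedule, and final assignment rule are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_products, grid_w, grid_h = 12, 4, 3            # 12 product types, 4x3 grid of racks (assumed)
weights = rng.uniform(size=(grid_w * grid_h, n_products))
rack_xy = np.array([(i % grid_w, i // grid_w) for i in range(grid_w * grid_h)], dtype=float)

# Toy order forms: each order requests a small subset of products (binary input signals).
orders = (rng.uniform(size=(300, n_products)) < 0.2).astype(float)

for t in range(2000):                            # standard SOM training loop
    x = orders[rng.integers(len(orders))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # best-matching rack
    lr = 0.5 * (1.0 - t / 2000)                                 # decaying learning rate
    sigma = 2.0 * (1.0 - t / 2000) + 0.5                        # decaying neighbourhood radius
    h = np.exp(-np.linalg.norm(rack_xy - rack_xy[bmu], axis=1) ** 2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)                  # pull neighbouring racks toward the order

# Assign each product to the rack whose weight vector responds to it most strongly.
assignment = {product: int(np.argmax(weights[:, product])) for product in range(n_products)}
print(assignment)
```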
For an efficient seaport terminal, we propose a novel operational model, namely, a double container-handling operation among operating machines such as automated guided vehicles (AGVs), automated transfer cranes (ATCs), and quay container cranes (QCCs) in a seaport terminal system. In addition, a passing lane is provided in the container storage yard in order to facilitate the container-handling operation by the AGVs and ATCs. In this paper, the effect of the double container-handling operation and the passing lane on system utilization is examined. Finally, the effectiveness of the proposed operational model with a passing lane is discussed on the basis of the operating time and the resulting number of operating machines for a given demand, in consideration of a mega-container terminal.
Many applications in mobile robotics require the safe execution of a real-time motion to a goal location through completely unknown environments. In this context, the dynamic window approach (DWA) is a well-known solution which is safe by construction (assuming reliable sensory information) and has been shown to perform very efficiently in many experimental setups. Nevertheless, the approach is not free of shortcomings: examples where DWA fails to attain the goal configuration due to the local-minima problem are easily found. This limitation, however, has been overcome by many researchers following a common framework which essentially provides the strategy with a deliberative layer. Based on a model of the environment, the deliberative layer of these approaches computes the shortest collision-free path to the goal point, which is afterwards followed by DWA. In unknown environments, however, such a model is not initially available and has to be built progressively from the local information supplied by the robot sensors. Under these circumstances, the path obtained by the deliberative layer may repeatedly and radically change during navigation due to the model updates, which usually results in highly suboptimal final trajectories. This paper proposes an extension to DWA without the local-minima problem that is able to produce reasonably good paths in unknown scenarios at a minimal computational cost. The convergence of the proposed strategy is proven from a geometric point of view.
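For context, the nominal DWA cycle that such extensions build on samples admissible velocity pairs inside the dynamic window and scores them by heading, clearance, and speed. The weights, window resolution, and obstacle-distance function below are illustrative assumptions, not the paper's proposed extension.

```python
import math
import numpy as np

def dwa_step(pose, v, w, goal, clearance_fn,
             v_max=0.5, w_max=1.5, a_v=0.5, a_w=2.0, dt=0.2):
    """Pick the (v, w) pair inside the dynamic window that maximizes a weighted DWA score."""
    best, best_score = (0.0, 0.0), -math.inf
    # Dynamic window: velocities reachable from (v, w) within one control cycle.
    for cand_v in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7):
        for cand_w in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 11):
            # Forward-simulate one step of the unicycle model.
            x = pose[0] + cand_v * math.cos(pose[2]) * dt
            y = pose[1] + cand_v * math.sin(pose[2]) * dt
            th = pose[2] + cand_w * dt
            clearance = clearance_fn(x, y)            # distance to the nearest perceived obstacle
            if clearance < 0.1:                       # discard velocities that are not safe
                continue
            ang = math.atan2(goal[1] - y, goal[0] - x) - th
            heading = -abs(math.atan2(math.sin(ang), math.cos(ang)))   # prefer facing the goal
            score = 1.0 * heading + 0.5 * min(clearance, 2.0) + 0.2 * cand_v
            if score > best_score:
                best, best_score = (cand_v, cand_w), score
    return best

# Toy call: free space everywhere (clearance 2 m), goal ahead and slightly to the left.
print(dwa_step((0.0, 0.0, 0.0), 0.2, 0.0, (3.0, 1.0), lambda x, y: 2.0))
```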
This paper considers the problem of multi-robot patrolling along an open polyline, for example a fence, in the presence of an adversary trying to penetrate through the fence. In this case, the robots' task is to maximize the probability of detecting penetrations. Previous work concerning multi-robot patrol in adversarial environments considered closed polygons; that situation is simpler to evaluate due to its symmetric nature. In contrast, if the robots patrol back and forth along a fence, then the frequency of their visits along the line is inherently non-uniform, making it easier for an adversary to exploit. Moreover, previous work assumed perfect sensorial capabilities of the robots, in the sense that if the adversary is within the sensorial range of a robot it will surely be detected. In this paper we address these two challenges. We first suggest a polynomial-time algorithm for finding the probability of penetration detection at each point along the fence. We then show that, with a small adjustment, this algorithm can deal with the more realistic scenario in which the robots have imperfect sensorial capabilities. Last, we demonstrate how the probability of penetration detection can be used as a basis for finding optimal patrol algorithms for the robots in both strong and weak adversarial environments.
We propose a novel method for cooperative behavior of multiple robots. To control more than two robots, the proposed method utilizes multiple policies that are created for no more than two robots. By chaining those policies redundantly, we can approximate the optimal policy for all robots, which cannot be obtained directly due to the curse of dimensionality. In simulations and experiments in the RoboCup four-legged league domain, we verify that the proposed method can realize effective cooperation of robots.