
Ebook: Intelligent Autonomous Systems 9

The IAS-9 conference aims to address the main issues of concern within the IAS community. The conference covers both the applied and the theoretical aspects of intelligent autonomous systems.
Autonomy and adaptivity are key aspects of truly intelligent artificial systems, dating from the first IAS conference in 1989. New directions of research have recently emerged from the synergistic interaction of many fields, such as cognitive science, operations research, mathematics, robotics, mechanics, electronics, informatics, and economics, in both interdisciplinary and transdisciplinary ways. One key insight is that to realize both intelligence and autonomy, it is crucial to build real-world devices and abstract principles of design from them. The goal of IAS-9 is to lay out new scientific ideas and design principles for artificial systems able to survive in nature and in our society. The conference proceedings stimulate novel challenges as well as exciting research directions. A total of 146 scientific papers were submitted from 16 countries. All of the submitted papers were reviewed by the program committee, and 112 were accepted as full papers.
We have five invited guest speakers at IAS-9: Andrew Adamatzky from the University of the West of England addresses new directions in computation; Hod Lipson from Cornell University presents frontier work in evolutionary robotics; Tomomasa Sato from The University of Tokyo presents Japan's COE project on real-world applications; Masahiro Fujita from Sony addresses communication and service robotic systems; and Shigeyuki Hosoe from the RIKEN Bio-Mimetic Control Research Center presents human-movement analysis toward robot learning.
The conference takes place at the Kashiwa campus of the University of Tokyo, where frontier sciences are being created as “transdisciplinary” studies. A novel research center on artifacts, RACE, is also located on this campus, alongside three other interdisciplinary research centers. I hope all participants of IAS-9 will enjoy the atmosphere of the campus and the facilities of the research building, and experience the novel trend of “transdisciplinary” studies in Japan.
We sincerely appreciate the support of the Inoue Foundation for Science, the Kayamori Foundation of Informational Science Advancement, the Robotics Society of Japan, and the Research into Artifacts, Center for Engineering (RACE) at the University of Tokyo. We would also like to express our gratitude to all members of the program committee who contributed to the collection and selection of high-level papers, and to the local committee members who supported the management of IAS-9.
We look forward to seeing you at the conference site of IAS-9 in Tokyo.
Tamio Arai, Rolf Pfeifer, Tucker Balch and Hiroshi Yokoi
We give an overview of recent results on the implementation of computing, actions, and emotions in spatially extended reaction-diffusion chemical systems [1-3]. We pinpoint the essential ingredients of intelligence found in the spatio-temporal dynamics of nonlinear chemical systems, and outline future designs and prototypes of chemical intelligent ‘gooware’.
This talk will outline challenges and opportunities in translating evolutionary learning of autonomous robotics from simulation to reality. It covers evolution and adaptation of both morphology and control, hybrid co-evolution of reality and simulation, handling noise and uncertainty, and morphological adaptation in hardware.
This paper proposes a real-world informatics environment system realized by multiple human behavior support cockpits (HBSCs). Each HBSC integrates supportive environments such as an illumination environment (physiological support), an object-access environment (physical support), and a background-music environment (psychological support). These HBSCs are implemented through cooperation among the system's components: humanoid robots, audio/visual agents, and ubiquitous appliances. In the paper, the author describes the overall vision of the real-world informatics environment system and presents research results on its constituent elements, divided among the following research groups: humanoid robot (HR), VR system (VR), attentive environment (AE), neo-cybernetic system (NC), and human informatics (HI).
This paper presents two approaches toward the understanding and realization of constrained motion of the upper limbs. The first approach analyzes constrained human movements under the framework of optimal control. It is shown that a combination of optimality criteria based on muscle force change and hand contact force change can explain the formation of constrained reaching movements. The second approach comes from robotics and is illustrated by applying reinforcement learning to a robotic manipulation problem. Here, assuming the existence of holonomic constraints can accelerate learning by introducing function approximation techniques within model-based reinforcement learning. The idea of estimating (parameterizing) the constraints relates the two problems under the common perspective of “learning constrained motion.”
In recent years, Simultaneous Localization and Mapping (SLAM) techniques have dominated research in mobile robot self-localization, especially in the civilian service robotics domain. In this paper we argue that techniques derived from the industrial scenario, in particular beacon-based triangulation systems, should be taken into consideration even for civilian applications whenever it is important to provide accurate, a priori specifications of system behavior in a generic, yet untested environment. The paper provides an analytical expression of the sensitivity to errors for triangulation, depending on the system geometry.
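As a minimal illustration of beacon-based triangulation (not the paper's derivation; the beacon positions, noise-free bearings, and known-heading assumption below are ours), the sketch computes a least-squares position fix from absolute bearings to known beacons. Perturbing one bearing and re-running shows numerically how the error depends on the system geometry, which the paper characterizes analytically.

```python
import math

def triangulate(beacons, bearings):
    """Least-squares position fix from absolute bearings to known beacons.

    Each measured bearing phi constrains the robot to the line through
    beacon (bx, by) with direction phi:
        sin(phi) * x - cos(phi) * y = sin(phi) * bx - cos(phi) * by
    Stacking one equation per beacon gives an overdetermined linear system
    in (x, y), solved here via the 2x2 normal equations. Degenerate
    geometry (all bearings parallel) makes the determinant vanish.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (bx, by), phi in zip(beacons, bearings):
        s, c = math.sin(phi), math.cos(phi)
        r = s * bx - c * by          # right-hand side of the line equation
        a11 += s * s
        a12 += -s * c
        a22 += c * c
        b1 += s * r
        b2 += -c * r
    det = a11 * a22 - a12 * a12
    x = (a22 * b1 - a12 * b2) / det
    y = (a11 * b2 - a12 * b1) / det
    return x, y
```

With exact bearings the fix is exact; the interesting behavior appears when bearings carry noise and the beacons are nearly collinear with the robot.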
A force-field algorithm based on inclinometer readings is presented. It leads the robot to the goal while preventing it from overturning on inclined terrain. The method is supported by a navigation algorithm that helps the system overcome the well-known local-minima problem of force-field-based approaches. When the robot is in danger of falling into a local minimum, a “ghost goal” appears while the true goal temporarily disappears, drawing the robot out of the dangerous configuration. Once the robot has escaped, the system makes the true goal reappear and the robot continues its journey toward it. By appropriately selecting the ghost-goal position, the local-minima problem can be solved effectively. Simulation results show that the Ghost-Goal algorithm is very effective in complex rugged terrain for a robot equipped only with an inclinometer.
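The goal-switching idea can be sketched in a toy planar simulation (our illustration, not the authors' algorithm: the gains, thresholds, single point obstacle, and fixed vertical ghost offset are all made up). When the attractive and repulsive forces nearly cancel, the true goal is temporarily replaced by an offset ghost goal until the robot has escaped:

```python
import math

def unit(vx, vy):
    """Normalize a 2-D vector (zero vector maps to zero)."""
    n = math.hypot(vx, vy)
    return (vx / n, vy / n) if n > 1e-12 else (0.0, 0.0)

def ghost_goal_navigate(start, goal, obstacle, steps=3000):
    x, y = start
    gx, gy = goal
    target = goal
    path = [(x, y)]
    for _ in range(steps):
        if math.hypot(gx - x, gy - y) < 0.3:
            break                                   # reached the true goal
        tx, ty = target
        ax, ay = unit(tx - x, ty - y)               # unit attraction to target
        d = math.hypot(x - obstacle[0], y - obstacle[1])
        if d < 3.0:                                 # repulsion inside influence radius
            rx, ry = unit(x - obstacle[0], y - obstacle[1])
            ax += rx * 2.0 / (d * d)
            ay += ry * 2.0 / (d * d)
        if target == goal and math.hypot(ax, ay) < 0.3:
            target = (x, y + 4.0)                   # near-zero net force: ghost goal appears
        elif target != goal and math.hypot(target[0] - x, target[1] - y) < 1.0:
            target = goal                           # escaped: true goal reappears
        sx, sy = unit(ax, ay)
        x += 0.1 * sx
        y += 0.1 * sy
        path.append((x, y))
    return path
```

Here the robot heading straight at an obstacle between it and the goal stalls where the forces cancel, detours via the ghost goal, and then resumes its journey toward the true goal.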
In this paper, we propose a novel autonomous robot vision system that can recognize an environment by itself. The developed system has self-motivation to search for interesting regions using human-like selective attention and novelty-detection models, and memorizes relative location information for the selected regions. The selective attention model consists of a bottom-up saliency map and a low-level top-down attention model. The novelty detection model generates a scene-novelty measure using topology information and an energy signature from the selective attention model. Experimental results show that the developed system successfully identifies an environment, as well as changes in the environment, in natural scenes.
We present a multi-resolution path planner and replanner capable of efficiently generating paths across very large environments. Our approach extends recent work on interpolation-based planning to produce direct paths through non-uniform resolution grids. The resulting algorithm produces plans with costs almost exactly the same as those generated by the most effective uniform resolution grid-based approaches, while requiring only a fraction of their computation time and memory. In this paper we describe the algorithm and report results from a number of experiments involving both simulated and real field data.
In this paper, we propose a new path and viewpoint planning method for a mobile robot with multiple observation strategies. When a mobile robot works in structured environments such as indoor spaces, it is effective and reasonable to place landmarks in the environment for vision-based navigation. In that case, it is important for the robot to decide its motion automatically. Therefore, we propose a motion planning method that balances the efficiency of the task, the danger of colliding with obstacles, and the accuracy and ease of observation according to the situation and the performance of the robot.
This paper describes a motion planning method for a mobile robot that considers the path ambiguity of moving obstacles. Each moving obstacle has a set of paths and their probabilities. The robot selects the motion that minimizes the expected time to reach its goal, by recursively predicting future states of each obstacle and then selecting the best motion for them. To calculate the motion for terminal nodes of the search tree, we use a randomized motion planner, which is an improved version of a previous method. Simulation results show the effectiveness of the proposed method.
Harmonic potentials have proved to be a powerful technique for path planning in a known environment. They have two important properties. First, given an initial point and an objective in a connected domain, there exists a unique path between those points: the maximum-gradient path of the harmonic function that begins at the initial point and ends at the goal point. Second, a harmonic function cannot have local minima in the interior of the domain (the objective point is considered part of the border). Our approach has the following advantages over previous methods: 1) It uses the Finite Element Method to solve the PDE problem, which permits complicated shapes of obstacles and walls. 2) It uses mixed boundary conditions, so the trajectories are smooth, the potential slope is not too small, and the trajectories avoid the corners of walls and obstacles. 3) It can avoid moving obstacles in real time, because it works online and at high speed. 4) It generalizes to 3D or more dimensions and can be used to move robot manipulators.
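The two properties above can be demonstrated on a coarse grid. The sketch below is our illustration under simplified assumptions (finite-difference relaxation with pure Dirichlet conditions, whereas the paper uses the Finite Element Method with mixed boundary conditions): Laplace's equation is relaxed with the goal held at 0 and walls/obstacles at 1, then the maximum-gradient (steepest-descent) path is followed. Because the relaxed function has no interior local minima, the descent cannot get stuck.

```python
def harmonic_path(grid, start, goal, sweeps=3000):
    """grid: 0 = free, 1 = wall/obstacle. Cells are (row, col) pairs.

    Dirichlet conditions: walls and obstacles held at 1.0 (high potential),
    the goal cell held at 0.0. Gauss-Seidel relaxation of Laplace's equation,
    then steepest descent from start to goal over 4-neighborhoods.
    """
    rows, cols = len(grid), len(grid[0])
    u = [[1.0] * cols for _ in range(rows)]
    gy, gx = goal
    u[gy][gx] = 0.0
    for _ in range(sweeps):                       # relax interior free cells
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                if grid[y][x] == 0 and (y, x) != goal:
                    u[y][x] = 0.25 * (u[y-1][x] + u[y+1][x] + u[y][x-1] + u[y][x+1])
    # Each free cell equals the average of its neighbors, so some neighbor is
    # strictly smaller: descent decreases monotonically and reaches the goal.
    path, cur = [start], start
    while cur != goal and len(path) < rows * cols:
        y, x = cur
        cur = min(((y-1, x), (y+1, x), (y, x-1), (y, x+1)),
                  key=lambda p: u[p[0]][p[1]])
        path.append(cur)
    return path
```

Relaxation over the whole grid is exactly what makes the method expensive and what FEM solvers, as in the paper, handle far more flexibly for complicated obstacle shapes.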
In this paper we address the problem of autonomous navigation from both the neuroscience and the robotics point of view. A new topological mapping system is presented. It combines local features (i.e., visual and distance cues) in a unique structure – the “fingerprint of a place” – that results in a consistent, compact, and distinctive representation. Overall, the results suggest that a process of fingerprint matching can efficiently determine the orientation, the location within the environment, and the construction of the map, and may play a role in the emergence of spatial representations in the hippocampus.
We present an incremental algorithm for constructing and reconstructing Generalized Voronoi Diagrams (GVDs) on grids. Our algorithm, Dynamic Brushfire, uses techniques from the path planning community to efficiently update GVDs when the underlying environment changes or when new information concerning the environment is received. Dynamic Brushfire is an order of magnitude more efficient than current approaches. In this paper we present the algorithm, compare it to current approaches on several experimental domains involving both simulated and real data, and demonstrate its usefulness for multirobot path planning.
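For reference, the classic static brushfire transform that Dynamic Brushfire incrementalizes can be sketched as a multi-source BFS from all obstacle cells (our simplification: 4-connectivity, unit edge cost, and none of the paper's incremental update logic). GVD cells are then those where wavefronts originating from distinct obstacles meet.

```python
from collections import deque

def brushfire(grid):
    """Grid distance transform: BFS wavefronts expanding simultaneously
    from every obstacle cell (grid value 1), 4-connected, unit cost.
    Returns a grid of integer distances to the nearest obstacle
    (cells stay None if the grid contains no obstacle at all)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for y in range(rows):
        for x in range(cols):
            if grid[y][x] == 1:          # obstacle cells seed the wavefront
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < rows and 0 <= nx < cols and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

Recomputing this transform from scratch after every map change is what the paper's incremental update avoids.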
Information processing within autonomous robots should follow a biomimetic approach. In contrast to traditional approaches that make intensive use of accurate measurements, numerical models and control theory, the proposed biomimetic approach favors the concepts of perception, situation, skill and behavior – concepts that are used to describe human and animal behavior as well. Sensing should primarily be based on those senses that have proved their effectiveness in nature, such as vision, tactile sensing and hearing. Furthermore, human-robot communication should mimic dialogues between humans. It should be situation-dependent, multimodal and primarily based on spoken natural language and gestures. Applying these biomimetic concepts to the design of our robots led to adaptable, dependable and human-friendly behavior, which was proved in several short- and long-term experiments.
This paper presents a method for tracking multiple moving objects with an in-vehicle 2D laser range sensor (LRS) in a cluttered environment, where ambiguous or false measurements appear in the laser image due to clutter, windows, etc. Moving objects are detected from the laser image via a heuristic rule and an occupancy-grid-based method. The moving objects are then tracked with a Kalman filter and the assignment algorithm. A rule-based track management system is embedded in the tracking system in order to improve tracking performance. Experimental results of tracking two people validate the proposed method.
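A minimal sketch of the Kalman-filter building block such a tracker rests on (our simplification: one coordinate with a constant-velocity model and hand-picked noise parameters; the paper's assignment algorithm and rule-based track management are omitted):

```python
class CVKalman1D:
    """Constant-velocity Kalman filter for one coordinate: state (p, v).

    q is the process-noise strength, r the measurement-noise variance;
    both values here are illustrative, not taken from the paper."""
    def __init__(self, p0, q=0.05, r=0.04):
        self.p, self.v = p0, 0.0
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r

    def predict(self, dt):
        """Propagate state and covariance through F = [[1, dt], [0, 1]]."""
        P11, P12, P22 = self.P[0][0], self.P[0][1], self.P[1][1]
        self.p += self.v * dt
        n11 = P11 + 2 * dt * P12 + dt * dt * P22 + self.q
        n12 = P12 + dt * P22
        n22 = P22 + self.q
        self.P = [[n11, n12], [n12, n22]]

    def update(self, z):
        """Fuse a position measurement z (H = [1, 0])."""
        S = self.P[0][0] + self.r           # innovation covariance
        k1 = self.P[0][0] / S               # Kalman gain
        k2 = self.P[0][1] / S
        resid = z - self.p
        self.p += k1 * resid
        self.v += k2 * resid
        P11, P12, P22 = self.P[0][0], self.P[0][1], self.P[1][1]
        self.P = [[(1 - k1) * P11, (1 - k1) * P12],
                  [P12 - k2 * P11, P22 - k2 * P12]]
```

In a full tracker, one such filter runs per object per axis, the predicted positions gate candidate laser segments, and the assignment algorithm resolves which measurement updates which track.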
This paper presents an approach for building a dynamic environment model based on an occupancy grid using a SLAM technique, while detecting and tracking mobile objects using an Auxiliary Multiple-Model particle filter. The mobile objects, such as people, are distinguished from the fixed parts and not included in the model, and their motion is tracked.
We describe a set of simulations to evolve omnidirectional active vision, an artificial retina scanning over images taken via an omnidirectional camera, being applied to a car driving task. While the retina can immediately access features in any direction, it is asked to select behaviorally-relevant features so as to drive the car on the road. Neural controllers which direct both the retinal movement and the system behavior, i.e., the speed and the steering angle of the car, are tested in three different circuits and developed through artificial evolution. We show that the evolved retina moving over the omnidirectional image successfully detects the task-relevant visual features so as to drive the car on the road. Behavioral analysis illustrates its effective strategy in algorithmic, computational, and memory resources.
In this paper, we propose a method to estimate a robot's position and orientation in a room using a map of color histograms. The map of color histograms is a distribution map of colors measured by a mobile robot in multiple directions at various points. The map must be strongly compressed because the robot's memory capacity is limited. Thus, the histograms are transformed by the discrete cosine transform, their high-frequency components are discarded, and the residual low-frequency components are stored in the map. Robot localization is performed by matching the histogram of a captured image against histograms reconstructed from the map, by means of histogram intersection. We also introduce a series of estimations with the aid of odometry data, which improves accuracy.
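The compression-and-matching pipeline can be sketched in a few lines (our illustration with a toy 8-bin histogram; the bin counts and the number of retained coefficients are arbitrary, and real systems would use a library DCT):

```python
import math

def dct(h):
    """DCT-II of a 1-D histogram (unnormalized)."""
    n = len(h)
    return [sum(h[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def idct(c):
    """Inverse of dct() above (scaled DCT-III)."""
    n = len(c)
    return [(c[0] / 2 + sum(c[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                            for k in range(1, n))) * 2 / n
            for i in range(n)]

def intersection(h1, h2):
    """Histogram intersection similarity, normalized by h2's total mass."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / sum(h2)
```

Storing only the low-frequency DCT coefficients (zeroing the rest) halves or quarters the map size, while the reconstructed histogram still scores high under histogram intersection against the original.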
This paper deals with map-based self-localization in a dynamic environment. In order to detect localization features, laser rangefinders traditionally scan a plane: we hypothesize the existence of a “low-frequency cross-section” of the 3D environment (informally, above people's heads), where even highly dynamic environments become more “static” and “regular”. We show that, by carefully choosing the laser scanning plane, problems related to moving objects, occluded features, etc. are simplified. Experimental results showing hours of continuous operation in a real-world, crowded scenario, with an accuracy of about 2.5 cm at a speed of 0.6 m/s, demonstrate the feasibility of the approach.
In appearance-based localization, the robot environment is implicitly represented as a database of features derived from a set of images collected at known positions in a training phase. For localization, the features of the image observed by the robot are compared with the features stored in the database. In this paper we propose applying integral invariants to the robot localization problem on a local basis. First, our approach detects a set of interest points in the image using a Difference-of-Gaussians (DoG) interest point detector. Then, it computes a set of local features based on the integral invariants around each interest point. These features are invariant to similarity transformations (translation, rotation, and scale). Our approach achieves high localization rates and outperforms previous work.