Ebook: Workshop Proceedings of the 6th International Conference on Intelligent Environments
This book presents the combined proceedings of three workshops which make up part of the 6th International Conference on Intelligent Environments. The remarkable advances in computer science throughout the last few decades are already making an impact on our daily lives, enabling the provision of intelligent environments: digital environments which proactively support people in their daily lives. This active area of research is attracting an increasing number of professionals worldwide. The first workshop deals with human-centric interfaces for ambient intelligence applications (HCIAmI'10), and provides a forum for the exchange of recent results in modelling, design and computing methods for human-centric interfaces in ambient intelligence applications. The second addresses artificial intelligence techniques for ambient intelligence (AITAmI'10), and aims to stimulate the development of human-like effectiveness within the artificial systems that provide support to humans. The third, the Creative Science 2010 workshop (CS'10), is the first in a series of workshops exploring science fiction as a starting point for research into new technologies and consumer products. Exploring the latest developments in key areas for intelligent environments, this volume is a compilation of the latest research, focused on achieving the deployment of intelligent environments in the real world and influencing the way all of us will live in the future.
We are witnessing a historic technological revolution as computing reaches maturity, becoming immersed in our daily life to an extent that would have been considered science fiction some decades ago.
Advances in the engineering of sensing and acting capabilities distributed in a wide range of specialized devices are at last providing an opportunity for the fundamental advances achieved by computer science in the past few decades to make an impact on our daily lives.
This technical confluence is matched by a unique historical context where users are better informed (and more aware of the benefits that technology can provide) and the production of more complex systems is becoming more affordable. Sensors and actuators deployed in an environment (in this context, any physical space such as a house, an office, a classroom, a car, a street, etc.) provide a link between an automated decision-making system and that technologically enriched space. This computing-empowered environment enables the provision of an intelligent environment, i.e., “a digital environment that proactively, but sensibly, supports people in their daily lives”. This very active area of research is attracting an increasing number of professionals (both in academia and industry) worldwide.
The prestigious 6th International Conference on Intelligent Environments (IE'10) focuses on the development of advanced Intelligent Environments and stimulates the discussion of several specific topics crucial to the future of this field. As part of the effort to stimulate development in critically important areas, four workshops were supported as part of IE'10. This volume presents the combined proceedings of those workshops:
The 1st International Workshop on Human-Centric Interfaces for Ambient Intelligence (HCIAmI'10). This workshop serves as a forum for the exchange of recent results in modelling, design, and computing methods for human-centric interfaces in Ambient Intelligence (AmI) applications. The papers presented at the workshop give a perspective on the technologies and methods which offer unobtrusive, intuitive, and adaptive interfaces. These technologies have the potential to make Ambient Intelligence systems easier to use for more people. The goal is to replace the classical paradigm of Human-Computer Interaction (HCI) in which users have to adapt themselves to computers by learning how to use them, with a new one (where the same acronym refers to Human-Centric Interfaces) in which computers adapt to users and learn how to interact with them in the most natural way. This is consistent with an increasing interest in human-centric computing, which aims to offer rich user experiences in comfort, safety and well-being applications. Topics covered range from ubiquitous interfaces and multimodal dialog systems for different applications to the implications of connecting users with impairments in social networks.
The Workshop on Artificial Intelligence Techniques for Ambient Intelligence (AITAmI'10) aims to stimulate the development of human-like effectiveness within the artificial systems that provide support to humans. The event is not focused on a specific application area, although it welcomes reports on applications, given their value in informing the community about solutions for specific cases and in extrapolating strategies across areas. The overall emphasis is on providing a forum in which to analyze the potential of Artificial Intelligence to make smart environments smarter. Learning, reasoning, adaptation, user preferences and needs discovery, sensible interaction with users, and many other topics form the regular agenda of this event. The content of this section includes the abstract of one keynote speaker, and papers accepted for oral and poster presentations. All these contributions are from recognized professionals in the area who report on their latest reflections and achievements in the improvement of the decision-making capabilities of intelligent environments. This edition includes papers from a special session with an emphasis on healthcare applications from the 2nd International Workshop on Intelligent Environments Supporting Healthcare and Wellbeing (WISHWell'10).
The Creative Science 2010 Workshop (CS'10) is the first in a series of workshops exploring the use of science fiction to motivate and direct research into new technologies and consumer products. In particular, CS'10 applies a methodology we call Science-Fiction Prototyping (SF Prototyping) which uses stories as prototypes to explore a wide variety of futures. In these proceedings we present two invited contributions from Brian David Johnson, who coined the term SF Prototyping and defined the methodology. The first contribution, SF Prototyping, describes the history of SF Prototyping and introduces the methodology. The second contribution is an SF Prototype called ‘Brain Machines’, which illustrates the principles involved. The workshop proceedings then present a number of SF Prototypes drawn from the “Intelligent Environments” research community. Interestingly, in this first workshop many of the stories fall into what might be called explorations of mixed reality technology. In “Tales from a Pod” virtual reality is applied to the provision of intelligent personalised teaching environments, whereas the second, “Mdi”, is a story of an extraordinary portable gadget that produces holograms and can recognise gestures. In the third story, “We All Wear Dark Glasses Now”, a rather darker application of augmented reality is presented, in which high-tech glasses mislead the wearer into thinking that his world is much nicer than it really is. The fourth story, “Voices From The Interface”, is a voyage into an imaginative world where brain-computer interfaces become almost indistinguishable from paranormal phenomena. Paranormal phenomena sometimes feature in folklore, and the fifth story, “Were-Tigers of Belum”, elegantly mixes a mystical tale with the latest high-tech sensor networks to create an engaging story that bridges the gulf between past and present. The sixth paper, “Knowing Yourself”, explores more spiritual aspects by taking an “out of the box” journey into the metaphysical, in which physical objects, events, words, sounds or thoughts can be seen as bundles of energy, a view which could have significant implications for medical technology. Finally, the seventh paper takes us full circle, back to the reality of ourselves, and examines some of our most basic understanding of being human, consciousness and free will, by means of a discussion on the design of future reception robots. We hope you will agree that this first Creative Science workshop has produced some stimulating ideas with the potential to challenge and push the boundaries of science. If you have enjoyed reading this first set of science fiction prototypes, why not write one yourself and join us at our next Creative Science event? (See creative-science.org for details.)
This volume offers a glance at the latest developments in key areas of Intelligent Environments. It compiles the latest research by active researchers in the field, working to extend the boundaries of science and focused on achieving the deployment of intelligent environments in the real world. The efforts of these professionals will influence the way we live in tomorrow's world. We hope that, as a reader, you will enjoy the content of this volume as much as those who attended these workshops enjoyed the live presentation of the papers and the thought-provoking discussions which emanated from them.
The co-editors of this volume would like to thank all those who facilitated the realization of each one of these events: the remaining co-chairs of the workshops, the members of their Program Committees, who facilitated the review of papers, the external reviewers who also contributed to that task, and the conference organizers who provided a supportive environment for the realization of these events.
July 2010
Ramón López-Cózar and Hamid Aghajan HCIAmI'10
Juan Carlos Augusto, Diane Cook and John O'Donoghue AITAmI'10/WishWELL'10
Victor Callaghan, Simon Egerton and Brian David Johnson CS'10
We have been developing a proactive spoken dialog system on a plasma display panel (PDP). “Proactive dialog system” refers to a system with the added functionality of actively presenting acceptable information with acceptable timing. The proposed system is based on spoken dialog. It can detect non-verbal information, such as changes in gaze and face direction and head gestures of the user during dialog, and recommend suitable information. We implemented a dialog scenario to present sightseeing information on the system. Experiments with 100 subjects were conducted to evaluate the system's effectiveness. The effectiveness of the system becomes particularly clear when the dialog contains recommendations.
This paper presents our current work in the design of a multimodal dialogue system for an Ambient Intelligence application in the home environment. The aim of the system is to facilitate users' interaction with home appliances, either by voice or using a classical GUI interface. The paper describes the most relevant aspects of the system's operation, which is based on the use of an abstract data structure called ‘action’. It also explains how the system generates responses according to user requests, for example, switching off a light in a room.
Today's Web 2.0 is a place where people express themselves, interact, share their lives, and socialize. Thousands of elderly people join various social networking sites or use the Net to keep in touch with their families. But, as we show in this paper, today's social software does not target their needs sufficiently. While there are usable solutions, usability itself does not imply acceptability. Acceptable interfaces should reflect users' habits, follow understandable metaphors, and, above all, target their deep needs as precisely as possible. Further, the elderly cannot be treated as a homogeneous group characterized by their impairments, but rather as a set of individual human beings with specific wishes, desires, habits, and, of course, some disabilities. In this paper we formulate the qualities which social software should meet in order to be widely perceived as beneficial and, as a result, accepted by the elderly and by people with serious impairments.
In this paper, we present the results of a comparison between three corpora acquired by means of different techniques. The first corpus was acquired with real users. A statistical user simulation technique has been developed for the acquisition of the second corpus. In this technique, the next user answer is selected by means of a classification process that takes into account the previous user turns, the last system answer and the objective of the dialog. Finally, a dialog simulation technique has been developed for the acquisition of the third corpus. This technique uses a random selection of the user and system turns, defining stop conditions for automatically deciding whether the simulated dialog is successful or not. We use several evaluation measures proposed in previous research to compare our three acquired corpora, and then discuss the similarities and differences with regard to these measures.
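Purely as an illustration of the third acquisition technique, the sketch below shows a random dialog-simulation loop with stop conditions; the turn inventories, goal test and length limit are assumptions for the example, not the authors' implementation.

```python
# Hedged sketch: random selection of user/system turns with two stop conditions,
# (i) the dialog objective is reached, or (ii) the dialog grows too long.
import random

def simulate_dialog(user_turns, system_turns, goal_reached, max_exchanges=15):
    dialog = []
    for _ in range(max_exchanges):
        dialog.append(("system", random.choice(system_turns)))
        dialog.append(("user", random.choice(user_turns)))
        if goal_reached(dialog):      # stop condition: objective fulfilled
            return dialog, True       # simulated dialog counted as successful
    return dialog, False              # stop condition: too long -> unsuccessful

# Example usage with toy turn sets and a trivial goal test (illustrative only).
dialog, ok = simulate_dialog(
    user_turns=["give_date", "give_city", "confirm"],
    system_turns=["ask_date", "ask_city", "offer_ticket"],
    goal_reached=lambda d: ("user", "confirm") in d)
```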
Smart phones are among the most popular devices nowadays. The enrichment of their technical capabilities allows them to carry out new operations beyond traditional telephony. This work presents a system that automatically generates user interfaces for Ambient Intelligence environments. This way, smart phones act as “ubiquitous remote controllers” for the elements of the environment. The paper proposes some ideas about the usability and adequacy of these interfaces.
This paper describes a multimodal dialogue system under development in our lab which has been designed to assist students and professors in some of their daily activities within the Faculty of a University. The system combines features of multimodal dialogue systems and Ambient Intelligence. The paper focuses on the system's architecture, which is based on XHTML+Voice. It explains the speech- and GUI-based interfaces, and discusses the connection of both. Then it explains the system usage, shows a sample interaction and discusses the identification and localization of users within the Faculty. Finally, it presents the conclusions and outlines possibilities for future work.
In this paper we propose a novel cluster-and-label semi-supervised algorithm for utterance classification. The approach assumes that the underlying class distribution is roughly captured through fully unsupervised clustering. Then, a minimal set of labeled examples is used to automatically label the extracted clusters, so that the initial label set is “augmented” to the whole clustered data. The optimum cluster labeling is achieved by means of the Hungarian algorithm, traditionally used to solve the optimal assignment problem. Finally, the augmented labeled set is used to train a Naïve Bayes classifier. This semi-supervised approach has been compared to a fully supervised version, in which the initial labeled sets are directly used to train the classifier.
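To make the cluster-and-label procedure concrete, the sketch below clusters all utterance vectors without supervision, maps clusters to classes with the Hungarian algorithm using the small labeled subset, and trains a Naïve Bayes classifier on the augmented labels. The libraries, the numeric feature vectors and the one-cluster-per-class simplification are our assumptions, not the paper's exact setup.

```python
# Hedged sketch of cluster-and-label semi-supervised classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def cluster_and_label(X_all, X_lab, y_lab, n_classes):
    # 1) Fully unsupervised clustering of all utterance vectors.
    km = KMeans(n_clusters=n_classes, n_init=10).fit(X_all)

    # 2) Agreement matrix: agreement[c, k] = labeled examples of class k in cluster c
    #    (y_lab is assumed to hold integer class ids 0..n_classes-1).
    agreement = np.zeros((n_classes, n_classes))
    for c, k in zip(km.predict(X_lab), y_lab):
        agreement[c, k] += 1

    # 3) Hungarian algorithm: cluster -> class mapping that maximizes agreement.
    rows, cols = linear_sum_assignment(-agreement)
    cluster_to_class = dict(zip(rows, cols))

    # 4) "Augment" the labels to the whole clustered data and train Naive Bayes.
    y_aug = np.array([cluster_to_class[c] for c in km.labels_])
    return GaussianNB().fit(X_all, y_aug)
```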
This paper discusses our plans to improve an already implemented multimodal dialogue system for the control of home appliances, called Mayordomo. We believe that, by means of the planned improvements, the dialogue system will become more user-adaptive and thus the interaction will be more user-friendly. The adaptation will be implemented considering information not taken into account in the initial version of the system, namely user localization and identification. The paper discusses our current study of methods to obtain this kind of information, as well as our plans to employ user profiles.
Ambient Intelligence (AmI) permits accessing information anytime and anywhere, which is why traditional human-computer interfaces such as the mouse and keyboard are becoming inadequate. Speech has become the best alternative for this type of system, as it leaves the users' eyes and hands free so that they can carry on with daily activities such as driving. In this paper we present a multimodal tutoring platform called ORIENTA, which acts as an assistant that provides academic information, schedules tutoring appointments and assists tutors by managing student profiles and making pedagogic suggestions in academic AmI environments.
We explore a coherent combination of two jointly implemented logic programming based systems, namely those of Evolution Prospection and Intention Recognition, to address a number of issues pertinent to Ambient Intelligence (AmI), particularly in the home environment context. The Evolution Prospection system designs and implements several kinds of well-studied preferences and useful environment-triggering constructs for decision making. These enable a convenient declarative encoding of users' preferences and needs, as well as reactive constructs such as goal-triggering rules. The other system performs intention recognition by means of Causal Bayes Nets and a planner. This approach to intention recognition is appropriate for tackling several AmI issues, such as security and emergency. We also present a novel method for collective intention recognition to tackle the case where multiple users are of concern. We exemplify our methods with examples from the elder care domain, as it is a typical concern in the home environment context.
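As a much-simplified stand-in for the intention-recognition component, the sketch below performs a flat Bayesian update over candidate intentions given observed actions; the paper uses Causal Bayes Nets together with a planner, and the intentions, actions and probabilities here are purely illustrative.

```python
# Hedged sketch: posterior over intentions given observed actions,
# P(i | actions) proportional to P(i) * prod_a P(a | i).
def intention_posterior(prior, likelihood, observed_actions):
    """prior: {intention: P(i)}; likelihood: {intention: {action: P(a|i)}}."""
    post = {}
    for intent, p in prior.items():
        for action in observed_actions:
            p *= likelihood[intent].get(action, 1e-6)  # unseen action -> tiny prob.
        post[intent] = p
    total = sum(post.values()) or 1.0
    return {intent: p / total for intent, p in post.items()}

# Elder-care flavoured example (illustrative numbers only).
prior = {"prepare_drink": 0.5, "wandering": 0.3, "emergency": 0.2}
likelihood = {
    "prepare_drink": {"open_fridge": 0.8, "turn_on_kettle": 0.7},
    "wandering":     {"open_fridge": 0.2, "turn_on_kettle": 0.05},
    "emergency":     {"open_fridge": 0.1, "turn_on_kettle": 0.05},
}
print(intention_posterior(prior, likelihood, ["open_fridge", "turn_on_kettle"]))
```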
In smart spaces such as smart offices and workplaces, users are surrounded by hundreds of networked devices and services which often vanish into the background. Therefore, users are often unaware of the possible tasks which they can achieve within a given physical space. Moreover, users do not want to dig through a long list of individual services and devices for the functionalities required to accomplish the task at hand. To enable doing more with less in everyday life, we envision a future of intelligent computing in which users do not directly handle the functionalities provided by individual services and devices but rather high-level tasks, e.g. ‘watch a movie’ or ‘borrow a book’. In this paper, we present our vision of a task computing framework which deals with the development, deployment, discovery, recommendation, and execution of high-level tasks within smart spaces. We outline a research agenda aiming to realise the proposed framework.
This paper describes work that takes the first steps towards demonstrating the application of the Science-Fiction Prototyping methodology to developing futuristic products. Science-Fiction Prototyping is based on an iterative interaction between science fiction and fact. We begin the paper by explaining the science-fiction prototyping process. We then identify a futuristic product to act as an exemplar of this process (free-willed domestic robots) before describing virtual-reality systems and how they can provide a suitable visualisation environment for conceptualizing products. Next we identify a promising approach to emulating intelligent control and free will, namely quantum computing. We then provide an introduction to quantum computing and argue the need for a special quantum development environment before presenting an architecture for the quantum development tools and the robot. Finally we present an evaluation methodology before summarising our work. We are in the process of implementing this system and we hope that by the time the workshop occurs, we will be able to report some initial results.
In this paper we detail the design intricacies and implementation details of a novel Selective Activity Monitoring (SAM) system targeted at homes for the elderly. The system is designed to support people who wish to live alone but are at risk because of old age, ill health or disability. The system works on the principle of using sensor units (SUs) to monitor appliances throughout a house and detect when certain electrical appliances are turned on. Rules are defined for appliances to be turned on within certain time intervals or turned off after a cut-off time. The rules are flexible and can be user-defined based on the daily activities of a person. Several levels of alarm conditions have been created based on combinations of rules that are violated. Any number of sensor units may be installed in a house, one for each electrical appliance to be monitored. A central controller unit (CCU) queries the sensor units and logs the data to a PC at a pre-defined rate. Communication between the SUs and the controller uses radio-frequency wireless media. The rule inference engine runs on the PC and, whenever the situation warrants, sends a text message to the caregivers or relatives. Since no vision sensors (camera or infra-red) are used, the system is non-invasive, respects privacy and has found wide acceptance. The system is completely customizable, allowing the user to select which appliances to monitor and define exactly what is classified as unusual behavior.
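A minimal sketch of the rule-inference idea follows; the rule structure (expected usage interval plus cut-off time per appliance) matches the description above, but the concrete classes, thresholds and alarm-level mapping are illustrative assumptions rather than the system's actual definitions.

```python
# Hedged sketch of SAM-style rule checking over logged 'turned on' events.
from dataclasses import dataclass
from datetime import time

@dataclass
class Rule:
    appliance: str
    expected_on: tuple   # (start, end): usage expected inside this window
    cutoff: time         # appliance should not be used after this time

def violated_rules(rules, on_events):
    """on_events: {appliance: [time, ...]} of observed switch-on times."""
    violations = []
    for r in rules:
        times = on_events.get(r.appliance, [])
        start, end = r.expected_on
        if not any(start <= t <= end for t in times):
            violations.append((r.appliance, "not used in expected interval"))
        if any(t > r.cutoff for t in times):
            violations.append((r.appliance, "used after cut-off time"))
    return violations

def alarm_level(violations):
    # Assumed mapping: more violated rules -> higher alarm level (0..2),
    # where the highest level would trigger the text message to caregivers.
    return min(len(violations), 2)

# Example: kettle expected between 06:00 and 10:00, cut-off at 23:00.
rules = [Rule("kettle", (time(6, 0), time(10, 0)), time(23, 0))]
print(alarm_level(violated_rules(rules, {"kettle": []})))  # 1: no morning usage
```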
This study proposes a combinatory feature selection method for HRV and ECG signal classification to detect the level of stress. The main purpose of feature selection methods is to reduce the dimension of the feature set to those features whose variation is informative for the algorithm being used. This makes classifiers more efficient and accurate and reduces processing time. The paper studies the effect of three feature selection methods, ANOVA, SFS and SBS, on the complete list of ECG and HRV features derived from 16 subjects. For each feature set resulting from the different selection methods, the stress-detection error is investigated using an ANFIS classifier. Finally, the combinatory feature selection method (the intersection of the sets chosen by the individual methods) is compared in terms of the error index. The results reveal that the combinatory method, consisting of the intersection of the ANOVA, SFS and SBS selections, provides good and accurate detection results with low memory usage, low time complexity, and uncomplicated computations.
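The combinatory selection can be pictured as the intersection of the feature subsets chosen by ANOVA, forward sequential selection (SFS) and backward sequential selection (SBS). In the sketch below the wrapper classifier (k-nearest neighbours) and the number of features to keep are stand-ins, since the paper evaluates the selected features with an ANFIS classifier.

```python
# Hedged sketch of combinatory feature selection (intersection of ANOVA, SFS, SBS).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def combinatory_selection(X, y, k=10):
    anova = SelectKBest(f_classif, k=k).fit(X, y).get_support()
    wrapper = KNeighborsClassifier()
    sfs = SequentialFeatureSelector(wrapper, n_features_to_select=k,
                                    direction="forward").fit(X, y).get_support()
    sbs = SequentialFeatureSelector(wrapper, n_features_to_select=k,
                                    direction="backward").fit(X, y).get_support()
    keep = anova & sfs & sbs           # features chosen by all three methods
    return np.flatnonzero(keep)        # indices of the retained features
```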
In this paper, an improved experimental approach to pattern recognition of surface EMG (SEMG) signals of hand gestures using spectral estimation and a neural network is proposed. The covariance method is used for spectral estimation and Learning Vector Quantization (LVQ) is chosen for the neural network classification. The raw SEMG signals are captured from an SEMG amplifier, and the autoregressive (AR) covariance method returns the power spectral density (PSD) as a magnitude-squared frequency response. The AR data from the four channels are combined, and a fine-tuning step using LVQ is then incorporated for pattern classification. The details of the experiments and simulations conducted to verify the discriminative power and effectiveness of the combined-channel PSD method for SEMG pattern classification of hand gestures are described.
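To make the spectral-estimation stage concrete, the sketch below computes an AR power spectral density with the covariance (least-squares) method for a single SEMG channel; the model order, FFT length and the idea of concatenating the four per-channel spectra before LVQ classification are assumptions for illustration.

```python
# Hedged sketch: covariance-method AR estimation and the resulting PSD.
import numpy as np
from scipy.signal import freqz

def ar_covariance_psd(x, order=4, nfft=256):
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Least-squares (covariance method, no windowing): x[n] ~ sum_k c[k]*x[n-k]
    lagged = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    target = x[order:]
    coeffs, *_ = np.linalg.lstsq(lagged, target, rcond=None)
    sigma2 = np.mean((target - lagged @ coeffs) ** 2)   # driving-noise variance
    # AR PSD: sigma^2 / |1 - sum_k c[k] * exp(-j*w*k)|^2
    w, h = freqz(b=[1.0], a=np.concatenate(([1.0], -coeffs)), worN=nfft)
    return w, sigma2 * np.abs(h) ** 2

# The four per-channel spectra could then be concatenated into one feature vector
# for the LVQ classifier (an assumption about how the channels are combined).
```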
This paper presents an Intelligent Informatics Platform, which feeds information for decision making in a Precision Agriculture environment. One major challenge faced by agriculture in Malaysia is crop productivity. There is a clear need for the adoption of Information and Communications Technology (ICT) in the management of crops. ICT can help not only in the monitoring of the crop, but also in providing assistance in the form of decision support for the plantation operators. Drawing on the various sensors deployed at the plantation, together with observations from the farmers and rules defined by the domain expert, the Intelligent Informatics Platform (I2P) is able to produce advice for the farm operators on the condition of the crops and also a mitigation plan if the system identifies any problems. This is made possible with the use of an ontology with OWL and SWRL reasoning.
In the past decade, smart home environment research has found application in many areas, such as activity recognition, visualization, and automation. However, less attention has been paid to monitoring, analyzing, and predicting energy usage in smart homes, despite the fact that electricity consumption in homes has grown dramatically. In this paper, we extract useful features from sensor data collected in the smart home environment, select the most significant features based on the mRMR feature selection criterion, and then utilize three machine learning algorithms to predict energy use given these features. To validate these algorithms, we use real sensor data collected from volunteers living in our smart apartment testbed. We compare the performance of the alternative learning algorithms and analyze the results of two experiments performed in the smart home.
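For reference, a compact sketch of the mRMR criterion used for the feature-selection step: features are added greedily so that relevance to the energy-consumption target (mutual information) is maximised while redundancy with already selected features is minimised. The mutual-information estimator and parameters below are illustrative, not the exact configuration used in the paper.

```python
# Hedged sketch of greedy mRMR feature selection for a regression target.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(X, y, n_select):
    relevance = mutual_info_regression(X, y)          # MI(feature; consumption)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select and remaining:
        scores = []
        for j in remaining:
            redundancy = 0.0
            if selected:                               # mean MI with chosen features
                redundancy = np.mean(
                    [mutual_info_regression(X[:, [j]], X[:, s])[0] for s in selected])
            scores.append(relevance[j] - redundancy)   # mRMR difference criterion
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```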
This paper describes the design process of a cognitive sensor network using wireless sensor nodes integrated in gloves for fire fighters. The gloves support protection functions for fire fighters during operations, such as temperature warnings and retreat messages. Warning decisions are processed on the glove based on sensor and communication data, whereas haptic feedback to the fire fighter and hazard information are transmitted as reactions. Monitoring of the cognitive network of several gloves is done by the command post. Designing such cognitive networks cannot be done by considering the technical part only. Because the glove is part of the protection equipment of the fire fighters, they have to trust it. Therefore, adaptivity of both the fire fighters and the glove sensor network is required. It will be shown that, using a particular design method with iterative process steps, the complexity of such a cognitive design is controllable.
The analysis of movements using inertial sensors represents an interesting alternative to video cameras or other instrumentation used in posture analysis (treadmills, force plates, pressure plates, EMG). Inertial-sensor based analysis has been shown to be useful for classifying Activities of Daily Living for situation assessment and healthcare applications, and for understanding human emotions from body posture. We classify movements using a “lexical-like” approach. We use a vector representation of movements based on a technique able to extract a great number of generic features, and a classification method, inspired by text mining and machine learning techniques with some modifications, that transforms our vector space from a feature-value space into a feature-frequency space. We used this method to classify a set of 21 movements performed by 13 people, with good recognition results. We then tested our method on the public WARD 1.0 database, outperforming the results presented in the literature for that database. The method we describe also proves to be technologically independent and semantically scalable, uses fast algorithms, and appears to be suitable for any practical application where runtime movement analysis with large dictionaries could be a key factor.
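One plausible reading of the feature-frequency representation, sketched below, treats each (feature, value-bin) pair as a “word” whose occurrences over a movement window are counted, bag-of-words style; the binning scheme and the linear classifier are our assumptions and not necessarily the authors' exact pipeline.

```python
# Hedged sketch: bag-of-words style "feature-frequency" vectors for movement windows.
import numpy as np
from sklearn.svm import LinearSVC

def to_feature_frequency(window, n_bins=16):
    """window: (n_samples, n_features) array of inertial readings."""
    n_features = window.shape[1]
    counts = np.zeros(n_features * n_bins)
    for j in range(n_features):
        col = window[:, j]
        # Discretise the column; each (feature, bin) pair acts like a vocabulary word.
        bins = np.floor((col - col.min()) / (np.ptp(col) + 1e-9) * n_bins).astype(int)
        bins = np.clip(bins, 0, n_bins - 1)
        for b in bins:
            counts[j * n_bins + b] += 1
    return counts / len(window)        # normalised "term frequencies"

# Example usage: X = np.stack([to_feature_frequency(w) for w in windows])
#                clf = LinearSVC().fit(X, labels)
```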
In this paper, we discuss how spoken dialogue systems technology, a key factor for user-friendly human-computer interaction, can be deployed in intelligent environments (IEs). An IE is a physical space augmented with computation, communication and digital content. In particular, we show how a spoken dialogue management component may be integrated in Adaptive and Trusted Ambient Ecologies (ATRACO), one realization of next generation intelligent environments. ATRACO is an EU-funded FET/ICT 7th Framework Programme project. We provide an overview of the ATRACO architecture and its interleaving with the spoken dialogue manager, called OwlSpeak. Several conversational phenomena that are especially important within the application domain are introduced, providing a detailed look at OwlSpeak's capabilities.