
Ebook: Workshops at 18th International Conference on Intelligent Environments (IE2022)

The term Intelligent Environments (IEs) refers to physical spaces in which information and communication technologies are interwoven with sensing technologies, innovative user interfaces, robotics and artificial intelligence to create interactive spaces which increase the awareness and enhance the experience of those occupying them. The growing IE community is rooted in academia, but increasingly involves practitioners. It explores the core ideas of IEs as well as the factors necessary to make them a reality, such as energy efficiency, the computational constraints of edge devices and privacy issues.
This book presents papers from Workshops held during the 18th International Conference on Intelligent Environments, IE2022, held as a hybrid conference in Biarritz, France, from 20 to 23 June 2022. The conference is now recognized as a major annual venue in the field of IE. It offers a truly international forum for the exchange of information and ideas, and welcomes contributions from all technically active regions of the planet.
Included here are 35 papers from the 1st International Workshop on Sentiment Analysis and Emotion Recognition for Social Robots (SENTIRobots’22); the 1st International Workshop on Edge AI for Smart Agriculture (EAISA’22); the 2nd International Workshop on Artificial Intelligence and Machine Learning for Emerging Topics (ALLEGET’22); the 11th International Workshop on the Reliability of Intelligent Environments (WoRIE’22); the 2nd International Workshop on Self-Learning in Intelligent Environments (SeLIE’22); the 5th Workshop on Citizen Centric Smart Cities Solutions (CCSCS’22); and the 11th International Workshop on Intelligent Environments Supporting Healthcare and Well-being (WISHWell’22).
Exploring some of the latest research and developments in the field, the book will be of interest to all those working with intelligent environments and their associated technologies.
Intelligent Environments (IEs) combine physical spaces with sensing technologies, innovative user interfaces, robotics and artificial intelligence, to increase the users’ awareness of their surroundings, empower them to carry out their tasks, enrich their experience, and enhance their ability to manage such environments. IEs can be anything – homes, offices, public spaces and even fields. IEs’ growing community, anchored in academia but increasingly involving practitioners, is tirelessly working on bringing them to life. The community explores core ideas of IEs as well as critical issues needed to make them a reality, such as energy efficiency, computational constraints of edge devices and privacy issues.
The 18th International Conference on Intelligent Environments was still affected by COVID-19, though to a lesser degree than last year. The pandemic has been a mixed blessing for conferences and workshops – tele-attendance has certainly made them more accessible, but the experience does not match being there in person. This has impacted paper submissions, as having a paper accepted no longer necessarily means a ticket to the French Basque coast. But things are improving, and while the workshops in 2022 do not quite match the best years, we are proud to report that we have seven workshops this year compared to last year’s four, with 35 papers compared to last year’s 24. The following workshops are included this year:
∙ 1st International Workshop on Sentiment Analysis and Emotion Recognition for Social Robots (SENTIRobots 2022)
∙ 1st International Workshop on Edge AI for Smart Agriculture (EAISA 2022)
∙ 2nd International Workshop on Artificial Intelligence and Machine Learning for Emerging Topics (ALLEGET 2022)
∙ 11th International Workshop on the Reliability of Intelligent Environments (WoRIE 2022)
∙ 2nd International Workshop on Self-Learning in Intelligent Environments (SeLIE 2022)
∙ 5th International Workshop on Citizen-Centric Smart Cities Services (CCSCS 2022)
∙ 11th International Workshop on Intelligent Environments Supporting Healthcare and Well-being (WISHWell 2022)
We are happy to see two completely new workshops this year (SENTIRobots and EAISA). They also happen to be the two with the greatest number of papers, so congratulations on an excellent start! We are pleased that ALLEGET – which came into being through a merger of three other workshops in the dark days of the pandemic – is still with us despite its inauspicious origin, hopefully for many future editions. We are also impressed that two workshops have reached their 11th edition (WoRIE and WISHWell) – excellent stamina! We would like to thank all the contributing authors, as well as the members of the organizing and programme committees of the workshops, for their valuable work, which contributed to the success of the Intelligent Environments 2022 conference. We are grateful to our technical sponsors: the IEEE Systems, Man & Cybernetics Society, IOS Press, and the MDPI journals Sensors, Electronics and Applied Sciences.
In recent years, the use of robots in environments shared with human beings has been increasing. Several socially aware navigation frameworks for mobile robots have been proposed; however, they generally focus on a single aspect of the social relationship between humans and robots, and approaches that integrate all the aspects of social navigation are lacking. This work proposes an autonomous navigation framework that integrates social perception elements (from a robocentric perspective) with proxemics modelling, considering the presence of human beings and the perception of their needs, feelings and intentions. We verified the feasibility of our approach by implementing it in ROS and Gazebo and qualitatively evaluating its performance in two simulated scenarios that include people with different feelings about the robot’s presence, which triggered changes in the path planned by the robot in real time. We conclude that this framework is feasible for implementing social navigation in mobile robots.
Soon robots will cooperate with humans in everyday tasks. These robots must be endowed with social skills so that their behavior resembles that of people. One such behavior is navigation: how the robot plans its route and moves through ubicomp environments. For example, social behavior during navigation includes detecting people’s positions and using proxemics to evaluate the areas in which the robot may move, and at what velocity. This work presents a new controller for socially aware person following. The robot is equipped with RGB-D and laser sensors and navigates through a ubicomp environment that provides the person’s position at every moment. The system first estimates the person’s position and interaction regions at a future instant, and then adjusts its path and velocity based on this estimate. Experimental results in simulated environments are included and discussed as initial evidence of the performance of this proposal, and a set of social metrics is used to validate the results.
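The proxemics-based velocity adjustment described above can be sketched as a simple speed limiter. This is an illustrative assumption rather than the paper’s actual controller: the zone radii (loosely following Hall’s proxemic zones) and the speed caps are made-up values.

```python
# Hypothetical proxemics-based velocity limiter for person following.
# Zone radii (metres) and speed fractions are illustrative assumptions.

INTIMATE = 0.45   # inside this radius the robot must stop
PERSONAL = 1.2    # move very slowly
SOCIAL = 3.6      # move at reduced speed
V_MAX = 1.0       # nominal maximum velocity (m/s)

def max_velocity(distance_to_person: float) -> float:
    """Cap the robot's velocity according to the proxemic zone that
    the predicted person position falls into."""
    if distance_to_person <= INTIMATE:
        return 0.0
    if distance_to_person <= PERSONAL:
        return 0.25 * V_MAX
    if distance_to_person <= SOCIAL:
        return 0.6 * V_MAX
    return V_MAX
```

In a full system, `distance_to_person` would come from the estimate of the person’s future position mentioned in the abstract, and the returned cap would feed the velocity controller.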
The coexistence of service robots with humans in social environments has intensified in recent years, making fluid Human-Robot Interaction (HRI) increasingly necessary. In this context, the present work develops an architecture, called Erika, which provides a chatbot for interacting by voice and text commands with a service robot that performs autonomous navigation while respecting social restrictions based on proxemic zones. Erika exposes an API that connects the chatbot to a web application, which in turn communicates with the Robot Operating System (ROS) to run simulated experiments and test the functionality of the chatbot and the robot’s socially aware navigation. Results demonstrate that the service robot responds to commands provided through the chatbot and can switch between autonomous and manual navigation when necessary.
Social robotics is an emerging area that fosters the integration of robots and humans in the same environment. To this end, robots include capabilities such as detecting people’s emotions in order to plan their trajectories, modify their behavior, and generate positive interactions based on the information analyzed. Many of the algorithms developed for robot tasks such as people recognition, tracking, emotion detection and displaying empathy need large, reliable datasets to evaluate their effectiveness and efficiency. Most existing datasets do not take the first-person perspective of the robot’s own sensors, but a third-person perspective from cameras outside the robot. In this context, we propose an approach for creating datasets with a robot-centric perspective. Following this approach, we built a dataset of 23,222 images and 24 videos recorded through the sensors of a Pepper robot in simulated environments, which is used to recognize individual and group emotions. We developed two virtual environments (a cafeteria and a museum) in which people, alone and in groups, express different emotions and are captured from the Pepper robot’s point of view. We labeled the dataset using the Viola-Jones algorithm for face detection, classifying individual emotions into six types: happy, neutral, sad, disgust, fear, and anger. Based on the group emotions observed by the robot, the videos were classified into three emotions: positive, negative, and neutral. To show the suitability and utility of the dataset, we trained and evaluated the VGG-Face network, which achieved 99% accuracy in recognizing individual emotions, and 90.84% and 89.78% in detecting group emotions in the cafeteria and museum scenarios, respectively.
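The mapping from the six individual emotions to the three group-level classes could be realized, for example, as a majority vote over per-face valences. The valence mapping and the majority rule below are assumptions for illustration, not the paper’s exact labeling procedure.

```python
# Hypothetical aggregation of per-face emotion labels into the three
# group-level classes (positive / negative / neutral). The valence
# mapping and the majority-vote rule are illustrative assumptions.

VALENCE = {
    "happy": "positive",
    "neutral": "neutral",
    "sad": "negative",
    "disgust": "negative",
    "fear": "negative",
    "anger": "negative",
}

def group_emotion(face_labels):
    """Majority vote over the valence of each detected face."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for label in face_labels:
        counts[VALENCE[label]] += 1
    return max(counts, key=counts.get)
```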
In the context of Human-Robot Interaction (HRI), emotional understanding is becoming more popular because it makes robots seem more humanized and user-friendly. Giving a robot the ability to recognize emotions raises several difficulties due to the limits of the robot’s hardware and the real-world environments in which it works. In this sense, an out-of-robot, multimodal approach can be the solution. This paper presents the implementation of a previously proposed multimodal emotional system in the context of social robotics. The system runs on a server and bases its prediction on four input modalities (face, posture, body, and context features) captured through the robot’s sensors; the predicted emotion triggers changes in the robot’s behavior. Working on a server overcomes the robot’s hardware limitations at the cost of some communication delay, while working with several modalities allows complex real-world scenarios to be faced robustly and adaptively. This research focuses on analyzing and arguing for the usability and viability of an out-of-robot, multimodal approach to emotional robots. Functionality tests produced the expected results, showing that the entire proposed system takes around two seconds per prediction; this delay is attributable to the deep learning models used, which can be improved. Regarding the HRI evaluations, we briefly discuss the remaining assessments and explain how difficult a well-done evaluation of this work can be. A demonstration of the system’s functionality can be seen at https://youtu.be/MYYfazSa2N0.
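One common way to combine several modalities into a single prediction is late fusion: average the per-modality class probabilities and take the most likely class. The sketch below assumes this scheme; the class list, probability vectors and weights are illustrative, not values from the paper.

```python
# Minimal late-fusion sketch over the four modalities named in the
# abstract (face, posture, body, context). Class names, probabilities
# and weights are illustrative assumptions.

EMOTIONS = ["happy", "neutral", "sad", "anger"]

def fuse(modality_probs, weights):
    """Weighted average of per-modality probability vectors,
    then argmax over the emotion classes."""
    fused = [0.0] * len(EMOTIONS)
    for probs, w in zip(modality_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return EMOTIONS[fused.index(max(fused))]
```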
Today, social robotics encompasses the ability to improve robot communication with humans. The robot’s behaviors are extended to maintain an acceptable human-robot interaction (HRI), which implies that the robot must know the environment in which it is located and establish rules of behavior. Likewise, a large part of the robot’s capabilities is concentrated in planning social paths influenced by human emotions, whereby the robot can vary its speed along the path and learn where it should and should not be located. In this article, a new cost function for the A* path planning method is proposed for social navigation in an environment shared with people with different emotional characteristics, classified into three categories (positive, neutral and negative). The results and advantages of the proposed algorithm are demonstrated in simulated virtual environments with crowds of interacting people.
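The idea of an emotion-aware A* cost function can be sketched as grid A* whose step cost adds a penalty around each person, scaled by that person’s emotional category. This is a sketch in the spirit of the abstract, not the paper’s actual cost function: the penalty magnitudes, influence radius and grid setup are all assumptions.

```python
import heapq

# Sketch of A* with an emotion-dependent social penalty.
# Penalty magnitudes and the influence radius are illustrative.
EMOTION_PENALTY = {"positive": 4.0, "neutral": 10.0, "negative": 40.0}

def social_cost(cell, people):
    """Extra traversal cost near people; stronger for negative emotions."""
    cost = 0.0
    for (px, py), emotion in people:
        d = abs(cell[0] - px) + abs(cell[1] - py)   # Manhattan distance
        if d <= 2:                                  # assumed influence radius
            cost += EMOTION_PENALTY[emotion] / (1 + d)
    return cost

def plan(start, goal, size, people):
    """Grid A* with step cost 1 + social_cost and Manhattan heuristic."""
    frontier = [(0.0, 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if g > best.get(cell, float("inf")):
            continue  # stale queue entry
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            g2 = g + 1 + social_cost(nxt, people)
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (g2 + h, g2, nxt, path + [nxt]))
    return None
```

With a person in a negative emotional state standing between the start and the goal, the planner detours around them rather than crossing their personal space.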
Autism is a neurodevelopmental disorder characterized by deficits in social and interpersonal interaction and communication skills. A generalized facial emotion recognition model does not scale well when confronted with the emotions of autistic children, due to the domain shift between the distributions of the source (neurotypical) and target (autistic) populations. The dearth of labeled datasets in the field of autism exacerbates the problem. Domain adaptation using a generative adversarial network (GAN) counters this disparity by aligning the features of the source and target domains through adversarial training. This paper looks at building a facial emotion classifier that can identify the idiosyncrasies of an autistic child’s facial expression by generating feature-invariant representations of the source and target distributions. The objective of the paper is two-fold: a) to build a discriminative classifier that accurately identifies the emotions of autistic children, and b) to train a feature generator that produces an invariant feature representation of the source and target domains, taking into account their similar yet different data distributions, in the presence of unlabeled target data. Automatic recognition and classification of the facial expressions of the autistic population has not been pursued as extensively as for the neurotypical population, due to the complexities of eliciting and interpreting data obtained from autistic children.
The Smart Village project is a scientific programme of the University of Corsica in which an information system based on LoRa communication technology has been deployed, allowing the collection of millions of pieces of information, notably concerning agricultural activities. In this article, we present the deployed system and the limits imposed on its use by the experimentation site. We discuss the development of a more robust architecture that integrates data processing as close as possible to agricultural operations in an edge and fog layer structure.
As part of the Internet of Food & Farm project for the adoption of IoT in the agri-food sector, this paper presents the design, implementation, and evaluation of an end-to-end, open IoT system architecture. This use case aims to further improve the entire viticulture value chain in a renowned estate of 135 ha in the Bordeaux area (France). Based on the first generation of end-devices (∼74) and gateways, the team designed the system’s architecture and relied on a LoRa network to collect and analyze environmental data, ultimately allowing them to optimize resource consumption and improve vine yield and wine quality.
Agriculture is a basic economic resource that is important for the survival of the population. Plants, like people, suffer from diseases, and their diagnosis and treatment are very important for plant productivity. In general, most diseases can be detected and classified from the symptoms that appear. This article therefore proposes a new artificial-intelligence-based recommendation system, built on plant profiles, that returns results with comparatively high precision. The system relies on the criteria requested by the user performing the diagnosis and consequently leads to timely treatment of the plants.
This article investigates the problem of agricultural yield prediction, in particular for soybean. Soybean is a very nutritious legume and one of the species producing the most protein per hectare. It is used in particular as a protein source in animal feed and as an oilseed. However, agricultural systems depend on climatic variability, and farmers must deal with this factor to optimize their activities. This article describes a method for predicting soybean yields based on machine learning. A comparative study shows that forecasts with less than a 2% margin of error can be obtained using the Random Forest algorithm. In addition, the results obtained in this study can be extended to many other crops, such as maize or rice.
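A “margin of error” of the kind reported above is typically computed as a mean absolute percentage error (MAPE) between observed and predicted yields. Treating the paper’s margin as MAPE is an assumption, and the yield values in the example are made up for illustration.

```python
# Sketch of the kind of error metric behind a "less than 2% margin of
# error" claim: mean absolute percentage error between observed and
# predicted yields (e.g. in tonnes per hectare).

def mape(observed, predicted):
    """Mean absolute percentage error, in percent."""
    errors = [abs(o - p) / o for o, p in zip(observed, predicted)]
    return 100.0 * sum(errors) / len(errors)
```

A trained regressor (Random Forest in the paper) would supply `predicted`; the forecast meets the reported bar when `mape(observed, predicted) < 2.0`.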
Post-harvest fruit grading is a necessary step to avoid disease-related loss in quality. In this paper, a hierarchical method is proposed to (1) remove the background and (2) detect images that contain grape diseases (botrytis, oidium, acid rot). Satisfying segmentation performance was obtained by the proposed Lite Unet model, with a 92.9% IoU score and an average speed of 0.16 s/image. A pretrained MobileNet-V2 model obtained a 94% F1 score on disease classification, and an optimized CNN reached a score of 89% with 10 times fewer parameters. Implementing both the segmentation and classification models on a low-powered device would allow real-time disease detection at the press.
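The IoU score used to report segmentation quality is the ratio of the overlap between the predicted and ground-truth masks to their union. A minimal sketch on binary masks given as nested lists (1 = grape pixel, 0 = background); the masks in the example are illustrative.

```python
# Intersection-over-union (IoU) for binary segmentation masks,
# the metric behind the 92.9% score reported for segmentation.

def iou(pred, truth):
    """IoU of two equally sized binary masks (nested lists of 0/1)."""
    inter = union = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            inter += p & t
            union += p | t
    return inter / union if union else 1.0
```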
About 35% of the world’s food is produced on small-scale farms, which occupy only about 12% of all agricultural land. However, smallholder farmers usually face a number of constraints, with water being one of the most important. Smart technologies, and especially the sensor systems behind so-called Smart Farming Technologies, can be applied to optimize irrigation. Regardless of the irrigation technique, soil sensors are a promising way to provide data that can further reduce water usage. Despite all these possibilities, however, the smallholder community is still reluctant to adopt technology-based systems, for various reasons among which prohibitive cost and complexity of deployment usually appear overwhelming. The PRIMA INTEL-IRRIS project has the ambition to make digital and smart farming technologies attractive and more accessible to these communities by proposing the intelligent irrigation “in-the-box” concept. This paper describes the low-cost, fully edge-based IoT/AI system targeting smallholder farmer communities and how it realizes the intelligent irrigation “in-the-box” concept.
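At its simplest, the edge-side decision such an “in-the-box” system must make is whether to irrigate given recent soil-moisture readings. The sketch below is a hypothetical illustration, not the INTEL-IRRIS design: the smoothing window, threshold and readings are assumed values.

```python
# Hypothetical edge-side irrigation decision: trigger irrigation when
# the mean of the most recent soil-moisture readings (%) drops below
# a crop-specific threshold. Window size and threshold are assumptions.

def should_irrigate(readings, threshold, window=3):
    """Average the last `window` readings to smooth sensor noise,
    then compare against the crop's moisture threshold."""
    recent = readings[-window:]
    return sum(recent) / len(recent) < threshold
```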
Recent advances in the field of genomic trait prediction have paved the way for futuristic plant breeding programs. The objective of our study is to predict single or multiple traits of rapeseed (Brassica napus) from RNA sequence data. We analyzed 12 different traits of rapeseed and evaluated how their pair-wise correlations impact yield production. Further, for predicting single or multiple traits of rapeseed, four state-of-the-art machine learning (ML) models were evaluated, namely Lasso Regression (Lasso), Random Forest (RF), Support Vector Machine (SVM) and Multi-layer Perceptron (MLP). For both single- and multi-trait prediction, our RF and SVM models performed most consistently, with the lowest mean squared error achieved by RF (0.045 and 0.016 for single- and multi-trait prediction, respectively). A comparative analysis with related works showed the potential of our model for future multi-modal model development. Future work in this context could involve evaluating our models on other transcriptome datasets from related crops, or using deep learning-based methods for better outcomes.
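For multi-trait prediction, the mean squared error quoted above is naturally computed over all samples and all traits of a multi-output regressor. A minimal sketch; the trait values in the example are made up, and averaging uniformly over traits is an assumption about how the paper aggregates its score.

```python
# Mean squared error over a samples x traits prediction matrix,
# the kind of score behind the reported multi-trait MSE values.

def multi_trait_mse(y_true, y_pred):
    """MSE over nested lists of equal shape (samples x traits)."""
    total = n = 0
    for row_t, row_p in zip(y_true, y_pred):
        for t, p in zip(row_t, row_p):
            total += (t - p) ** 2
            n += 1
    return total / n
```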
Honeybees are of vital importance to both agriculture and ecology. Unfortunately, their populations have been in serious decline in recent years. Swarms from hives are of great importance both to the wider success of a colony and to beekeepers. In this paper, we contribute to the challenge of predicting when a swarm is going to occur. We employ a Convolutional Neural Network (CNN) applied to audio data recorded from hives. Our initial results are extremely encouraging, since they allow us to distinguish hives which are preparing to swarm from those which are not with high levels of accuracy.
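A CNN on hive audio typically consumes frame-level features (e.g. a spectrogram) rather than raw samples. The sketch below shows the simplest version of that front end, framing the signal and computing a log-energy feature per frame; the frame sizes are assumptions, and a real pipeline would use spectrogram-like features of this shape as CNN input.

```python
import math

# Sketch of an audio front end for a hive-monitoring CNN: slice the
# recording into overlapping frames and compute one log-energy
# feature per frame. Frame length and hop are illustrative.

def frame_features(samples, frame_len=4, hop=2):
    """Log-energy of overlapping frames of a raw audio signal."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame)
        feats.append(math.log(energy + 1e-10))  # avoid log(0) on silence
    return feats
```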
This article presents a novel approach to the acquisition, processing, and analytics of industrial food production by employing state-of-the-art artificial intelligence (AI) at the edge. Intelligent Industrial Internet of Things (IIoT) devices are used to gather relevant production parameters of industrial equipment and motors, such as vibration, temperature and current using built-in and external sensors. Machine learning (ML) is applied to measurements of the key parameters of motors and equipment. It runs on edge devices that aggregate sensor data using Bluetooth, LoRaWAN, and Wi-Fi communication protocols. ML is embedded across the edge continuum, powering IIoT devices with anomaly detectors, classifiers, predictors, and neural networks. The ML workflows are automated, allowing them to be easily integrated with more complex production flows for predictive maintenance (PdM). The approach proposes a decentralized ML solution for industrial applications, reducing bandwidth consumption and latency while increasing privacy and data security. The system allows for the continuous monitoring of parameters and is designed to identify potential breakdown situations and alert users to prevent damage, reduce maintenance costs and increase productivity.
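The anomaly detectors that such edge devices embed can be as simple as a z-score test against a sliding window of recent sensor readings. This is a minimal sketch of that idea, not the paper’s models: the window contents and the three-sigma threshold are assumed values.

```python
import statistics

# Minimal edge-style anomaly detector for a motor parameter such as
# vibration: flag a reading whose z-score against the recent history
# exceeds a threshold. Threshold and sample values are illustrative.

def is_anomaly(history, reading, threshold=3.0):
    """True if `reading` deviates from the recent history by more
    than `threshold` standard deviations."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    if std == 0:
        return reading != mean
    return abs(reading - mean) / std > threshold
```

On-device, a flagged reading would raise the predictive-maintenance alert described above instead of shipping the raw stream to the cloud.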