In order for robots to interact with humans about real-world scenarios or objects, a robot needs to construct a representation (a ‘state of mind’) of these scenarios that a) is grounded in the robot’s perception and b) ideally matches human understanding and concepts. Using table-top settings as a scenario, we propose a framework that generates a robot’s ‘state of mind’ by extracting the objects on the table along with their properties (color, shape, and texture) and their spatial relations to each other. The scene as perceived by the robot is represented as a dynamic graph in which object attributes are encoded as fuzzy linguistic variables that match human spatial concepts. In particular, this paper details the construction of such graph representations by combining low-level neural-network-based feature recognition with a high-level fuzzy inference system. Using fuzzy representations makes it easy to adapt the robot’s original scene representation to deviations in properties or relations that emerge in language descriptions given by humans viewing the same scene. The framework is implemented on a Pepper humanoid robot and has been evaluated on a data set collected in-house.
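To make the core idea concrete, here is a minimal Python sketch of how a scene graph with fuzzy linguistic attributes could be encoded. The paper does not publish its implementation; all names, membership shapes, and parameter values below (triangular_membership, SceneGraph, the "left_of" relation) are hypothetical illustrations of the general technique, not the authors' code.

```python
# Illustrative sketch only: hypothetical names and membership functions,
# not the framework described in the paper.

def triangular_membership(x, a, b, c):
    """Degree (0..1) to which x belongs to a fuzzy set with a triangular
    profile rising from a to a peak at b and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def left_of_degree(dx, scene_width):
    """Fuzzy degree for 'A is left of B', computed from the signed
    horizontal offset dx = x_B - x_A on a table of the given width."""
    return triangular_membership(dx, 0.0, scene_width / 2, scene_width)

class SceneGraph:
    """Detected objects as nodes carrying fuzzy attribute labels;
    spatial relations as directed edges weighted by membership degree."""

    def __init__(self):
        self.nodes = {}   # object_id -> {attribute: {linguistic label: degree}}
        self.edges = {}   # (id_a, id_b) -> {relation: degree}

    def add_object(self, object_id, fuzzy_attributes):
        self.nodes[object_id] = fuzzy_attributes

    def add_relation(self, id_a, id_b, relation, degree):
        self.edges.setdefault((id_a, id_b), {})[relation] = degree

# Example: two objects detected on a 1.0 m wide table. Attribute degrees
# would come from the low-level recognizers; here they are made up.
graph = SceneGraph()
graph.add_object("cup", {"color": {"red": 0.9, "orange": 0.2}})
graph.add_object("book", {"color": {"blue": 0.8}})
graph.add_relation("cup", "book", "left_of",
                   left_of_degree(dx=0.4, scene_width=1.0))
print(graph.edges)  # {('cup', 'book'): {'left_of': 0.8}}
```

Because each label is held with a graded degree rather than a hard assignment, a human description that disagrees with the robot's initial estimate (e.g. calling the cup "orange") can be accommodated by adjusting membership degrees instead of discarding the representation.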