An effective location recommendation function in smart building management systems can optimize space utilization and enhance the user service experience. Despite recent advances in Large Language Models (LLMs) for NLP-based recommender systems, smart building systems often lack communication and coordination with other devices, resulting in subpar interactivity and serviceability. To address these challenges, this paper proposes a multi-modal recommendation system for utilizing and sharing open spaces in smart buildings. The system includes a “vision-based recommendation module” that uses Vision-Language Models (VLMs) and real-time surveillance images to identify locations matching user-requested keywords. A “knowledge-based recommendation module” uses knowledge graph technology to match user requirements against historical feedback data, improving semantic matching and optimizing the user experience. The system combines the outputs of both modules via decision fusion to produce the final location recommendations. Simulation results demonstrate that the proposed system can effectively understand user intentions and provide satisfactory location recommendations, with the multi-modal approach outperforming the individual recommendation methods.
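The abstract does not specify how the decision fusion step is implemented; a common choice is late (score-level) fusion, where each module produces per-location confidence scores that are combined by a weighted sum. The sketch below illustrates that idea only — the function name, the weight parameter, and all location names and scores are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical late (score-level) decision fusion of two recommendation
# modules. All names, weights, and scores here are illustrative, not
# taken from the paper.

def fuse_recommendations(vision_scores, knowledge_scores, w_vision=0.5):
    """Weighted fusion of per-location confidence scores.

    vision_scores / knowledge_scores: dicts mapping location -> score in [0, 1].
    A location missing from one module contributes 0 from that module.
    Returns (location, fused_score) pairs ranked best-first.
    """
    locations = set(vision_scores) | set(knowledge_scores)
    fused = {
        loc: w_vision * vision_scores.get(loc, 0.0)
             + (1.0 - w_vision) * knowledge_scores.get(loc, 0.0)
        for loc in locations
    }
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: the vision module favors "Lobby A", while the knowledge module
# favors "Meeting Room 2" based on historical feedback.
vision = {"Lobby A": 0.9, "Meeting Room 2": 0.4}
knowledge = {"Meeting Room 2": 0.8, "Quiet Corner": 0.6}
ranking = fuse_recommendations(vision, knowledge, w_vision=0.5)
```

With equal weights, a location endorsed by both modules ("Meeting Room 2", fused score 0.6) outranks one endorsed strongly by only a single module — one plausible reason a fused system can outperform either module alone.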