This paper presents an image search system with an emotion-oriented context recognition mechanism. Our motivation for implementing an emotional context is to express the user's impressions within the retrieval process of the image search system. The emotional context identifies the most important features by connecting the user's impressions to the image queries. The Mathematical Model of Meaning (MMM: [2], [4] and [5]) is applied to recognize a series of emotional contexts and to retrieve the impressions most highly correlated with each context. These impressions are then projected onto a color impression metric to obtain the most significant colors for subspace feature selection. After the subspace features are selected, the system clusters the subspace color features of the image dataset using our proposed Pillar-Kmeans algorithm.
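To illustrate the projection step, the following sketch assumes a small impression-to-color correlation matrix standing in for the color impression metric and selects the top-k colors by weighting each color with the impression scores; the impression words, color names, matrix values, and top-k cutoff are placeholders for illustration, not the actual metric or selection rule used by the system.

```python
import numpy as np

# Impressions retrieved by the emotional context recognition step,
# with their (assumed) correlation scores to the context.
impressions = ["calm", "elegant", "warm"]
impression_scores = np.array([0.9, 0.7, 0.4])

colors = ["red", "orange", "yellow", "green", "blue", "violet"]
# Rows: impressions, columns: colors (assumed impression-color correlations).
color_impression_metric = np.array([
    [0.1, 0.2, 0.3, 0.6, 0.9, 0.5],   # "calm"
    [0.2, 0.3, 0.2, 0.4, 0.6, 0.8],   # "elegant"
    [0.8, 0.9, 0.7, 0.3, 0.1, 0.2],   # "warm"
])

# Projection: weight each color by the impression scores.
color_significance = impression_scores @ color_impression_metric

# Keep the k most significant colors as the selected feature subspace.
k = 3
subspace_colors = [colors[i] for i in np.argsort(color_significance)[::-1][:k]]
print(subspace_colors)
```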
The Pillar algorithm optimizes the initial centroids for K-means clustering. It is robust and effective for this purpose because it positions all centroids far apart from one another within the data distribution. The algorithm is inspired by the placement of pillars in a building: by distributing the pillars as far as possible from each other within the pressure distribution of a roof, they can withstand the roof's pressure and stabilize the structure. Analogously, the algorithm treats the centroids as pillars and the gravity weight of the data distribution in the vector space as the roof's pressure distribution. It therefore designates the positions of the initial centroids so that the accumulated distance between them in the data distribution is as large as possible.
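A minimal sketch of this initialization idea is given below, assuming Euclidean distance and omitting the outlier handling of the full Pillar algorithm: the first centroid is the point farthest from the grand mean, and each subsequent centroid is the point with the largest accumulated distance to the centroids chosen so far.

```python
import numpy as np

def pillar_initial_centroids(X, k):
    """Pick k initial centroids at maximally accumulated distances.

    Simplified sketch of the Pillar idea: plain Euclidean distance,
    no outlier checks.
    """
    grand_mean = X.mean(axis=0)
    # First centroid: the point farthest from the grand mean of the data.
    first = int(np.argmax(np.linalg.norm(X - grand_mean, axis=1)))
    chosen = [first]
    accumulated = np.zeros(len(X))
    for _ in range(1, k):
        # Accumulate distances to the most recently chosen centroid and
        # pick the point whose accumulated distance is largest.
        accumulated += np.linalg.norm(X - X[chosen[-1]], axis=1)
        masked = accumulated.copy()
        masked[chosen] = -np.inf          # never re-select a chosen point
        chosen.append(int(np.argmax(masked)))
    return X[chosen]

# The resulting centroids can then seed a standard K-means run, e.g.
# sklearn.cluster.KMeans(n_clusters=k, init=pillar_initial_centroids(X, k), n_init=1).
```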
The cluster-based similarity measurement also involves a semantic filtering mechanism. This mechanism filters out image data items that are unimportant to the context in order to speed up the image search process. The system clusters the image dataset using our Pillar-Kmeans algorithm, and the centroids of the clustering results are used to compute the similarity to the image query. We evaluate the proposed system experimentally on the Ukiyo-e image dataset from the Tokyo Metropolitan Library, which represents Japanese cultural image collections.
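The sketch below shows one way the centroid-based matching and the filtering step could be combined; the cosine similarity function, the keep_ratio cutoff, and the function names are assumptions for illustration, not the system's exact similarity measure or semantic filtering criterion.

```python
import numpy as np

def cluster_based_search(query_vec, centroids, labels, features, keep_ratio=0.5):
    """Rank images via cluster centroids, filtering out unimportant clusters.

    Sketch only: cosine similarity and the keep_ratio threshold are assumed
    choices, not the system's actual filtering criterion.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Similarity of the query to each cluster centroid.
    centroid_sims = np.array([cos(query_vec, c) for c in centroids])

    # Semantic filtering: keep only the clusters most relevant to the context.
    n_keep = max(1, int(len(centroids) * keep_ratio))
    kept_clusters = set(np.argsort(centroid_sims)[::-1][:n_keep])

    # Rank only the images that belong to the kept clusters.
    candidates = [i for i, lab in enumerate(labels) if lab in kept_clusters]
    scores = [(i, cos(query_vec, features[i])) for i in candidates]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```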