

The structure of the vessel tree in a retinal fundus image has been shown to be a valid biometric feature for person identification. Most existing methods in this field rely on vessel segmentation algorithms, which can be computationally intensive and may lack robustness to noisy images and to images with pathologies. We propose a method that matches the spatial arrangement of five bifurcations between two retinal fundus images. It does not involve vessel segmentation and is therefore more efficient and more robust to noisy images. The proposed method uses a hierarchy of trainable COSFIRE filters: the bottom layer consists of bifurcation-selective COSFIRE filters, and the top layer models the spatial arrangement of the concerned bifurcations. We demonstrate the effectiveness of our approach on the benchmark Retinal Identification Database (RIDB) and on the VARIA data set, achieving 100% accuracy on both. The proposed method does not rely on domain knowledge and can thus be adapted to other vision-based biometric systems, such as fingerprint and palmprint recognition.
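To give a rough intuition for the top-layer idea only, the following minimal sketch assumes the bottom layer has already returned five corresponding bifurcation coordinates for each image, and compares their spatial arrangement with a Procrustes-style alignment rather than the actual top-layer COSFIRE filter described in the paper; all function names, coordinates, and thresholds are hypothetical.

    import numpy as np

    def arrangement_residual(points_a, points_b):
        """Compare the spatial arrangement of two equally sized sets of
        bifurcation points (N x 2 arrays of (x, y)), assumed to be in
        corresponding order. Removes translation, scale and rotation
        (orthogonal Procrustes); a small residual indicates the same
        arrangement, i.e. the same person."""
        a = points_a - points_a.mean(axis=0)   # remove translation
        b = points_b - points_b.mean(axis=0)
        a = a / np.linalg.norm(a)              # remove scale
        b = b / np.linalg.norm(b)
        u, _, vt = np.linalg.svd(a.T @ b)      # best rotation aligning a onto b
        r = u @ vt
        return np.linalg.norm(a @ r - b)

    # Toy usage: five bifurcation locations from an enrolled image and a
    # probe image of the same eye, differing only by a shift and a rotation.
    enrolled = np.array([[120, 80], [200, 150], [90, 210], [260, 60], [180, 240]], float)
    theta = np.deg2rad(5.0)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    probe = enrolled @ rot.T + np.array([12.0, -7.0])
    print(arrangement_residual(enrolled, probe))  # near zero, so the arrangements match

In the actual method, both layers are trainable COSFIRE filters, so the spatial-arrangement model is learned from a prototype image rather than computed by an explicit alignment step as in this sketch.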