

This paper investigates the application of Grad-CAM, an explainable AI (XAI) technique, to enhance the transparency and precision of fingerprint authentication systems in forensics, particularly in detecting fingerprint mutilation, a common method used to evade biometric security measures. Employing the SOCOFing dataset, which contains both unaltered and synthetically altered fingerprint images, we apply Grad-CAM to visualize and interpret the decision-making process of a convolutional neural network (CNN) trained to recognize and classify these alterations. Our study demonstrates the model's effectiveness in identifying different types of fingerprint modifications and also identifies areas where its performance can be improved. Through detailed visual analysis, we uncover the model's focus regions and assess its reliability across alteration types and difficulty levels. The insights gained underline the potential of XAI to improve the robustness and reliability of biometric verification systems, paving the way for more secure and equitable AI applications in high-stakes environments.
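For readers unfamiliar with the technique, the core Grad-CAM computation referenced above can be sketched compactly. Given the activation maps of a convolutional layer and the gradients of the target class score with respect to those maps, Grad-CAM global-average-pools the gradients into per-channel weights, takes the weighted sum of the activation maps, and applies a ReLU. The snippet below is a minimal illustrative sketch in NumPy, not the implementation used in this paper; array shapes and the normalization step are our own assumptions for clarity.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Minimal Grad-CAM sketch (illustrative, not the paper's code).

    activations: conv-layer feature maps, shape (K, H, W)
    gradients:   d(class score)/d(activations), shape (K, H, W)
    returns:     heatmap of shape (H, W), scaled to [0, 1]
    """
    # alpha_k: global-average-pool each channel's gradients
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted combination of activation maps, then ReLU
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize for visualization (assumed step, common in practice)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In a real pipeline the activations and gradients would come from forward/backward hooks on the CNN's last convolutional layer, and the heatmap would be upsampled to the input fingerprint's resolution before overlaying.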