

With the adoption of the AI Act, fundamental rights impact assessment (FRIA) processes have become highly relevant for both public and private institutions; yet such processes can be challenging, especially for small- to medium-sized organizations. One recent study that piloted a partial automation of FRIA is Anticipating Harms of AI (AHA!), which relies on a large language model and crowd-sourcing; unfortunately, the paper provides limited insight into its internal workings. Therefore, this work presents AFRIA, a processing pipeline that performs specific aspects of FRIA, conceived with AHA! as its inspiration. To assess to what extent AFRIA is a successful reconstruction of AHA!, we analyzed the percentage of meaningful harms that AFRIA generates and the distribution of harm categories, and compared them to AHA!'s results, finding a satisfactory convergence. Beyond drawing inspiration from AHA!, we also looked into the requirements of the AI Act and scholarly critique to make AFRIA more meaningful for identifying impacts on fundamental rights, targeting categories of human rights impacts, potential harm-mitigation measures, and the severity and likelihood of the harms. The results show opportunities, but also limitations, in the type of support this technology can bring.