The recent study commissioned by ITRE warrants comment for the considerations it raises regarding the FRIA and the DPIA, since it does not frame the related issues properly. This matters at a time when the debate on this topic is growing.
I fully agree with the suggestion in this study to ‘develop joint templates or interoperability guidance to streamline DPIA and FRIA’. This is precisely what we are doing with the Catalan Data Protection Authority after issuing the first FRIA model this year.
Unfortunately, the section of the report on this topic (pages 34–35) is quite weak. Firstly, when addressing legal issues, it is important to consider the legal debate, not just grey literature, which is not subject to the same standards of quality and peer review as scientific publications.
Secondly, it seems to me that the authors contradict their own argument about ‘incongruence’ when they acknowledge that the GDPR and the AI Act apply to different duty bearers and adopt different risk perspectives. If the two instruments address different duty bearers and adopt different perspectives, some divergence between them is to be expected rather than incongruous.
Regarding the FRIA and the DPIA, it is incorrect to speak of duplication. What exists is an overlap, under the specific conditions set out in Article 27 of the AI Act: when the AI system involves the processing of personal data (which is not the case for all AI systems) and when the data controller is also the AI deployer. For these cases, Article 27(4) addresses the issue directly, stating that the FRIA ‘shall complement’ the DPIA, not replicate it. It is therefore incorrect to claim that a duplication of assessment is required. Duplication arises only if the two provisions are implemented poorly in practice, but this is common to many other legal obligations (e.g. cybersecurity obligations).
The claim that different supervisory authorities are involved is, again, only partially correct: in several countries, data protection authorities also act as market surveillance authorities. Moreover, this is not a question of fundamental rights but of the regulatory governance of AI resulting from the AI Act, and in this respect it is largely a minor issue.
The report states that, under the GDPR, ‘risk is evaluated contextually’, whereas this would not be the case under the AI Act. This misreads the relationship between the abstract risk classification set out in Annex III and the contextual assessment that Article 27 requires in those cases. Certain AI systems, e.g. those used to evaluate learning outcomes in education, are classified as high risk by the AI Act under Annex III. This, however, is an abstract classification based on the type of AI use, and a contextual evaluation can mitigate the risk and render it acceptable. The same applies to the cases listed in Article 35(3) of the GDPR.
The fact that a system can be considered high risk under the GDPR but not under the AI Act is a consequence of the decision to use a given risk classification in the AI Act (Annexes I and III). However, the study contains a basic misunderstanding in this regard. When it comes to fundamental rights, it is not the AI Act that creates the related protection: these rights are protected at national level in the Member States regardless of the provisions of the AI Act. The AI Act simply aims to promote trustworthiness in order to create an environment in which people in the EU trust AI applications in different fields. It also helps companies and public bodies consider the potential impact on these rights during the design process, rather than face significant costs and reputational damage when products and services that negatively affect individual and collective rights are brought to market.
The FRIA is a way of helping companies and public bodies avoid these additional risks and costs. Eliminating it (as suggested by Germany) does not eliminate the need to comply with fundamental rights protection; it simply reduces trust in AI and in AI operators, and increases the costs borne by the latter when legal action is taken to safeguard users’ rights.
Against this background, it is not surprising that some AI systems are considered high risk under the GDPR but not under the narrower scope of Annex III of the AI Act. Annex III is not an exhaustive list of cases in which AI can infringe fundamental rights; it is merely a list of cases in which a FRIA is mandatory. We should not confuse the use of a tool (prior assessment) with the legal framework (the protection of fundamental rights).
Based on these considerations, it is difficult to agree with the study’s conclusion that ‘the lack of alignment generates burdens in practice’, even though the authors recognise that ‘it is not strictly a legal inconsistency’. We fully agree that many EU laws are created in silos and pay little attention to their relationship with other laws; this is evident in many cases. In the case of the AI Act, however, the high-risk scheme was an explicit choice of the EU legislator. Although it would have been possible to introduce a general impact assessment for AI systems along the lines of Article 35 of the GDPR, the EU opted instead to create a classification of high-risk applications. This increases complexity in the interaction with other general laws, but these difficulties can be addressed through a carefully crafted DPIA-FRIA model, and the initial results of testing this approach are positive.
It is now clear that certain sectors close to business interests are calling for fundamental rights protection to be reduced or eliminated in AI regulation. This reflects a misunderstanding: fundamental rights remain protected regardless of Article 27. Without Article 27, however, many organisations will be unaware of how their solutions affect these rights and of the associated costs, including legal action, compensatory damages and the effort required to redesign AI solutions.
Furthermore, if EU legislators reduce the protection of fundamental rights, trust in AI among EU citizens will decrease at a time when the negative impact of this technology on individuals and society is becoming increasingly apparent. If the EU fails to protect the rights of its citizens with regard to AI, the resulting distrust of EU regulation and institutions will affect not only AI, but also future technologies.
Finally, it is worth noting that the study was co-authored by members of a law firm that advises companies in the IT sector. While this is positive in terms of sector knowledge, it raises questions about how potential conflicts of interest can be prevented.