How can we comply with the AI Act? How can we design and develop AI solutions in line with fundamental rights?
These were the key questions addressed at the workshop organised by the Croatian Data Protection Authority (Agencija za zaštitu osobnih podataka) in Zagreb on June 13.
With over seventy participants from the private and public sectors, we presented the FRIA model for assessing the impact of AI on fundamental rights and applied it to five Croatian cases in a hands-on, learning-by-doing session.
Thanks to the work I have carried out in Catalonia with the Autoritat Catalana de Protecció de Dades and at the international level with the UNDP, it has been possible to develop a user-friendly model for assessing and mitigating the risks to fundamental rights in AI development and use.
Transparent, explainable, and aligned with the logic of fundamental rights, this model serves as a design and accountability tool for both AI providers and deployers.
I would like to thank the Director of the Croatian Data Protection Authority, Zdravko Vukić, for endorsing this model through this initiative, and Anamarija Mladinić for her excellent organisation, dedication, and passion in promoting data protection and fundamental rights at both the national and international levels.
