Italian Supreme Court says people must know how algorithms that judge them work
Published on: June 10, 2021
by Vincenzo Tiani
On 25 May, fittingly on the anniversary of the GDPR, the General Data Protection Regulation, an important judgment of the Court of Cassation was published on the subject. The ruling clarified a fundamental principle: where an algorithm automatically profiles us, with the possible consequence of limiting our rights, the consent we give is valid only if we have been told how that algorithm works. This follows directly from the requirement that consent, in order to be valid, must be ‘freely and specifically expressed with reference to a clearly identified processing operation’.
The judgment concerns events that took place in 2016, before the entry into force of the GDPR, when the old text of the Privacy Code (Legislative Decree 196 of 2003) was still in force. In this case, an association used a platform capable of “processing reputational profiles concerning natural and legal persons in order to counteract phenomena based on the creation of artificial or untrue profiles and to calculate, instead, in an impartial manner, the so-called ‘reputational rating’ of the subjects surveyed, so as to allow any third parties to verify their real credibility”. This data processing was deemed unlawful by the Italian Data Protection Authority (the Garante), which ordered it to be blocked. The association appealed against this decision before the Court of Rome, which partially overturned the Garante’s decision.
In the Court of Rome’s view, it was legitimate for the association to offer this rating service, particularly given the express consent of the data subjects to its use. Since, according to the Court, there was no specific regulatory framework for “reputational rating”, similar to the one existing for the “company rating” provided for in the public contracts code, the system could not be considered unlawful.
The Garante took an entirely different view, stating that ‘the unknowability of the algorithm used to assign the rating score, with the consequent lack of the necessary requirement of transparency of the system’ did not allow the person concerned to give informed consent. The data subject cannot give valid consent when he or she does not have sufficient information to establish which data processing he or she is accepting.
The Italian Supreme Court’s decision
The Court of Cassation ruled in favour of the Garante: in order to be valid, consent must relate to a “clearly identified” data processing operation and, for that to be the case, the association should have adequately explained how the algorithm worked and which data it would use in producing the result. Interestingly, the Court of Rome, in its judgment, did not deny that the algorithm was opaque but resolved the problem by simply relying on the market to determine its reliability.
Continue reading this article on Panetta.net