Is the GDPR ready to meet Machine Learning challenges?

The recent wave of enthusiasm for machine learning and algorithmic decision-making has its origins in the Turing Test, introduced by English mathematician, computer scientist, logician and cryptanalyst Alan Mathison Turing in 1950. The Turing Test is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Nowadays, cloud computing technologies offer inexpensive, scalable machine learning services and tools, with a particular focus on data mining and other types of predictive analysis. With the continuing growth of Internet usage and the online exchange of personal data, data subjects often have little or no clear knowledge of what data controllers do with their personal data. On the one hand, the law imposes strict requirements that service providers must abide by. On the other, service providers in practice establish their data processing practices through privacy policies that are often long and complex, making it difficult for end users to identify potential misuses of their personal data.

Legal Framework and Machine Learning from an EU Perspective

Regulation of automated decision-making was explicitly addressed in the 1995 Data Protection Directive (DPD). The 2016 General Data Protection Regulation (GDPR) extends individual protection not only to profiling of data subjects, but also to other forms of automated processing.

Article 22 and Recital 71 of the GDPR appear to be broader in scope than the corresponding Article 15 of the DPD: the GDPR covers “a decision based solely on automated processing, including profiling”, whereas the DPD covers only “a decision … which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc”. Article 22(1) of the GDPR thus gives the data subject the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them. Such personal data should only be collected for specified, explicit and legitimate purposes, and subsequent processing that is incompatible with those purposes is not permitted. Machine learning is data driven, typically involving existing data sets and live data streams in complex training and deployment workflows. Such a dynamic process faces several challenges under the GDPR.

In terms of lawfulness, Article 22(2) of the GDPR provides specific exceptions to the prohibition on automated decision-making, including contractual necessity and consent. Additionally, Article 22(3) requires the data controller to implement suitable measures to safeguard the data subject's rights (e.g. the right to contest the controller's decision). In reality, though, how can a data subject give valid consent to a process that may lack transparency? And will data controllers need to obtain separate consents for different situations, particularly in medical, financial or employment contexts?

With reference to fairness, machine learning processes may rely on biased data in automated decision-making. Algorithms trained on incomplete or unrepresentative data may learn spurious correlations that result in unjustifiable decisions. For example, profiling based on postal codes or magazine subscriptions may in effect amount to selection based on race or gender, as the short sketch below illustrates. From the data subject's perspective, how can we ensure they are properly informed and receive meaningful information about the logic behind automated decision-making? Should data controllers and processors disclose the full code of their algorithms and technical details? Probably not; a non-technical, explanatory description is more appropriate. However, it remains questionable whether data subjects may nevertheless ask data controllers to disclose detailed technical descriptions of their algorithms, and whether such descriptions are protected as trade secrets.
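To make the proxy-variable problem concrete, here is a minimal, self-contained Python sketch. It uses entirely synthetic data, and the names involved (group, postcode, credit_decision) are hypothetical; the point is only to show how a scoring rule that never reads a protected attribute can still reproduce it through a correlated feature such as a postal code.

import random

random.seed(0)

# Synthetic population: group membership (the protected attribute) correlates
# strongly with postal code, reflecting, say, residential segregation.
people = []
for _ in range(10000):
    group = random.choice(["A", "B"])
    if group == "A":
        postcode = 1000 if random.random() < 0.9 else 2000
    else:
        postcode = 2000 if random.random() < 0.9 else 1000
    people.append({"group": group, "postcode": postcode})

def credit_decision(person):
    # A "blind" scoring rule: it uses only the postcode, never the group.
    return "approve" if person["postcode"] == 1000 else "reject"

# Measure approval rates per group to expose the indirect selection.
for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    approved = sum(credit_decision(p) == "approve" for p in members)
    print("group", g, "approval rate:", round(approved / len(members), 2))

Running the sketch yields approval rates of roughly 90% for group A and 10% for group B: a large disparity produced without the decision rule ever reading the protected attribute. Real machine learning models can absorb such proxies in exactly the same way, only less visibly.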

Verdict

Is the GDPR ready to face the challenges of automated decision-making? It is unclear whether the growing use of algorithms will increase inequality and threaten democracy, or whether the anticipated benefits of automated decision-making will outweigh the potential harms. Moreover, it should not be forgotten that human decision-making is often influenced by bias, both conscious and unconscious. This suggests an appealing possibility: it may one day be feasible to use an algorithmic process to demonstrate the lawfulness, fairness and transparency of decisions, whether made by humans or machines, more effectively than human review of the decision in question. Indeed, in some contexts it may already make sense to replace the current model, whereby individuals can appeal to a human against a machine's decision, with the reverse model, whereby individuals would have the right to appeal to a machine against a decision made by a human.

About the Author: Marcel Hajd is a fully qualified Slovenian lawyer with several years of experience and an international background. He specialises in domestic and cross-border debt recovery proceedings, as well as litigation. He has been involved in several projects advising legal tech start-ups, and he has an enduring passion for technology and the impact of Artificial Intelligence on legal practice.
