The General Data Protection Regulation and Automated Decision-making: Will it deliver?

Author: (not listed)
Publication Date: January 2019


In algorithmic decision-making (ADM) systems, machines evaluate and assess human beings and, on this basis, make a decision, provide a forecast, or recommend a course of action. The underlying data processing and the decisions it delivers therefore pose risks to those affected. On the one hand, individual rights are at stake: informational self-determination (the core objective of data protection), personality rights, and individual autonomy. On the other hand, group-related and societal interests are at stake: fairness, non-discrimination, social inclusion, and pluralism.

In order to safeguard these interests, experts have suggested measures that make ADM processes transparent, individual decisions explainable and revisable, and the systems themselves verifiable and rectifiable. Ensuring a diversity of ADM systems can further contribute to protecting these interests.

Against this background, the present report focuses on the following question: To what extent can the EU General Data Protection Regulation (GDPR) and the new German Federal Data Protection Act (BDSG), both of which took effect in May 2018, support such measures and protect the interests threatened by algorithmic systems? The analysis demonstrates that the scope of applicability of Article 22 GDPR with respect to ADM systems is quite restricted. In the few cases where the ADM-specific provisions apply, they can to some extent create transparency and verifiability and thus help safeguard individual rights. With regard to group-related and societal goals such as non-discrimination and social inclusion, however, the GDPR has little to offer. Discussing complementary regulatory tools beyond the GDPR is therefore necessary.
