Author (Person) | Palka, Przemyslaw |
---|---|
Publisher | European University Institute (EUI) |
Series Title | LAW Working Papers |
Series Details | No 11, 2018 |
Publication Date | 01/01/2018 |
ISSN | 1725-6739 |
Content Type | Journal, Series, Blog |
Abstract | Business is using artificial intelligence in essentially all sectors of the economy: machine learning is employed to generate knowledge out of big (consumer) data, to optimize processes and to undertake new, previously impossible tasks. This might come with benefits for consumers and society, but it definitely poses numerous challenges. In this short note, we provide an overview of challenges for EU consumer law and policy stemming from the business’s use of AI, and sketch a plan for action. We argue that AI will need to be tackled case-by-case, bottom-up, though with the big picture in mind. It needs to be tackled soon, but we do need to take our time to reconsider the assumptions that have been challenged, and not rush to political conclusions. Moreover, we argue that the role of law is not just to minimize the risks, but also to enable developments in consumer-empowering AI tools. These tools, however, will not be given to us by business. Civil society must take action and fight for them. We cluster the challenges and takeaways by the type of actors that are affected by the business’s use of AI. Consumers face the risk of undue influence on their behavior (by targeted personalized commercial practices), exclusion from access to goods and services (ad delivery & price discrimination) and lower quality of information and services in the interaction with artificial agents. Regulators need to revise their governance toolbox, taking into account the specificity of AI’s operations (stealth infringement, widespread-by-minor damage, automation of reasoning). Also, regulation needs to strike the correct balance between specific cases and the bigger picture, and between commanding and enabling. In EU consumer law, the concepts of unfair commercial practices and unfair contractual terms should be revisited, to take into account the reality of business using AI. In addition, we should consider the usefulness of adopting special data protection rules to supplement the GDPR, by stating what purposes of data processing are lawful in what markets. Civil society should strive to seize the opportunities of AI in the medium-term, making the best use of the existing legal instruments (UCPD, UCTD, GDPR) in the short-term, and lobby for societal and legal change in the long-term. Finally, academia, in particular legal scholars, must reconsider their role in the debate on AI governance: they should ground their research in empirical findings, acknowledge the limitations of sectoral knowledge and remedy such limitations by engaging in an interdisciplinary and multi-stakeholder dialogue. We argue that the competitive advantage of scholars goes beyond offering concrete policy recommendations. Instead, it concerns a critical reflection on the ways in which the mass deployment of AI challenges the basic assumptions and presuppositions of the existing legal and regulatory theory and practice. |
Source Link | http://hdl.handle.net/1814/57485 |
Subject Categories | Business and Industry, Culture, Education and Research |
Countries / Regions | Europe |