Description
The Netherlands is no stranger to using algorithmic profiling to enforce social security legislation. In recent years, however, it has become increasingly clear, both in the literature and in practice, that the use of such technologies also carries risks of discrimination. Against this backdrop, the question arises to what extent the Dutch legislator's attempt to regulate the use of algorithmic systems, for example through the Data Processing by Cooperation Act, is sufficient to govern the discrimination risks associated with these systems, and how this regulatory approach can be improved. To this end, I describe three cases of AI application in the social domain: SyRI, the Childcare Benefits Scandal, and the detection of welfare fraud in Rotterdam. In discussing these cases, I consider how administrative bodies use the room that regulation leaves them and identify the consequences. Together, the cases show how governmental organizations use algorithmic systems and what discriminatory consequences can occur when their use is insufficiently regulated. A comparative analysis of the three cases yields a normative framework that the "governance" of AI should meet. Finally, I examine the extent to which the recent Cooperation Act satisfies these requirements and offer recommendations for improvement.
Period | 6-Jul-2023 |
---|---|
Event title | VSR Jaarcongres: The next generation |
Event type | Conference |
Location | Amsterdam, Netherlands |
Degree of Recognition | National |