The rise of artificial intelligence confronts us not only with technological questions but also with fundamental ethical ones. How do we ensure that AI systems operate fairly, transparently, and humanely? Who is responsible when an algorithm discriminates, causes harm, or makes opaque decisions? The topic file "Ethics and Responsibility" focuses on these questions.
We examine, among other things:
- The ethical dilemmas that arise in the design and application of AI, such as the risk of bias, exclusion, restrictions on autonomy, and unequal power relations.
- The responsibility of developers, organizations, and governments in safeguarding human-centered AI.
- Relevant guidelines and frameworks, such as the AI Act (with an emphasis on risk assessment and transparency), the GDPR (AVG), the Ethics Guidelines for Trustworthy AI from the European High-Level Expert Group on AI, and the Dutch government's framework for trustworthy algorithms.
- Practical examples of ethical review, citizen participation in AI policy, and the application of ethical impact assessments, such as the IAMA (Fundamental Rights and Algorithms Impact Assessment).
- Debates about accountability, human oversight, and the allocation of responsibility in complex AI chains.
We also pay attention to current developments, such as the use of AI in public services, the monitoring of algorithms, and the development of ethical codes of conduct by companies and institutions.