The rapid integration of artificial intelligence into our society brings not only technological innovations but also profound ethical and societal questions. How do we ensure that AI systems operate fairly, transparently and in the service of people? And who bears responsibility when algorithms discriminate, cause harm or make decisions that are inscrutable to citizens? In the theme dossier Ethics and Responsibility we examine these challenges from legal, technological and societal perspectives.
We highlight, among other things:
The ethical dilemmas in designing, training and deploying AI, including bias, exclusion, loss of autonomy and new power relations.
The role and liability of developers, organizations and governments in human-centered and responsible AI.
Current standards and frameworks, such as the European AI Act (focusing on risk assessment, transparency and governance), the GDPR, the ethics framework of the High-Level Expert Group on AI, and national guidelines for trustworthy algorithms.
Concrete cases of ethical review, citizen participation in AI policy, and the application of tools such as the Impact Assessment for Human Rights and Algorithms (IAMA).
Discussions about accountability, human oversight and the assignment of responsibility within complex AI chains.
Recent developments around AI in public services, algorithm oversight and the creation of ethical codes of conduct in the private and public sectors.
This dossier offers a current, critical and hands-on look at how we as a society deal with both the power and the risks of AI, and at the choices that must be made now to keep the technology human-centered, safe and responsible.