Who bears the risk when artificial intelligence, such as ChatGPT, makes mistakes that cause damage? Under current law, if the humans involved have acted carefully, no one is liable.
To close this serious liability gap, Anna Beckers and Gunther Teubner outline three legal liability regimes, drawing on findings from sociology as well as moral and technical philosophy: principal-agent liability for the actions of autonomous software agents ("actants"), network liability for condensed human-AI interactions ("hybrids"), and fund-based compensation for networked AI systems ("swarms").
Author Anna Beckers talks to Christian Dunker (Geistesblüten) about a pioneering solution to a highly topical problem.
Anna Beckers is Professor of Private Law and Social Theory at Maastricht University.
Price information:
Admission: 8 €, reduced: 5 €, members: 3 €