Webinar 3: Trustworthy AI: provability, accountability, understandability, and their role in ethics guidelines (Nov. 25, 2021)
A constant urge to create trustworthy tools and cooperation is part of the human condition. It is little wonder, then, that we strive to maintain trust and intellectual control over the machines we create. But given their growing complexity and deep embedding in human society, is this realistic, or is it an example of the vanity of human wishes? How do we justify the double standard between the explainability of decisions made by humans and those made by AIs?
We examine a three-tier approach to trustworthy AI: (1) a mathematical approach based on logical and probabilistic provability, (2) a legal approach grounded in accountability and responsibility, and (3) an ethical engineering approach aiming for transparency and understandability throughout development. We discuss their role in ethics guidelines to support comprehensive workflows for nurturing human-compatible, trustworthy AI systems.
- Péter ANTAL, PhD, associate professor, head of the Artificial Intelligence Group, head of the Computational Biology Laboratory
Department of Measurement and Information Systems (MIT), Faculty of Electrical Engineering and Informatics (VIK), Budapest University of Technology and Economics (BME)
- Mihály HÉDER, PhD, habilitated associate professor, head of department
Dept. for Philosophy and History of Science (FTT), Faculty of Economic and Social Sciences (GTK), Budapest University of Technology and Economics (BME)
Moderator: Tarry Singh, CEO, deepkapha AI Lab & Real AI B.V., The Netherlands
Watch on LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:6868881724290433024/