
LEN.IA



Ethical Assessment for Trustworthy AI



The Ethical Assessment for Trustworthy AI, organized by the LEN.IA - AI & Digital Ethics Lab, brings together multidisciplinary teams to provide ethical assessments of digital intelligence systems. The independent workshops, held in Montreal, Quebec, Canada, allow experts in computer science, engineering, philosophy, ethics, social sciences, law, and other disciplines relevant to the cases, together with stakeholders, to conduct a reflexive analysis of the ethical issues raised by these systems and to ensure that they are trustworthy.


The workshops draw on the international expertise developed around the Z-Inspection® initiative, which is itself based on the European Commission's Ethics Guidelines for Trustworthy AI.


To request an Ethical Assessment for Trustworthy AI, please contact us.



LIST OF MEMBERS
ETHICAL ASSESSMENT FOR TRUSTWORTHY AI



- Frédérick Bruneault, lead researcher and partner

- Andréane Sabourin Laflamme, researcher and partner

- Roberto V. Zicari, advisor



AFFILIATED LABS



The Laboratory for Trustworthy AI at Arcada University of Applied Sciences (Helsinki, Finland)


The Ethical and Trustworthy AI Lab at Illinois Institute of Technology’s Center for the Study of Ethics in the Professions (Chicago, USA)


Trustworthy AI Lab Venice at Venice Urban Lab (Venice, Italy)


Trustworthy AI Lab at the University of Copenhagen (Copenhagen, Denmark)


The Trustworthy AI Lab at the L3S Research Center, Leibniz University Hannover (Hannover, Germany)


The Laboratory for Ethical and Trustworthy AI in Practice at the Swinburne University of Technology Sarawak Campus (Sarawak, Malaysia)


Trustworthy AI Lab at The Center for Bioethics and Research (CBR) (Ibadan, Nigeria)


Trustworthy AI Lab at the CIRSFID, Alma Mater Research Center for Human-Centered Artificial Intelligence, University of Bologna (Bologna, Italy)


Trustworthy AI Lab at the Imaging Lab, University of Pisa (Pisa, Italy)


Trustworthy AI Lab at the Goethe University Frankfurt (Frankfurt, Germany)



PUBLICATIONS
Z-Inspection®



Assessing trustworthy AI in times of COVID-19. Deep learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. IEEE Transactions on Technology and Society, July 2022.


How to assess trustworthy AI in practice. arXiv preprint arXiv:2206.09887, June 2022.


To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. PLOS Digit Health 1(2), February 2022.


On assessing trustworthy AI in healthcare. Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Front. Hum. Dyn., July 2021.


Co-design of a trustworthy AI system in healthcare: Deep learning based skin lesion classifier. Front. Hum. Dyn., July 2021.


Z-Inspection®: A process to assess trustworthy AI. IEEE Transactions on Technology and Society, June 2021.


TO ACCESS THE COMPLETE LIST, VISIT

z-inspection.org/publications/



TO LEARN MORE ABOUT Z-Inspection®, VISIT z-inspection.org





Z-Inspection® is a registered trademark. This work is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.



Contact


info@lenia.net

OUR SOCIAL MEDIA



(Re)thinking the ethics of AI and digital transformation



LEN.IA, s.e.n.c.

Montréal, Québec, Canada