Hamed Ayoobi
As an Assistant Professor in AI for Health at the University Medical Center Groningen (UMCG), I conduct research at the intersection of machine learning, explainable AI (XAI), and clinical data science, developing trustworthy and interpretable AI systems for healthcare. My work advances explainability and responsible-AI techniques for deep learning models, including CNNs, large language models (LLMs), multimodal architectures, and computer vision systems (among them 3D and robotic vision), to improve diagnostic support, medical decision-making, and scientific insight across diverse medical data.
Building on my prior research in argumentation-based learning, prototype-driven interpretability, and open-ended perception, I design transparent AI methods that strengthen clinician trust and support the safe, accountable deployment of AI in clinical practice. I collaborate closely with clinicians to translate AI innovations into real-world medical applications, including orthopedics and brain health.
In addition to research, I contribute to graduate supervision and teaching in AI and machine learning, and support interdisciplinary initiatives shaping the future of responsible AI in health.
Previously, I was a postdoctoral research associate at Imperial College London, where I developed deep expertise in generative AI (LLMs, VLMs, LMMs), explainable AI (XAI), computer vision (medical imaging and robotic vision), and retrieval-augmented generation (RAG).
My work aims to make AI systems more transparent, understandable, and trustworthy, particularly through prototypical and argumentative approaches. I am passionate about bridging the gap between complex AI models and human comprehension, and about contributing to safer and more reliable AI deployments.