Prof. Folkert Asselbergs (Amsterdam UMC, Netherlands) offered a range of real-world examples demonstrating AI's transformative potential, including automated imaging analysis, structured note summarisation, personalised risk prediction, and conversational support tools [1]. One of the most striking moments of the session was when Prof. Asselbergs introduced a live, AI-powered digital avatar of himself, speaking fluent Serbian on stage.
“This avatar is trained on my voice and likeness,” he explained, before activating it in real time. The avatar, which could speak in over 140 languages, answered questions using both verbal and visual cues and adjusted its responses based on the user’s literacy and educational background. “It can talk to a medical professor in one conversation and explain the same topic to a 12-year-old in the next,” he said. “It’s available 24/7 to provide patient education, explain procedures, or reinforce treatment plans, especially valuable in chronic care management.”
Prof. Asselbergs framed the avatar not as a replacement for the clinician-patient relationship, but as an extension of it. “You establish the connection and trust in the clinic. But after the appointment, your avatar can continue the conversation at home: answering questions, correcting misinformation, and reinforcing key messages.”
Yet this promise also comes with significant caveats. “We must remain aware that every interaction with generative AI, including tools like ChatGPT, may involve data being stored, analysed, and reused,” he cautioned. “This raises fundamental questions about patient consent, data protection, and the ethics of digital care.”
One solution, he argued, is building ‘computable’ clinical guidelines: AI-ready resources that translate consensus-based recommendations into structured, machine-readable formats. “Our European Society of Cardiology (ESC) guidelines are the ground truth. But they’re written for humans. To ensure AI supports, rather than overrides, medical reasoning, we must encode those guidelines directly into the tools we use.”
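To make the idea of a computable guideline concrete, the minimal sketch below shows one way a recommendation could be expressed as a machine-readable rule over structured patient data. The rule, thresholds, field names, and recommendation text are hypothetical illustrations, not an encoding of any actual ESC guideline content.

```python
# Minimal sketch of a machine-readable guideline rule.
# All clinical content here is illustrative only.

from dataclasses import dataclass


@dataclass
class Patient:
    lvef: float         # left ventricular ejection fraction, %
    nyha_class: int     # NYHA functional class, 1-4


@dataclass
class Recommendation:
    rule_id: str
    text: str
    evidence_class: str  # e.g. "I" or "IIa" in guideline terminology


def evaluate(patient: Patient) -> list[Recommendation]:
    """Apply encoded rules to structured patient data and return
    the recommendations whose conditions are met."""
    recs = []
    # Hypothetical rule: reduced ejection fraction plus symptoms
    # triggers a treatment recommendation.
    if patient.lvef <= 40 and patient.nyha_class >= 2:
        recs.append(Recommendation(
            rule_id="HF-EXAMPLE-001",
            text="Consider guideline-directed medical therapy for HFrEF.",
            evidence_class="I",
        ))
    return recs


print(evaluate(Patient(lvef=35, nyha_class=2)))
```

Encoding rules this way keeps the guideline, rather than the model, as the source of truth: an AI tool can call the rule engine and cite the matched rule, instead of generating a recommendation on its own.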
Prof. Asselbergs also referenced the ESC’s experimental chatbot, trained on the ESC guideline library. “We’re testing an internal chatbot that allows clinicians to type in natural language questions and get guideline-based answers. It’s fast, intuitive, and already helping to make evidence-based recommendations more accessible, especially in time-sensitive clinical environments.”
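The internals of the ESC chatbot were not described in the session. As a rough illustration of the general approach, the sketch below retrieves the most relevant passage from a guideline corpus for a free-text question using simple word overlap; the passages are placeholders, and a production system would typically use embeddings and a language model on top of retrieval.

```python
# Minimal retrieval sketch: return the guideline passage with the
# greatest word overlap with the question. Illustrative only; the
# passages below are placeholders, not real guideline text.

corpus = [
    "Placeholder passage about diagnosis of heart failure.",
    "Placeholder passage about pharmacological therapy.",
    "Placeholder passage about device therapy and follow-up.",
]


def answer(question: str) -> str:
    q_words = set(question.lower().split())

    def overlap(passage: str) -> int:
        return len(q_words & set(passage.lower().split()))

    return max(corpus, key=overlap)


print(answer("What pharmacological therapy is recommended?"))
```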
He closed with a challenge to the audience: “If we want AI to reflect our values, our training, and our standards of care, then we must stay in the driving seat. That means education, collaboration, and yes, regulation. Because if we don’t lead this transformation, tech companies will, and we risk losing not only clinical control but the foundation of patient trust.”
1. Asselbergs F, et al. Clinical application of artificial intelligence in heart failure: real-world data. Session: Artificial intelligence for detection and follow-up of heart failure: hype or hope? Heart Failure 2025, 17 May 2025, Belgrade, Serbia.
Medical writing support was provided by Dr Rachel Giles.
Brought to you by Pfizer
PP-VYN-NLD-0443