AI versus integrity in cardiology

Presented by: Prof. Folkert Asselbergs, Amsterdam UMC, Netherlands
Conference: HFA 2025
Cardiologists face both new opportunities and new responsibilities in the era of artificial intelligence (AI). Because AI is set to disrupt healthcare, the cardiovascular community is urged to take the lead in developing clinically validated, safe, equitable, and effective applications.

Prof. Folkert Asselbergs (Amsterdam UMC, Netherlands) offered a range of real-world examples demonstrating AI’s transformative potential, including automated imaging analysis, structured note summarisation, personalised risk prediction, and conversational support tools [1]. One of the most striking moments of the session came when Prof. Asselbergs introduced a live, AI-powered digital avatar of himself that spoke fluent Serbian on stage.

“This avatar is trained on my voice and likeness,” he explained, before activating it in real time. The avatar, which could speak in over 140 languages, answered questions using both verbal and visual cues and adjusted its responses based on the user’s literacy and educational background. “It can talk to a medical professor in one conversation and explain the same topic to a 12-year-old in the next,” he said. “It’s available 24/7 to provide patient education, explain procedures, or reinforce treatment plans, especially valuable in chronic care management.”

Prof. Asselbergs framed the avatar not as a replacement for the clinician-patient relationship, but as an extension of it. “You establish the connection and trust in the clinic. But after the appointment, your avatar can continue the conversation at home: answering questions, correcting misinformation, and reinforcing key messages.”

Yet this promise also comes with significant caveats. “We must remain aware that every interaction with generative AI, including tools like ChatGPT, may involve data being stored, analysed, and reused,” he cautioned. “This raises fundamental questions about patient consent, data protection, and the ethics of digital care.”

One solution, he argued, is building ‘computable’ clinical guidelines: AI-ready resources that translate consensus-based recommendations into structured, machine-readable formats. “Our European Society of Cardiology (ESC) guidelines are the ground truth. But they’re written for humans. To ensure AI supports, rather than overrides, medical reasoning, we must encode those guidelines directly into the tools we use.”
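
To make the idea of a computable guideline concrete, the sketch below shows how a single recommendation might be encoded so that software can evaluate it against structured patient data. It is illustrative only: the rule ID, thresholds, and data model are hypothetical and not taken from any ESC guideline.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Patient:
    lvef_percent: float  # left ventricular ejection fraction (%)
    nyha_class: int      # NYHA functional class, 1-4

@dataclass
class Rule:
    rule_id: str
    condition: Callable[[Patient], bool]  # machine-evaluable criterion
    recommendation: str
    evidence_class: str                   # consensus grading, e.g. "I", "IIa"

# One encoded rule; a real resource would hold many, versioned per guideline.
# Threshold and wording are placeholders, not clinical advice.
RULES = [
    Rule(
        rule_id="HF-EXAMPLE-001",
        condition=lambda p: p.lvef_percent <= 40 and p.nyha_class >= 2,
        recommendation="Flag for guideline-directed medical therapy review.",
        evidence_class="I",
    ),
]

def evaluate(patient: Patient) -> list[str]:
    """Return recommendations whose structured conditions the patient meets."""
    return [r.recommendation for r in RULES if r.condition(patient)]

print(evaluate(Patient(lvef_percent=35, nyha_class=3)))
```

Encoded this way, an AI tool can check its suggestions against explicit, versioned rules rather than paraphrasing free text, keeping the guideline, not the model, as the ground truth.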

Prof. Asselbergs also referenced the ESC’s experimental chatbot, trained on the ESC guideline library. “We’re testing an internal chatbot that allows clinicians to type in natural language questions and get guideline-based answers. It’s fast, intuitive, and already helping to make evidence-based recommendations more accessible, especially in time-sensitive clinical environments.”
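
The ESC has not published how this chatbot is built. As a conceptual sketch only, guideline Q&A tools of this kind typically retrieve the passages most relevant to a question and answer from those passages. The minimal retriever below uses naive word overlap over placeholder snippets; all snippet text is hypothetical, and a production system would use embedding-based search with a language model on top.

```python
# Placeholder guideline snippets; a real system indexes the full guideline library.
GUIDELINE_SNIPPETS = [
    ("diagnosis", "Placeholder text on the diagnostic work-up of suspected heart failure."),
    ("therapy", "Placeholder text on pharmacological therapy for heart failure."),
    ("follow-up", "Placeholder text on follow-up and remote monitoring after therapy changes."),
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank snippets by word overlap with the question
    (a naive stand-in for embedding-based retrieval)."""
    q_words = set(question.lower().split())
    scored = sorted(
        GUIDELINE_SNIPPETS,
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

# The answering model would then be constrained to these retrieved passages.
print(retrieve("what follow-up is recommended after a therapy change"))
```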

He closed with a challenge to the audience: “If we want AI to reflect our values, our training, and our standards of care, then we must stay in the driving seat. That means education, collaboration, and yes, regulation. Because if we don’t lead this transformation, tech companies will, and we risk losing not only clinical control but the foundation of patient trust.”

  1. Asselbergs F, et al. Clinical application of artificial intelligence in heart failure: real-world data. Session: Artificial intelligence for detection and follow-up of heart failure: hype or hope? Heart Failure 2025, 17 May 2025, Belgrade, Serbia.

Medical writing support was provided by Dr Rachel Giles.

Brought to you by Pfizer

PP-VYN-NLD-0443