“Explainable AI” gets plenty of airtime, but not enough real answers. In this session, we go beyond the buzzwords to explore how you can truly understand and influence the behavior of generative AI systems in practical, measurable ways.
We’ll cover both intuitive and research-backed strategies for interpreting model outputs and reducing risks such as hallucinations, bias, and unpredictable responses. While “prompt engineering” is often treated as a catch-all solution, we’ll show why working effectively with AI models takes the mindset of a forensic psychologist: observing, comparing, and adjusting responses across different models to guide outcomes reliably.
Through live demonstrations, applied examples, and real-world lessons from the AI research community, this session will help you shift from guesswork to grounded action in your AI projects.
You will learn:
- Key principles and dimensions of explainability in generative AI
- How to analyze and interpret model behavior through structured testing
- Techniques to steer models toward desired outputs and minimize undesirable behavior
Whether you’re building AI systems or overseeing their deployment, you’ll leave this session with tangible tools and frameworks to make explainable AI not just a theory, but a practice.