Conversational AI · UX Design · LLM Integration · Accessibility

How a rule-based chatbot learned to improvise

Over two years at KBC, I designed conversations for Kate — the digital assistant inside Belgium's best-rated banking app. My focus: business and third-party integrations. The challenge that defined the project: how do you blend predictable, rule-based conversations with unpredictable LLM-generated answers — without the user noticing the seam?

  • 186 conversations designed
  • 109 knowledge improvements
  • 42 proactive conversations

The context

Kate has been part of KBC Mobile for years — a rule-based assistant that handles around 21,500 questions per day. To put that in perspective: that's roughly the capacity of the AFAS Dome, Belgium's largest arena. When I joined, she could recognize what users were asking and route them through predefined conversation flows. It worked well for straightforward banking queries.

But for third-party integrations — things like parking payments, mobility services, and external partner features — rigid scripts weren't enough. Users asked questions in ways we couldn't fully predict, about services with documentation that changed frequently. That's where the opportunity was.

The design challenge

The existing recognition layer — the part that understands what you're saying — couldn't be changed. It was the foundation everything else ran on. So whatever we built had to work within that architecture.

When someone typed "how do I pay for parking?", the rule-based system recognized the intent and identified it as a third-party topic. From there, we fed the relevant documentation from KBC's website into an LLM to generate a contextual answer. The flow went from deterministic to probabilistic mid-conversation — and the user shouldn't feel the shift.

That was the hard part. A rule-based response is the same every time. An LLM response isn't. Designing a conversation that moves smoothly from one to the other — without feeling broken or inconsistent — meant rethinking how we structured transitions, managed expectations, and handled edge cases where the LLM might not have a good answer.

Transparency by design

We also had to get the ethics right. The EU AI Act requires transparency about AI use: users need to know when they're reading something generated by a machine versus something written by a human.

I designed a custom text bubble specifically for LLM-generated answers — visually distinct from Kate's standard responses. It made clear that this answer was AI-generated, gave users context about where the information came from, and maintained trust in a space where people are making financial decisions. Transparency wasn't an afterthought — it was a core design constraint.

My role

I was the conversational designer responsible for business and third-party integration conversations. Within that scope, I was the first designer to explore how LLMs could be integrated into Kate's rule-based architecture.

Day to day, that meant designing conversation flows, writing and improving knowledge bases, building proactive conversations that reached out to users at the right moment, and creating business dashboards to track how conversations were performing.

The results

  • World's best banking app — SIA Partners, 2024
  • Best digital assistant in Belgium — out of 65 compared, Campfire AI, 2023 & 2025
  • Highest rating for design & accessibility — Campfire AI, 2024

What I learned

The biggest lesson: the best AI experiences aren't about the AI. They're about the transitions — the moments where the system shifts from one mode to another. Get those right, and users don't even notice. Get them wrong, and the whole thing feels broken.

I gave a talk at UX Beers about this exact transition — from rule-based to LLM-powered conversations, and what it means for designers working with AI.