Building a GenAI-Powered Conversational Recommender for Travel Insurance

The Problem

As my boyfriend and I planned our dream trip to Asia, I quickly realized how frustrating the search for the right travel insurance could be. We both wanted a plan that matched our unique needs without spending hours deciphering legalese in lengthy policy PDFs.

Unfortunately, that’s exactly what most insurance platforms offer: static filters and key-benefit highlights without the fine print. To really understand a plan, you have to dig through long, detailed policy documents full of technical jargon. Even trying to speak with a human agent proved difficult: I scheduled an online appointment, only to miss the call because of a time zone mix-up. The entire process felt outdated and unnecessarily painful.

Ironically, I had just spent the past year working on GenAI use cases at a leading insurance company. That experience got me thinking:

What if travelers could simply chat with a smart assistant—anytime, anywhere—describe their trip and health needs, and instantly receive a personalized insurance recommendation, no fine print decoding required?

The Solution

To answer that question, I built a Conversational Recommender:

  • It chats naturally with a traveler about their trip and needs;
  • It retrieves plan information only when necessary;
  • It offers a clear recommendation with an explanation;
  • It answers any follow-up questions.

The diagram below illustrates the core architecture:

Chatbot Architecture Diagram
  1. The traveler initiates a chat by describing themselves and their trip.
  2. The agent processes the input and decides whether to:
    • Respond directly, or
    • Query the vector database for available insurance plans.
  3. If the agent decides to retrieve plan info, it formulates the search query itself and fetches the most relevant plans.
  4. The LLM generates a response with a recommendation and explanation in plain English.
  5. The chat can continue, with the traveler asking clarifying questions or adjusting preferences (steps 2–3 are sketched in code right after this list).
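
To make steps 2–3 concrete, here is a minimal sketch of how the plan search could be exposed as a tool the agent can call. The names (`search_plans`, `plan_snippets`) and the sample plan text are illustrative assumptions, not the project’s actual data:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.tools import tool
from langchain_huggingface import HuggingFaceEmbeddings

# Hypothetical setup: embed a few plan summaries into a FAISS index.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
plan_snippets = [
    "Plan A: emergency medical up to $100,000; trip cancellation; adventure sports rider.",
    "Plan B: budget option; medical up to $25,000; no cover for pre-existing conditions.",
]
vector_store = FAISS.from_texts(plan_snippets, embeddings)

@tool
def search_plans(query: str) -> str:
    """Look up travel insurance plans relevant to the traveler's needs."""
    # The agent composes `query` itself from the conversation so far (step 3).
    docs = vector_store.similarity_search(query, k=2)
    return "\n\n".join(doc.page_content for doc in docs)
```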

This isn’t just a basic chatbot. It uses LangGraph, which brings memory, control flow, and state to LLM-powered apps. Unlike traditional RAG pipelines that fetch data on every turn, this agentic RAG approach lets the model decide when retrieval is actually helpful (see the sketch after this list). This makes the system:

  • More efficient (no unnecessary retrieval)
  • More flexible (adaptable multi-turn dialogues)
  • More human-like (because the LLM thinks before it acts)
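
Here is a minimal sketch, assuming the `search_plans` tool from the previous snippet, of how that decide-then-act loop can be wired in LangGraph. The node names and the use of the prebuilt `tools_condition` router are my assumptions about one reasonable implementation:

```python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

# The model either answers directly or emits a tool call to search_plans.
llm = ChatOpenAI(model="gpt-3.5-turbo").bind_tools([search_plans])

def agent(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode([search_plans]))
builder.add_edge(START, "agent")
# Route to retrieval only when the model asked for it; otherwise end the turn.
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")

# The checkpointer gives each conversation thread persistent state (memory).
app = builder.compile(checkpointer=MemorySaver())
```

With `tools_condition`, a turn the model can answer from context alone never touches the vector store, which is exactly the efficiency win listed above.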

Here's a quick look under the hood (the Gradio hookup is sketched after this list):

  • LLM: GPT-3.5-Turbo for dialogue and reasoning
  • Embeddings: the open-source all-mpnet-base-v2 model
  • Vector store: FAISS to store plan metadata and text embeddings
  • Framework: LangGraph for control logic and memory
  • UI: Gradio for a lightweight, interactive frontend
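
And a sketch of how the compiled graph could sit behind the Gradio frontend, assuming the `app` object from the LangGraph snippet above; the single fixed `thread_id` is a simplification for a one-user demo session:

```python
import gradio as gr

def respond(message, history):
    # A fixed thread id per demo; the LangGraph checkpointer supplies history.
    config = {"configurable": {"thread_id": "demo-session"}}
    result = app.invoke({"messages": [("user", message)]}, config)
    return result["messages"][-1].content

gr.ChatInterface(respond, title="Travel Insurance Assistant").launch()
```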

Final Thoughts

You can check out the entire codebase on my GitHub. The project is modular and easy to build on—I’m already planning improvements like persisting memory per user and tool logging.

This prototype is just one example of how GenAI, when combined with smart retrieval and control mechanisms, can turn complex, frustrating decisions into smooth, intuitive user experiences. Thanks to frameworks like LangGraph, it's now faster than ever to go from idea to working MVP.