Reimagining Trip Planning with AI

The Tripadvisor AI Assistant (Ollie) was envisioned as a smarter, more human companion for every stage of the travel journey, from dreaming and planning to exploring on the go. Designed to deliver personalized, location-aware recommendations backed by community insights, it aimed to make travel feel effortless and inspiring. To bring this vision to life, our team embarked on a multi-round research journey: surveying, testing, iterating, and refining the experience through real user feedback. What started as an ambitious idea quickly evolved into a data-driven, human-centered product poised to redefine how travelers connect with the world.

Role: Lead UX Researcher


Methodologies: Mixed methods, combining CSAT surveys, usability tests, and moderated concept-testing interviews

Duration: 9 months


Collaborators: Design, Product, and Engineering


Goal: Evaluate the AI Assistant’s usability, accessibility, and overall experience by measuring user satisfaction, ease of use, information reliability, and response quality (accuracy, relevance, clarity, and helpfulness) to inform iterative design improvements and product refinement.

Laying the Groundwork: Understanding the User Experience

The first few CSAT survey rounds were designed to establish a baseline for understanding user satisfaction with the AI Assistant on web, assess overall product performance, and identify gaps in the experience. Additionally, we aimed to uncover the primary use cases for the product.

Survey Details:

  • Tool: Sprig

  • Quota: 400 users per survey

  • Study type: Intercept survey via Sprig on the Tripadvisor website

First Results

Early insights showed mixed user satisfaction, with clear and fast responses but persistent issues around helpfulness and relevance. Ease of use dipped slightly due to technical bugs, which emerged as a key driver of dissatisfaction. Our team worked closely with Product and Engineering to investigate and address these issues.

Making Travel Guidance Truly Inclusive

Accessibility was a core focus of this project. To understand how travelers with mobility disabilities use the AI Assistant chatbot and where its responses fell short, I designed a targeted survey focused on accessibility-related needs and expectations.

The intercept survey, deployed via Sprig on the Tripadvisor website, recruited 49 travelers with mobility disabilities. The findings surfaced critical accessibility gaps and directly informed improvements to response quality, feature prioritization, and accessibility support across the travel planning journey.

Key takeaway: Travelers with mobility disabilities are eager to use the AI chatbot for personalized, accessibility-first guidance, but express concerns around limited human support, AI reliability, and data privacy. Their most common needs center on accessible accommodations, transportation, routes, and activities.

Evolving Through Every Round

I ran a total of four CSAT survey rounds on web, using insights from each to continuously refine the AI Assistant experience, from design and user interface to response quality and technical performance. As we repeated the surveys and tracked improvements over time, we began to see more positive and exciting results driven by the changes informed by user feedback.

Satisfaction with the product reached its highest point in the latest rounds, while dissatisfaction dropped to its lowest.

The results continually helped the team identify and track which areas of the product were performing well and which needed improvement. From there, we brainstormed strategic ways to address key pain points, collaborating closely with Product, Design, and Engineering to turn insights into actionable improvements that elevated the overall AI Assistant experience.

The surveys also played a key role in helping us track user needs and primary use cases, and how they evolved over the nine months of research. This ongoing tracking gave us a clear view of how users’ expectations and behaviors changed as the AI Assistant matured, allowing the team to adapt the product strategy and continue improving the overall experience.

Where the Numbers Led Us

With survey results showing strong satisfaction overall, our next step was to dig beneath the surface. Some users had hinted at frustrations tied to bugs, so we set out to uncover what was really happening. I designed and led a usability test of the AI Assistant across both the Tripadvisor website and mobile web, zeroing in on those hidden friction points that could make or break the experience.

Study Details:

  • Tool: Dscout

  • Quota: 10 participants

  • Participants: All traveler segments (recruited from Tripadvisor’s internal user panel)

  • Study type: Unmoderated usability test

[Screenshots: Mobile web and Desktop]

🧠 What we learned

Users immediately recognized the AI Assistant via the floating button and found it easy to use. They especially valued clear maps, scannable lists, and the chat memory, which made the experience feel seamless and personal, though minor mobile bugs occasionally disrupted flow.

Key friction points emerged in navigation: users often confused text responses with the list view and overlooked one of the Assistant’s most valuable features. Similarly, expectations around starting a new chat varied, revealing opportunities to clarify entry points and make the experience more intuitive.

💡 Opportunities

  • Surface core information from the list view earlier within the AI text response.

  • Clarify and enhance the process to start new chats, either via improved icon guidance or by allowing direct user prompts.

  • Further investigate platform-specific bugs (e.g., error messages, chat memory issues) that disrupt user experience.

With insights from the usability test and CSAT surveys in hand, I teamed up with design to refine the experience and with engineering to uncover and fix the bugs behind user frustrations. Together, we turned feedback into action, shaping a smoother, more reliable AI Assistant experience.

Real-World Impact

As the AI Assistant grew across Tripadvisor’s platform, research became its compass. The usability test and each round of the CSAT survey told a new chapter of the product’s evolution, uncovering pain points, validating wins, and shaping smarter iterations. With every wave of feedback, the Assistant became sharper, faster, and more helpful. The impact of research wasn’t just seen in reports; it was reflected in the numbers, the performance, and the way users engaged. Some of the most notable outcomes included:

  • Overall satisfaction rose from 39% to 55%, with “helpful responses” cited more often as the top reason (27% → 49%)

  • Dissatisfaction dropped from 43% to 32%

  • Ease of use improved from 58% to 72%, while reported difficulty fell from 20% to 11%

  • 69% of users are “very likely” or “likely” to use the AI Assistant again

  • The share of users upvoting responses rose significantly post-launch (69% → 82%)

  • +48% lift in return rate to the AI Assistant (8% vs. 5%), indicating stronger engagement from users entering via the persistent FAB (floating action button)
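A note on reading these figures: percentage-point changes (e.g., satisfaction rising 39% → 55%) and relative lifts (a change expressed as a fraction of the baseline rate) are different quantities. A minimal sketch of both calculations, using the rounded figures above purely as illustrative inputs (lifts computed on rounded rates can differ from those computed on the underlying unrounded data):

```python
# Helpers for expressing survey-metric changes two ways.
# Illustrative only; the reported figures above are rounded.

def pct_point_change(before: float, after: float) -> float:
    """Absolute change in percentage points (e.g., 39% -> 55% is +16 pts)."""
    return after - before

def relative_lift(before: float, after: float) -> float:
    """Relative change, as a percent of the baseline rate."""
    return (after - before) / before * 100

# Satisfaction rose 16 percentage points (39% -> 55%).
print(pct_point_change(39, 55))        # 16
# Upvote rate 69% -> 82% is roughly a 19% relative increase.
print(round(relative_lift(69, 82)))    # 19
```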

Transitioning to the App Experience

Insights from multiple web studies revealed strong demand for the AI Assistant chatbot on mobile, making app expansion our next key milestone.

I partnered closely with Design to shape the first app concepts and aligned with Product before validating the direction through research. I led a moderated concept testing study to gather deep qualitative feedback, testing multiple design variations to understand user expectations and mental models around entry points, response formats, chat management, and map and list views.

The study clarified not just which concepts worked best, but why, enabling confident, user-informed decisions before moving into development.

Study Details

  • Tool: Great Question (recruited from our own participant panel of 300+ users)

  • Participants: All traveler segments, including travelers with mobility disabilities

  • Quota: 10 participants

  • Study type: Moderated interviews

  • Duration: 1 hour each session

Design Refined by User Insights

User testing surfaced clear opportunities to refine the in-app experience. The floating action button quickly stood out as the preferred entry point: intuitive, accessible, and seamlessly integrated into users’ browsing flow. Participants appreciated having both list and map views to explore responses, but craved richer interactivity, from filtering and zooming to saving points directly on the map. Subtle language choices also proved impactful: terms like “Sources” and “Community” needed to be redefined for greater clarity and resonance. These insights guided a round of design refinements that made the AI Assistant experience feel more intuitive, dynamic, and human-centered.

Final Designs

Through iterative research, the AI Assistant evolved from a bold concept into a user-informed product shaped by real traveler needs. Months of testing across web and app, from surveys to moderated concept studies, created a strong foundation for an intuitive, inclusive, and engaging experience, with every key design decision grounded in evidence.

With designs validated through multiple research rounds, the AI Assistant is now in its final development phase and on track to launch in late 2025–early 2026, capping a nine-month, research-led effort that shaped not just usability, but how travelers meaningfully plan with AI.