Purchasely Blog

Behavioral Science Boosts Headspace Course Starts by Over 100%

Written by Vanina Nguimbi | May 7, 2026 3:10:04 PM

A recap of the April 2026 webinar with Irrational Labs, Headspace, and Purchasely, exploring a behavioral science-backed onboarding flow designed for the leading meditation app.

TL;DR


1. Asking questions creates perceived fit - even when the answer is the same

A short personalization quiz during onboarding boosted course starts by 7.6 percentage points over simply showing users the same default course without questions. Users don't just want a good recommendation - they want to feel seen. The act of asking signals that the app is listening, and that feeling of tailored attention is itself a driver of action.

2. Implementation intentions help close the intention-action gap

Getting users to make a concrete plan - when they'll meditate, where, and what will trigger it - drove a 7.5% increase in total app opens and a 4% lift in unique return days during the trial period. Helping users form specific "When X happens, I will do Y" plans is one of the most reliable tools in behavioral science, and it works in mobile onboarding too.

3. Optimizing the front door has a ceiling

Despite doubling course starts and increasing app returns, active meditation days didn't move. Better onboarding gets people in the door and more engaged with features - but for demanding habits like meditation, behavioral design needs to extend well beyond onboarding, with feedback loops, return nudges, and continued scaffolding throughout the entire user journey.

The Full Story

The Problem: Getting People to Meditate is Hard

Headspace came to Irrational Labs with a deceptively simple-sounding challenge: could a behaviorally informed onboarding flow turn more free-trial users into active meditators?

The stakes were concrete. Internal Headspace data showed that users with more active days during the 14-day free trial were significantly more likely to convert to paid subscribers. Matching users to a specific course had previously increased engagement by 20%. And qualitative research - user journals - showed that new meditators consistently wanted clearer next steps and a more tailored experience.

The diagnosis pointed to three root causes of early disengagement:

  • Unclear mental model: New users didn't know what to do, how long to practice, or how often to open the app

  • Choice overload: The Today tab presented so many options that users felt overwhelmed rather than guided

  • Lack of return prompts: Life gets busy, and without concrete reminders, it's easy to forget

The Experiment: Five Onboarding Flows, Randomly Assigned

Working together across roughly eight months, Irrational Labs designed the behavioral hypotheses and experiment variants, Headspace provided user data and product context, and Purchasely made it technically possible - deploying five complex onboarding variants simultaneously without a single engineering sprint.

The five conditions tested:

  1. Control: the existing onboarding flow, a 1-minute breathing exercise, then the Today tab

  2. Default to Basics: the same control flow, but users were offered the Basics course immediately afterward

  3. Perceived Fit: users completed a personalization quiz, then received the same Basics course as in condition 2

  4. Personalized Fit: users completed the quiz and were matched to a course based on their responses

  5. Precommitment: users completed the quiz, received a course recommendation, and made a concrete plan for when, where, and how often they'd meditate

For each group, the team measured app opens, course starts, and active meditation/sleep session minutes.
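To make the setup concrete, here is a minimal sketch of how users might be split deterministically across the five arms. The function names and experiment key are illustrative assumptions, not Purchasely's actual API; the point is that hashing a user ID (rather than picking at random on each session) keeps every user in the same arm for the whole trial.

```python
import hashlib

# Illustrative variant keys for the five onboarding arms described above.
VARIANTS = [
    "control",
    "default_to_basics",
    "perceived_fit",
    "personalized_fit",
    "precommitment",
]

def assign_variant(user_id: str, experiment: str = "onboarding_test") -> str:
    """Deterministically bucket a user into one of the five arms.

    Hashing keeps assignment stable across app opens, so a user
    always sees the same onboarding flow during the trial.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

print(assign_variant("user-42"))
```

Because the bucket depends only on the user ID and experiment key, metrics like app opens and course starts can be attributed to the correct condition long after assignment.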

Purchasely’s Role: Speed as a Competitive Advantage

Under a traditional engineering workflow, each variant would have required its own ticket, weeks of development, and coordination across multiple app releases. A 5-arm study with complex branching logic involving 60+ screens would have taken months - and the behavioral hypotheses most likely would have been cut for scope before ever being tested.

With Purchasely's no-code solution, the team designed, deployed, and iterated on all five flows in days, without waiting for app releases.

The implementation had to meet three requirements:

1. UI and UX quality: Even in an experiment, every screen had to meet Headspace's brand standards. A degraded or glitchy experience wouldn't just reflect badly on Headspace - it would actively contaminate the experiment results, making it impossible to know whether a variant underperformed because of the behavioral design or because of a bad user experience. Purchasely's native rendering technology meant the experimental flows looked and felt identical to the rest of the app, with no visual seams between the onboarding screens and the product itself.

2. Data collection and user preferences: Several of the onboarding flows asked users questions - about their mindfulness experience, their goals, how much time they had, where they'd be using the app. That data needed to be captured accurately and fed into the branching logic in real time, so users were routed to the right course recommendation. Getting this right was non-negotiable: a failure in data collection would have broken the personalization conditions entirely.

3. Analytics and experiment integrity: With five variants running simultaneously, having clean, reliable analytics was essential. The team needed to be able to attribute every course start, every app open, and every meditation session back to the correct condition. Purchasely's built-in analytics integration ensured that even with the added complexity of branching flows, the data piped through correctly to assess which variant was working.
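The branching logic described above can be pictured as a simple routing function: quiz answers in, course recommendation out. The quiz fields and course names below are hypothetical stand-ins, not Headspace's actual catalog or data model.

```python
# Hypothetical sketch of quiz-driven routing: answers captured during
# onboarding determine which course the user is offered.
def recommend_course(answers: dict) -> str:
    goal = answers.get("goal")
    experience = answers.get("experience")
    if experience == "none":
        return "Basics"            # beginners start with foundations
    if goal == "sleep":
        return "Sleep Sounds"
    if goal in ("stress", "anxiety"):
        return "Managing Stress"
    return "Basics"                # safe default when answers are missing

print(recommend_course({"goal": "stress", "experience": "some"}))
```

Note the fallback: as the team stresses, a failure in data capture must degrade gracefully to a sensible default rather than break the flow.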

With these challenges solved at the platform level, the team could focus on the behavioral design itself rather than the mechanics of shipping it.

The Results: Three Layers

Result 1: Course starts more than doubled

In the control group, 31% of new users started a course during their trial. In the best-performing condition - Perceived Fit - that number jumped to 63%. All four treatment arms beat control. Even the simplest intervention (Default to Basics, which just surfaced a course recommendation without any quiz) drove starts to 55.3%.

The key behavioral insight: asking questions created perceived fit even when the underlying recommendation was identical. Users who took a short quiz before receiving the Basics course started it at significantly higher rates than users who received the same course with no quiz. As Karl Purcell, Senior Behavioral Scientist at Irrational Labs, explained: "People want to know that this app is for me. You can do that by asking some questions, even if you don't have sophisticated personalization going on underneath the hood."
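For readers curious how a 31% → 63% jump is judged statistically significant, here is a standard pooled two-proportion z-test. The per-arm sample size of 5,000 is an illustrative assumption (the webinar did not share exact counts); the formula itself is textbook.

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Pooled two-proportion z-statistic for p2 vs p1."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled success rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p2 - p1) / se

# Illustrative sample sizes, not figures from the study.
z = two_proportion_z(0.31, 5000, 0.63, 5000)
print(z > 1.96)  # 1.96 is the two-sided 5% significance threshold
```

With any realistic sample size, a 32-point absolute lift dwarfs the 1.96 threshold, which is why results like this clear significance easily.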

Result 2: Precommitment brought people back

The Precommitment condition - where users articulated a specific plan for when and where they'd meditate - drove a 7.5% increase in total app opens and a 4% lift in unique days users returned to the app. Both lifts were statistically significant. The mechanism is a well-established concept in behavioral science: implementation intentions. When people form detailed, cue-linked plans ("When I finish my morning coffee, I will open Headspace"), they're significantly more likely to follow through on their stated intentions.

Result 3: Active meditation days didn't move

Despite the strong gains in course starts and app opens, none of the four treatment arms produced a statistically significant increase in active meditation days compared to control. This is an honest and important result, and one the team led with rather than buried.

Karl Purcell of Irrational Labs offered three hypotheses for the gap between course starts and active meditation days:

  • Lost in the app: A "return modal" that would have pointed users back to their in-progress course on every app open was cut during implementation. Without it, users who came back may have landed on the Today tab, faced with the same choice overload that caused friction in the first place.

  • Too much too soon: Control users received a 1-minute breathing exercise - light, accessible, close to what most beginners expect. Treatment users were matched to longer guided courses (5-20 minutes). That step-change in commitment may have been too abrupt for brand-new meditators.

  • Mismatched content: The personalization logic over-weighted sleep content based on popularity data, when stress and anxiety were actually the top self-reported user needs. And sleep content - typically used at bedtime - is a poor fit for someone signing up at noon.


Five Learnings for Product and Behavior Design

The session closed with five takeaways applicable well beyond meditation apps:

  1. Onboarding is a high-motivation moment: Users are at peak intent right after downloading. Use that window to anchor the behaviors that matter most.

  2. Asking creates idiosyncratic fit: A short quiz makes a generic recommendation feel personally chosen - even when it isn't.

  3. Beating your best beginner content is hard: Personalization is only valuable when the matching logic is good. Getting it wrong is easier than getting it right.

  4. Implementation intentions help bring people back: Asking users to plan when, where, and how they'll engage significantly increases return rates.

  5. Front-door design has a ceiling: For demanding habits, behavioral scaffolding needs to run throughout the entire user journey - not just the first three screens.

Featured Speakers:

Meagan Sievers: Lead Product Manager, Headspace

Karl Purcell: Senior Behavioral Scientist, Irrational Labs

Laurent Libano: Chief Revenue Officer, Purchasely