The Smallest Possible Health Model
Three inputs beat twelve: activation state, usage recency/depth, and key feature adoption. Start with equal weights. If ROC AUC against churn is ~0.5, your inputs are wrong, not your math. Fix the signals first.
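A minimal sketch of that model, assuming the three signals are already normalized to [0, 1]; the input names and the pairwise AUC helper are illustrative, not a prescribed implementation:

```python
def health_score(activation, recency_depth, feature_adoption):
    """Equal-weight average of three normalized behavior signals, in [0, 1]."""
    return (activation + recency_depth + feature_adoption) / 3

def auc(scores, churned):
    """ROC AUC via pairwise comparison: the probability that a retained
    account outscores a churned one. ~0.5 means the inputs carry no signal."""
    pos = [s for s, c in zip(scores, churned) if not c]  # retained accounts
    neg = [s for s, c in zip(scores, churned) if c]      # churned accounts
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

If `auc(scores, churned)` hovers near 0.5 on historical churn, swap the inputs before reaching for weight tuning or fancier models.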
Playbooks without outcomes turn into activity reports. Start with 2–3 customer outcomes you can measure (time-to-first-value, usage depth, key feature adoption). Then write plays that move those, and instrument the deltas. If a play can’t be tied to a metric next week, it’s not ready.
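One way to instrument those deltas, sketched under assumptions: each play names exactly one outcome metric, and the `Play` structure and field names here are hypothetical, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class Play:
    name: str
    metric: str       # the outcome metric this play claims to move
    baseline: float   # metric reading when the play started
    current: float    # latest reading

    def delta(self):
        """The movement this play can actually claim."""
        return self.current - self.baseline

def report(plays):
    """Turn an activity list into an outcome report: play -> (metric, delta)."""
    return {p.name: (p.metric, p.delta()) for p in plays}
```

A play that can't be expressed in this shape, with a real metric and a baseline, is the one that isn't ready.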
Involuntary churn is failure to collect (payment issues, expired cards). Voluntary churn is a decision (no value, no budget, switching). The fixes differ. Involuntary churn is mostly ops and billing hygiene. Voluntary churn is product–market fit, onboarding, and value communications. Separate the streams in your reporting or you’ll chase the wrong problems.
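Separating the streams can be as simple as a reason-code split at reporting time. A sketch, where the reason codes are illustrative rather than a standard taxonomy:

```python
INVOLUNTARY = {"card_expired", "payment_failed", "billing_error"}
VOLUNTARY = {"no_value", "no_budget", "switched_vendor"}

def churn_stream(reason):
    if reason in INVOLUNTARY:
        return "involuntary"   # ops / billing-hygiene problem
    if reason in VOLUNTARY:
        return "voluntary"     # value / fit problem
    return "unclassified"      # triage before it pollutes either stream

def split_report(cancellations):
    """Count cancellations per stream so each gets its own fix."""
    counts = {"involuntary": 0, "voluntary": 0, "unclassified": 0}
    for reason in cancellations:
        counts[churn_stream(reason)] += 1
    return counts
```

The `unclassified` bucket matters: uncoded cancellations dumped into either stream will quietly skew whichever fix you prioritize.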
Churn is a lagging indicator. By the time a customer cancels, the problem started months earlier, usually as low engagement or stalled adoption. If you’re reacting to churn risk at renewal time, you’re already behind. The simplest formula I’ve been able to validate for keeping customers:

Retention = Experience + Outcomes(Adoption(Engagement))

Work…
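The nesting in that formula can be read as function composition; a sketch, not the author’s implementation, with the link functions supplied by the caller and all scales assumed to be [0, 1]:

```python
def retention(experience, engagement, adoption, outcomes):
    """Retention = Experience + Outcomes(Adoption(Engagement)).

    `adoption` and `outcomes` are caller-supplied mappings; the nesting is
    the point: engagement is the innermost lever, so it moves first and
    its effects surface in retention last.
    """
    return experience + outcomes(adoption(engagement))
```

That ordering is why churn lags: an engagement drop has to propagate through adoption and outcomes before it shows up as a cancellation.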
For most SaaS, three checkpoints prevent long-tail pain:
– Technical fit verified (auth, data in, integrations stable)
– First value achieved (the “aha” the buyer actually cares about)
– Owner named (who runs it day-to-day)
If any checkpoint fails, pause expansion plays and fix the root cause. Onboarding is leverage; don’t step over it.
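The three checkpoints above amount to a simple gate; a sketch with hypothetical parameter names, not a prescribed data model:

```python
def onboarding_gate(technical_fit, first_value, owner_named):
    """Return the list of failed checkpoints; expansion plays should
    stay paused until the list is empty."""
    failed = []
    if not technical_fit:
        failed.append("technical fit")
    if not first_value:
        failed.append("first value")
    if not owner_named:
        failed.append("owner named")
    return failed

def expansion_allowed(technical_fit, first_value, owner_named):
    """Expansion is allowed only when every checkpoint has passed."""
    return not onboarding_gate(technical_fit, first_value, owner_named)
```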
Keep risk reviews to three questions: what moved this account’s health since last review, what’s the next intervention, and what evidence says it will work. Ban status theater. If we can’t state the risk and the counterfactual clearly, log an assumption and test it.
Health scores work when they are legible and predictive. Keep the inputs few, stable, and behavior-based (e.g., activation milestones met, usage depth/recency, key feature adoption). Write the attribution rules down so you can explain changes. If you can’t predict churn or expansion better than chance, your model is a vanity metric; fix the inputs before tuning the weights.
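“Write the attribution rules down” can mean literally this: with a linear, equal-weight model, every score change decomposes exactly into per-input contributions. A sketch with illustrative input names and the equal starting weights assumed above:

```python
WEIGHTS = {"activation": 1 / 3, "usage": 1 / 3, "adoption": 1 / 3}

def attribute_change(before, after):
    """Map each input to its contribution to the score delta.

    Because the score is a weighted sum, the contributions add up
    exactly to score(after) - score(before), so every change in an
    account's health is explainable input by input.
    """
    return {k: WEIGHTS[k] * (after[k] - before[k]) for k in WEIGHTS}
```

If your model is too complex for this kind of exact decomposition, legibility has already been traded away, usually for no predictive gain.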