This post is the second installment in our Study Insights series, where we share lessons learned directly from the field. If you missed the first post, “Finding Real Solutions by Listening to Patients: Lessons from a Fuchs’ Dystrophy Study,” it explored how patient feedback can drive recruitment breakthroughs.
In this post, we dive into another rare disease study where early recruitment metrics looked promising, but deeper insights revealed a very different story. For more lessons from the field, visit the full Study Insights series.

In clinical trial recruitment, especially for rare diseases, data often leads the way. We monitor conversion rates, optimize cost per sign-up, and celebrate every qualified patient. But what happens when the numbers look good… and still lead us astray?
That’s exactly what we faced during a recent rare disease study. On the surface, everything appeared promising: strong ad engagement, a screener that was passing patients through, and early metrics that suggested success. But something wasn’t adding up.
The Problem Beneath the Surface
Our Patient Success Coordinators (PSCs), who speak with patients daily, started to notice a pattern. While patients were technically passing the screener, many weren’t actually eligible. They had the condition, they met general requirements, but they didn’t meet all the strict inclusion criteria outlined in the protocol.
In short, we were seeing false positives. The data said these patients were a fit, but in practice, very few advanced to randomization.

Turning Data Into Truth
Instead of rushing to rewrite ads or shift targeting, we paused. We dug into patient conversations, revisited the study protocol, and worked closely with our PSCs to pinpoint where confusion was happening and how we could fix it.
We discovered the screener wasn’t asking the right questions in the right way. It was well-intentioned but misaligned with how patients actually describe their symptoms and histories.
So we reworked it, refining the language, clarifying key points, and designing it to better reflect real patient experiences. The result was a screener that could more accurately validate patients in real time, before a conversation with our team even began.
At first, it looked like performance was slowing. But that dip in volume came with a dramatic increase in patient quality.
Smarter Inputs, Smarter Algorithms
Once we began sending cleaner, more accurate signals, our campaigns improved. The refined screener enabled us to feed better data into our digital platforms, helping algorithms target the right patients, not just those who were close.
Rather than training the system on a mix of eligible and borderline candidates, we were now showing it exactly what a “truly eligible” patient looked like.
Within weeks, we saw a measurable increase in patient quality, and this time we knew the numbers were telling the truth.
Why the Screener Matters More Than You Think
When recruitment falls behind, the standard playbook is:
- Refresh the creative
- Adjust targeting
- Expand outreach
- Test new platforms
All of these are valid. But one of the most overlooked levers is the screener itself:
- Is it clear and easy to complete?
- Does it align with how patients speak?
- Does it reflect what the protocol truly requires?
If the screener is off, the entire funnel risks collapse—even if the rest of your strategy is flawless.
A Human-First Lens
At Leapcure, we believe data tells a story, but it doesn’t tell the whole story. That’s why we center our approach not only on digital signals, but on human conversations. Our PSCs catch the nuances that algorithms can’t, bridging the gap between intent and eligibility.
By combining digital precision with human insight, we don’t just fill recruitment funnels; we refine them.
Because when the data doesn’t tell the truth, the answer isn’t louder signals. It’s asking better questions and truly listening to the answers.