This is a fictional but representative example of a Tell Me Why analysis report. Every real report follows this structure — built from your actual interview recording, CV, and job description.
The job description mentions "stakeholder management" nine times and lists it as an essential requirement. When asked to describe a time they managed conflicting stakeholder priorities, [REDACTED] gave a vague answer about "keeping everyone in the loop" with no specific outcome, timeline, or resolution framework.
When asked how they decide which features to prioritise, [REDACTED] described an intuition-based approach. The role explicitly requires "data-informed product decisions" and "A/B testing experience." Not a single metric, experiment, or quantitative framework was mentioned across the entire interview.
The closing question about preferred working style produced language that signalled individual autonomy ("I work best when given ownership and left to execute"), while the company's careers page and JD both emphasise "deeply collaborative," "pair programming culture," and "shared ownership." The interviewer followed up twice on this point, a signal that the mismatch registered as a concern.
Answers consistently stayed at the tactical execution level. When the interviewer asked about product vision for a new market segment, [REDACTED] immediately jumped to feature ideas rather than framing the market opportunity, user needs, or competitive landscape. For a senior PM role, the interviewer needed evidence of operating at the strategic layer.
When discussing API design and system architecture trade-offs, [REDACTED] demonstrated genuine depth. The explanation of event-driven architecture and its trade-offs against the REST approach was articulate and well-structured. This was the strongest section of the interview and likely kept the candidacy alive longer than the other signals would suggest.
The answer about the ████████ product launch included concrete metrics (40% adoption in the first quarter, a 15-point NPS improvement) and a clear timeline of the rollout phases. This was the only answer that fully satisfied the STAR framework the interviewer was looking for.
The candidate's interest in the company's market segment came across as authentic. The interviewer noted this positively in their follow-up question, which suggests it was a genuine differentiator against other candidates.
Each weak answer below is paired with a stronger alternative, rewritten using experience verified against the candidate's own CV. This is the core of the Tell Me Why report.
"I usually try to keep stakeholders informed through regular updates and make sure everyone is aligned on the priorities. I find that if you communicate well, most conflicts resolve themselves."
No specific situation, no named stakeholders, no measurable outcome. This answer tells the interviewer nothing about how the candidate actually handles conflict.
"At ████████, I managed the Q3 roadmap where our VP of Product wanted to prioritise enterprise features while the Head of Growth needed self-serve onboarding improvements. I ran a structured prioritisation exercise using RICE scoring with both stakeholders, presented the data to the leadership team, and we agreed on a phased approach: self-serve onboarding in weeks 1-4 for quick growth wins, enterprise features from week 5. The result was a 28% increase in self-serve activation and the enterprise pipeline stayed on track for the Q4 launch."
Led cross-functional prioritisation at ████████ (2022-2024), managing competing stakeholder interests across product, growth, and engineering.
"I rely a lot on customer feedback and my own product sense. After years in this space, you develop an intuition for what users need. I also talk to the sales team regularly to understand what customers are asking for."
No mention of data, experimentation, or any quantitative framework. For a role that explicitly requires "data-informed product decisions," this is a disqualifying answer.
"I use a layered approach. First, I look at quantitative signals — product analytics from ████████, funnel drop-off data, and support ticket clustering. Then I validate with qualitative research: I ran 12 user interviews last quarter to pressure-test our assumptions on the checkout flow. For prioritisation, I use a weighted scoring model that factors in reach, effort, confidence, and strategic alignment. At ████████, this framework helped us increase feature ROI by 35% year-over-year because we stopped building things that tested well in surveys but didn't move retention metrics."
Ran the product analytics and experimentation programme at ████████, with 20+ A/B tests per quarter.
"I think you should build a mobile app, improve the reporting dashboard, and maybe add some AI features. Users really want better notifications too."
A feature wish list, not a product vision. No market framing, no user segmentation, no strategic narrative. Senior PM candidates need to demonstrate they think beyond the backlog.
"Looking at ████████'s position in the market, I see three strategic bets for the next two years. First, the mid-market segment is underserved — your competitors focus on enterprise, but the fastest growth is in 200-500 seat companies. I would invest in self-serve onboarding and usage-based pricing to capture that. Second, your platform data is a moat: building intelligence features on top of aggregated usage patterns would create switching costs that pure feature competition cannot. Third, I would establish a partner ecosystem — at ████████ I built an integration marketplace that drove 22% of new revenue within 18 months. Each of these maps to a clear hypothesis we can validate within one quarter."
Built integration marketplace at ████████ driving 22% incremental revenue; led market expansion from enterprise into mid-market segment.
Answers were generally understandable but lacked the structured delivery expected at this seniority. Several responses meandered before reaching the point, and two answers exceeded three minutes without a clear framework. Confidence was inconsistent — strong on technical topics, noticeably weaker on behavioural questions.