Beginner · Product Management

Product Management Fundamentals

Learn the core responsibilities of a product manager — from discovery and user research to roadmapping, prioritisation, and stakeholder alignment.

40 min read · 7 sections
1. What a Product Manager Actually Does (Day to Day)

The "CEO of the product" title gets thrown around a lot. It's misleading. A CEO has authority over people and budgets. A PM has authority over... nothing, really. You can't force engineering to build your feature. You can't fire the designer who disagrees with your mockup. You can't mandate that sales stop making promises the product can't deliver.

What you can do is influence, align, and prioritise. And that turns out to be harder — and more valuable — than direct authority.

A typical week involves:

  • Reviewing user feedback and support tickets to understand recurring pain points
  • Meeting with engineering to discuss the technical feasibility of upcoming features
  • Updating the product roadmap based on shifting priorities
  • Running a stakeholder meeting to align everyone on what's coming next quarter
  • Writing a product brief for the feature that just got prioritised

The role spans three domains:

  • Business viability — Will this make money? Does it serve our strategy?
  • User desirability — Do people actually want this? Will it solve their problem?
  • Technical feasibility — Can we build it? How long will it take? What trade-offs exist?

Your job is to find the sweet spot where all three overlap. Lean too far toward business and you build features nobody uses. Lean too far toward users and you build things that don't generate revenue. Lean too far toward engineering and you end up with technically elegant solutions to the wrong problems.

2. Discovery: Finding Problems Worth Solving

The most expensive mistake in product development is building the wrong thing. Discovery is how you avoid it — by understanding user problems deeply before committing engineering resources to solutions.

The research methods that actually work:

  • User interviews — Talk to 5–8 users. Ask open-ended questions about their workflow, frustrations, and goals. Don't ask "Would you use feature X?" (they'll say yes to be polite). Ask "Walk me through how you solved this problem last week."
  • Support ticket analysis — Your support team is sitting on a goldmine of user feedback. Categorise the top complaint types by frequency and severity.
  • Product analytics — Where do users drop off? Which features have high adoption vs low? Where do people click that doesn't go anywhere? Tools like Mixpanel, Amplitude, or even Google Analytics reveal behaviour that interviews miss.
  • Competitive analysis — Not to copy competitors, but to understand their positioning gaps. Where are their users complaining? That's your opportunity.

One framework that changed how I approach discovery: Jobs to be Done (JTBD). Instead of asking "what features do users want," ask "what job are they hiring our product to do?" A drill buyer isn't buying a drill — they're buying a hole in the wall. This reframes everything around outcomes, not solutions.

3. Product Strategy: The "Why Behind the What"

Strategy is the least understood and most important part of the PM role. Without it, you're just a feature factory — building things because someone asked, not because they advance a coherent plan.

A product strategy has four layers:

  1. Vision — Where are we going? (2–5 year horizon) "Every business decision backed by reliable data, accessible to everyone — not just analysts."
  2. Strategy — How will we get there? "We'll win by making data exploration so simple that non-technical users choose us over spreadsheets."
  3. Goals — What will we accomplish this quarter? "Increase weekly active users by 30%. Reduce time-to-first-insight from 45 minutes to under 10."
  4. Initiatives — What will we build to hit those goals? "Guided onboarding flow. Natural language query interface. Pre-built dashboard templates."

Notice how each layer narrows the scope. Vision is broad and aspirational. Initiatives are specific and buildable. The strategy is what connects them.

Your North Star Metric should capture the value your product delivers. For Spotify, it might be "time spent listening." For Slack, "messages sent per user per day." For a B2B analytics tool, "reports generated per week." Find the one metric that, if it improves, means everything else is probably working too.

4. Prioritisation: The Art of Saying No

You will always have more ideas than capacity. Always. The PM's hardest job isn't deciding what to build — it's deciding what not to build and defending that decision to disappointed stakeholders.

Frameworks help, but none of them are perfect. Use them as starting points, not final answers:

  • RICE — Score each idea on Reach (how many users), Impact (how much value per user), Confidence (how sure are you), and Effort (how much work). RICE = (R x I x C) / E. Good for comparing a long backlog of options.
  • Value vs Effort matrix — A 2x2 grid. Top-left (high value, low effort) is your sweet spot. Bottom-right (low value, high effort) is your graveyard.
  • MoSCoW — Must have, Should have, Could have, Won't have. Best for scoping a specific release.
  • Opportunity scoring — Ask users to rate both the importance of a problem and their satisfaction with current solutions. High importance + low satisfaction = highest opportunity.
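To make the RICE arithmetic concrete, here is a minimal sketch of scoring and ranking a backlog. The backlog items, scores, and scales below are invented for illustration — they are not from any real product.

```python
# Minimal RICE scoring sketch. All backlog items and scores
# below are hypothetical examples, purely for illustration.

def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

backlog = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-weeks)
    ("Guided onboarding flow",   5000, 2.0, 0.8, 6),
    ("Dark mode",                8000, 0.5, 0.9, 4),
    ("CSV export",               1200, 1.0, 1.0, 2),
    ("Natural language queries", 3000, 3.0, 0.5, 12),
]

ranked = sorted(backlog, key=lambda item: rice_score(*item[1:]), reverse=True)

for name, r, i, c, e in ranked:
    print(f"{name:28s} RICE = {rice_score(r, i, c, e):7.1f}")
```

Note how the ranking can cut against intuition: the high-impact natural language feature lands last because low confidence and high effort drag it down. That is the point of the exercise — the score starts the conversation, it doesn't end it.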

Here's my honest take after years of product work: no framework replaces judgment. Data and frameworks get you to a shortlist. Judgment — informed by strategy, context, and customer empathy — makes the final call.

And the hardest part: saying no to your boss's pet feature. Do it diplomatically but firmly. "That's a valid idea. Here's what we'd need to drop from the roadmap to accommodate it. Are you comfortable with that trade-off?" Make the cost visible.

5. Roadmapping Without Setting Traps

Product roadmaps are communication tools, not contracts. But stakeholders treat them like contracts. This tension is the source of endless PM frustration.

The three roadmap styles, and when to use each:

  • Now-Next-Later — No dates, just priority buckets. "Now: improving checkout conversion. Next: subscription billing. Later: international expansion." Best for fast-moving teams where priorities shift frequently.
  • Outcome-based timeline — Quarters mapped to objectives, not features. "Q2: Reduce onboarding drop-off by 40%. Q3: Launch self-serve analytics." Gives stakeholders time context without feature-level commitments.
  • Feature timeline — Specific features with target dates. Use only for committed projects with hard deadlines (regulatory requirements, contractual obligations). Not for everything.

Tips I've learned the hard way:

  • Near-term items (0–6 weeks) should be detailed and specific. Far-term items (3+ months) should be themes and outcomes. Committing to feature details six months out is fantasy.
  • Maintain different roadmap views for different audiences. Engineering needs task-level detail. Executives need strategic themes. Sales needs customer-facing commitments.
  • Update monthly at minimum. A stale roadmap is worse than no roadmap — it creates false expectations.

6. Working with Engineering and Design

The PM-Engineering-Design triad is where product gets built. The quality of this relationship directly determines the quality of your product.

Rules that have served me well:

  • Share the problem, not the solution. "Users are abandoning checkout at the payment step" is better than "Build a one-click checkout button." Engineers and designers often come up with better solutions than whatever you had in mind.
  • Involve engineering early. Before you promise anything to stakeholders, check with your tech lead. "Is this feasible? What's the rough effort? Are there technical risks?" saves you from making commitments you can't keep.
  • Respect estimates. If engineering says a feature will take six weeks, don't negotiate it down to three because your deadline demands it. Ask instead: "What can we deliver in three weeks that would still solve the core problem?"
  • Be available during implementation. Questions come up constantly once coding starts. The PM who responds in ten minutes prevents two days of wrong-direction work.

The best PM-engineering relationships I've seen share a key trait: mutual respect for each other's expertise. You're the expert on what to build and why. Engineering is the expert on how and how long. Neither role is more important.

7. Measuring Success: Metrics That Actually Drive Decisions

There's a difference between tracking metrics and using them. Many product teams have dashboards full of numbers that nobody acts on. The goal is to identify the few metrics that directly inform your next decision.

The Pirate Metrics framework (AARRR) covers the full user lifecycle:

  • Acquisition — How do users find you? (Traffic, sign-ups, install rate)
  • Activation — Do they have a good first experience? (Completed onboarding, first "aha moment")
  • Retention — Do they come back? (DAU/MAU ratio, week-1 retention, churn rate)
  • Revenue — Do they pay? (Conversion rate, ARPU, LTV)
  • Referral — Do they tell others? (NPS, viral coefficient, referral rate)

Pick one or two from each category. More than that and you'll have too much data and not enough insight.
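To show how one of these retention metrics is actually computed, here is a minimal sketch of week-1 retention from a raw event log. The user IDs, dates, and the 1–7 day window are hypothetical choices for illustration; real definitions vary by team and tool.

```python
# Sketch: week-1 retention from a toy event log.
# All user IDs and dates below are hypothetical.
from datetime import date, timedelta

# (user_id, activity_date)
events = [
    ("u1", date(2024, 3, 1)), ("u1", date(2024, 3, 6)),
    ("u2", date(2024, 3, 1)),
    ("u3", date(2024, 3, 2)), ("u3", date(2024, 3, 8)),
    ("u4", date(2024, 3, 3)), ("u4", date(2024, 3, 3)),
]

def week1_retention(events):
    """Share of users active again 1-7 days after their first day."""
    first_seen = {}
    for user, day in events:
        first_seen[user] = min(first_seen.get(user, day), day)
    retained = {
        user for user, day in events
        if timedelta(days=1) <= day - first_seen[user] <= timedelta(days=7)
    }
    return len(retained) / len(first_seen)

print(f"Week-1 retention: {week1_retention(events):.0%}")  # 2 of 4 users return
```

Note the definitional choice buried in the code: a user who is active twice on their first day (like u4) does not count as retained. Decisions like this are why two dashboards can report different "retention" for the same product.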

The metric that matters most varies by product stage:

  • Pre-product-market fit: focus on retention. If people aren't coming back, nothing else matters.
  • Post-product-market fit: focus on growth and revenue. You've proven the product works; now scale it.

One anti-pattern I see constantly: celebrating vanity metrics. "We got 50,000 sign-ups this month!" Great — how many actually used the product more than once? One activated user who comes back weekly is worth a hundred sign-ups that churn after a day.
