Publication

📄 Fahim et al. (2023). Creation of a theoretically rooted workbook to support implementers in the practice of knowledge translation.


Read Published Paper Here

Introduction

Knowledge Translation (KT) is the work of turning research into decisions teams can act on. In health and public sector settings, weak KT means stalled initiatives, inconsistent practice, and missed impact. This case shows how I productised KT into a simple, self-serve workbook so teams could move from evidence to first action, confidently and on time.

The Problem and Why It Mattered

What is KT guidance?

Knowledge Translation (KT) guidance refers to the frameworks, toolkits and checklists that explain how to turn research evidence into practice. It often includes step sequences, strategy menus, evidence appraisal rubrics and reporting templates intended for planners and evaluators.

What I observed

Before rollout, KT guidance was thorough but not usable in weekly planning. Teams faced three blockers: 1) no obvious first step; 2) no way to match local barriers to actionable strategies; and 3) guidance written for experts with time. The result was not a knowledge gap. It was a confidence and execution gap.

Consequences

  • Planning latency: initiatives stalled and workshops ended with notes, not actions.

  • Inconsistent practice: teams defaulted to familiar tactics rather than barrier-fit strategies.

  • Expert bottlenecks: a few facilitators became gatekeepers, creating multi-week delays.

  • Rework and waste: plans were rewritten post-review and were hard to compare across teams.

  • Reporting risk: difficult to evidence KT application consistently across programmes.

  • Equity issues: smaller or remote teams lagged without access to expert support.

  • Morale: “We don’t know where to start” led to disengagement and abandoned plans.

Problem statement

We did not need more content. We needed a product that turned evidence into decisions and first moves.

Success Criteria

Reduce time to first action, increase template completion and reuse, and create a shared planning language leaders can scale.

Project description

The KT initiative aimed to help frontline teams use evidence in practice. Existing materials were comprehensive but not actionable in day-to-day planning.

Context

🔹 Dense, academic content with no quick start
🔹 One-size-fits-all guidance that ignored local constraints
🔹 No simple way to choose strategies by barrier or outcome
🔹 Heavy reliance on expert facilitators to translate the work

Outcomes

✅ Shipped the First Steps checklist and Barrier-to-Strategy selector

✅ Delivered plain-language templates, two entry paths, and a Plan-Do-Review tracker

✅ Result: faster starts, clearer decisions, and less reliance on facilitators.


Ownership & Delivery


⦿ I owned the Barrier-to-Strategy selector, the First Steps checklist, and the Plan-Do-Review tracker

⦿ Shipped the MVP in 12 weeks with two four-week iterations; pilot delivered on time

⦿ Handed off a complete Figma spec and QA checklist adopted by engineering.

The Team

👤 PM and Research Lead (me) - Product strategy, jobs-to-be-done (JTBD) framing, MVP scope, usability testing, iteration
👤 KT Subject Matter Experts - Evidence rigour and accuracy
👤 Content Designer - Plain-language rewrite, examples and prompts
👤 Programme Leads and Pilot Teams - Real-world testing and feedback
👤 Implementation Analyst - Event schema and measurement plan

Process

This section details the step-by-step approach taken during the project: the framework and workflow audit, card sorting and content grouping, interviews and co-design, and the baseline review.

📂 Framework and Workflow Audit

I mapped current guidance against how teams actually plan.

Key friction: no obvious first next step and no way to match barriers to strategies.

🗃️ Card Sorting and Content Grouping

Remote card sorts clustered topics by participants' mental models.

Result: a modular information architecture (IA) that surfaced Start and Choose a strategy ahead of reference content.

🗣️ Interviews and Co-Design

Sessions with programme leads, clinicians and policy staff confirmed the primary gap was confidence, not comprehension. People asked for concrete examples over theory.

📊 Baseline Review

I analysed existing plans and workshop outputs to set baselines for time to first action and task completion.
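To make these baselines concrete, here is a minimal sketch of how time to first action and task completion might be computed from session records. The record shape, field names, and the use of a median are illustrative assumptions, not the study's actual data model:

```typescript
// Hypothetical planning-session record; field names are assumptions for illustration.
interface PlanningSession {
  teamId: string;
  workshopEndedAt: Date;      // when the planning workshop finished
  firstActionLoggedAt?: Date; // when the team recorded its first concrete action, if ever
  templateCompleted: boolean; // whether the planning template was finished
}

// Median days from workshop end to first logged action, ignoring teams that never started.
function timeToFirstActionDays(sessions: PlanningSession[]): number | undefined {
  const days = sessions
    .filter((s): s is PlanningSession & { firstActionLoggedAt: Date } => !!s.firstActionLoggedAt)
    .map(s => (s.firstActionLoggedAt.getTime() - s.workshopEndedAt.getTime()) / 86_400_000)
    .sort((a, b) => a - b);
  if (days.length === 0) return undefined;
  const mid = Math.floor(days.length / 2);
  return days.length % 2 ? days[mid] : (days[mid - 1] + days[mid]) / 2;
}

// Task-completion baseline: share of sessions with a finished template.
const completionRate = (sessions: PlanningSession[]): number =>
  sessions.length === 0 ? 0 : sessions.filter(s => s.templateCompleted).length / sessions.length;
```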

🧾 Research → Design Receipts

⦿ Users needed examples, not theory. We added worked examples to every template

⦿ Early overload slowed people down. We used progressive disclosure and moved dense references to sidebars

⦿ People started at different points and weren’t sure which strategy to pick. We created two entry paths and added “when to use” and risk notes to each strategy

Design & Development

We translated research into a shippable MVP by restructuring the IA, building decision-support components, and systematising templates for self-serve use.

1) Information Architecture for Action

Structured around JTBD: Start, Choose, Plan, Review. Reference material moved to supportive sidebars and appendices.

2) Decision Support Components

Built the Barrier-to-Strategy selector with quick filters, risk notes and outcome tags. Added outcome-based entry for teams starting from goals.
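A minimal sketch of the data shape such a selector implies, with quick filters over barrier and outcome tags. Every name and field here is an illustrative assumption rather than the shipped specification:

```typescript
// Illustrative strategy entry: how a barrier-to-strategy selector might tag its content.
interface Strategy {
  name: string;
  barriers: string[];   // barriers this strategy addresses, e.g. "low awareness"
  outcomes: string[];   // outcome tags, e.g. "adoption"
  whenToUse: string;    // plain-language "when to use" note shown with the strategy
  riskNotes: string;    // known risks or trade-offs
}

// Quick filters: match by barrier (barrier-led entry) or by outcome (outcome-led entry).
function selectStrategies(
  catalogue: Strategy[],
  filter: { barrier?: string; outcome?: string }
): Strategy[] {
  return catalogue.filter(s =>
    (!filter.barrier || s.barriers.includes(filter.barrier)) &&
    (!filter.outcome || s.outcomes.includes(filter.outcome))
  );
}

// Example: a team entering from the barrier "low awareness".
const catalogue: Strategy[] = [{
  name: "Educational outreach",
  barriers: ["low awareness"],
  outcomes: ["adoption"],
  whenToUse: "Early, when the evidence is unfamiliar to the team",
  riskNotes: "Effect fades without reinforcement",
}];
const matches = selectStrategies(catalogue, { barrier: "low awareness" });
```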

3) Template System and Tone Rules

Standardised inputs and added worked examples and prompts. Set plain-language tone rules so new scenarios can be added without jargon.

4) Lightweight Operating Model

Introduced the Plan-Do-Review tracker and a retrospective template: a simple cadence teams can run without a facilitator.
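One hedged way to picture the tracker's structure; field names and the fortnightly cadence are assumptions for illustration, not the shipped template:

```typescript
// Hypothetical shape of one Plan-Do-Review cycle; names are illustrative, not a shipped schema.
interface PlanDoReviewCycle {
  owner: string;                        // single named owner per cycle
  plan: { action: string; due: Date };  // the one next action, with a date
  doneNotes?: string;                   // what actually happened during "do"
  review?: { kept: string; changed: string; nextAction: string }; // retrospective prompts
}

// A fixed cadence keeps the loop running without a facilitator.
const CADENCE_DAYS = 14; // assumed fortnightly review; the real cadence is team-chosen

function nextReviewDate(cycleStart: Date): Date {
  return new Date(cycleStart.getTime() + CADENCE_DAYS * 86_400_000); // ms per day
}
```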

Testing & Validation

We pressure-tested the MVP with think-aloud sessions and task-based tests, iterating copy, flow, and states until time-to-task and comprehension improved.

🧪 Usability Tests and Think-Aloud Sessions

  • Participants completed planning tasks faster

  • They preferred the dual entry paths

  • They relied on worked examples to progress


🔄 What Changed After Testing


  • Simplified copy and tone for clarity

  • Restructured modules for easier scanning and fewer steps

  • Minimised upfront inputs to reduce early friction

  • Added milestone confirmations and gentle prompts to maintain momentum

Metrics and Real-World Scenarios

Measurement

KPIs: time-to-first-action, template completion, reuse

A/B test: checklist-first vs long-form (instrumentation sketched below)

Real-world scenarios

  • Enabled barrier- or outcome-led entry so teams can start from what makes sense for them

  • Allowed users to resume mid-flow without re-entering information

  • Made templates print- and offline-ready for workshops and low connectivity

  • Made team handovers easy with clear owners and a simple review cadence
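As a hedged illustration of how these KPIs and the checklist-first vs long-form A/B test could be instrumented; the event names and the hashing scheme are assumptions for this sketch, not the project's real analytics:

```typescript
// Illustrative analytics events for the three KPIs; names are assumptions, not a real schema.
type KtEvent =
  | { kind: "workshop_ended"; teamId: string; at: Date }
  | { kind: "first_action_logged"; teamId: string; at: Date }   // drives time-to-first-action
  | { kind: "template_completed"; teamId: string; at: Date }    // drives completion rate
  | { kind: "template_reused"; teamId: string; at: Date };      // drives reuse

// Deterministic A/B assignment: checklist-first vs long-form, stable per team.
function abArm(teamId: string): "checklist-first" | "long-form" {
  let hash = 0;
  for (const ch of teamId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  return hash % 2 === 0 ? "checklist-first" : "long-form";
}
```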

Final Results

💡 Time to First Action: ↓35% in pilot exercises
💡 Confidence to Apply KT Steps: +40% self-reported after one guided session
💡 Reuse Intent: 80% of pilot users planned to use the tool on their next project
💡 Standardisation: shared planning language adopted across mixed-discipline teams
💡 Publication: peer-reviewed paper published based on this work

AI Readiness

If AI assist were added, the product would do the following (a code sketch follows the list):

  • Explain suggestions: show why a strategy is recommended, with confidence labels and sources

  • Keep humans in control: accept, edit or decline suggestions, with easy undo and never-show-again controls

  • Learn safely: record user edits/rejections, use them to improve future suggestions, and always show the original sources and reasoning

  • Apply guardrails: prevent unsupported claims, highlight bias risks, and show data lineage
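A minimal sketch of how these behaviours could be typed, assuming hypothetical names throughout; this illustrates the interaction model, not a committed design:

```typescript
// Sketch of an explainable, human-in-control suggestion flow; all names are hypothetical.
interface StrategySuggestion {
  strategyName: string;
  rationale: string;                      // why this strategy is recommended
  confidence: "low" | "medium" | "high";  // confidence label shown alongside the suggestion
  sources: string[];                      // the evidence the suggestion is drawn from
}

type UserDecision = "accepted" | "edited" | "declined";

// Every decision is recorded so future suggestions can learn safely from user feedback.
interface SuggestionFeedback {
  suggestion: StrategySuggestion;         // original suggestion kept intact for auditability
  decision: UserDecision;
  editedText?: string;                    // present only when the user edited the suggestion
  neverShowAgain: boolean;                // user-controlled suppression
  at: Date;
}

const feedbackLog: SuggestionFeedback[] = [];

function recordDecision(feedback: SuggestionFeedback): void {
  feedbackLog.push(feedback); // sources and rationale stay attached to the logged decision
}
```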

Reflection and Learnings

✅ Make the first next step obvious
✅ Design for decisions, not reading
✅ Keep rigour through structure and examples

⚠️ Balance accuracy with plain English
⚠️ Support non-linear starts and resumes

This case shows how I operate as a product manager: frame the problem around user decisions, align experts and implementers, instrument outcomes, and ship a usable, scalable product that changes behaviour.

Copyright © 2025 Nadia Somani