Rigorous Evidence on What Works

D4Act designs and implements gold-standard impact evaluations - from randomized controlled trials to quasi-experimental studies - that provide credible, causal answers about whether development programs achieve their intended effects.

Why Impact Evaluation Matters

Global development spending exceeds $180 billion annually. Yet a striking proportion of these investments - in health, education, agriculture, and social protection - are never rigorously evaluated. Without credible counterfactual evidence, policymakers cannot distinguish programs that change lives from those that merely spend budgets.

D4Act's impact evaluation practice fills this gap with methodological approaches adapted to African data realities. We go beyond activity tracking to measure outcomes, return on investment, and long-term sustainability of development interventions.


Our Methodological Toolkit

We select the right method for each evaluation question - balancing internal validity, feasibility, and ethical considerations in the African contexts where we work.

Randomized Controlled Trials (RCTs)

The gold standard for causal inference. We design cluster-randomized, individual-randomized, and stepped-wedge trials with pre-registered analysis plans, attrition monitoring, and ex-ante power calculations. Our RCTs typically involve samples of 2,000 to 10,000+ respondents followed across multiple survey rounds.
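As an illustration of how cluster-level assignment can be made reproducible and auditable, here is a minimal Python sketch of stratified 50/50 randomization; the district names, cluster counts, and seed are hypothetical, not drawn from an actual D4Act trial.

```python
import numpy as np
import pandas as pd

def assign_clusters(clusters: pd.DataFrame, stratum_col: str, seed: int = 2024) -> pd.DataFrame:
    """Assign clusters to treatment/control 50/50 within each stratum."""
    rng = np.random.default_rng(seed)  # fixed seed makes the draw reproducible and auditable
    pieces = []
    for _, stratum in clusters.groupby(stratum_col):
        n = len(stratum)
        arms = np.array(["treatment"] * (n // 2) + ["control"] * (n - n // 2))
        rng.shuffle(arms)  # permute arm labels within the stratum
        pieces.append(stratum.assign(arm=arms))
    return pd.concat(pieces)

# Hypothetical frame: eight clusters stratified by district
clusters = pd.DataFrame({
    "cluster_id": range(1, 9),
    "district": ["North"] * 4 + ["South"] * 4,
})
print(assign_clusters(clusters, "district"))
```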

Difference-in-Differences (DiD)

When randomization is not feasible, we use DiD to compare outcome trends between treatment and comparison groups, exploiting natural variation in the timing of program rollout. This approach is particularly useful for evaluating government programs where random assignment is politically or ethically impossible.
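The design translates into a simple two-way regression: the coefficient on the interaction of the group and period indicators is the DiD estimate. A minimal sketch on synthetic data, assuming illustrative variable names (`treated`, `post`, `y`) and a known true effect of 2.0:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # observations per group-period cell (synthetic, for illustration only)

# Two groups x two periods; the true treatment effect is 2.0
df = pd.DataFrame({
    "treated": np.repeat([0, 1], 2 * n),       # group indicator
    "post": np.tile(np.repeat([0, 1], n), 2),  # period indicator
})
df["y"] = (
    10.0                                # baseline outcome level
    + 1.5 * df["treated"]               # pre-existing gap between groups
    + 0.8 * df["post"]                  # common time trend
    + 2.0 * df["treated"] * df["post"]  # the effect DiD is built to recover
    + rng.normal(0, 1, len(df))
)

# The interaction coefficient is the difference-in-differences estimate
fit = smf.ols("y ~ treated * post", data=df).fit(cov_type="HC1")
print(fit.params["treated:post"])  # should be close to 2.0
```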

Propensity Score Matching (PSM)

Statistical matching techniques construct comparison groups from observational data. We implement nearest-neighbor matching, kernel matching, and inverse probability weighting to build credible counterfactuals when an experimental design is unavailable.
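A minimal sketch of 1:1 nearest-neighbor matching on an estimated propensity score, assuming a data frame with a binary `treated` indicator, an outcome `y`, and a list of covariate columns (all names are illustrative):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(df: pd.DataFrame, covariates: list[str]) -> float:
    """ATT via 1:1 nearest-neighbor matching (with replacement) on the propensity score."""
    # Step 1: estimate the propensity score P(treated = 1 | X)
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # Step 2: match each treated unit to the control with the closest score
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control.iloc[idx.ravel()]

    # Step 3: ATT = mean outcome gap between treated units and their matches
    return treated["y"].mean() - matched["y"].mean()
```

In practice we would also restrict the sample to the region of common support and report covariate balance before and after matching.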

Mixed-Methods & Qualitative

Numbers tell us what happened; qualitative research tells us why. We integrate focus groups, key informant interviews, participatory assessments, and process evaluations to understand the human stories behind the statistics - essential for policy relevance in diverse African contexts.

"Billions of dollars are invested in development programs across Africa each year. Impact evaluation answers the fundamental question: did these programs actually improve people's lives, and if so, by how much?"

- D4Act Impact Evaluation Practice

Our Evaluation Approach

Our end-to-end methodology - from initial assessment to sustainable impact.

1. Scoping - Theory of Change, evaluability scan, pre-analysis plan
2. Design - RCT / DiD / PSM, power calculation, stratified sampling
3. Fieldwork - CAPI / CATI tools, quality assurance, field supervision
4. Analysis - Causal inference, heterogeneity tests, robustness checks
5. Dissemination - Policy briefs, stakeholder dialogues, evidence synthesis

Tools and methods: RCTs, difference-in-differences, PSM, regression discontinuity, mixed methods; Stata / R / Python.

Key deliverables: evaluation report, policy brief, data dashboard, learning note, evidence map.

African-led · Evidence-based · Locally owned · Globally rigorous

Frequently Asked Questions

Which impact-evaluation methods do you use, and how do you choose?

We start from the evaluation question and the data environment, not the method. Where randomization is feasible and ethical we run RCTs (cluster, individual, stepped-wedge). Where it is not, we apply difference-in-differences, propensity-score matching, regression discontinuity, instrumental variables, synthetic control, or contribution analysis - whichever yields the cleanest identification given the program. The choice is documented in a written evaluation matrix.
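To make one of these quasi-experimental options concrete, here is a minimal sharp regression-discontinuity sketch using local linear regression within a bandwidth around an eligibility cutoff; the column names (`score`, `y`) and the single-bandwidth, separate-slopes specification are illustrative simplifications, not a prescribed D4Act procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

def sharp_rd(df: pd.DataFrame, cutoff: float, bandwidth: float) -> float:
    """Sharp RD estimate: local linear regression with separate slopes at the cutoff.

    Expects a running variable `score` and an outcome `y`.
    """
    local = df[(df["score"] - cutoff).abs() <= bandwidth].copy()
    local["centered"] = local["score"] - cutoff
    local["above"] = (local["centered"] >= 0).astype(int)  # crossed the eligibility cutoff
    # The jump in `y` at the cutoff is the coefficient on `above`
    fit = smf.ols("y ~ above * centered", data=local).fit(cov_type="HC1")
    return fit.params["above"]
```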

How do you handle pre-registration, transparency and replicability?

Where applicable, we pre-register evaluation matrices, sampling plans, and analysis plans (AEA RCT Registry, Open Science Framework) before data collection. Code, anonymized data, and replication packages are shared with the funder by default. Reporting follows OECD-DAC criteria plus the donor's preferred template (USAID Evaluation Report Requirements, World Bank IEG, Gates Foundation MERL).

How do you ensure statistical power and detectable effect sizes?

Every evaluation begins with explicit power calculations using the minimum detectable effect (MDE) the program can plausibly produce. We document MDE, sample size, design effects (clustering, ICC), and attrition assumptions before fieldwork. If the program cannot plausibly produce a detectable effect, we say so - and propose a different evaluation question.
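A minimal sketch of that arithmetic for a two-arm cluster-randomized design with equal allocation, using the normal approximation and the design effect 1 + (m - 1) x ICC (all parameter values are illustrative):

```python
from scipy.stats import norm

def cluster_mde(n_per_arm: int, cluster_size: int, icc: float,
                sd: float = 1.0, alpha: float = 0.05, power: float = 0.8) -> float:
    """MDE for a two-arm cluster RCT with equal allocation (normal approximation)."""
    deff = 1 + (cluster_size - 1) * icc        # design effect from intra-cluster correlation
    se = sd * (2 * deff / n_per_arm) ** 0.5    # standard error of the difference in means
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * se

# Illustrative inputs: 2,000 respondents per arm in clusters of 20, ICC = 0.05
print(round(cluster_mde(2000, 20, 0.05), 3))   # about 0.124 standard deviations
```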

What about ethics, do-no-harm and safeguarding?

Every engagement passes a written ethics review aligned with OECD-DAC ethics guidance and donor-specific requirements (USAID ADS, EU Code of Conduct on Development). We obtain IRB approval where applicable, design informed-consent protocols in local languages, build do-no-harm safeguards into data collection and apply safeguarding policies to all field staff. Personal data is handled under a written data-protection plan.