The elder in northern Ghana has seen it before. A vehicle arrives, and researchers gather the women under a tree. They ask about household income, about farming practices, about children’s health. They nod, they type, they leave. Months pass. A report is written, a grant is renewed, and next year another vehicle arrives asking the same questions. Nothing changes. This is not aid. This is extraction disguised as evidence-based development.

At the heart of this problem lies a deeper question: Who holds the power to define what “success” means in African development, and why does that authority rarely sit with the people whose lives are being measured?

Across development institutions, Monitoring and Evaluation (M&E) systems are organized around predefined indicators, logical frameworks, and quarterly targets. These instruments are designed to ensure accountability. In practice, however, accountability flows primarily upward, toward donors, rather than outward or downward toward communities. When funders require specific metrics, those metrics become operational priorities, even when they fail to reflect local realities. Field officers may be required to report the “number of trainings conducted” rather than whether agricultural productivity improved. “Workshop attendance” is measured, while food security outcomes remain secondary.

Such practices exemplify what scholars describe as upward accountability: a governance structure in which information travels from the Global South to the Global North, where definitions of impact are constructed in distant institutional settings. The consequences are profound. Rural populations are extensively surveyed, yet infrastructural deficiencies persist. The borehole remains broken despite years of data collection. The problem is not the absence of information, but the asymmetry of power embedded in its design and use.

This asymmetry is sustained by professionalized development structures. Consultants frequently arrive with pre-designed survey instruments drafted in English and structured around donor templates. Data collection is rapid, constrained by funding timelines and reporting cycles. Relationships are secondary to deliverables. As a result, the epistemic assumptions embedded in questionnaires often misrepresent lived realities. For instance, survey instruments commonly ask about “household income” in contexts where wealth is collectively organized, harvests are shared, and economic survival operates through extended kinship networks. Monetary income becomes the default proxy for well-being, even when it inadequately captures communal forms of security. The resulting dataset conforms to donor frameworks while obscuring local social arrangements.

This dynamic resonates with the critique advanced by Linda Tuhiwai Smith in Decolonizing Methodologies, where research is described as historically intertwined with colonial extraction. For many Indigenous communities, “research” signifies not collaboration but appropriation: knowledge taken without reciprocal benefit. Contemporary development data practices often reproduce this legacy in technocratic form.

The divide between capital cities and peripheral regions is therefore not merely geographic; it is epistemological. Survey instruments are frequently designed in English by policymakers and consultants, while respondents conceptualize their social and economic worlds through local languages and relational frameworks. This is not a simple issue of translation but of authority. Those who formulate questions determine what qualifies as valid knowledge.

Consider the case of a mobile money agent operating at a busy junction in Accra. His business facilitates dozens of daily transactions within an informal economy sustained by trust and social networks. When researchers administer a survey on “financial inclusion,” the questions focus on formal bank accounts and institutional savings mechanisms. The absence of these markers is coded as exclusion. Yet his customers rely on rotating savings groups and relational credit systems that function effectively within their social context. The survey captures deficiency relative to formal standards but fails to recognize alternative financial infrastructures.
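The coding step is where this happens, and a deliberately simplified, hypothetical sketch can make it concrete. Everything below is invented for illustration: the respondent fields (has_bank_account, uses_mobile_money, rotating_savings_group, relational_credit) and both coding functions are assumptions, not any real survey instrument. The same person is classified two ways, depending on which infrastructures the categories are built to see.

```python
# Hypothetical illustration: one respondent, two coding schemes.
# All field names are invented for this sketch.

respondent = {
    "has_bank_account": False,       # no formal account or institutional savings
    "uses_mobile_money": True,       # transacts daily at a junction kiosk
    "rotating_savings_group": True,  # member of a susu group
    "relational_credit": True,       # borrows and lends through kin networks
}

def donor_template_code(r):
    """Donor-style coding: only formal markers count as inclusion."""
    return "included" if r["has_bank_account"] else "financially excluded"

def grounded_code(r):
    """Coding that also recognizes informal financial infrastructures."""
    informal = (r["uses_mobile_money"]
                or r["rotating_savings_group"]
                or r["relational_credit"])
    if r["has_bank_account"]:
        return "formally included"
    return "informally included" if informal else "financially excluded"

print(donor_template_code(respondent))  # -> financially excluded
print(grounded_code(respondent))        # -> informally included
```

Nothing about the respondent changes between the two functions; only the categories do, and with them the story the dataset tells.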
Such practices constitute data extractivism: information flows outward, is analyzed within external frameworks, and is archived in donor systems or paywalled journals. Communities rarely access or shape the interpretation of the findings. Measurement occurs without meaningful reciprocity.

A useful framework for understanding this imbalance is offered by Data Feminism, by Catherine D’Ignazio and Lauren F. Klein. The authors argue that data systems are never neutral; they are shaped by existing power structures and often reinforce them. Their principle to “examine power” urges researchers to interrogate who designs data infrastructures, whose labor sustains them, and who ultimately benefits from their outputs. Applied to development practice, this perspective reveals that extractive data systems persist not by accident but by design: decision-making authority over what is measured, how categories are constructed, and how findings are interpreted remains concentrated among donors, consultants, and institutional actors far removed from the communities being studied. Examining power therefore makes visible the asymmetry embedded in M&E frameworks, clarifying that the question of who gets to ask is inseparable from the question of who gets to decide.

This raises a normative test for ethical development research: Would the data remain useful to the community if external funding ceased? In most instances, the answer is no. Reports satisfy reporting obligations but do not enhance local decision-making capacities. Closing this gap requires structural redesign rather than minor adjustments. Findings must be returned in accessible formats, translated into local languages, and discussed in community forums. More fundamentally, communities must participate in defining the questions themselves.

Participatory Action Research (PAR) offers one such alternative. In PAR frameworks, community members act as co-researchers rather than respondents. They identify priorities, generate data, and interpret findings collectively. This approach reorients accountability horizontally, toward those whose lives are under study. In Musina, South Africa, collaborative research with migrant farmworkers revealed cross-border healthcare practices that conventional surveys had overlooked. When workers mapped the services they actually used, it became clear that many accessed clinics in Zimbabwe because of geographic proximity. Community-led inquiry thus surfaced patterns invisible within standardized donor instruments.

In practical terms, a PAR cycle begins not with a pre-designed survey but with collective problem identification. Community members gather to articulate which challenges matter most to them, whether access to water, migration pressures, or youth employment. Together with facilitators, they translate these concerns into locally meaningful indicators. Rather than counting the “number of workshops,” they might define success as reduced time spent fetching water or improved trust between farmers and extension officers. Community representatives then participate in gathering information, whether through mapping exercises, storytelling sessions, or locally administered surveys. The findings are analyzed collectively in open meetings where patterns are debated and refined. Finally, decisions about action are made jointly, and the cycle repeats, allowing indicators to evolve as conditions change. In this model, measurement becomes a tool for collective learning rather than external compliance.
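To see the cycle’s logic at a glance, here is a minimal sketch, assuming a toy data model: the Indicator and PARCycle classes, their fields, and the example indicator are illustrative assumptions, not any real PAR toolkit or prescribed workflow.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A locally defined measure of success (illustrative only)."""
    name: str                  # e.g., "minutes spent fetching water per day"
    defined_by: str            # who authored the measure
    observations: list = field(default_factory=list)

@dataclass
class PARCycle:
    """One pass through a Participatory Action Research cycle."""
    problem: str               # chosen in open community meetings
    indicators: list           # co-designed with facilitators, not imposed

    def gather(self, indicator_name, value, method):
        """Community representatives record data via mapping exercises,
        storytelling sessions, or locally administered surveys."""
        for ind in self.indicators:
            if ind.name == indicator_name:
                ind.observations.append({"value": value, "method": method})

    def analyze_together(self):
        """Return findings for collective analysis in an open meeting,
        rather than shipping them into a distant report."""
        return {ind.name: ind.observations for ind in self.indicators}

# One illustrative iteration of the cycle.
cycle = PARCycle(
    problem="access to water",
    indicators=[Indicator(
        name="minutes spent fetching water per day",
        defined_by="community assembly",
    )],
)
cycle.gather("minutes spent fetching water per day", 90, method="mapping exercise")
findings = cycle.analyze_together()
# Decisions about action are then made jointly, indicators are revised,
# and the next cycle begins from what the community has learned.
```

The design choice that matters is where authority sits: the indicator carries a defined_by field because, in PAR, the community assembly rather than the donor template authors the measure of success.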
Grounded accountability also resonates with longstanding deliberative traditions. In Ghana’s Northern Region, elders routinely convene under trees to negotiate communal priorities. Such participatory governance practices predate formal M&E systems. Their durability lies in local ownership and collective legitimacy.

To decolonize data, therefore, is to shift from extraction to reciprocity. It entails asking not only what information can be obtained, but how knowledge production benefits those who contribute it. Practical implications follow: allocating budgets for feedback sessions, employing local-language researchers, disseminating findings through radio and community assemblies, and ensuring informed consent regarding data use. The Responsible Data Handbook underscores that transparency, meaningful consent, and community benefit are ethical minimums rather than optional enhancements. Without these safeguards, evidence-based development risks reproducing the very asymmetries it claims to alleviate.

Participatory methodologies do not merely refine data collection techniques; they redistribute epistemic authority. They challenge the assumption that legitimacy flows from donor compliance and instead center communities as co-authors of knowledge about their own lives.

The elder under the tree has answered enough questions. The ethical imperative now is to create structures capable of listening, and of responding to hers.

Key Takeaways:

  • Development metrics must be co-designed with communities, not imposed by donors.
  • To be truly evidence-based, systems must value local knowledge as legitimate data.
  • The ultimate test of ethical research: the community benefits whether or not the grant continues.