Example Project #1: Conventional Evaluation and The Hidden Politics of Evidence

Hello! I’m Will Faulkner and I direct FluxRME. Today I want to share the story of what, on the surface, might appear to be the world’s most conventional evaluation. Look more closely, though, and it’s perhaps the world’s best (or at least best-known) example of just how intricate evaluation gets.

Setting the Stage

PROGRESA (now called Prospera) is Mexico’s largest anti-poverty program. The 1997-2000 evaluation by the International Food Policy Research Institute (IFPRI) is still trumpeted by prestigious institutions with names you’d likely recognize (World Bank, anyone?) as an exemplar that helped shove evidence-based social policy into the 21st century.


The evaluation [ostensibly] showed that PROGRESA was effective:

“...after just three years, the poor children of Mexico in the rural areas where PROGRESA is currently operating are more likely to enroll in school, are eating more diversified diets, getting more frequent health care and learning that the future may look quite different from the past.”

- Is Progresa Working? (summary report of evaluation findings available here)

Before PROGRESA, no national Mexican social policy had survived a presidential turnover. Safety nets and subsidies were deployed more for political ends than for the overall social good. The IFPRI evaluation de-politicized Mexican social policy and was the key instrument that ensured PROGRESA continued through the changeover from President Zedillo to President Fox (also the first change in the ruling party in 71 years!). Thanks to the IFPRI evaluation, PROGRESA gained political hardiness and funding sources, while evaluation became a legally required part of Mexican social programming.

Yay! Except...

In a time long, long ago on a campus far, far away (a whole six miles from our office in Tremé), I uncovered the whispered tale of science and political intrigue surrounding this evaluation, and I’ve been fascinated by it ever since.

In short, the project was bungled.


The methodology they claim to have used...wasn’t. And even if it had been, there were enough problems during data collection that the results would have been low quality anyway (brief overview on the American Evaluation Association Blog here).

I’ve brought this to the attention of the original authors, presented it many times at conferences, run focus groups, and even published in newsletters and one peer-reviewed journal. But nobody wants to pick this up and run with it. I’ve even gotten some thinly veiled cease-and-desist emails.

Nobody wants to touch it. Why??

Basically, you don’t get a much better in-your-face illustration of a key concept in evaluation:

Evidence is political.

Inherently. Natively. Humans generate it, and humans interpret it. There’s just no such thing as pure truth descended from the heavens in sublimely unbiased numbers.

So What?

Evaluation is our business, but it’s not just a question of mastering technical tools.

Evaluation is an art.

It involves just as much politics, tact, and emotional intelligence as skill in data analysis. I love the IFPRI/PROGRESA case so much because it’s so hard to box up in a single judgement. I feel a tangle of emotions that don’t lend themselves to a thumbs-up or thumbs-down. It’s a constant reminder that what you think of a given social policy or program depends on how you look at it.

Evaluating the efforts of NOLA’s social changemakers is a forever-human process, and one has to remain open. Cookie-cutter approaches just won’t work.

Micro-decisions matter. Context matters. History matters.