Reading the Evidence on Evidence-based Policy

Richard D. French

[In the interests of length, supporting citations and references have been dropped from this blog copy. The full paper on this topic is available from the author at richard.french@uottawa.ca]

There is a remarkable consensus that it has proven unexpectedly difficult to identify the successes of Evidence-based Policy (EBP). Those who have looked more closely at the use of scientific knowledge in policy-making, mostly starting from the normative premises of EBP, have rediscovered a number of phenomena, many of them well known to students of policy-making, which may account for this absence of successes. It should be emphasized that the great majority of these scholars began their research motivated by an instinctive conviction of the value of scientific research for public policy.

The proportion of organized knowledge relative to other forms of information in the best policy-making processes is more modest than proponents of EBP imagine. Evidence in the form of research which meets disciplinary standards is mostly framed for the editorial boards of academic journals, whose expectations are radically different from those of policy-makers. Such research does not address policy problems but researchable phenomena. As Andrews (2002, 34) puts it, “The scientific enterprise does not naturally produce information useful to lay decision makers; rather, the scientific enterprise produces knowledge for internal consumption.” The policy implications of any given piece of research are far less compelling for policy-makers than researchers assume, in part because researchers have no very accurate picture of the making of policy. As Oliver et al. (2014a, n.p.) argued, “It is hard to defend academics from the charge of misunderstanding policy priorities or processes – a charge first made explicit over 20 years ago.”

Research operates on a timetable far removed from the pressures of policy-making. Social science in particular is not cumulative in the sense that natural science is, and it is often subject to fads and fashions. Aaron (1978, 167) concluded in the penultimate line of his study of the War on Poverty and the Great Society programs, “As before and as always we must proceed with inadequate research.”

The linear, or pipeline, model of the use of science in policy-making, which is central to the EBP movement’s approach, is a very rare phenomenon. EBP advocates imagine that specific studies or research findings may so resolve or clarify the issues in a policy area that they drive specific policy change. This is known as the “instrumental” or “problem-solving” function of evidence. However, as the late Carol Weiss (Weiss and Bucuvalas, 1980, 155), one of the pioneers in the field, concluded, “Research is seldom used to affect decisions deliberately. Rather it fills in the background, it supplies the context, from which ideas, concepts, and choices derive.” She called this the “enlightenment” function.

Science and politics are intimately entwined in policy-making, and attempts to separate them in practice are doomed to sterility. According to Jasanoff (1990, 230), another of the leaders in the field,

Although pleas for maintaining a strict separation between science and politics continue to run like a leitmotif through the policy literature, the artificiality of this position can no longer be doubted. Studies of scientific advising leave in tatters the notion that it is possible, in practice, to restrict the advisory process to technical issues or that the subjective values of scientists are irrelevant to decision-making.

Doctors differ, and scientifically qualified experts may be found on both sides of many issues. Experts in different countries facing the same body of facts and studies may provide different advice.  Interests with much at stake in policy-making find experts who understand perfectly in what direction their expertise is to be directed.  If academic standards – the idealized demands for validity in science – are taken to their logical conclusions, then there is no end to the technical contestation which concerned stakeholders can foster, and the ability to sustain such contestation may become a matter of resources, not scientific competence. Weingart (2003, 57) concludes:

This scientization of politics [the demand for scientific support for policies], however, has had the surprising result that political decisions cannot - as might have been expected - be made more rationally, more unambiguously, more often consensually and with greater certainty, but, on the contrary, that controversies about these decisions become more intensive and their lack of foundation in science and their risks become obvious.

Academic disciplines and government departments and agencies divide up the world in essentially arbitrary pieces, which conflict with one another, and which provide no guarantee that they cut (policy) nature at the joints. Put another way, policy problems, as they present themselves to policy-makers, respect neither academic fields nor government organization. They place a premium on interdisciplinarity, on the one hand, and on co-operation among public bodies, on the other, but the achievement of either is by no means obvious. A recent study by the National Research Council for the National Academies of the United States (Prewitt et al., 2012, 49), Using Science as Evidence in Public Policy, argued that “Focusing on understanding institutional arrangements - how the agencies, departments, and political institutions involved in policy making operate and relate to one another - may be what matters most in improving the connection between science and policy making.” Tenbensel (2004, 205) concluded from his study of New Zealand’s efforts to set health policy priorities that “The task of understanding how policy processes deal with divergent implications of different types of knowledge and evidence is of far more importance than the question of how to make policy processes more evidence-based.”

The issue here goes beyond the incommensurability of the intellectual boundaries of the academic disciplines, and of the frontiers of agency mandates in the public sector, with one another and with the nature of the policy problems confronting policy-makers. Institutions such as disciplines and agencies have their own internal epistemologies and cultures, which reduce or increase the salience of various ways of knowing and doing, and which frame the production and use of scientific research as evidence.

The gap between the “two communities” of research and policy may best be filled by resources specifically devoted to bridging it, such as the translation of scientific findings into policy-friendly language or the positioning of knowledge brokers at the strategic meeting point of evidence and policy. However, these “knowledge mobilization” strategies have inspired a good deal of comment to the effect that they misconceive the problem they purport to resolve, or fall well short of resolving it.

Direct and sustained relationships between researchers and policy-makers are the optimal method for promoting the use of research in policy-making (this is also known as the linkage, interaction, or knowledge translation model). This requires a high “degree of persistence and stamina” on the part of researchers (Davies et al., 2015, 129).

The context of problems and of policy-making is critical to whether organized knowledge is used as evidence; evidence for policy-making does not have the universal applicability assumed in the scientific ideal. Those proponents of EBP who promote randomized controlled trials (RCTs) as the “gold standard” for “what works” in policies and programs create a “hierarchy of evidence” which both oversimplifies the task of providing policy-relevant evidence and fails to account for the complexities of different policy contexts, such that the external validity of many RCTs is mistakenly assumed.

In the language of policy studies, policy transfer (from one jurisdiction to another) requires a deeper understanding of contextual variables and of the mechanisms underlying programs putatively successful in a specific environment. Nancy Cartwright (Cartwright and Hardie, 2012, 45) has thoroughly examined the misconceptions about RCTs which EBP dogma fails to recognize: “The orthodox advice is that external validity can be expected if the target population is ‘sufficiently similar’ to the study population. For us, the key question is how good a job this advice does in getting you from ‘it worked there’ to ‘it will work here.’ The answer is: you are lucky if it gets you anywhere.” Yet another way of making a similar point is offered by Pearce et al. (2014, 164): “Evidence best informs policy when it is attentive to local contexts, lay knowledge and political demands alongside the more abstract, technical data which is often assumed to be the bedrock of EBP.”

At a minimum, however, the lesson here is that (Weiss, 1995, 148), “Research does not win victories in the absence of committed policy advocates, savvy political work and happy contingencies of time, place and funds.” Weiss and Bucuvalas (1980, 10) describe these happy contingencies as follows: “The requisite conditions appear to be: research directly relevant to an issue up for decision, available before the time of decision, that addresses the issue within the parameters of feasible action, that comes with clear and unambiguous results, that is known to decision-makers, who understand its concepts and findings and are willing to listen, that does not run athwart of entrenched interests or powerful blocs, that is implementable within existing resources.” All this means that for Weiss (1995, 146), “Most policy research is probably born to die unseen and waste its sweetness on the desert air.”

Rick French is Senior Fellow and former CN Paul M. Tellier Chair on Business and Public Policy at the University of Ottawa. He has held multiple policy-making roles in his career, including Minister of Communications in the Québec government and Vice-Chairman of the Canadian Radio-television and Telecommunications Commission (CRTC).
