Guest Contributor
March 18, 2004

Using Evidence to Improve Public Programs

By Dr Don Jamieson

Last month, some 200 scientists and knowledge “users” met in Washington to discuss how we might make evidence a stronger guide for our social and educational policies and programs. The concern for these policy makers, service providers, and politicians is that, even when appropriate evidence exists, it is rarely used to design and improve such programs. The challenge is greatest on the demand side: there are plenty of capable researchers who could do policy- and intervention-relevant research, if only users would apply the results.

More recently, Canadian newspapers have been filled with outrage over the misuse of public funds; this time, it is the federal sponsorship program. I wonder when we might see similar concern over the even greater waste of public money on well-intentioned programs that have rigorous financial controls and are delivered more or less as designed by well-motivated professionals.

Some programs are merely less effective than they could be; others do actual harm. These programs exist in health, as well as in education, justice, social services, and everywhere else. That’s to be expected. The surprise is that we so rarely know which programs work well, and which do not — and that there seems to be so little commitment to finding out.

I spend a lot of time trying to figure out why this might be. After all, my organization, the Canadian Language and Literacy Research Network, seeks to improve policies and programs relating to language and literacy development. While there are many things to be done, I’ll focus here on two:

1. Accept that all programs are experiments, and demand that they be good ones.

When new programs are developed, we naturally lack the knowledge needed to predict just what will work. Much can be learned if we accept this and view programs as experiments. But for them to be useful, we need to ensure they are good experiments.

Canada has earned some distinction by undertaking a few formal experiments on a grand scale (e.g., through the Social Research and Demonstration Corporation, www.srdc.org). These have provided convincing evidence about how the details of welfare and employment insurance programs influence their outcomes. Many more examples are needed.

A challenge is that politicians and managers have few incentives to undertake serious evaluation of their programs, since the results can provide fodder for opposition attacks and media criticism. These fears need to be replaced by a greater one: that the public will demand evidence that programs are appropriate and effective, and will regard the absence of evidence that a program yields its intended benefits as incompetence.

Even the best-motivated, best-planned programs can fail to deliver the expected results. Large-scale programs designed to prevent teens from smoking and using drugs, to teach children to read well, and to prevent juvenile delinquents from moving on to adult crime are among the many prominent, well-intentioned failures, with profound consequences for their participants.

Finding out that a program does not work, and learning this quickly, must come to be seen as simply responsible management, not as an embarrassing failure. What should be truly inexcusable, and cause for public censure, is a program that is failing while no one knows it.

2. Improve awareness of, and access to, good evidence.

Even when good program- and policy-relevant knowledge exists, it too often remains inaccessible to the user audience. One reason is that too few users appreciate why knowledge acquired through systematic methods is more likely to be valid than knowledge acquired in other ways. We therefore need to educate users not only about what good evidence is available, but also about why it is good evidence. It is sensible to consider whatever evidence is available, including personal experience, expert opinion, “common knowledge,” case studies, and single-group studies, when developing policy. However, these types of evidence too often receive as much or more weight than the results of large-scale experiments or the conclusions of systematic literature reviews.

When it was recognized that cultural differences hindered the transfer of technology from universities to industrial partners, programs were created to let researchers spend time in industry and company employees spend time at universities. Similar programs might help build an appreciation for evidence within government. Even where the merits of different types of evidence are appreciated, good evidence must be presented in an appropriate form. Peer-reviewed journal articles, proceedings, and monographs are not that form, and they have little impact on policy and practice.

Accurate, accessible summaries, based on comprehensive syntheses of the available research, are essential for good public policy and effective programs. International efforts such as the Cochrane and Campbell Collaborations (www.campbellcollaboration.org) provide one way to ensure that knowledge syntheses are as valid as possible. These groups use formal protocols to ensure that all relevant evidence meeting a specified level of scientific rigor is considered, and to weigh competing findings to reach the most sensible conclusions.

Canada has made progress in providing evidence for better health decisions, for example through the creation of the Canadian Health Services Research Foundation (www.chsrf.ca) and the Canadian Cochrane Centre (http://cochrane.mcmaster.ca). Recognizing the challenge in education and other areas, the Social Sciences and Humanities Research Council is engaged in a broad transformation process, but it cannot be expected to do this alone. Overall, Canada’s record in education and the other social science domains is far from encouraging. Canada has no Campbell Centre, no parallel to the CHSRF, and no level of government was represented at the recent international meeting of the Campbell Collaboration.

Unfortunately, our governments still too often rely on “expert” advice as their window on knowledge. Such “GOBSATT” (“good old boys sitting around talking turkey”) is quick and easy. However, it is also selective and likely to mislead. The US, UK, and Nordic governments have all recently implemented formal mechanisms to bring systematic evidence into government. Canada needs to follow this lead. The costs of not doing so are not only counted in dollars.

Dr Jamieson (Jamieson@nca.uwo.ca) is CEO and scientific director of the Canadian Language and Literacy Research Network (www.cllrnet.ca), a federal Network of Centres of Excellence.

