Conversation with conference speakers

On November 15-16, 2018, the Community Safety Knowledge Alliance and the Mowat Centre’s Not-for-Profit Research Hub (Mowat NFP) are co-hosting an event in Regina, Saskatchewan on improving evidence-informed policymaking. The Innovation in Evidence Conference will bring together international and domestic leaders and innovators in evidence-informed policymaking to share emerging trends, discuss lessons learned, and provide fresh insight into the challenges facing policymakers, practitioners, researchers and academics in their quest to determine ‘what works.’

In advance of the conference, Mowat NFP reached out to six Canadian and international speakers — Dr. David Halpern, James Turner, Jon Baron, Diane Roussin, Hanna Azemati and Jonathan Breckon — to garner their expertise on a wide range of topics relating to the role of evidence:

  • Best practices in generating, translating, and adopting evidence.
  • What meaningful engagement with policymakers and practitioners looks like in practice.
  • The role of existing (and new) evidence institutions and outcomes funding arrangements.
  • Innovative approaches to funding evidence.
  • Creating new evidence institutions or transitioning existing ones into What Works Centres.

Collectively, their responses contribute towards answering the following question:

What are paths forward in the Canadian context for building the infrastructure to support evidence-informed policymaking and service delivery?

Dr. David Halpern
Chief Executive,
The Behavioural Insights Team

Asking “What Works?”

In 2016, the Treasury Board of Canada’s requirement that a fixed proportion of program funds be devoted “to experimenting with new approaches” – itself building on Prime Minister Trudeau’s instruction to Ministers – attracted widespread interest. ((Government of Canada (2016). “Experimentation direction for Deputy Heads – December 2016.” https://www.canada.ca/en/innovation-hub/services/reports-resources/experimentation-direction-deputy-heads.html)) It seemed a strikingly bold move to drive public sector innovation and evaluation – and a move that I would very much like to see the UK pursue too.

In everyday speech, people often talk about ‘doing an experiment.’ They normally mean ‘trying out something new or different,’ ((Definition of “experiment” from the Collins English Dictionary, HarperCollins Publishers. https://www.collinsdictionary.com/dictionary/english/experiment)) though sometimes they do mean it in the deeper sense of ‘a test done in order to learn something new or discover if something works or is true.’ ((Definition of “experiment” from the Cambridge Advanced Learner’s Dictionary & Thesaurus, Cambridge University Press. https://dictionary.cambridge.org/dictionary/english/experiment)) Too often in the UK government and elsewhere, the idea of experimentation is used in the former rather than the latter sense.

Canada’s federal directive was refreshingly clear in defining experimentation as “testing new approaches to learn what works and what does not using a rigorous method.” ((Government of Canada, 2016)) But making sure ministries and agencies stick to this meaning rather than adopting looser interpretations will be key to increasing the supply of high quality evidence available to decision makers. We want public services and governments sometimes to try something new. But if and when they do, we want to know if it worked! And why stop there: don’t we want to know whether what we are already doing works, or whether there is a better alternative?

It is also important that we apply the same energy and rigour to adoption and implementation. We have seen from evidence generated by the UK’s network of government-backed What Works Centres that light-touch approaches to disseminating research findings don’t work. ((Sharples, J. (2017). “EEF Blog: Untangling the ‘Literacy Octopus’ – three crucial lessons from the latest EEF evaluation.” https://educationendowmentfoundation.org.uk/news/untangling-the-literacy-octopus/)) They do nothing to change behaviour. What’s needed are much more proactive approaches to facilitating adoption – including building capacity, changing incentives, and sustained dialogue with stakeholders. ((See “The science of using science: researching the use of research evidence in decision-making.” https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3504.))

I’m very much looking forward to the Innovation in Evidence conference to see the impact that the Trudeau and Treasury Board instruction is having. My understanding is that, even with this level of support, large-scale experimentation has still been slow to take off. There is an opportunity for a deep and value-adding collaboration – especially across Canada, the UK, Australia, the United States, New Zealand and other countries that are asking ‘what works?’ An early action is to strengthen our joint commissioning of systematic reviews of the existing evidence. We are all asking the same questions, and could get more extensive reviews – and at a lower cost – if we worked together. We should also drive our emerging What Works institutions to work closely together, sharing our results on what does and doesn’t work. It’s to the advantage of all our citizens to turn this evidence into a truly cross-national public good.

James Turner
Deputy Chief Executive,
Education Endowment Foundation

Key Learnings from a UK What Works Centre

The Education Endowment Foundation (EEF) – the UK’s What Works Centre for education – is seven years old. One major lesson from that time is the importance of context, whether in the implementation of individual projects or in judging what kind of evidence ecosystem it is appropriate to create in one circumstance compared to another.

Nonetheless, the What Works movement is founded on the principle that learning from what has been effective elsewhere is a good place to start in giving an idea maximum chance of success, even if that is in a different school, hospital, region, country or continent. So, based on our work to date, there are three things that stand out for me in terms of useful next steps as Canada thinks about its own journey:

  • Institutions are important anchors
    The step change in the production of high quality evidence for English schools – which has seen the number of randomized controlled trials (RCTs) burgeon from pretty much zero to 150 in a few years – has only been possible because of a partnership between schools, academics and the not-for-profit sector. But to instigate reform, to set the evidence bar high and to ensure consistency, comparability and accessibility in outputs, you need a convening force: an institution whose sole mission is to make these things happen and drive through change. That institution need not be entirely new – the EEF is itself a creation of two foundations and the UK government – but it needs to be independent of vested interests and objective in its view. Establishing with absolute clarity who the flag-bearers of the movement will be, and ensuring they have credibility with practitioners, decision makers and the host of partners they will need to engage, is a must.
  • Harvest what is already out there
    The What Works approach is often conflated with the desire to do something entirely new. Teachers in England are highly cynical about the latest fads and fatigued by constant change. And it takes time to generate new high quality evidence – and confidence to resist the urge for ‘quick and dirty’ results. So start by assimilating and rating the existing research and – crucially – summarizing the findings in a way that is genuinely useful to practitioners. That provides huge bang for buck in terms of impact. Focus new evidence generation work on promising approaches that are already in the system. There is a pressing need to know whether they work or not; and if they do, scaling up an initiative that has proved its mettle in the cut and thrust of, say, a busy classroom will be easier than scaling up an academic fancy. Of course, there is an important place for innovation – but it should be disciplined innovation, based on evidence and sound theory.
  • Establish genuine political buy-in from the outset
    The What Works movement has gained great traction in the UK government over the last five or so years. New centres have been formed and there are examples of evidence shifting significant sums of government spending for the better. But with popularity comes the danger of superficiality: the tendency to pepper press releases and strategy documents with ‘evidence’ and ‘what works’ rhetoric, without any real commitment to decisions being made differently. Focus initially on a few key areas where there is an appetite to think radically and where there is the political cover for a genuinely evidence-informed approach. Success here will be an important way to gain traction for the approach elsewhere.

A final word of warning: “Don’t let the best be the enemy of the good.” Setting the evidence bar high and pushing up the quality of evaluations is at the heart of what we are all aiming to do. But if this work is to result in any real-world change, it is as important that it be relevant and timely for those it is meant to influence. So be clear where the non-negotiables are and where pragmatism is necessary. It is rare for any academic study to conclude that further research is not needed; but at some point we need to make the call and start making decisions.

Jon Baron
Vice-President of Evidence-Based Policy,
Laura and John Arnold Foundation

A Tiered Approach to Funding What Works

The ultimate goal of evidence-based social policy is to improve people’s lives. The key challenge in achieving this goal is that most new programs and practices, when rigorously evaluated, are found not to produce the hoped-for improvements in people’s lives (compared to usual services). This pattern occurs in social policy as well as other disciplines. Illustrative examples include:

  • Business
    Of 13,000 randomized controlled trials (RCTs) conducted by Google and Microsoft to evaluate new products or strategies in recent years, 80 to 90 per cent have reportedly found no significant effects. ((Manzi, J. (2012). Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. New York: Perseus Books Group, pp. 128 and 142; Manzi, J. (2012). Science, Knowledge, and Freedom. https://www.youtube.com/watch?v=N4c89SJIC-M. Presentation at Harvard University’s Program on Constitutional Government.))
  • Medicine
    Reviews in different fields of medicine have found that 50 to 80 per cent of positive results in initial clinical studies are overturned in subsequent, more definitive RCTs. ((Ioannidis, J.P.A. (2005). “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research.” Journal of the American Medical Association, vol. 294, no. 2, pp. 218-228; Zia, M.I., Siu, L.L., Pond, G.R., & Chen, E.X. (2005). “Comparison of Outcomes of Phase II Studies and Subsequent Randomized Control Studies Using Identical Chemotherapeutic Regimens.” Journal of Clinical Oncology, vol. 23, no. 28, pp. 6982-6991; Chan, J.K. et al. (2008). “Analysis of Phase II Studies on Targeted Agents and Subsequent Phase III Trials: What Are the Predictors for Success.” Journal of Clinical Oncology, vol. 26, no. 9; Maitland, M.L., Hudoba, C., Snider, K.L., & Ratain, M.J. (2010). “Analysis of the Yield of Phase II Combination Therapy Trials in Medical Oncology.” Clinical Cancer Research, vol. 16, no. 21, pp. 5296-5302; Minnerup, J., Wersching, H., Schilling, M., & Schäbitz, W.R. (2014). “Analysis of early phase and subsequent phase III stroke studies of neuroprotectants: outcomes and predictors for success.” Experimental & Translational Stroke Medicine, vol. 6, no. 2.)) Thus, even in cases where initial studies—such as comparison-group designs or small RCTs—show promise, the findings often do not hold up in more rigorous testing.
  • Education
    Of the 90 educational interventions evaluated in RCTs commissioned by the Institute of Education Sciences and reporting findings between 2002 and 2013, close to 90 per cent were found to produce weak or no positive effects. ((Coalition for Evidence-Based Policy (2013). Randomized Controlled Trials Commissioned by the Institute of Education Sciences Since 2002: How Many Found Positive Versus Weak or No Effects. Available at: http://coalition4evidence.org/wp-content/uploads/2013/06/IES-Commissioned-RCTs-positive-vs-weak-or-null-findings-7-2013.pdf))
  • Federal Social Programs
    Of the 13 large RCTs commissioned by the U.S. government to evaluate the effectiveness of ongoing, Congressionally-authorized federal programs over the past 30 years, 11 found that the programs produced weak or no positive effects. ((Laura and John Arnold Foundation – Evidence-Based Policy Team (2018). “When Congressionally-authorized federal programs are evaluated in randomized controlled trials, most fall short. Reform is needed.” Straight Talk on Evidence initiative. https://www.straighttalkonevidence.org/2018/06/13/when-congressionally-authorized-federal-programs-are-evaluated-in-randomized-controlled-trials-most-fall-short-reform-is-needed/))

In each of these areas, there are clear examples of demonstrated effectiveness – that is, programs rigorously shown to produce important improvements in people’s lives – but they tend to be the exception. The bottom line is that it is harder to make progress than commonly appreciated.

We suggest that policymakers, foundations, and researchers view this as the central challenge in social policy, and re-deploy program and research funding towards addressing it, by: (i) growing the subset of programs that are rigorously shown to produce important effects, and (ii) expanding the implementation of such proven-effective programs. A specific approach we have promoted to achieve these goals is a tiered-evidence strategy, which can be implemented by governmental and/or philanthropic organizations. Such a strategy includes:

  • Funding the expansion of programs backed by strong (“top tier”) evidence of sizable, sustained effects on important life outcomes.
  • Funding – and rigorously evaluating – programs backed by highly promising (“middle tier”) evidence, with the aim of moving them into the top tier.
  • Building the pipeline of promising programs through modest investments in the development and initial testing of many diverse approaches (as part of a “lower tier”).

Such a strategy offers a path to progress in solving social problems that spending-as-usual cannot.

Diane Roussin
Project Director,
The Winnipeg Boldness Project

Scaling What Works Through Meaningful Engagement

Trusting relationships and true willingness to collaborate are the key requirements for meaningful engagement with policymakers and practitioners in Canada. These are two of the key values of The Winnipeg Boldness Project: a community social innovation initiative aimed at improving early childhood development outcomes in the Point Douglas neighbourhood.

The leadership of our community has long employed a unique way of working with families that they know to be efficient and impactful – one that is strength-based and person-centred, often referred to as “working from the heart.” This way of working has been documented and is the foundation for the project’s prototypes, which are intended to provide evidence for potential pathways for positive change. It is our hope that this way of working can be scaled both within and beyond the community to achieve lasting systemic change – the ultimate goal of social innovation.

We’ve found that relationships are key when seeking meaningful engagement with stakeholders, particularly when the ultimate goal is policy or systems change. We dedicate a significant portion of our time to building and maintaining strong relationships with key stakeholders, as we know that scaling efforts based on the evidence generated from our prototypes will require a substantial amount of support from strategic partners in order to be successful. While it is essential that priorities and solutions for prototyping are driven by the wisdom of the community, we know we can’t do it alone.

For the prototypes and the community’s wisdom to scale, we must communicate this evidence to a variety of audiences. From the very early stages of the project, we consciously engaged stakeholders – people and groups we saw as able to play a significant role in future scaling efforts. This has allowed for relationships that are reciprocal rather than transactional, as our partners have walked alongside us from the beginning and played an important part in co-creating ideas for change. This practice also helps to forge a deep sense of ownership in the evidence that is created, which means that policymakers are invested in the solutions and recommendations that are produced and are more likely to scale prototypes that have proven to be successful.

These diverse partnerships have been instrumental in our scaling successes to date. Building key relationships and collaborating across sectors has helped us to better understand how to communicate our evidence in an effective and concise way.

Hanna Azemati
Program Director,
Government Performance Lab, Harvard Kennedy School

Collaborative Strategies for Improving Outcomes

Governments around the world are struggling with how to make more rapid headway on solving pressing social problems that include homelessness, substance use and recidivism. To improve outcomes for at-risk populations, the following strategies should be pursued by governments, philanthropies, and service providers:

    • Continuously using data to improve program implementation
      This involves setting up robust mechanisms for identifying at-risk populations, referring them to providers and motivating them to engage in key program components. Ideally, governments and providers share operational and outcomes data and meet regularly to identify shortcomings and opportunities for improvement using this data. Based on this, they then jointly develop strategies for boosting outcomes. Simply expanding evidence-based programs is unlikely to make a difference if the quality of program implementation is not improved. Relatedly, governments and providers have to strike a balance between fidelity to evidence-based program models and openness to designing and testing strategies that further innovate on those models. After all, there is always room for improvement!
    • Coordinating services for the highest need, at-risk populations
      For individuals and families that might be involved in multiple systems (such as child welfare, criminal justice, and homelessness), smoothly navigating available resources and accessing the most appropriate services in the right sequence is vital. This requires not only close client-centred collaboration between government and providers but also between government agencies.
    • Aligning incentives to focus government and providers on desired outcomes
      Tying a small portion of payment for programs to key metrics can in specific circumstances complement—but not substitute for—the above strategies. Small performance-based payments can focus government and providers on desired outcomes and the operational steps that have potential to drive them. At a minimum, these can force parties to collect actionable data and review it regularly, which in turn can set the stage for outcomes-oriented conversations. For performance-based payment to be appropriate, the target population for the program should be clearly defined (to minimize cherry picking of easier-to-serve clients) and desired outcomes should be measurable and fully reflect the government’s goals (to reduce the risk that providers become overly focused on some goals at the expense of others).

These strategies are elements of what the Harvard Kennedy School Government Performance Lab (GPL) refers to as “results-driven contracting” and “active contract management.” The GPL supports state and local governments in the U.S. (90 jurisdictions to date) in adopting these and other strategies to accelerate progress on difficult social challenges and to strengthen core government functions. For example, as part of Bloomberg Philanthropies’ What Works Cities initiative, the GPL has helped nearly 30 cities more effectively structure, evaluate and manage their highest priority procurements and contracts.

In considering how to strengthen the infrastructure for improving social service delivery, Canada should consider how to develop evidence institutions that can similarly support governments and providers in carrying out the above strategies. Key functions of these institutions could be to expand the supply of evidence-based programs, particularly for issues where these are lacking, as well as to make evidence accessible, which requires sufficient information about the underlying evaluations so policymakers can correctly apply the evidence to a given problem. Beyond that, such institutions could also be positioned to help governments employ evidence. One idea is for them to run annual competitions – themed around a top policy priority for that year – to award technical assistance from evidence institutions to selected local governments. Hands-on support could help governments make progress on that policy priority by reassessing their related program investments, setting up performance tracking systems to flag successes and challenges during implementation of key programs, and elevating the most promising programs for rigorous evaluation.

Jonathan Breckon
Director,
Alliance for Useful Evidence, Nesta

Five Key Challenges in Setting Up a What Works-Type Institution

Below are some common issues for anybody setting up a new evidence centre – and ideas on how to surmount them. ((These recommendations are based on interviews with leaders in What Works-type centres, a review of the ‘grey literature’ and evaluations of evidence intermediaries, as part of a study we conducted for the Nuffield Foundation, a UK non-governmental organization. The ideas below also reflect my own anecdotal experience, particularly my experience in incubating the What Works Centre for Children’s Social Care, funded with £10m from the UK’s Department for Education.)) The focus here is on the UK What Works Centres, ((Seven centres are formally part of the What Works Network. The count of ten used here includes two ‘affiliates’ – the What Works Centre for Scotland and the What Works Centre for Wales – as well as the What Works Centre for Children’s Social Care, currently in incubation. The fact that the affiliates are not formally part of the network says a lot about British politics and devolution – the diverging paths between Whitehall and the Scottish and Welsh Governments.)) ten independent institutions dedicated to evidence synthesis, generation, transmission and adoption. We have also interviewed other What Works-type organizations in the UK and overseas, such as the US clearinghouses.

      • Mobilize your knowledge
        The biggest single challenge for any What Works-type organization is having your evidence ignored. The research you synthesize just sits there on a website, unused. You need to actively engage your target audiences. Many institutions we spoke to regretted not putting enough resources into communication and marketing at the start of their work. For instance, in its early years, the U.S. Department of Education’s What Works Clearinghouse focused too much on the excellence of its research, and not enough on communication. It has now moved towards a stronger emphasis on ‘usability’ and ‘educator-friendly products.’ ((United States Government Accountability Office (2013). “Further Improvements Needed to Ensure Relevance and Assess Dissemination Efforts.” Report to the Committee on Education and the Workforce, House of Representatives, p. 12. Available at: https://www.gao.gov/assets/660/659425.pdf.))
      • Avoid weak evidence
        Sometimes there is not enough evidence out there for you to review. One UK centre was nicknamed the ‘What Doesn’t Work Centre’ because its reviews turned up only negative results or poorly designed evaluations. If this is the case, it’s important to fill the gaps and do some original research. For instance, we are running some randomized controlled trials for the What Works Centre for Children’s Social Care. Others, such as the Centre for Homelessness Impact, publish ‘Evidence Gap Maps’ to show the research community what needs to be done.
      • Find sustainable funding
        Gaps in the research bring us to the third core challenge: money. Every organization claims to need more. But it’s not just about the quantity of funding; its sustainability matters too. Centres can waste lots of time chasing grants from government or non-profits. The ideal is a core of funding over a long period, such as the Education Endowment Foundation’s £125m founding grant from the UK Department for Education. Compare this to the What Works Centre for Wellbeing, which is funded by 17 different Whitehall agencies. ((What Works Wellbeing (n.d.). “Our Partners.” https://whatworkswellbeing.org/about/our-partners/.))
      • Avoid academic capture
        You need to steer clear of being just another university research body. You are there for other audiences, such as teachers, police officers, social workers and policymakers, not for academic advancement alone. That doesn’t mean that your centre should avoid locating inside a university – some successful ones have found homes in higher education, including at the LSE (the What Works Centre for Local Economic Growth) and at Cardiff University (the Wales Centre for Public Policy). But if you are based in a university, you must foster an independent-minded culture, with an eye beyond the churn of academic publishing.
      • Hire brilliant staff
        Finally, one crucial element is making sure you have good leaders. They need to be credible with a range of audiences. To give an example, the energy and entrepreneurialism of Rachel Tuffin at the What Works Centre for Crime Reduction has been vital in advancing that institution. Another example is David Halpern, the What Works National Adviser, based in the UK’s Cabinet Office. He is just as comfortable getting the ear of Ministers as he is speaking with Professors.

Nobody believes that ‘evidence speaks for itself.’ It needs leaders who get out and make sure evidence is used at the right time and place. Leaders matter, not just institutions.

Authors

Lisa Lalande
Joanne Cave
Adam Jog

Release Date

November 14, 2018