We need a more realistic approach to implementation in healthcare (Part 1)


by Kristian Hudson

Pilots Never Fail, Pilots Never Scale

Implementation science has its origins in a push approach. We as implementation researchers identify the evidence-based practices that ‘need’ to be implemented. We tell health systems, hospitals, schools, communities and clinics about these interventions and ask whether they want to engage with us in a clinical or implementation trial. If they agree, funding is provided externally or internally, local managers and their teams are informed that the project is happening, researchers are funded to study the implementation, and the executives and policy makers who gave the green light expect findings that tell them how to scale up these interventions. Implementation begins, and most pilots are reasonably successful.

But a few factors reliably prevent long-term sustainment, widescale scale-up, and research findings that tell us how to sustain and scale, hence the phrase ‘pilots never fail, pilots never scale’.

Firstly, even though the practices we suggest are evidence-based, that evidence is tied to the context of the original trial, and pilots often try to squeeze out as many awkward contextual factors as they can. The ‘evidence’ for the evidence-based interventions we suggest may not match the realities of the places where we try to implement them. The kind of knowledge we produce is not only context specific; it also tends to be academic and generalisable rather than practical and transportable. Findings like barriers and facilitators, and implementation outcomes like ‘acceptability’, don’t tell future sites how to implement or sustain the intervention.

Secondly, even though health systems exist to provide high-quality care to patients, the truth is that return on investment and the bottom line matter most. No matter how long a site might need to implement something or how much it might cost, limited financial support is offered on the basis of specific goals being met within a specified timeline. Despite health systems’ best efforts, these financial realities get in the way of implementation.

These are things that we as implementation researchers don’t really control. We haven’t learnt enough, or lobbied enough, in the policy arena to create the conditions that would help us succeed. We end up working at the local, microsystem level. In our best scenarios we find champions: a nurse, a doctor, a charity, a member of the public who really wants to implement this thing. But then they are swimming upstream, because they are working within a system whose larger financial and other conditions are not met and sit outside our spheres of influence.

So it becomes very difficult to achieve even short-term implementation goals. And even when the research team or the practice reaches those short-term goals, reaching long-term goals and keeping the intervention sustained is very challenging and requires the whole organisation to work as one.

Now this sounds pretty bad, but it’s helpful to call out this reality. One of the most cited implementation scientists, Laura Damschroder, refers to these conditions as ‘kiss of death’ conditions, and they occur over and over again. I’ve seen them in the projects I’ve been working on. Something works and helps the system, but it’s more expensive, so it gets dropped; something reduces staff burnout, but it also takes staff away from patients, so it is discontinued; a pilot is very successful, but widescale roll-out is a complete failure. We need to call this stuff out so that other people can work on it. There’s a lot that can be done, but we may be putting a lot of energy into the wrong places.

We probably need to bring people with experience of policy research into our research teams, so that our micro-level work also expands to the macro. It is giant factors in the outer context that determine sustainability and equitable outcomes, and they have to do with levers around policy, funding and advocacy that implementation researchers are not studying. This might be because it is outside our comfort zone.

What should we do?

It is great to see a lot more second-level translation, i.e. the translation of implementation science theories, models and frameworks into practical tools and resources people can use (check out https://thecenterforimplementation.com). It is also great to see some implementation scientists talking about implementation support practitioners (Albers, Metz et al. 2020): the professionals who work in health systems and support others in implementing, sustaining and scaling evidence-informed practices, policies and programs for population impact. I know many nurses, doctors and other healthcare professionals who have spent years getting things implemented. They have a great deal of practical implementation knowledge that is simply not captured in the scientific literature, and they are often referred to as unsung heroes, an invisible workforce or support system. These people are not publishing much and not giving the keynotes at conferences, and that’s a shame. As implementation researchers we should probably focus on them a lot more. I developed an approach which both captures their learning and facilitates their implementation efforts. Check out the Essential Implementation podcast on YouTube for videos on this: https://www.youtube.com/watch?v=wg-ypyDrbRo&t=537s

Another positive in the last few years has been a much greater emphasis on bringing user-centred design to clinicians and patients. Intervention designers focus on the users and their needs in each phase of the design process. Instead of developing an intervention on its own, research teams develop a bundled package that contains, say, a decision aid but also includes implementation strategies and approaches to implementation. They are trying to provide the intervention and the implementation together, something potentially easier to implement and package than the interventions we have been providing for the last few decades. I was lucky enough to work recently with a research team on a project called BRUSH, where we had a go at doing this and developed an implementation toolkit for toothbrushing in schools. It was based on the opinions of multiple stakeholders ranging from commissioners to teachers to parents to children: https://arc-swp.nihr.ac.uk/research/projects/brush/. You can check out the toolkit here:

Some professionals involved in implementation science have also demonstrated an increasing level of self-awareness that seems particularly critical right now. For instance, it’s been wonderful to see critical analyses of issues around equity and of the disconnect between implementation research and practice (Rapport, Smith et al. 2022; Metz, Jensen et al. 2022a). Although there is still a way to go until enough people in the science acknowledge how they may have contributed to this, some of us have realised that what we thought of as being collaborative in the past was actually just data extraction, and we have to own that.

I think the steps I have mentioned above will help. However, in increasingly complex systems, with tremendous forces working against implementation that are beyond our control, there is perhaps one thing we have underestimated. It has been sitting under our noses, and we can all potentially benefit from it: appreciating the importance of trust and relationships.

The relational side of implementation

I’ve had Allison Metz on the Essential Implementation podcast a couple of times, and she talks very eloquently about the importance of relational implementation. Allison and a few other professionals involved in implementation science have been writing a lot about this. They feel that trust, psychological safety, social cohesion, cohesive networks and alliance in our work can be measured rigorously, and that these things should be treated like dependent variables (Metz, Jensen et al. 2022). Can we create greater trust? Can we create more cohesive networks? Can we create more psychological safety in systems where most people don’t feel safe?

In the NHS a lot of nurses don’t feel safe, doctors don’t feel safe, patients don’t feel safe, yet we are trying to implement at rocket speed, and people are running for the hills because there is a nervousness in the system. Forty thousand nurses left the service in 2022 alone. Allison suggested on my podcast that a lack of trust is probably one of the biggest ‘kiss of death’ variables. But when people on the ground, such as implementation support practitioners, talk about this, researchers often see these things as ‘soft’, i.e. things we cannot really measure or understand that well. Yet utilising and understanding these ‘soft’ relational skills is one of the biggest challenges we face. It’s important we write about these things. We are still publishing up a storm on the local microsystem variables, but not on these arguably more powerful, albeit harder to define, relational variables.

If there isn’t a high level of trust, trying to support learning in an implementation initiative can be really hard

Trust is so important because if we’re carrying out implementation research, or trying to get something implemented, the people in that setting need to trust us. They also need to trust each other. I worked on a project last year where multiple parts of a health system, including urgent and emergency care, social care, mental health and the ambulance service, came together to try to reduce ambulance conveyance rates. They tried to do this by treating patients at home rather than in hospital, and the whole thing was supported by an improvement team. There was political and financial backing for the project, and everyone felt something needed to be done about the long queues of ambulances outside hospitals. However, the ambulance service did not have established relationships with the nurses, doctors, social workers and mental health professionals in the other services. They didn’t have an established relationship with the improvement team. They reported that many of their concerns about the intervention were not being listened to. The project had a number of outer-setting variables in its favour, e.g. policies and laws, financing, external pressure, but the most important relationships had not been established. There was a lack of trust. The program was implemented but failed to reduce ambulance conveyance rates, and the ambulance service stopped working within the project, which led to its eventual demise. Had there been a greater level of trust between the ambulance service and the rest of the stakeholders, the intervention might have been more effective, and the effort might have set more realistic goals.

The fact is service providers within a health system compete for pounds and resources. Research teams compete for funding. The Applied Research Collaborations I work within compete for the same. Yet we expect these different groups to come together for large initiatives and share what their challenges are without really taking the time to develop trust and cohesion between them. We don’t make sure there is psychological safety. We also ask them to do these things under immense time pressures and financial constraints.

We must give projects time to build trust

When implementing, we can apply a high level of assistance to get things done quicker and meet those short-term implementation goals, or a low level of assistance that follows the pace of the people in the setting. If the latter option is taken, teams have more time to build their self-awareness, individually and collectively, about what it’s going to take to get a change in place. This often does not follow our funding timelines, but it does allow much-needed time to develop trusting relationships. This might be why achieving short-term goals does not necessarily predict future sustainment. For example, a lot of teams can’t get their people hired in time. They are slow to start, but they keep their commitment even after the research team has left and discontinued its support. These teams have been shown to sustain interventions better than teams that meet their short-term goals on schedule (Nevedal, Widerquist et al. 2023).

A new role for implementation researchers and those involved in supporting implementation

Allison Metz and her team of researchers have taken on the role of promoting implementation by getting teams to recognise their collective values. They work to develop shared mental models and embed routine values reflection into every team meeting. For example, it might turn out that everyone involved in a particular project believes the project should be family led, equity focused and evidence driven. At the end of the meeting there is a focus on what actions have been taken, what resources are going to be allocated, what decisions were made and how far those decisions reflect the values everybody agreed on at the start. 

This takes time, and there can be resistance to it, but it can seriously pay off. The team might completely change the strategies they originally chose, because what they thought was patient led was actually the opposite. They might build technical capacity around data, and they might collectively take a moment to recognise that they have reached key milestones and moved through the stages of implementation. But perhaps even more importantly, they will have been constantly asking the questions: are we a team? Are we a cohesive and trusting team? Because it has been shown in the literature that when people trust each other, they innovate and they learn. “Formal time for debriefing and reflection is a key part of implementation” is a key sentence in the original 2009 article on the Consolidated Framework for Implementation Research (Damschroder et al., 2009). It took something that was essentially soft and gave it weight. In all our team meetings we should be ‘building the muscle of reflection’, not just on the work we can see but on what we can’t see. People don’t realise what a big part relational connection plays in all this. For example, if people are missing meetings, multitasking during a meeting or looking annoyed, this can destroy implementation.
