The concept of evidence-based practice (EBP) has become the gold standard in health care. It envisions that new and innovative findings (ideas, treatments, technologies and methods) will be incorporated into the education of health care teams and thus lead to improvements in real-world practice through the application of current knowledge. It makes sense that we look to the research literature (peer-reviewed, published studies), evaluate what we find, and use evidence in our planning and our ways of doing things. The Oxford Centre for Evidence-Based Medicine (Howick et al., 2011) has developed a hierarchy of likely best evidence with five levels, from strongest to weakest: systematic reviews, randomised controlled trials, cohort studies, case series and clinical/mechanistic reasoning. It links the evidence to whether it is being used for diagnosis, prognosis, therapy and prevention, or economic decision analysis. Eccles and colleagues (1998) used three evidence categories to develop guidelines for the primary care management of dementia:
I well-designed randomised controlled trials, meta-analyses or systematic reviews;
II well-designed cohort or case control studies;
III uncontrolled studies or external consensus.
They also used a measure of strength of recommendation based on these three categories:
directly based on category I evidence;
directly based on category II evidence;
directly based on category III evidence OR extrapolated recommendations from category I and II evidence;
based on clinical opinion from a group.
But in practice, do we use evidence to make our decisions, whether about the management of dementia or our view of the world? Our contention is that, while evidence and innovative knowledge should inform practice, they must first be created and then transferred within health care and education systems. This is the state of current work in interprofessional education (IPE).