When a plethora of information exists, aggregation is essential for finding meaning.
Systematic reviews provide a neat way of aggregating a backlog of information, and the hierarchy of evidence-based medicine separates the medical ‘wood from the trees’ by sieving out what is essential. Linear connections suit our simple minds; cause and effect is what doctors routinely convey to patients.
But this simplified approach has become a dead end. Randomised controlled trials (RCTs) require large numbers to show statistical significance, and even then modern methods of analysis make the null hypothesis far easier to support than any alternative hypothesis. It is no wonder so many RCTs show no difference between treatments.
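A back-of-the-envelope calculation (mine, purely illustrative) shows why. To detect a small standardised effect of 0.2 with 80% power at the conventional 5% significance level, a two-arm trial needs roughly 2 × (1.96 + 0.84)² / 0.2² ≈ 390 patients per arm, close to 800 in total; halve the expected effect and the required numbers roughly quadruple. Head-to-head differences between active treatments are often of this modest size, which is why so many trials end up unable to reject the null.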
The consequences of using a nihilistic model to expand our knowledge base are potentially dire: funding bodies, scientific publications and teaching curricula are embedded with a science that finds either that nothing works better than anything else or, where something does, that the difference is too small to matter in the real world.
To the zealous systematic evidencer, the problem is the world we live in, not the tool, despite a groundswell of ‘evidence’ showing that the systematic review approach to answering a research question is far from perfect. Indeed, decades of attempts by researchers to integrate the systematic review into a real-world environment have missed their mark: evidentiary hierarchies, meta-analyses and AMSTAR all purport to help the reader make meaning of the data while increasing the rigour of the systematic process.
In one recent study* of the common condition chronic obstructive pulmonary disease (COPD), 79 meta-analyses were sampled. Only 18% considered the scientific quality of the primary studies when formulating conclusions, and only 49% used appropriate meta-analytic methods to combine findings. The problems were particularly acute among meta-analyses of pharmacological treatments. In 48% of the meta-analyses the authors did not report a conflict of interest, while 58% reported harmful effects of treatment. Publication bias was not assessed in 65%, and only 10% had searched non-English databases.
“Something is rotten in the state of Denmark,” as the Bard would say.
The problem is the systematic review toolkit, which is just that: a box of tools used to build virtual evidentiary bookcases. Systematic reviews are not medicine, and they genuinely struggle to address complex real-world issues such as personalised medicine, the decline of the disease model and multimorbidity.
Our approach to aggregation must be narrative, not Boolean. We need more than just randomised and controlled research tools. For example, considerations of time and complexity need to be included, not excluded. Similarly, searching is not simply a matter of categorical decisions such as AND, OR and NOT. Rather, we need to be looking for ways to integrate the hows, whys and what-ifs.
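To illustrate the difference (the search string below is hypothetical, not taken from any review cited here): a Boolean strategy such as ‘COPD’ AND (‘tiotropium’ OR ‘long-acting bronchodilator’) NOT ‘asthma’ will faithfully retrieve every paper containing those terms, but it cannot retrieve an answer to why a treatment helped one patient and harmed another. That question requires reading, judgement and narrative synthesis.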
*Ho RST, Wu X, Yuan J, Liu S, Lai X, Wong SYS, Chung VCH. Methodological quality of meta-analyses on treatments for chronic obstructive pulmonary disease: a cross-sectional study using the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. npj Primary Care Respiratory Medicine 2015;25:14102. doi:10.1038/npjpcrm.2014.102