The Trouble With Systematic Reviews
Published by Yolanda Koscielski

Jon Brassey, founder of the medical database Trip, expressed some concerns over the quality of systematic review research methodologies during his webinar this morning.
One concern is that systematic reviews are "untenable" and "unsustainable" for clinical practice: a medical professional faced with a declining patient doesn't have time to wait for a six-month or two-year systematic review to determine the effective treatment.
Beyond time constraints, he notes that systematic reviews suffer from some methodological problems. Cochrane reviews, for instance, are prized for their transparency, but Brassey claims their lofty aim to "...appraise ALL high quality research evidence relevant" is not being met. In fact, he finds 64.2% of Cochrane Reviews are out of date, 20% are too small to provide an accurate assessment of effect size, and unpublished trials are frequently left out. The latter is particularly problematic, as a definite bias has been found in which trials make it to formal publication: those with positive outcomes are favoured. He claims these three factors undermine the accuracy of a systematic review.
The Tamiflu example underscores the need for the medical community to review all clinical trials before drawing conclusions, including when researching for systematic reviews. Sometimes adverse effects are reported only in unpublished clinical trials, not in the published journal articles.
Enter rapid reviews. These suffer from a poor reputation. The old way of running a rapid review followed these steps: "receive question --> rapid search --> crude appraisal --> narrative analysis". Jon highlighted a few key problem areas with rapid reviews, including "limited evidence base to guide methods" and "no obvious rapid review intellectual core".
Yet, he found that a four-hour rapid review yielded an 85% consistency rate with the results of systematic reviews. He further found that comparable results were obtained with a five-minute rapid review when done in conjunction with a search backed by machine learning and sentiment analysis. (In sentiment analysis, the valence of an article is computer-analyzed; the computer requires ingestion of about 500 articles to learn how to interpret the language.)
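To make the sentiment-analysis idea concrete, here is a minimal sketch of valence classification in plain Python. The tiny corpus and labels are purely illustrative (a real system would ingest hundreds of labeled articles, as noted above, and use a proper classifier rather than raw word counts):

```python
from collections import Counter

# Hypothetical mini-corpus of labeled abstract snippets (illustrative only;
# in practice roughly 500 labeled articles would be needed, per the webinar)
TRAIN = [
    ("significant improvement in patient outcomes", "positive"),
    ("treatment was well tolerated and effective", "positive"),
    ("no significant difference versus placebo", "negative"),
    ("adverse events led to early termination", "negative"),
]

def train(corpus):
    """Count word frequencies separately for each valence label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Score a new abstract by which label's vocabulary it overlaps with more."""
    words = text.lower().split()
    scores = {label: sum(counts[label][w] for w in words) for label in counts}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify("trial showed significant improvement", model))  # → positive
```

The point is only to show the shape of the pipeline: ingest labeled text, learn word-to-valence associations, then score unseen articles, which is what allows a machine-backed search to filter studies by reported outcome at rapid-review speed.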
Brassey believes search personalization and optimization will be the future of rapid reviews, especially with respect to Trip. He notes MeSH entries are already being assigned by computer before being vetted by a human indexer. For more info, see Tsafnat et al.'s editorial, Automation of Systematic Reviews.
Jon notes that one of the key tools for optimizing search could be clickstream data, which may reveal which articles are particularly useful via clicking patterns. A second useful tool will be personalization: users can identify their research community (e.g., pediatrics) and view tailored search results. For information specialists and other interested researchers, though, Jon assures that an opt-out process will be available, whereby neutral search results are displayed.
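A clickstream-boosted ranking of the kind described might look like the following sketch. The log format, scores, and the `weight` parameter are assumptions for illustration, not Trip's actual implementation:

```python
from collections import Counter

# Hypothetical clickstream log: which article users opened for a given query
click_log = [
    ("asthma treatment", "art-17"),
    ("asthma treatment", "art-17"),
    ("asthma treatment", "art-03"),
]

def rerank(results, log, query, weight=0.1):
    """Boost each result's base relevance score by its click count for this query.

    `results` is a list of (article_id, base_score) pairs; the returned list
    is sorted best-first after the click boost is applied.
    """
    clicks = Counter(article for q, article in log if q == query)
    return sorted(
        results,
        key=lambda r: r[1] + weight * clicks[r[0]],
        reverse=True,
    )

results = [("art-03", 0.90), ("art-17", 0.88)]
print(rerank(results, click_log, "asthma treatment"))  # art-17 now ranks first
```

Neutral (opt-out) results would simply skip the boost, i.e., rank by base score alone, which is the behaviour Jon describes for information specialists.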
Some interesting comments were raised in the Q & A:
- Concerns that machine learning will incorporate bias and have difficulty linking trials
- Concerns over search neutrality
- Concerns that private companies (e.g., pharmaceutical companies) would be able to manipulate the search optimization