lessons i've learned from looking at the medical literature
there have been several concerning issues and lessons that I have learned in the process of doing these blogs over the past several years. I am sending out this email/blog as a follow-up to some of the methodological issues, and perhaps incorrect assumptions, inherent in many clinical studies and their application to actual patients, as noted in the recent blog on placebos (see email of 11/14/26, entitled "Benefits of placebo for low back pain, and some random thoughts").
--meta-analyses:
--there is huge variability in the actual utility of meta-analyses in making clinical decisions. these analyses are mathematical concoctions that try to combine different studies with usually very different people (different inclusion/exclusion criteria, different levels/types of comorbidities, different ages, different ethnicities, often different doses of the med being assessed, even somewhat different outcomes measured). and the meta-analyses themselves have different inclusion criteria (the minimum number of people in a study they will include, the authors' assessment of the quality of the study). and they use different statistical analyses (eg, some do propensity score matching as a means to control mathematically for different patient baseline characteristics, or they may use different basic statistical analyses; a bare-bones sketch of propensity matching appears just below). also, in some cases the meta-analysis is overwhelmed by a single very large study (eg, in a meta-analysis of 10 studies, the one with many more patients carries much more statistical weight than the others, even if the smaller studies were actually methodologically better; the second sketch below illustrates this). as a result i have seen nearly simultaneous meta-analyses on the same subject in different journals coming to different conclusions.
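for the propensity-score matching mentioned above, here is a minimal sketch on simulated data; all variable names, coefficients, and rates are invented for illustration, and real analyses add much more (balance diagnostics, calipers, sensitivity analyses):

```python
# a bare-bones sketch of propensity-score matching on simulated data, just to
# show the mechanics alluded to above; all names and coefficients are invented.
# real analyses add balance diagnostics, calipers, and sensitivity analyses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(55, 10, n)
diabetic = rng.binomial(1, 0.3, n)

# treatment assignment depends on baseline characteristics (ie, confounding)
p_treat = 1 / (1 + np.exp(-(-4 + 0.06 * age + 0.8 * diabetic)))
treated = rng.binomial(1, p_treat)

# the propensity score: modeled probability of treatment given covariates
X = np.column_stack([age, diabetic])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 1:1 nearest-neighbor matching (with replacement) of treated to controls
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = [c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx]

# after matching, baseline characteristics should be much more comparable
print("mean age, treated vs matched controls:",
      round(age[t_idx].mean(), 1), "vs", round(np.mean(age[matches]), 1))
```

the point of the sketch is simply that matching trades away part of the sample to get more comparable groups, and different meta-analyses making different choices here can reach different answers from the same studies.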
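and to make the weighting point concrete, here is a toy example of standard fixed-effect, inverse-variance pooling, with entirely invented effect sizes and standard errors (not taken from any actual meta-analysis), in which one very large trial swamps nine smaller, concordant ones:

```python
# a toy fixed-effect, inverse-variance meta-analysis: 9 small trials all show
# roughly a 30% reduction (log risk ratio about -0.30), one very large trial
# shows essentially none; all numbers are invented.
import numpy as np

effects = np.array([-0.30, -0.25, -0.35, -0.28, -0.32,
                    -0.27, -0.31, -0.29, -0.33, 0.05])   # log risk ratios
ses = np.array([0.20, 0.22, 0.21, 0.19, 0.23,
                0.20, 0.21, 0.22, 0.20, 0.04])           # standard errors

weights = 1.0 / ses**2                       # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

print(f"large trial's share of total weight: {weights[-1] / weights.sum():.0%}")
print(f"pooled log risk ratio: {pooled:.3f}")  # pulled toward the large trial
```

with these made-up numbers, the single large trial carries roughly three-quarters of the total weight, and the pooled estimate (about -0.04) essentially echoes that one trial rather than the nine smaller trials that agree with each other.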
--there was a really good article looking at the pyramid of the value of different types of clinical evidence (see http://ebm.bmj.com/content/21/4/125.short?rss=1&ssource=mfr , or Evid Based Med 2016;21:125-127, doi:10.1136/ebmed-2016-110401), which, unlike other "evidence pyramids" in the literature over the past 20 years, dismissed meta-analyses/systematic reviews, and highlighted, for example, that study design itself (ie, being an RCT) does not necessarily mean that a study is "better" and should be the one influencing clinical practice just because of its design, over a good cohort study (they demonstrate this by drawing their schematic pyramid of evidence-based medicine with wavy lines separating the types of studies, instead of straight-line, clear-cut separations of the value of studies by their design; and they do not include meta-analyses/systematic reviews in the pyramid). to me, RCTs are clearly limited by their inclusion and exclusion criteria, and suffer from reductionism (see prior blogs, but basically reducing "n" patients into some mathematical average of, eg, "a 53 year-old patient, 35% female, 78% white, 37% diabetic, with no renal failure and 56% on aspirin......"), and then trying to apply the results to a totally different individual patient you are treating, with different ethnicity, comorbidities, meds, etc (the sketch just below illustrates how few actual patients such an "average" may describe).
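here is a toy illustration of that reductionism point, with baseline numbers invented to mimic the composite "average patient" described above, asking how many simulated participants actually resemble that composite:

```python
# a toy illustration of the "average patient" problem: baseline distributions
# are invented to mimic the composite described above, and the question is how
# many simulated participants actually resemble that composite.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(53, 12, n)
female = rng.binomial(1, 0.35, n)
diabetic = rng.binomial(1, 0.37, n)
on_aspirin = rng.binomial(1, 0.56, n)

# the baseline table's "average patient"
print(f"mean age {age.mean():.0f}, {female.mean():.0%} female, "
      f"{diabetic.mean():.0%} diabetic, {on_aspirin.mean():.0%} on aspirin")

# fraction of participants who look like that composite (a 48-58 year-old,
# non-diabetic male on aspirin)
close = (np.abs(age - 53) < 5) & (female == 0) & (diabetic == 0) & (on_aspirin == 1)
print(f"{close.mean():.0%} of participants resemble the 'average patient'")
```

with these invented distributions, typically well under 10% of participants resemble the "average patient" that the trial result nominally describes, which is the heart of the applicability problem.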
--guidelines (also not included in the pyramid of the value of evidence-based medicine, above):
--there has been an unfortunate evolution of clinical guidelines, with a few dramatic shifts over the decades:
--the older guidelines were written by the NIH or similar governmental organizations, with an emphasis on bringing in different experts both within the field and, at least in my experience, some outside of the field (eg clinical people), and on providing a more consistent, less biased, and independent validation mechanism for the recommendations
--perhaps related to ideological or financial imperatives, newer guidelines are more often being shifted from the governmental agencies to professional societies, creating a few problems:
--guidelines may not reach the same conclusions: eg, the early versions of the American Diabetes Association and American Heart Association guidelines on blood pressure goals differed. then, what is a clinician to do??
--the professional societies' guideline-writing groups often do not include practicing clinicians (at least from what i've seen), but mostly the higher-ups (ie, mostly researchers) in the professional societies. there is often a significant financial conflict-of-interest among many guideline-committee members, though this is being watched and reported more now than before, more with some professional societies than others. but, beyond those direct financial/other interests of some of the specialty society leaders, i would guess that it is not easy/comfortable for others within the societies to be critical of them (they are the "leaders", with disproportionate influence within the writing committee and within the specialty society)
--and there is a huge profusion of guidelines, from all of these societies, to the point that it is pretty much impossible to keep up with them
--however, i think the real reason that guidelines are not considered part of the "evidence pyramid" noted above is that there is no external validation metric used for these guidelines: a group of specialists sit around a table and make recommendations about how we should treat patients, with an inherent conflict-of-interest above and beyond those of specific leaders promoting a technique or drug from which they may personally benefit. is it surprising that the American Urological Association has historically been much more aggressive in pushing for PSA screening? or the American Cancer Society historically pushing for more cancer screening? or the American College of Radiology promoting more mammograms?
--so, the best model to me is reverting to the way guidelines used to be created, as is currently done in other countries that have a single uniform approach to guidelines (eg, the NICE guidelines in the UK are pretty exemplary to me: very thoroughly researched, with, i think, pretty unbiased and thoughtful recommendations), using the best external validation metric to promote the best, least-biased recommendations based on known data and relatively unbiased expert opinion, and informed by practicing clinicians. probably the best we have now in the US is the USPSTF, though they also have an important-to-know filter: they usually need strong support from RCTs to really endorse an approach (eg, see http://gmodestmedblogs.blogspot.com/2018/08/uspstf-does-not-back-lipid-screening-in.html , in which they do not recommend lipid screening in adolescents, despite what i think is pretty compelling though circumstantial evidence, basically because there are no good 30-40 year studies following 12 year-olds, randomized to diet/exercise/perhaps meds at some point, and looking at clinical outcomes).
--using on-line sources for quick guidance (eg Up-To-Date, etc)
--these are also not on the "evidence pyramid", for reasons similar to the guidelines issue: the entries are the non-validated opinions of a few individuals about how to evaluate, diagnose and treat patients. there are no upfront disclosures of commercial interest (if you click on an author's name, then on disclosures in Up-To-Date, you can get the info, but it is a few clicks away, and i would guess that a busy clinician looking for a quick answer probably does not do this often. and even then, the information is only that the author gets money from, perhaps, a specific drug company; and, i would also guess, most of us primary care clinicians have no idea which meds that drug company makes, and therefore which suggested med in the Up-To-Date review might be promoted more...)
--that being said, i do not know a clinician (including myself) who does not use one or more of these sources pretty often, to get quick guidance about what to do with the patient in front of them.... it is so easy, typically has a review of the relevant studies, and gives very clear guidance. the only issues are bias and reliability.....
--misquoting references
--as mentioned in a few blogs, articles sometimes misquote references, claiming incorrectly that a previous study came to a certain conclusion. so, it is useful to check the original article when an article makes a statement about another article that seems out-of-line. this is a lot of extra work, though way easier than it used to be (often you can click on a hyperlink of the reference, or do a quick online search: easier than going to the library...)
--even more commonly (though still not very common), articles sometimes cite a reference incorrectly (ie, you look at the article cited and it has nothing to do with the author's point; perhaps an error by the author or journal editor in making sure that the citation matches?)
--supplemental materials
--oftentimes, some of the most important material is relegated to the supplemental materials (including important subgroup analyses, methodologic issues, data backing up some of the article's conclusions, conflicts-of-interest, etc), which really give lots of insight into the real value of an intervention. these are only accessible online (an issue if you do not subscribe to that journal) and are, i think, a significant impediment for many clinicians to access. in cases where i cannot get a specific article and have emailed the author for a copy, i only get the PDF, and unless i want to pay $30-50 to get the article through the journal (which i am not willing to do), i cannot see the supplementary materials.
--using not-so-relevant clinical endpoints
--there has been a trend toward using composite endpoints (perhaps to make the likelihood of an intervention's benefit higher and more likely to be statistically significant) which just don't make sense, such as combining a really important outcome with much less important ones. for example, a recent blog looked at CPAP for OSA (see http://gmodestmedblogs.blogspot.com/2016/09/cpap-does-not-reduce-cardiovasc-risk.html), assessing CPAP utility for the composite endpoint of hard cardiovascular events plus the development of hypertension. if there were benefit for significant hard cardiovascular events, i would be quite inclined to suggest CPAP for my patients. but if CPAP only decreased hypertension a little (though statistically significantly), i would treat that by reinforcing lifestyle changes, or using a med if needed, and would not prescribe CPAP. or, another example: the ADVANCE study, which looked at the effects of tight blood sugar control on hard CVD outcomes plus diabetic nephropathy. this seems pretty silly. we know from many studies that tight control helps prevent diabetic nephropathy; the more important clinical issue is cardiovascular benefit or harm. and adding a known quantity of decreasing nephropathy into the "composite" endpoint just dilutes/distorts the results. this study really highlights the general issue of lumping together non-equivalent outcomes (it is hard to argue that developing early nephropathy is somehow equivalent to, and should be numerically added to, CV deaths or nonfatal strokes; or, in many other studies, lumping together all-cause mortality with the need for an additional clinical procedure). i raise these issues as examples, but this is really a very common finding (the simulation below shows how a composite can look "positive" even when the hard outcome is unchanged). and this approach of combining endpoints may be worse now, since a large percent of the studies done are designed by drug companies, etc, which have a vested interest in the most positive outcome. and sometimes one cannot disaggregate the individual outcomes without access to the supplementary material....
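to make the dilution problem concrete, here is a small simulation with invented event rates, loosely echoing the CPAP example above (none of these numbers come from the actual trials): the treatment has no effect at all on the hard outcome, but because the soft outcome is much more common, the composite still shows an apparent benefit:

```python
# a made-up simulation of the dilution problem: treatment has no effect on
# hard cardiovascular events but reduces the much more common soft outcome
# (incident hypertension), so the composite still looks "positive".
# all event rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 5000  # patients per arm

def arm(p_hard, p_htn):
    hard = rng.binomial(1, p_hard, n)
    htn = rng.binomial(1, p_htn, n)
    return hard, np.maximum(hard, htn)   # composite = either event occurred

hard_c, comp_c = arm(p_hard=0.05, p_htn=0.20)   # control arm
hard_t, comp_t = arm(p_hard=0.05, p_htn=0.15)   # treated: only htn reduced

for name, c, t in [("hard CV events", hard_c, hard_t),
                   ("composite", comp_c, comp_t)]:
    print(f"{name}: control {c.mean():.1%}, treated {t.mean():.1%}, "
          f"RR {t.mean() / c.mean():.2f}")
```

with these invented rates, the composite relative risk comes out around 0.8 even though the hard-event relative risk hovers around 1.0; a reader who sees only the composite would badly overestimate the clinically important benefit.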
--as i have railed about in many blogs, i am really concerned that the FDA accepts surrogate endpoints for some clinical diseases. the most evident one is using the A1c as the end-all for new diabetes meds. personally, i don't really care so much about the A1c, just what really happens to patients. many of the new drugs approved do decrease the A1c (though only a little, in most cases), yet have significant and serious adverse reactions (see many blogs at http://gmodestmedblogs.blogspot.com/search/label/diabetes ) which undercut their utility significantly (eg, as cited in many prior blogs: rosiglitazone does well in lowering A1c, just unfortunately increases cardiac events...)
so, i am writing this blog mostly because i have been doing these blogs for several years now, have been reading lots of articles, have the (perhaps) benefit of seeing the evolution over decades of clinical research and of the medical-political-social-economic structure of both the research being done and how it is reported, and am pretty frequently struck by some of the not-often-acknowledged gaps and concerns in that literature and its effects on clinical practice. i would recommend reading the "evidence pyramid" article in the BMJ Evidence-Based Medicine journal referenced above, since it does comment a bit on some of these issues (and did stimulate me to write this). but, of course, i should also comment that all of the above are my observations (ie, not validated by an independent group), but at least i have no (ie, zero) conflicts of interest, other than a bias toward real skepticism in reading articles and guidelines, and against being an early adopter of new meds/procedures.....
if you would like to receive the near-daily emails regularly, please email me at gmodest@uphams.org