Why so many drugs don’t work as well IRL

Happens over and over again: A study comes out analyzing how well a drug is controlling some health problem in the real world, that is, how the drug is doing based on what real patients are actually doing with it, and the results are invariably disappointing.

That is, the post-marketing drug review (which is rarely if ever done by the makers of the drug; it’s usually done by somebody who just likes doing this kind of research) concludes that, well, the drug didn’t work nearly as well as the study (or studies) promised it would; and well, there are way more irritating side effects than there were in the studies; and well, way more people are stopping the use of the drug than dropped out during the studies; and also, there are at least one or two, often more, worrying potential complications that didn’t seem to occur in the pre-release studies.

So gee, maybe we shouldn’t be using this drug as often or in as high a dose or on as many people as the studies indicated we could do.

And if you’ve ever wondered why that’s so, the explanation is pretty simple.

First, studies are notorious for excluding lots and lots of potential participants for various reasons. The usual cop-outs are 1) age, that is, too young or too old, and 2) co-morbid conditions, which in English means that the potential participant is suffering from something else – diabetes, say, or heart disease – that may colour the results, and hey, we want our results as pure as possible.

But in the real world, old people and people with co-morbidities are among the heaviest users of new meds, so no surprise that there are lots of surprises when this huge group begins to use the new product.

Second, and this is even more disturbing, I think: according to a new report, for the last few decades the US FDA (the body that determines if a drug can be marketed in the US, which arguably makes the US FDA the single biggest influencer of which drugs get produced and hence marketed in other countries) has become much more lenient about the criteria it uses to determine the value of a new medication.

In medicalese, according to this report, the US FDA is now relying much more than it used to on “surrogate markers”.

What is a surrogate marker?

It’s a secondary, stand-in test of what a drug is really meant to do for you.

Best to use an example.

So if you wanna develop a drug that you wanna market as something that will cut heart attacks, rather than counting the number of heart attacks in a study group (something that takes years and years and tons of money to do properly unless the drug is a true miracle), the manufacturer of the drug gives the US FDA data about how well the drug lowers cholesterol levels. And since lowering cholesterol levels will eventually cut the risk of heart attack, the US FDA can tell the manufacturer, “Well, that’s good enough for you to sell your product as something that may lower the risk of heart attacks, but please remember to come back in a few years with a post-marketing study to tell us how your drug is actually doing in the real world,” which, good research tells us, is rarely if ever done by a drug maker (after all, what’s the upside to finding that the drug isn’t quite as effective as the studies promised it would be?).

In other words, in analyzing the data about which new drugs should be permitted on the market, the US FDA is acting like NHL general managers drafting 18-year-olds to find the ones that might become real players, something the GMs can do only by looking at “surrogate markers”: Is this young, still pimply boy going to end up big enough, is he tough enough (I really hate that one), is he good in the dressing room (even worse), does he have what it takes, and so on.

So some of those kids end up really good, many end up not being nearly as good as their high draft position would warrant, and in the end, with the occasional exception of the obvious phenoms, the rest of those kids are much more of what you might call a crap shoot.

And so is, I would argue, the effect of most drugs on you as an individual.