Wednesday, April 28, 2010

Filters: Part IV

So just who is this mystery researcher who thinks that he can use a computer to cure diseases?
Trevor Marshall.
This Ph.D. in Electrical Engineering claims that he cured his own sarcoidosis and that he now has the recipe or “protocol” of drugs to cure all autoimmune diseases. He’s quite well known among patients with autoimmune disease. Not so much among physicians.
Trevor’s basic premise about Vitamin D centers on how that steroid is processed by the body. There is an initial form that is changed by the cell into an active form. The initial, or precursor, form is called 25-D in the MP literature. The active form is called 1,25-D by Marshall. Inside (and only inside) the cell, 1,25-D interacts with a receptor on the cell’s nucleus called the Vitamin D Receptor (VDR). The VDR then turns on or “transcribes” genes to make the cell do certain things it wasn’t doing before the VDR gave it the go-ahead. The VDR is best known for regulating calcium (which is why milk is fortified with Vitamin D), but it also performs a lot of other functions in the body, including helping regulate your body’s first-line defense against pathogens, called innate immunity. The VDR that is located on the nucleus of immune cells tells those cells, among other things, to start spitting out proteins that kill certain types of bacteria.
Trevor’s main point is that his computer disease model shows that 25-D is an antagonist of the VDR. In biology, we talk about compounds that fit inside the “pocket” of a receptor or enzyme. This is where the lock-and-key model comes in. If it’s the wrong key but still fits in the hole, it’s an antagonist: it blocks the protein from doing its thing, like the wrong key stuck in a lock. If the molecule hits all the right hot buttons in the protein and turns in the lock, then the protein thinks it is seeing its native signaling molecule, and the protein goes off and does whatever it has evolved to do. That is called an agonist. Antagonist = blocked signal, agonist = triggered signal. That’s a layman’s definition; I didn’t cover partial agonists, mixed agonists/antagonists, and so on, but it’s good enough to parse Trevor’s work.
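For the programmers in the audience, the lock-and-key picture can be written down as a toy sketch. This is purely illustrative; real receptor pharmacology involves binding kinetics, concentrations, and partial agonism, none of which appear here.

```python
# Toy lock-and-key sketch of the agonist/antagonist distinction.
# Illustrative only; real pharmacology is about equilibria, not booleans.

class Ligand:
    def __init__(self, name, is_agonist):
        self.name = name
        self.is_agonist = is_agonist  # does this ligand "turn the key"?

class Receptor:
    def __init__(self):
        self.bound = None  # whichever ligand currently occupies the pocket

    def bind(self, ligand):
        # First key in the lock wins; a stuck wrong key blocks later keys.
        if self.bound is None:
            self.bound = ligand

    def signal(self):
        # The receptor signals only if the pocket holds an agonist.
        return self.bound is not None and self.bound.is_agonist

vdr = Receptor()
vdr.bind(Ligand("wrong key", is_agonist=False))  # antagonist occupies the pocket
vdr.bind(Ligand("right key", is_agonist=True))   # too late; the lock is jammed
print(vdr.signal())  # False: the antagonist blocks the signal
```

That’s the whole of the layman’s definition in a dozen lines: an antagonist is a space-filler, an agonist is a trigger.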
Marshall claims that, since it’s an antagonist, the precursor molecule, 25-D, suppresses the innate immune response to teeny, tiny bacteria (small even by bacterial standards) without a cell wall. I have to note that the mainstream medical community has yet to accept the idea that these kinds of bacteria are involved in disease. According to Trevor, these cell-wall-deficient bacteria then infect immune cells, causing them to go haywire against your own body in much the same way that the zombies in I Am Legend went haywire on Will Smith.
This stretches my credulity a bit. First, in any other infection I’m aware of, the infected cell either stops working altogether or works at a reduced function, because the infecting organism is taking nutrients from the cell and otherwise interfering with intracellular processes that the cell needs in order to function. Even in those cancers known to be virally or bacterially caused, the cell is not performing its normal functions at a faster rate; it’s spending all of its energy replicating like crazy because the infectious vector has damaged the cell’s DNA or RNA.
Second, it’s hard for me to believe that with all these biomedical researchers looking at infectious disease (and looking to make a name for themselves if they discover a new mechanism for disease), no one else has noticed all these tiny invaders in the immune system. Where’s the proof?
Trevor’s proof is threefold. First, he has a computer model that shows how bad Vitamin D is for you. Second, he has some electron micrographs that show these tiny invaders in immune cells. Finally, there’s his clinical data from everyone who’s voluntarily sending data in to the Marshall Protocol database.
I’m going to tackle these one by one, but in this section I only have space for the computer model. Since that’s the cornerstone of his arguments, I’ll spend the most time on it.
I think from the first two parts of this series you have a really good idea what I think about computer models, despite being the kind of chemist who tends to specialize in computational chemistry and biochemistry.
Just what is Trevor’s model, and how robust is it?
Well, he claims he’s the lone genius who’s found the silicon Holy Grail of autoimmune disease while all the rest of us are fishing around in the dark. Now, one may quibble that I cherry-picked that one quote from 2006 out of context. So let’s examine the record, shall we? Fortunately Trevor runs a website, actually several websites, where he explains himself, so you can see for yourself whether I’ve mischaracterized his position on computer modeling.
Let’s start with that first quote, made during a 2006 presentation to the FDA.
As we enter the 21st century, ‘mathematical reasoning’ (embodied in Molecular Biology) has advanced to the point where we know the precise location of atoms in certain key molecules which control the human body, and we can use the Genome to predict the location of atoms in many other molecules; predict with sufficient accuracy to understand the precise interactions between drugs and those molecules, an understanding which has often proven elusive in the clinical environment.
I think I already beat that one to death, don’t you? It’s not true by any rational definition of “precise interactions”, and that’s that.
But is Trevor really claiming to be a lone genius, or was I making a strawman?
First of all, take a look at the explanations of in vivo, in vitro, and in silico methodology on the Marshall Protocol website http://www.marshallprotocol.com/view_topic.php?id=13667&forum_id=11&jump_to=192851
It is very telling that the problems with in vitro and in vivo studies are specifically noted, but there is no corresponding section on the shortcomings of in silico technology. (Trevor: you’re welcome to use Parts One and Two of this essay for that, or just link to Derek Lowe’s “in silico” subject heading and be done with it).
But that’s just circumstantial evidence, the Marshall apologists will say. Just an error of omission, and a minor one at that. No. It’s a violation of Feynman’s famous description of properly presented science:
It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
Fine, Marshall’s apologists still think I’m nitpicking. Let’s move right along: http://marshallprotocol.com/forum39/9348.html
So now the model is complete.
We know what pathogens can cause Th1 disease, and they were discovered purely by mathematical deduction, and not by looking into petrie dishes... or harming any furry animals
Does this square with what you read in my links about the interaction between in silico, in vitro and in vivo techniques? I will tell you straight away that any biomedical researcher reading that statement will choke. Read the whole post, read the whole thread, see if I’m taking things out of context.
Want more of Trevor’s hubris? Try this thread:
http://www.marshallprotocol.com/view_topic.php?id=11016&forum_id=39&
Those wet biologists are thrashing around in the dark. They need to learn about in-silico methodology...
What? Is there a biologist in the developed world who doesn’t know about modeling in Molecular Biology? I think the evidence is there, that Trevor is claiming to be the lone genius I posited. Yet another example:
There are very few institutions in the USA which are capable of doing in-silico work. A lot is being done Internationally. It will take a new generation of physicians, who are more capable with computers, before there is widespread understanding of the technologies we used to make our breakthroughs.

The concepts of modern pharmacopoeia will have to change. The pragma of "toxins" and "anti-inflammatories" are destined for the trash-heap
Few institutions in the US? Say what? We have more drug companies and biotechs than any other nation in the world and every one of them has to have some sort of modeling group. Those students come to Industry out of Academic modeling groups. The entire US research system is computer-crazy. Just look at Derek Lowe’s “in silico” subject header. Does Trevor’s quote pass your bozo filter? It doesn’t pass mine.
This quote is pretty brazen, too:
The second video was made when I finally realized exactly how the VDR activates when using a DRIP205 coactivator - something which apparently nobody else has figured out yet Precision is the keyword.. and lots of computing power...
No one else. Really?
Computer simulations can help elucidate this problem, no doubt. But in vivo or in vitro data are better. The way to start going about figuring out how a co-activator (a second molecule that’s required to start a signal - in effect we’re talking about a lock that requires 2 keys) works is to start replacing individual amino acids in the known ligand-binding region of the protein. As you replace those amino acids, activity will change, and you get a good idea of which residues are the “hot buttons” in the pocket, and which are along for the ride. Here’s an example of such an experiment on the DRIP-VDR system itself. http://www.ncbi.nlm.nih.gov/pubmed/16949543
So, with a single Google search on DRIP co-activation, you can prove to yourself that Trevor’s not quite the lone genius out ahead of everyone else that he and his followers would like you to believe.
But this one is the kicker. This is the grand high poobah of red flags to a biomedical researcher. When asked what kind of statistics are being collected on the human “subjects” of the Marshall protocol, this is the reply: http://www.marshallprotocol.com/view_topic.php?id=12633&forum_id=39&highlight=in+silico
Some people will never believe the statistics they see in front of them. It wouldn't matter how much extra work we did, they would try to find fault.

At this point it is important to understand that we have produced a disease model, at the level of the Molecular Biology. That is a game-changing event. The statistics mean nothing, except insofar as they support or negate that model.
(Emphasis Trevor’s)
The model means everything and the human statistics mean nothing? The statistics from human experience mean nothing!?!?!? I can’t express how bad this statement sounds to a biomedical researcher, and I can’t repeat this enough: in vivo data trumps all. HOWEVER, and this is the biggest however in the history of the world, the plural of anecdote is not data. If you are running a study, you MUST show the world that you are doing the patients no harm at the very least. The only way to do that is with descriptive statistics on validated measures of disease activity such as the ACR or DAS. We researchers fool ourselves all the time – I pointed out some examples of where this happens to even the best researchers in RA in Part Three – so no one, and I mean no one, is exempt from conducting descriptive statistics on a human experiment.
If you are considering following an experimental protocol whose managers say that statistics on human subjects are irrelevant, you need to pull a Sir Robin and bravely run away. Fast.
But, say the apologists, Trevor is an outsider. Of course he doesn’t use the “correct” terminology. He’s overexuberant. But you’re nitpicking about semantics: he’s right, and you’re just a stick-in-the-mud.
Oh really? I still want to see the evidence. What do we know about how Trevor’s game changing event works? http://marshallprotocol.com/view_topic.php?id=13667&forum_id=11
I set up the computer to do a random, genetically adaptive search of millions of different configurations, and then tell me the best one it finds. Yes, Millions... and sometimes it only finds one good fit, despite all of its trying...
The proteins he’s talking about here are the Vitamin D receptor and possibly the Pregnane X Receptor, which are receptors for signaling via, well, Vitamin D and Pregnane (better to be simple than clever in scientific nomenclature). The VDR is his main target, though, and it’s a nuclear receptor.
The kind of brute force calculation described in that quote is exactly the kind of method that all the professional sources I cited in Part One (Lt. Col. Coates, Derek Lowe, and Milkshake) said was useless because our state of biochemical knowledge is still too low. Was it six waters, or only five? Lt. Col. Coates predicts we might approach this level of sophistication by 2025. I say that’s early by 5 – 10 years, but whoever is correct, the fact remains that it’s not achievable now. Garbage in, garbage out, and we just don’t have enough information from real experiments on physical things to construct a decent model yet.
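To make concrete what a “random, genetically adaptive search” of that sort looks like, here is a toy genetic algorithm minimizing a made-up one-dimensional “energy” function. Everything in it (the landscape, the population size, the mutation step) is invented for illustration and bears no resemblance to a real docking calculation.

```python
# Toy "genetically adaptive" search: a minimal genetic algorithm that
# minimizes an invented one-dimensional "energy" function. A cartoon of
# what docking searches do; real conformational searches run in thousands
# of dimensions over far messier energy surfaces.
import random

random.seed(42)

def energy(x):
    # A rugged toy landscape: many local minima, global minimum near x = 0.2.
    return x * x + 10 * abs((x * 7) % 3 - 1.5)

def evolve(generations=200, pop_size=50):
    pop = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 5]  # keep the fittest 20% unchanged
        pop = survivors + [
            random.choice(survivors) + random.gauss(0, 1)  # mutate a parent
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=energy)

best = evolve()
print("best configuration:", best, "energy:", energy(best))
```

Note what the sketch makes obvious: the search only ever optimizes the scoring function it is handed. If that function is wrong (was it six waters, or only five?), the “best fit” it reports after millions of tries is still garbage.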
So how should data on this kind of work be presented?
This paper on Quantitative Structure-Activity Relationships (or QSARs, the fancy scientific name for what Trevor is trying to do: quantitatively predicting biological activity based on shape) gives some guidelines on how one should be properly presented in the scientific literature: http://pubs.acs.org/doi/abs/10.1021/tx800252e
This principle requires that parameters that reflect both the internal performance of a QSAR model and its predictivity should be provided. The internal performance is characterized by the goodness-of-fit and robustness of the model. The goodness-of-fit measures how well the model accounts for the variation in the response in the training set (1). Robustness measures the stability of the parameters and predictions when one or more of the training set chemicals is removed, and the model is regenerated excluding the removed compounds (1).
In other words, you have to tell everyone exactly how you made the model, and how well its results correspond to experimental results in the real world, either in vivo or in vitro.
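The “robustness” criterion in that quote has a standard concrete form: refit the model with each training compound left out, and see how well the held-out compound is predicted (leave-one-out cross-validation). Here is a minimal sketch on a toy one-descriptor model; the descriptor values and activities are invented for illustration.

```python
# Sketch of the "robustness" check the QSAR guidelines describe:
# leave-one-out cross-validation on a toy one-descriptor linear model.
# The descriptor values and activities below are invented.

def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def q2_loo(xs, ys):
    # Leave-one-out q^2: 1 - PRESS / total sum of squares.
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)  # refit without compound i
        press += (ys[i] - (a * xs[i] + b)) ** 2
    return 1 - press / sum((y - my) ** 2 for y in ys)

# Hypothetical descriptor (x) vs. measured activity (y) for 6 compounds.
descriptor = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]
activity = [2.1, 4.0, 6.2, 8.1, 10.3, 11.9]
print("leave-one-out q2 =", q2_loo(descriptor, activity))
```

A model that falls apart when one compound is removed has memorized its training set rather than learned anything; that is exactly the failure mode the guidelines demand you report.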
Contrast this with what little Trevor has “published” on his model. Actually, the only real technical details in the model were released here. http://precedings.nature.com/documents/52/version/1 Please click on that link and read the comments (which are supposed to be a sort of open-source peer review) before reading on here.
I would like to note that Nature Precedings is an open access non-peer reviewed site. Anyone can post anything (within reason) there to get some preliminary feedback before running the gauntlet of real peer review. Once again, the Precedings are NOT peer reviewed. The normal expectation is that the Preceding will eventually show up in a peer-reviewed publication subsequent to feedback on that site. Trevor “published” this Preceding in 2007, and we’re still waiting for the real paper. Why might that be?
Well, if you read the comments, you will see that Trevor got called on violating the principles of good QSAR publication by one Josiah Zayner:
I would be interested to see what actual parameters and techniques were used for the Energy minimization(if one was done) and MD simulation, gromacs mdp files &c. I see no statistical analysis of the H bond data just a comment that “it looks different”. Have you performed any other methods of in silico analysis such as computing linear interaction energy? Your conclusions seem pretty drastic with no ex silico data to back it up.
Josiah’s a graduate student. http://sosnick.uchicago.edu/people/jz.html Yes. That’s right. Trevor’s getting called on sloppy work by people who don’t even have a Ph.D. yet. (Though I suspect that Josiah’s going to make an excellent researcher when he obtains his terminal degree, and, you know, his degree’s going to be from the University of Chicago).
This is precisely the sort of amateur error that will get your submission bounced by peer review. Trevor and his apologists can grouse about peer review maintaining the status quo and the Old Boys network all they want, but this is exactly the sort of problem (no adequate description of experimental parameters) that peer review is designed to correct. This is prima facie evidence of sloppy scientific reporting. This is the kind of elementary error we don’t need cluttering a scientific literature that’s already full of legitimate mistakes from much more careful researchers. Ask yourself – why didn’t Trevor just come clean with that data instead of getting testy with Josiah? Inquiring minds want to know.
But Josiah’s question brings up another important point: Trevor is using a program called GROMACS for his molecular dynamics calculations.
GROMACS is a piece of freeware http://www.gromacs.org/ first developed by researchers at the University of Groningen. It’s a great piece of software, but it’s off the shelf and it’s FREE. Drug and biotech companies buy proprietary software that is a bit more specific to nuclear or other receptors, and they pay for it because they get better results from it. You’re telling me that Trevor became the lone genius out ahead of everyone else based on using a piece of free software that any graduate student in the world (hi Josiah!) can and does use? Are you kidding me? And he’s the only one who’s figured this disease model out, despite the fact that everyone in the world has access to these tools. Does this premise get through your bozo filter? Can I sell you a bridge?
I know exactly what Trevor and his apologists are going to say. Trevor’s made some super-sekrit modifications to GROMACS. Puh-leeeze. That, too, should have been noted in the Nature Preceding. Can we say “moving the goalposts fallacy”? http://en.wikipedia.org/wiki/Moving_the_goalpost Good, I knew we could.
But you know, to be fair, in a quantum universe where anything is possible Trevor might have made some proprietary modifications. If so, he doesn’t have to publish all the model’s details to get me to believe. He could fund everything he claims he wants to do with the MP by selling this wonder-program to a couple of biotechs or pharma companies. If it’s that good, those guys would sign a confidentiality contract in a heartbeat. Evidence of such a contract would be good enough for me to take on faith that Trevor’s work is good: that serious scientists are willing to bet serious money on it.
Now I’m going to point you to another deconstruction of Trevor’s work by medical scientists who do make their names public on the internet: the guys and gals at Science Based Medicine. I generally have the highest respect for that site, and I urge you to read that post http://www.sciencebasedmedicine.org/?p=563 as well as this one http://www.sciencebasedmedicine.org/?p=681 , right now before going on. However, one of the reasons I’m writing this is that I don’t think that they took Trevor seriously enough. A lot of patients are dazzled by his version of the evidence, the underpinnings of which rest on his claim to be a lone computational genius. So I don’t think the savants at SBM fully explained to those people why Trevor’s computer model gets such derision from them:
The data to support this, and please read it slowly so it sinks in, is based on a computer model (3). Computer simulations, not experiments.
I mean, I understood the contempt in that statement as soon as I read it, but laymen, especially those inclined to believe in scientific tales that are too good to be true, would not. I hope after reading this lengthy backgrounder (and thanks for sticking with me) you understand why both Dr. Crislip and I snort at computer models when used as the sole evidence for a human medical intervention.
But wait, there’s more.
As far as I can tell from what has been published or posted to his various websites, this vaunted computer model consists solely of the receptors he’s interested in and their substrates. When someone uses the term “disease model” there’s usually a whole lot more than that going on. That term implies that the entire biological system is being modeled. Why is this important? Because this differential activity of the two forms of Vitamin D on the VDR that Trevor makes such a big deal about takes place inside the cell.
All drug chemists can tell you about stuff they’ve synthesized that works on the pure enzyme assays, but doesn’t do squat in animals or humans because it’s either not transported into the cell where it needs to be, or gets tackled on the line of scrimmage by an active transporter or a CYP in the liver or something else.
You see, the VDR is a nuclear receptor. That means it sits on the nucleus. Deep inside the cell. In order for Vitamin D to get to that receptor in the first place it has to get through the cell membrane. Vitamin D didn’t evolve to regulate calcium in the body – the steroid is found in animals that don’t even have a skeleton. It’s very ancient. It probably interacts with multiple receptors, not just the one that bears its name. With something that potent floating around the body, one would think that evolution would put some failsafe mechanism in place. And it did.
Meet the Vitamin D binding protein. http://en.wikipedia.org/wiki/GC_%28gene%29 It’s a big chaperone that keeps the 25-D from jumping on the first receptor it sees. This protein puts the 25-D carefully in the tissues it is supposed to be targeting, and no disease model that purports to center on Vitamin D can ignore it. I challenge you to go into the Marshall Protocol sites and find much about this regulatory protein. The best I could find was in this thread:
The D-Binding protein is pretty much still a mystery. It is known that 95% of the 'free' 25-D in the bloodstream, and a fraction of the 'free' 1,25-D is bound to this protein, but the exact molecular dynamics are still being elucidated. It does seem to be part of the control system which is intended to maintain 1,25-D at the correct level in the phagocytes, and which is perverted by the Th1 pathogens.
While Marshall’s apologists will say that this is an old thread, the fact remains that he claimed to have a complete disease model in 2006, and in that time period the VDBP was “pretty much still a mystery” to him. Those two pieces of information do not come together to make a favorable picture of the Marshall Model in my mind. Please run a Google Scholar search on “Vitamin D binding protein” http://scholar.google.com/scholar?q=vitamin+D+binding+protein&hl=en&btnG=Search&as_sdt=8001&as_sdtp=on for yourself and see what comes up. Cites from 1975, 1981, 1989, 1968; in fact my whole first page of hits on that search was pre-2006. One does not need to have “the exact molecular dynamics” in order to conduct some (real life) experiments that yield some information on how this protein delivers Vitamin D to the cell, and while Trevor was optimizing GROMACS, that is exactly what the rest of the world was doing.
Are you starting to have questions about just what exactly this “disease model” entails?
Trevor will come back and claim that only his Molecular Dynamics simulations tell you what’s really going on. If that claim has any traction with you, please go back and start reading from Part One.
Let’s bend my credulity into a pretzel and say that Trevor’s model of the Vitamin D receptor actually reflects reality at that very local level of Vitamin D / receptor interaction (with DRIP). Even now, and certainly in 2006, we don’t know the importance of all of the cells, let alone all of the other proteins (besides the VDR), involved in the pathogenesis of autoimmune disease. Where do these other proteins fit into Trevor’s picture? (We’ll get to the other cells later). The short answer is that he ignores them. He claims they are unimportant. I think otherwise, and his view of the world is not a disease model by my definition of the term. But then I’m part of the old guard who has to die off http://marshallprotocol.com/view_topic.php?id=12021&forum_id=39&highlight=one+funeral before Trevor’s genius can be appreciated. Unfortunately for Trevor, many people who think as I do (i.e. most of the rest of the scientific community) are younger than he is by a significant margin.
But wait, there’s more.
If you want a really good technical shredding of the Marshall Protocol, this http://web.mit.edu/london/www/universe.htm is probably the best place to start, but if you’re in my target audience, i.e. of an almost purely non-technical background, it will make your eyes glaze over. The non-technical reader should note, however, a mention in the introduction section of Mark London’s page to the effect that the MP has evolved in its view of the actual mechanism of Vitamin D pathogenesis, in that 1,25-D was originally blamed as the culprit, but 25-D eventually became the villain.
If you read the really old stuff in the Marshall protocol websites, things won’t add up unless you are paying close attention to chronology. This wonderful molecular dynamics model of disease that allegedly was good enough to trump in vivo data in 2006 switched gears on a number of very important points. And it switched those gears without the drugs used by the Marshall Protocol ever changing.
That’s disturbing on a lot of levels.
Remember the therapeutics the Marshall Protocol uses? Low dose antibiotics and the ARB Olmesartan? Well, there originally was a scientific rationale for the ARB, given in this paper from January 2006:
The ARBs Olmesartan, Irbesartan and Valsartan (Ki ≈ 10 nmol) are likely to be useful VDR antagonists at typical in-vivo concentrations.

That’s right folks, antagonists.
If you’ve forgotten, here’s a layman’s review of agonism / antagonism: an antagonist is an agent that sits in the binding pocket and prevents the protein from doing its job. An agonist mimics the signaling molecule and forces the protein to do its job in the absence of the real signaling molecule.
For some reason, Trevor had changed his mind about the role of olmesartan by August of that same year:
It is certain that the layout of the Olmesartan oxygen atoms in the vicinity of the key SER278 / TYR143 / ARG274 / SER237 VDR residues is that of an agonist. That is, when Benicar docks into the VDR displacing 25-D and 1,25-D it at least partially activates the VDR. The higher the Benicar dose the greater percentage of the Ds will be displaced, according to the standard displacement curve in my FDA presentation slides.

That’s right folks, an agonist. All based on the same model. Which apparently beat real experiments on real living systems, way back in 2006.
What you will find all over the place in the MP website are traces of previous theories that were discarded like tissue paper. Such as the original theory that 1,25-D was causing the VDR to be overactive: http://autoimmunityresearch.org/karolinska-handout.pdf
In order to induce recovery from chronic inflammatory disease, it is necessary to restore VDR functionality by removing all exogenous sources of the secosteroid we call ‘Vitamin-D’, and dampen down over-exuberant VDR activity, for example with the ARB Olmesartan[1].

This old rationale for the use of the drugs in the Marshall Protocol is at direct odds with the current one that I quoted. When the rationale changed, the drug regimen didn’t.
It’s not just the critics who notice the problems, though; even the faithful on the message boards do. People like the guy who goes by the moniker “tickbite”: http://www.marshallprotocol.com/view_topic.php?id=8264&forum_id=39&highlight=tickbite+antagonist
Am I just going insane? I'm sorry Frans, your link is just reiterating this thread....

This paper:

Research
Common angiotensin receptor blockers may directly modulate the immune system via VDR, PPAR and CCR2b
Trevor G Marshall, Robert E Lee, Frances E Marshall
Theoretical Biology and Medical Modelling 2006, 3:1 (10 January 2006)
[Abstract] [Full Text] [PDF] [PubMed] [Related articles]

says that olmesartan is an antagonist...........it uses the word agonist one time to reference a researcher who thought that telmisartan was both an agonist and an antagonist. That's it...........the word agonist and olmesartan never come together in one sentence. Are we saying this 1 year old paper is just phooey?
(Yes, we are.) And the guy who goes by the moniker sdcreacy saw the same thing:
Quick question regarding Olmesartan: are there any references I could read that establish Olmesartan as a VDR agonist biochemically (in-vitro or in cells)? The 2006 paper calls it an antagonist, so I am confused.

I’m confused, too. And sdcreacy is showing other signs of independent thought that you should emulate, talking about corroborating evidence from in vitro studies.
Trevor ignores this discrepancy completely. I have not seen a single statement from him that explains his about-face in 2006. I want to know why his essentially complete disease model predicted in early 2006 that the VDR was overactive and that olmesartan would damp down the activity, while later in 2006 that very same model predicted that the VDR was being shut down by the precursor to Vitamin D, and that olmesartan is restoring normal VDR activity.
Theories change in science all the time. That’s legitimate. The problem with the MP is that these theories are antithetical to one another, and are adopted in sequence, but Trevor’s scientific conclusions never change. Olmesartan the antagonist is beneficial, and olmesartan the agonist is still beneficial. The kindest thing I can say about this is that it is magical thinking.
But wait, there’s more.
Trevor claims that he has shown that the 25-D precursor of Vitamin D is an antagonist of the Vitamin D Receptor, and that in antagonizing (turning off) that receptor, 25-D turns off the innate immune response to the invading bacteria that he believes are the true pathogenic agents in autoimmune disease.
Furthermore, he has claimed to the FDA to have deduced that 25-D is a problem even though by his own calculations it is held in the pocket 10 times more weakly than 1,25-D, and that therefore it will kick 1,25-D out of the pocket only at high concentrations relative to 1,25-D. If he’d bothered to read the literature, he’d find that his calculations are off by a factor of 50 to 100 – the literature says that 25-D is held to the binding pocket of the VDR 500 – 1000 times more weakly than 1,25-D. But those were just piddling little experiments on real systems, not properly done in silico work.
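That factor of 500 versus 10 matters enormously, and some back-of-the-envelope competitive-binding arithmetic shows why. This is a sketch using the standard two-ligand competition formula with illustrative Kd values, not measured numbers.

```python
# Back-of-the-envelope competitive binding: what fraction of receptors
# does each ligand occupy at equilibrium? Standard two-ligand competition
# formula; the Kd values are illustrative, not measurements.

def occupancy(conc_a, kd_a, conc_b, kd_b):
    """Fraction of receptors bound by ligand A when B competes for the
    same pocket (simple equilibrium, no cooperativity)."""
    a, b = conc_a / kd_a, conc_b / kd_b
    return a / (1 + a + b)

kd_125d = 1.0   # arbitrary units; the tight binder
kd_25d = 500.0  # 500x weaker, per the literature figure cited above

# Equal concentrations of both ligands: 25-D barely competes.
print(occupancy(10.0, kd_25d, 10.0, kd_125d))    # roughly 0.2% occupancy

# 25-D at 500x the concentration of 1,25-D: now it competes head-to-head.
print(occupancy(5000.0, kd_25d, 10.0, kd_125d))  # roughly 48% occupancy
```

Run the same numbers with a Kd ratio of 10 instead of 500 and the displacement story looks completely different, which is the whole point: getting the affinity wrong by a factor of 50 to 100 wrecks the downstream argument.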
Trevor’s made the general claim of antagonism other places, as well:
The problem with 25-vitamin D is that it can also physically bind to the VDR and when that happens the process stops. It is called an antagonist because it cannot activate the VDR like the agonist 1,25-D, so none of the hundreds of genes get transcribed into mRNA. High concentrations of 25-D from supplementing can displace 1,25-D stopping the process.
Soooooo…. The 25-D metabolite of Vitamin D is an antagonist of the Vitamin D Receptor. Yes?
Yes.
A simple clarification of the above abstract:
1,25-D is the only metabolite that turns the VDR on. Everything else turns it off, or at least modifies its capabilities. So exogenous Vitamin D and 25-D both bind into the VDR and block it from working properly. They will displace any 1,25-D from the receptor in a dose-dependent manner. The higher the concentration of Vitamin-D or 25-D competing with the endogenous 1,25-D the more of that 1,25-D will be displaced from the VDR. That occurs in a manner represented by the displacement graphs in the FDA presentation.

Yes.
A high concentration of 25D (dietary vitamin D) provides antagonist ligands to the VDRs, blocking 1,25D from docking to them. This prevents the innate immune system from being activated.

Aaaaand yes.
Although 25-D has some physiologic activity, for example, binding to the Vitamin D Binding Protein (VDP), and the Cartilage Oligomeric Matrix Protein (COMP, see later in this review), it cannot activate the transcriptional activity of the VDR.

Now, keep in mind these claims are based solely on the in silico experiments conducted by Marshall and Marshall alone, purporting to show that 25-D has an antagonistic effect on the Vitamin D receptor, and that this is important despite the fact that cells using the VDR for signaling accept 25-D from the Vitamin D binding protein, then convert 25-D to the more potent 1,25-D variant that is generally accepted to be the biologically active form. Just how the problematic 25-D gets into the cell and all the way to the nucleus without being converted to 1,25-D (cells are very efficient at doing that) is never explained, and one is tempted to ask just how much Trevor knows about the difference between cell surface and nuclear receptors. Be that as it may, these are qualitative and quantitative predictions we can use to test Trevor’s theories against reality.
And someone did. Well, they didn’t set out to test Trevor’s theories; they set out to conduct science by testing a series of compounds that might be expected to turn on the VDR – i.e. they wanted to see what makes a good agonist. They found that 25-D (labeled by the more mainstream scientific name “25(OH)D” in that paper) begins to turn on VDR-related gene expression at a concentration somewhere between 50 and 250 times as great as the concentration at which 1,25-D initiates that same VDR-related gene expression. That’s pretty good agreement with the previous literature, but what you as a layman should be noting is not so much the quantitative aspect as the qualitative: 25-D turned ON gene expression. If you’ve been paying attention up until now, you should be thinking “agonist”. A weak one to be sure, but then you’d expect something that binds to the pocket 500 – 1000 times more weakly to be a weaker agonist. An antagonist, though, would not turn on gene expression at all.
Here’s a thread on the MP forum attacking that 2010 paper. What’s Trevor’s problem with that paper? Well, he has a couple of problems, the first being that the levels of 25-D needed to effect transcription are well above physiologic levels:
Once you figure out how to read the figure you can see the primary flaw in their argument - 25-D only started to induce significant transcription when it is at concentrations around 250nmol/L, or 100ng/ml. Which levels are of course toxic in-vivo.

Now here’s my take on that result. If the previous Vitamin D / VDR binding studies are correct, one might simplistically expect that 25-D would be, oh, I don’t know, a few hundred times less able to trigger gene expression than 1,25-D. Now that’s a bit simple-minded; active transport, messenger protein binding, and a whole lot of other things might throw that number off. But in general, in the absence of the enzyme that converts 25-D to 1,25-D, one would expect to need a whole lot more 25-D than 1,25-D to affect VDR gene transcription in a live cell, if 25-D were a weak agonist of the VDR.
And what do we see in Lou et al.’s paper? We see that you need a lot more 25-D than 1,25-D to light up gene expression. But activate the genes 25-D did. That’s the activity of a (weak) agonist, not an antagonist of any sort. I’d say that’s in pretty good agreement with the previous mainstream Vitamin D work, and pretty much at fundamental odds with any predictions one might have made based on Trevor’s work. Antagonists don’t turn into agonists. Unless they’re a mixed agonist / antagonist – but that is never what Trevor claimed from his model. Any attempt to retrofit the results of this wonderful model to claim that 25-D was predicted to be a weak agonist at high concentrations would be what kind of logical fallacy? Anyone? Bueller? That’s riiiiight: the moving-the-goalposts fallacy.
Somehow Trevor walks away from a result clearly showing that 25-D is a weak agonist with the idea that the results show it’s not an agonist at all:
Now there are a number of other defects in this study which tend to distract from the essential value of what this group did - they confirmed that at physiological concentrations, 25-D is not a VDR agonist.
Which is a nice Christmas present indeed.

Lump of coal is more like it. What that paper really showed is that 1) 25-D is a weak agonist and 2) it doesn’t bind nearly as well to the VDR pocket as Trevor claims it does.
As Russ, one of the faithful on the board notices:
I thought that agonism vs. antagonism was dependent on the structure of the molecule and how it fits into the binding pocket of the receptor. I wouldn't have thought that something could be antagonistic at low concentrations and agonistic at high concentrations. But from reading this thread it sounds like Vitamin D is an antagonist at normal physiological concentrations but turns into an agonist at super high concentrations (above the level of toxicity). Am I missing something?

No, Russ, you’re not missing something. Keep that critical thought capacity open, dude, it sounds like you haven’t drunk all of your glass of Kool Aid yet.
And yet another moving goalpost shows up in that thread: Trevor complains that they only looked at the transcription of one gene – one known to be transcribed by the activated VDR. His complaint? What about the other 899 thought to be transcribed by the VDR? Didja check those? Well, didja? No. We know this gene is transcribed by the VDR; why do we need the other 899? This gene is quiet when there’s no signal from the VDR, and it lights up in the presence of 25-D. I think reasonable people would conclude 25-D is having some agonistic effect on gene expression. Asking about the other 899 genes in a wide variety of cells thought to be influenced by VDR signaling is: … moving the goalpost. It’s not “looking at the whole story”; it’s asking someone to conduct an impossibly complex experiment. If Trevor thinks this gene is somehow not representative, he’s got to do one of two things – point to where in the literature someone has shown that this gene behaves differently from the other 899, or do the experiment himself. But none of Trevor’s researchers have ever conducted a single in vitro experiment to substantiate his claims.
How do I know that? http://marshallprotocol.com/view_topic.php?id=13568&forum_id=39&highlight=in+vivo+data
“We don't have in vitro data as of yet.”
Jcwat101 is Joyce Waterhouse http://autoimmunityresearch.org/lax2006.htm , one of the MP Foundation staff and a co-author on several of its publications, so I’d say that’s a pretty authoritative source. Let me be very clear. If a drug company went to the FDA with a new drug, or even to apply for a new use for an approved drug, without any in vitro data or animal in vivo data (the MP has conducted neither), THE FDA WOULD THROW THEM OUT OF BETHESDA.
But you know what? The whole argument I made above about why I believe that 25-D is a weak agonist of the VDR was an exercise in idiocy. Why? Because, while the VDR is certainly involved in innate immunity, it is far from clear (and Trevor is far from having proven) that it is the MAJOR factor in innate immunity. In addition to that little hole in the MP story, the MP disease model completely ignores the effects of the acquired immune system. Innate immunity is a first-line defense before the heavy artillery of the immune system gets called in. That part of the immune system remembers pathogens it’s seen before, and really targets them with a number of killer cells. Trevor’s gone from “Vitamin D is a factor in innate immunity” to “if you have too much Vitamin D your immune system can’t respond to invaders”. Hypothetical invaders at that, but we’ll get to those.
These leaps in logic are perfectly fine for putting forward hypotheses to test, but they are not acceptable in a finished thesis such as what the MP advocates claim to have. I’ll give you another example of this kind of jumping from “this might be true” to “this MUST be true” in Trevor’s logic:
I was asked how we could search the bacterial genomes and identify the ligand/chemical which the bacteria might produce to block the action of the VDR, and I responded that it would be like searching for a needle in a haystack.

Well, I was wrong. Pasteur's law struck again, and I now have isolated at least one nanomolar-grade (strong) antagonist which is formed by a special type of bacteria which like to live in biofilms, and which are very poorly studied, and almost impossible to culture.

The breakthrough came when I started to look at a recent paper on some strange bacteria which had been isolated from biofilms on prothestic joints. As you know, I have a complete disease model now, so I understand exactly where prostheses fit in the overall scheme of things, and I understand that healthy folk are just sick folk whose bacterial load has not yet made them very ill So it was with that jaundiced eye I read this paper:

"Identification of bacteria on the surface of clinically infected and non-infected prosthetic hip joints removed during revision arthroplasties by 16S rRNA gene sequencing and microbiological culture"
http://tinyurl.com/2hqojq PMID: 17501992

I immediately knew we had something very important here, and when I researched these special "gliding" bacteria, which move by gliding, perhaps like a snail, rather than by whipping flagella like the common blood-borne bacterial species, I realised we had hit mother-lobe.

Associated with the gliding motion is a unique lipid called capnine, a highly charged lipid, and one which is a strong inhibitor (antagonist) of VDR transcriptional activity.
First of all, before we get to the logic part of this, I want to say as a chemist Trevor uses the word “isolate” in a very unusual way. Isolate means you conduct careful physical experiments to separate lots of compounds in a biological soup, and then use more experimentation to determine the chemical structure of those compounds. It does NOT mean reading about a compound in someone else’s research report and shouting EUREKA! By any normal chemist’s definition, Trevor has “isolated” exactly diddly squat, because he does no lab experiments.
OK, let’s get to the logic part. He wants to test a hypothesis by identifying a compound that might be secreted by his mystery bacteria - so far so good. He goes looking at bacteria he thinks are related to his… oh wait, he’s using bacteria WITH cell walls from a biofilm to model his putative pathogenic bacteria without cell walls, and that kind of analogy’s perfectly kosher with him, but we mainstream scientists have to test all 900 genes activated by the VDR before we can be SURE we are looking at VDR activation… OK, OK, let’s suspend our disbelief about the interchangeability of these bacteria and play along. He sees a substance these bacteria secrete that helps them glide through the biofilm. BINGO! It fits in his molecular dynamics model of the VDR! It MUST be the culprit antagonist!
Do you see the problem here? The logical leap that should not be made without evidence? First of all, if you believe Lou et al.’s paper showing 25-D is a weak agonist of the VDR, it’s pretty clear Trevor’s model can’t tell an agonist from an antagonist from a hole in the ground. You’d need more evidence than that model to get me to believe that something is an antagonist of the VDR even IF I accepted the premise that blocking the VDR totally cripples the body’s ability to fight off bacteria. Which I don’t. It’s also an unwarranted logical leap because of where the VDR sits in the cell. On the nucleus. Just how does this capnine get past the cell membrane? The cell membrane evolved to keep most of this kind of crap out of our cells; otherwise humankind would have died out the first time it wandered off the plains of Africa into a swamp. Furthermore, where’s the evidence that these bacteria secrete enough capnine to use as a chemical weapon? As far as we know, they secrete just enough to give them mobility in the biofilm. Finally, what makes him think that his putative cell wall deficient pathogens secrete capnine in the first place? They are supposed to infect T cells and make them go haywire. Why would a bacterium that infects cells in that manner secrete capnine at all? Do they build a biofilm INSIDE the T cell? Trevor would have to address all of those issues with careful experimentation before anyone in the mainstream community will take him seriously. These are not trivial logical holes in his story; you could drive a Mack truck through them.
May I stop now? Do you believe me that Trevor’s model is, well, I don’t know what it is, but I know what it isn’t. It isn’t science.
I apologize to Kelly for the long-winded response. It’s just that there is so MUCH wrong with Trevor’s reasoning it’s hard to know where to begin. I also see how the MP advocates argue, and I’ve covered most of their canned retorts, as well as pointing out their major logical fallacy: moving the goalposts.
But wait, there’s more.
I’m not done yet. We haven’t seen the last goalpost moved yet. In the last (and final!) installment, I’m going to come to the bacterial and clinical evidence that Trevor’s put forward and treat it to the same scrutiny as I have the in silico evidence.

Filters: Part III

So, we’ve come to Part Three and I still haven’t come to the identity of the mystery researcher. That’s because of two things. One, biomedical research is complicated, and I’m trying to bring you, the lay reader, up to speed on a lot of topics. Two, that initial quote isn’t the only thing that makes mainstream researchers look askance at the mystery researcher, and I’m going to come back to all these topics when I deconstruct the arguments our mystery researcher makes in Parts Four and Five.

So what are the final topics I think you need to have been exposed to in order to parse the totality of what my mysterious medic posits as a unified theory?

Three things. One, you need a basic understanding of how science is published, including the process of peer review. Two, you need to understand how modern medicine evaluates data in clinical trials. Three, you need a basic overview of how the FDA approves and regulates drugs that are marketed to you.

So let’s say you’re a researcher who has done enough experiments to come to some conclusions and write a paper. The first hurdle you have to jump in order to get your new theory accepted is the peer reviewers at the best journal you think would even consider publishing your stuff. Oh yes, there is a hierarchy of journals, and the better the journal, the tougher the peer review.

Just what is peer review? Peer review is the process by which the obvious chaff is first sorted from the wheat for publication or grant funding. An editor or grant administrator sends out submissions for anonymous review by known experts in the field. They are not looking to fact-check the experiments themselves; they are looking for basic math or statistics mistakes, mistaken conclusions based on a misunderstanding of the literature, and other basic hurdles that keep the real effluvia out of science journals. Reviews come back, and in the case of the funding agencies, proposals are ranked. In the case of journals, which is most relevant to the topic at hand, reviewers check the methodology and basic science knowledge (including the pattern of references).

The reviews are anonymous because sometimes the editor thinks that a younger guy with less prestige actually knows more about an older researcher’s work than other members of the old guard do. The editor wants an honest review, but the older guy who’s on all the funding committees can sink the younger guy’s career. Yeah, some scientists are idiots that way. So we try to get the most honest reviews we can.

Despite peer review, papers that contain results or conclusions that are just plain wrong do get published. A lot. Especially in the lower tier journals. All that the peer review system does is make a first-pass attempt to ensure that real garbage - math mistakes, sloppy methodology, half-baked hypotheses from researchers who can't be bothered to keep up with the literature, etc. do not choke the lawn. We've got enough weeds as it is.

This comes to an important point that the alternative medicine people trot out every time a major case of scientific fraud gets exposed. “See?” they claim, “peer review does not work.” Let me be very clear here. Peer reviewers look for elementary errors in knowledge, logic, and math. In the better journals, there is also a judgement call on the part of the reviewers as to whether the paper is of high enough quality to meet the journal’s standards. It is a very basic, minimum filter. Peer reviewers do not repeat the experiments in the paper when reviewing it. Therefore PEER REVIEW IS NOT DESIGNED TO CATCH FRAUD. That kind of review would take years. Science would slow to a halt. Science relies on blacklisting to deal with fraud (i.e. if you get caught, no more grant money for you; you’ll be using a soda straw and a magnifying glass to conduct experiments until you quit and go home).

If someone comes to you with a therapy and complains about the “old boys network” and how peer review is useless, ask them this – why did the reviewers reject the work? Ask to see the written reviews. If they won’t show the reviews to you, the chances are their mistakes are so basic and so fundamental that the review was something like the one written for a famous physics paper that was later published in a rock-bottom-quality Chinese journal and then exposed as fraud:

It is difficult to describe what is wrong in Section 4, since almost nothing is right. … The remainder of the paper is a jumble of misquoted results from math and physics. It would take up too much space to enumerate all the mistakes: indeed it is difficult to say where one error ends and the next begins.

In conclusion, I would not recommend that this paper be published in this, or any, journal.


That is pretty amusing. I’ve only had the opportunity to write a review that scathing once in my academic career. I’m willing to bet that the reviews on most alternative medicine look like that, though.

But let’s return to mainstream science. It bears repeating: a lot of what is published in the medical literature – by good researchers who are NOT perpetrating fraud - is wrong. Does that surprise you? In fact, researcher John Ioannidis has put forward some detailed analysis tracking initial hypotheses over 20 years, and has shown that most of these hypotheses turn out to be wrong.

Like Steve Novella, I am not surprised by this. When we operate on the cutting edge of human knowledge, we conduct experiments and come to conclusions based on those experiments. Even in the absence of outright faking of data, sometimes those conclusions are right, and sometimes they are wrong. More often than not they are wrong – we’re making guesses about nature based on limited data. But we have to prove that the wrong guesses are wrong, via the mechanism of the scientific method, in order to be sure that our reasoning about the correct hypotheses is complete.

In this way scientific research is like landing a spaceship on an alien planet. If alien sociologists landed in the middle of an Amish village in Lancaster PA they would write up their findings in a paper that might accurately describe the Amish way of life, but if they didn’t venture outside the village, they would get a hugely skewed idea of American life, and any conclusions they made about America in general would be dead wrong. If another ship landed in NY, that bunch of sociologists would be fighting with the first bunch like the blind men with the elephant. Only after landing a number of ships in the South, Midwest, East and West coasts would an accurate picture emerge.

What does this mean for the bozo filter I’m trying to help you build? Well, the first thing it should instill in you is a healthy skepticism of any one researcher. This drives us scientists nuts when the lay press comes calling, because they pick the most photogenic and / or glibbest researcher and quote him or her to death when writing a story. Dr. X says this, Dr. X believes that. Just who the heck is Dr. X? What we want in science is a pattern of conclusions from many researchers at various institutions all pointing in one direction. In particular, if a pattern is to be established, current research should not significantly contradict research from the past.

As David Gorski noted (and I urge you to read his post in full):

First, it is important to realize that confident medical judgments or conclusions rarely emerge from single studies – confidence requires a pattern of evidence over many studies. The typical historical course for such evidence is first to begin with clinical observations or plausible hypotheses that stem from established treatments. Based upon this weakest form of evidence preliminary or pilot studies are performed by some interested researchers to see if a new treatment has any potential and is at least relatively safe. If these early studies are encouraging then larger and larger studies, with more tight designs are often performed.

In this early phase of research results are often mixed (some positive, some negative) as researchers explore different ways to use a treatment, different subsets of patients on which to use the treatment, varying doses of medication, or other variables. Outcomes are also variable – should a hard biological outcome be used, or more subjective quality of life outcomes? What about combinations with existing treatments? Is there an additive effect, or is the new treatment an alternative.

It takes time to sort out all the possible variables of even what sound initially like a simple medical question – does treatment A work for condition X. Often different schools of thought emerge, and they battle it out, each touting their own studies and criticizing the others’. This criticism is healthy, and in the best case scenario leads to large, well-designed, multi-center, replicable consensus trials – trials that take into consideration all reasonable concerns where both sides can agree upon the results.


This poses a problem for the layman reading papers in a one-off fashion.

Allow me to let you in on another dirty little secret of science. When the socially inept geek smelling of cheese shows up to our party drunk and throws up in the potato chips, we just ignore him. We usually don’t call the cops or throw him out ourselves. In other words, we may suspect someone was a sloppy researcher and his or her paper is a piece of garbage that should be retracted, but we rarely come out directly and say “j’accuse”. So the sloppy researcher’s work just sits there in the scientific literature. Ignored to be sure, but it still just sits there. Sits there for the unsuspecting layman to stumble across and shout “Eureka”.

Sometimes people who should know better go on touting some obscure researcher’s work. Before the internet, the layman had a hard time sorting wheat from chaff in this regard. But this is the internet age. The first thing you should do when presented with “but so and so SHOWED this to be true back in 1969” is to go to the scientific equivalent of Snopes: Google Scholar. Google Scholar has a “cited by” field. Use it. If a given paper in the biomedical literature (this rule of thumb differs for different disciplines) is more than 10 years old and has fewer than 20 cites, it’s likely that not many people found it useful, or (even more worrisome) that not many people could reproduce the results.

Remember that journals tend not to publish studies with negative conclusions (i.e. you don’t see a lot of papers with “tried this guy’s stuff, didn’t work, he’s an idiot” in the Conclusions section). So the medical literature is rife with stuff that practicing researchers don’t believe. We scientists get a good idea of the lay of the land from our thesis advisors very early in graduate school. Your advisor will tell you “don’t bother reading so-and-so.” However, unless you’re in the club it’s hard to learn who exactly the cheese-smelling geeks are, except that Google Scholar now gives you a big old clue.

Google the subject of the paper you’ve just been handed (in Google Scholar), and see which names pop up most often. Check how many times they’ve been cited. Check how often a year they publish. Then compare it to the paper and author you have doubts about. This will go a long way in giving you a clue about the ability level of the researcher in question. If the paper and / or author in question has been cited a lot (and if still alive, is still actively publishing), it’s a good bet that your doctor won’t roll his or her eyes when you mention it. If it’s got 7 cites in 20 years, your doctor is a lot more likely to say: Who? What? Huh?

Stuff that’s been orphaned in the literature is almost always junk. More than 99 times out of 100, it’s junk. You can take that to the bank.
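The rule of thumb above is easy to mechanize. Here’s a toy sketch; the thresholds (10 years, 20 citations) are this post’s rules of thumb, not a formal bibliometric standard, and the example papers are invented:

```python
# A toy "orphaned paper" filter, using the rule-of-thumb thresholds from
# the text: old enough to have been noticed, yet almost never cited.

from datetime import date

def looks_orphaned(year_published, citation_count, today=None):
    """Flag a biomedical paper more than 10 years old with fewer
    than 20 citations. Papers too young to judge are not flagged."""
    today = today or date.today()
    age = today.year - year_published
    return age > 10 and citation_count < 20

# The hypothetical "7 cites in 20 years" paper mentioned above:
suspect = looks_orphaned(1990, 7, today=date(2010, 4, 28))

# A well-cited paper of the same vintage:
mainstream = looks_orphaned(1990, 450, today=date(2010, 4, 28))
```

Obviously this is just arithmetic around the “cited by” number Google Scholar hands you; the judgment call about why a paper was orphaned still belongs to you.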

Next, how does modern medicine evaluate published evidence from clinical trials? As I said above, getting through peer review is just the first step to wide acceptance. Researchers then look at the data, the way it was collected, and how it fits into the larger pattern of evidence from other researchers, and decide if the paper is worth talking about and using as a reference in their own papers.

Human beings are subject to all kinds of biases. The medical profession has developed a number of methods to cut down on the effects of those biases. The most famous is, of course, the double-blind placebo-controlled trial (DBPCT). If neither the test subject nor the administering doctor knows whether the patient is getting a sugar pill or the real deal, they can’t knowingly skew the results. There are other methods, though.

Let me say this again, it bears repeating. Real researchers realize that EVERYONE is biased. Clinical trial designs and instruments are constructed to reduce those biases. Anyone who comes along and tells you that he or she is not biased and doesn’t need to follow the conventions of good research is a fool, a charlatan, or both. No exceptions.

One method that is very important in separating anecdote from real evidence (and I can’t say often enough that the plural of anecdote is NOT data) is the development of validated “instruments” which measure the severity of disease. Since Kelly asked me to write this, I’ll use RA as an example. The standard measures in RA are the American College of Rheumatology criteria and the Disease Activity Score. The ACR is generally favored by Americans and the DAS is generally favored by Europeans, but both scales have their pluses and minuses.

However, both scales take into account markers of inflammation that can be measured directly (C-reactive protein levels, sedimentation rates, etc.). You can’t fake those, and they should not be responsive to placebo. Unfortunately those biomarkers also bounce up and down due to factors unrelated to RA, so they are only part of either score. The number of tender (meaning they hurt if you push on them) and swollen joints are also included. But wait. At what level of pain is a joint declared “tender”? That is why we conduct double-blind trials – as objective as that “tender joint count” sounds, it is a binary measure (yes, it hurts, or no, it doesn’t), and if either party knows which treatment is being given, they could skew the results (the patient by being more or less stoic, the doctor by pressing harder or more gently).

Both disease scales have been extensively studied and are reproducible, which is why we use them in clinical trials. We don’t rely solely on the “patient global assessment” (i.e. asking the patient “how are you doing?”), though that IS a part of either measure. We have to ask the patient what’s going on, but if we rely only on the patient, we often run into the Monty Python scenario: She turned me into a newt! Well, I got better…
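For the curious, the ESR version of the DAS28 composite score combines exactly the elements just described: tender and swollen joint counts (out of 28 assessed joints), an objective inflammation marker (the erythrocyte sedimentation rate), and the patient global assessment. The coefficients below follow the published DAS28(ESR) formula; the patient values are invented for illustration:

```python
# DAS28(ESR) composite score, per its published definition. The example
# patient below is hypothetical.

import math

def das28_esr(tender_joints, swollen_joints, esr, patient_global):
    """DAS28(ESR): tender/swollen joint counts range 0-28, ESR in mm/hr,
    patient_global on a 0-100 visual analog scale."""
    return (0.56 * math.sqrt(tender_joints)
            + 0.28 * math.sqrt(swollen_joints)
            + 0.70 * math.log(esr)       # natural log of ESR
            + 0.014 * patient_global)

# Hypothetical patient: 6 tender joints, 4 swollen, ESR of 30, global 50.
score = das28_esr(6, 4, 30, 50)
# Conventional cutoffs: > 5.1 high activity, < 3.2 low, < 2.6 remission,
# so this patient lands in the "moderate activity" band.
```

Notice how the objective biomarker (ESR) and the subjective components (joint counts, patient global) are all baked into one number; no single element can be gamed without the others pulling the score back toward reality.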

So when you see some alternative practitioner coming down the pike with a fistful of anecdotal reports, be skeptical. We do not accept anecdotal evidence in real science because we don’t trust even ourselves. We’ve all let our biases run away with us at times, it’s human nature.

What complicates research in autoimmune diseases such as RA is that the disease waxes and wanes, often for no discernible reason. A large number of cases of sarcoidosis, for example, resolve on their own.

The natural course of pulmonary sarcoidosis is highly variable. Unlike most other interstitial lung diseases, remission and resolution are common so that many patients are best served by avoiding use of potentially toxic therapy.


The disease comes for some unknown reason, we try to treat the symptoms, then it leaves for some other unknown reason, and we’re left scratching our heads. RA, unfortunately, has a much, much lower reported rate of spontaneous remission than sarcoidosis, but it does wax and wane, the so-called flaring that many, but not all, patients experience.

In a clinical trial these remissions, temporary or not, look like a drug response or a placebo response, but they are part of the natural history of the disease. If we’re not careful, we can look only at the good data, ignore the bad, and selectively perceive our way into a wrong conclusion. That is why both the FDA and Big Pharma pay a small army of statisticians to go over clinical trials, and why the Agency (and reputable pharma companies) also frowns on any anecdotal evidence whatsoever.

You cannot just look at the people who respond to a drug in your statistical analysis. Why did the people who did not respond have a bad experience? Did the drug just not work? Or was it working and they dropped out due to side effects? Those are important, vital questions to ask when deciding the risk / benefit balance of any therapy. There’s a name for the logical fallacy of looking only at the responders: cherry picking.

By looking at the people who didn’t respond but were on drug, and the people who did respond, but were on placebo, you start to get an idea of the real potential of the treatment. If we are to stick to the premise that we should do no harm (and the FDA does, along with any doctor worth his or her salt), we have to reject anecdotal evidence, except as the start of a careful and non-anecdotally based plan of research. This is very hard for the human brain to do, because evolution has hard-wired us to learn from the anecdotal experience of our elders. But you have to push that back into the reptilian part of your brain when you look at scientific evidence. If someone’s been treating people for a good while and all they’ve got to show for it is anecdotes, you, the patient, are now in the other Monty Python scenario: bravely run away, and run away fast.
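To make the cherry-picking failure concrete, here’s a toy simulation (all numbers invented) of a completely inert treatment given for a disease that waxes and wanes on its own: each patient improves spontaneously 30% of the time, and non-improvers tend to drop out and vanish from the anecdotes:

```python
# Simulating why a responders-only count misleads: an inert treatment
# plus spontaneous remission plus dropout looks like a working drug.
# All probabilities are invented for illustration.

import random

def run_trial(n_patients, p_spontaneous=0.30, p_dropout=0.40, seed=42):
    rng = random.Random(seed)
    improved, completed = 0, 0
    for _ in range(n_patients):
        got_better = rng.random() < p_spontaneous  # natural waxing/waning
        if not got_better and rng.random() < p_dropout:
            continue  # non-responder drops out, vanishes from the anecdotes
        completed += 1
        if got_better:
            improved += 1
    # Cherry-picked "response rate": improvers among those who stuck around
    cherry_picked = improved / completed
    # Honest rate: improvers among everyone who started the treatment
    honest = improved / n_patients
    return cherry_picked, honest

cherry, honest = run_trial(10_000)
```

With these made-up numbers the honest rate stays near the 30% spontaneous-remission baseline, while the cherry-picked rate climbs past 40% without the treatment doing a single thing. That gap is the entire business model of anecdote-driven medicine.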

So let’s say our hypothetical intrepid researcher has discovered something so good that a representative of the evil pharma empire wants to make a drug out of it. Now we come to the public health portion of this backgrounder.

Most laypeople operate under a very mistaken impression of what the FDA can and cannot do. Even most doctors aren’t very good at parsing this, because they get little to no training about it in medical school.

If you want to sell something via a real pharmacy that purports to be a medicine, the FDA has vast regulatory powers. They can dictate the terms of the clinical trials that they will accept in order to prove that a medicine is efficacious and reasonably safe (there is NO SUCH THING as a perfectly safe drug or therapy), and they can dictate the means of manufacture, the impurity levels in the finished product, and a lot of other things. Most of the cost of the pills and injections you buy is not in the drug substance itself – any reasonably competent chemist or biologist can make the “Active Pharmaceutical Ingredient”. The cost of a marketed drug is in all the clinical trials and manufacturing oversight that gives your doctor confidence he or she is not going to do you any harm in putting this “stuff” in your body. Do no harm. All competent doctors put that first. And the tradeoffs they make in real life keep them up nights.

What is the normal process one goes through in getting FDA approval for a new treatment? First the sponsor goes to the FDA with all the pre-clinical (animal!) data that supports the hypothesis that whatever you’re doing will be effective and reasonably safe. Then the sponsor needs to find investigators who can enroll patients. These might be academic clinicians or doctors in private practice, but the drug company does not enroll patients directly; they subcontract.

There is a requirement that an independent body called an Ethics Committee or Institutional Review Board, composed of people who have no dog in this fight, look the experimental design over and pronounce it kosher before those independent investigators put patients into the trial and give their data to the sponsor.

As a potential enrollee in a clinical trial, you should never, NEVER step into research that has not been cleared by an IRB. I’d suggest you ask for the IRB’s comments if you are approached about a trial and have concerns, but at the very least you should be told which IRB approved the clinical protocol. If you enroll at an academic research center, there will likely be two IRBs involved: one global one mandated for the drug company by the FDA and EMEA (the European FDA), and one local one just for the institution your doctor belongs to, because most universities maintain their own Ethics Committees, and those bodies don’t just take the word of the drug company’s IRB, they double-check it.

Then it’s on to clinical research for regulatory approval.

Drug approval the world over comes in three phases. Phase I is usually short term dosing in perfectly healthy people. At this point you’ve done lots of animal studies, and you have some clue what might go wrong. But rats aren’t humans. (Remember what I said about human data trumping rat data in the in vivo / in vitro section of Part Two?) So you put the drug in once and observe the healthy people carefully. Then you work your way up to multiple doses.

When you and the FDA are comfortable that nothing weird is going on in around 100 or 200 healthy people, you go on to patients in Phase II. Not a lot of patients, to be sure. Usually Phase II consists of two relatively short (a few months’ duration) trials in about 400 – 800 patients. If those patients seem to be getting better, you have a meeting with the FDA to outline the Phase III plans. If anything unexpected showed up in Phase II, you might have to do another small trial, or the FDA might require that an outside board of independent docs break the blind every so often to make sure everyone in Phase III is safe (this is called a Data Safety Monitoring Board), or they might require more than the minimum, basic two replicate trials in Phase III to look for potential problems in special populations (for example, people with weak kidneys for a drug that is excreted via the urine, or people with liver problems if the drug is metabolized by the liver).

Phase III, in chronic disease, consists of at least two trials with about 1000 patients each, each lasting about a year. Usually Phase III is larger than this, of course, but that is about the minimum. The FDA wants two replicate trials because it does sometimes happen that the results of the first Phase III trial are not repeated in the second one. Sometimes, even with a large statistical sample, your spaceship lands in the Amish village.
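A quick back-of-the-envelope calculation shows why the FDA insists on two replicates. The numbers below (80% power per trial, a 2.5% one-sided false-positive rate) are illustrative assumptions, not figures from any particular trial:

```python
# Why two replicate trials? Illustrative assumptions only:
power = 0.80    # chance a truly effective drug shows a positive result in one trial
alpha = 0.025   # one-sided false-positive rate per trial

# A truly effective drug: chance BOTH replicate trials come up positive
p_both_real = power * power           # 0.64

# A useless drug: chance BOTH trials come up positive by luck alone
p_both_fluke = alpha * alpha          # 0.000625

print(f"Effective drug passes both trials: {p_both_real:.2f}")
print(f"Useless drug passes both trials:   {p_both_fluke:.6f}")
```

Under these assumptions, even a genuinely effective drug fails to pass both trials about a third of the time (the spaceship in the Amish village), while a useless drug almost never passes twice. That asymmetry is the whole point of replication.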

Even after approval, there are requirements for “post-marketing” or “Phase IV” studies to make sure that nothing slipped through the cracks in Phase III. If a serious side effect, let’s say liver failure, happens only in one patient out of a million, or even 1 in 100,000, statistically you’re unlikely to see that event in a 2000 to 6000 patient sample in Phase III. Post marketing studies are designed to try to capture those rare events.
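The "statistically you're unlikely to see it" claim is easy to sanity-check. Assuming, purely for illustration, that the side effect strikes independently at a rate of 1 in 100,000:

```python
# Chance of observing at least one case of a rare side effect
# (rate 1 in 100,000) in a Phase III program vs. post-marketing.
rate = 1.0 / 100_000
n_phase3 = 6_000          # upper end of a typical Phase III sample

p_at_least_one = 1.0 - (1.0 - rate) ** n_phase3
print(f"P(>= 1 event in Phase III): {p_at_least_one:.3f}")   # roughly 0.06

# Same question across the first million patients after approval:
p_post_marketing = 1.0 - (1.0 - rate) ** 1_000_000
print(f"P(>= 1 event post-marketing): {p_post_marketing:.5f}")
```

Even the larger 6,000-patient program has only about a 6% chance of catching a single case, while the first million patients after approval will almost certainly produce one. That gap is exactly what post-marketing surveillance exists to close.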

This is a very basic overview, of course. I didn’t even talk about regularly conducted non-efficacy trials such as drug-drug interaction studies to make sure a drug that’s metabolized in a certain way doesn’t raise the blood concentrations of other drugs metabolized the same way to unsafe levels.

Finally, if the drug is approved, the FDA writes an approved label (that package insert of fine print most patients just throw away) based on what the Agency thinks is the best evidence. The sponsor may have conducted a trial that the FDA thinks is sloppy or needs to be repeated, and the evidence from that trial will NOT go into the label. Everything a drug company sales rep says to a doctor has to be “on label”. If he or she says something about that trial the FDA didn’t like? Well, let’s just say that the industry has recently had its wrist slapped a lot for stuff like that.

The extreme scrutiny in the prescription market often leads laypeople to think that “they” (I assume the “they” so many laypeople often refer to is the FDA) are watching everything with the same level of control.

Not so.

First of all, nutritional supplements play by different rules from prescription drugs thanks to the DSHEA. Note the wording in that last link. Read it carefully. Is there any mention of "efficacy" on that page? No, there is not. The FDA has no authority to require that a nutritional supplement actually works.

Linus Pauling is responsible for that, having lent his reputation to outright quackery in the arena of Vitamin C supplements. In the course of that legal battle, the FDA’s hands were tied for non-drug supplements. As long as they don’t cross certain lines, “nutraceutical” companies can hint around that they help, or even cure, some ailment, then throw the boilerplate disclaimer in the fine print, and get away with it.

Have you ever read the boilerplate disclaimer on a nutraceutical? You should; it should be an integral part of your bozo filter. I found similar language on several sites that are making what appear on the surface to be health-related claims:

The products and the claims made about specific products on or through this site have not been evaluated by _____________ or the FDA, and are not approved to diagnose, treat, cure or prevent disease.

The information provided on this site is for informational purposes only and is not intended as a substitute for advice from your physician or other health care professional or any information contained on or in any product label or packaging.

You should not use the information on this site for diagnosis or treatment of any health problem or for prescription of any medication or other treatment.


You will also find the “claims” of efficacy to be extremely vague on nutraceuticals.

“Helps promote a healthy _____”. What the blue blazes does “promote” mean? Scientists use specific language such as: “has been shown to prevent the progression of joint destruction in patients who have failed anti-TNF therapy”. THAT is a specific statement that can be proven or disproven by the use of statistics. Anything else is a religious discussion among people of different faiths.

If you have never paid attention to these disclaimers and weasel words on your echinacea, start doing so. “They” have very restricted powers in the US, which is one reason that “prescription” pharmaceuticals are referred to as “ethical” pharmaceuticals. Not to say that the pharma industry has a spotless ethics record, but they are accountable to the FDA when they don’t conform to the minimum standards. Nutraceuticals don’t even have to meet this minimum standard. Neither does your garden variety General Practitioner.

What did I mean by that last crack? A doctor, by dint of having a medical license, can prescribe any medication for any reason, or no reason at all. There is a good reason for this. The FDA’s purpose in asking for clinical trials is to describe the efficacy and safety of an agent in a broad spectrum of the patient population. Your individual doctor can look at the clinical evidence and say “This trial had 90% Caucasians in it and it is metabolized by the liver. You’re Asian, and Asians have a different pattern of liver enzymes, so let’s try this out at a dose that’s lower than the label.” A drug rep could NEVER suggest that (legally), but it’s a perfectly valid reason to go “off label”. There are many other such reasons, and your doctor went to school for many years and put in a lot of hard work in order to have the right to look at the evidence and toss it out the window (judiciously, of course).

Sometimes a side effect is so rare it doesn’t show up to the statisticians who measure such things until a drug has been in millions of people. That is why the FDA requires “post marketing surveillance” as well as the three major Phases of clinical development. A doctor needs to keep this in mind before going off-label.

Your doctor should not go off label cavalierly, especially when the dose is higher than recommended, or when he or she is using the medicine for a disease that the FDA has not approved it for yet. Absence of evidence (of harm, in this case) is not evidence of absence. Doctors can and do abuse their privilege to prescribe off-label.

GPs are the worst in this regard, because their training is less scientific than a specialist’s training. In fact, GPs are not scientists at all. Their memorize-and-regurgitate style of learning in medical school covers scientific topics, but it is perfectly possible to become a GP and hold the most unscientific opinions. Science Based Medicine has plenty of documentation on GPs who get into what we in science call “woo”. If you see an alternative therapy that has a few MDs endorsing it, and those MDs are all GPs, watch out. Not to say that specialists don’t ever fall off the deep end, but those cases are fewer and farther between.

Finally, there is one more way to avoid the scrutiny of the FDA. Use a regimen of drugs that are approved for another purpose, but don't sponsor a clinical trial. I know you're scratching your head at that one. What you can do is sell information - books, CDs, DVDs - but do NOT sell a drug, device or any treatment - then you would come under the authority of the FDA. To further cover yourself, recommend that an MD read your literature and administer the treatment under his or her own license. Remember what I said about GPs and the limitations of their training? It is certainly possible to find a GP willing to do this. There are doctors who sell nutritional supplements in their office and who practice homeopathy, so there are certainly doctors willing to try a scientific-sounding treatment out on a desperate patient, especially if that doctor does not have deep training in the scientific method and specialist literature.*

If you are trying to evade FDA oversight, you can even ask that people voluntarily send you their medical history. This has the appearance of a clinical trial BUT IT IS NOT A CLINICAL TRIAL by the legal definition that gives the FDA oversight on human testing.

From a scientific perspective, this voluntary gathering of data is less than useless. There is no way to check the quality of the data coming from the individual doctors, and there is no enforced consistency in record keeping or in measuring the outcomes - did the doctor really do a 28-joint count on the patient, or did he or she just ask the patient a few questions in the exam? In other words, this is a collection of anecdotes. Once again, let's all say it together: the plural of anecdote is NOT data!

With this, I think you will have all the tools to parse the statements of our mystery researcher. Let’s go, shall we?

*I don't mean this criticism to be a blanket condemnation of GPs, here. Most, the vast majority, of GPs are good people who do a major amount of good in the world. It's just that assuming that the holder of an MD degree is always a practitioner of science-based medicine can adversely affect your health and your wallet.

Wednesday, April 07, 2010

Filters: Part Two

I spent the entirety of Part One explaining why one should not put one’s entire trust in the results of computer algorithms that purport to describe biological systems. Yet we scientists (and this scientist in particular) still use computational biology. Why?

Complexity.

Every piece of data, however incomplete, helps you put together a picture of a problem that’s too complex to perform simple experiments on. Every model teaches you something. Sometimes it teaches you by making you figure out why it doesn’t work too well. Sometimes it works well in one place and not in another, and you learn a lot by figuring out the boundary conditions. But every experiment has plusses and minuses.

As Clint Eastwood so famously said, you have to know your limitations.

In a speech at Digital Biota2, Douglas Adams (of Hitchhiker’s Guide to the Galaxy fame) described the historic difficulty (in the golden age of Physics between Newton and Einstein) of grappling with the science of life itself:

I can imagine Newton sitting down and working out his laws of motion and figuring out the way the Universe works and with him, a cat wandering around. The reason we had no idea how cats worked was because, since Newton, we had proceeded by the very simple principle that essentially, to see how things work, we took them apart. If you try and take a cat apart to see how it works, the first thing you have in your hands is a non-working cat.


Actually, even in Newton’s day you could open up a cat to see how at least part of it works, and then put it back together again into a functioning cat. It was just a tricky proposition, and you had (and still have) to open the cat up just right. Very few humans had that skill in Newton’s day, and most of them were crude animal doctors, not scientists.

Today biology is still tough to study. Of course, your modern vet takes female cats and dogs at least partially apart and puts them back together with amazing regularity. But you still can’t dig into an animal’s brain willy-nilly and expect a fully functioning animal afterwards. That’s why they invented microelectrodes that can be inserted into a small hole in a rat skull. Biology is a weird space to experiment in (speaking as a Chemist).

Even a “simple” cell is a mass of interconnected systems, and it’s hard to study only one part of it without changing how other parts function, which then in turn come back and change the function of whatever it was you were studying in the first place. This is the very definition of a feedback loop, and a simple cell is a morass of hundreds to thousands of feedback loops. Isolating a variable (i.e. keeping everything else constant and changing only ONE thing, to study the function of that one thing) as we do in Chemistry and Physics, is close to impossible. So we triangulate.
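To make the feedback-loop point concrete, here is a toy simulation, emphatically not a model of any real cell: component A stimulates B, and B inhibits A. The rate constants are arbitrary assumptions chosen purely for illustration. Poke either variable and the other pushes back until the pair settles into a joint steady state, so you can never study A "in isolation":

```python
# Toy two-component negative feedback loop, integrated with simple Euler steps.
# A stimulates B; B inhibits A. All constants are arbitrary illustrations.
a, b = 1.0, 1.0     # initial "concentrations"
dt = 0.01           # time step
history = []
for step in range(5000):
    da = 1.0 - b - 0.5 * a    # production of A, inhibited by B, with decay
    db = a - 0.5 * b          # production of B, stimulated by A, with decay
    a += da * dt
    b += db * dt
    history.append((a, b))

print(f"steady state: a={a:.3f}, b={b:.3f}")   # settles near a=0.4, b=0.8
```

Change any single constant and both steady-state values move, which is exactly why isolating one variable inside a web of such loops is so hard.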

Each experimental method we use has limitations, and we use multiple methods to compensate for the limitations of any given experiment. The major classes of experiment in biology are in silico, which I discussed ad nauseam in Part One, in vitro, which are experiments in petri dishes on cells or isolated products of cells (e.g. enzymes and receptors), and in vivo, or experiments in living systems such as rats or people.

Let’s go back to in silico for just a moment (we’ll quickly move on, I promise). A good example of triangulation in biology is looking for the correct shape (or conformation) of protein (enzyme, receptor, what have you) and the molecule it interacts with (its “substrate”).

One could, as the initial quote in Part One (the one I spent the entirety of Part One beating up on) suggested, churn out a conformation by brute force calculations. The problem is, as the Air Force report I linked to indicated, this is almost certain to be wrong. Why? Well, how many water molecules are hiding inside that space in the coiled up carbon chain? Are you sure it’s five, or is it six? Do you feel lucky, punk? Well, do ya?

On top of this, there may be more than one solution to the equations. In fact, with a polymer, there is certain to be. We call the solutions energy minima – in other words, proteins are lazy. Proteins fold in such a way as to minimize the energy needed to keep them in that state. They never stand when they can sit, and never sit when they can lie down. But, to take the analogy a bit too far, they can lie on their side, on their back, or even on their front. Which one is the preferred one? You can’t just go picking the absolute lowest energy state that the computer spits out when the differences in energy between several minima are low, because proteins don’t sleep alone. Intracellular conditions and interactions with their substrates can change the conformation. Furthermore, did you remember that bit about hydrogen bonds that we’re just starting to learn about? Did you include that 1.2 kcal / mole term? No? That might tip the balance between one conformation and another. And there’s more than just that term lurking in the swamp we haven’t explored yet.
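To see how much a "small" 1.2 kcal/mol term can matter, plug it into the Boltzmann factor at body temperature. This is a standard textbook calculation, not specific to any particular protein:

```python
import math

# Boltzmann weighting of two conformations separated by dE at body temperature.
R = 1.987e-3      # gas constant, kcal / (mol K)
T = 310.0         # body temperature, K
dE = 1.2          # energy gap between two minima, kcal/mol

ratio = math.exp(-dE / (R * T))   # population of higher minimum vs. lower
print(f"Population ratio (high/low): {ratio:.2f}")   # about 0.14
```

A gap of 1.2 kcal/mol corresponds to roughly an 87/13 population split. So if two computed minima differ by less than that, a forgotten term of this size can easily reverse which one your naive energy ranking declares the winner.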

So what do we do with our current tools and state of knowledge? Let’s take a concrete example. The greatest tool for figuring out the geometric conformation of chemicals, organic or not, is the in vitro technique of X-ray crystallography. Looking at the pattern of X-rays bouncing off of the electrons in a molecule can tell us the exact shape of a molecule. If you had to learn those bond angles for carbon and other compounds in high school chemistry, well, X-ray crystallography is how those angles were determined.
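The geometric information falls out of Bragg’s law, nλ = 2d sin θ: the angles at which X-rays constructively interfere encode the spacings between planes of atoms in the crystal. A minimal worked example, using the common copper Kα wavelength (the diffraction angle here is just an illustrative input, not data from any real experiment):

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta)
# Solve for the interplanar spacing d given an observed diffraction angle.
wavelength = 1.5406   # Cu K-alpha X-ray wavelength, in Angstroms
theta_deg = 20.0      # illustrative diffraction angle, degrees
n = 1                 # first-order reflection

d = n * wavelength / (2.0 * math.sin(math.radians(theta_deg)))
print(f"interplanar spacing: {d:.3f} Angstroms")   # about 2.25
```

Do this for thousands of reflections at once and you can back out the full electron density map, which is where those textbook bond angles come from.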

But X-ray experiments have limitations. Those measurements have to be done in a vacuum, so you take a sample of your protein, maybe even a sample of the protein bound to its substrate, and freeze it. Then you stick it in the sample chamber of your X-ray machine and pump out all the air. Do you think that the conformation of the frozen protein is always the same minimum energy conformation as it assumes in body-temperature water?

No.

The conditions of crystallisation are often assumed to be absolutely relevant to the conditions of the biological assay. However, changes in buffer constituents, pH and crystallisation conditions can have a profound effect on the conformation of both ligands and proteins. For instance, the severe acute respiratory syndrome (SARS) coronavirus main protease was crystallised at different pH values and in complex with a specific inhibitor. The structures revealed substantial pH-dependent conformational changes and an unexpected mode of binding for the substrate-analogue inhibitor [49]. At a pH value of 6 the structure of the monomers in the homodimer differs (one being in the active and the other in the inactive conformation) and the inhibitor binds in a different mode to each monomer.


That doesn’t mean that X-ray structures are useless, just that, as with mathematical models, they have to be used judiciously. Derek again:

when your X-ray data and your structure-activity data seem to diverge, it’s often a sign that you don’t understand some key points about the thermodynamics of binding. (An X-ray is a static picture, and says nothing about what energetic tradeoffs were made along the way). Instead of an irritating disconnect or distraction, it should be looked at as a chance to find out what’s really going on. . .


This is a key point. We scientists coming at the problems of biology from different angles are pretty collegial as scientists go. We work together to use our varied expertise to get a whole panorama from our individual snapshots.

If you read that paper I linked to (in the word “no”) from Drug Discovery Today, what you find is an X-ray crystallographer honestly and candidly discussing the limitations of his field and asking modelers to help us all get to the goal: an accurate understanding of protein–ligand binding. You’ll find, interspersed throughout that paper, advice for, or requests for help from, modelers:

As a matter of fact, modellers can make an important contribution themselves to the structure determination of protein–ligand complexes. First, their knowledge of organic chemistry and stereochemistry of small molecules is often better than that of a protein crystallographer, so the modellers could help formulate appropriate refinement dictionaries with proper restraints and target values for bond lengths, etc. Second, their knowledge of, and eye for, judging protein–ligand interactions could help the crystallographer, by proposing ligand poses that both fit the electron density and make good sense in terms of protein–ligand interactions.


This is a great example of the back-and-forth between in vitro experimenters and in silico modelers, and of how real professional collaboration works in practice.

So we come to another important point for your bozo filter. Real scientists work together. Claiming to be a modern-day Galileo is worth 40 points on John Baez's crackpot index. When someone is antagonistic to the entire scientific establishment, that's a major red flag.

In biology, techniques also work together. As I stated earlier, no single experiment can isolate all the variables in a biological system, so we use incomplete data from lots of different experiments conducted by different means to get at the truth slowly and circumspectly.

For example, beyond the basic level of protein structure, we come to the other major piece of biological triangulation: the correlation between in vitro and in vivo studies. For the designer of drugs in pill form, for instance, the first question one asks is “how will this get from the gut to the bloodstream?”

In order to answer this question, the in vitro modelers came up with a line of human colon cancer cells that one could make a membrane out of in a petri dish. The theory was that these Caco-2 membranes would simulate the gauntlet that a small molecule drug has to run in order to end up in your bloodstream.

Unfortunately pretty much the only thing that the Caco-2 experiment tells you is how well the target compound passes through a membrane of Caco-2 cells. It has low predictive power for actual animal and human gut absorption. Which is unfortunate, because it’s cheap to do, and lots of managers still like to see the data, even if it is suspect (bad data is often worse than no data, a fact that non-scientists seem to have a hard time grasping).

Over at Org Prep Daily a few years ago, the chemist going by the moniker Milkshake pointed out rather forcefully that the best experiment to conduct is still the one where you feed a living rat the compound and see how fast the drug gets into, and gets washed out of, the bloodstream (this experiment is called pharmacokinetics, or PK, by biologists):

10. Ignore Caco-2 and do rodent PK tests instead, use human plasma and whole blood

Caco-2 permeability model is useless. Oral absorbtion/brain penetration tests in rodent should be done early in the project.


And this demonstrates a key point in medical research: in vivo data trumps all. And data in the system you are interested in trumps data in another animal. So, for instance, human data trumps rat data if you are looking for a human medicine. And data in rats trumps crappy Caco-2 experiments. But in vitro experiments such as Caco-2 that are totally useless are few and far between. Most of the time the experiments in the major arenas of biological triangulation work together, despite their flaws, each giving up a little clue. We scientists working in those major arenas learn from each other. I tend to sit on the modeling side of things (A P-Chemist on the modeling side? Who’d a thunk it?). But, remember: GIGO. A model’s only as good as the sequence, structure and other data that goes into it.

So, for example, when you’re trying to figure out whether something you’re going to give humans might cause cancer, there are a few techniques to use to give you a clue. The in vitro techniques include the Ames and micronucleus tests, in which you bung some compound into some cells and see if their DNA gets damaged.

There was some back and forth between in vivo and in vitro experiments for these tests however, and you don’t simply throw compound and cells together, you need another piece of the puzzle:

Most of the cell lines and bacteria that are used for routine testing lack certain enzymes that metabolise foreign substances. A rat liver extract is therefore added to such test systems. This fraction, from which larger cell fragments, the nucleus, and the mitochondria have been removed, contains metabolic enzymes such as cytochrome P450-dependent monooxygenases that simulate in-vivo metabolic processes.


“Simulate in vivo processes”. Because we look at the discrepancy between in vivo and in vitro data and try to reconcile it. What we don’t do is scrap in vitro or in silico completely because they don’t agree with in vivo. There’s a lot of stuff going on in a living organism, and in vitro and in silico techniques can go a long way towards isolating the extraneous stuff so you can look at the effects of a single variable. Reductionism is still a goal, even if it’s impossible to entirely achieve in biology.

In carcinogenicity studies, in vitro results are coupled with in vivo. A positive Ames or micronucleus test is cause to take some caution in experimentation, but you have taken lots of drugs in your life that come up positive on those tests. The gold standard for genotoxicology is the multi-year rat carcinogenicity test, and a compound that passes that one but flunks the in vitro tests will be allowed in humans, but the FDA will require that the in vitro results be noted in the drug’s label.

This whole area of research, looking at in vitro and even in silico evidence and trying to reconcile that data with in vivo data, is called translational medicine.

Translational medicine is the only reliable road to the future we have right now, but it is a long and winding one:

in essence, a lot of “translational” research takes close to two decades to bear fruit, and it’s fairly uncommon for it to take less than a decade. Moreover, as Dr. Ioannidis points out, less than 5% of promising claims based on basic science ever come to fruition as actual therapies. In other words, translational research is hard. Few promising ideas make it to therapies, and it takes a long time for those that do.


This is how it works in real life, folks. We’re all spelunkers wandering in nature’s caves, and almost everyone with a flashlight can help. Anyone claiming to be a lone genius with answers no one else has should be trapped in your bozo filter until you evaluate the claims more carefully. Anyone trashing an entire area of science outside of their field should stay trapped in your bozo filter until you can speak with an expert.