GM Corn & Organ Failure: Lots of Sensationalism, Few Facts
On Wednesday, we covered the overreaction by a few important online sources to an International Journal of Biological Sciences article claiming to find “signs of toxicity” in three varieties of genetically modified (GM) corn produced by Monsanto. We posted some caveats that made us uneasy about the study, such as the funding sources, the unknown quality of the journal, and the fact that the toxicity claims rely on reinterpreting statistical data that Gilles-Eric Séralini and his coauthors themselves note is not as robust as it needs to be.
Karl Haro von Mogel, a University of Wisconsin Ph.D. student who works with Pamela Ronald (the GM expert we quoted in our last post), responded with some other problems he has with the study. He has a blog post of his own (in which he gets hopping mad at coverage that attributed organ damage, organ failure, or even cancer to the rats in the study). But here are the major issues he points out to DISCOVER:
1. Cherry-picking. “They were picking out about 20–30 significant measurements out of about 500 for one of the sets of data they analyzed,” Haro von Mogel tells DISCOVER. “At the 95% significance level, you would expect that 5% of the observations would show a significant difference due to chance alone, which is what happened.” In other words, one would expect to get some alarming results in approximately 25 of the 500 measurements, which is indeed what they found. “Picking apart what seems to be normal background variability seems to me to be data dredging.”
2. “False Discovery Rate.” The battle over these corn varieties has been cooking for years; Séralini and others published a paper in 2007 on the same issues, and after statistical criticisms like the ones just mentioned, the authors returned with this new study. One of the main shots scientists took at the previous paper, Haro von Mogel says, was that the team didn’t employ a “false discovery rate”—a stringent statistical method that controls for false positives when many comparisons are made at once. This time they did, but for at least two of the three varieties—MON 810 and MON 863—the researchers themselves note p-values that are not significant. (A p-value is a measure of the likelihood that a particular finding arose by chance alone rather than from a real effect. By convention, a result with more than a 5 percent chance of being random is called “insignificant.”)
3. “Insignificant” results. As you can see in the study’s chart, there is a significant effect shown in “Lar uni cell” (large unnucleated cell count) for female rats fed the GM corn as 11 percent of their diet. But for female rats fed three times as much GM corn, it’s not there. “Are they highlighting random variation or finding genuine effects? These are the kinds of questions that scientists need to address before concluding that they have found ‘signs of toxicity,’” Haro von Mogel asks. (Séralini et al. have argued that more attention needs to be paid to nonlinear toxic effects, where greater doses would cause less harm.)
4. Lack of corroboration or explanation. The government organization Food Standards Australia New Zealand (FSANZ), which disputed Séralini’s 2007 paper, also disputes the recent study, in part because there is no other science corroborating the statistical data—data that was challenged in the previous points. Their response concludes by saying, “The authors do not offer any plausible scientific explanations for their hypothesis, nor do they consider the lack of concordance of the statistics with other investigative processes used in the studies such as pathology, histopathology and histochemistry…Reliance solely on statistics to determine treatment related effects in such studies is not indicative of a robust toxicological analysis. There is no corroborating evidence that would lead independently to the conclusion that there were effects of toxicological significance. FSANZ remains confident that the changes reported in these studies are neither sex- nor dose-related and are primarily due to chance alone.”
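Haro von Mogel’s two statistical points—the ~5 percent chance-hit rate in point 1 and the false-discovery-rate correction in point 2—can be sketched with a small simulation. The numbers here are hypothetical, and the Benjamini–Hochberg procedure shown is a standard FDR method, not necessarily the exact correction the study’s authors applied:

```python
import random

random.seed(1)

# Simulate 500 measurements in which the corn has NO real effect:
# under that null hypothesis, each p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(500)]

# Point 1: at the 0.05 threshold, roughly 5% (about 25) of the 500
# null measurements come out "significant" by chance alone.
raw_hits = sum(p < 0.05 for p in p_values)
print(raw_hits)  # expect somewhere around 25

# Point 2: the Benjamini-Hochberg false-discovery-rate procedure
# sorts the p-values and compares the k-th smallest against the
# rising cutoff (k / m) * q, rejecting only up to the largest k
# that clears it. Under the null it should erase most chance hits.
def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    indexed = sorted(enumerate(pvals), key=lambda t: t[1])
    cutoff = 0
    for rank, (_, p) in enumerate(indexed, start=1):
        if p <= rank / m * q:
            cutoff = rank
    return {i for i, _ in indexed[:cutoff]}

fdr_hits = benjamini_hochberg(p_values)
print(len(fdr_hits))  # typically 0 when no real effect exists
```

The contrast between the two printed counts is the whole argument: a raw 0.05 threshold reliably manufactures a couple dozen “findings” from pure noise, while the FDR correction is designed to filter them back out.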
We emailed Séralini to ask if he would respond to these particular criticisms, and have not yet heard a response. But the study is currently available to read for free, and you can see a YouTube clip of him discussing this paper, his methods, and his criticisms of Monsanto.
In light of these concerns regarding the study, it would be an enormous stretch to say the study proves that these corn varieties cause organ damage in mammals. But none of this puts Monsanto’s GM corn totally in the clear, either. As commenters on our earlier post pointed out, Monsanto was simply following the rather laissez-faire rules for government approval, doing the 90-day trials themselves. But Séralini’s team calls for long-term studies, upwards of two years, to get reliable data.
With the dearth of available data, which Monsanto was loath to give up to the researchers in the first place, strong conclusions are tough to come by. As Per Pinstrup-Andersen, a Cornell food expert not associated with Haro von Mogel’s team, sums up the study for DISCOVER: “It is very convoluted but the authors imply that the results are not scientifically valid by recommending a study ‘to provide true scientifically valid data.’”
But, as Séralini notes in his YouTube clip, that scientifically valid study would cost a fortune. And considering that these biotech crops have already been approved, Monsanto has little incentive to continue testing them.