Source: STAT

A common saying at the Food and Drug Administration is: “In God we trust, all others must bring data.” The independent evaluation of science is an essential element of the FDA’s dual role of protecting the public health and promoting innovation to bring new therapies into practice. In fact, with certain exceptions, the FDA is the only regulatory agency for drugs and devices that independently evaluates both the science and the source data from relevant studies. In many ways, the FDA serves the world as an arbiter of quality in the global development of medicines.

The recent flurry of news about the inclusion of faulty data in the regulatory submission for Zolgensma (onasemnogene abeparvovec-xioi), a novel gene therapy developed by the Novartis (NVS) subsidiary AveXis for the treatment of spinal muscular atrophy, should remind us all of the FDA’s central role as we grapple with the critical issue of scientific integrity. I have no specific knowledge about this issue other than what I’ve gleaned from the news. But based on those reports it seems that Novartis discovered an issue relating to data integrity while preparing its Biologics License Application (BLA) submission for marketing approval for Zolgensma. Novartis decided to conduct its own investigation before notifying the FDA, with the result that the agency wasn’t alerted until it had already approved the drug for marketing. According to public statements from both Novartis and the FDA, Novartis followed its usual internal procedure and the FDA is confident that the data integrity issues did not concern clinical trials conducted with human volunteers.

Nevertheless, it’s likely that had Novartis notified the FDA about the data integrity issue, the accelerated review would have been delayed. The story is all the more sensational because of the dramatic importance of the drug in treating a frequently lethal genetic condition in children, as well as its record-setting price tag: $2.1 million for a single, one-time dose.

My experience in more than 40 years of developing and evaluating medical products, and two years working at the FDA, has convinced me that it’s critical to have a second pair of eyes on every aspect of preclinical development, clinical development, and post-market evaluation. The overwhelming majority of scientists, clinical investigators, and employees of corporate and academic research organizations are highly motivated by the mission of improving health and conduct impeccable research. That said, preclinical and clinical research are complex. Mistakes, sometimes quite subtle ones, are common in study design and conduct, as well as in the analysis and reporting of findings.

Purposeful manipulation or falsification of data and analyses, on the other hand, is much less common. Nevertheless, the complexity and level of detail involved make it relatively easy for inaccurate work to pass muster among non-experts, and as for the smaller fraction who purposefully manipulate or fabricate data, detecting their malfeasance requires expertise and dedicated effort.

Working to create a cure for a frequently lethal disease in children is a noble undertaking. What then could motivate people involved in such an effort to produce faulty data and reports? Obviously, the enormous financial rewards that accompany a successful therapy can affect behavior. But within the industry, more complex incentives are in play. The capacity to monitor costs at every step of medical product development is much more refined than the capacity to monitor the quality and design of the development program. Project teams are also under unremitting pressure to cut timelines, with significant rewards handed out for reaching a milestone early, and punishments for delays.

Medical product development is an expensive, high-risk undertaking, and adhering to timelines and milestones is essential. Yet monitoring quality is also critically important. The most pervasive risk in drug development is not so much outright falsification as it is the acceptance of inadequate data or inserting bias into the analysis. For example, it is well known that if experiments with contrary results are not included in the overall assessment or outlying data points are excluded from analysis, the interpretation can be biased.

Duke behavioral scientist Dan Ariely has made the argument that while we tend to focus on egregious cases, the much bigger issue is the less sensational but far more common acceptance of sloppiness, bias, or manipulation in data collection and analysis.

Lest one assume that these issues are unique to the medical products industry, there’s plenty of evidence that the same issues are common in academia. Competition for grants and the desire to see one’s theories upheld in research findings offer perverse incentives and frequently spawn dubious work that ranges from sloppy to outright fraudulent. My own institution, Duke University, has seen several high-profile cases in which the manipulation of research results appears to have been driven by these factors, and questionable data and analyses were accepted by faculty co-authors because the results seemed consistent with their theories and led to further successful grants and notoriety for the investigators and the institution.

If cheating or “fudging” is as ingrained as behavioral research suggests, what are we to do? Given the sheer volume of data and constraints imposed by available resources, it’s simply not possible for the FDA to review every preclinical and clinical protocol or audit every piece of data involved in complex submissions detailing every step of product development. Nor is it possible to completely track all events involved in a product’s use.

As in other industries, ensuring quality requires intelligent design of the oversight system so most problems can be detected in real time. In both manufacturing and clinical trials, an approach known as quality by design specifies key dimensions of quality depending on the purpose of the effort and deploys systematic sampling to judge whether those benchmarks are being met. Importantly, such systems depend on an institutional culture that is dedicated to integrity and quality, even at the cost of slowing timelines when necessary. This cultural aspect is perhaps the most difficult to assess in a positive sense, but when multiple or high-profile failures occur, evaluation is needed to determine whether those failures reflect a systemic cultural problem rather than an unfortunate coincidence of isolated events.

Given the dire consequences that can stem from a lack of integrity, independent reviews of study design, data integrity, and analysis are essential. This is a core function of the FDA that protects not only the United States, but also the rest of the world. Being untruthful with the FDA carries civil and criminal penalties and risks a loss of trust for a company’s entire portfolio.

If the FDA is to discharge this essential mission on behalf of the public, however, it must have adequate funding in order to maintain a talented workforce with deep knowledge of the relevant medical products, quantitative expertise, and highly specialized clinical and scientific judgment. An important but challenging part of this effort revolves around balancing involvement with industry while at the same time vigilantly guarding the agency’s independence. Sealing off the FDA from the scientific and clinical worlds leaves it vulnerable to not understanding key scientific and clinical issues, but erring too far in the other direction exposes the agency to the risk of undue influence from the industry it’s charged with regulating.

One particularly thorny issue is how to acknowledge that a potentially concerning signal is present. As I noted earlier, Novartis became aware of the data integrity issue during the critical final phase of FDA review, and its standard operating procedures called for internal investigation before notifying regulators. It’s also true that if the FDA were notified every time a question arose about internal study data, the agency would need a much larger budget and many more personnel to track down and resolve all these signals.

However, these were not ordinary circumstances, and the question of whether standard operating procedures should have been overruled is a matter of ongoing debate. If the review had been delayed, given that the data integrity issue did not affect the positive clinical trial results, would that have been good for the children waiting for access to the drug? On the other hand, what does the apparent concealment of data issues by AveXis and its parent company Novartis say about the likelihood of other critical issues being hidden?
