Medtronic and open science in ethical perspective

Via Karan Chhabra at Project Millennial, two meta-analyses have been published in the Annals of Internal Medicine reviewing patient-level data from Medtronic's clinical trials of recombinant human bone morphogenetic protein-2 (rhBMP-2).  This product is used in spinal fusion surgery to promote bone regrowth, but, as Karan describes, the available research on its efficacy over pelvic bone autograft is contentious.  The studies are significant because they use exactly the same patient-level data and reach conclusions that are similar, but not identical.  The investigators were able to do this through the cooperation of Medtronic and the Yale University Open Data Access (YODA) project, whose purpose is to make this type of data more widely available to researchers, especially independent researchers.  The two meta-analyses, and four accompanying editorials on the project, can be found in the Annals' current issue.  The relevant articles are all currently free.

In “herald[ing] a historic moment in the emerging era of open science”, the editorial by Krumholz et al. raises several important ethical issues in the conduct of open science.  Wider availability of clinical research data, especially industry-sponsored research, is a public and scientific good to be celebrated.  But I found myself wondering how we can make sure that we don't carry forward other practices of our current (and, in my opinion, broken) research culture.  This post deals with some of those questions at the broad level of our culture and our engagement with society.

One major concern in data sharing, patient privacy, is addressed in YODA's data release policy (PDF).  Trial data provided to investigators by Medtronic is de-identified, and investigators are prohibited from attempting to re-identify subjects.  Sharing the data onward is also prohibited: a safeguard for subjects, and against inappropriate studies or those that, due to redundancy or poor study design, are unlikely to yield meaningful results.  Of course, this is also a safeguard for Medtronic, “the rightful owner of the clinical trial data,” and for Yale.  Investigators must also submit the project to their own IRBs before completing a data use application with Yale.

As Krumholz et al. note, selective publishing of clinical data is a serious problem for medical practice as well as academia:

The YODA Project seeks to address the problem of unpublished and selectively published clinical evidence (45). Nearly half of clinical trials are never published, and many that are have long delays in publication. Among those published, the information is often incomplete. Evidence suggests that some data are not missing at random and that the sharing of data, particularly patient-level data, often provides new insights that are consequential to patients.

While some industry sponsors may indeed be selectively seeking publication of their trial results (one good discussion of this is in Harriet Washington's recent book Deadly Monopolies), I believe that the problem may extend in some cases to independent investigators.  I have witnessed an attitude among investigators that if a studied intervention or method does not work, there is nothing to report, or that a negative result will be uninteresting to reviewers.  These investigators often conclude that there is no point in continuing with their study or writing up the results.  They may instead rephrase the hypothesis or focus on a narrower patient group in which they can report a positive result.  Even if this modified study makes it to publication, the authors may not describe the negative result in the other patient groups in enough detail to be helpful to other clinicians.

Medical editors should seek to correct their own biases against negative or neutral results (where such biases exist), or to correct this perception among investigators (where they do not).  However, future open data projects should take care to avoid related biases so that the problem is not simply pushed upstream.  Now that the Medtronic data is publicly available, YODA's data release policy states that “the review will not include peer review of the submitted project proposal to evaluate its scientific merit or validity; it will only evaluate whether the proposal is for scientific purposes.”  However, the selection process for the two current research groups was “competitive” (Krumholz).  Since I don't believe that many current (usually unofficial) authorship practices are equitable, I am wary of such competitive processes.  And I share Michael Nielsen's skepticism that such processes can truly select the “best” team to work on a scientific problem:

Over the past decade the complexity theorist Scott Page and his collaborators have proved some remarkable results about the use of metrics to identify the “best” people to solve a problem (ref,ref). Here’s the scenario Page and company consider. Suppose you have a difficult creative problem you want solved – let’s say, finding a quantum theory of gravity. Let’s also suppose that there are 1,000 people worldwide who want to work on the problem, but you have funding to support only 50 people. How should you pick those 50? One way to do it is to design a metric to identify which people are best suited to solve the problem, and then to pick the 50 highest-scoring people according to that metric. What Page and company showed is that it’s sometimes actually better to choose 50 people at random. That sounds impossible, but it’s true for a simple reason: selecting only the highest scorers will suppress cognitive diversity that might be essential to solving the problem. (link)
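For readers who want a feel for how such a result can arise, here is a toy Python sketch loosely inspired by the kind of model from Lu Hong and Scott Page that Nielsen is describing.  Everything specific below (the ring-shaped solution landscape, the step-size heuristics, the pool of 60 agents and the team of 10) is my own illustrative assumption rather than the published model's setup; the point is only to show the mechanism, namely that the top individual scorers tend to share similar heuristics while a randomly chosen team covers more of them.

```python
# Toy, illustrative simulation loosely in the spirit of Hong and Page's
# "diversity trumps ability" result.  The landscape, heuristics, and sizes
# below are my own assumptions, not the published model's parameters.
import random

RING = 200                 # circular landscape with RING candidate solutions
STEP_SIZES = range(1, 13)  # step sizes a heuristic may use
HEURISTIC_LEN = 3          # each agent's heuristic is 3 ordered step sizes
POOL = 60                  # agents who want to work on the problem
TEAM_SIZE = 10             # agents we can actually fund
TRIALS = 50

def climb(landscape, start, heuristic):
    """One agent hill-climbs from `start`, trying its step sizes in order."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % RING
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
                break
    return pos

def team_value(landscape, start, team):
    """Agents take turns improving the shared solution until no one can."""
    pos = start
    improved = True
    while improved:
        improved = False
        for heuristic in team:
            new_pos = climb(landscape, pos, heuristic)
            if landscape[new_pos] > landscape[pos]:
                pos, improved = new_pos, True
    return landscape[pos]

def ability(landscape, heuristic):
    """Individual score: average value found over every possible start point."""
    return sum(landscape[climb(landscape, s, heuristic)] for s in range(RING)) / RING

random.seed(0)
top_total = random_total = 0.0
for _ in range(TRIALS):
    landscape = [random.random() for _ in range(RING)]
    pool = [tuple(random.sample(STEP_SIZES, HEURISTIC_LEN)) for _ in range(POOL)]
    ranked = sorted(pool, key=lambda h: ability(landscape, h), reverse=True)
    top_team = ranked[:TEAM_SIZE]                 # the highest-scoring agents
    random_team = random.sample(pool, TEAM_SIZE)  # agents picked at random
    start = random.randrange(RING)
    top_total += team_value(landscape, start, top_team)
    random_total += team_value(landscape, start, random_team)

print("team of top scorers:", round(top_total / TRIALS, 3))
print("random team:        ", round(random_total / TRIALS, 3))
```

Depending on the seed and parameters, the random team can match or beat the team of top scorers in this toy setup; the sketch is meant to illustrate the mechanism Nielsen describes, not to reproduce Page's results.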

Krumholz et al. also express hope “that companies can address their declining public perception by committing to data transparency and benefit from a culture of open science.”  The complex moral problems of the pharmaceutical industry are beyond the scope of today's post.  However, it is worth noting that this industry's declining public perception is not seriously harming its profits.  Public disdain for the industry is based in part on patients' inability to go elsewhere for needed treatments (the excellent Harriet Washington also discusses this dynamic in the work linked above).  To the extent that companies' “declining public perception” harms them, it is most likely to do so in the form of litigation and regulation.  But if companies bring products like rhBMP-2 to market knowing that they could harm patients, fail to share important safety data, or suppress research and treatment through abusive legal means such as life patents, then they should face litigation and regulation to address those harms, no matter how much data they make publicly available after the fact.  In response to the authors' question, “Will society reject claims that data are proprietary when they relate directly to decisions that people are making about products that are on the market?”, I believe that it will, and that it should.

In addition to questions about how we will organize research for the public good at the broadest level, data sharing also has implications for the local organization of researchers.  I'll be discussing these next time.  Thanks for reading!
