I really wanted to hear John Ioannidis (Stanford University, Stanford, U.S.A.) speak in the morning about "Re-analysis and replication practices in reproducible research," but I was so tired that I didn't make it until later. I did have time, though, to speak with Skip Garner. I learned that eTBLAST, the text comparison and search tool that populated the Déjà vu database from MEDLINE, was turned off when he left his previous school. But there is a follow-on project, HelioBLAST. More on this later.
Ana Marušić led the session on Retractions, which included a very curious case. There were three talks about this case; it was a shame they could not have been given as one talk three times as long (and without two talks on other topics in between).
Alison Avenell (University of Aberdeen, UK) gave the first two talks. She spoke first about "Novel statistical investigation methods examining data integrity for 33 randomized controlled trials in 18 journals from one research group." While preparing a Cochrane review, she and her colleagues noticed a rather odd set of studies by the same Japanese authors that managed to recruit and interview 500 women with Alzheimer's and 280 men and 374 women with stroke in just a few months, interviewing the participants every four weeks over a five-month period. And the studies all had the same results, although the patients were supposedly different.
Doing some statistics on the reported values showed that it was highly unlikely the data were genuine. They wrote to the authors and quickly received a reply that this was an error and that they would correct it. Instead of a retraction, however, only a correction was published.
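Their actual statistical methods are described in the talk and the later Neurology paper; purely as an illustration of the kind of check that is possible (my own sketch, not necessarily their approach): in genuinely randomized trials, p-values for baseline differences between the arms should be roughly uniform between 0 and 1, so a pile-up of values near 1 across many trials (groups that are "too similar" too often) is a warning sign.

```python
# Sketch only: check whether baseline-difference p-values across trials look
# uniform, as they should under genuine randomization. The numbers below are
# made up for illustration; this is not the authors' actual method.
from scipy import stats

# (mean, sd, n) for the same baseline variable in each arm, one pair per trial
baseline_rows = [
    ((65.2, 8.1, 140), (65.3, 8.0, 140)),
    ((23.4, 3.2, 187), (23.4, 3.1, 187)),
    ((120.1, 11.5, 250), (120.0, 11.4, 250)),
]

p_values = []
for (m1, s1, n1), (m2, s2, n2) in baseline_rows:
    result = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2)
    p_values.append(result.pvalue)

# Under honest randomization these p-values are ~Uniform(0, 1); a
# Kolmogorov-Smirnov test flags systematic deviation (e.g. all near 1).
ks = stats.kstest(p_values, "uniform")
print(p_values, ks.pvalue)
```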
By now Alison's group was looking at the 33 other RCTs from this group that they could find. They were published in 16 journals (the most prominent being JAMA) over 15 years, with a total of 26 co-authors at 12 institutions. The group tried to see what the impact of these papers was, that is, in how many reviews this incorrect data was currently being used. They found 12 of the studies cited in secondary publications, one guideline, and 8 trials that used these results as part of their research rationale, involving over 5000 people! That means that even at a conservative cost of $500 per person per study, $2.5 million was spent in the belief that this work was building on solid research. And since they were only looking at English-language publications, the impact was probably even wider.
In all, it took three years to get the JAMA paper retracted. Someone from the audience noted that it is difficult to get journals to retract papers anyway, mostly for legal reasons. Andrew Grey (University of Auckland, New Zealand) reported on the problems they had getting any of the papers retracted and getting their own paper about the case published (Neurology. 2016 Dec 6;87(23):2391-2402. Epub 2016 Nov 9). He used a timeline that grew more and more complicated as they kept writing back to unresponsive journals. He identified some interesting issues:
- How should journals deal with meta-reviews that are based on retracted work?
- Should journals be more forthcoming in the face of unresolved concerns? If it takes 3 years to retract an article, there will be many people who read the paper and perhaps acted on it.
- Should published correspondence about retracted papers also be retracted?
- They also emailed medical societies and institutions at which the authors worked - should they have done this?
The other talk in the session was by Noemi Aubert Bonn (Hasselt University, Diepenbeek, Belgium). For some reason it was in the retractions session, although it was not about retractions but about research on research integrity: how is it performed, how is it published, and what are the consequences?
In a plenary session about the Harmonization of RI Initiatives, Maura Hiney from the Health Research Board Ireland (HRB), lead author of the ALLEA European Code of Conduct for Research Integrity (2017), charted the progress made across the WCRI conferences: At the first conference people were still discussing whether or not there really was a research integrity problem. Later conferences grappled with defining it, finding methods to investigate it, and deciding who is responsible for it; now that there are so many different definitions, methods, and policies, the question is how they can be harmonized. Simon Godecharle had presented various maps at the 2013 WCRI showing the wide variations that exist in Europe alone, starting with language. At least by 2017 there are fewer countries with no policy at all.
Daniel Barr from Deakin University, Australia, spoke on the "Positive Impacts of Small Research Integrity Networks in Asia and Asia-Pacific," referring back to the Singapore Statement and noting that RIOs (research integrity officers) are quickly becoming the norm at universities.
Allison Lerner from the National Science Foundation, U.S.A., spoke about the NSF's role in "Promoting Research Integrity in the United States." She spoke of their processes of auditing and investigating cases of fraud, and noted that they have had some extensive plagiarism cases, some of which also involved fraud. Both PubPeer and Retraction Watch were given a shout-out as non-governmental bodies that work on monitoring integrity.
I then did some more session hopping, as the interesting talks were in different rooms.
Skip Garner talked about finding potential grant double-dippers; it is a process similar to finding duplicate abstracts in MEDLINE or duplicate abstracts for papers given at different conferences (or at the same conference in different years). He spoke a bit about Déjà vu and how many of the duplicates he uncovered were eventually retracted. But the rate of retractions is lower than the rate at which new questionable manuscripts enter the scientific corpus, which is worrying. Even two years after a retraction, 20 % of retracted papers are not tagged as such, and thus people continue to use them.
For fun (yes, computer people perhaps have different ideas of "fun" than other folks) he downloaded the abstracts from scientific meetings that had more than 5000 abstracts each and permitted a longitudinal investigation because the meeting recurs yearly or every other year. Each abstract was compared with every other abstract from the same meeting, with all abstracts from the previous meetings, and with his collection of MEDLINE abstracts.
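eTBLAST's actual matching is more sophisticated, but as a minimal sketch of how such pairwise comparison might be done (my own illustration, assuming plain TF-IDF cosine similarity on a toy set of abstracts), something like this already flags suspicious pairs:

```python
# Sketch: flag abstract pairs whose TF-IDF cosine similarity exceeds a threshold.
# Illustrative only; eTBLAST/HelioBLAST use their own similarity measures.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "A-101": "We recruited 500 women and measured vitamin D levels over five months.",
    "B-202": "We recruited 500 women and measured vitamin D levels over five months!",
    "C-303": "A completely unrelated study of fruit fly wing development.",
}

ids = list(abstracts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
sim = cosine_similarity(tfidf)

THRESHOLD = 0.85  # arbitrary cut-off for "worth a human look"
for i, j in combinations(range(len(ids)), 2):
    if sim[i, j] >= THRESHOLD:
        print(f"Possible duplicate: {ids[i]} vs {ids[j]} (similarity {sim[i, j]:.2f})")
```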
He encountered multiple submissions, replicate abstracts with different presenting authors, replicate abstracts from previous years, and plagiarized abstracts. He assured the audience that he did not run this meeting :)
His double-dipping work has been published ("Double dipping, same work, twice the money") and reported on ("Funding agencies urged to check for duplicate grants") in Nature in 2013. [Drat, I should have downloaded the first one when I was in Amsterdam. The VU has full access to Nature; my school doesn't. Of course, I could buy it for $18....]
During Q&A he was asked if he had reported the cases he found. Indeed he did, and the journals didn't like it. Seems the US government also subpoenaed his database...
Miguel Roig (St. John's University, NY, U.S.A.) spoke about editorial expressions of concern (EoCs). He and some of the Retraction Watch crew pulled EoCs out of PubMed and examined them, looking at the wording of each EoC and the eventual fate of the paper. Only 7 % resulted in a correction, 32 % in a retraction, and in 4 % of the cases the matter was resolved. For the rest (almost 58 %!!!) there was no follow-up information to be found, even when the EoC had been published four years earlier. He referred to a very recent publication (Feb. 2017) on the same topic: Melissa Vaught, Diana C. Jourdan & Hilda Bastian, "Concern noted: a descriptive study of editorial expressions of concern in PubMed and PubMed Central," Research Integrity and Peer Review 2017 2:10. He closed by encouraging journals to be more specific about the reason for the concern and to use EoCs more often.
Mario Malicki (University of Split School of Medicine, Split, Croatia) spoke about his "hobby project" (i.e. no funding) looking at third-party inquiries about possible duplicate publications. He discovered that the National Library of Medicine will assign the tag "duplicate publication" in the [pt] field if it finds a pair during manual indexing. But no action is taken, and since the tag is hard to find, people don't see it. He downloaded 555 potential duplicate publications and checked whether they had been retracted. He contacted 250 editors about the duplicates, although 16 editor email addresses could not be located at all. Not all editors bothered to answer his inquiry, although a few of the papers were eventually retracted. The correspondence with the editors was evaluated, as specific questions had been asked, such as: are you aware of the duplicate-publication tagging in MEDLINE? Only 1 editor was aware of this, 15 said no, and 165 did not bother to answer the additional question!
Mario catalogued the answers and the reasons given for not taking action, and, as far as he could obtain the information, the excuses of the authors and above all of the publishers for their errors. It seems common that an article is published twice in different volumes (104 times), published again in a sister journal (64 times), or even published twice in the same volume (21 times). Over the span of 4 years, 9 % of the articles identified have been retracted. He did not determine the publishers or the precedence of the publications.
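Incidentally, the tag Mario mentioned can be queried directly, since "Duplicate Publication" is a publication type in PubMed. A minimal sketch using NCBI's public E-utilities (error handling and paging omitted; only the query term and JSON fields are standard, the rest is illustrative):

```python
# Sketch: count and list PubMed records indexed with the "Duplicate Publication"
# publication type via the NCBI E-utilities esearch endpoint.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '"duplicate publication"[Publication Type]',
    "retmax": 20,      # first 20 PMIDs; raise or page through for more
    "retmode": "json",
}
result = requests.get(EUTILS, params=params, timeout=30).json()["esearchresult"]
print("Total flagged records:", result["count"])
print("Sample PMIDs:", result["idlist"])
```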
J.M. Wicherts (Tilburg University, Tilburg, The Netherlands) has a theory, namely that transparency and integrity of peer review are somehow linked. In order to show this, he set up QOAM: Quality of Open Access Market. Here readers rate a journal on various parameters on a scale of 1 to 5. Since it is not made clear which end of the scale is best (1 is the top grade in Germany), there is a cross-cultural issue. To date about 5000 ratings have been submitted; there is one particularly active rater. He saw this positively; I would check to make sure it is not someone hired by certain journals. As a quick test, I looked up my favorite rabid anti-vaxxer paper, published in a journal that was on the now-defunct B-list. Sure enough, the journal was in there, with three reviews and a grade of 4.6. I don't really believe that this is a good idea.
At the closing session Nick Steneck presented the Amsterdam Agenda, which had been worked on over the past days, for assessing the effectiveness of what are seen as the most important ways to promote integrity in research.
It was quite an experience, these two conferences in Brno and in Amsterdam. They were different in focus, but both offered much for me to learn. And it was fantastic to meet in person all these people I had corresponded with by email! The next WCRI will be in 2019 in Hong Kong, jointly organized by people in Hong Kong and Melbourne.
I have one other link that I picked up from a tweet I want to preserve here: The Authorship and Publication subway map from QUT Library and Office of Research Ethics & Integrity.
Over and out!