Friday, June 7, 2019

WCRI 2019 - Day 3

Day 0 - Day 1a - Day 1b - Day 2 - Day 3

One's brain is already exploding, and there is one more day ahead. I decided to miss the first plenary about fostering research integrity in Malaysia, Korea and China.

Session: Publishing 1

Ana Jeroncic, University of Split School of Medicine, Split
"History of scientific publishing requirements: a systematic review and meta-analyses of studies analysing instructions to authors"

It is interesting to see all of the things that can be investigated. This one was looking at Instructions to Authors (ItAs) that describe manuscript submission procedures and journal policies. In particular, they conducted a systematic review of papers about ItAs. They found 153, the number increasing as digital publishing takes over. The topics slide was only up for a few seconds, but ItAs address issues beyond manuscript formatting such as publication ethics, clinical trial registration, authorship, conflicts of interest....
I asked about plagiarism of ItAs, that is, non-affiliated journals just copying ItAs from other journals, but they didn't look at that.

Michael Khor, Professor at Nanyang Technological University, Singapore, managed to fit something like 40 slides on "Global trends in research integrity and research ethics analysed through bibliometrics analysis of publications" into his allotted 10 minutes. It was quite entertaining, but one could barely take notes, as looking down momentarily meant that you missed a slide or two. It seems he looked at over 25 000 publications on research integrity and research ethics, using a graph representation tool to visualize relationships. He was showing topic maps, selecting by country to show how the topics are quite different from country to country and how the topics have changed over time. I would love to see this in print, as I need time to look over the graphs and take in what exactly has changed (and what disappears).

It was noted in the discussion that Scottish authors self-identify as Scottish and not as UK :)

The talk I was waiting for was Harold "Skip" Garner, VCOM (Via College of Osteopathic Medicine), Blacksburg, speaking about "Identifying and quantifying the level of questionable abstract publications at scientific meetings." Skip is the driving force behind eTBLAST and Déjà vu, a technique that uncovered many duplicate publications and plagiarisms in biomedical publications. He currently runs HelioBLAST, a text similarity engine that finds text records in Medline/PubMed that are similar to the submitted query. You plug in up to 1000 words and look at what bubbles up.

He collected conference abstracts found on the open web and has set up an Ethics DB that lets one browse through or do some text mining on the data. There are a lot of false positives, such as people submitting five versions of their manuscript and the conference making all of them available web-facing. But there were questionable things that turned up, such as the same abstract at different conferences with different author orders. Interestingly, he was able to find some instances of salami slicing using this method. He then compared the abstracts of 2018 to Medline. Here he turned up things such as previously published material being submitted to a conference two years later. He has classified these as "old findings." It seems that since there is such a time lag between abstract submission and acceptance or rejection, people submit their work to multiple conferences.

As a side-effect of his similarity investigations he can take the accepted papers for a conference and let the computer organize them into tracks of similar papers.

Catriona Fennell, Elsevier, Amsterdam

"Citation manipulation: endemic or exceptional?"

Estimated prevalence of citation manipulation by reviewers based on the citation patterns of 69,000 reviewers

She started off with a Dutch saying, "never let a good crisis go to waste". There was a scandal involving citation stacking in soil science that had affected Elsevier. They investigated the entire area of citation coercion by reviewers, citation pushing by editors, and citation stacking in journals.

She noted what a journal can do to fight this:
  • Make it clear that citation coercion is unacceptable
  • Educate editors
  • Remove reviewer privileges
  • Inform institutes and funding bodies
  • Create editorial systems to detect self-citations in reviews or revision letters?
  • Retract citations?
  • Black-list worst offenders?
  • Share information with other journals?
The last four are not really possible; in particular, citations cannot be retracted. There are COPE guidelines for reviewers, and Elsevier ethical guidelines. There is also an article by Christopher Tancock about the practice: "The ugly side of peer review".
Elsevier looked through 54 000 reviews stored in their systems and identified 49 persons to look more closely at.

In particular there was "Dr. X" with an h-index of 90 and 20 000 citations in Scopus. They contacted him/her, but the person was entirely unrepentant, the institute was unresponsive, there was no funding body for the research, and the person was active as an author and even more so as a reviewer. The person is now no longer a reviewer for Elsevier.
She also spoke about generic reviews that are so unspecific they fit every paper. She called them "horoscope reviews". They saw some reviewers apparently copying & pasting these reviews into their responses.

The last speaker in the session (and rightly so the winner of one of the best speaker awards) was Alexander Panchin, Institute for Information Transmission Problems of the Russian Academy of Sciences, Moscow on "Concealed homeopathy: a natural test of peer-review quality".

A Russian pharmaceutical inventor (and holder of a patent on a homeopathic "remedy") has "discovered" that it cures pretty much all ailments. Alexander had pictures of it being sold in stores in Russia and heavily advertised. It is made up of "diluted" antibodies, supposedly 1:10^16. There are variations that "combine" dilutions of 1:10^24 and 1:10^30. There is essentially nothing in the pills except sugar, which is why it is a tad off to take these pills to "cure" diabetes.
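A quick back-of-the-envelope calculation (mine, not from the talk) shows why such pills contain nothing: Avogadro's number is only about 6 × 10^23 molecules per mole, so a 1:10^24 dilution leaves, in expectation, less than one molecule of the original substance.

```python
# Back-of-the-envelope check (my own illustration, not from the talk):
# how many molecules of the original substance survive a dilution?
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(moles_of_substance, dilution_factor):
    """Expected number of molecules left after diluting by dilution_factor."""
    return moles_of_substance * AVOGADRO / dilution_factor

# Start generously with one full mole of "active" antibody:
print(molecules_remaining(1.0, 1e16))  # ~6e7 -- already a vanishing trace
print(molecules_remaining(1.0, 1e24))  # ~0.6 -- less than one molecule in the whole batch
print(molecules_remaining(1.0, 1e30))  # ~6e-7 -- essentially certain to contain none
```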

In the patent application it is called a homeopathic drug, but it is now called "ultra-low dosage" or "release-active" drugs.

Alexander tracked down many papers published by this gentleman; he was even an editor of a special edition published on SpringerLink that included 46 of his own papers! The papers do not disclose his conflict of interest and often have very flawed study designs, showing that peer review failed to kick in.

Alexander wrote to the journals and has managed to get three retractions and two promises to retract, but the authors of a review article that includes many references to this stuff refuse to issue a correction until ALL of the flawed papers are retracted....

Even though the Minister of Science has named this manufacturer as the most damaging pseudoscience project, scientists and newspapers that have reported on this have been sued, so I am keeping the name off the blog.

After lunch we had the Plenary session on
Predatory publishing and other challenges of new models to share knowledge
I was really looking forward to this session and it didn't disappoint!

Deborah C. Poff, the new COPE chair and a philosopher from Ottawa titled her talk "Complexities and approaches to predatory publishing"

She spoke at lightning speed, getting faster as time began running out. It could have been at least a two-hour lecture, so jam-packed it was with really good stuff. I could barely keep up, so I hope I get the highlights right.

A definition for predatory publishing is problematic, as there is much overlap with legitimate but new or smallish publishers. She looked at necessary and sufficient conditions for a definition, but found that while deceit is necessary, sufficient conditions are vexing to try and capture.

Predatory publishers (PP) cheat and deceive some authors, charging publishing-related fees without providing services; they deceive academics into serving on editorial boards; they appoint editorial board members without their knowledge; they conduct no peer review; they refuse to retract or withdraw problematic papers; etc.

The list goes on: misleading reporting, language issues, lack of ethical oversight, lack of declarations of conflicts of interest, lack of corrections or retractions, lack of a qualified EiC (if any), made-up rejection rates, false impact factors, false claims of being indexed in legitimate indexes, and false claims of membership in publication ethics organizations, including forging and falsifying the logos of such organizations. COPE apparently had to fight a forged COPE logo.

What should we call them, anyway? Arguments against the term "predatory": It is not descriptive or instructive, so some suggest using fake, rogue, questionable, parasitic, deceptive, etc. "Predatory" suggests victims, powerless people who are acted upon without their full knowledge, while a number of studies have shown that some scholars knowingly publish in such journals. Calling the issue "predatory" obviates or mitigates the personal responsibility for choosing where to publish.

The best argument for using the term: Since Jeffrey Beall coined the term, why not use it?
COPE is undecided on what name is best.

I particularly liked Deborah's stakeholder analysis of who or what is harmed by these publishers:
  • The innocent author who is duped into paying for services without receiving them. They may lose status when peers discover that they have published in such a journal, and it can even lead to investigations. Since many such publishers refuse to retract, the damage done may be long-term. 
  • Legitimate Open Access Journals are easily confused with predatory Open Access Journals
  • Legitimate journals which are not top ranked or may not follow best practice are also easily confused with them.
  • Research and funding sources: This depends on whether the research published is legitimate or not. If the research is shoddy and gets published by a PP journal, it may be cited and thus pollutes the scholarly record. If a scandal arises, the scandal may tarnish publicly funded research.
  • Universities and their role in knowledge creation.
  • Citizens who pay taxes.
She pointed out that predatory publishers make a great business ethics case. In closing, she sees only two things that can be done:
  1. Caveat Emptor (let the buyer beware) - use Think / Check / Submit: do you read the journal yourself? Do you cite research published there? Do your colleagues? Who is the editor-in-chief?
  2. Addressing and pursuing predatory publishers as businesses committing criminal acts. The US Federal Trade Commission won a court case against the owner of OMICS and the company itself. The courts fined OMICS $50.1 million.

Bhushan Patwardhan, Professor of medicine, Vice chairman, University Grants Commission, New Delhi, spoke on "Research integrity and publication ethics: Indian scenario". Bhushan first spoke about the University Grants Commission and gave an overview of the India Higher Education sector.

There are more than 900 universities and more than 10 000 other institutions with 1.2 million teachers somehow coping with 36.6 million students. There are just shy of 150 000 publications produced in India per year, and unfortunately, many of these appear in problematic journals.

There is a paper about the situation in India: the authors selected 2000 Indian authors with papers in journals on Beall's list and sent them a survey. 480 responded, and almost 60 % were unaware that they had been publishing in a predatory journal:
Seethapathy, G. S., Santhosh Kumar, J. U. & Hareesha, A. S. (2016, December 10). India's scientific publication in predatory journals: need for regulating quality of Indian science and education. Current Science, 111(11), 1759-1764

Bhushan was shocked to find out just how many Indian publications were in predatory journals. India has just set up the Consortium for Academic and Research Ethics (CARE) in 2019. The goals of the CARE project are to
  • create and maintain a CARE list of reputable journals
  • promote research publications in reputable journals
  • develop an approach and a methodology for identification of quality journals
  • discourage publications in dubious journals
  • avoid long-term damage due to academic misconduct
  • promote academic and research integrity and publication ethics
He put up the URL of the CARE site, but the site was down for "maintenance," as it had not even been up for a day before it was cloned and published at a similar URL by unknown persons.

Then Matt Hodgkinson, Head of Research Integrity @ Hindawi Ltd., London, took the stage to give "A view of predatory publishing from an open access publisher". He first gave a bit of a historical overview and told us a bit about Hindawi. It was founded in Cairo in 1997, publishing the first subscription journals in 1999. In 2007 all journals were flipped to Open Access. In 2016 they created their Research Integrity team that handles all issues that arise at their journals. The headquarters of Hindawi moved to London in 2017.

He spoke of the impact that predatory journals have on legitimate Open Access journals: they are tarred with the same brush. They also create false impressions for authors, who now expect undue speed from legitimate publishers and, out of impatience, submit to two journals at once to see which publishes first (Matt called it "gazumping"). They have had so many instances of this, Matt told me over coffee, that they check for text similarity online twice: once at submission, and once more just before publication. Many times they have caught double dippers this way.

He expanded the concept of predatory publishers to what he called "cargo cult" publishers (ones who publish unedited theses or Wikipedia articles as "books"), paper mills, the selling of authorship, and faked peer review. He also noted that the subscription model is not immune to fakery - there are subscription journals that closely mirror the titles of legitimate publishers, something called hijacking.

He closed with some scandals (publications about elephant autism or space octopi) and then listed some of the newest ideas, the various pre-print servers. The question arises, however, how sustainable such initiatives are.

Although I was planning on visiting another session, Jenny Byrne insisted that the session on checking data and images would be very interesting, and she was right. I had thought that Elisabeth Bik was the only person around perusing doctored images, but it turns out there are quite a number of initiatives.

First up was Jana Christopher from FEBS Press, Heidelberg, speaking about "Image Integrity in Scientific Publications."

She observed that the prevalence of image aberrations in publications is generally underestimated. Although there are ways to catch simple-minded manipulators, much like with plagiarism, people are getting more sophisticated in hiding their tracks. Her focus is on Western blots, micrographs, and photos, anything that can be overlaid in Photoshop. If they match identically, there's a problem. She showed in a quick demo how she loads suspected duplicates into different color channels and overlays them. The result is black for identical portions of the image.
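A minimal numpy sketch of this kind of check (my own reconstruction of the idea, not her actual Photoshop workflow): identical regions of two images cancel to zero (black), and anything that was altered lights up.

```python
import numpy as np

def overlay_difference(img_a, img_b):
    """Pixel-wise absolute difference of two same-sized grayscale images.
    Identical regions come out as 0 (black); any divergence shows up bright."""
    a = img_a.astype(np.int16)  # widen so subtraction can't wrap around
    b = img_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)

# Two toy "blots" that are identical except for one spliced-in band:
blot1 = np.zeros((8, 8), dtype=np.uint8)
blot2 = blot1.copy()
blot2[3, 2:6] = 200  # the only altered region

diff = overlay_difference(blot1, blot2)
print(diff.sum() == 0)   # False: the images are not identical
print((diff > 0).sum())  # 4 pixels differ -- exactly the spliced band
```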

She differentiated between manipulated images and wrong images being used to illustrate a finding. Why do people do this? Some apparently want a cleaner, more striking image. Others want to show a particular feature more clearly. Then there are those who wish to show a result that was not actually produced.

She showed some more examples of pictures that have crossed her desk, cut-outs clearly shown as transparent background, the clone tool being used to overwrite undesirable portions of an image, or images that are supposed to show different plants but because of the pattern of the soil are clearly the same plant.

Renee Hoch, the Senior Manager and Team Manager of the Publication Ethics Team at PLOS One, San Francisco, sang the same song, second verse, with her talk on the "Impact of data availability on resolution of post-publication image concern cases."

She noted that image concerns make up 39 % of the concerns raised in her department, but 75 % of the retractions. She took 100 post-publication image cases from 2017-2019 and had a statistical look at them. The numbers flew by so fast, I was unable to keep up. 94 of the cases involved image duplication, the other 6 manipulation or fabrication. All fabrications have been retracted; for manipulations and duplications, about half have an Expression of Concern or a Correction.

Their big issue is that when a concern is raised, they request the original data, and none is forthcoming. The excuses are similar: can't find the files, hard-disk crash, person left the lab. Concerns are coming in up to 5 years after publication, but some countries only have a three-year retention policy. So that is clearly not sufficient. At times they wonder if the data ever existed at all, although there is a lot of honest error or poor practice.

What can a journal do? They can require submission of the raw image data, and have the peer-review done with the raw image data, as well as publishing that as supplementary material. This permits better assessment and the journal can make sure that the images are archived properly.

In the discussion it turned out that many journals, upon requesting original data, get sent PowerPoint slides with screenshot images - completely useless for the task.

Daniel Acuna, a computer scientist from Syracuse University in New York State, USA, provides tools to Research Integrity Officers (RIOs) to help investigate cases. His talk on "Helping research misconduct investigations: methods for statistical certainty reporting of inappropriate figure reuse" was about a statistical tool that helps evaluate whether the excuse of a scientist ("it just happened by chance") really makes sense.

Similar instruments might indeed generate similar artefacts, image processing software might generate similar noise, software reproducibility might generate similar results, and some reuse of images is legitimate, for example, generic brains used as underlays for captions.

They scraped about a million images from PubMed Central, having to extract them from PDFs, which does not make things any easier. They calculated a similarity index, setting a high likelihood threshold and then looking at the results. They managed a 63 % area under the ROC curve, which is not brilliant, but marginally better than flipping a coin (50 %). They need more images in order to refine their algorithm.
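For intuition about that 63 % figure: the area under the ROC curve equals the probability that a randomly chosen true-reuse pair gets a higher similarity score than a randomly chosen innocent pair, so 50 % really is a coin flip. A generic pure-Python sketch of the computation (my illustration, not their pipeline):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive (true reuse) scores higher
    than a random negative (innocent pair). 0.5 = coin flip, 1.0 = perfect."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy similarity scores: true duplicates tend to score higher, but overlap a lot,
# which is what drags a detector down toward chance performance.
duplicates = [0.9, 0.7, 0.6, 0.4]
innocent = [0.8, 0.5, 0.5, 0.3]
print(roc_auc(duplicates, innocent))  # 0.6875 on this toy data
```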

Thorsten Beck from the HEADT Centre (funded by Elsevier) at the Humboldt University, Berlin, spoke about the image integrity database that they are putting together. Bik, Fang & Casadevall have shown in their 2016 and 2018 papers that about 4 % of all published images have issues; a good 35 000 papers are in want of retraction for this reason.

They want to build a structured database of images from retracted papers, recording as much information as they can about the authors of the publications, their institutions, the reason for the retraction, etc. However, retraction notices are famous for being vague, on account of authors suing journals. They want to keep track of who manipulated the image and who detected it, but seeing as how institutions are highly reluctant to disclose the results of an investigation, good luck in trying to obtain that data. [Although Nature has a World View column this week by C. K. Gunsalus calling for institutions to be more transparent about their decisions.] And then there are copyright issues, so there are many challenges.

Jennifer Byrne, an oncologist from the Children's Hospital at Westmead, Australia, presented her work together with Cyril Labbé (University of Grenoble, France) on the Seek & Blastn tool.

She first gave us a two minute introduction into genetics, noting that the nucleotide sequences for certain genes are such long strings of letters that no human being can easily remember them. She does, however, remember the name given to some cell line, TPD52L2, that she had worked with ages ago. There had been a dozen and a half papers about this many years ago, and suddenly it was popping up all over the place in papers by various Chinese authors for a wide variety of cancers, which is impossible. The cells come from only one organ.

[Matt Hodgkinson has sent in a correction: "Small correction - TPD52L2 is a gene Jennifer cloned in '96. The authors of suspect papers often reported studying it in cell lines known to be really HeLa & they often got primer sequences for detecting & knocking down the genes wrong." I can't pretend to understand that, but I'm thankful for the correction!]

As she began reading the papers, she realised that they didn't make sense at all, something about the targeting sequence being off. In speaking with Cyril about this issue, he immediately saw that the nucleotide sequence is just one big word, so it is simple to parse them out of papers. He went and did so, and was even able to identify the context in which these nucleotide sequences were used, so that impossible uses of them could also be identified.
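Since a nucleotide sequence really is one long "word" over the letters A, C, G and T, pulling candidates out of running text comes down to a single regular expression (a sketch of the idea only; the threshold and names here are my own, and the real tool does far more, including classifying the context each sequence is used in):

```python
import re

# A nucleotide sequence is one long "word" over A/C/G/T, so candidate
# sequences can be extracted from running text with a simple regex.
# The minimum length of 15 is an arbitrary choice to skip ordinary words.
SEQ_PATTERN = re.compile(r"\b[ACGTacgt]{15,}\b")

def extract_sequences(text):
    """Return all nucleotide-like 'words' found in the text, uppercased."""
    return [m.group(0).upper() for m in SEQ_PATTERN.finditer(text)]

sample = ("The forward primer ACGTGGCATTACGGATCCA was reported for the "
          "knockdown, but the figure legend listed TTGGCCAACGTACGTAGGC instead.")
print(extract_sequences(sample))  # ['ACGTGGCATTACGGATCCA', 'TTGGCCAACGTACGTAGGC']
```

Each extracted sequence can then be checked (for instance against a sequence database) to see whether it actually targets the gene the paper claims it does.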

Like many software systems in this area, it has high false-positive and false-negative rates. The positives must thus be manually examined before flagging a paper. They published a paper about it in Scientometrics, "Striking similarities between publications from China describing single gene knockdown experiments in human cancer cell lines," identifying the flagged papers. We had a look at the papers they identified with nucleotide sequence overlap and the ones I was reporting on with text overlap, and found that the same journal was publishing these papers. They are having very similar problems as I am in getting the offending papers retracted.

The service is available online for checking whether there are any publications with a particular sequence. They caution to manually verify a paper before taking any action such as commenting or contacting someone. This is not an automatic detector! Cyril will continue refining the algorithm used, he said after the presentation.

We were now down to the final session.

Maura Hiney and Daniel Barr reported on their results from the focus track on ensuring integrity in innovation and impact, and Klaas Sijtsma reported on the progress being made with the Registry for Research on the Responsible Conduct of Research. He now revealed the purpose of some of the seemingly odd data being collected at submission time: they wanted to see how many of the accepted papers had been pre-registered. It wasn't many. I think that pre-registration is fine for clinical trials, but there are many other methods of doing research that do not fit the pre-registration mindset. In particular, when you observe something odd and end up chasing down a crooked alley and suddenly have a great big new field show up, you will hardly have pre-registered what you are writing up for other scientists.

David Moher reported on the Hong Kong Manifesto for Assessing Researchers, discovering that the principles need a good bit of re-drafting.

The best speaker awards for young researchers were announced, and then Lex Bouter extended an invitation to attend the 7th World Conference for Research Integrity, to be held in 2021 at the University of Cape Town, South Africa.

That's it for blogging, I'm writing this on the plane with a screaming baby in my aisle, I will now put some music on the noise-cancelling headphones, in the hopes of drowning out the piercing screams. We've still got 4 hours to go....
Update 8 June 2019:  Vasya Vlassov had a friend film his talk about Dissernet.

Wednesday, June 5, 2019

WCRI 2019 - Day 2


I was sooo tired that I slept in and missed the first session. Sorry about that.
I wanted to see talks in two sessions, natch, so I had to do some session hopping.

Session: Retractions

Lydia Liesegang, a sociologist with the TU Berlin, Germany, spoke on "The impact of published incorrect scientific information on the knowledge production of scientific communities."

The talk was not actually about retractions, but about incorrect information. She assumed that when people cite a publication, they are using it to support an argument. That may be the general case, but I have cited things as examples of bad science, and I am not alone, so I don't buy into this argument. She did a citation analysis of 30 problematic papers, finding 126 citing papers. The problematic papers were among those found by
Byrne, J.A. and Labbé, C. (2017) Striking similarities between publications from China describing single gene knockdown experiments on human cancer cell lines, Scientometrics 110(3): 1471-1493
(which is fascinating reading in and of itself).

There was some citation and propagation, but in conclusion she found that usually, incorrect information dies in the periphery. But such papers mislead the scientific community about possibly rewarding research areas, so they could steer future research in the wrong direction and thus allocate resources to exhausted research areas.

I then switched to the

Session: Whistleblowers

The room was packed, I stood at the back with a number of people until the speaker switch happened and then squeezed myself into an empty space, climbing over many people.

"The 'Murky Waters' of questionable research practices" was presented by Johannes Hjellbrekke of the University of Bergen in Norway. He conducted a survey for the Research Integrity in Norway (RINO) project on Fabrication, Falsification, and Plagiarism (FFP) and Questionable Research Practices (QRP) that was distributed to 31 206 researchers at Norwegian universities.

They did find self-reported FFP and QRP and used a lot of statistics in the hopes of identifying a specific group that needs to have a whistle blown.  They defined three groups of researchers: The Ethical (82 %), The Generous (oops, we made a boo-boo, for example using gift authorship, not informing about limitations of a study, not whistleblowing on colleagues, 13 %) and The Murky (5 %). The Murky are the group at risk.

Overrepresented in the "Murky" waters were: Private research institutes, social science, postdocs, Researcher II [no idea what this is], Senior Researchers, 30-39 years age.

Underrepresented in the "Murky" group were: Humanities, PhD candidates, Associate Professors, 60-69 years of age.

He closed with an interesting statistical observation, based on the work of Abraham Wald (1943): Don't strengthen war airplanes that have been shot at where there are hits. Instead, strengthen where there are no hits, because we are looking at the ones that returned, not the ones that were shot down.

Fascinating was the next talk, by lawyer John R. Thomas, Jr., Healy Hafemann Magee, Roanoke, and his brother, biologist Joseph M. Thomas, "Perspective of the whistleblower." Joseph Thomas was the whistleblower in the case of US ex rel. Thomas v. Duke University, et al., in which the US Department of Justice sued Duke University and won $112.5 million in damages.

Joseph Thomas sued under a quirky US law called the False Claims Act or "Lincoln Law." If you defraud the government, they can sue you. It is traditionally used in contracting law, and now more often in Medicare and Medicaid cases. In this case it was for research misconduct that occurred while working on grants from the federal government. Qui tam provisions allow private whistleblowers to bring suit on behalf of the US Government. The suit is first sealed while the government investigates. If the government finds that the case has merit, they take over the suit. If they win, they recover triple damages and the whistleblower may receive up to 30 % of the recovered amount.
More on the case in Science and the press release of the Justice Department.

During the discussion someone from Duke asked if this wasn't a bit too harsh, as such a ruling could bankrupt smaller universities. John Thomas answered that that is the point of the provision: the courts know that a lot of fraud goes on, so if one gets caught, they are severely punished in order to make the others decide to clean up their acts.

The session closed with epidemiologist Gerben Ter Riet, Amsterdam University Medical Center & Univ. Applied Sciences, Amsterdam speaking about his "Reflections of a passionate and almost excommunicated scientist."

He was doing his normal duties and teaching a course on the responsible conduct of research (RCR) as well as mentoring students. He became active in the RCR area and published a few papers (1 - 2), but was told by a new boss that he was not bringing in enough grants, so he should terminate his RCR activity and focus more on science.

He even managed to obtain a grant for research integrity in 2017, but that was when things exploded in his lab.  He tried to get help inside the system, but was stonewalled at every turn. He noted when reading the document that came out 5 weeks ago in the Netherlands that, except for the sexual harassment, it rather fit his case.
Harassment in Dutch Academia, Exploring manifestations, facilitating factors, effects and solutions.
Two of his colleagues have left academia, he has a nominal position at the university hospital with some doctoral students and is now teaching at a local college.

Plenary session: Perspectives for funding agencies in shaping responsible research practices  

Anne-Marie Coriat, Head of Research Management, Wellcome Trust, London, spoke about "Towards a more positive culture for Trust in research – a systems perspective." She introduced the Wellcome Trust and identified that there is a systemic problem in science.  She closed with the statement that change will not happen if we act alone.

Kylie Emery, from the Australian Research Council, Canberra, had "Simplifying and strengthening responsible research practice – the Australian experience" as her title and showed the slides of the Australian research landscape, their framework, the focus on shared responsibility, and the role of funders for the THIRD time.

Qikun Xue, Vice President of Tsinghua University, Beijing, was supposed to speak, but he had to cancel at the last minute, so he sent a colleague who had the job of presenting a PowerPoint karaoke on "Research integrity practice at Tsinghua University: Policies and Practice".

Tsinghua University is a very large Chinese university with over 14 000 faculty and staff and over 36 000 students. There were the usual ideas for dealing with the topic: education and sanctioning. A few Chinese scandals from the university were mentioned:
  • 2005: Faculty member Liu from the Medical School fabricated research and was dismissed
  • 2009: Plagiarism in a postdoc's published papers was found and he was punished
  • 2014: A final project report by another faculty member Liu in the Dept. of Mechanical Engineering turned out to be identical to the proposal; no actual research was conducted, and he was sanctioned.
  • 2017: Papers published by a graduated PhD student were self-plagiarizing, re-using images and fabricating experimental results. The PhD was revoked.
During the discussion the question was asked, it being the 30th anniversary of the Tiananmen Square protests, what the current state of academic freedom at the university is. The speaker appeared a bit flustered, but assured us that people at university are allowed to have different opinions.

I was frozen cold on account of the air conditioning being set so low, so I got some hot tea before I came back to the last lecture of the day.

Mark McMillan, Deputy Vice-Chancellor for Indigenous people and Aboriginal law at RMIT University, Melbourne, and a Wiradjuri man from Trangie, NSW, spoke about deep time (120 000 years ago) and Aboriginal ways of knowing in "Ethical and integrity dilemmas for engaging with humanity's oldest living knowledge system." I have no earthly idea what he was on about.

We then gathered for a boat ride to the conference dinner. I was happy to spend the time chatting with Tracey Bretag. We froze on the bus, then got out at the water and melted in the heat and rain for about an hour. It was decided that we couldn't get on the boat there, as the water was too choppy, so they had to re-order the buses to take us to another pier, where we were able to get on. The weather cleared up and we had a wonderful trip out to the old city airport that is now a cruise liner terminal. It has a restaurant that does not mind serving 1000 people at once. We were only 700, so that sounded good.

It was a bit odd that the vegetarians were expected to sit together. Half of us did; the others stubbornly stayed seated with their meat-eating friends and significant others. We were looking forward to the eight-course meal that started off with white Chinese yams. Not to become my favorite, but okay, seven more to come.

Except that the next five dishes seemed to be more or less the same: mushrooms with something gooey, drowned in brown sauce and with the odd other vegetable stuck in. The "vegetarian shark fin soup" was so strange tasting, one vegetarian said it tasted like beef stock had been used as a basis, so the Indians rather went on strike. Then we each got a small bowl of rice with a few cucumbers sliced in, no sauce. For dessert there were two small squares of Jell-o. At least the wine was good.... As we left we saw piles and piles of noodles and rice on the other tables - that and a bit of sweet-and-sour sauce would have been wonderful!

We had a nice ride back, although Tracey and I rather got into an argument with an administrator from Canada who insisted that their university would punish students who reused more than four words in sequence without a reference. She wanted to know what software she should use to teach the students. I got ticked off about the text-matching software, as it is not useful for that purpose, and Tracey insisted that the policy is crocked (it is!) and should be changed. We still have a lot of educating to do!

More to come on the last day of the conference!

Tuesday, June 4, 2019

WCRI 2019 - Day 1b

Day 0 - Day 1a - Day 1b - Day 2 - Day 3

Quite refreshed from a long night's sleep and reluctant to venture out into the rain, here's the rest of Day 1 of the WCRI conference 2019!

Session: Principles and Codes 2

Daniel Barr, RMIT University, Melbourne
"Research integrity around the Pacific rim: developing the APEC guiding principles for research integrity"

They looked at integrity systems across the Asia-Pacific Economic Cooperation (APEC) area and collated a close consensus of guidelines:
  • Research integrity systems appear diverse, multi-faceted and dynamic
  • Principles-based policies appear common, but are not uniform
  • Limited coordination of institutions with some exceptions
  • Varied roles in leading or influencing research integrity systems
They did a survey with 499 responses, but 85 % of the respondents were from Mexico, so they had to split their analysis into Mexico and non-Mexico. They also conducted a workshop with various participants. In memory of Paul Taylor they have developed the Draft APEC Taylor Guiding Principles for Research Integrity, a top priority for avoiding breaches:
Honesty, Responsibility, Rigour, Transparency, Respect, Fairness, & Diversity
The topic of Diversity was a principle that came out of the workshop.

Natalie Evans was supposed to speak about "Research integrity and research ethics experiences: a comparative study of Croatia, the Netherlands, and Spain" but there was some planning mix-up so Jillian Barr took over and spoke about research integrity in Australia, which was a shame for me, because that is what she talked about at the pre-conference workshop.

Dr. Sonja Ochsenfeld-Repp has been Deputy Head of Division Quality and Programme Management, German Research Foundation, since 2016. She spoke about the new Draft Code of Conduct "Guidelines for safeguarding good scientific practice". The old white paper was first published in 1998 [as a reaction to a very large academic misconduct scandal, this was not mentioned]; a revision has been underway since 2017.

I asked how many researchers in Germany actually know about and understand these guidelines; she assured me that everyone does. My own experience shows that this is not the case: there are quite a number of university heads who are unaware of the procedures and guidelines set out, even though they are published on their own web pages.

I spoke with another researcher afterwards who conducted an actual study investigating how many people did, indeed, know about and understand the guidelines. The results appear to be sobering; I'll see if I can get hold of concrete results.

Sergey Konovalov, Russian Science Foundation, Moscow
"Research ethics in Russia: challenges for the Russian Science Foundation"

RSF has existed for 5 years now, but has less than 10% of the federal budget allocated for science. They audit the accounting of the grants: business class instead of coach, fancy furniture for the bosses' office. They don't touch the scientific part; they only check whether the expenses are related to the research.

I asked about Dissernet and whether that shows that they need to look beyond the economics to the science itself. He said that they have zero tolerance for plagiarism, but the researchers are themselves responsible for the scientific correctness of what they research. I'm afraid that he didn't understand my question.

Update 2019-06-18: Sergey writes: "Regrettably, I did not manage to answer your question about Dissernet and if that shows that we need to look beyond the economics to the science itself. Frankly speaking, Dissernet has nothing to do with the Russian Science Foundation activities as they check the the thesises and dissertations and we deal with the project proposals, which is somewhat different. 

Maybe, you missed the point that we do check not only financial part of the projects but also the scientific part (not by RSF staff but it is done by our expert council members), which is equally important to us.
We do not have much of plagiarism concerns but we strictly check the scientific acknowledgements (funding source should be properly indicated in the RSF-funded publications) and duplication of proposal contens submitted to RSF and other funding agencies [...]; these two scientific issues are in our view one of the most common examples of unethical behaviour of researchers in Russia. At least, in our experience (again, our programs cover only 10% of researchers and 15% of research organisations in Russia)."

Session: Predators

I am quite interested in the entire predatory publisher phenomenon, so I decided to attend this session, although there were at least two others in parallel with interesting talks. One was on Dissernet, the Russian plagiarism documenting group (but I know about them already) and the other one was a symposium on "Misdirected allegations of breaches of research integrity" with Ivan Oransky from RetractionWatch.

First up was Rick Anderson from the University of Utah, Salt Lake City with "Predators at the gates: citations to predatory journals in mainstream scientific literature". He identified some predatory journals that had published nonsense in sting operations and then took some papers from each of these journals. He then looked at citations to these papers in the Web of Science, ScienceDirect and PLOS. Yes, there were citations to some of these. I was a bit concerned that he didn't look at the papers themselves to see if they made sense, as misguided individuals will publish good science in bad places. 

Next was Donna Romyn, Athabasca University, St Albert, Canada (a virtual university) on "Confronting predatory publishers and conference organizers: a firsthand account".

She decided to attend a supposed predatory conference on purpose and to chart her journey. She submitted an abstract, "At risk of being lured by a predatory publisher? Not me!". The paper was accepted within 24 hours, so there must have been rigorous peer-review done.... There was a bit of back and forth about her giving a keynote; she ended up with the exact same abstract but a different title, "Safeguarding evidence-informed nursing practice from predatory publishers." She attended the conference and found about 60 people in attendance, many unaware of the nature of the conference. During the discussion the site came up; it has a good checklist on what to look at before submitting a paper.

Miriam Urlings from Maastricht University, Maastricht, looked at "Selective citation in biomedical sciences: an overview of six research fields". She did a citation network analysis for papers in six focused research fields, each with around 100 relevant potential citations, in order to see if there is citation bias. There were, however, only 1-2 citations for many of the publications and then a few highly cited ones in each area, so the results were not conclusive.

Eric Fong, University of Alabama, Huntsville (with Allen W. Wilhite) spoke on "The monetary returns of adding false investigators to grant proposals". He developed an interesting economic model for investigating whether adding false investigators (FIs) to grant proposals increases the monetary value of total grants over a 5-year period. They emailed 113,000 potential respondents and had a 9.5 % response rate. Their conclusion: if you add FIs to your grants, you apply for more grants, but that does not lead to larger funding per grant application. However, adding FIs significantly increases cumulative total grant dollars over a five-year period.

Vivienne C. Bachelet, Universidad de Santiago de Chile (USACH), Santiago, spoke about the interesting problem of academics putting institutional affiliations on their bylines without actually being employed at the institution. "Author misrepresentation of institutional affiliations: exploratory cross-sectional case study on secondary individual data".

They focused on researchers giving a Chilean university as an affiliation for the year 2016 and tried to verify whether the person was actually affiliated with the university. For around 40 % of the authors, it was not possible to verify their connection to the university. This private investigation, done with no funding, was commenced after it became known that one university in Chile was paying prolific, Spanish-speaking researchers to add an affiliation with their university, presumably to increase some metric the university is measured by.

After this I attended the COPE reception. There was a lot of very good discussion there, and some publishers I had mentioned in my talk were very interested to hear more about my cases.

A colleague (who wishes to remain unnamed) reported from a parallel session; here's her take on those presentations (edited to fit my format and fix typos):

Session: Prevention 1

Michael Kalichman, UC San Diego, San Diego
Research misconduct: disease or symptom?

He surveyed RIOs on their perceptions of cases, and got data suggesting that research misconduct occurs in cases deficient in Good Research Practices (i.e. maybe what these courses really need to teach is record keeping).

He listed ten GRPs (or the lack of them) and reported, from ~30 RIOs (out of 60 emailed), what practices were in place in cases they had personally investigated. The results were much as expected, but it was a very good talk.

Michael Reisig, Arizona State University, Phoenix
"The perceived prevalence, cause, and prevention of research misconduct: results from a survey of faculty at America’s top 100 universities."

He has a forensic background and corresponded with about 630 respondents about the prevalence, causes, and prevention of research misconduct. About 50% of the people surveyed were very much in favor of formal sanctions to prevent future misconduct, 29% said that nothing works, and around 30% wanted an integrated approach. QRPs are pretty common. But the slides went too fast for numbers.

Sudarat Luepongpattana, National Science and Technology Development Agency, Thailand, Bangkok
"Research quality development of the Thailand National Science and Technology Development Agency"

Yet another survey; it found that researchers don’t really know what authorship entails.

Ignacio Baleztena, European Commission, Brussels
"National practices in research integrity in EU member states & associated countries"

I left during this. My understanding was that representatives from 14 countries were going to have meetings, and then more meetings, and then follow a flow chart of meetings, and then produce a definition of research integrity. I was getting seriously jetlagged, but that’s the memory. I just don’t understand why we need YET ANOTHER document. Are there any rules of thumb for when these are useful? [This is an excellent observation. Everyone is producing their own documents (sometimes by gently plagiarizing other institutions' documents) on academic integrity. But how do we breathe life into them, change the culture? --dww]

[She missed the last talk and went to another session. She caught the tail end of a survey on Finnish attitudes toward QRPs; the presenter said that it was hard to find a definition of research integrity]

Session: Attitudes 3

De-Ming Chau, Universiti Putra Malaysia/Young Scientists Network-Academy of Sciences, Malaysia, Serdang
"Effectiveness of active learning-based responsible conduct of research workshop in improving knowledge, attitude and behaviour among Malaysian researchers"

He did a survey (small sample size) and found that researchers with more experience say they are more likely to “do nothing” if a colleague is engaging in research misconduct.

He’s pretty impressive; an NAS grant got him started designing RCR in Malaysia, and the programs are being designed by early career researchers.

Monday, June 3, 2019

WCRI 2019 - Day 1a

Day 0 - Day 1a - Day 1b - Day 2 - Day 3

It was the first full day of the WCRI conference 2019, and oh my, it was full!

It was pouring rain this morning, so I was glad I had the umbrella from the conference swag and my light poncho. I tried the shortcut someone told me about - just walk through the HKU subway station and take all the elevators and escalators. It worked! So now it's a nice walk up to the university.

Charlie Day (CEO, Office of Innovation and Science, Australia) opened the day with a talk on "Research, risk and trust: translating ideas for impact". The talk was dedicated to Paul Taylor, who worked with the thesis that good management of research integrity is an enabler of technological progress and not a blocker. 

He sketched the typical challenges that occur when translating research into economic success and noted that the most innovative people tend to be the ones who are the most difficult to work with. His suggested responses include
  • Having policies and processes for research and translation, that create trust
  • Researcher education – access to training is vital
  • Encouraging researcher mobility

Maura Hiney (Head of Post-Award and Evaluation at the Health Research Board, Ireland) spoke in an extremely fast manner about "Integrity challenges in evaluating the path to impact". She insisted that we need to move from what we currently measure as research success to somehow measuring how we are bringing better services to the citizenry, improved health and well-being, better GNP returns, more or smarter jobs, safer environment, better quality of life, food security, etc. 

She listed a number of plans and documents that would be useful and discussed some DORA (the San Francisco Declaration on Research Assessment) principles.

I found it interesting that she was encouraging citizen science, enabling people from all walks of life to participate in the scientific review process. In Ireland they have trained 150 randomly selected public reviewers to evaluate research.  

Tracey Bretag managed to get a photo of her comparison of Research Integrity vs. Responsible Research and Innovation:
Maura noted that we can't talk about translation of research into products without discussing Open Science. Regarding the controversial Plan S, she noted that in Ireland all publications resulting from public funding must be published in compliant Open Access journals or platforms; only OA publications will be considered in applications for funding; and there is a cap on the cost of OA publications (APCs). A bit of discussion ensued about Plan S :)

A reference was made to the Open Science Partnership Toolkit, which I was able to find online.

Maria Leptin (EMBO director, Genetics professor at University of Cologne, Germany) then spoke on "What innovation can tell us about responsible conduct of researchers." She quoted Nobel prize winner Andre Geim: "It is better to be wrong than to be boring" and noted that many of his experiments were not exactly ethical (see a description of how one of the experiments came to be).

She feels that the speed of discovery would increase if experimental failures were publishable.

In speaking with young researchers at EMBO she has found these major contributing factors to the discoveries they made:
  • Stable, longer term (5 years +) funding, flexibility
  • Ability to make ad hoc decisions and the freedom to follow new ideas
  • Intellectual environment with critical and stimulating colleagues, good scientific culture
  • High-level core facilities as infrastructure

Sounds good!

After coffee I attended the Plagiarism session, as I was speaking there myself.

Nannan Yi (University of Leuven, Southeast University, China)
spoke about "Perceptions of plagiarism by biomedical researchers: an online survey in Europe and China". She conducted a survey by email and had over 1000 responses from Europe and China. A comparison of the answers showed quite a difference in perception in some instances; for example, more Europeans reported being unsure if they were plagiarizing than Chinese researchers. More Chinese than Europeans had a perception of ghostwriting as plagiarism.

Lisa Winstanley (Nanyang Technological University, Singapore) looked at extending the traditional definition of plagiarism to include images and art. The problem is that in art there are techniques that might be considered copying: pastiche, homage, parody. But if they are done on purpose, then it is not academic misconduct. Thus, she insists that students submit a reflection on how they came to create their artwork, which can include statements such as "I decided to create a parody of ....".

There is not much research on visual plagiarism. Due to ambiguous boundaries and inadequate treatment in policy documents, the educational needs of the visual arts are not sufficiently addressed.

She listed some tools useful in finding image sources:
TinEye, iTrace, Google reverse image search, and the Blob filter, and showed examples of the various grey areas. I was surprised to learn that in marketing, companies will brazenly copy a competitor's ad, but hire their own actors and take their own pictures. There were some plagiarism pictures on boredpanda that I picked out from her slides.

She also noted that there is a site, Steal like an Artist, that very nicely explains what is permissible and what is not.

We had a good discussion on the difference between copyright and plagiarism, a lawyer in the audience noted that he would always go with copyright, as he will probably have more luck with that in court. 

Jerry Hoffman (Southern Institute of Technology, Invercargill, New Zealand) spoke on
"Managing plagiarism and academic fraud in higher degree programmes"
The New Zealand Qualifications Authority (2017) noted that tertiary education providers need to have good processes in place to ensure that cheating is detected and will not allow students to pass assessments where they have not met the required standard.

His university has started to offer a two-hour referencing workshop and dissertation workshops where issues of plagiarism and academic fraud are discussed in detail.
The topic is also addressed in all taught classes to varying degrees, and supervisors discuss plagiarism issues with students. They use SafeAssign to test for plagiarism, but that usually only finds copies of previous student work, not necessarily things copied from the internet. But using Google can be useful here.

An oral defence is held when there is some suspicion of plagiarism. If serious plagiarism or academic fraud is detected, the result will usually be a failing grade.

Then I spoke on "A breakdown in communication: journal reactions to information about plagiarism and duplicate publications".
The room was full (maybe 60 people) and we had a very good discussion session.

I got some lunch and tried to visit the poster session but the room was sooooo cramped and there were so many people that I pretty much just managed to visit 3 posters. One I had already photographed yesterday:

This poster from Indonesia was printed with batik on cloth dyed red. The text was put on with wax before the dyeing! Unfortunately, there had to be a correction, so it is printed on photo paper and stuck on with velcro :)

Matt Hodgkinson had a very nice poster discussing his frustrations as a research integrity person at Hindawi getting universities or national integrity bodies to answer his emails. I sympathize! I can't get Blogger to upload the picture tonight, his tweet links to a picture [that will only show up if I download it and then upload to Google]:

The third poster was about the Contributor Roles Taxonomy CRediT. It's a brilliant idea, but there was no way to get a picture without poking my elbow into someone's eye.

That's just half of Monday, I'm exhausted and heading to bed. More tomorrow!

Update to include another image.

Sunday, June 2, 2019

WCRI 2019 - Day 0

Day 0 - Day 1a - Day 1b - Day 2 - Day 3

I am currently attending the World Conference on Research Integrity (WCRI 2019) in Hong Kong, sponsored by a travel grant from the German Academic Exchange Service (DAAD). It is quite an international conference with over 700 attendees from all over the world, although of course given the location there are many Asia-Pacific countries represented. I will be blogging about the talks and workshops that I attend, which are only a small fraction of the talks held, as there are seven tracks in parallel. As usual, there is one paper in each track that I really want to hear, so I will have to flip three coins to see which session I attend.
The conference is being held at the University of Hong Kong, a large university of high-rises, terraces, and steep stairways nestled in between the skyscrapers of Western Hong Kong.

I attended two pre-conference workshops before the opening ceremonies.

1. How to investigate breaches of research integrity and research misconduct

The workshop was designed by Daniel Barr (RMIT Melbourne, Australia), Ton Hol (University of Utrecht) and Paul Taylor (†, formerly of RMIT) and there were around 50 participants. Three talks were held and there was some discussion.
In his introduction, Dan Barr proposed this definition of research integrity:
Research integrity is the coherent and consistent adherence to a set of principles that underpin the trustworthiness of research.
He noted that people are not consistent – they might behave well one month, and use questionable practices the next month. From this I take it that the focus should be on the research and thus the publication or non-publication itself, not on the person.

Ton Hol, head of the School of Law at the University of Utrecht, then spoke on handling allegations of research misconduct in the Netherlands. The Diederik Stapel case appears to have been quite a force in getting Dutch universities to focus on both preventing research misconduct as well as formalizing the investigative process.

He first looked into the reasons why an institution needs to deal with accusations of research misconduct. Above all, the public's trust in science should not be affected by faulty research. There are other reasons, of course: not only giving satisfaction to the accuser, but also protecting the reputation of the institution and of the accused researcher, and of course improving local research practices. He then explained the difference between an accusatorial and an inquisitorial approach: an institution can either respond to an allegation, or it can start investigations on its own.

He noted that complaints from anonymous accusers can be investigated if there are compelling public interests or the factual basis can be investigated without additional input from the complainant (for example in documented plagiarism cases).

Jillian Barr, Director of Ethics and Integrity at the National Health and Medical Research Council of the Australian Government, then gave the view on investigating breaches from the perspective of an Australian funding agency. In 2018, Australia published a Code for the Responsible Conduct of Research. There are many additional guides published, among them one on investigating potential breaches of the code.

One of the most important aspects in convening a panel for investigating potential breaches is deciding who should be on the panel, as there are potential consequences for those involved. Which members of the department or other departments should be included, should there be external members, should they have prior experience with dealing with such issues, how well do they need to know the code, do they have to understand the relevant discipline, are there conflicts of interest or gender / diversity issues to be addressed? And of course, who should chair such a panel, someone with legal experience? Many questions and no easy answers.

Karin Wallace, from the Secretariat for the Responsible Conduct of Research in Canada, was up next. The body she represents sees plagiarism as one of the largest problems, as well as misrepresentation of credentials. However, each case is unique, so it is not easy to set up guidelines for sanctioning.

Investigation reports do not have names on them, so that the focus for the investigation committee is on the facts of the case, not the institution or person involved. She suggests having a standing investigation committee that is familiar with research integrity, with subject matter expertise filled in on an ad hoc basis. She cautions that external members should be familiar with research integrity procedures and be in close proximity, in order to facilitate their participation.

Finally, Chris Graf, from COPE and the Director of Research Integrity and Publishing Ethics at Wiley, gave us the point of view of the publisher in dealing with breaches of integrity. Wiley, a large scientific publisher, has a number of people working full-time on this topic. 

He noted that research publishers create and maintain the formal "version of record". When an error is identified, corrections are restricted to an equally formal set of permanent options: corrections, expressions of concern, retractions, withdrawals. The latter is what he calls the "nuclear option", to be used when a paper must be deleted for legal reasons or other serious circumstances; however, the formal bibliographic information is left at the DOI.

He presented some interesting numbers on the various types of cases they handled in 2018 and 2019 to date:
58 Misconduct or Questionable behavior
45 Authorship
43 Data issues
34 Redundant or Duplicate publications
33 Plagiarism
30 Copyright
30 Correction of the literature
26 Legal issues
22 Consent for Publication
20 Questionable or Unethical Research
19 Mistakes
13 Conflicts of Interest
6 Peer review
5 Whistleblowers
5 Funding
4 Contributorship
3 Social Media
2 Editorial Independence

2. Workshop on "The Embassy of Good Science"

Guy Widdershoven opened the session by briefly explaining the project and the web site, which is not yet online. It seemed that about half of the room (also around 50 persons in attendance) was somehow involved in the project; I am not sure, but it seemed to be funded by the EU.

I found a short description on the web:
‘The Embassy of Good Science’ is a European initiative to make the normative framework informing research ethics and research integrity (RE+RI) easily accessible, share RE+RI knowledge and experiences, and foster active participation of the scientific community.
The development of The Embassy platform is underpinned by a stakeholder consultation, which consisted of a series of focus groups in three EU countries with diverse levels of research and innovation (The Netherlands, Spain and Croatia, n=59) and an online community discussion with participants from across Europe and beyond (n=52). Participants included researchers, editors, RE or RI committee members, policy-makers, and funders.
It is an excellent idea to collect the information about research ethics and research integrity, as well as the various guidelines that exist, into some sort of easy-to-use repository. I was intrigued by the idea that they want to provide a forum for discussion about cases and issues relating to research activity, but there was just a rudimentary implementation of tools to facilitate the discussion and no clear concept of how they will attract and retain interested persons for the discussions.

The project will be continuing until 2021, so I do hope that they acquire some deeper understanding of how to cultivate an active community.

Opening Ceremony

The opening ceremony in the Grand Hall, which looks like it easily seats 1000 people, included the usual words of greeting. Then two talks were given.

Guoqing Dai (Director-General of the Department of Supervision and Scientific Integrity, Ministry of Science and Technology (MOST) of the People's Republic of China, Beijing) spoke about his department, founded in 2018, and tasked with setting up concerted efforts for coping with new challenges for research integrity that have been identified in China. Multiple measures will be undertaken in China to promote research integrity:
  1. Constantly improve the institutional arrangements
  2. Trying to establish sound operational mechanisms
  3. Strictly investigating and punishing breach behaviors (497 researchers were punished last year)
  4. Strengthening the dissemination and education about research integrity (16 million students received instruction last year)
  5. Deepening the reform of science and technology evaluation.
Alan Finkel, Australia's Chief Scientist, from Canberra, used a funny metaphor to describe the current scientific publication process with four million articles published every year: a bridge. What used to be perhaps a simple structure to connect A with B (the scientist with the reader) is now a triple-decker, multi-lane freeway. It's not about to collapse, but it is showing many signs of stress. The publication bridge must be kept open, but there are currently too many trucks with wrong cargo or contraband (there are over 20,000 retractions in the Retraction Watch database), or with useless cargo that no one wants. Some drivers speed madly in order to make as many trips as possible. There are smugglers (predatory publishers) and researchers jumping off the bridge into the wild waters of open science.

He identified the biggest problem as the incentives. Systems adapt to follow the incentives, so the incentive system in academia must move away from counting publications and look to quality.  As an engineer he proposed a quality assurance initiative that includes mandatory research integrity training for all researchers (not just new ones) and some version of the Rule of Five (maximum of x publications in the last y years for values of x and y close to 5). There should be a "Publication Process Quality Assurance" seal that is positively awarded to journals, and grant-giving institutions should insist on publications only in these journals.
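He didn't spell out the rule formally, but as a programmer I can't resist a sketch of how such a Rule of Five evaluation might work (the function and the data here are my own invention, not Finkel's):

```python
# Hypothetical sketch of a "Rule of Five" evaluation: when assessing a
# researcher, consider at most x publications from the last y years,
# for values of x and y close to 5. Publication data is invented.

def rule_of_five(publications, current_year, x=5, y=5):
    """Return the subset of publications an evaluator would consider:
    only those from the last y years, capped at the x most recent."""
    recent = [p for p in publications if current_year - p["year"] < y]
    recent.sort(key=lambda p: p["year"], reverse=True)
    return recent[:x]

# A prolific researcher with one paper per year, 2010 through 2019:
pubs = [{"title": f"Paper {i}", "year": 2010 + i} for i in range(10)]
considered = rule_of_five(pubs, current_year=2019)
print(len(considered))  # only 5 of the 10 papers fall within the window
```

The point of such a cap is exactly the incentive shift he described: once only five publications count, writing a sixth mediocre one gains you nothing, so the rational move is to make the five as good as possible.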

After this well-received talk, a Chinese "Lion's Dance" was presented, and then we were treated to some appetizers and a drink. It was lovely to meet old acquaintances and new people active in the field.

Updated 2019-06-03 to include Karin Wallace's institution