It's out! Our pre-print about testing support tools for plagiarism detection, often mistakenly called plagiarism-detection tools. The European Network for Academic Integrity working group
TeSToP worked in 2018 and 2019 to test 15 software systems in eight different languages. Of course, everything has changed since then, the software companies tell us, but no matter: here's the pre-print, which we have submitted to a journal.
arXiv:2002.04279 [cs.DL]
Testing of Support Tools for Plagiarism Detection
There is a general belief that software must be able to easily do things that
humans find difficult. Since finding sources for plagiarism in a text is not an
easy task, there is a widespread expectation that it must be simple for
software to determine if a text is plagiarized or not. Software cannot
determine plagiarism, but it can work as a support tool for identifying some
text similarity that may constitute plagiarism. But how well do the various
systems work? This paper reports on a collaborative test of 15 web-based
text-matching systems that can be used when plagiarism is suspected. It was
conducted by researchers from seven countries using test material in eight
different languages, evaluating the effectiveness of the systems on
single-source and multi-source documents. A usability examination was also
performed. The sobering results show that although some systems can indeed help
identify some plagiarized content, they clearly do not find all plagiarism and
at times also identify non-plagiarized material as problematic.