Wednesday, June 9, 2021

ECAIP 2021 - Day 1

It's conference time again! The European Conference for Academic Integrity and Plagiarism 2021 (#ECAIP2021 on Twitter, organized by the European Network for Academic Integrity) has started. I will try to make a few notes on the talks for those who are unable to attend. My plan was to do the blogging on my new iPad and use my Mac with a second screen for attending the conference, but Google won't let me into the blog if I don't give it telephone numbers and shoe sizes, so I'll have to be doing some juggling here.

Day 0 - Day 1 - Day 2 - Day 3

The conference was opened by the chairs Sonja Bjelobaba (University of Uppsala) and Tomáš Foltýnek (Mendel University in Brno), who gave some background on the conference and an outlook on the 2022 conference, which will hopefully be held in Porto, Portugal, and not just online! 

The abstracts from the proceedings are available on the ENAI web site.

Today's keynote is from Guy Curtis (University of Western Australia): "Evolving an understanding of academic integrity".

He starts with personal stories, which we don't often hear in research, about how he began his academic integrity journey. And he remembers Tracey Bretag, who passed away last year. We really miss her! He read John Croucher's book "Exam Scams" in 1997 after hearing about it on the radio, and that got him started on the topic [I just found it at a used book store online and ordered it]. As a young teacher he encountered more and more academic integrity issues. 

His early research was into cultural differences in the understanding, incidence, and perceived seriousness of the 7 types of plagiarism defined by Walker (1998). They published their results in Maxwell, A., Curtis, G., & Vardanega, L. (2008). Does culture influence understanding and perceived seriousness of plagiarism? International Journal for Educational Integrity, 4(2). DOI: https://doi.org/10.21913/IJEI.v4i2.412. A later study did a five-year follow-up: Curtis, G. & Popal, R. (2011). An examination of factors related to plagiarism and a five-year follow-up of plagiarism at an Australian university. International Journal for Educational Integrity, 7(1). DOI: https://doi.org/10.21913/IJEI.v7i1.742

He realized that students needed to be explicitly taught how to reference. 

He was able to revisit the questions 10 years later and published the results in Curtis, G. & Vardanega, L. (2016). Is plagiarism changing over time? A 10-year-lag study with three points of measurement. Higher Education Research & Development, 35(6), pp. 1167-1179. https://researchrepository.murdoch.edu.au/id/eprint/35573/

Then some serendipity happened: he met new people and encountered old ideas, for example Joe Clare, a criminologist who pointed out that a small number of people commit most of the crimes. That is, if they do it once, they tend to do it again. And it seems that at the time only about 3 % of students used contract cheating, but about 60 % of those who did would do it again. 

He has plotted Stiles, McCabe, and his own data: the amount of plagiarism appears to be decreasing! They started looking at why people don't cheat. I'm not sure I understand exactly what the "dark triad" (Machiavellianism, psychopathy, and narcissism) is or how exactly it relates to predicting that students won't cheat. I am also concerned about "predictive" statistics, as these have an annoying tendency to be misused. 

After meeting Tracey Bretag, he worked with TEQSA, Australia's Tertiary Education Quality and Standards Agency, which offers various tools and resources for dealing with academic integrity policy and issues.

He currently has a number of papers submitted or in preparation about plagiarism and contract cheating. 

The next talk I attended was Phillip Dawson (Deakin University, Melbourne, Australia) with "Remote proctored exams: minimizing the harms and maximizing the benefits". Since I did some tests myself in 2020, I was really interested in hearing his views on this. He has written a book on defending assessment in a digital world. Very cute: he projects his head into the slides, apparently by using the slides as his background! Must try that out. A remote proctored exam is a timed exam on a student-provided computer at a location of the student's choosing that is monitored or recorded by a person and/or a computer. They generally use some sort of lockdown software and biometrics, attempt to verify identity, and are generally run by third parties. There are so many companies offering this service that it is expected to be a $10 billion market by 2026.

There is a good bit of litigation going on as well and the literature is quite polarized. 

He has 10 suggestions for using remote proctored exams:

  1. Use them only as a last resort
  2. Exam designs must above all be sound assessments of learning
  3. Only the minimal restrictions required are used
  4. Students are offered an alternative
  5. Equity, diversity, adversity, and accessibility are catered for  
  6. Providers pilot RPEs adequately before using them in assessment
  7. A whole-of-institution approach is taken
  8. Regulatory requirements and standards around privacy and data security are met
  9. Effective governance, monitoring, QA, evaluation and complaints procedures are in place
  10. Staff and student capacity building and support are available and ongoing.

More detailed information is at http://tinyurl.com/teqsa-exams.

Following his talk, Ann Rogerson (University of Wollongong, Australia) spoke on "Shifts in student behaviours during COVID-19: Impacts of social interactions." She notes that there has been a shift towards collusion. Australia had an early and very hard lockdown because of COVID-19. They are currently returning to campus. 

They had a reduction in the number of cases reported to the academic integrity office, but plagiarism is still the most frequently reported offence, followed by collusion. Their definition of collusion also includes uploading class notes to an online server without faculty permission.

Snapchat became popular for sharing answers during exams, as did Instagram and gaming platforms. Once gathering restrictions were eased, students colluded in small groups, sharing work on questions and trading answers. They were more exposed to cheating sites and less aware of the rules. In a way, they are re-creating the networks that normally grow during in-person studies.

The policies were not designed around online exams, so they had to be changed. It needs to be made clear to students what "open book" means online. The integrity statement had to be adapted to include commentary on the use of social media sites, gaming platforms, and support or sharing sites. 

Her main message: Shifts in student behavior mean that we have to change, too!

The next session I attended was Nicolaus Wilder, Doris Weßels, Johanna Gröpler, Andrea Klein & Margret Mundorf (from different universities) speaking on "Who is Responsible for Integrity in the Age of Artificial Intelligence? An analysis using the example of academic writing". Their project is available at path2integrity.eu. Natural language generation introduces new problems in academic integrity. 

There are many free and paid artificial intelligence tools that create, summarise and rewrite texts. Springer has even published fully machine-generated books. [Why would I pay for something like that?] They formulate their "AIAI challenge": Can someone take responsibility for something they do not understand, but whose non-use would be irresponsible, as it can increase quality and efficiency? 

They define responsibility roles: the Creator, the Tool Expert, the User-Producer, the User-Consumer, and the Affected Person. The problem is: who is responsible? The RASCI responsibility assignment matrix is proposed: who is Responsible, Accountable, Support, Consulted, and Informed?

I have an issue with the use of the term "artificial intelligence", especially in connection with questions of responsibility. Any use of algorithms, including those which are NOT artificial intelligence, carries responsibility issues with it. And there is not just one "creator": there is the system master who is paying for the system, the company (and the chain of bosses) that is producing it, as well as the individual programmers who actually implement the system.

We broke out into rooms to discuss the responsibility roles, but there was not enough time to get very deep into the discussions.

Over lunch I attended the sponsor presentations of Studiosity and Turnitin. I realize that 24/7 tutoring is something that is good for students, but I'm a bit concerned that they won't learn the academic habitus (how we interact, how we discuss, how we obtain information, how we communicate) if they focus more on learning as "getting it right". We want them to learn to think critically. 

Turnitin again spoke of "originality". It is one of my goals in life to get companies like Turnitin to stop talking about originality or plagiarism and to focus instead on text similarity or text matching. The speaker insisted that Turnitin checks for translation similarity, but in our recent test they (and all of the other systems) failed miserably at this task. At least they say they now find homoglyphs and colored letters. They now have a stylometric analysis; it works, however, by comparing with all the previous work by the same student, which means that all of a student's work needs to be stored at Turnitin. 
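
Since homoglyphs keep coming up in these discussions, here is a minimal sketch of the basic detection idea: map look-alike Unicode characters back to their Latin counterparts before doing any text matching. The CONFUSABLES table and the normalize_homoglyphs function below are my own illustration of the principle (real tools draw on the much larger Unicode confusables data), not Turnitin's actual implementation.

```python
# Illustrative sketch only: normalize common homoglyphs before text matching.
# The mapping below is a tiny, hand-picked sample; it is NOT how Turnitin
# (or any specific product) implements its detection.

CONFUSABLES = {
    "\u0430": "a",  # Cyrillic small a looks like Latin "a"
    "\u0435": "e",  # Cyrillic small ie looks like Latin "e"
    "\u043e": "o",  # Cyrillic small o looks like Latin "o"
    "\u0440": "p",  # Cyrillic small er looks like Latin "p"
    "\u03bf": "o",  # Greek small omicron looks like Latin "o"
}

def normalize_homoglyphs(text: str) -> str:
    """Replace known look-alike characters with their Latin counterparts."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in text)

if __name__ == "__main__":
    disguised = "pl\u0430gi\u0430rism"  # renders like "plagiarism" on screen
    print(disguised == "plagiarism")                        # False: code points differ
    print(normalize_homoglyphs(disguised) == "plagiarism")  # True after normalization
```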

I then listened to the panel "Student involvement in building culture of academic integrity". It was great to see so many students active in the area of academic integrity!

For the next session I chose Irene Glendinning's workshop on "Comparison of institutional strategies for academic integrity in Europe and Eurasia." She has data that she collected from 27 EU countries (IPPHEAE), from SEEPPAI (6 South East European countries: former Yugoslavia and Albania), and from PAICKT (Caucasus, Kazakhstan, Turkey; as yet unpublished).

In this workshop she wants to talk about sanctions, penalties, and outcomes. Outcomes should be fair, proportionate, consistent, transparent, and accountable. She sees outcomes as deterrents, as a means of identifying missing skills & knowledge, as correcting inappropriate conduct, upholding standards and fairness, ensuring students are only rewarded for genuine learning & achievement, and as punishment & justice. The risks include students repeating the same mistakes, litigation, reputational damage, devaluation of qualifications, and professional or graduate incompetence. 

In the UK there was the AMBeR project (Academic Misconduct Benchmarking Research), which found huge inconsistencies. They created the Plagiarism Reference Tariff as a tool for deciding sanctions. Currently the QAA is running a project with Shift Insight; 32 UK HE institutions are being interviewed, and the report is as yet unpublished.

She listed the spectrum of sanctions, and a long list of other factors that are taken into account. The thorny questions are: What is the process? Who decides? Is it formally recorded? How is it kept consistent? How are conflicts of interest avoided? Are there appeals? There were variations in 30 of the 38 countries surveyed!

I'm up next, so no notes on my talk ;) I spoke on "Talking to a Wall: The Response of (German) Universities to Documentations of Plagiarism in Doctoral Theses".

In the evening we had a pub quiz while we ate and then a talk by Phil Newton on "Pragmatic Evidence-Based Education" (Newton et al 2020). I tried to pay close attention, as he was giving good evidence for things I often do when teaching. You need to have useful evidence, a local context, and good judgement in applying the evidence to the context. 

What doesn't work? Most of the teaching we do: matching instruction to "learning styles" (visual, auditory, kinaesthetic, ...); "cones of learning" (the "we tend to remember..." percentages; if anything appears as a triangle, it is oversimplified); Bloom's Taxonomy; ...

What might work?

Phil had us do two "Lightning Learning Activities". First we had to recall 10 city names, but we couldn't remember them all even for 5 seconds! There is a difference between short-term memory, working memory, and long-term memory; 7 +/- 2 items is about what we can remember. When we learn something new, we match it to something we already know. Retrieval practice: prompting retrieval helps learning. Taking tests or quizzes (like Mentimeter!) makes learning more effective. So does having students write their own tests or quizzes, make flashcards, write down everything they know, and elaborate rather than just recall facts; the more the better. Practice tests are better than restudying, which unfortunately is what students actually do: they re-read their notes, but that doesn't work as well as taking practice tests. Testing works in authentic settings, works better than other techniques, and enhances transfer of the learning into different contexts. We also had to remember 10 Welsh place names, but that was much harder: it was novel information that we had never heard before, so we only had working memory to go on, which holds about 4-7 items.

Another theory he touched on is Cognitive Load Theory: working memory is essential for learning, but it has only a limited capacity. So don't overload it! Don't waste capacity!

Teaching tips: use educational scaffolding, keep relevant information together in space and time, provide worked examples, focus on immediate goals, and avoid distracting or unnecessary content.

Spaced practice - instead of one long block, break things up into little blocks of 45 minutes with other stuff in between. We don't know exactly how much spacing is needed or how many subjects to interleave. Cramming is really, really bad. We can tell our students that Prof. Phil says so!

Dual coding - information is coded both verbally (as words) and as pictures. Learning information in both ways is better: draw pictures of words! Write text descriptions of pictures! Pictures are better than text, but using both is better than just one.

Basic communication skills are necessary: learn how to tell a story and to project your voice. Peer teaching works well, as do concrete examples of abstract ideas. Feedback is often misunderstood: good feedback at the right time, delivered with an encouraging tone, is very effective. 

Sleep is very important! Once you are overloaded, you don't get back. 

So with that, I will get some sleep before tomorrow!

1 comment:

  1. Thanks for blogging about the conference. You did a good job of capturing a lot of the content of my keynote. If you'd like to know more about the study on why students do not engage in contract cheating, the paper is here: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02229/full. Thanks again for the interesting blog :)

