Month: April 2010

Tags: analysis, staff support, threshold, Turnitin

Turnitin and staff support

Royal Holloway, University of London has been using Turnitin for a few years, with every faculty and department now making widespread use of it. Our users are becoming increasingly adept at interpreting and analysing the originality reports and are now making quite sophisticated demands of both the software and those who support it.

I have two broad questions to which I hope some of you can relate and respond:

1. What strategies do you have for dealing with large student groups, say more than 300, where the task of viewing every report is simply overwhelming and unworkable, given the need to provide timely and meaningful feedback to our students? Any equity achieved through the use of Turnitin is negated if the reports are not referred to, or are only partially checked, by an academic.

My ideas to deal with this include:

Advising that tutors inspect all the blue (0% similarity), yellow, orange and red (25-100%) report bands, while taking a sample of the green band reports, which account for 80% of our student reports. I maintain that it would be inappropriate to suggest a specific percentage below which the report should be ignored, or above which secondary action should be taken, due to the diversity of assignments and students. There has to be some sort of 'informed threshold', however, albeit one currently dictated by Turnitin's own banding system.
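
As a rough illustration of this triage strategy, the sketch below buckets reports into Turnitin's colour bands and returns everything outside the green band plus a random sample of the greens. The report data, the `triage` function and the 20% sample rate are all assumptions for illustration; real data would have to come from an inbox export.

```python
import random

def band(similarity):
    """Map a similarity percentage to Turnitin's colour band."""
    if similarity == 0:
        return "blue"
    if similarity <= 24:
        return "green"
    if similarity <= 49:
        return "yellow"
    if similarity <= 74:
        return "orange"
    return "red"

def triage(reports, green_sample_rate=0.2, seed=0):
    """Return every non-green report plus a random sample of the greens.

    `reports` is a hypothetical list of (student_id, similarity_percent)
    tuples, not a real Turnitin data structure.
    """
    greens, others = [], []
    for report in reports:
        (greens if band(report[1]) == "green" else others).append(report)
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    k = max(1, round(len(greens) * green_sample_rate)) if greens else 0
    return others + rng.sample(greens, k)
```

A tutor would then read only the returned subset rather than the full inbox, with the sample rate adjusted to whatever feels defensible for the assignment in question.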

I have suggested to Turnitin, through the channels provided by Northumbria Learning, that in the Inbox view the Similarity Index could be augmented with the number of resources with which a student submission has similarity. For example, 20% similarity (20 resources) versus 18% (2 resources): two similarly ranked reports with very different attributes, you will agree. This would make it quicker to order the inbox and scan the headline data without having to look at every report.
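
To show why the extra figure would help, the sketch below ranks a hypothetical inbox by a per-source "concentration" score (similarity divided by resource count). The score, the `source_count` field and the sample rows are all my own assumptions, not anything Turnitin exposes; the point is simply that 18% matched against 2 resources floats above 20% spread across 20.

```python
# Hypothetical inbox rows: (student_id, similarity_percent, source_count).
inbox = [
    ("s1", 20, 20),  # 20% spread thinly across 20 resources
    ("s2", 18, 2),   # 18% concentrated in just 2 resources
    ("s3", 45, 9),
]

def concentration(row):
    """Average similarity per matched resource: a rough, assumed proxy
    for how concentrated (and so how noteworthy) the matching text is."""
    _, similarity, sources = row
    return similarity / max(sources, 1)

# Highest concentration first.
ranked = sorted(inbox, key=concentration, reverse=True)
```

Whether concentration is the right heuristic is debatable, but any ordering that uses the resource count at all would be an improvement on the bare percentage.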

2. How would you approach the analysis of a group of students, say Y1 students as they progress through to Y3, to see the effects of Turnitin?

This is tricky because groups change, we use anonymity, and there are so many factors which dictate the average similarity index for a department, class or assignment: the diversity of the assignments, the students' developing and improving research skills, and Turnitin's growing database. Has anyone used Turnitin to examine this?
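
If the data could be extracted and anonymised, the simplest starting point would be a mean similarity index per cohort per year of study, along these lines. The record layout and cohort label are hypothetical, and, as noted above, none of the confounding factors (assignment mix, improving research skills, the growing database) are controlled for here.

```python
from statistics import mean

# Hypothetical, anonymised records: (cohort, year_of_study, similarity_percent).
records = [
    ("2008-entry", 1, 22), ("2008-entry", 1, 30),
    ("2008-entry", 2, 18), ("2008-entry", 2, 20),
    ("2008-entry", 3, 12),
]

def yearly_means(records, cohort):
    """Mean similarity index for one cohort, keyed by year of study."""
    by_year = {}
    for c, year, similarity in records:
        if c == cohort:
            by_year.setdefault(year, []).append(similarity)
    return {year: mean(values) for year, values in sorted(by_year.items())}
```

A downward trend in the yearly means would be suggestive, but given the confounds it could never on its own demonstrate an effect of Turnitin.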

I would welcome any comments, ideas and experiences you may wish to share.