Archive

Posts Tagged ‘SEMS’

Self- and Peer Assessment using Turnitin in SEMS: Cengiz Turkoglu

August 1, 2012 5 comments

Cengiz Turkoglu, a Senior Lecturer in the School of Engineering and Mathematical Sciences, principally teaches final-year undergraduate students and one of the MSc Aviation Management modules, with class sizes usually not exceeding 20 students. Each of his modules uses a similar assessment pattern comprising one piece of coursework plus an examination. For the coursework component, he uses the self-review and peer-review functions of Turnitin as part of the assessment.

The coursework deadline is set at least 6-8 weeks into the module to allow students sufficient time to conduct research and write their essays. Once the students have submitted their papers, Turnitin’s PeerMark assignment function allows each of them to be allocated another student’s paper, either through deliberate pairing or at random, which they are then required to peer-review. Given that the students and their papers always represent a range of standards, one dilemma Cengiz has faced is whether to allocate the papers randomly or to attempt to group students according to their standard. He never pairs students reciprocally, so no two students are asked to review one another’s papers.
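Turnitin’s PeerMark handles the allocation itself; purely as an illustration of the constraint just described (this is hypothetical code, not part of Turnitin), a random cyclic allocation guarantees that nobody reviews their own paper and, with three or more students, that no two students ever review one another’s work:

```python
import random

def allocate_reviews(students: list[str]) -> dict[str, str]:
    """Assign each student another student's paper to review so that
    nobody reviews their own paper and no pair of students review
    each other's papers (requires at least three students)."""
    if len(students) < 3:
        raise ValueError("need at least three students to avoid reciprocal pairs")
    order = students[:]      # copy, so the caller's list is untouched
    random.shuffle(order)
    # Cyclic shift: each student reviews the next student's paper.
    # No one maps to themselves, and for n >= 3 no pair is reciprocal.
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

print(allocate_reviews(["Asha", "Ben", "Chloe", "Dev"]))
# e.g. {'Ben': 'Chloe', 'Chloe': 'Asha', 'Asha': 'Dev', 'Dev': 'Ben'}
```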

The feedback provided by each student in the peer review is subsequently made available to the original author, and the students are made aware at the time of writing that their comments will be released in this way. At the same time, each author is asked to complete a self-assessment exercise that follows exactly the same format as the peer review. Because the process is conducted entirely online using Turnitin, it is completely paperless, which reduces the administrative workload and makes for a more sustainable structure.

For Cengiz, self- and peer review are valuable only if they feed into the assessment process. With that in mind, once the feedback has been exchanged between students, Cengiz gives them a week to make further revisions to their original submission should they wish to do so. He asks that they do not rewrite the paper substantively but confine themselves to minor amendments. Plagiarism of the peer-review feedback is not an issue, because all the material is traceable and can therefore be attributed. Only after the revised submission has been received does Cengiz mark the work summatively using GradeMark and provide his own feedback.

Detailed assessment criteria are provided, with the marking criteria broken down into six categories, each with its own weighting; one of these categories is self- and peer review, worth 10% of the mark. The students therefore know from the outset that it is an integral part of the assessment, and its summative nature encourages them to engage fully with the process, since Cengiz’s experience is that students can be very assessment-driven. The questions asked in the self- and peer reviews correspond to the other assessment categories, so students judge each other’s papers, and their own, in exactly the same way as the examiner does.
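As a rough illustration of how a rubric like this combines into the coursework mark, the final mark is simply a weighted sum of the category scores. Only the 10% weighting for self- and peer review is stated above; the other category names and weightings in this sketch are hypothetical:

```python
# Hypothetical six-category weighted rubric. Only the 10% self- and
# peer-review weighting comes from the post; the other category names
# and weightings are invented for the example.
weights = {
    "understanding_of_question": 0.20,
    "research": 0.20,
    "coherence_of_argument": 0.20,
    "structure": 0.15,
    "referencing": 0.15,
    "self_and_peer_review": 0.10,   # the weighting stated in the post
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weightings must total 100%

scores = {k: 65.0 for k in weights}  # e.g. a uniform 65% across categories
coursework_mark = sum(weights[k] * scores[k] for k in weights)
print(f"Coursework mark: {coursework_mark:.1f}%")  # 65.0%
```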

Cengiz has found this to be a very valuable exercise. It sets the students thinking about how to frame feedback, offering helpful advice to the author rather than simply giving praise or criticism. It also encourages them to consider issues such as whether the author understood the question and maintained focus, how well they researched the subject, and how coherent their arguments were, whether based on their own reasoning or on factual information identified during their research. (The criteria matrix used by Cengiz is shown below; it is also entered as the rubric in Turnitin.) While students vary in how fully they engage with the process, Cengiz notes that the best self- and peer reviews recognize areas where the submission can be improved.

Turnitin screenshot - criteria matrix

Cengiz argues that the value of this assessment model is that it simulates real-life scenarios. In safety-critical industries such as aviation, for example, maintenance engineers are expected to inspect each other’s work on a regular basis, and peer review is widely used, particularly by design engineers. All engineers, moreover, should be expected to reflect upon, and strive to improve, their own performance in order to keep developing professionally. Since they will not always receive the most helpful advice from their peers, nurturing skills such as evaluating the feedback they receive and exercising their own judgement when taking decisions prepares engineering students effectively for the profession.

Cengiz has equalized the weightings of the coursework and the examination (originally 30% and 70% respectively), citing the introduction of the self-assessment and peer-review requirements as justification for giving greater weight to the coursework component. He strongly believes that examination is not the only suitable assessment method for his modules: the topics he teaches require understanding and the ability to apply that knowledge to real-life scenarios, rather than merely memorising content from textbooks or course notes. After studying on the Postgraduate Certificate in Academic Practice programme delivered by the Learning Development Centre at City University London, Cengiz has become an advocate of self-directed and reflective learning, and he encourages his students to become more critically self-reflective so that they can learn from their own experiences.

If you would like to know more about this assessment model, Cengiz is happy to be contacted by e-mail: cengiz.turkoglu.1@city.ac.uk.

Christopher Wiley and Cengiz Turkoglu

A Case Study of Interim Assessment in SEMS: Mary Aylmer

Mary Aylmer is a visiting lecturer in the School of Engineering and Mathematical Sciences (SEMS), teaching the CAD part of the module CV1407 IT skills, Communication, and CAD. She has developed an assessment pattern in which students complete five assessed pieces of CAD work, each involving the completion of engineering drawings: two interim submissions each weighted at 2% of the final module mark, two larger submissions weighted at 16% and 40%, and an end-of-module test weighted at the remaining 40%, so the five weightings total 100%.

The 2% weighting for the interim submissions is intended to ensure that the students’ early work on the module counts towards the final module mark, which helps to focus them on the task. The exercises are carefully graded in difficulty and enjoyable to complete; students tend to take ownership of their own learning because the assessments are designed so that they can determine exactly what is required of them, and they can therefore aspire to high marks.

The obvious advantage of this assessment pattern is that it ensures the students actually complete their initial work on the module. This means they are well prepared for the larger submissions: having already accrued plenty of CAD experience in the first few weeks through the interim submissions, they are in a strong position to tackle the more difficult drawings. In other words, it ensures that they do the groundwork first.

The downside of this system for the tutor is that it generates a substantial amount of marking. Mary has also noted a tendency among students to query their marks, even for the 2% submissions, which are unlikely to have a significant impact on their overall degree average. Justifying deducted marks can become very time-consuming, particularly with 120 students each submitting five pieces of work.

Nonetheless, the outcomes speak for themselves. By the end of the module, the students can produce good CAD drawings fairly easily; and they have indicated through their feedback that they enjoy the course, which is very encouraging. While an assessment model such as this may be time-consuming for the tutor, it is evidently worth the investment if it results in robust learning and student satisfaction.

Christopher Wiley and Mary Aylmer

Get Ready to SCOF!

October 19, 2011 5 comments

The SCOFS (Standardised, Customisable Online Feedback System) project was a Learning Development Project to develop a tool that would address the need to give students increasing amounts of feedback in ever shorter timescales.

The SCOFS tool is based around the creation of feedback schemes (the ‘Standardised’ part of the name) which can later be used to generate feedback quickly, tailored to an individual student (the ‘Customisable’ part), before being saved as a file to be sent to the student in some way. If all this seems a little vague, that is intentional: one of the principal decisions the project team took was that the tool should not be tied to a particular assessment type or online tool, such as the Moodle VLE. The main reason for this is that the tool is intended for use in the School of Engineering & Mathematical Sciences, where electronic submission of work is sometimes impossible due to the nature of the work.

The idea was to develop a tool that is quick to use but still allows detailed feedback to be provided to students. This is made possible by pre-creating the feedback schemes, based on the idea of rubrics, and by drawing on the range of resources available in electronic documents, such as links and images. The output of the tool is a PDF file that can be provided to the student in a number of ways: through the Moodle VLE, by email, as a printed copy, and so on. SCOFS also contains features that allow the feedback to be tied to grades for each element in the scheme, if desired.
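To make the ‘standardised scheme, customised output’ idea concrete, here is a minimal Python sketch. The real tool’s data model and internals are not described in this post, so the classes below, and the choice of the open-source reportlab library for the PDF output, are illustrative assumptions rather than SCOFS’s actual code:

```python
# A minimal sketch of the SCOFS idea: a reusable feedback scheme whose
# elements are customised per student and rendered to a PDF.
# NOTE: SchemeElement, FeedbackScheme and the use of reportlab are
# hypothetical choices for illustration, not the project's real design.

from dataclasses import dataclass, field
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

@dataclass
class SchemeElement:
    """One rubric element: a criterion with reusable standard comments."""
    criterion: str
    standard_comments: list[str]        # the 'Standardised' part
    comment: str = ""                   # per-student text (the 'Customisable' part)
    grade: float | None = None          # optional grade tied to this element

@dataclass
class FeedbackScheme:
    title: str
    elements: list[SchemeElement] = field(default_factory=list)

    def to_pdf(self, path: str) -> None:
        """Render the customised feedback as a PDF file for the student."""
        pdf = canvas.Canvas(path, pagesize=A4)
        _, height = A4
        y = height - 50
        pdf.drawString(50, y, self.title)
        for el in self.elements:
            y -= 30
            grade = f" [{el.grade}]" if el.grade is not None else ""
            pdf.drawString(50, y, f"{el.criterion}{grade}: {el.comment}")
        pdf.save()

# Example: start from a standard comment, then tailor it for one student.
scheme = FeedbackScheme("CV1407 drawing 1 feedback")
ref = SchemeElement("Referencing", ["Cite sources using the IEEE style."])
ref.comment = ref.standard_comments[0] + " See the library guide linked in Moodle."
ref.grade = 7.5
scheme.elements.append(ref)
scheme.to_pdf("feedback_student_001.pdf")
```

The point of the split is reuse: the same scheme can serve a whole cohort, with only the per-element comments and grades changing for each student, which is where the time saving comes from.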

SCOFS in Use

Screenshot: a simple feedback scheme with link, image and grades.

Screenshot: the feedback sheet given to the student, an example of the output from SCOFS.

Possible Uses

  • Producing feedback during presentations for the student to walk away with.
  • Providing links to useful remedial resources to students who are below average in some areas of their work.
  • Rapidly creating overview feedback to supplement detailed feedback on the original submission.
  • Student peer review using directed comments and appropriate language.
  • Self-assessment of activities for critical reflection.
  • Encouraging the use of positive feedback where work is of a high standard, as well as highlighting areas of weakness.

Continuation Work

SCOFS is currently being evaluated by lecturers across City University London, including a large-scale pilot using tablets such as iPads and Android devices in the Schools of Arts & Social Sciences. The tool is easily packaged and requires minimal technical knowledge to get started, so anyone interested in trying SCOFS is encouraged to get in touch, whether at City University London or beyond.

Once the tool has been properly evaluated it is expected that it will be made available as Open Source software for anyone to download, use and modify.
