Archive

Posts Tagged ‘sustainable structures’

Self- and Peer Assessment using Turnitin in SEMS: Cengiz Turkoglu

August 1, 2012

Cengiz Turkoglu, a Senior Lecturer in the School of Engineering and Mathematical Sciences, principally teaches final-year undergraduate students and one of the MSc Aviation Management modules, with class sizes usually not exceeding 20 students. Each of his modules uses a similar assessment pattern comprising one piece of coursework plus an examination. For the coursework component, he uses the self-review and peer-review functions of Turnitin as part of the assessment.

The coursework has an initial deadline at least six to eight weeks into the module, allowing students sufficient time to conduct research and write their essays. Once the students have submitted their papers, Turnitin’s PeerMark assignment function allows each of them to be allocated another student’s paper, either by deliberate pairing or at random, which they are then required to peer-review. Given that there is always a range of standards represented by the students and their papers, one dilemma Cengiz has faced is whether to allocate papers randomly or to attempt to group students according to their standard. He never pairs students reciprocally, such that two students would review one another’s papers.

The feedback provided by each student in peer review is subsequently made available to the original author – and the students are made aware at the time of writing that their comments will be released in this manner. At the same time, each author is asked to complete a self-assessment exercise that follows exactly the same format as the peer review. As the process is conducted entirely online using Turnitin, it is completely paperless, which reduces the administrative workload and makes for a more sustainable structure.

For Cengiz, self- and peer review are only valuable if they lead somewhere in terms of the assessment process. With that in mind, once the feedback has been exchanged between students, Cengiz gives them a week to undertake further revisions to their original submission should they wish to do so. He asks that they do not rewrite their paper substantively, but confine themselves to minor amendments. Plagiarism of the peer-review feedback is not an issue because all the material is traceable and hence can be attributed. Only after the revised submission has been received does Cengiz mark the work summatively using GradeMark and provide his own feedback.

Detailed assessment criteria are provided, with the marking broken down into six categories, each with its own weighting; one of these categories is self- and peer review, worth 10% of the mark. The students are therefore aware from the outset that it is an integral part of the assessment, and its summative nature encourages them to engage fully with the process, since Cengiz’s experience is that students can be very assessment-driven. The questions they are asked in the self- and peer reviews correspond to the other assessment categories, so they judge each other’s papers, and their own, in exactly the same way as the examiner.

Cengiz has found this to be a very valuable exercise. It sets the students thinking about how to frame feedback, offering helpful advice to the author rather than simply giving praise or criticism. It also encourages them to consider issues such as whether the author understood the question and maintained focus, how well they researched the subject, and how coherent the arguments they presented were, whether based on their own reasoning or on factual information identified during their research. (The criteria matrix used by Cengiz is shown below; this is also entered as the rubric in Turnitin.) While students vary in their engagement with the process, Cengiz notes that the best self- and peer reviews recognize areas where the submission can be improved.

Turnitin screenshot - criteria matrix

Cengiz argues that the value of this assessment model is that it simulates real-life scenarios. In safety-critical industries such as aviation, for example, maintenance engineers are expected to inspect each other’s work on a regular basis, and the peer-review process is widely used, particularly by design engineers. In addition, all engineers should be expected to reflect upon, and strive to improve, their own performance in order to develop themselves professionally on a continuing basis. Engineers may not always receive the most helpful advice from their peers, so nurturing skills such as evaluating the feedback they receive and exercising their own judgement when taking decisions prepares engineering students effectively for the profession.

Cengiz has equalized the weightings of the coursework and the examination (originally 30% and 70% respectively), citing the introduction of the self-assessment and peer-review requirements as justification for giving greater weight to the coursework component. He strongly believes that examination is not the only suitable assessment method for his modules, as the nature of the topics he teaches requires understanding and the ability to apply knowledge to real-life scenarios, rather than merely memorising content from textbooks or course notes. After studying on the Postgraduate Certificate in Academic Practice programme delivered by the Learning Development Centre at City University London, Cengiz has become an advocate of self-directed and reflective learning, and he encourages his students to become more critically self-reflexive so that they can learn from their own experiences.

If you would like to know more about this assessment model, Cengiz is happy to be contacted by e-mail: cengiz.turkoglu.1@city.ac.uk.

Christopher Wiley and Cengiz Turkoglu

Innovation in Assessment and Feedback

April 20, 2012

My dual role as University Learning Development Associate in Assessment & Feedback and Senior Lecturer in Music has led me to run several pilot projects in my teaching this academic year (2011-12), exemplifying innovative approaches to the practices surrounding assessment and feedback. Three case studies are given below.

(1) Using wikis in Moodle to track progress on undergraduate dissertations and deliver formative feedback

Last term I set up a wiki template in Moodle to provide each of my final-year undergraduate dissertation students with a resource that both of us could access and periodically update, for the purposes of tracking progress on their dissertations and offering formative feedback on submitted draftwork.

Major Project wiki

The wiki includes pages for the project’s working title, and a separate page for each meeting, divided into sections for the date of the meeting, a summary of what was discussed, objectives agreed for next time, and the date of the next meeting (see screenshot, right). It was developed to help undergraduate students keep on track in their dissertation work at a critical time in their programme, and was inspired by the Moodle wiki previously set up for recording undergraduate Personal Development Planning (PDP), as well as the University’s use of Research And Progress for postgraduate research students.

One student has engaged with this resource to the extent that he has created several new pages to record his ongoing progress in between supervisory meetings; the nature of the wiki is such that I can review his progress at any time and add suggestions or make revisions as needed. Another student always brings her Wi-Fi enabled laptop with her so that we can make updates to the wiki during our tutorials. Whenever one of us makes and saves a change, the other can instantly see it on their screen, which demonstrates the value of using mobile devices to support student learning – particularly as this student now takes the lead at the end of each supervision in ensuring that the wiki has been fully updated.

This would seem to be a helpful way of time-managing the task of researching and writing a dissertation, not least given that it is a challenging process that final-year undergraduates may be encountering for the first time. It also provides a concise and useful reminder (for supervisor as well as student) of the discussions, progress, and objectives set at each meeting, while enabling students to take ownership of their learning. This pilot will be rolled out across the entire module next year, when all final-year Music students will be expected to use it; there is also much potential for initiatives of this nature to be extended to other programmes and subject areas.

(2) Curriculum design developed in dialogue with the students: elective assessment components

One innovative assessment model that I have been developing for much of this academic year involves giving students some choice as to how they wish to be assessed. Consultation with senior academic staff within and beyond the University has identified that, while such practices are more logistically complex, it should not be supposed that there is necessarily only one way to assess students against a prescribed set of learning outcomes.

EVS graph

After considering several possible assessment patterns, which were discussed with colleagues, I settled on the following model, which essentially preserves the 30:70 ratio (standard across the institution) between the minor and major assessment points:

  • 1 Written Examination (unseen): 30 marks
  • 1 Elective Assessment: 30 marks – the student chooses ONE of the following options:
    • Written Coursework
    • Oral Presentation
    • Musical Performance accompanied by Written Documentation
  • 1 Project developed from the above Elective Assessment: 40 marks
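
To spell out the arithmetic behind that ratio (on one reading of the grouping, assuming the Examination stands alone as the minor assessment point while the Elective Assessment and the Project developed from it together form the major strand):

Examination : (Elective Assessment + Project) = 30 : (30 + 40) = 30 : 70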

The Examination provides a common component for all students, irrespective of the pathway they choose for the Elective Assessment. The other assessments have been specified with an eye to parity with existing module assessment patterns. The benefits to students are that the initiative enables them to play to their strengths, and to influence both how they wish to be assessed and how they wish their marks to be apportioned. The Elective Assessment also provides an additional opportunity for interim feedback ahead of the final Project.

My consultation with the students as to whether such an innovation would be welcomed was revealing: the graphical result (above right) of a poll conducted anonymously using EVS handsets (clickers) speaks for itself.

The focus group, comprising 12 students from my class, was also consulted on several other major points of curriculum design, including the content and schedule of the lectures, as well as the manner in which they will be taught and assessed and in which feedback will be delivered. The students have decided upon all of the lecture topics themselves via a Doodle poll, and have been invited to write supplementary assessment criteria using a wiki; elements of self- and peer assessment will also be included in the module. When the focus group had discussed several different forms of feedback (written, dialogic, telephone, podcast, screencast), 33% of the students said that they would prefer written reports, while fully 50% opted for dialogic feedback – an unexpected but welcome result.

(3) Student self-assessment of in-progress writing of a research dissertation

Earlier in the year, one of my senior postgraduate research students submitted a draft dissertation chapter to me in the knowledge that, while some sections were complete, others would need revision – either because she felt they would benefit from further work, or because she had yet to complete the research (largely ethnographic, and entirely dependent on the availability of her study participants) that would enable her to finalize her writing.

Since I nonetheless wanted to give her feedback on her work in progress, I suggested that, after a couple of weeks, the student return to the draft chapter herself to reflect upon her writing, embedding comments electronically in Microsoft Word to identify the sections where she felt further revision would be necessary and to explain why. I would then overlay my own feedback in a similar manner.

Reviewing draftwork that the student had herself annotated, I found my attention directed much more effectively towards the parts of the chapter on which it was most fruitful to focus. I felt that I would have made many of the same comments as the student herself, and this means of reflection also enabled the student to ask further questions of her work, to which I was then able to respond, so that we engaged in a form of written dialogic feedback (see screenshot below).

The student likewise reported that she found it very useful to return to her chapter after an interval, and particularly to document the areas she believed required additional work. This is a model of self-reflective feedback that I am now seeking to adopt for future research students.

Dissertation feedback sample

Dr Christopher Wiley
c.m.wiley@city.ac.uk
20.04.12

Review: SACWG seminar, ‘The efficiency and effectiveness of assessment in challenging times’

On Thursday 24 November 2011, the Student Assessment and Classification Working Group (SACWG) hosted a one-day seminar, ‘The efficiency and effectiveness of assessment in challenging times’, at Woburn House, Tavistock Square, London.

To open the seminar, Dr Marie Stowell (University of Worcester) set out the context for the day in her presentation ‘Efficiency and effectiveness in assessment’. She identified that one of the aims of SACWG is to explore variations in practice across the sector and how they impact differently on students, retention, and learning success, and she observed the importance of placing students at the centre of the process, given the fee structure proposed for 2012 entry coupled with the implications of assessment and feedback for student satisfaction. In light of the new funding model, one particularly pertinent observation she made concerned the cost of teaching relative to the cost of assessment: the latter is resource-heavy, particularly once one factors in elements such as formative assessment (for which quality is less assured than for its summative counterpart), moderation, external examining, reassessment of failed components, and the possibility that students may be over-assessed in the first instance. She also suggested that assessment criteria may not warrant the detailed attention they are typically accorded, as students tend to take the more direct approach of endeavouring, by less formal means, to uncover exactly what it is that the lecturer expects them to produce. These arguments may indicate that both the efficiency and the effectiveness of assessment could usefully be enhanced.

The next talk, by Professor Alison Halstead (Aston University), explored how institutions have responded to the challenges of recent years, specifically the White Paper and its implications for students and for Higher Education. She noted that the potential increase in students’ financial burden will inevitably lead to heightened expectations concerning teaching quality, learning, and employability, in which respect assessment and feedback are currently among the most important issues. She warned that student challenges to the regulatory framework for assessment may be on the rise in the future, and identified that it was imperative, in these changing times, to nurture outstanding, innovative teachers and for staff to support student learning and e-learning (including assessment). Calling for the abandonment of the rigid distinction often drawn between ‘teachers’ and ‘researchers’, she suggested that promotions should reward teaching excellence on a par with research. Later sections of her presentation outlined recent initiatives at Aston, for instance, standardizing the use of the Virtual Learning Environment across the institution and introducing learning technologies such as lecture capture and electronic voting systems. Her view was that technology-enabled practice, while it took more time upfront to implement, was worth the investment in terms of teaching quality and learning success.

A structured group discussion and question-and-answer session with the morning’s speakers ensued. One point that emerged strongly was the importance of maintaining a variety of assessments, organized in a carefully considered schedule that takes a holistic overview at programme level. The latter becomes much more difficult in degree courses that incorporate elective modules, though there are both pedagogical and satisfaction-related reasons for offering choice to students and giving them ownership of their programme pathway. Another preoccupation amongst delegates was to ensure that assessments do not become too atomized, but relate to one another even beyond the confines of the module with which they are associated; one of the more innovative solutions proposed was the possibility of assessments straddling two or more modules. The need to develop sustainable structures was also discussed (for instance, moving towards group assessment to cope with rising student numbers), as was the importance of considering, as part of change management, what the benefits of effecting a change might be; if these cannot be persuasively articulated to staff and students, the change may not be worth implementing. A final warning concerned being too driven by regulations when designing efficient and effective curricula: it may be more useful in the long term to refer obstacles presented by the regulatory framework upwards so that they can be addressed.

The seminar resumed in the afternoon with a talk from Professor Chris Rust (Oxford Brookes University) on ‘Tensions in assessment practice’, which opened by reiterating the themes of the seminar in noting that current practices are neither efficient nor effective. He observed that students have a tendency to focus on the mark they will obtain from an assessment rather than on the educational content of their studies, and that their approach often becomes increasingly surface-level as they progress through their programme. He defended modes such as formative, self-, and peer assessment as potentially yielding more ‘authentic’ assessment, arguing that graduates should be able to evaluate themselves and their peers as an outcome of their programme, and that making greater use of these options might also free up staff resources for summative assessment. Noting that students do not warm to the notion of being assessed, he suggested that perhaps the word ‘assessment’ should not be used for formative tasks. He further observed that feedback practices might be made more efficient by strengthening the relationship between modules, such that students are encouraged to learn from feedback received in one module and to carry what they have learnt over to others. Lessening the sense of compartmentalization between individual modules would, in his view, lead to more inclusive structures, albeit less flexible ones, in that standardization (for instance, imposing the same word limit on all assessments) does not always result in appropriate assessments.

Then followed a second group workshop session, on the theme of ‘What can institutions do to mitigate tensions?’. After a structured discussion of the issues, each group reported back to the seminar on the problems it had identified and the possibilities for efficient or effective solutions. It would be impossible to do justice here to the vast amount of ground covered by the several contributing groups. To cite just a few examples, key tensions raised included giving formative assessment a greater purpose (a proposed solution being to tie formative and summative assessments together in more meaningful ways), the problem of ensuring parity when using several examiners for the same assessment task (which may be solved by grading the assessment as pass/fail only), and the evergreen question of quality of feedback versus timeliness of feedback (for which there was some discussion about feedback becoming ‘quick and dirty’). On the question of standardization of process, I took the microphone to report back on the standardized feedback proforma that had been created in liaison with the students and implemented across one programme at City University London (see this post for details), and suggested, with much support from the floor, that students should be more involved in consultation regarding matters of assessment and feedback.

Prior to the close of the seminar, a final speaker, Professor Paul Hyland (Bath Spa University), provided some reflections upon the day’s discussion. Noting that assessment was a large topic with which to deal, he categorized the day’s discussion as having crystallized around four main areas: external scrutiny (ranging from students’ parents to formal regulatory bodies); administration and management; the tutors’ perspective on assessment; and the students’ perspective. He argued that discussions of effectiveness and efficiency should always be mindful of the purpose of assessment. In his view, assessment should be concerned with measuring students’ performance and nurturing learning, whereas there exists a danger of (to put it crudely) simply setting assessments in order to get students to do some work. In this context, a greater level of student involvement and engagement with assessment would therefore be beneficial. He also observed the need to use technology to improve existing practice, for instance by supplementing traditional modes of feedback with video and screencasts. Finally, he commented upon the importance of tutors having access to the feedback students have received on previous assessments, in order to understand where they are coming from and to be able to support them in their ongoing studies.

SACWG has kindly made available the presentation slideshows used by the speakers, and the comprehensive notes distilled from the two very productive group discussions (as reported back to the seminar by nominees from the groups), at the following link: http://web.anglia.ac.uk/anet/faculties/alss/sacwg.phtml.
