Archive for the ‘Assessment & Feedback’ Category

Talking Multimedia in Education

December 14, 2012

As posted in Educational Vignettes, one of the investments in our Strategic Learning Environment (SLE) is in using multimedia to support and improve teaching and learning practices at City University London. This post looks at how multimedia has evolved in education, takes a quick look at some external and internal case studies on using multimedia in teaching, and reviews the recommendations that the Multimedia Requirements Working Group is considering, based on an analysis of all the Schools' needs on this topic.

City University has a good track record of enabling academic staff to use multimedia for learning; it is a key aspect of the Strategic Learning Environment (SLE).

Why use Multimedia in Education?

The use of multimedia (video and audio) in learning, teaching and assessment in Higher Education has grown and changed over the last few years, influenced by the experience of using Web 2.0 services such as YouTube and iTunes and by the increasing use of mobile devices in education. Educators and students have been inspired to make, share and learn from video and audio in new ways. Other areas such as marketing, libraries and research are also increasing their use of multimedia.

For an interesting talk on how video is currently being used in education, view Salman Khan's TED Talk. Common uses of multimedia (as researched by JISC Digital) include:

  • demonstrations of contextual images;
  • images with clickable parts (an image map) that link to further information, e.g. Google Maps;
  • video recordings of teaching sessions;
  • media-enhanced feedback;
  • recordings of special events such as guest lectures.

External research on Multimedia

A useful framework to support educators in terms of how to use digital resources (artefacts) has been inspired by a JISC project. The DiAL-e Framework supports the pedagogically effective use of a range of digital content, focusing on what the learner does with an artefact rather than giving priority to its subject or discipline content.

So what’s the latest at City?

Demand for video is increasing, in particular for assessment in the form of coursework submission and reflective portfolios, as well as for enabling staff and students to make their own video content.

For a look at some of the case studies around using multimedia, you may be interested in the online webinars run by the Video Special Interest Group. A recent webinar showcased diverse uses of multimedia suited to the programmes in three Schools:

  • Sophie Paluch (The City Law School) has created mock courtroom scenarios for retraining judges across the UK. The videos let participants on the programme practise representing someone in court as an advocate.
  • Natasa Perovic (School of Health) has created resources on blood pressure stethoscope sounds, because the programme wanted to make it easier for students who are inexperienced in measuring blood pressure to recognise the different sounds.
  • Luis Balseca (CASS) is running a pilot on video assessment for one of the MBA programmes, in which students submit their videos through Moodle.

The session has been recorded and will be published as a vignette in due course.

Schools and their Requirements

All Schools took part in a requirements-gathering exercise in summer 2012. Four themes emerged that describe the direction City University London expects for the tools and features used most frequently.

Features in relation to the four themes:

1. Help staff and students easily make and share multimedia recordings.

  • An easy-to-use online workflow, with compression and creation of assets that are compatible with all devices and platforms.
  • A webcam and screen-capture feature, with recordings automatically saved to the library.

2. Enable sharing of audio & video material created at the University.

  • A YouTube-like browsable interface with public, private and administration views
  • A library that can be searched from within Moodle

3. Provide a safe and controlled place to store and publish audio & video, so access can be restricted to suit different needs, e.g. confidential subject matter, assessment pieces, student presentations, copyrighted materials and television recordings.

  • Secure Moodle assignment integration
  • Very large files can be submitted and handled in batches
  • Private reflective portfolios for students

4. Take learning, teaching and assessment using audio & video further i.e. to a global, mobile generation and enhance the power of social media tools.

  • Users can record and upload via mobile devices
  • Basic editing can be done online
  • Users can build playlists and mark favourites

With Moodle 2 due to be released to students in September 2013, the Multimedia Requirements group are looking at ways in which multimedia can be integrated effectively with Moodle at course and assessment level. Do stay tuned for the next update, and in the meantime, if you'd like to find out how to use multimedia to suit your programmes, please do contact your educational technology team.

HeLF meeting: Personalisation of Assessment and Feedback

November 6, 2012

In the last ten years, higher education has changed beyond all recognition, and Heads of E-Learning will be critical to the significant changes to come. These were among the opening words of Professor Rikki Morgan-Tamosunas, DVC, University of Westminster, at the Heads of E-Learning Forum (HeLF) meeting held on 31st October. The theme for this year's meetings is personalisation, and E-Learning Heads from around the country came together to explore Personalisation of Assessment and Feedback.

Lisa Gray from JISC gave an overview of the JISC Assessment and Feedback programme supporting and sharing results from numerous projects now running in the UK: http://www.jisc.ac.uk/assessment

Slide from the Electronic Voting Systems presentation, outlining the impact of EVS on teaching

How do you avoid assessment bunching on courses? Catherine Naamani from the University of Glamorgan shared their Assessment Diaries project, designed to ensure assessments are fairly spaced and to give students an overview of all assessments across their courses, including type, submission date and feedback return date. The tool linked in with Blackboard.

Marija Cubric shared the University of Hertfordshire's uses of Electronic Voting Systems (known as clickers at City) for assessment. This technology had on the whole been well received by staff and students: the tool was deemed easy to use and made teaching and learning more enjoyable.

Gunter Saunders and Peter Chatterton finished the day with an exploration of their Making Assessment Count (MAC) project, which focused on feedback. Their presentation highlighted a project at City within broadcast journalism enabling students to reflect on assessment feedback. This project involves Kate Reader from the School of Arts and Social Sciences, and here is a presentation about the work: http://estsass.co.uk/2012/07/23/presentations-from-the-learning-city-conference/

Slide from MAC project

Also discussed was Viewpoints, a change-management technique for curriculum design that uses cards bearing principles and examples to help design modules.


Self- and Peer Assessment using Turnitin in SEMS: Cengiz Turkoglu

August 1, 2012

Cengiz Turkoglu, a Senior Lecturer in the School of Engineering and Mathematical Sciences, principally teaches final-year undergraduate students and one of the MSc Aviation Management modules, with class sizes usually not exceeding 20 students. Each of his modules uses a similar assessment pattern comprising one piece of coursework plus an examination. For the coursework component, he utilizes the self-review and peer-review functions of Turnitin as part of the assessment.

The coursework has an initial deadline at least 6-8 weeks into the module, to allow students sufficient time to conduct research and write their essays. Once the students have submitted their papers, Turnitin's PeerMark assignment function allows each of them to be either paired with or randomly allocated another paper, which they are then required to peer-review. Given that there is always a range of standards represented by the students and their papers, one dilemma that Cengiz has faced is whether to allocate papers randomly or to attempt to group students according to their standard. He never pairs students such that two are asked to review one another's papers.
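Turnitin's own allocation logic isn't documented in this post, but the constraint Cengiz describes — every student reviews somebody else's paper, with no reciprocal pairs — can be sketched with a random cyclic shift. All names and the function below are illustrative, not part of Turnitin:

```python
import random

def allocate_reviews(students):
    """Assign each student another student's paper to review, with no
    self-reviews and no reciprocal pairs (A reviews B while B reviews A)."""
    n = len(students)
    if n < 3:
        raise ValueError("need at least 3 students to avoid reciprocal pairs")
    order = students[:]
    random.shuffle(order)
    # Rotate by k places: the student at position i reviews the paper of the
    # student at position (i + k) % n.  Any k with 0 < k < n and 2*k % n != 0
    # rules out both self-review (k = 0) and mutual review (2k = n).
    k = random.choice([k for k in range(1, n) if (2 * k) % n != 0])
    return {order[i]: order[(i + k) % n] for i in range(n)}

allocation = allocate_reviews(["Ana", "Ben", "Caz", "Dev", "Eli"])
for reviewer, author in allocation.items():
    print(f"{reviewer} reviews {author}'s paper")
```

Because the shift is the same for everyone, the allocation is a single cycle through the whole class, which also guarantees that every paper receives exactly one review.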

The feedback provided by each student in peer review is subsequently made available to the original author – and the students are made aware at the time of writing that their comments will be released in this manner. At the same time, each author is asked to take a self-assessment exercise that follows exactly the same format as the peer review. As the process is conducted entirely online using Turnitin, it is completely paperless, which reduces the administrative workload and makes for a more sustainable structure.

For Cengiz, self- and peer review are only valuable if they lead somewhere in terms of the assessment process. With that in mind, once the feedback has been exchanged between students, Cengiz gives them a week to undertake further revisions to their original submission should they wish to do so. He asks that they do not rewrite their paper substantively, but confine themselves to minor amendments. Plagiarism of the peer-review feedback is not an issue because all the material is traceable and hence can be attributed. Only after the revised submission has been received does Cengiz mark the work summatively using GradeMark and provide his own feedback.

Detailed assessment criteria are provided, with the marking criteria broken down into six different categories each with their own weighting, of which one category is self- and peer review (worth 10% of the mark). The students are therefore aware from the outset that it is an integral part of the assessment, and its summative nature encourages them to engage fully with the process, since Cengiz’s experience is that students can be very assessment-driven. The questions they are asked for the self- and peer reviews correspond to the other assessment categories, so they judge each other’s paper, and their own, in exactly the same way as the examiner.

Cengiz has found this to be a very valuable exercise. It sets the students thinking about how to frame feedback, offering helpful advice to the author rather than simply giving praise or criticism. It also encourages them to consider issues such as whether the author understood the question and maintained focus, how well they researched the subject, and how coherent the arguments they presented were, based on their own reasoning or factual information they identified during their research. (The criteria matrix used by Cengiz is shown below; this is also entered as the rubric in Turnitin.) While students are variable in their engagement with the process, Cengiz notes that the best self-reviews and peer reviews recognize areas where the submission can be improved.

Turnitin screenshot - criteria matrix

Cengiz argues that the value of this assessment model is that it provides a simulation of real-life scenarios. In safety-critical industries such as aviation, for example, maintenance engineers are expected to inspect each other's work on a regular basis, and the peer review process is widely used, particularly by design engineers. In addition, all engineers should be expected to reflect upon, and to strive to improve, their own performance in order to develop themselves professionally on a continual basis. They may not always receive the most favourable advice from their peers, so nurturing skills such as the ability to evaluate the feedback they receive and to make their own judgement when taking decisions prepares engineering students effectively for the profession.

Cengiz justifies equalizing the weightings between the coursework and examination (originally weighted at 30% and 70% respectively) by citing the introduction of the requirements for self-assessment and peer review as a reason to give greater weighting to the coursework component. He strongly believes that examination is not the only suitable assessment method for his modules, as the nature of the topics he teaches requires understanding and the ability to apply that knowledge to real-life scenarios, rather than merely memorising content from textbooks or course notes. After studying on the Postgraduate Certificate in Academic Practice programme delivered by the Learning Development Centre at City University London, Cengiz has become an advocate of self-directed and reflective learning, and he encourages his students to become more critically self-reflexive so that they can learn from their own experiences.

If you would like to know more about this assessment model, Cengiz is happy to be contacted by e-mail: cengiz.turkoglu.1@city.ac.uk.

Christopher Wiley and Cengiz Turkoglu

Use of the Personal Response System for Formative Assessment in Optometry: Dr Byki Huntjens and Dr Steve Gruppetta

With the recent founding of the University Personal Response System (PRS) Steering Group, co-chaired by Dr Siân Lindsay and Farzana Latif, this would seem to be an opportune time to profile one of the innovative approaches implemented within the University in using PRS technology for formative assessment.

Dr Byki Huntjens and Dr Steve Gruppetta are lecturers in the Division of Optometry and Visual Science who have introduced the PRS to undergraduate students so that they may receive immediate classroom feedback during Clinical Skills and Optics lectures. A PRS handset is given to each student (against a small deposit) for the duration of their degree programme, and is registered to their name to enable responses to be matched to individuals. Each lecture features a succession of multiple choice questions (MCQs). Byki's practice is to start subsequent lectures with a set of MCQs covering the previous topic plus the background reading for the class, and to test the students' understanding of the new topic later on during the lecture. Steve includes material that potentially encompasses the previous lecture, the current lecture, or even paves the way for a new topic to be discussed. The end result is a series of technology-enabled formative assessments.

Although only the group scores are shown during lectures and the progress of individual students is not revealed, the results of the quizzes are uploaded to Moodle each week by topic and the students are thereby able to check their individual score. This enables them to track their progress over time, and doubles as a reminder of the topics to which they need to direct particular attention prior to the examinations. The Moodle grade book also shows the students’ ranking among the whole group, leading some of them to become slightly competitive. Indeed, the element of competition is actively nurtured – the top five students with the highest marks in the year are awarded a prize at the divisional Prize Giving event.

The students have shown excitement during the PRS quizzes and appreciate the immediacy of the feedback, the anonymity of the process, and the way that it articulates the lecture by providing an interlude. Steve has developed the practice of making the PRS quizzes, which he calls the ‘Optics Challenge’, distinct from the rest of the lecture by changing the background of the slide from white to black (see screenshot below). The students’ responses are also used by the tutors to adapt subsequent lectures to the level of understanding of the specific cohort; this has prompted a change of direction on several occasions. In addition, this information has enhanced the support that the tutors are able to offer when students have sought extra help.

The Optics Challenge Leaderboard

Byki delivered a presentation on the use of PRS technology for formative assessment at the Fourth Annual ‘Learning at City’ Conference on 13 June, 1.20-2.00pm (the video is available here).

Christopher Wiley, Byki Huntjens, and Steve Gruppetta
with thanks to Siân Lindsay and Farzana Latif

A Case Study of Interim Assessment in SEMS: Mary Aylmer

Mary Aylmer is a visiting lecturer in the School of Engineering and Mathematical Sciences (SEMS), teaching the CAD part of the module CV1407 IT skills, Communication, and CAD. She has developed an assessment pattern in which students produce five pieces of CAD coursework, each of which involves completing engineering drawings. There are two interim submissions each weighted at 2% of the final module mark, two larger submissions weighted at 16% and 40%, and an end-of-module test also weighted at 40%.
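As a quick arithmetic sketch of how the weightings above combine into a final module mark (the component labels are illustrative placeholders, not the module's official names):

```python
# Weightings from the CV1407 assessment pattern described above;
# 2% + 2% + 16% + 40% + 40% = 100%.
WEIGHTS = {
    "interim 1": 0.02,
    "interim 2": 0.02,
    "coursework 3": 0.16,
    "coursework 4": 0.40,
    "end-of-module test": 0.40,
}

def final_mark(component_marks):
    """Weighted average of component marks (each out of 100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(WEIGHTS[name] * component_marks[name] for name in WEIGHTS)

# Weak interim work barely moves the final mark, but counting it at all
# is what keeps students engaged from the start of the module.
marks = {"interim 1": 50, "interim 2": 55,
         "coursework 3": 70, "coursework 4": 75,
         "end-of-module test": 80}
print(round(final_mark(marks), 1))  # 75.3
```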

The 2% weighting for the interim submissions is intended to ensure that the students' early work on the module is taken into account in the final module mark, which helps to focus them on the task. The exercises are carefully graded and enjoyable for the students to complete; they tend to take ownership of their own learning, as the assessments are designed such that they are able to determine exactly what is required of them, so they can aspire to high marks.

SEMS CAD CV1407

The obvious advantage of this assessment pattern is that it ensures the students complete their initial work on the module. This means that they are well prepared for the larger submissions: they have already accrued plenty of experience of CAD in the first few weeks through the interim submissions, and are thereby placed in a strong position to tackle the more difficult drawings. In other words, it ensures that they undertake the groundwork first.

The downside to this system for the tutor is that it generates a substantial amount of marking. Mary has also noted a tendency among students to query their marks, even in the case of the 2% submissions, which are unlikely to have a significant impact on their overall degree average. It can become very time-consuming to justify marks deducted, particularly with 120 students, each of whom submits five pieces of work.

Nonetheless, the outcomes speak for themselves. By the end of the module, the students can produce good CAD drawings fairly easily; and they have indicated through their feedback that they enjoy the course, which is very encouraging. While an assessment model such as this may be time-consuming for the tutor, it is evidently worth the investment if it results in robust learning and student satisfaction.

Christopher Wiley and Mary Aylmer

Using Debate as a Teaching Format

July 7, 2012

Three years ago, at a City University Creativity Workshop, I met Kirsten Hardie, who teaches Design at the Arts University College at Bournemouth. She told me about a method she had invented called “On Trial”. By coincidence, a group of City teachers I worked with recently suggested, quite unprompted, the use of a debate format as a method of increasing student engagement.

It was impressive how quickly they came up with a great number of creative ideas to widen the palette of teaching formats. The session focused on devising learning activities that develop employability skills which are often currently missing:

  • Critical thinking
  • Reflection
  • Persuasive communication
  • Self awareness

Devising fresh learning activities to promote employability skills

I took Chickering and Gamson's Seven Principles for high-engagement learning as a benchmark. We selected two of these, and participants were also encouraged to identify their own. The focus was on fresh learning activities and new ideas. Here are some of the resulting creative learning outcomes from the participants.

Fresh learning activities:

  • Reflection: in action and on action
  • Scenarios, role play and simulation
  • Debating
  • Combined learning with another school (interprofessional  learning)
  • A buddy system

The question is: how can we enable great ideas like these to be put into practice? For example, by using debate in our teaching.

Returning to National Teaching Fellow Kirsten Hardie's On Trial project: it explores the use of role play and debate in student-centred learning, and it promotes and facilitates creativity in and through learning. Students work with colleagues to explore and interrogate problematic issues relating to their specialism.

“On Trial harnesses popular culture, and the seductive qualities of the courtroom, as experienced through television and film examples (both historical and contemporary), in a creative fashion to help students engage with tough academic issues and wider ethical concerns.”

In addition, a fascinating article by Catherine Sanderson discusses and evaluates debate as an assessment and learning strategy to develop critical reasoning skills and to stimulate learning through assessment among first-year Biomedical Science and Public Health students.

Sanderson’s work with first year undergraduates indicates that although it may be tacitly understood that critical reasoning is an essential skill for all students, it is far too often left to the final year as a learning outcome or even reserved for post-graduate studies.

Leaving audio feedback using GradeMark

May 22, 2012

Microphones Rusty Sheriff (2007): http://www.flickr.com/photos/rustysheriff/4880169398/ (CC BY-NC 2.0)

Are you using GradeMark in Turnitin to provide feedback to students? Did you know you can now record audio feedback on student assignments?

You can record up to three minutes of feedback on each student assignment, allowing you to personalise your feedback. In the JISC-funded Sounds Good project, students remarked positively on receiving audio feedback, commenting on its personal nature and the level of detail provided (Rotheram 2009a). Some students in this study did comment that they would like both audio and written feedback; you can still use the QuickMarks and general comments in GradeMark to leave written feedback if required.

How do I record audio feedback in GradeMark?

The attached guidance note provides step-by-step instructions on how to record audio feedback in GradeMark:  Providing audio feedback with GradeMark

So what do I need to get started?

  • You need to be using GradeMark in Turnitin to mark your students’ assignments
  • A microphone (an external microphone usually produces better sound quality)
  • A quiet room to record the audio feedback. The MILL has two Podcast rooms that you can book to record your audio feedback – these provide a quiet space and the AV equipment that you need in order to record your audio feedback. Please send a calendar invite to video@city.ac.uk indicating the length of time that you would like to book a podcast room for and a member of the MILL team will respond to your request.

Tips on preparing audio feedback

  • Focus on the quality of the feedback rather than the quality of the recording. Don't feel that you have to correct small speaking errors by re-recording; you can correct these as you would in conversation. Do avoid poor-quality audio, though, as this can detract from the quality of your feedback.
  • Structure your feedback. Prepare a draft of the key points you would like to cover before you record.
  • Try to stay positive. Even when providing developmental feedback try to end on a positive note.
  • Speak clearly.
  • Make explicit how the feedback can contribute to the student's development. (JISC 2010; Rotheram 2009b)

References

JISC (2010) Audio Feedback [online] Available from: http://www.jiscdigitalmedia.ac.uk/audio/advice/audio-feedback (Accessed: 21.5.12)

Rotheram, B. (2009a) Sounds good: Quicker, better assessment using audio feedback [online] Available from: http://www.jisc.ac.uk/publications/reports/2009/soundsgoodfinalreport.aspx (Accessed: 21.5.12)

Rotheram, B. (2009b) Practice tips on using digital audio for assessment feedback [online] Available from: http://www.kent.ac.uk/uelt/ced/conference/2009/Audio_feedback_tips_3_Rotheram.pdf (Accessed: 21.5.12)

The first episode in the Educational Vignettes podcast – an interview with Dr Keith Pond from the School of Business and Economics at Loughborough University

May 21, 2012

Presenting the first episode in our Educational Vignettes podcast series!

With the Cass learning development showcase, themed 'Efficient and Effective Feedback', coming up on May 22nd, Sandra and I (at the LDC) carried out a sector review of other business schools across the country to see how they were performing in their National Student Survey scores for assessment and feedback. The School of Business and Economics at Loughborough University is third in the country in this respect, with 77% of its students satisfied with the feedback they receive:


University Guide 2012: Business and Management Studies (available online from The Guardian website)

Earlier this month Sandra and I spoke with Dr Keith Pond, Associate Dean for Teaching at the Business school at Loughborough. To listen to some insights into the assessment and feedback practices at his school please go here:

http://podcast.ulcc.ac.uk/accounts/CityUniversityLondon/Educational-Vignettes.xml

Dr Keith Pond, ADE for Teaching at the School of Business and Economics at Loughborough University

During our chat with Keith, we spoke about the assessment and feedback projects being undertaken at his School, how he ensures consistency of feedback, what he thought his students liked about the feedback they received on their assessments, and how they manage the process of giving feedback on exams at Loughborough.

Siân Lindsay and Sandra Partington are LDC liaisons for the Cass Business School at City University London.

Innovation in Assessment and Feedback

April 20, 2012

My dual role as University Learning Development Associate in Assessment & Feedback and Senior Lecturer in Music has led me to run several pilot projects in my teaching this academic year (2011-12), exemplifying innovative approaches to the practices surrounding assessment and feedback. Three case studies are given below.

(1) Using wikis in Moodle to track progress on undergraduate dissertations and deliver formative feedback

Last term I set up a wiki template in Moodle to provide each of my final-year undergraduate dissertation students with a resource that both of us could access and periodically update, for the purposes of tracking progress on their dissertations and offering formative feedback on draftwork submitted.

Major Project wiki

The wiki includes a page for the project's working title, and a separate page for each of the meetings, divided into sections for the date of the meeting, a summary of what was discussed, objectives agreed for next time, and the date of the next meeting (see screenshot, right). It was developed owing to the need to help undergraduate students keep on track in their dissertation work at a critical time in their programme, and was inspired by the Moodle wiki previously set up for recording undergraduate Personal Development Planning (PDP), as well as the University's use of Research And Progress for postgraduate research students.

One student has engaged with this resource to the extent that he has created several new pages to record his ongoing progress in between supervisory meetings; the nature of the wiki is such that I can review his progress at any time and add suggestions or make revisions as needed. Another student always brings her Wi-Fi enabled laptop with her so that we can make updates to the wiki during our tutorials. Whenever one of us makes and saves a change, the other can instantly see it on their screen, which demonstrates the value of using mobile devices to support student learning – particularly as this student now takes the lead at the end of each supervision in ensuring that the wiki has been fully updated.

This would seem to be a helpful way of time-managing the task of researching and writing a dissertation, not least given that it is a challenging process that final-year undergraduates may be encountering for the first time. It also provides a concise and useful reminder (for supervisor as well as student) of discussions, progress, and objectives set at each meeting, while enabling them to take ownership of their learning. This pilot will be rolled out across the entire module next year and all final-year Music students will be expected to use it; there is also much potential for initiatives of this nature to be extended to other programmes and subject areas.

(2) Curriculum design developed in dialogue with the students: elective assessment components

One innovative assessment model that I have been developing for much of this academic year involves giving students some choice as to how they wish to be assessed. Consultation with senior academic staff within and beyond the University has identified that, while such practices are more logistically complex, it should not be supposed that there is necessarily only one way to assess students against a prescribed set of learning outcomes.

After considering several possible assessment patterns, which were discussed with colleagues, I settled on the following model, which essentially preserves the 30:70 ratio (standard across the institution) between the minor and major assessment points:

EVS graph

  • 1 Written Examination (unseen): 30 marks
  • 1 Elective Assessment: 30 marks – the student chooses ONE of the following options:
    • Written Coursework
    • Oral Presentation
    • Musical Performance accompanied by Written Documentation
  • 1 Project developed from the above Elective Assessment: 40 marks
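A minimal sketch of the 30:30:40 split above (the function and pathway labels are illustrative, not an actual University system):

```python
# The three elective pathways a student may choose from.
ELECTIVE_PATHWAYS = {
    "written coursework",
    "oral presentation",
    "musical performance with written documentation",
}

# 30 + 30 + 40 = 100 marks; the examination is the minor point (30) and
# the elective plus project together form the major point (70).
WEIGHTS = {"examination": 30, "elective": 30, "project": 40}

def module_mark(exam, elective, project, pathway):
    """Total module mark out of 100; each component is marked out of 100."""
    if pathway not in ELECTIVE_PATHWAYS:
        raise ValueError(f"unknown elective pathway: {pathway}")
    components = {"examination": exam, "elective": elective, "project": project}
    return sum(WEIGHTS[name] * components[name] / 100 for name in WEIGHTS)

print(module_mark(60, 70, 65, "oral presentation"))  # 65.0
```

Whichever pathway the student chooses, the marks are apportioned identically, which is what keeps the elective options in parity with one another.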

The Examination provides a common component for all students, irrespective of the pathway they choose for the Elective Assessment. The other assessments have been specified mindful of parity with existing module assessment patterns. The benefits to students are that the initiative enables them to play to their strengths, and to influence how they wish to be assessed and how they wish their marks to be apportioned. The Elective Assessment also permits an additional opportunity for interim feedback ahead of the final Project.

My consultation with the students as to whether such an innovation would be welcomed was revealing: the graphical result (above right) of a poll conducted anonymously using EVS handsets (clickers) speaks for itself.

A focus group comprising 12 students from my class was also consulted on several other major points of curriculum design, including the content and schedule of the lectures as well as the manner in which they will be taught, assessed, and given feedback. They have decided upon all of the lecture topics themselves via a Doodle poll, and have been invited to write supplementary assessment criteria using a wiki; elements of self- and peer assessment will also be included in the module. Having discussed several different forms of feedback (written, dialogic, telephone, podcast, screencast) at the focus group, 33% of students said that they would prefer written reports, while fully 50% opted for dialogic feedback – an unexpected but welcome result.

(3) Student self-assessment of in-progress writing of a research dissertation

Earlier in the year, one of my senior postgraduate research students submitted a draft of a dissertation chapter to me in the knowledge that while some sections were complete, others would need revision either because she felt that they would benefit from further work or because she had yet to complete the research (largely ethnographic, for which she is entirely dependent on the availability of her study participants) that would enable her to finalize her writing.

Since I nonetheless wanted to give her feedback on her work in progress, I suggested that after a couple of weeks she return to the draft chapter herself to reflect upon her writing, embedding comments electronically in Microsoft Word to identify sections where she felt further revision would be necessary and to explain why. I would then overlay my own feedback in a similar manner.

Being able to review draft work that the student had herself annotated directed my attention much more effectively towards the parts of the chapter on which it was most fruitful to focus. I felt that I would have made many of the same comments as the student herself; this means of reflection also enabled the student to ask further questions of her work, to which I was then able to respond, and allowed us to engage in a form of written dialogic feedback (see screenshot below).

The student likewise reported that she found it very useful to return to her chapter in retrospect, and particularly to document the areas she believed required additional work. This is a model of self-reflective feedback that I am now seeking to adopt for future research students.

Dissertation feedback sample

Dr Christopher Wiley
c.m.wiley@city.ac.uk
20.04.12

Review: SACWG seminar, ‘The efficiency and effectiveness of assessment in challenging times’

On Thursday 24 November 2011, the Student Assessment and Classification Working Group (SACWG) hosted a one-day seminar, ‘The efficiency and effectiveness of assessment in challenging times’ at Woburn House, Tavistock Square, London. 

To open the seminar, Dr Marie Stowell (University of Worcester) set out the context for the day in her presentation ‘Efficiency and effectiveness in assessment’. She identified that one of the aims of SACWG is to explore variations in practice across the sector and how they impact differently on students, retention, and learning success, and she observed the importance of placing students at the centre of the process, given the fee structure proposed for 2012 entry coupled with the implications of assessment and feedback for student satisfaction. In light of the new funding model, one particularly pertinent observation she made concerned the cost of teaching in relation to the cost of assessment: the latter is resource-heavy, particularly once one factors in elements such as formative assessment (the quality of which is less assured than that of its summative counterpart), moderation, external examining, reassessment of failed components, and the possibility that students may be over-assessed in the first instance. She also suggested that assessment criteria may not warrant the detailed attention they are typically accorded, as students tend to take the more direct approach of endeavouring by less formal means to uncover exactly what it is that the lecturer is expecting them to produce. These arguments may indicate that both the efficiency and effectiveness of assessment could usefully be enhanced.

The next talk, by Professor Alison Halstead (Aston University), explored how institutions have responded to the challenges of recent years, specifically the White Paper and its implications for students and for Higher Education. She noted that the potential increase in students’ financial burden will inevitably lead to heightened expectations concerning teaching quality, learning, and employability, in which respect assessment and feedback are currently among the most important issues. She warned that student challenges to the regulatory framework for assessment may be on the rise in the future, and identified that it was imperative, in these changing times, to nurture outstanding, innovative teachers and for staff to support student learning and e-learning (including assessment). Calling for the abandonment of the rigid distinction often drawn between ‘teachers’ and ‘researchers’, she suggested that promotions should reward teaching excellence on a par with research. Later sections of her presentation outlined recent initiatives at Aston, for instance, standardizing the use of the Virtual Learning Environment across the institution and introducing learning technologies such as lecture capture and electronic voting systems. Her view was that technology-enabled practice, while it took more time upfront to implement, was worth the investment in terms of teaching quality and learning success.

A structured group discussion and question-and-answer session with the morning’s speakers ensued. One point that emerged strongly was the importance of maintaining a variety of assessments, organized in a carefully considered schedule that takes a holistic overview at programme level. The latter becomes much more difficult in degree courses that incorporate elective modules, though there are both pedagogical and satisfaction-related reasons for offering choice to students and giving them ownership of their programme pathway. Another preoccupation amongst delegates was ensuring that assessments do not become too atomized, but instead relate to one another even beyond the confines of the module with which they are associated; one of the more innovative solutions proposed was the possibility of assessments straddling two or more modules. The need to develop sustainable structures was also discussed (for instance, moving towards group assessment to cope with rising student numbers), as was the importance of considering, as part of change management, what the benefits of effecting the change might be; if these cannot be persuasively articulated to staff and students, the change may not be worth implementing. A final warning concerned being too driven by regulations in designing efficient and effective curricula: it may be more useful in the long term to refer obstacles presented by the regulatory framework upwards so that they can be addressed.

The seminar resumed in the afternoon with a talk from Professor Chris Rust (Oxford Brookes University) on ‘Tensions in assessment practice’, which opened by reiterating the themes of the seminar in noting that current practices are neither efficient nor effective. He observed that students tend to focus on the mark they will obtain from the assessment rather than on the educational content of their studies, and that their approach often becomes increasingly surface-level as they progress through their programme. He defended modes such as formative, self-, and peer assessment as potentially yielding more ‘authentic’ assessment, arguing that graduates should be able to evaluate themselves and their peers as an outcome of their programme, and that making greater use of these options might also free up staff resources for summative assessment. Noting that students do not warm to the notion of being assessed, he suggested that perhaps the word ‘assessment’ should not be used for formative tasks. He further observed that feedback practices might be made more efficient by strengthening the relationship between modules, such that students are encouraged to learn from feedback received in one module and to carry what they have learnt over to others. Lessening the sense of compartmentalization of individual modules would, in his view, lead to more inclusive albeit less flexible structures, in that standardization (for instance, imposing the same word limit on all assessments) does not always result in appropriate assessments.

Then followed a second group workshop session, on the theme of ‘What can institutions do to mitigate tensions?’. After a structured discussion of the issues, each group reported back to the seminar as to the problems that they had identified and the possibilities for efficient or effective solutions. It would be impossible to do justice here to the vast amount of ground covered between the several contributing groups. To cite just a few examples, key tensions that were raised included giving formative assessment a greater purpose (a proposed solution being to tie formative and summative assessments together in more meaningful ways), the problem of ensuring parity when using several examiners for the same assessment task (which may be solved by grading the assessment as pass/fail only), and the evergreen question of quality of feedback versus timeliness of feedback (for which there was some discussion about feedback becoming ‘quick and dirty’). On the question of standardization of process, I took the microphone to report back on the standardized feedback proforma that had been created in liaison with the students and implemented across one programme at City University London (see this post for details), and suggested, with much support from the floor, that students should be more involved in consultation regarding matters of assessment and feedback.

Prior to the close of the seminar a final speaker, Professor Paul Hyland (Bath Spa University), provided some reflections upon the day’s discussion. Noting that assessment was a large topic with which to deal, he categorized the day’s discussion as having crystallized around four main areas: external scrutiny (ranging from students’ parents to formal regulatory bodies); administration and management; the tutors’ perspective on assessment; and the students’ perspective. He argued that discussions of effectiveness and efficiency should always be mindful of the purpose of assessment. In his view, assessment should be concerned with measuring students’ performance and nurturing learning, whereas there exists a danger of (to put it crudely) simply setting assessments in order to get the students to do some work. In this context, a greater level of student involvement and engagement with assessment would therefore be beneficial. He also observed the need to use technology to improve existing practice, for instance, to supplement traditional modes of feedback with videos and screencasts. Finally, he commented upon the importance of tutors having access to students’ feedback on previous assessments in order to understand where they are coming from and to be able to support them in their ongoing studies.

SACWG has kindly made available the presentation slideshows used by the speakers, and the comprehensive notes distilled from the two very productive group discussions (as reported back to the seminar by nominees from the groups), at the following link: http://web.anglia.ac.uk/anet/faculties/alss/sacwg.phtml.
