Am I unduly fixated on my students’ writing skills?

I had an interesting moment this semester. A wonderfully bright and hardworking student came to me, frustrated with their grades on written assignments. I have to admit, I was frustrated too: based on their work ethic, intellectual curiosity, and knowledge of the course materials, they should have been earning higher grades. I knew from our discussions in office hours and their contributions to class discussions that they had a deep and clear understanding of the readings and lecture material. To be clear, the student had been achieving good grades (around, and often above, the average) and was at zero risk of failing. The grades they were achieving were grades to be proud of, grades that I’m sure many of their colleagues longed for. I’ve also no doubt that this student will succeed at UBC and professionally once they leave us. But, in the meantime, they should have been achieving great grades. I wanted to be assigning them higher grades, but out of ‘fairness’ I felt I had to mark ‘what was on the page’, even though I knew it was not representative of their understanding of politics.

So, late in term I had a bit of a petulant teenager moment and thought, “Wait a minute, I don’t ‘have to’ do anything!” Part of my job description (and tenure decision) will be based on teaching innovation: my willingness to think outside the box and find better ways of advancing teaching and student learning. So, I decided to experiment. For the next small written assignment I allowed this student to respond to the question orally. I arranged a time to meet with them in a separate room with me and one of my TAs. After the student was given a few minutes to gather their thoughts, the three of us had a conversation. The TA and I were able to ask follow-up questions, and the student again displayed advanced and often critical insights into the problem at hand. OK, so it wasn’t a totally ‘controlled experiment’ (apologies to my positivist friends), but the grade I gave on the oral presentation was around 10% higher than their average on written assignments of the same ilk. This was a pretty small-stakes assignment, so the impact on their overall grade was negligible. But still…

What was more interesting to me was the debrief afterwards. I wanted their reflections on how it had gone, what the difference was to them in terms of being able to display their knowledge of politics, and why they believed the differences occurred. Without going into all the details, it became clear that the mechanics of writing, the stress of getting the grammar/tense/punctuation correct, were preventing them from displaying what they had learned. Thinking about the purpose of this set of assignments (five reflexive learning logs over the course of the term), I realized that the way I have structured the assignment might actually be masking what I am trying to assess! In the description of the assignment I ask students to give me a ‘snapshot of what they have learned’. Yet, sticking with the snapshot analogy, they are so fixated on the lighting, shade and focus of that image that I might not actually be getting a real picture of what they have learned.

This is perhaps more of a problem for international students for whom English is a second (or third!) language, but I am now wondering if the same goes for my domestic students, for any student for whom the written form is their biggest struggle. Looking at my own courses this year, written work accounts for 70-80% of students’ grades. Whilst writing is an extremely important skill (both academically and in terms of wider professional skills), I’m left wondering if I am over-assessing the written form. My primary focus is for students to learn about politics, not (just) to write about politics.

I am left asking myself: am I favoring students who, for whatever reason, have superior writing skills, whilst penalizing students whose intellectual skills rest more in oral or non-verbal creative forms? If what I really want to assess is their knowledge and critical thinking skills in political science, and their ability to communicate that knowledge, is relying so heavily on written work really appropriate? This is not to say that writing is not an important skill. Indeed, some of the writing-intensive courses in my home department have proven very popular amongst students. And for students who are perhaps grad-school bound or seeking careers in certain occupations, developing these skills is especially essential. However, can there be more room, or at least options, for alternative (i.e. non-written) assessments? One of my wonderful TAs has pointed me towards other university programmes that have recognized, and addressed, both problems with written literacy skills in student cohorts and the bias towards written literacy in assessments; I will investigate further!

Of course, regardless of the potential pedagogical advancements, I have to be realistic. A change to more verbal or more creative modes of assessment does not come without serious repercussions in terms of time. Making the switch would be difficult in most undergraduate-level courses simply in terms of human resources: with over 100 students in my own class, how on earth would I manage conducting, and offering formative feedback on, a series of individual oral assignments? Furthermore, how would this affect students moving forward, in future classes where the written form remains dominant? Should I not just continue training them primarily to communicate political arguments in written form to ensure success in future courses? These are the questions I will keep struggling with as I rework my courses for next year over the summer. I will report back in future blog posts on my progress, but for now I will simply end with a thank-you to my student, who allowed me to run this small experiment (and allowed me to write about it!) and who has really challenged me to reflect much more carefully on my approach to assessments more generally.

Guest Blogger: Prof. Roger Mac Ginty, ‘Who evaluates the student evaluation system?’

It is the end of the semester, and with it come student evaluations of the courses students have undertaken. These are a chance for module convenors to hear what we did right and wrong so that we can change our modules for next time around. They are also a chance for students to give their feedback on the whole module, to vent frustration or even to say thanks.

But there is a major problem: participation levels. In the last evaluations for my MA module there was less than fifty per cent participation, and apparently this is good compared with other modules! Many students are simply too busy or too uninterested to fill in the online form. So we are left with feedback from a minority of the class and no way of knowing whether the feedback we get is representative. Quite simply, the evaluation system in my own institution is not fit for purpose because of the appallingly low participation rate.

Why are student evaluation participation rates so low? The answer, in my experience, lies in the shift towards computer-mediated formats. ‘In the old days’ – a mere five years ago – I had nearly 100 per cent student evaluation participation rates. I used a paper method that was incredibly low-tech, but it worked. In the last class of the semester, I would come into a seminar class (usually groups of 10-12 students) armed with the evaluation sheets. I would explain the purpose, distribute the sheets and leave the room for 10 minutes or so. One student would be tasked with collecting the sheets, placing them in an envelope and delivering them to the secretary. I did not touch the sheets or see them being filled in.

The method took advantage of a captive audience but did not seem coercive. Students were free not to fill in the sheet, or to cover it with doodles and drawings of daisies, but few took that option. Since attendance at seminars was usually very high, and because students usually wanted to be at the last seminar of the semester in case they could glean exam tips, the participation rate was usually 95 per cent or above.

The current system in my own institution uses BlackBoard – the online teaching platform. It is marketed as a one-stop-shop for student interaction with module material. But there is no incentive or disincentive for students to engage with the evaluation process. Email reminders are just one of a large number of automatically generated emails that students receive. Many of these emails invite deletion before they are read.

The institutional rationale for persevering with a system that clearly does not work (in the sense that student participation is woefully low) is that BlackBoard allows the central management of evaluation data. This might be useful for the institution’s audit trail, but if the system is not actually fulfilling its purpose of informing teaching, then it is worth asking serious questions about institutional priorities: technocratic box-ticking or teaching quality?

So what is to be done? I will revert to my tried and trusted paper method and ignore the electronic one. BlackBoard will continue emailing the students. Over fifty per cent of them will ignore it. The institutional box-tickers will remain happy with mediocrity. BlackBoard will continue to be paid for a service that does not work.

Visit Prof. Mac Ginty’s blog at http://www.rogermacginty.com