
Hiding the Vegetables? On explaining your pedagogical choices and teaching philosophy to students

I don’t have children, but on my Facebook feed I often see my ‘parent friends’ posting articles about how to get their children to eat more vegetables. Many tips seem to focus on somehow managing to hide or sneak vegetables into foods their kids otherwise love: ‘add carrot juice to fruit smoothies’, ‘blend up spinach and put it in pasta sauce’. In other words, if you want them to eat their brussels sprouts, do everything you can to make sure they do not know they are eating brussels sprouts.

What on earth does this have to do with teaching, you ask?  Bear with me.

The necessity of ‘labelling’ my pedagogy this semester

Several of my upcoming blog posts will focus on a funded research project I did this semester on active learning in large undergraduate classes. The research project did not require any change to my pedagogy or any redesign of my course. I taught my Introduction to Comparative Politics class exactly as I had in four previous years. Other than updating a few case studies, fixing typos in my lecture slides and nixing a few activities that just didn’t seem to work, there were no changes to how I taught the course or to my general teaching philosophy.

There were, however, two significant changes that seem to have come back to haunt me. The first is that I added what I called an ‘Active Learning Journal’, where students had to upload some evidence of their engagement with class debates, activities and simulations (very small-stakes—a snapshot of a completed worksheet or their notes capturing both sides of a class debate would suffice). Secondly, because I was conducting research on my teaching, I was of course ethically obliged to inform my students of the research project, its aims, and so on. My Research Assistant also recruited students to participate in focus groups to help me gain further insight into my teaching (warts and all). The ethics requirement and the methodology thus meant that students were reminded several times this term that I was using ‘Active Learning’.

I actually thought all of this would be a good thing. I thought being more transparent and open about my pedagogy and teaching philosophy would diminish the small amount of resistance to my teaching style that I’ve encountered in the past (which, I would stress, had been minimal up until this term—some students would simply prefer I stand up and talk at them for three hours a week). Oh, how naïve and wrong I was.

The curious case of my teaching evaluations in this one section

While I admit there are things that I can and will change about my use of active learning based on some of the qualitative feedback from my focus groups, other types of feedback from students have left me more generally torn and confused. Having reviewed my formal course evaluations, it appears that labelling things as ‘Active Learning’, and signalling to students that ‘I am doing things differently’, has possibly backfired.

My numerical scores are pretty much unchanged (in fact they have gone up slightly since last year, despite it being a larger class and my having health issues near the end of the semester that led to a delay in getting grades out). However, the comment section was filled with notes about students’ dislike of active learning. There were positive comments too, of course, about my skills as a lecturer and my being available and helpful to students, and some students were positive about my pedagogy—but the comments regarding active learning were roughly 75% negative. This is quite surprising given the very good scores on all of the quantitative elements of the evaluations, which measure students’ assessment of my teaching and of the learning experience as a whole. It also does not match (at all) the incredible evidence of learning that I saw in their reflective writing on active learning.

Now, the reason this is so interesting to me is that I have NEVER had these comments (or at least not so many of them) in the four other sections in which I have taught the course—even though the course and its active learning elements are unchanged. In fact, I taught two other sections of this same course in the same semester (with pretty much exactly the same pedagogy and exercises), and the comments were overwhelmingly positive about the activities that I did. The only substantive difference was that I was not explicit about my active learning pedagogy/philosophy in those other two sections.

Moving forward:  what are the pros and cons of sharing your teaching philosophy with students?

So what to make of all of this? I’m not sure. I’m still processing the whole experience. I had a good group of intelligent students, many (though certainly not all) of whom engaged with everything I threw at them during the term. The reflective writing that they did on some of these activities also generally showed thoughtful engagement with the aims and lessons of those activities. So in terms of student learning, I’m still confident that the course works.

The experience certainly hasn’t shaken my teaching philosophy, but it has made me think about whether it is necessary (or at all beneficial) to share your teaching philosophy with students. Does holding something up as different create resistance from the start? If a set of pedagogical tools is shown by research to be effective for student learning, should we just use them and hope for buy-in from students? Is active learning the carrot juice or brussels sprouts of the pedagogical terrain—good for you, but best kept secretly mixed in with the things more familiar and better liked?

I’ve no clear answers to these questions, but despite my experience this year, I think I will still be explicit at some stage with my students about my approach to teaching. However, perhaps I won’t give it a label or characterize it as ‘other’. I do want my students to reflect on the process of learning and take ownership of their own education, so I still believe that being open about the aims and rationale of your teaching approach is important for students’ intellectual development. Perhaps the answer lies in more subtly inviting students who are interested in and intellectually curious about teaching and learning to have these conversations with you, without belaboring the point, and simply allowing the pedagogies to speak for themselves.


Guest Blogger: Prof. Roger Mac Ginty, ‘Who evaluates the student evaluation system?’

It is the end of semester and with it comes student evaluations of the courses they have undertaken. These are a chance for module convenors to hear what we did right and wrong so that we can change our modules for next time around, and a chance for students to give feedback on the whole module, to vent frustration, or even to say thanks.

But there is a major problem: participation levels. In the last evaluations for my MA module there was less than fifty per cent participation – and this is good compared with other modules, apparently! Many students are simply too busy or uninterested to fill in the online form. So we are left with feedback from a minority of the class and no way of knowing whether the feedback we get is representative. Quite simply, the evaluation system in my own institution is not fit for purpose because of the appallingly low participation rate.

Why are student evaluation participation rates so low? The answer, in my experience, lies in the shift towards computer-mediated formats. ‘In the old days’ – a mere five years ago – I had near 100 per cent student evaluation participation rates. This was a paper method that was incredibly low-tech, but it worked. In the last class of the semester, I would come into a seminar (usually groups of 10–12 students) armed with the evaluation sheets. I would explain the purpose, distribute the sheets and leave the room for 10 minutes or so. One student would be tasked with collecting the sheets, placing them in an envelope and delivering them to the secretary. I did not touch the sheets or see them being filled in.

The method took advantage of a captive audience but did not seem coercive. Students were free not to fill in the sheet, or to cover it with doodles and drawings of daisies – but few took this option. Since attendance at seminars was usually very high, and because students usually wanted to be at the last seminar of the semester in case they could glean exam tips, the participation rate was usually 95 per cent or above.

The current system in my own institution uses BlackBoard – the online teaching platform. It is marketed as a one-stop-shop for student interaction with module material. But there is no incentive or disincentive for students to engage with the evaluation process. Email reminders are just one of a large number of automatically generated emails that students receive. Many of these emails invite deletion before they are read.

The institutional rationale for persevering with a system that clearly does not work (in the sense that student participation is woefully low) is that BlackBoard allows the central management of evaluation data. This might be useful for the institution in terms of its audit trail, but if the system is not actually fulfilling its purpose of informing teaching, then it is worth asking serious questions about institutional priorities: technocratic box-ticking or teaching quality?

So what is to be done? I will revert to my tried and trusted paper method and ignore the electronic one. BlackBoard will continue emailing the students. Over fifty per cent of them will ignore it. The institutional box-tickers will remain happy with mediocrity. BlackBoard will continue to be paid for a service that does not work.

Visit Prof. Mac Ginty’s blog at http://www.rogermacginty.com