Big Data Analytics on student surveys

It’s the new “thing” – analytics applied to student responses to courses. And it is really quite scary.

To give an example, I will share my own results from a recently taught course of 22 students, of whom 10 filled out the survey. This is “small data”. It takes about 5-10 minutes (generously) to read and reflect upon the student feedback. Since I am sharing: they generally liked the course, including guest lectures and excursions, but felt that one topic didn’t need as much time and that my Moodle page wasn’t well organised. All very helpful for the next time I run the course (note to self to start my Moodle page earlier and tweak the class schedule).

The problem is no longer the feedback, it is the “analytics” which now accompany it. The worst is the “word clouds”. I look at the word cloud for my course and see big words (these generally reflect the feedback, subject to an exception discussed below) and then smaller words and phrases. Now the smaller ones in a word cloud are obviously meant to be “less” important but these are really quite concerning, so much so that I initially panicked. They include “disrespectful/rude”, “unapproachable”, “not worthwhile”, “superficial” and “unpleasant”. Bear in mind the word cloud precedes the actual comments in my report. None of these terms (nor their synonyms) were used by ANY of the students (unless an organised Moodle page could count as “unapproachable”). And they are really horrible things to say about someone, especially when there is no basis for these kinds of assertions in the actual feedback received.

The problem here is applying a “big data” tool to (very) small data. It doesn’t work, and it can be actively misleading. One of the word clouds (there are different ones for different “topics”) had the word “organised”. That came up because students were telling me my Moodle page was NOT well organised, but it would be easy to think at a quick glance that this was praise.
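The “organised” trap above comes from how naive word-cloud pipelines work: they lowercase the text, strip punctuation, discard common “stopwords” (a list that typically includes negations like “not”), and size each remaining word by its raw frequency. A minimal sketch of that pipeline, using hypothetical student comments invented for illustration, shows how a word mentioned only negatively can still surface as a prominent, positive-looking term:

```python
from collections import Counter
import re

# Hypothetical comments, invented for illustration only.
comments = [
    "The Moodle page was not well organised.",
    "Course content was great, but the Moodle site is not organised at all.",
]

# A naive word-cloud pipeline: lowercase, strip punctuation, drop
# stopwords (note that "not" is on the list!), and count what's left.
STOPWORDS = {"the", "was", "is", "at", "all", "but", "a", "not", "well"}

counts = Counter(
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in STOPWORDS
)

# "organised" appears twice, so it would be drawn large in the cloud --
# even though both mentions were complaints.
print(counts["organised"])  # 2
```

Because the negation was thrown away with the stopwords, the frequency count is blind to whether “organised” was praise or complaint; only reading the comments recovers that.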

So what is the point of this exercise? One imagines it might be useful for a course with hundreds of students (where reading the comments would take an hour, say). But as the “organised” example above demonstrates, the word cloud can be actively misleading, so you still need to read the comments to understand the context. Further, students often make subtle observations in comments (like the fact that too much time was spent on a particular topic) that are difficult to recover from a word cloud where the phrases are aggregated and sprinkled around the place. So it doesn’t really save time: the comments still need to be read and reflected on.

Big Data tools always sound very exciting. So much buzz! Imagine if we could predict flu epidemics from Google searches (that no longer works, by the way) or predict crime before it happens (lots of jurisdictions are trying this, particularly in the US). But the truth is more like the word cloud on student feedback – inappropriately applied, error prone, poorly understood by those deploying the tool, and thus often unhelpful. Data analytics CAN be a good tool – but in the hands of those who don’t understand its function and limitations it is a bit like a hammer: everything looks like a nail.

Lyria Bennett Moses

Can teaching be measured? #2

Carolyn Penfold

Following on from Justine Rogers’ 30th May post, ‘Can Teaching Be Measured’, I’m adding links to some articles on the topic. I think these questions are becoming increasingly important as universities seek ‘metrics’ by which to measure their workforces. The articles linked below suggest that bias is a concern in teaching evaluations, which for me raises the question of whether those using the metrics will need to ‘correct’ for likely (or even just potential) bias. Check these out and let me know what you think:

https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching

http://blogs.lse.ac.uk/impactofsocialsciences/2016/02/04/student-evaluations-of-teaching-gender-bias/

https://tcf.org/content/commentary/student-evaluations-skewed-women-minority-professors/

http://www.utstat.toronto.edu/reid/sta2201s/gender-teaching.pdf

 

Can Teaching be Measured?


By Justine Rogers

Last week UNSW had its second ‘Great Debate’, introduced last year as a fun, accessible way for the UNSW community to explore a serious and stirring topic. (For a post on last year’s, click here)

Each team comprised a professor-manager, a non-professorial academic, and a student.

The topic: Of Course Teaching Can be Measured (it’s a 5.3!).

I was on the affirmative (which I knew going in would be tough).

Given it was a private event for staff and students, I’ve written this assuming some version of the Chatham House Rule applies.

The affirmative’s arguments were:

  1. Teaching can be measured, albeit imperfectly, and certainly better and more reliably than it is now.
  2. Teaching needs to be measured to enhance the quality, rewards and status of teaching.

The negative’s arguments were:

  1. Teaching cannot be measured, only learning experiences and learning outcomes can. 
  2. Teaching measures are flawed and unreliable.

The negative committed to the empirical questions, whereas I tried (unsuccessfully in the 4 or so mins we had) to engage both sides in the wider empirical and normative argument suggested in affirmative point 2: whether there is some positive correlation between measurement, and motivation, quality and status, and therefore whether a more robust measurement of teaching is worthwhile.

I wish we’d had the format and time to examine this: whether this is true, or whether, using research measures as an example, such measures carry too many biases, perverse incentives, and inefficient and/or demoralising effects to be of real value (even if they hold superficial appeal).

I will share my main arguments here – some of which I am fairly convinced by, many posed as part of my role on the affirmative side, and some raised in the spirit of fun and provocation. Above all, I think the topic raised several questions that still need to be contemplated, many of which I’ve posted below – so please share your thoughts!


What is the SRA getting the UK into?

By Alex Steel

John Flood’s post below highlights the radical nature of the SRA’s proposal. To be fair to LETR, it was primarily a research paper rather than an options paper, and there’s a lot in there that is of interest to Australian academics – including an overview of the current state of UK education and practice, and useful literature reviews. A couple of cautionary notes from LETR are apposite to the proposals to move to competency-based outcomes: Continue reading “What is the SRA getting the UK into?”

One Size Doesn’t Fit All in Legal Education

By John Flood

After digesting the Legal Education and Training Review report (LETR) for three months, England’s largest legal regulator, the Solicitors Regulation Authority (SRA), has delivered its response. This is important because the big legal regulators—SRA, Bar Standards Board, Ilex Professional Services—shape the structure of the qualifying law degree, and they commissioned LETR.

Without revisiting the LETR report in detail,[1] the key point is that it recommended no radical change. The SRA, by contrast, according to its chief executive, Anthony Townsend, is to propose a radical programme of reform.

It has three elements: Continue reading “One Size Doesn’t Fit All in Legal Education”

ALTA Conference, Teaching and Instructor Freedoms

This week’s ALTA conference was an interesting mix of papers. A number of people are talking about the ‘flipped’ classroom. This ‘new’ approach involves getting your students to read or do some other activity before they come to class. It was interesting how new and unusual many people seemed to think this was. There was real concern about the possibility of getting students to do the work, and it became clear that some universities keep their teachers on a very tight rein – for example, not allowing them to award class participation marks, or not allowing a teacher to tell a class that they will not teach that day if the students have not done the reading. There was clearly a sense that many universities have so many rules that it becomes impossible for teachers to teach classes in the way they think fit.

There is always a tension, I guess, between too much and not enough regulation. The thing regulators of any kind (including teachers) need to keep in mind is the extent to which people will perform to expectation. If you regulate people as if they will behave badly they probably will. The challenge is to regulate in a way which suggests to people that they will do well so that they respond to those expectations and rise to meet them. Having some faith in teaching staff to do the right thing may be beneficial from that point of view alone.  The same applies to the classroom where students often perform extremely well when it is clear to them that they are expected to. 

By Prue Vines
