International Students’ Class Participation: Looking Beneath the “Educational Culture” Surface?

by Lucas Lixinski

In an article I co-authored (with a number of contributors to this blog) in the latest issue of the Legal Education Review, we suggested that one of the biggest issues international students (particularly postgraduate students) face is relearning how to behave in a classroom. Many cultures, we argued, frame the student-instructor relationship as largely one-directional, with the student acting as an empty vessel into which the instructor pours knowledge.

That is certainly the way I was educated in my first law degree, so I know this argument holds true. In a classroom environment where class participation (CP) is not only praised but also expected (and part of the final grade for the semester), it can be quite a shift for a student to go from not speaking at all to being an active part of the learning process for the entire group.

What if, however, there is something else going on, concurrently with educational culture? What if there are other issues that we, as educators, need to be mindful of, that speak not only to managing expectations in the classroom, but also to how we teach more fundamentally?

In Quiet: The Power of Introverts in a World that Can’t Stop Talking, Susan Cain summarizes a lot of the key research around introversion. Most of this research treats introversion as an individual phenomenon, that is, something that affects a person. But a number of these studies also suggest that something happens culturally as well: they highlight that a number of cultures outside the English-speaking west (particularly in Asia) are, as a rule, more introverted.

In my experience as a legal educator in an English-speaking country where extroversion is valued (to the point of being part of how students are assessed in my law school), this means I have to think very carefully about how I expect students to engage with materials and contribute to classroom discussions.

Of course, these ideas apply across the cohort at large, as introversion certainly exists among my Australian students. But it may be that the Asian students (the main cohort of international students in Australia) in my classroom are more introverted on average, and that introverts are disproportionately represented among Asian students who go abroad for postgraduate study.

In addition to introversion being a cultural trait in several Asian countries, Cain also suggests it is a praised one. In other words, to the same extent that I value a student in Australia who speaks in class and makes engaging contributions (typically a more extroverted student), in a number of Asian countries students who are more reflective tend to be more valued. And, since these students are more likely to be successful in their first degrees in their home countries, they are likely to be the ones who get the grades needed to be admitted for postgraduate study internationally.

In other words, it may be that, because of this combination of cultural, educational, and plain biological factors, our international students sit more predominantly on the introverted end of the spectrum than we normally assume. If this logic holds up, then the question is: what can we, as educators, do so that we are not setting our introverted international students up for failure?

Coupled with linguistic obstacles and educational culture, we now have introversion to deal with. If class participation is to be an enriching part of the educational experience of all students, as opposed to a trap into which we let them fall, we may need to rethink our strategies for class participation. I am in no way advocating that we drop the more Socratic approach, but diversifying our approaches may be useful.

Technology allows us to do that by, for instance, giving students the opportunity to post quick reactions to the readings ahead of the class in which they will be discussed. I already do this in many of my courses, and hope to expand the practice now. I use these quick reactions not only as a check on student participation, but also tend to incorporate them in the class discussion (hence my requiring that they be submitted before the class in which the relevant material is discussed). The fact that students have had the opportunity to prepare something in advance, and to reflect on the material, is usually enough for an introvert to be able to speak up in class, if only to present the idea they posted ahead of time.

That is just one alternative, of course. I would love to hear more about what others do in this area, and their thoughts on the role that introversion plays in how class activities are conducted.

Student evaluations and innovative teaching

STUDENT EVALUATIONS OF TEACHER PERFORMANCE MAY HINDER INNOVATIVE ‘GOOD TEACHING’ PRACTICE: SOME OBSERVATIONS FROM RECENT RESEARCH

Julian Laurens

Recent research on student evaluations of teacher performance suggests, I argue, that assessing teacher performance via narrowly constructed student evaluation surveys designed to produce a quantifiable indicator of ‘good teaching’ may in fact have the indirect consequence of hindering innovation in ‘good’ teaching approaches. The May 2017 study’s findings [1] are consistent with a growing body of research [2] showing that university students are simply unable to recognise ‘good teaching’ or ‘what is best for their own learning’. The research identifies that students do not reward ‘good’ or ‘innovative’ teaching in the sense of an improved student evaluation mark for that teacher. Indeed, the evidence suggests the biggest factor informing a positive evaluation from a student is the grade the student is given.

On the other hand, as the studies show, student ignorance of ‘good teaching’ exists alongside evidence demonstrating that quality teaching has a positive impact on a student’s grades and learning outcomes. Moreover, when students are exposed to quality teaching, the improved learning outcomes carry over to subsequent subjects. This effect is consistent with findings from educational psychology: the work of Albert Bandura and other social-cognitive theorists on the development of student self-efficacy is particularly relevant. Worryingly, the research suggests that students who rated their teachers based on the marks they received actually did worse in subsequent courses.

The research thus far has implications for legal education. A specific and immediate issue raised by the findings is that, given student evaluations of teaching are used by university and faculty management when considering an academic’s career progress, there is a real risk that teachers may choose to ‘play it safe’. What incentive is there for a teacher to actually try something new in their teaching, given that they will potentially not receive improved evaluation scores, and may in fact be penalised by students for being ‘innovative’?

A limitation is that much of the research so far comes from non-law disciplines. There has not yet been a systematic look at whether, and how, this particular problem with student evaluations applies to Australian law schools. This should not preclude us from taking note, though. Issues surrounding student evaluations generally have long been recognised in law schools. As Roth said (back in 1984), “[e]veryone agrees that evaluations ought to be done, but few are satisfied that it is now being done properly, or meaningfully” [3]. This remains the wider challenge. Elsewhere on this blog, colleagues Justine Rogers and Carolyn Penfold have also begun examining issues surrounding student evaluations.

In conclusion, for present purposes, the research may support the argument that over-reliance on current narrow, neo-liberal/managerialist-inspired approaches to student evaluations of teachers at Australian law schools may actually hinder innovative ‘good’ teaching practice. Current iterations of such practices can indeed appear as mechanisms of academic control, rather than tools that promote mutually collaborative learning environments. More broadly, the research calls into question claims by universities to be dedicated to ‘good teaching’ and ‘innovation in learning’. There is a need to explore this issue further in the context of Australian legal education, situating it alongside continuing conversations about what good teaching in law actually looks like. I would like to make three brief practical observations at this stage, derived from analysis of the research, that may assist us to address some of the challenges posed by the findings:

  • Firstly, we should always strive to improve our teaching, and we should be uncompromising in that. We should communicate this commitment to our students;
  • Secondly, we need to do a better job of explaining to students our teaching approaches and rationales, and how they relate to the learning experience;
  • Thirdly, we need to give students more opportunities to practise, and to reflect on what they have achieved along the way. Students need to see how they are progressing. They need to be regularly reminded of the value of learning. Royce Sadler’s work on the importance of feedback is worth reflecting on here.

[1] Brian A Jacob et al, ‘Measuring Up: Assessing instructor effectiveness in higher education’ (2017) 17(3) Education Next 68.

[2] See e.g. Michela Braga et al, ‘Evaluating students’ evaluations of professors’ (2014) 41 Economics of Education Review 71; Arthur Poropat, ‘Students don’t know what’s best for their own learning’ (The Conversation, 19 November 2014) (online: http://theconversation.com/students-dont-know-whats-best-for-their-own-learning-33835).

[3] William Roth, ‘Student Evaluation of Law Teaching’ (1984) 17(4) Akron Law Review 609, 610.

The importance of regulatory context: some questions for legal educators

By Justine Rogers

ANU College of Law hosted its annual legal ethics roundtable last week. The theme was ‘reimagining lawyer regulation’.

The regulatory ideas presented raised many worthwhile questions for legal education. I’ve selected two here.

From the talk given by keynote speaker, Professor Leslie Levin, expert in the legal profession, ethical decision-making and lawyer discipline, University of Connecticut:

1) How do we teach law students to be professional when the primary influence over their ethical attitudes, decision-making and compliance will be their particular, divergent work contexts?

Building on other research, Levin’s study of some 1300 lawyers revealed that, of all the determinants of future ethical behaviour, the most decisive is practice context (workplace, type of client, court etc) and the behaviour of those who inhabit it. Far less significant are the things that students need to disclose for admission (such as mental health).

Context shapes the importance lawyers give to the professional bodies outside the workplace, such as the associations, the regulators (or the disciplinary architecture), the courts and the insurers, when deciding which values and rules are worth following. For instance, big firms look within their own firms and otherwise interact with insurers; prosecutors are less concerned with criminal liability (when does that happen?); in-house counsel don’t worry about discipline, whereas sole practitioners do. In other words, different things matter to different practices.

Levin asked: How do we create professional training, sanctions, and incentives in order to motivate lawyers to behave the way we want them to and to teach them what positive norms there are in the profession? How do we regulate lawyers if context (what matters in each context) is the key variable?

This also means we need to think about how to develop professional integrity and core ethical skills among students for contexts that will introduce, emphasise and enforce professional values in very different ways.

From the talk given by Dr Stephen Tang, Lecturer, ANU College of Law:

2) What is the proper role of behavioural ethics (or any applied psychology) in legal ethics courses?

Behavioural (Legal) Ethics is ‘trending’ in legal ethics education. Popularised by books like Thinking, Fast and Slow and Nudge, this scholarship argues that people are fundamentally irrational and use cognitive shortcuts that can lead to suboptimal decision-making. At UNSW Law, we use behavioural legal ethics material in our core course to help students better identify, prepare for and discuss ethics issues. Our material includes this leading Robbennolt and Sternlight piece and this wonderful series of ‘Ethics Unwrapped’ videos from UT Austin, both of which were commended by Tang.

This material is useful, he argued, at least as a set of cautionary tales about how irrationality can contribute to immorality and discrimination in routine, subconscious ways. But his concern with behavioural ethics, and with the behavioural economics or applied psychological approaches from which it derives, is how the information can be used – and is used – to manipulate behaviour, even if in a benignly paternalistic or ‘nudging’ way. Equally concerning, its use is usually guided by simplistic, narrow, short-term and consequentialist (eco-based) ideas of people’s motivations and values.

When regulating lawyers, he argued, we need to contemplate and include professional narratives, organisational climates and cultures – in short, more complexity. We will then have a better chance of success in fostering positive behaviour, because we will understand the bigger psychological dimensions and developments over time, not just quick, aesthetic behavioural changes of the sort produced by the etched image of a fly in a urinal that used to lead to cleaner bathrooms (yes, that is a real example of applied psychology). “If regulation is inescapable, then we must understand people in context. We need to have a sense of our own limits as regulators when deciding what other people decide.”

Similarly, I would say that teaching students behavioural ethics so they can engage more effectively in ethical discussions must involve a commitment to use the information transparently and inclusively: as a way of better understanding and discussing mistakes and fallibilities, of broadening the range of harms considered, and of not ruling out or underestimating other motivations, explanations and solutions.

Big Data Analytics on student surveys

It’s the new “thing” – analytics applied to student responses to courses. And it is really quite scary.

To give an example, I will share my own results from a recently taught course of 22 students, of whom 10 filled out the survey. This is “small data”. It takes about 5-10 minutes (generously) to read and reflect upon the student feedback. Since I am sharing: they generally liked the course, including the guest lectures and excursions, but felt that one topic didn’t need as much time and that my Moodle page wasn’t well organised. All very helpful for the next time I run the course (note to self: start my Moodle page earlier and tweak the class schedule).

The problem is no longer the feedback; it is the “analytics” which now accompany it. The worst part is the “word clouds”. I look at the word cloud for my course and see big words (these generally reflect the feedback, subject to an exception discussed below) and then smaller words and phrases. Now, the smaller ones in a word cloud are obviously meant to be “less” important, but these are really quite concerning, so much so that I initially panicked. They include “disrespectful/rude”, “unapproachable”, “not worthwhile”, “superficial” and “unpleasant”. Bear in mind the word cloud precedes the actual comments in my report. None of these terms (nor their synonyms) were used by ANY of the students (unless an organised Moodle page could count as “unapproachable”). And they are really horrible things to say about someone, especially when there is no basis for these kinds of assertions in the actual feedback received.

The problem here is applying a “big data” tool to (very) small data. It doesn’t work, and it can be actively misleading. One of the word clouds (there are different ones for different “topics”) had the word “organised”. That came up because students were telling me my Moodle page was NOT well organised, but it would be easy to think at a quick glance that this was praise.
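To see why this happens mechanically, here is a minimal sketch (in Python) of the kind of naive word-frequency counting that typically drives a word cloud. The comments are invented for illustration, not my actual survey responses, and real tools are more elaborate, but the basic failure is the same: negation and context are thrown away before the counting starts.

```python
# A minimal sketch of how a naive, frequency-based word cloud can mislead on
# small data. The comments below are hypothetical, not actual survey responses.
from collections import Counter
import re

comments = [
    "The Moodle page was not well organised.",
    "Guest lectures and excursions were great.",
    "Interesting course, but Moodle could be better organised.",
]

# Typical stopword filtering: note that "not" and "well" are discarded,
# so the negation that carried the actual message disappears.
STOPWORDS = {"the", "was", "were", "and", "but", "could", "be", "not", "well"}

# Naive tokenisation: lowercase, keep alphabetic words, drop stopwords.
tokens = [
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in STOPWORDS
]

# Word frequency is what drives word size in a cloud. "organised" comes out
# near the top even though every mention of it was part of a complaint.
print(Counter(tokens).most_common(3))
# [('moodle', 2), ('organised', 2), ('page', 1)]
```

With only ten comments there is simply not enough text for frequencies to mean anything, and the word that ends up biggest in the cloud can be the exact opposite of what the students said.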

So what is the point of this exercise? One imagines it might be useful if you have a course with hundreds of students (so that reading the comments would take an hour, say). But as the fact that the word cloud can be actively misleading (as with “organised” above) demonstrates, you still need to read the comments to understand the context. Further, students often make subtle observations in comments (like the fact that too much time was spent on a particular topic) that are difficult to interpret in a word cloud where phrases are aggregated and sprinkled around the place. So it doesn’t really save time. The comments still need to be read and reflected on.

Big Data tools always sound very exciting. So much buzz! Imagine if we could predict flu epidemics from Google searches (that no longer works, by the way) or predict crime before it happens (lots of jurisdictions are trying this, particularly in the US). But the truth is more like the word cloud on student feedback: inappropriately applied, error prone, poorly understood by those deploying the tool, and thus often unhelpful. Data analytics CAN be a good tool, but in the hands of those who don’t understand its function and limitations it is a bit like a hammer: everything looks like a nail.

Lyria Bennett Moses

Critical thinking in legal education: What? Why? How?

by Lucas Lixinski

An article in today’s The Conversation asks whether universities really do a good job (or any job at all) of teaching critical thinking. While acknowledging that defining critical thinking is incredibly difficult, and that most definitions out there are vague at best, the article then moves to discussing whether universities actually teach critical thinking in the way they promise they do. In what seems like a job market that increasingly pays a premium for applicants who can demonstrate having learned critical thinking skills, there is a clear financial incentive (beyond the obvious intellectual one) to be more self-aware of what critical thinking is in our discipline, and how we actually go about teaching it.


What is critical thinking in law?

I will not by any means attempt to give an all-encompassing definition of critical thinking more broadly, nor critical thinking in the law. Instead, let me just say where I come from, and try and make sense of the landscape from there. The intention here is to start conversations and provoke reactions, rather than lay down the law (pardon the pun) on the matter.


In my opinion, critical thinking has to do with challenging assumed wisdom, and showing students how to do that themselves. In the law, as far as I can see, there are two ways in which I can do that. The first is to focus on the contingencies of the law, whether they are economic, historical, or political. Things like the old adage that “the law is made largely by, and for the benefit of, white, male[, heterosexual, able-bodied] property owners” tend to be a great starting point for unravelling those contingencies. As is the broader historical context of critical moments in the formation of the legal system (like the influence of Protestant ethics in the shaping of the Common Law and its approaches to labor and property, which is different from the way the mostly Catholic Civil Law jurisdictions behaved in Europe at around the same time).


Secondly, another way of critically thinking about the law, in my view, is to look into the background. More specifically, when we think about, say, a contract for the purchase of milk, the foreground body of rules operating is contract law. However, in the background there are a number of other bodies of law that influence what is possible for a contract (even though on paper contract law is still the quintessential guardian of private liberty), such as food security rules, (international) trade law if milk is considered to be a strategic product the production of which is incentivized, the corresponding tax arrangements, etc. Admittedly, it makes teaching a simple case daunting, but I always tell my students that I don’t need to have all the answers to those all the time, nor do they. But they need to be mindful of those knock-on effects of the simplest legal rule (sort of a “butterfly effect”, but in the law, and hopefully not creating any hurricanes anywhere).


How can we “teach” it?

If you haven’t caught on to it yet, let me out myself here. The way I think about critical thinking, and consequently teach it, is influenced by the way I think and write about the law more generally. Which is to say, I have a hard time dissociating critical thinking as an abstract and transferrable skill from critical legal studies, which is a specific way of theorizing and understanding the law. In other words, the way I conceptualize and “do” critical thinking is deeply influenced by my own bias as a critical scholar (well, much of the time anyway), which is framed by my politics, rather than my raw analytical ability. Assuming this neutrality is desirable (and the article on The Conversation referred to above suggests as much), how do I counter my own biases?


Maybe the assumption is that teaching a lefty orthodoxy induces critical thinking, in that it challenges the status quo and the conventional wisdom students come to the table with. So maybe the way to teach critical thinking is to constantly challenge students’ assumptions. Except that those assumptions vary radically within a cohort, and change a lot throughout the degree. Which is to say, it may be safe to assume that a first-year undergraduate class at an elite university is made up of students who, you can assume, espouse certain center- to right-leaning assumptions about the world, inherited from their parents and their upbringing. But, after spending a year being challenged on those assumptions, it may be that an upper-level class needs to be re-presented with the Liberal version of the world. That is, of course, if critical thinking is to be conveyed through “thick descriptions” of reality as a means to understand and apply the law.


Which is to say, maybe the way to teach critical thinking is to make the teaching less about what I think, and more about playing devil’s advocate all the time to what students think. And that is a fair enough proposition in a student-centric model of education, but, if teaching is also meant to be (at least to some extent) research-driven (not to mention students’ insistence on “answers”), isn’t it my job to convey what I think after all? I constantly try to strike a balance between what I think and other opinions out there, and present them all, but I’m not sure I’m always successful.


This discussion brings to mind an old and still current debate about the purpose of legal education. Is legal education about teaching substantive knowledge of the law, or just skills (“thinking like a lawyer”)? I tend to think the latter, but, considering that the legal profession is subject to an increasingly strict regulatory environment, content is also incredibly important. It is also easier to measure and assess. Problem questions offer one way of assessing critical thinking, but often enough (as people marking exams everywhere may attest), answers to problem questions can too easily devolve into knowledge-spewing for significant segments of the student population.


So, what to do?

I honestly don’t know, and invite other people’s views on the matter. As far as I can see, I will keep on trying to challenge students at every turn (and have them challenge me), while remaining mindful that my opinion counts, but is certainly not the only one that does.


In one of my classes (an Introduction to the Legal System-type class, called “Introducing Law and Justice”), I have the privilege of talking to students in one of their early contacts with the legal discipline. And in doing that I present students with a list of questions that they should be asking of materials they read (cases, statutes, scholarly texts) as a means to stimulate critical thinking:

  • Why is the law this way?
  • Who stands to gain?
  • Who loses?
  • What does the law as it is miss? What are its blind spots?
  • What do other people do when faced with similar legal problems, and why? Can we learn lessons there?
  • When was this case decided? What was the broader context around this case?
  • What was the court / law-maker trying to say between the lines?
  • Who is the court / law-maker (white, male, property owner)?
  • What is this legal statement / assertion / rule a reaction to?
  • How does the private affect the public (and vice versa)?


That strikes me as a fairly useful checklist to spark critical thinking, on the models above. But are there other ways of doing that in law teaching? Let me hear your thoughts!


Smart Casual teaching development modules now available

An innovative resource developed specifically for sessional law teachers (but able to support permanent staff as well!) is now online.

The Modules

The first five modules of the Smart Casual suite of online modules, which support sessional colleagues with law-specific teaching strategies, are now available at https://smartlawteacher.org/modules. They are:

  • Reading Law
  • Critical Thinking
  • Legal Problem Solving
  • Student Engagement
  • Feedback

They are supported by an introductory module that highlights four themes that run through the modules and are key to legal education: diversity, internationalisation, digital literacy and gender.

A further four modules will be available in the coming months:

  • Wellness
  • Communication and Collaboration
  • Legal Ethics and Professional Responsibility
  • Indigenous Peoples and the Law

Format

The modules are written in Articulate Storyline with links to video clips and are designed to allow viewers to either work through the slides sequentially or skip to areas of interest. Modules take around an hour to work through, but can be skimmed for relevant content much more quickly.

The modules are designed with a peer-to-peer approach, recognising the experience that sessional colleagues bring to their teaching. They feature a range of short videos of sessional staff themselves discussing the issues in the modules. The use of reflective questions throughout means the modules can also be used as conversation starters for peer discussions.

Background

Smart Casual involves a collaboration of academics from five Australian law schools producing a suite of professional development modules for sessional teachers of law. Half of all teaching in Australian higher education is provided by sessional staff (and possibly more in law schools), so the quality of sessional teaching is critical to student learning, retention and progress. However, national research suggests that support and training for sessional teachers remains inadequate.

In law, this problem is compounded by the need for staff to teach discipline-specific skills and content to students destined for a socially-bounded profession. Yet sessional law teachers are often time-poor full-time practitioners weakly connected to the tertiary sector. The distinct nature of these sessional staff and the discipline-specific learning outcomes required in law demand discipline-specific sessional staff training.

The project was funded by grants from the Australian Government’s Office for Learning and Teaching. The project team is:

  • Mary Heath, Associate Professor, Flinders University (Project Leader);
  • Kate Galloway, Assistant Professor, Faculty of Law, Bond University;
  • Anne Hewitt, Associate Professor, Adelaide Law School, University of Adelaide;
  • Mark Israel, Adjunct Professor of Law and Criminology, Flinders University; Visiting Academic, School of Social Sciences, University of Western Australia;
  • Natalie Skead, Associate Professor, University of Western Australia;
  • Alex Steel, Professor, University of New South Wales.


Can teaching be measured? #2

Carolyn Penfold

Following on from Justine Rogers’ 30 May post, ‘Can Teaching Be Measured?’, I’m adding links to some articles on the topic. I think these questions are becoming increasingly important as universities seek ‘metrics’ by which to measure their workforces. The articles linked below suggest that bias is a concern in teaching evaluations, which for me raises the question of whether those using the metrics will need to ‘correct’ for likely (or even just potential) bias. Check these out and let me know what you think:

https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching

http://blogs.lse.ac.uk/impactofsocialsciences/2016/02/04/student-evaluations-of-teaching-gender-bias/

https://tcf.org/content/commentary/student-evaluations-skewed-women-minority-professors/

http://www.utstat.toronto.edu/reid/sta2201s/gender-teaching.pdf


Innovation for the next generation of legal education: student-led video production


How can legal education be enhanced through student-led video production? How effective is it for class learning? And what are the benefits and challenges that this form of blended learning poses for environmental law, and for legal education more generally?

These questions were explored by Cameron Holley and Amelia Thorpe in a recent UNSW Law Learning & Teaching seminar where they presented the findings from their Learning and Teaching Innovation grant entitled: ‘Updating legal education with blended classrooms: lessons from student-led resource development’.

The premise:

  • Videos are one of the most popular forms of online teaching media (particularly in MOOCs).
  • They facilitate thinking and problem solving:
    – the creative challenge of using moving images and sound to communicate a topic;
    – filmmaking skills, but also research, collaborative working, problem solving, technology and organisational skills.
  • They inspire, engage and foster deep learning:
    – videos as part of student-centred learning activities benefit motivation, opportunities for deeper learning, learner autonomy and communication skills.
  • They offer authentic learning opportunities:
    – a method for students to construct concepts and learn about real-life issues relevant to them.
  • They assist with mastery learning:
    – by providing learning resources for future cohorts.

What did they do?

  • Students were asked to identify a recent development in environmental law that is not already covered in the prescribed textbook.
  • They were required to produce a short video, no longer than 10 minutes, that portrays the subject matter of a recent environmental law development and reflects thoughtfully on its implications for achieving ecologically sustainable development.
  • The stakes were kept low: 5% of the assessment for the trial (it would be more in future).
  • Both outcomes and process were assessed.
  • Students worked in small teams of 4-6.
  • To assist, three iPads were made available, along with guide sheets on a suggested timeline, working in small groups, and media production.
  • The videos were shown to the class as a set late in semester.
  • Roughly 40% of the class already had experience with the technology.

The Results?

Cameron and Amelia showed examples of videos that demonstrated highly engaged, deep learning among the student groups, with a strikingly high level of production value!

The presentation drew on empirical data collected from student interviews and surveys, as well as teacher and peer reflections. It rounded off by critically examining the strengths and weaknesses of student-produced videos as a tool for blended learning, before many of us in attendance decided we wanted to try it out in our own courses!

For those who wish to experiment with similar innovations, view the student data, or track the sources for the above,  their slides are available here: Holley_Thorpe_UNSWLaw_video.

Can Teaching be Measured?


By Justine Rogers

Last week UNSW had its second ‘Great Debate’, introduced last year as a fun, accessible way for the UNSW community to explore a serious and stirring topic. (For a post on last year’s, click here)

Each team comprised a professor-manager, a non-professorial academic, and a student.

The topic: Of Course Teaching Can be Measured (it’s a 5.3!).

I was on the affirmative (which I knew going in would be tough).

Given it was a private event for staff and students, I’ve written this assuming some version of the Chatham House Rule applies.

The affirmative’s arguments were:

  1. Teaching can be measured, albeit imperfectly, and certainly better and more reliably than it is now.
  2. Teaching needs to be measured to enhance the quality, rewards and status of teaching.

The negative’s arguments were:

  1. Teaching cannot be measured, only learning experiences and learning outcomes can. 
  2. Teaching measures are flawed and unreliable.

The negative committed to the empirical questions, whereas I tried (unsuccessfully, in the 4 or so minutes we had) to engage both sides in the wider empirical and normative argument suggested in the affirmative’s second point: whether there is some positive correlation between measurement and motivation, quality and status, and therefore whether a more robust measurement of teaching is worthwhile.

I wish we’d had the format and time to examine this: whether that correlation holds, or whether, using research measures as an example, such measures have too many biases, perverse incentives, and inefficient and/or demoralising effects to be of real value (even if they hold superficial value).

I will share my main arguments here: some of them I am fairly convinced by, many were posed as part of my role on the affirmative side, and some were raised in the spirit of fun and provocation. Above all, I think the topic left several questions that need to be contemplated, many of which I’ve posted below – so please share your thoughts!

