Our colleague Alex Steel recently gave a public lecture as a Fellow of the UNSW Scientia Education Academy. A video of his lecture is now online (a bibliography appears on the right of the page).
There is ongoing interest in what is known as contract cheating – where students pay others to write assessments for them.
The Australian higher education regulatory agency TEQSA has just released a Good Practice Note on the subject. A team of researchers is currently in the middle of a large-scale project to develop strategies for designing assessment to counter the phenomenon: ‘Contract Cheating and Assessment Design: Exploring the Connection’.
Contract cheating is conceptually different from traditional notions of plagiarism in that it is a predominantly commercial, market-driven enterprise. Rather than a student surreptitiously borrowing another student’s work, contract cheating involves arm’s-length payments to strangers who provide assessment answers for profit. Research has uncovered that such organisations can be multinational in scope, sourcing paper writers from Africa, Asia and Europe. In such circumstances it becomes relevant to ask whether such behaviour should be seen as criminal, and what offence may have been committed. While students are easy targets here, finding ways to prosecute the ‘essay mills’ is more complex.
Two articles address this issue. My recent article (Contract cheating: Will students pay for serious criminal consequences?) looks at criminal liability in Australia, with a focus on NSW, suggesting that offences such as fraud, forgery and conspiracy to defraud may be committed. It also considers New Zealand’s specific anti-cheating legislation. The existence of an arguable basis for criminal prosecution of such entities also permits authorities to apply to freeze bank accounts under proceeds of crime legislation. This may be seen as a controversial approach, but it has already been adopted in New Zealand: Commissioner of Police v Li NZHC 479.
In a UK context, an article earlier this year by Michael Draper, Victoria Ibezim and Philip Newton (Are Essay Mills committing fraud?) explored whether essay mills can be prosecuted under the English Fraud Act 2006. They suggest it would be difficult and argue for the creation of a strict liability offence. Although their article does not discuss it, conspiracy to defraud and proceeds of crime offences are also likely to be available.
The conundrum that underlies the analysis in both articles is the fact that the criminality is at base that of a group crime – joint criminal enterprise or conspiracy – but on policy grounds it seems best to have the students dealt with in an educative and restorative manner outside of the criminal courts while throwing the book at the essay mills. Specific legislation banning the operations of the essay mills is the simple fix, but raises further questions of scope and standards of proof. It will be interesting to see if governments in Australia and the UK decide to take that path.
by Lucas Lixinski
In an article I co-authored (with a number of contributors to this blog) in the latest issue of the Legal Education Review, we suggested that one of the biggest issues international students (particularly postgraduate students) face is relearning how to behave in a classroom. Many cultures, we argued, frame the student-instructor relationship as largely one-directional, with the student acting as an empty vessel into which the instructor pours knowledge.
That is certainly the way I was educated in my first law degree, so I know this argument holds true. In a classroom environment where class participation (CP) is not only praised but also expected (and part of the final grade for the semester), it can be quite a shift for a student to go from not speaking at all to being an active part of the learning process for the entire group.
What if, however, there is something else going on, concurrently with educational culture? What if there are other issues that we, as educators, need to be mindful of, that speak not only to managing expectations in the classroom, but also to how we teach more fundamentally?
In Quiet: The Power of Introverts in a World that Can’t Stop Talking, Susan Cain summarizes a lot of the key research around introversion. Most of this science looks at introversion as an individual phenomenon, that is, something that affects a person. But a number of these studies also suggest that there is something that happens culturally. These studies highlight that a number of cultures outside the English-speaking west (particularly in Asia) are, as a rule, more introverted.
In my experience as a legal educator in an English-speaking country where extroversion is valued (to the point of being part of how students are assessed in my law school), this means that I have to think very carefully about how I expect students to engage with materials and contribute to classroom discussions.
Of course, these ideas apply across the cohort at large, as introversion does exist among my Australian students. But it may be that Asian students (the main cohort of international students in Australia) in my classroom are more introverted on average, and that introverts are disproportionately represented among Asian students who go abroad for postgraduate study.
In addition to introversion being a cultural trait in several Asian countries, Cain also suggests it is a praised one. In other words, to the same extent I value a student in Australia who speaks in class and makes engaging contributions (typically a more extroverted student), in a number of Asian countries students who are more reflective tend to be more valued. And, since these students will more likely be more successful in their first degrees in their home countries, they are likely to be the ones who get the grades needed to be admitted for postgraduate study internationally.
In other words, it may be that, because of this combination of cultural, educational, and plain biological factors, our international students are likely to be more predominantly on the introverted end of the spectrum than we normally assume. If this logic holds up, then the question is: what can we, as educators, do so we are not setting up our introverted international students for failure?
Coupled with linguistic obstacles and educational culture, we now have introversion to deal with. If class participation is to be an enriching part of the educational experience of all students, as opposed to a trap into which we let them fall, we may need to rethink our strategies for class participation. I am in no way advocating that we drop the more Socratic approach, but it may be that diversifying our approaches is useful.
Technology allows us to do that, by, for instance, giving students the opportunity to post quick reactions to the readings ahead of the class in which they will be discussed. I often do that in many of my courses, and hope to amplify the practice now. I use these quick reactions not only as a check on student participation, but also tend to incorporate them in the discussions of the class (hence my requiring they be submitted before the class in which the relevant material is being discussed). The fact that students have had the opportunity to prepare something in advance, and to reflect on the material, is usually enough for an introvert to be able to speak up in class, if only to present the idea they posted ahead of time.
That is just one alternative, of course. I would love to hear more about what others do in this area, and their thoughts on the role that introversion plays in how class activities are conducted.
STUDENT EVALUATIONS OF TEACHER PERFORMANCE MAY HINDER INNOVATIVE ‘GOOD TEACHING’ PRACTICE: SOME OBSERVATIONS FROM RECENT RESEARCH
Recent research on student evaluations of teacher performance suggests, I argue, that assessing teacher performance via narrowly constructed student evaluation surveys designed to produce a quantifiable indicator of ‘good teaching’ may in fact have the indirect consequence of hindering innovation in ‘good’ teaching approaches. The May 2017 study’s findings are consistent with a growing body of research that shows university students are simply unable to recognise ‘good teaching’ or ‘what is best for their own learning’. The research identifies that students do not reward ‘good’ or ‘innovative’ teaching in the sense of an improved student evaluation mark for that teacher. Indeed, the evidence suggests the biggest factor informing a positive evaluation from a student is the grade the student is given.
On the other hand, as the studies show, student ignorance of ‘good teaching’ exists alongside evidence demonstrating that quality teaching has a positive impact on a student’s grades and learning outcomes. Moreover, when students are exposed to quality teaching, the improved learning outcomes carry over to subsequent subjects. This effect is consistent with findings from educational psychology: the work of Albert Bandura and other social-cognitive theorists on the development of student self-efficacy is particularly relevant. Worryingly, the research suggests that students who rated their teachers based on the marks they received actually did worse in subsequent courses.
The research thus far raises implications for legal education. A specific and immediate issue raised by the findings is that given student evaluations of teaching are used by University and Faculty management when considering an academic’s career progress, there is a real risk that teachers may choose to ‘play it safe’. What incentive is there for a teacher to actually try something new in their teaching given that they will potentially not receive improved evaluation scores, and in fact may be penalised by students for being ‘innovative’?
A limitation is that much of the research so far is from non-law disciplines. There is yet to be a systematic look at how this particular problem with student evaluations applies (if at all) to an Australian law school. This should not preclude us from taking note though. Issues surrounding student evaluations generally have long been recognised in law schools. As Roth said (back in 1984), “[e]veryone agrees that evaluations ought to be done, but few are satisfied that it is now being done properly, or meaningfully”. This remains the wider challenge. Elsewhere on this blog, colleagues Justine Rogers and Carolyn Penfold have also begun examining issues surrounding student evaluations.
In conclusion, for present purposes, the research may support the argument that over-reliance on current narrow neo-liberal/managerialist-inspired approaches to student evaluations of teachers at law schools in Australia may actually hinder innovative ‘good’ teaching practice. Current iterations of such practices can indeed appear as mechanisms of academic control, rather than tools that promote mutually collaborative learning environments. The research more broadly calls into question claims by universities to be dedicated to ‘good teaching’ and ‘innovation in learning’. There is a need to explore this issue further in the context of Australian legal education, situating it alongside continuing conversations around what actual good teaching looks like in law, for example. I would like to make three brief practical observations at this stage, derived from analysis of the research, that may assist us to address some of the negative challenges posed by the findings:
- Firstly, we should always strive to improve our teaching, and we should be uncompromising in that. We should communicate this commitment to our students;
- Secondly, we need to do a better job of explaining to students our teaching approach and rationales and how something relates to the learning experience;
- Thirdly, we need to give students more opportunities to practice and reflect on what they have achieved along the way. Students need to see how they are progressing. They need to be regularly reminded of the value of learning. Royce Sadler’s work on the importance of feedback is worth reflecting on here.
 Brian A Jacob et al, ‘Measuring Up: Assessing instructor effectiveness in higher education’ (2017) 17(3) Education Next 68.
 See e.g. Michela Bragga et al, ‘Evaluating students’ evaluations of professors’ (2014) 41 Economics of Education Review 71; Arthur Poropat, ‘Students don’t know what’s best for their own learning’ (The Conversation, November 19, 2014) (online: http://theconversation.com/students-dont-know-whats-best-for-their-own-learning-33835 )
 William Roth, ‘Student Evaluation of Law Teaching’ (1984) 17(4) Akron Law Review 609, 610.
It’s the new “thing” – analytics applied to student responses to courses. And it is really quite scary.
To give an example, I will share my own results from a recently taught course of 22 students, of whom 10 filled out the survey. This is “small data”. It takes about 5-10 minutes (generously) to read and reflect upon the student feedback. Since I am sharing: they generally liked the course, including guest lectures and excursions, but felt that one topic didn’t need as much time and that my Moodle page wasn’t well organised. All very helpful for the next time I run the course (note to self to start my Moodle page earlier and tweak the class schedule).
The problem is no longer the feedback, it is the “analytics” which now accompany it. The worst is the “word clouds”. I look at the word cloud for my course and see big words (these generally reflect the feedback, subject to an exception discussed below) and then smaller words and phrases. Now the smaller ones in a word cloud are obviously meant to be “less” important but these are really quite concerning, so much so that I initially panicked. They include “disrespectful/rude”, “unapproachable”, “not worthwhile”, “superficial” and “unpleasant”. Bear in mind the word cloud precedes the actual comments in my report. None of these terms (nor their synonyms) were used by ANY of the students (unless an organised Moodle page could count as “unapproachable”). And they are really horrible things to say about someone, especially when there is no basis for these kinds of assertions in the actual feedback received.
The problem here is applying a “big data” tool to (very) small data. It doesn’t work, and it can be actively misleading. One of the word clouds (there are different ones for different “topics”) had the word “organised”. That came up because students were telling me my Moodle page was NOT well organised, but it would be easy to think at a quick glance that this was praise.
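To see how the “organised” trap can arise, here is a minimal Python sketch of the kind of naive word-frequency aggregation a word cloud typically relies on. The comments and the stop-word list are invented for illustration, not taken from any actual survey tool:

```python
from collections import Counter
import re

# Hypothetical sample of "small data" feedback, standing in for
# the real comments described above.
comments = [
    "The Moodle page was not well organised",
    "Guest lectures and excursions were great",
    "Too much time was spent on one topic",
]

# A naive word-cloud pipeline: lowercase, split into words, drop
# common stop words, count frequencies. "not" is a typical stop
# word, so the negation in "not well organised" is discarded.
stop_words = {"the", "was", "were", "and", "on", "not", "well", "too", "much"}
counts = Counter(
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in stop_words
)

# "organised" survives as a bare, positive-looking term, with no
# trace of the "not" that produced it.
print(counts.most_common(5))
```

With only ten real comments, nothing in the output distinguishes praise from complaint; the context lives entirely in the sentences the aggregation threw away.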
So what is the point of this exercise? One imagines it might be useful if you have a course with hundreds of students (so that reading the comments would take an hour, say). But, as the fact that the word cloud can be actively misleading (as with “organised” above) demonstrates, you still need to read the comments to understand the context. Further, students often make subtle observations in comments (like the fact that too much time was spent on a particular topic) that are difficult to interpret in a word cloud where the phrases are aggregated and sprinkled around the place. So it doesn’t really save time. The comments still need to be read and reflected on.
Big Data tools always sound very exciting. So much buzz! Imagine if we could predict flu epidemics from Google searches (that no longer works, by the way) or predict crime before it happens (lots of jurisdictions are trying this, particularly in the US). But the truth is more like the word cloud on student feedback – inappropriately applied, error prone, poorly understood by those deploying the tool, and thus often unhelpful. Data analytics CAN be a good tool – but in the hands of those who don’t understand its function and limitations it is a bit like a hammer: everything looks like a nail.
Lyria Bennett Moses
by Lucas Lixinski
An article in today’s The Conversation asks whether universities really do a good job (or any job at all) of teaching critical thinking. While acknowledging that defining critical thinking is incredibly difficult, and that most definitions out there are vague at best, the article then moves to discussing whether universities actually teach critical thinking in the way they promise they do. In what seems like a job market that increasingly pays a premium for applicants who can demonstrate having learned critical thinking skills, there is a clear financial incentive (beyond the obvious intellectual one) to be more self-aware of what critical thinking is in our discipline, and how we actually go about teaching it.
What is critical thinking in law?
I will not by any means attempt to give an all-encompassing definition of critical thinking more broadly, nor critical thinking in the law. Instead, let me just say where I come from, and try and make sense of the landscape from there. The intention here is to start conversations and provoke reactions, rather than lay down the law (pardon the pun) on the matter.
In my opinion, critical thinking has to do with challenging assumed wisdom, and showing students how to do that themselves. In the law, as far as I can see, there are two ways in which I can do that. The first one is to focus on the contingencies of the law, whether they are economic, historical, or political. Things like the old adage that “the law is made largely by, and for the benefit of, white, male[, heterosexual, able-bodied] property owners” tend to be a great starting point for unravelling those contingencies. As is the broader historical context of critical moments in the formation of the legal system (like the influence of Protestant ethics in the shaping of the Common Law and its approaches to labor and property, which is different from the way the mostly Catholic Civil Law jurisdictions behaved in Europe at around the same time).
Secondly, another way of critically thinking about the law, in my view, is to look into the background. More specifically, when we think about, say, a contract for the purchase of milk, the foreground body of rules operating is contract law. However, in the background there are a number of other bodies of law that influence what is possible for a contract (even though on paper contract law is still the quintessential guardian of private liberty), such as food security rules, (international) trade law if milk is considered to be a strategic product the production of which is incentivized, the corresponding tax arrangements, etc. Admittedly, it makes teaching a simple case daunting, but I always tell my students that I don’t need to have all the answers to those all the time, nor do they. But they need to be mindful of those knock-on effects of the simplest legal rule (sort of a “butterfly effect”, but in the law, and hopefully not creating any hurricanes anywhere).
How can we “teach” it?
If you haven’t caught on to it yet, let me out myself here. The way I think about critical thinking, and consequently teach it, is influenced by the way I think and write about the law more generally. Which is to say, I have a hard time dissociating critical thinking as an abstract and transferrable skill from critical legal studies, which is a specific way of theorizing and understanding the law. In other words, the way I conceptualize and “do” critical thinking is deeply influenced by my own bias as a critical scholar (well, much of the time anyway), which is framed by my politics, rather than my raw analytical ability. Assuming this neutrality is desirable (and the article on The Conversation referred to above suggests as much), how do I counter my own biases?
Maybe the assumption is that teaching a lefty orthodoxy induces critical thinking, in that it challenges the status quo and the conventional wisdom students come to the table with. So, maybe the way to teach critical thinking is to constantly challenge students’ assumptions. Except that those assumptions vary radically within a cohort, and change a lot throughout the degree. Which is to say, it may be safe to assume that a first-year undergraduate class at an elite university is made up of students who espouse certain center- to right-leaning assumptions about the world, inherited from their parents and their upbringing. But, after spending a year being challenged on those assumptions, it may be that an upper level class needs to be re-presented with the Liberal version of the world. That is, of course, if critical thinking is to be conveyed through “thick descriptions” of reality as a means to understand and apply the law.
Which is to say, maybe the way to teach critical thinking is to make the teaching less about what I think, and more about playing devil’s advocate all the time to what students think. And that is a fair enough proposition in a student-centric model of education, but, if teaching is also meant to be (at least to some extent) research-driven (not to mention students’ insistence on “answers”), isn’t it my job to convey what I think after all? I constantly try to strike a balance between what I think and other opinions out there, and present them all, but I’m not sure I’m always successful.
This discussion brings to mind an old and still current debate about the purpose of legal education. Is legal education about teaching substantive knowledge of the law, or just skills (“thinking like a lawyer”)? I tend to think the latter, but, considering that the legal profession is subject to an increasingly strict regulatory environment, content is also incredibly important. It is also easier to measure and assess. Problem questions have a way of assessing critical thinking, but often enough (as people marking exams everywhere may attest), answers to problem questions can too easily devolve into knowledge-spewing for significant segments of the student population.
So, what to do?
I honestly don’t know, and invite other people’s views on the matter. As far as I can see, I will keep on trying to challenge students at every turn (and have them challenge me), while being mindful that my opinion counts, but is certainly not the only one that does.
In one of my classes (an Introduction to the Legal System-type class, called “Introducing Law and Justice”), I have the privilege of talking to students in one of their early contacts with the legal discipline. And in doing that I present students with a list of questions that they should be asking of materials they read (cases, statutes, scholarly texts) as a means to stimulate critical thinking:
– Why is the law this way?
– Who stands to gain?
– Who loses?
– What does the law as is miss? What are its blind spots?
– What do other people do faced with similar legal problems, and why? Can we learn lessons there?
– When was this case decided? What was the broader context around this case?
– What was the court / law-maker trying to say between the lines?
– Who is the court / law-maker (white, male, property owner)?
– What is this legal statement / assertion / rule a reaction to?
– How does the private affect the public (and vice-versa)?
That strikes me as a fairly useful checklist to spark critical thinking, on the models above. But are there other ways of doing that in law teaching? Let me hear your thoughts!
Last week UNSW had its second ‘Great Debate’, introduced last year as a fun, accessible way for the UNSW community to explore a serious and stirring topic. (For a post on last year’s debate, click here.)
Each team: professor-manager, non-prof academic, and student.
The topic: Of Course Teaching Can be Measured (it’s a 5.3!).
I was on the affirmative (which I knew going in would be tough).
Given it was a private event for staff and students, I’ve written this assuming some version of the Chatham House Rule applies.
The affirmative’s arguments were:
- Teaching can be measured, albeit imperfectly, and certainly better and more reliably than it is now.
- Teaching needs to be measured to enhance the quality, rewards and status of teaching.
The negative’s arguments were:
- Teaching cannot be measured, only learning experiences and learning outcomes can.
- Teaching measures are flawed and unreliable.
The negative committed to the empirical questions, whereas I tried (unsuccessfully, in the four or so minutes we had) to engage both sides in the wider empirical and normative argument suggested in the affirmative’s second point: whether there is some positive correlation between measurement and the motivation, quality and status of teaching, and therefore whether a more robust measurement of teaching is worthwhile.
I wish we’d had the format and time to examine whether this is true, or whether, using research measures as an example, such measures have too many biases, perverse incentives, and inefficient and/or demoralising effects to be of real value (even if they hold superficial value).
I will share my main arguments here, some of which I am fairly convinced by, many posed as part of my role on the affirmative side, and some raised in the spirit of fun and provocation. Above all, I think the topic raised several questions that need to be contemplated, many of which I’ve posted below – so please share your thoughts!
SSRN has recently posted a great ethnographic study of young US lawyers in terms of what they actually do in the office.
Sinsheimer, Ann and Herring, David J., Lawyers at Work: A Study of the Reading, Writing, and Communication Practices of Legal Professionals (March 14, 2016). Legal Writing Journal, Vol. 21, Forthcoming; U. of Pittsburgh Legal Studies Research Paper No. 2016-11.
It includes great evidence of lawyers dealing with the following (SSRN pinpoints):
- Using close reading and skimming strategies (pp13ff)
- Strategic reading (p23, 30ff)
- Reading from computer screens (p26) but using printed materials by preference (p24)
- Huge use of email for written communication (p45ff)
- Use of precedents (p48)
- Reviewing and revising constantly (p49ff), being meticulous (p50)
- Research/writing nexus (p51ff)
- Interpersonal skills and stress in the office (p58ff)
- Time-management (p60ff)
- Cross cultural communication (p64)
- Developing professional identities (p66ff)
- Suggestions for curriculum change (p24, 71)
It’s a wonderful collection of vignettes and data that help to flesh out what we are often trying to impress on students: the real skills they need in preparing for legal practice environments.
A recent New York Times opinion piece goes slightly against the learning and teaching orthodoxy by arguing in FAVOUR of lectures – http://mobile.nytimes.com/2015/10/18/opinion/sunday/lecture-me-really.html. It looks at some of the advantages of a lecture format, particularly in the humanities, including teaching students how to listen critically, take notes that summarise ideas and arguments, and learn to understand before commenting/opining.
In law, there is some information that is best communicated through lecture elements, but my classes run in a more “Socratic” or questioning style (having been partially trained in the US), with problem-solving, group discussions and class debate. So I don’t do a lot of pure “lecture” although there is some content that I do present in this way.
But it got me thinking. I would probably be frustrated if I wanted to learn something in an area with which I wasn’t familiar (say at a conference or seminar) and the speaker adopted an “active learning” approach. Sometimes all you want is to hear someone knowledgeable deliver an engaging, interesting and informative “lecture”. And when listening to one, I am rarely “passive”; I am usually constantly questioning the speaker (initially in my head and eventually by raising my hand in question time). Of course, one difference is that I already know how to “do” legal reasoning, so that is not what I am learning. But the same could be said of later-year students too.
So, my question is this: When are lectures the best way to teach?
Lyria Bennett Moses