Wednesday, September 12, 2012

The Dumbassification of Academia

"We don't yet know what the body can do."
Baruch Spinoza

It's hard to describe the disappointment I felt while reading Adam Briggle's "Technology Changes 'Cheating'", after Steve Fuller, whom I count among the most formative figures in my intellectual development, had described it as "one of the smartest things on academic cheating in a long time" on Twitter yesterday. But, then again, it's always hard to interpret a tweet. I guess Steve might have meant it sarcastically. Perhaps Adam Briggle himself is just being ironic; perhaps the piece is a pastiche of empty-headed techno-hype carping its diem. Maybe Steve just meant it as a comment on the generally poor quality of discussion in the debate. But I think we are justified in taking it straight, at least for the sake of argument. So what, you may be wondering, do I think Briggle gets so wrong?

In a word, everything. He begins by observing, uncontroversially, that "students are now using cell phones and other devices to text answers to one another or look them up online during tests" and points out, quite rightly, that "these sorts of practices are widely condemned as cheating". (The Harvard cheating scandal, which Farhad Manjoo, in a piece not quite, but almost, as ill-conceived, denied even exists, is of course somewhere in the background here.) But Briggle then attempts to challenge this conventional wisdom (that if you use your cell phone in a closed-book exam you are cheating) by suggesting that similar rules don't apply in the workplace. "Imagine a doctor or a journalist punished for using their smart phone to double-check some information," he balks.

Well, okay. Let's imagine a doctor who is unable to even begin to diagnose your condition without first Googling your symptoms. Or a journalist who can't engage in a lively panel discussion without the help of his smartphone. And do note that journalists do get fired for failing to credit their rightful sources, and doctors are in fact expected to do some things, like take your blood pressure or remove your brain tumor, relying heavily on skills they carry around with them in their bodies, and which they were presumably asked to demonstrate to their examiners before they were granted their degrees and licenses.

But to point this out, of course, only exposes how hopelessly backward I am. This is the pre-emptive point of Briggle's familiar techno-fantastic boilerplate, which now follows. It might well get him invited to a TED conference one day, but we really must stop letting this sort of thing serve as a universal game changer:

The human used to be that creature whose brain occupies the roughly 1,600 cubic centimeters inside of the skull. Call that humanity 1.0. But now we have humanity 2.0. Cognition extends beyond the borders of the skull.

Humans are nebulous and dispersed: avatars, Facebook profiles, YouTube accounts and Google docs. These cloud selves have the entire history of human knowledge available to them instantaneously and ubiquitously. Soon we will be wearing the Internet, and then it will be implanted in our bodies. We are building a world for humans 2.0 while our education system is still training humans 1.0.

Welcome to the twenty-first century! The truth is, and has always been, that humans are creatures whose brains occupy 1,600 cubic centimeters of a body that also contains muscles, bones, nerves, skin, a heart, guts .... If we are building a world at all, we are building it for disembodied brains, not "humans 2.0".

That's the whole problem with Briggle's enthusiasm for the new technologies and social practices. It's not a new kind of "human" environment, it's simply an inhuman one. We have to learn how to face this environment resolutely, not simply be assimilated by it. Human bodies don't improve by way of implants, but by training and discipline. We don't need to "upgrade" them, we just need to learn how to use them, as Spinoza noted in his Ethics.

Last year, in Academically Adrift, Richard Arum and Josipa Roksa made a good case for something many of us had long suspected. Social study practices (and Humans 2.0 are hyper-social, if they're anything) don't make students smarter. They may, of course, improve students' ability to work in groups, and therefore should not be entirely done away with. But what actually improves critical thinking and analytical reasoning are the old-school, individual arts of reading and writing. And the only way to test whether students can do these things is to get them to close their books (after having read them carefully), unplug, and tell us what they think, on their own. We've never trusted a student who couldn't talk about a book without having it open on their desk. Why, then, would we grant that their constant gestures at their touch screens count as a demonstration of intelligence? Note that Briggle is not just for socially mediated examination: he is against old-school conceptions of cheating. He wants to do away with traditional values, not just augment them with new technologies.

This, like I say, displays a desire to do away with the body itself: to jack the (disembodied) brain ("wetware") directly into the cloud of human knowledge. But it doesn't take long to realize how utterly empty Briggle's image of "cloud selves" who "have the entire history of human knowledge available to them instantaneously and ubiquitously" ultimately is. After all, our stone-age forebears had "the entire world of nature's bounty", you know, "at their fingertips". The problem was just sorting the dangerous things in the environment from the useful ones, and getting to where the good stuff was, while also finding places to weather a storm. That problem remains even if everything that's ever been written can be found by Google (something we're far from even today). Let's keep in mind that the vast majority of the "cognitive" capacity of the "collective mind" (a.k.a. the Internet) is devoted to circulating spam and porn. We can "instantaneously and ubiquitously" jump into that sewer. But that's only where the fun begins.

Briggle closes the piece with something that looks like an insight. "The ultimate problem is not with students but our assessment regime that values 'being right' over 'being intelligent'. This is because it is far easier to count 'right' answers than it is to judge intelligent character." This has been known to progressive educators for a long, long time, of course. And Briggle doesn't realize that all the technologies he's so hyped about letting students use presume precisely that the point is finding the right answer, not being able to think. That's all we can test if we don't count their use as cheating.

His protestations to the contrary, Briggle really does value the right answer over the exercise of real intelligence. How else could he suggest that "it doesn't matter how you get there"? Yes, life's the destination now, not the journey. In fact, we've already arrived. There's nothing more to learn except what "the entire history of human knowledge" (which is so readily available to everyone!) has already brought us.

The brain doesn’t obey the boundaries of the skull, so why do students need to cram knowledge into their heads? All they need in their local wetware are the instructions for accessing the extended mind. This is not cheating, it is simply the reality of being plugged into the hive mind. Indeed, why waste valuable mental space on information stored in the hive?

Right! Of course! Why hadn't we thought of that sooner? We should have abandoned painting after we invented photography. And stopped teaching anyone to play the violin after we had recorded the first thousand concertos! And I guess all these doping scandals in the Tour de France are also going to be a thing of the past—when it becomes a motorcycle race? What absolute nonsense.

Someone has to tell Professor Briggle (and it may as well be me) that nobody is asking anyone to cram knowledge into their heads. Students are being asked to train their minds, which, because mind and body are part of the same being, is to say they are being asked to train their bodies: to be able to hold their own in an ongoing conversation ("of mankind", if you will) with other knowledgeable peers, and to read and write coherent prose paragraphs. That's a distinctly "academic" virtue, to be sure. But surely there is room for academics in the new era?

Ben Lerner once proposed that poetry is part of the "struggle against what Chuck D has called the ‘dumbassification’ of American culture, against the deadening of intellects upon which our empire depends". I don't think it is too much to ask our philosophers to help out, rather than promoting the spread of these deadening social media as some kind of glorious new stage of human evolution. Philosophers could, at the very least, resist, rather than celebrate, the dumbassification of academia.

(Continues here.)

10 comments:

Anonymous said...

Even before this newfangled internet!

"Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. When we enquire into any subject, the first thing we have to do is to know what books have treated of it. This leads us to look at catalogues, and at the backs of books in libraries."
— Samuel Johnson (Boswell's Life of Johnson)

JBritt said...

I think you've misread Briggle. His point, as I read him, is that asking students to cram facts into their heads that can then be 'tested' as right or wrong is lazy education. What we should be doing is teaching students how to think -- which has very little to do with whether the 'facts' they use in thinking are stored in their bodies or in cyberspace. In other words, we are distracted by all the new ways students now have at their disposal to 'cheat'. Instead, it is we (educators) who are cheating them by requiring only the memorization of 'facts'.

Andrew Gelman said...

A Tour de France on motorcycles would be cool--but only with a strict no-additives-in-the-gasoline rule.

Jonathan said...

Right, a Shakespearean actor could simply call up the soliloquy on a smart phone and hold it up as though it were Yorick's skull. You wouldn't need to cram that poetry into the poor actor's brain.

Adam said...

I am not “hyped” about letting students use these technologies. I am dismayed by an education system where they feel tempted to do so and are rewarded for doing so well (that is, not getting caught).

My point is that education ought to be structured such that machine-thinking (the stuff the hive does so well) is not its purpose. The stuff about humanity 2.0 is tongue-in-cheek. You miss the irony here, so I’ll spell it out: our education system is increasingly set up to reward this dumbass (your words, and I agree) image of what it means to be an excellent human being. Everyone is worried about students who earn this reward illicitly by ‘cheating.’ I am concerned about the reward itself and what it says about education.

The analogy to the Tour de France is cute but you misuse it. Note what I say in full (you only quote the last part, presumably so you have an easier straw man to wrestle): “If the goal of education is simply to get through the maze, then it does not matter how one gets there.” But my point is that of course the goal of education should not just be about getting through a maze. Similarly, the goal (purpose, telos) of the Tour de France is not just to get up the hill and across the line as fast as possible. The purpose of education and the Tour is to develop, display, and honor human excellence – and for this it most definitely matters how one gets there.

The disanalogy with the Tour de France is that in that case no one wants to see a motorcycle race (it is clear that that is not the point). But in education (and this is where the analogy breaks down) we have a dysfunctional system: we have turned education into something like a motorcycle race but insist on not letting students use motorcycles. To make it clearer by dropping the analogy: we lazy educators have adopted a machine-like assessment regime that rewards machine-like thinking but we’ll be damned if students go about using machines!

Thomas said...

I don't want to spend a great deal of time quibbling about whether I misread your post or you miswrote it, but you left the distinct impression that you think we're living in a new "2.0" reality that the educational system better get hip to.

It did not seem like you think Humanity 2.0 is just so much hype, or that old-school education focused on developing individual competences is worth defending.

If you think closed-book written examinations, or take-home essay exams where collaboration is not allowed, are still perfectly good ways of testing student abilities, and that the abilities these forms of examination test are worth having, then I can just inform you that you've not expressed yourself very clearly.

Also, by my count, on your interpretation, roughly half the post is tongue-in-cheek.

In any case, though I don't want to endorse "machine-like assessment regimes" as anything but a necessary evil given limited resources and mass enrollment, I think your argument is still very flawed. There is no contradiction between machine-grading a multiple-choice exam about what, say, Hamlet did and did not say, and when he did or did not say it (in order to test whether or not the students know the play), and expressly forbidding students to use machines when taking that exam (which would simply defeat the whole point of examining them).

We want to know "what's in their heads", but only because we are making some (quite plausible) assumptions about what they might learn from getting it in there.

Thomas said...

@JBritt: As Jonathan also rightly suggests, I think we undervalue the accomplishment of "memorization of facts". If someone can account for all the characters, scenes, and conflicts in Hamlet, that shows something. If they can also recite (from memory) some of the key soliloquies, that also counts for something. It doesn't prove that they can "think" about Hamlet's motives and doubts, but it is unlikely that they can "regurgitate" these things without having learned something about the play.

It's no easy matter to "store a fact in a body". When someone succeeds, it's as impressive as being able to play the violin. Both admit of differences in the degree of mastery, of course.

Steve Fuller said...

Let me intervene because I was the one who tweeted Adam’s article, which appeared originally in the North Texas Daily News, remarking on the smartness of the argument.

I meant what I said because I took Adam to be providing a transhumanist reductio of the increasing identification of academic authority with the catching of cheaters, plagiarists, and others who somehow get around the rules of the game. Perhaps because academics control so little of the political economy in which they operate these days, regression to an old-school sense of discipline can appear attractive, especially given the high-tech fraud detection devices at the academic’s disposal.

Against the backdrop of those assumptions, Adam, with a somewhat TED-ly tongue in cheek, is arguing that academics should go with the grain of student usage and treat the high tech as an enhancement to student performance. After all, not all students are equally adept at using search engines to plagiarise just the right text that gives them an optimal score on an exam question. The classic pedagogical problem of reaching a discriminating judgement still applies.

What’s changed is that the stuff you’re discriminating is now located outside rather than inside your head – the ‘mind’s eye’ has been effectively distributed between you and the ‘cloud’. If the academic still thinks that such prosthetic cognition makes life too easy for the student, then the burden is on the academic to raise her game and re-create the relevant level of discrimination in the new media.

Nobody disagrees with the project of disciplining students’ minds in certain ways that enable them to make discriminating judgements. But the introduction of new technologies tests the academic’s own ability to discriminate that end from the means by which it is achieved.

The problem with the doping scandals surrounding competitive athletics is less to do with the drugs themselves somehow undermining the sense of gamesmanship than with their lack of general availability, which does undermine it. By analogy, it should be less of a worry that students have smartphones than that not all of them do.

JBritt said...

I was presenting only my interpretation of Adam's text, not defending his position. I agree that memorization of facts is an accomplishment. I disagree that we undervalue it. If anything, we overvalue it, reducing educational assessment to testing for the presence of facts in students' brains. This leads me in the direction I take Steve to be going. It makes some sense to think of advanced information technology as an enhancement of our natural fact-carrying capacity. On the other hand, as one might also say of writing, it does tend to leave the part of our brain that might have contained those facts looking for something to do (perhaps even leading to a kind of atrophy). We should question whether that's a good thing. But the larger issue still seems to me to be the reduction of education to learning facts. This move is what inclines us to think of using technology instead of brain cells as cheating. Again, I agree with Steve that allowing or prohibiting the use of such technology says nothing about the pedagogical goal of teaching judgment. It's silly, for instance, to tell students they shouldn't use Google to conduct their research. The trick is not to restrict them to paper versions of the Philosopher's Index (if it even still exists in that form); it's to teach them how to conduct research using the new technology.

Thomas said...

Thanks, everyone, for your comments. I'll have a follow-up post up tomorrow.