Friday, September 28, 2012
Conditions
Academic knowledge is the ability to support assertions. To assert claims, that is, and then offer support for their truth. For each specific claim, this ability is developed by studying the object about which the claim is made. But the scholar also draws on more general abilities, one of which is the ability to write scholarly prose. The unit of scholarly composition is the paragraph, which is an important if limited rhetorical gesture. In a paragraph an assertion is made and then supported. There are many different kinds of assertion and many different kinds of support that can be offered for them. But the general principle holds: a piece of scholarly writing is a series of supported claims.
We can judge a university, in part, by the quality of its "writing environment". What conditions are provided for teachers and students to develop the craft of composing coherent prose paragraphs? Do the faculty members take the time they need to keep their prose in shape? Do they impress upon the students the need to write often in order to write well? (Still more fundamentally, do they emphasize the need to write well?) Is the work produced by students approached in terms of its potential to become scholarly prose, and judged accordingly? Are the students rewarded for composing themselves in paragraphs, one supported assertion at a time?
No one can be expected to master the art of supporting assertions without a great deal of practice. In a particular subject area, the art includes the acquisition of factual knowledge (facts are what one makes assertions about). This is why it is so important to think about the conditions that are provided at a university, both for faculty and students, under which to write. In the student the craft of writing must be developed. Faculty members, too, can work to become better writers. And as a culture, we can become better at asserting and supporting facts. (Note that "critique" is also the assertion and support of facts, albeit facts adduced in opposition to the facts already asserted by others.) Minimally, however, a university should be a place where the state of the art is maintained, that is, where the culture maintains the ability of its language to assert facts.
A university should provide good conditions under which to assert oneself in prose.
Thursday, September 27, 2012
The Work of Knowing
Academics are people whose job it is to know. It is important in a society that some people take this work upon themselves. Every morning they get up, and then get down to the business of knowing things. It is true that everyone "knows" things all the time in their own way, but it's not everyone's job to do so. We will all also find ourselves, at one time or another, having to put out a fire, and we'll (hopefully) find we are able to do it. That doesn't make us firemen.
Everyone can dance.
The work of knowing includes, essentially, the work of research and the work of teaching. Some academics work to know about medieval poetry, some work to know about subatomic particles. Some work to know "Donna me prega", some the Higgs boson. Each knower has his or her set of objects. Our professional knowers are expected to continuously revise and update their knowledge by engaging with the class of objects for which they are responsible. And they are expected to pass on what they know to a new generation of knowers who will continue the work. Some things about the Higgs field we already know; these must simply be taught. Much still needs to be discovered. We will never know enough about "Donna me prega" either.
It is very important that our knowers be given good conditions under which to work, to study and to teach. These conditions, of course, depend on the state of our universities. As a society, we must always ask ourselves whether we are providing our academics with conditions that are favorable to the work we are asking them to do. And academics must also always ask themselves whether they can work under the conditions we provide them. Not just "work", of course. Anyone can look busy.
Are we able, under these conditions, to do the work of knowing?
Monday, September 24, 2012
Realism
"An education consists in 'getting wise' in the rawest and hardest boiled sense of that bit of argot." (Ezra Pound)
Brayden King has raised an interesting concern about Fabio Rojas's Grad Skool Rulz that also applies to what I do here at RSL. Both of us are trying to help by offering advice to researchers. But in so doing we are also describing the life of research, often pushing back against a number of myths about what it's like to be a scholar. Brayden rightly wonders whether he would have gone into research if he had read and believed the Rulz before entering grad school.
I say "rightly" because of my own experiences. While I was doing my PhD, it dawned on me very slowly that "the life of the mind" promised by an academic career was not quite what I had been dreaming of. If I had encountered either Fabio's "rule book" or this blog, I'm not sure I would have drawn the intended conclusion: accept the conditions and learn to work accordingly. I may very well have decided that I couldn't work under those conditions and that I was going to do something else. (Which is, in fact, what I decided to do after learning it the hard way.) I didn't get into academia to work hard satisfying a bunch of publishing criteria and other bureaucratic performance measures. I got into it to satisfy my own curiosity about how the world works. (Specifically, to answer the question, "What is knowledge?")
The question that Fabio and I have to ask ourselves is whether our descriptions of research prioritize the same aspects of research as the people we (as a society) want to pursue research would prioritize. Brayden puts it well when he says that, compared with the lucidity and realism of the Rulz, he went into grad school with a high degree of naiveté, learning the truth the hard way as he went. Does what Fabio and I say, then, actually imply (at least for some readers) that it is simply naive to think you're going to be able to satisfy your curiosity about how the world works by pursuing a life of scholarship? Are we saying you need to "get wise"? Are we saying that you should expect instead to discover only that the university is yet another domain of social life that puts you to work?
My answer to this question is ambivalent. Universities are changing and they are certainly attracting people of what Heidegger called "a different stamp" than when I started out.
To be continued.
Thursday, September 13, 2012
Dumbassification 2.0
This may be ill-advised, but I'm not going to let Steve and Adam off the hook by retreating to the idea that Adam's piece (or at least its most ridiculous parts) was meant to be "tongue-in-cheek" or that it merely presents what Steve calls "a transhumanist reductio of the increasing identification of academic authority with the catching of cheaters". Maybe that is how it was intended, I don't know. But it seems to me much easier to read it as a piece that argues for the identification of intelligence with the ability to retrieve information, i.e., "getting the right answer", and for the further (surely transhumanist) celebration of the ability of new technologies to "augment" this ability and, indeed, render the memorization of known facts obsolete as a life skill.
Most charitably, I take him to be saying that an exam that requires memorization is somehow "daring" students to use their smartphones instead of actually memorizing the required material and, importantly, that having a mind that can actually remember facts is so, you know, "1.0". I don't think Adam is making fun of these ideas. I think he's promoting them, whether he thinks so or not.
I don't know how else to take the following:
This is the age of augmented cognition, or the extended mind. When teachers ask a student to put away her cell phone or iPad before the exam is handed out, it is like asking her to put away her occipital lobe or her frontal cortex.
Is this a joke? Is it supposed to mock the whining student who makes this claim? If so, I'm with Adam, but I just don't see how it can be read that way in context. As Steve makes clear, the argument is that something important changed with the invention of the smartphone: "the stuff you’re discriminating is now located outside rather than inside your head – the ‘mind’s eye’ has been effectively distributed between you and the ‘cloud’." And this means, as Adam says, that to ask students to function without their phones is like asking them to function without some part of their brain.
But surely nothing has radically changed. Some stuff remains inside the head (the result of years of experience and study) and some of it remains outside (in the "library", however virtual it may now be). Examination is the means by which we find out how well the student can use what's inside, sometimes (but not always) by putting it in relation to what's outside, but always (and not just sometimes) by showing us what was inside at the time of examination. The "augmentation" of the mind has been going on since the invention of writing and the abacus, and at each stage of development, examiners have had to distinguish between the not very impressive ability of a student to merely look something up, or plug something into a calculating device, and the much more impressive accomplishment of having internalized a part of their own heritage. Adam's piece (and especially Steve's comment) suggests that the game has now radically shifted, that all that needs to be known can be "instantaneously and ubiquitously" looked up, and that therefore all we should be examining is the ability of students to do that.
But a mind that can actually retain three or four hundred relevant facts for the purpose of passing an exam is demonstrating a far more important ability than this information retrieval skill ever was or will be, and a more important ability than memorization as such. Again, Adam may just be mocking a position when he says this:
The brain doesn’t obey the boundaries of the skull, so why do students need to cram knowledge into their heads? All they need in their local wetware are the instructions for accessing the extended mind. This is not cheating, it is simply the reality of being plugged into the hive mind. Indeed, why waste valuable mental space on information stored in the hive?
If all he's doing here is mocking some imagined smart-ass student, I'm with him. But, again, I can't get his piece to read like that. If I do, the whole thing becomes satire (which is a possibility I explicitly considered, and then rejected "for the sake of argument"). And to think that a course and a subsequent test that values memorization is calling for students merely to "cram knowledge into their heads" simply fails to recognize the value of having a mind that can actually retain facts (regardless of how important the individual facts are).
The sensible view is to treat smartphones, and the "cloud" to which they are connected, like we have treated books and calculators in the past: sometimes they are allowed and sometimes they are forbidden. The bar for the exam is set accordingly. And cheating remains what it always was: the unauthorized use of a technology or technique for the purpose of passing an examination.
I'm going to write one more post on this. I think I'm now obliged to go after Farhad Manjoo's denialism of the cheating scandal at Harvard on the grounds that they were merely "collaborating".
* * *
[Update: Jonathan brings us the good news in parable form.
Update 2: I just found the source of the title concept in The New Yorker. Fittingly, it was part of Chuck D's message to college students.]
Intended to Accomplish Goals
(I'll follow up on my post about academic cheating later today. I want to stay on schedule with my series on article design.)
The object that is specified by a designer must serve some specific set of ends. To understand a design is, in part, to understand its purpose; if we cannot see the purpose of an object, if we do not know what it's for, then we do not understand its design. This is captured by the third part of Ralph and Wand's definition: a design is "intended to accomplish goals".
When designing your article, therefore, it is useful, indeed quite necessary, to have some clear goals in mind. You are obviously designing it to be published somewhere. This goal can be defined by making a list of potential journals; together, they indicate the public space in which the conversation you want to participate in goes on.
But you also have to decide what effect you want to have on the conversation. Do you want simply to inform others about your results? Do you want to change their minds? Do you want to correct a misconception? Do you want to re-orient the field and take it in another direction? (I'm going to leave aside secondary goals like impressing future employers or earning tenure. As goals, these don't have a very specific effect on the design of the article.)
You can take this goal-orientation down to the level of the paragraph (and even the sentence). Ask yourself what you want your reader to do with it: believe it, or agree with it, or understand it, for example. These all set up slightly different tasks, slightly different rhetorical problems. You may be saying something that you know the reader will find difficult to believe; your job here will be to overcome their doubts. Or you may be engaging in an argument that has clearly defined sides, where you want them to come over to yours. Or you may be saying something that is difficult to understand, and your job is to explain it clearly and effectively.
In any case, thinking about your article as an object to be designed demands that you make yourself aware of your goals. It will be useful to have one overarching goal and 40 smaller ones: one for each paragraph.
Wednesday, September 12, 2012
The Dumbassification of Academia
"We don't yet know what the body can do."
Baruch Spinoza
It's hard to describe the disappointment I felt while reading Adam Briggle's "Technology Changes 'Cheating'", after Steve Fuller, whom I count among the most formative figures in my intellectual development, had described it as "one of the smartest things on academic cheating in a long time" on Twitter yesterday. But, then again, it's always hard to interpret a tweet. I guess Steve might have meant it sarcastically. Perhaps Adam Briggle himself is just being ironic; perhaps it's a pastiche of empty-headed techno-hype carping its diem. Maybe Steve just meant it as a comment on the generally poor quality of discussion in the debate. But I think we are justified in taking it straight, at least for the sake of argument. So what, you may be wondering, do I think Briggle gets so wrong?
In a word, everything. He begins by observing, uncontroversially, that "students are now using cell phones and other devices to text answers to one another or look them up online during tests" and points out, quite rightly, that "these sorts of practices are widely condemned as cheating". (The Harvard cheating scandal, which Farhad Manjoo, in a piece not quite, but almost, as ill-conceived, denied even exists, is of course somewhere in the background here.) But he then attempts to challenge this conventional wisdom (that if you use your cell phone in a closed-book exam you are cheating) by suggesting that similar rules don't apply in the workplace. "Imagine a doctor or a journalist punished for using their smart phone to double-check some information," he balks.
Well, okay. Let's imagine a doctor who is unable to even begin to diagnose your condition without first Googling your symptoms. Or a journalist who can't engage in a lively panel discussion without the help of his smartphone. And do note that journalists do get fired for not attributing their work to their rightful sources, and doctors are in fact expected to do some things, like take your blood pressure or remove your brain tumor, relying heavily on skills they carry around with them in their bodies, and which they were presumably asked to demonstrate to their examiners before they were granted their degrees and licenses.
But to point this out, of course, only exposes how hopelessly backward I am. This is the pre-emptive point of Briggle's familiar techno-fantastic boilerplate that now follows. It might well get him invited to a TED conference one day, but we must really stop letting this sort of thing serve as a universal game changer:
The human used to be that creature whose brain occupies the roughly 1,600 cubic centimeters inside of the skull. Call that humanity 1.0. But now we have humanity 2.0. Cognition extends beyond the borders of the skull.
Humans are nebulous and dispersed: avatars, Facebook profiles, YouTube accounts and Google docs. These cloud selves have the entire history of human knowledge available to them instantaneously and ubiquitously. Soon we will be wearing the Internet, and then it will be implanted in our bodies. We are building a world for humans 2.0 while our education system is still training humans 1.0.
Welcome to the twenty-first century! The truth is, and has always been, that humans are creatures whose roughly 1,600 cubic centimeters of brain occupy a body that also consists of muscles, bones, nerves, skin, a heart, guts .... We are building a world, if we are, for disembodied brains, not "humans 2.0".
That's the whole problem with Briggle's enthusiasm for the new technologies and social practices. It's not a new kind of "human" environment, it's simply an inhuman one. We have to learn how to face this environment resolutely, not simply be assimilated by it. Human bodies don't improve by way of implants, but by training and discipline. We don't need to "upgrade" them, we just need to learn how to use them, as Spinoza noted in his Ethics.
Last year, in Academically Adrift, Arum and Roksa made a good case for something many of us had long suspected. Social study practices (and Humans 2.0 are hyper-social, if they're anything) don't make students smarter. They may, of course, improve their ability to work in groups, and therefore should not be entirely done away with. But what actually improves critical thinking and analytical reasoning are the old-school, individual arts of reading and writing. And the only way to test whether students can do these things is to get them to close their books (after having read them carefully), unplug, and tell us what they think, on their own. We've never trusted a student who couldn't talk about a book without having it open on their desk. Why are we going to grant that their constant gestures at their touch screens count as a demonstration of intelligence? Note that Briggle is not just for socially-mediated examination: he is against old-school conceptions of cheating. He wants to do away with traditional values, not just augment them with new technologies.
This, like I say, displays a desire to do away with the body itself: to jack the (disembodied) brain ("wetware") directly into the cloud of human knowledge. But it doesn't take long to realize how utterly empty Briggle's image of "cloud selves" who "have the entire history of human knowledge available to them instantaneously and ubiquitously" ultimately is. After all, our stone-age forebears had "the entire world of nature's bounty", you know, "at their fingertips". The problem was just sorting the dangerous things in the environment from the useful ones, and getting to where the good stuff was, while also finding places to weather a storm. That problem remains even if everything that's ever been written can be found by Google (something we're far from even today). Let's keep in mind that the vast majority of the "cognitive" capacity of the "collective mind" (a.k.a. the Internet) is devoted to circulating spam and porn. We can "instantaneously and ubiquitously" jump into that sewer. But that's only how the fun begins.
Briggle closes the piece with something that looks like an insight. "The ultimate problem is not with students but our assessment regime that values 'being right' over 'being intelligent'. This is because it is far easier to count 'right' answers than it is to judge intelligent character." This has been known to progressive educators for a long, long time, of course. And Briggle doesn't realize that all the technologies he's so hyped about letting the students use presume precisely that the point is finding the right answer, not being able to think. That's all we can test if we don't count their use as cheating.
His protestations to the contrary, Briggle really does value the right answer over the exercise of real intelligence. How else could he suggest that "it doesn't matter how you get there"? Yes, life's the destination now, not the journey. In fact, we've already arrived. There's nothing more to learn except what the "the entire history of human knowledge" (which is so readily available to everyone!) has already brought us.
The brain doesn’t obey the boundaries of the skull, so why do students need to cram knowledge into their heads? All they need in their local wetware are the instructions for accessing the extended mind. This is not cheating, it is simply the reality of being plugged into the hive mind. Indeed, why waste valuable mental space on information stored in the hive?
Right! Of course! Why hadn't we thought of that sooner? We should have abandoned painting after we invented photography. And stopped teaching anyone to play the violin after we had recorded the first thousand concertos! And I guess all these doping scandals in the Tour de France are also going to be a thing of the past—when it becomes a motorcycle race? What absolute nonsense.
Someone has to tell Professor Briggle (and it may as well be me) that nobody is asking anyone to cram knowledge into their heads. Students are being asked to train their minds, which, because they are part of the same being, is to say they are being asked to train their bodies, to be able to hold their own in an ongoing conversation ("of mankind", if you will) with other knowledgeable peers. To read and write coherent prose paragraphs. That's a distinctly "academic" virtue, to be sure. But surely there is room for academics in the new era?
Ben Lerner once proposed that poetry is part of the "struggle against what Chuck D has called the ‘dumbassification’ of American culture, against the deadening of intellects upon which our empire depends". I don't think it is too much to ask our philosophers to help out, rather than promoting the spread of these deadening social media as some kind of glorious new stage of human evolution. Philosophers could, at the very least, resist, rather than celebrate, the dumbassification of academia.
(Continues here.)
Manifested by an Agent
The word "design" has recently been given new importance by its association with creationism. This is the belief that life on earth did not (just) evolve, i.e., develop through a series of random mutations that were then "selected" by the environment. Rather, argue the "intelligent design" theorists, someone or something made us. Something or someone wanted us to exist, intended us to be (more or less) as we are. Our bodies and their capacities are the expression of a plan, not just, as evolutionists believe, the fortuitous result of a long, natural process. Whether the designer is God or an advanced alien life form, the important thing is that it possesses agency, it is able to act with the aim of bringing something about.
Ralph and Wand's (2009) definition of design, completely unrelated to creationism of course, also stipulates an agent. Design must be "manifested by an agent", they tell us. A designed object is always "artificial", man-made. And, though I suppose this is open to stylistic variation and shifting tastes, a designed object is generally made to look artificial. The attempt to make an object look like something nature made often results in kitsch. In an important sense, the object must not just be the product of design; it must manifest the will of the designer.
In any case, scholarly articles, too, must manifest agency. They must appear to be created by an intelligent being, who wanted the text to be as it is, who had a "plan" for it, and exercised his or her (or its) own capacity for action to realize that plan in the "specified object" (the article). A journal article must not look like it came about through a series of fortunate accidents, random mutations that just happened to survive the dangers in some hostile environment (the peer-review process). A journal article is a paper that is manifestly trying to say something and there should be a sense, in reading it, that there is some agency behind it, some one who is trying to say it.
The agency that a journal article manifests is known simply as "the author". In the case of the co-authored paper, this "one-ness" of the agent is important, I should add. A paper should not look like it is the result of a struggle for dominance between two brutes. It should manifest a "meeting of minds" that is, for all intents and purposes, a single intelligence. This authorial persona is of course a construction—it is in many ways part of the design. It certainly should be. Indeed, the sense we get of the agency behind the text is part of the meaning of the text as a whole.
Tuesday, September 11, 2012
A Specification of an Object
"Mais Dégas, on n'écrit pas des poèmes avec des idées, on écrit des poèmes avec les mots."
I'm going to work through each element of Ralph and Wand's (2009) definition of design over the next two weeks, trying to show that article writing involves a component of "design".
One thing to keep in mind is that I'm here talking about the work of planning the article, not actually writing it. The reason for this will become clear as we look at the first element of Ralph and Wand's definition: design is "a specification of an object". It is not, that is, the construction or assembly or production of that object. There is a difference between designing a coffee pot and mass-producing it. In designing it, you are only specifying its properties.
But it is important to keep in mind that you are specifying the properties of an essentially physical object; you are not imagining some ideal "intellectual" thing-in-the-mind or spectral entity. An article is, ultimately, an arrangement of words across 20 or 30 pages of an issue of a journal. You are deciding how those words will be arranged. To do this, you will have to group those words into sections and paragraphs. You will have to decide (at least roughly) how many words the whole paper will contain, and then how many there will be in each section.
When specifying an object, be specific. What will each section say and how much of the paper will be devoted to saying it? Consider making a list of the core concepts you'll be using in each part of the paper along with a short statement of the section's overall purpose.
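To make this concrete, such a specification can even be jotted down as a small data structure. Here is a minimal sketch in Python; the section title, counts, purpose, and concepts are hypothetical placeholders of my own, not a prescription:

```python
# A hypothetical specification for one section of a paper.
# Every value here is an illustrative placeholder.
theory_section = {
    "title": "Theory",
    "paragraphs": 5,
    "word_budget": 1000,  # five paragraphs of roughly 200 words each
    "purpose": "Remind the reader what current theory expects of the object.",
    "core_concepts": ["sensemaking", "enactment", "organization"],
}

print(f"{theory_section['title']}: {theory_section['paragraphs']} paragraphs, "
      f"about {theory_section['word_budget']} words")
```

The notation doesn't matter, of course; the discipline does. Each section gets a stated purpose, a size, and a short list of core concepts before a single sentence is written.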
Like a designer, you are imagining an object without yet having to construct even a model of it. You might make some sketches on a piece of paper of course.
Monday, September 10, 2012
Article Design
Every now and then I get a great surge in traffic here at RSL. Lately, these surges have been owed to a kind tweet by Oliver Reichenstein, whose firm, Information Architects, has given us the iA Writer, which looks worth trying. When I get around to it, I'll write a post about it too. For now, however, in a pretty bald attempt to pander to the design community, I'm going to write a few posts this week about article writing as a "design" problem.
The Wikipedia article provides us with the following definition of design:
a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints. (Ralph and Wand 2009)
It suits me nicely because, on this definition, I've been talking about article "design" forever. I have long been encouraging writers to think of their problem as that of constructing an object with particular goals in view. Also, my focus on the paragraph is very much an attempt to identify "primitive components", while the "environment" of an article is, of course, the discourse or conversation that it is attempting to engage with. Finally, there is no question that publication in this environment is dependent on satisfying certain requirements and respecting particular constraints. In this sense, then, your problem as a writer of a journal article is a design problem.
Many years ago, as a kind of philosophical exercise, I tried to imagine the perfect object, the ideal thing. I quickly decided that it would be one that you'd immediately know what to do with when you see it. It would need no scientific investigation to understand. No owner's manual to operate. It would simply be obvious what it was for, and once you put it to that use you'd discover that it was perfectly suited to the task.
Clearly, we're striving for the same kind of perfection in our article writing. We want the reader to be able see at a glance what the article is for and, then, while reading it, to feel that the article is perfectly suited to accomplish that goal. Just as designers must constantly keep the user in mind, writers must be ever mindful of their readers. They must imagine what the reader will do with the object they're constructing.
Friday, September 07, 2012
Cutting Your Work Out for You
The standard, empirical social science paper can be given a simple provisional structure. By "provisional", I mean one that you can impose on your image of a paper in the early planning stages, before you've written very much of it. It gets you "into the ballpark", we might say. It does not guarantee a home run.
The structure goes as follows (I'll total it up in a short sketch after the list):
3 paragraphs of introduction
(The first tells us something interesting about the world you have studied. The second tells us about the science that studies it. The third tells us what your paper is going to say.)
5 paragraphs of background (elaborates §1)
5 paragraphs of theory (elaborates §2)
5 paragraphs of method
15 paragraphs of analysis (in 3 5-paragraph sections)
5 paragraphs of implications
2 paragraphs of conclusion
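If you like to see the arithmetic laid out, here is a minimal sketch in Python (illustrative only) that encodes the outline above and totals it up, assuming the roughly 200-word paragraphs I recommend elsewhere:

```python
# The provisional structure as data: (section, paragraph count).
OUTLINE = [
    ("introduction", 3),
    ("background", 5),
    ("theory", 5),
    ("method", 5),
    ("analysis", 15),  # three 5-paragraph sections
    ("implications", 5),
    ("conclusion", 2),
]
WORDS_PER_PARAGRAPH = 200  # an assumed upper bound per paragraph

for section, count in OUTLINE:
    print(f"{section:<13} {count:>2} paragraphs, at most {count * WORDS_PER_PARAGRAPH} words")

total = sum(count for _, count in OUTLINE)
print(f"{'total':<13} {total:>2} paragraphs, at most {total * WORDS_PER_PARAGRAPH} words")
```

That comes to 40 paragraphs, or a paper of at most about 8,000 words.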
Now, over these past three days I've been considering other kinds of paper: theory papers, methods papers, and critical essays. I suggested that a similar provisional structure could be used there too. Here's how I think it might look:
A theory paper could have:
3 paragraphs of introduction
(The first tells us something interesting about the practices that you want to theorize. The second tells us about the state of the theory. The third tells us what your paper is going to say.)
5 paragraphs of background (elaborates §1)
5 paragraphs on the current state of the theory (elaborates §2)
5 paragraphs of the problems with that theory when confronted with practice (anomalies)
15 paragraphs of theory development (in 3 5-paragraph sections)
5 paragraphs of implications (usually methodological)
2 paragraphs of conclusion
A methods paper could have:
3 paragraphs of introduction
(The first tells us something interesting about the kind of object that you want to develop a method to observe. The second tells us about the state of the methods today. The third tells us what your paper is going to say.)
5 paragraphs about the object (elaborates §1)
5 paragraphs on current methods (elaborates §2)
5 paragraphs of the problems with those methods (impasses)
15 paragraphs of methodology (in 3 5-paragraph sections)
5 paragraphs of promised advantages to using your new method
2 paragraphs of conclusion
Finally, a critical paper could have:
3 paragraphs of introduction
(The first tells us something interesting about some real-world issue. The second tells us about the current state of scholarship. The third tells us what your paper is going to say.)
5 paragraphs about the world as seen by scholars (elaborates §1)
5 paragraphs on what the scholars believe about this (elaborates §2)
5 paragraphs about how you approached this material
15 paragraphs of critical analysis (in 3 5-paragraph sections)
5 paragraphs of implications for future scholarship
2 paragraphs of conclusion
Like I say, these are just rough guides. They offer a place to start and a way of dividing up the problem of writing into smaller tasks. You've now got your work cut out for you.
Thursday, September 06, 2012
Critical Papers
Methodological and theoretical papers share the problem of making a contribution to the literature on the basis of no particular empirical experience. This does not mean that they don't draw on the experience of the scholar as a researcher; it just means that they do not present original data in support of their argument. This is also true of a third kind of paper, one whose viability as an art form I am very much concerned about, namely, the critical essay. Such a paper attempts to contribute to the conversation among scholars simply by reading the other contributions to that literature and assessing the validity of their arguments.
This is a really important function in scholarship, but one that is being eclipsed in some fields by the demand that papers make a theoretical contribution on the basis of methodical study. A good critical essay will generally make a "negative" contribution, much like the contribution that weeding makes to a garden. It will correct long-standing errors and remove tenacious but specious arguments. It will point out underlying and perhaps mistaken assumptions in the work of particular scholars or whole subfields. It will, as much of my work tries to do, point out problematic connections between work that has been published in the literature and its sources. A critical essay is all about putting what we think we know into a larger perspective.
While such work serves a distinct purpose, the process of writing it remains the same. You'll want to introduce your critique quickly and effectively. What's the broader real-world setting (and theoretical issue) on which your essay bears? What's the state of the art of the field you'll be engaging critically with? What conclusions will your critique arrive at, and how will it get there? You'll want to answer these questions within the first 600 words of the essay. That's about three paragraphs.
I'm currently revising an essay of this kind for resubmission and have decided to use my standard outline as a guide. This means it will have a three-paragraph introduction (as just described), and this will be followed by about five paragraphs of "background" (which in this case will situate an influential account of a social practice in a broader theory of social organization). I will then summarize the account and its standard interpretation (in lieu of a "theory" section, but definitely to remind readers of what they thought they knew). Where an empirical paper would have a "methods" section, I will discuss how I located the sources I've uncovered to push against the standard account. This leaves an "analysis" section, in which I present the substance of my critique, and an "implications" section, in which I suggest where we might go from here. I'll then offer a standard two-paragraph conclusion.
It'll be interesting to see if it works. If it does, it shows that pretty much any journal article can be thought of as 40 paragraphs divided into sections consisting of 3, 5, 5, 5, 15, 5, and 2 each. That is sort of comforting to know, even if it is only a rough approximation.
Wednesday, September 05, 2012
Methods Papers
Like theory development, the evolution of research methods normally happens in the context of original empirical work. A researcher finds that existing methods are unable to generate the data needed to answer a particular research question and goes looking for new ways of observing the world. If a theory is a "program of perception", a method is a program of observation. It is not just a way of seeing the world but a way of getting a good look at a particular part of it. It is therefore quite reasonable to ask someone who claims to have developed a new method what it has made them able to see. That is, we want to hear about that original empirical work.
Still, sometimes researchers will want to make contributions to methodological debates in their field without at the same time presenting original empirical results. That is, they'll want to write a "methods paper". I encourage such writers to begin as they would with any other paper:
First, identify some interesting set of real-world social practices.
Second, present the current state of the art as to the observation of those practices, emphasizing its limitations or, at least, its potential to develop.
Third, present your paper. This will include a short description of your methodological innovation, emphasizing how it overcomes those limitations, or realizes that potential. It will also outline your paper.
Write a paragraph—that is, about six sentences, and no more than 200 words—for each of these. Then, as a "background" section, develop your description of the real-world social practices you think we need to be better at studying in about five paragraphs. Next, develop the state of the art in another five paragraphs. Then provide some sense of your role in methods development. What has forced you to take up these questions? How did you run into the limitations? How did you notice this potential? This is where you build your credibility with the reader.
Now, write about 15 paragraphs that develop your methodological innovation. Make sure you provide real or imagined examples of how the methods would be applied in practice, i.e., in a research project.
Finally, write about 5 paragraphs that make the benefits of this method explicit. And then write a short, two-paragraph conclusion that brings us back to the still-interesting, but now more readily observable, "real" world of social practices. As in a theory paper, you will not actually have made any claims about this world. You will only have proposed a better way of generating data to support such claims.
Tuesday, September 04, 2012
Theory Papers
In general, papers should make a theoretical contribution. While most papers will bring empirical material to bear in order to accomplish this goal, some papers, often called "theoretical" or "conceptual" papers, accomplish their theoretical aims by purely theoretical means.
Theories, Bourdieu tells us, are "programs of perception". They condition what researchers see and do not see when they look at the world. They are also, I tell people, systems of expectation; they condition what people expect of your object. But in a theoretical paper, there is no specific empirical object. Instead, there is a general class of objects—the kinds of things you are able to see, but have not looked at. Your reader has certain expectations of these objects, is programmed to perceive them in certain ways.
You are trying to change those expectations, reprogram them. And you are trying to do so without showing them anything about any particular object. What you bring to bear on their expectations is more theory—that is, other expectations, other parts of their program.
Normally, those who hold a particular theory have a kind of knee-jerk version of it in mind. When you mention a social practice, they'll immediately theorize it in a certain way, and this will reduce the complexity of their image of the object. In an empirical paper, you use your data to push against this simplified image. That's how you "artfully disappoint your reader's expectations of the object" as I usually say.
But in a purely theoretical paper, you are trying to reconfigure your reader's expectations by activating other expectations. This may be accomplished by drawing in other theorists that the reader is, if perhaps only vaguely, aware of but does not use in the initial conceptualization of a practice. You here argue that these other theorists should affect our expectations of the object in question, that they should have a stronger influence on us. If your argument holds, the reader's expectations will change without being confronted by any new empirical data.
Alternatively, you can offer a closer reading of the theory in question. You can show that our expectations of our object have been formed by superficial or careless readings of the major theorist in the field. Since your readers presumably respect the work of this theorist, this may go some way towards changing their expectations.
The new expectations will of course have to be somehow "tested", but in a theoretical paper, this task is left for future research, perhaps done by other researchers. This means that you do well to identify the methodological implications of your research. If we now see the world differently, what should we do differently when we look at it?