Brad, a friend and teaching mentor of mine, recently posted about how his online astronomy students’ teaching evaluations complain that they don’t receive study guides to prepare for exams. He wondered out loud (well, on Facebook at least) if this is a recent trend, perhaps the result of more testing in schools, or if there was some other explanation. Brad is an experienced and acclaimed teacher, and he’s reflective about all that he does. His wondering was a genuine query.
Of course, everyone responding had distinct opinions. All supported Brad, and most suggested that students should be learning to get by without study guides or other crutches. This is the point of education, after all. There’s no study guide for life, citizenship, or economic prosperity.
Could it be that some of our own educational devices, like the barrage of testing that our students endure, make learners more reliant on some guide or hint or other revelation? I don’t think so. I know that I’ve had these study guide requests from students since I started teaching my own courses in 1996, and I’ve created different kinds of orienting materials for students, ranging from old exams to pages of questions that I think about while I’m writing exams. It seems to me that students appreciate any kind of hint along these lines. My impression is that, for the most part, students are simply anxious about not knowing — not just some course content but the nature of what they’ll be doing. Moreover, while the infection of testing that’s permeated our school system is widely bemoaned, the stakes for students are generally low. In this locale, at least, standardized tests cannot be used toward any kind of course credit or grade in the K-12 public school system. (This creates its own problems, no doubt, but not the ones that Brad worries about.)
So, though there’s no doubt that there’s a jungle of interplaying factors that I don’t fully grasp, I don’t blame the current edupolitical terrain for students’ misguided requests. But then why don’t they know what could be on a test if they’re in the class? Even students who are fully engaged in the text, online materials, and in-class presentation often ask for some further orientation to what they might be expected to learn and do. From the instructor’s point of view, it seems obvious: The stuff of the exam or any assessment will be the content that has been focused upon in the class. This was the general consensus in the responses to Brad’s original query. We want students to learn; and part of learning is figuring out how to navigate the material that is right there in front of them. This includes a host of things, from labs to readings to exercises to previous exams and quizzes, in addition to what’s presented by the instructor themself. Over the course of 15 weeks, we continually give students examples of what we expect them to understand.
This is exactly the problem.
If I drive down the highway or set foot on a trail, I’ll pass a wide array of information that is likely quite coherent. The path of the highway is clearly planned, and it has rules. The trail gains elevation and crosses through a variety of well-known ecosystems and formations. Yet, as a novice I may not know what I’m looking for. The highway has a white line to my right and a yellow line on my left, but this doesn’t mean that I know that these are significant or even part of a pattern. The north-facing slopes of a mountainside could be populated by conifers while the exposed west-facing cliffs may sustain only stunted oak trees, but having traveled that path only once I might not see that these are important details — even if I notice them. A class, whether in introductory astronomy or current political systems or ancient cultures, is inherently more complicated than any physical journey. Not only are there observations along the way, but also a complicated system of interconnections, foundational ideas, and practices within the discipline. These tie all the ideas, the exam-worthy content of the class, together. Any course worth its catalog description has a constellation of individually complex and interesting ideas, but the point of the course itself is to weave these into something more meaningful. We do our best to provide maps and orientation — maybe a tourniquet and flare gun in case of dire emergency — to try to aid the navigation.
The challenge is that students are novices. They are not just learning about features of a given area, but trying to figure out how to even think about these things. In fact, they may not even realize that there’s more to a discipline than a collection of facts, piles of information that they need to memorize. Even if we tell them this isn’t the case, they may not have the map or the map skills to understand how to orient themselves. After all, we present our courses in terms of numbers of pages to be read and days on the calendar to be checked off in sequence. We inherently, unknowingly, show students the layout of the course in the same way that we provide nutritional information and caloric intake on food labels. Consume these requirements in no particular order and move on to the next day. (We are often told to state clearly a list of learning objectives on a syllabus, and I can imagine that these may actually serve to undermine the more comprehensive course objectives.)
This reminds me of work that Eric and I have done on students in psychology and physics, analyzing how they answer questions about a novel topic that’s just been presented to them in a short video. We’ve found evidence of students performing better in these tasks when they are enrolled in courses with face-to-face instruction (as opposed to online), and they do especially well when they are prompted to take the perspective of their professor, someone who represents the thinking of the discipline. This is a crazy result because we don’t actually ask them different questions to consider for themselves versus on behalf of their instructors; we just prompt them to think about how someone else would think about a question. It’s shocking that this makes a difference. And, then again, it isn’t: If I stop to consider how an expert puts pieces together and how they practice within the discipline — their organizing principles, practices, metaphors, and the like — then I myself become better informed. But this is only possible if I have an instructor to model, something which is more likely when I know the instructor on some personal and present level.
A takeaway from this is that students aren’t simply studying and learning new facts and ideas. Rather, whether they know it or not, they are learning how to think about the new concepts, where to place them and how to organize them. This goes beyond a study guide of any kind, and actually involves getting students to take on the perspective of another, the expert in the field. What kinds of questions would she ask? How does he make the connection between a gravitational pull, a mass, a temperature, and an age? (These items sound random and disconnected from one another unless you’re an astrophysicist, I suspect.) What big ideas are we using to construct our arguments even before we know what the argument is going to be about? What tools do you have to work with, and what’s the foundation that you’ll be building upon? (A side note that might be encouraging: new reforms in science education refer to these as science practices and crosscutting concepts, respectively.)
There are appropriate pushes in science education to make the classroom (or lecture hall, or laboratory, or whatever) more engaged and interactive, student-centered and problem-based. These are all well intentioned and helpful. At the same time, they aren’t sufficient. There are narratives of the experts that set out a blueprint of motivation, orientation, and navigation. I don’t know all that there is to know about a planetary nebula if I only understand its place in stellar evolution. I need to think about the metaphors that scientists use, the history of their endeavors, and the questions they pose. In fact, these might be even more important than anything else. Instead of a study guide and a multiple-choice question, I now wonder if we should be posing the question, “Why do we care about planetary nebulae?” I know that Brad and many of my other colleagues actually go out of their way to give this perspective. What the students don’t recognize is that through this we’re providing the “study guide” right in front of them. These motivations and ways of thinking provide a more general topographic map for the discipline. And, more pragmatically, they tell the students what we the instructors are thinking about when we write these exams. This is the “study guide” they should be wanting. Our challenge, then, is to figure out how to help them recognize this. It sounds simple, and yet I know it isn’t. Otherwise students’ course evaluations wouldn’t have complaints that there was no way they could have known what was going to be on the test.