# Shut Up and Listen to Your Students

October 19, 2012 · refsmmat.com

Educational innovation in the past few years has focused on new ways to deliver lectures and textbooks. With Udacity, Coursera, OpenCourseWare and Khan Academy we can watch well-made lectures delivered by professional educators; with Inkling, iBooks and CK-12 we can read textbooks on our laptops and tablets. We can deliver knowledge to thousands of students with a single website. But we haven’t stopped to consider a simple question:

Will it help us learn?

After all, we’ve been using the same educational model for hundreds of years – since before printed books were widely available. We have professors concoct explanations of scientific concepts, deliver them by lecture, create their own diagrams on the chalkboard, and answer student questions. Videos and interactive textbooks are merely new flavors of the same techniques.

We can all agree that science education needs improvement; new articles appear every few months bemoaning its poor state. Much of the recent innovation focuses on bringing science and math to the masses through online videos and interactive textbooks. But our fundamental model of teaching is broken. We don’t need videos and flashy ebooks. We need to stop lecturing. Shut up and listen to your students.

We have a traditional model of teaching: the professor understands the subject and devises an explanation of the crucial concepts and facts the students need to understand. The professor delivers this explanation as a lecture, standing in front of the class and narrating it to the students. The students, if they are attentive and the explanation is good, will now understand the concepts, and they’ll do some homework for practice.

Essentially, the professor is an explanation delivery device, and the students are explanation receptacles. We like professors who devise particularly clear or elegant explanations, and professors like students who pay attention. We innovate by finding new ways to deliver explanations in videos and textbooks and exploring new ways to keep student attention.

So, how well is this model doing? Let’s look at physics, because I’m a physics major and I’ve been sufficiently bored to research the field in depth.

# Evaluating physics teaching

In physics, there’s a conceptual test of introductory Newtonian mechanics called the Force Concept Inventory. We can administer this test before and after a course to determine how much benefit the students get out of the class. We use something called the normalized gain:

$$\langle g \rangle = \frac{\% \langle \text{post} \rangle - \% \langle \text{pre} \rangle}{100 - \% \langle \text{pre} \rangle}$$

The normalized gain tells us how much scores improved, out of how much they could possibly have improved. A student who gets eight out of ten questions right the first time ($$\% \langle \text{pre} \rangle = 80\%$$) and nine out of ten the second time ($$\% \langle \text{post} \rangle = 90\%$$) has improved half as much as he could have: $$\langle g \rangle = 0.5$$.
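The definition above is simple enough to sketch in a few lines of code. This is a minimal illustration (the function name is my own, not from any standard library); it just restates the formula and checks the worked example:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: the fraction of the possible
    improvement (from pre_pct up to 100%) that was actually achieved."""
    return (post_pct - pre_pct) / (100 - pre_pct)

# The student from the example: 80% before the course, 90% after.
print(normalized_gain(80, 90))  # 0.5
```

Note that the gain is undefined for a perfect pre-test score (the denominator is zero), which is one reason the measure works best averaged over a whole class rather than computed per student.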

There have been many surveys of normalized gain in regular old lecture-style introductory physics classes. The largest available sample, including 2,084 students, gives this result:1

$\langle g \rangle = 0.23 \pm 0.04$

That is: students started off with a gap in their knowledge. At the end of the semester, they had filled about a quarter of it.

The FCI isn’t particularly hard – many instructors refused to use its questions on exams on the grounds that they’re too easy. It’s not that students already knew too much about physics, either. Initial scores were often lower than 50%.

So what causes these low gains? Something in our lecture model is broken, but what? In short: the problem is misconceptions.

# Misconceptions

Misconceptions are like cockroaches.

You have no idea where they came from, but they’re everywhere – often where you don’t expect them – and they’re impervious to nuclear weapons.

Many scientific concepts contradict what we experience day-to-day, or don’t relate to our experience at all. To learn science, you need a strong foundation of basic concepts. And students come to your class with their own ideas about those basic concepts – and in science, their ideas are usually wrong. Very, very wrong.

Here’s a fun list of misconceptions students often have in introductory physics courses:2

• Students think force is always necessary to sustain motion, no matter how many times you repeat Newton’s First Law
• Students confuse velocity and acceleration ($$a = \frac{dv}{dt}$$)
• Students confuse force and momentum ($$F= \frac{dp}{dt}$$)
• Students don’t believe in “passive” forces, such as normal forces, despite hearing Newton’s Third Law in their first week of class

A teacher can’t communicate with students without a shared vocabulary; when students have such deep misconceptions about fundamental concepts, your explanations cannot work! Students will try to combine their broken ideas with your misunderstood explanations, and they will succeed – in producing incoherent and often self-contradictory mental models of reality.

Even worse, students with misconceptions often believe that lectures agree with them. Students who watch lectures contradicting their misconceptions report greater confidence in their misconceptions afterwards, and do no better on simple tests of their knowledge. Often they report not paying much attention, because the lectures allegedly cover concepts they already “know.”3

(Professors often lament that students don’t ask them questions. “I can’t tell if you’re confused if you don’t ask me questions!” But you can’t trust students to recognize they hold misconceptions. And when students do ask questions, professors often respond before they can finish, not realizing that the question comes from a deeper misconception.)

We can see this in action in the classroom when students watch lecture demonstrations. Demonstrations are a cornerstone of the introductory physics experience, but do they help?

# Lecture demonstrations

Lecture demonstrations show real-world applications, turning abstract concepts into concrete ones, and they spark questions from the students. They appeal to students who think visually, and they’re generally good fun. (Who wouldn’t want to put a cinderblock on their professor’s chest and smash him with a sledgehammer?)4

Hence, lecture demos should be a good thing. But we can test this.

Suppose you have two classes: one which sees the lecture demo and hears an explanation of it from their professor, and one which never sees the demo. At the end of the semester, you ask them to (a) predict the outcome of an experiment just like the lecture demo, and (b) explain that outcome. You compare the students in each class against each other. This was tried in a 2004 study:5

Simply watching the demo provided no statistically significant benefit in student ability to explain the demonstration’s results, although they became slightly better at predicting its outcome. Yay, our students can remember what they saw! But only 24% of the students who watched the demo could explain it.

This is worrisome, but it fits with our misconception hypothesis. Students with misconceptions view lecture demos and don’t realize their misconception doesn’t fit with reality – they just invent horrible new misconceptions to make the new observations fit. But perhaps we can alter how demonstrations are used to prevent this.

The study actually used lecture demos three different ways:

1. Just show the demo and let students observe
2. Ask students to predict the outcome before the demo
3. Have students predict the outcome, observe the demo, then discuss the results with their peers

Each group also heard an explanation of the demo from their professor afterward. The result:

Students forced to make predictions were able to correct their misconceptions and produce correct explanations at far higher rates. The predictions forced their hands: it is easy to rationalize what you see after the fact, but it is much harder when you have seen your concrete prediction falsified.

How can we extend this success to the classroom?

# Lecture inversion

Students should learn before class. Class is a time to diagnose problems and fix misunderstanding. Professors and teachers should force students to confront their misconceptions.

How to do this? One popular method is peer instruction, a method largely devised by professor Eric Mazur of Harvard University.

The basic method is simple:

• Use pre-class readings/videos to cover the material
• Use class to review basic concepts, then ask conceptual questions
• Students choose answers and discuss before you explain the correct answer
• Listen to students as they discuss, to spot problems

The miracle of textbooks – and iPads, and video lectures, and so on – is that students can learn from a carefully-crafted explanation that is developed once and widely distributed, rather than forcing professors to devise their own lectures. Professors can instead focus on drawing out the misconceptions and exterminating them by forcing students to make predictions, discuss concepts with peers, and see when they’re wrong.

# Does it work?

Several universities have compared interactive teaching and peer instruction to their conventional lectures:

• Harvard: improved from $$\langle g \rangle = 0.25$$ to $$\langle g \rangle = 0.62$$6
• John Abbot College: improved from $$\langle g \rangle = 0.33$$ to $$\langle g \rangle = 0.50$$7
• University of British Columbia (quantum physics course): improved from $$\langle g \rangle \approx 0.53$$ to $$\langle g \rangle \approx 0.79$$ (estimated)8
• A survey of 48 courses at various colleges and high schools: $$\langle g \rangle \approx 0.23 \pm 0.04$$ for traditional courses, and $$\langle g \rangle \approx 0.48 \pm 0.14$$ for interactive courses9

These results show a decided advantage for peer instruction and similar interactive courses over traditional courses, for both introductory physics and quantum mechanics. The method has also been tried in introductory chemistry, mathematics and biology, to great success.

Now, if you’re still having a hard time believing this, then I have one more study. In this study, students taught in an interactive inquiry-based course (where students discover concepts for themselves, guided by a professor or teaching assistants) were compared against honors physics students, engineering students, and non-science majors, all taught in traditional lecture-based courses.

They were evaluated on four questions. Two were synthesis questions, requiring the students to predict – without calculations – what would happen in a simple electric circuit when various bits were added or removed. The other two questions were analysis questions, asking for computations of resistance and current in a simple circuit.

Here are the results:10

The inquiry-based students creamed everyone else on the synthesis questions, and were only bested by honors physics students on the analysis questions. The honors students were great at churning through equations (for the analysis questions) but didn’t understand the concepts well enough to answer the synthesis questions.

You might wonder what kind of students were in the inquiry-based physics class. Physics majors? No. Engineers? No. Science majors? No.

It was an introductory physics course for elementary education majors.11

Really.

# Conclusions

Lectures are not a substitute for hands-on teacher-student time, no matter how you deliver them. Coursera, Udacity and Khan Academy can be great supplementary resources, but students will not significantly improve their understanding without active engagement with a live teacher.12 I don’t expect the wave of new online resources to bring substantial benefit to science and mathematics students.

By engaging your students, drawing out their misconceptions, and listening to them, you can make them perform better. Writing better textbooks or giving better lectures can never replace interactive teaching.

Shut up and listen to your students.

# Contact

I hope you are as surprised as I was when I first did this research. I’d love to hear about other ideas and research you know of. Contact me at . You can also browse my other work on my website.

Beyond the references (listed below) I used in this essay, I can recommend quite a few other papers if you’re interested in digging deeper. If you can’t find a copy of a paper you’d like to read, let me know and I may be able to help.

• Deslauriers, L., Schelew, E., & Wieman, C. (2011). “Improved Learning in a Large-Enrollment Physics Class.” Science, 332(6031), 862–864. doi:10.1126/science.1201783. Interactive teaching techniques in a huge introductory course double the learning, despite the teachers being inexperienced TAs.

• Wittmann, M. C., Steinberg, R. N., & Redish, E. F. (2002). “Investigating student understanding of quantum physics: Spontaneous models of conductivity.” American Journal of Physics, 70(3), 218. doi:10.1119/1.1447542. Students build misconceptions from their confusion about the models they’re taught, which are sometimes contradictory.

• Roth, W., McRobbie, C., & Lucas, K. (1997). “Why may students fail to learn from demonstrations? A social practice perspective on learning in physics.” Journal of Research in Science Teaching, 34(5), 509–533. A long exploration of why students don’t learn from demonstrations, with many interesting examples.

• Lorenzo, M., Crouch, C. H., & Mazur, E. (2006). “Reducing the gender gap in the physics classroom.” American Journal of Physics, 74(2), 118. doi:10.1119/1.2162549. Interactive courses significantly reduce the gap between male and female students on tests like the Force Concept Inventory.

• Arons, A. (1981). “Thinking, Reasoning and Understanding in Introductory Physics Courses.” Physics Teacher, 19(3), 166–172. Useful examples of ways to teach difficult concepts interactively.

• Golde, M. F., McCreary, C. L., & Koeske, R. (2006). “Peer Instruction in the General Chemistry Laboratory: Assessment of Student Learning.” Journal of Chemical Education, 83(5), 804. doi:10.1021/ed083p804. Peer instruction methods meet with success in the undergraduate chemistry laboratory.

• Pilzer, S. (2001). Peer instruction in physics and mathematics. PRIMUS, 11(2), 185–192. doi:10.1080/10511970108965987. Peer instruction tested in introductory calculus courses, to great success.

• McDermott, L. C. (1999). “Resource Letter: PER-1: Physics Education Research.” American Journal of Physics, 67(9), 755. doi:10.1119/1.19122. If you want to read even more, there are 224 citations waiting for you here.

• Meltzer, D. E., & Thornton, R. K. (2012). “Resource Letter ALIP–1: Active-Learning Instruction in Physics.” American Journal of Physics, 80(6), 478. doi:10.1119/1.3678299. Another excellent collection of research papers.

1. Hake, R. (1998). “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses.” American Journal of Physics, 66(1), 64-74. doi:10.1119/1.18809

2. McDermott, L. C. (1984). “Research on conceptual understanding in mechanics.” Physics Today, 37(7), 24. doi:10.1063/1.2916318

Peters, P. C. (1982). “Even honors students have conceptual difficulties with physics.” American Journal of Physics, 50(6), 501-508. doi:10.1119/1.12797

Clement, J. (1982). “Students’ preconceptions in introductory mechanics.” American Journal of Physics, 50(1), 66-71. doi:10.1119/1.12989

3. Muller, D. A. (2008, April 11). “Designing Effective Multimedia for Physics Education.” University of Sydney. (Thesis available online.)

4. I should point out that Dr. Swinney, our energetic but not-exactly-young professor, only allowed a trained assistant to wield the sledgehammer. He may trust in the power of inertia, but he’s not stupid.

5. Crouch, C., Fagen, A. P., Callan, J. P., & Mazur, E. (2004). “Classroom demonstrations: Learning tools or entertainment?” American Journal of Physics, 72(6), 835. doi:10.1119/1.1707018

6. Crouch, C. H., & Mazur, E. (2001). “Peer Instruction: Ten years of experience and results.” American Journal of Physics, 69(9), 970. doi:10.1119/1.1374249

7. Lasry, N., Mazur, E., & Watkins, J. (2008). “Peer instruction: From Harvard to the two-year college.” American Journal of Physics, 76(11), 1066. doi:10.1119/1.2978182

8. Deslauriers, L., & Wieman, C. (2011). “Learning and retention of quantum concepts with different teaching methods.” Physical Review Special Topics - Physics Education Research, 7(1). doi:10.1103/PhysRevSTPER.7.010101 The authors don’t compute a normalized gain for their results, so I estimated it from their data.

9. Hake, R. (1998). “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses.” American Journal of Physics, 66(1), 64-74. doi:10.1119/1.18809

10. Thacker, B., Kim, E., & Trefz, K. (1994). “Comparing problem solving performance of physics students in inquiry-based and traditional introductory physics courses.” American Journal of Physics, 62(7), 627-633. doi:10.1119/1.17480

11. I’m sure I’ll receive angry comments from elementary education majors for this. There may be plenty of smart education majors, but they certainly don’t have a reputation for being better at physics than aspiring engineers.

12. I don’t want to accuse Khan of not knowing this – Khan Academy is working with elementary schools to integrate its course material into curriculums, with teachers standing by to work with students. But casual Khan Academy or Coursera viewers stand to gain very little.