Teaching the process of science: A simple, no-frills approach

By Mariëlle Hoefnagels and Matt Taylor

I'm a fan of colored toothpicks.
Credit: theirl on Flickr

It’s been quite a while since I wrote about how students in my class conducted experiments on condoms. The activity was part of our lab covering the process of scientific inquiry, the metric system, tools of science, and graphing. I updated the post several years later, adding new ways for students to estimate permeability to viruses (among other things).

But some instructors and students may not be comfortable using condoms in their introductory courses. In addition, the metric system, unit conversions, and lab equipment such as pipettes and graduated cylinders may be beyond the scope of some nonmajors lab classes. A simple, no-frills lab that focuses exclusively on experimental design might be a better fit.

I just learned of a lab that might fit the bill: it centers on a simple box of colored toothpicks. The lab was originally inspired by a math and statistics activity posted on the excellent HHMI BioInteractive site. Each student in the class measures the time it takes to break 20 toothpicks with each hand; the class then compiles the data and computes basic statistics.
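The data-handling step is simple enough to sketch in a few lines of Python. The times below are invented for illustration, but they show the kind of class-wide summary a group might compute:

```python
import statistics

# Hypothetical times (in seconds) for each student to break 20 toothpicks,
# once with the dominant hand and once with the non-dominant hand.
dominant = [18.2, 21.5, 19.8, 24.1, 20.3, 22.7]
non_dominant = [25.4, 28.1, 24.9, 30.2, 26.8, 27.5]

for label, times in [("dominant", dominant), ("non-dominant", non_dominant)]:
    mean = statistics.mean(times)
    sd = statistics.stdev(times)  # sample standard deviation
    print(f"{label}: mean = {mean:.1f} s, SD = {sd:.1f} s")
```

Even a spreadsheet works fine for this, of course; the point is that each student contributes two numbers, and the class pools them into means and standard deviations that can be compared and graphed.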

The HHMI activity focuses more on handling data than it does on designing and interpreting experiments. But the toothpick-breaking idea is easily expanded to enable groups of students to think of their own question, plan an experiment that might answer the question, predict the outcome, carry out the experiment, collect the data, compute simple statistics, graph the data, and interpret the results.

My colleague Matt Taylor recently tried this activity in his nonmajors biology lab, and he agreed to share his impressions in the rest of this blog post.

Even though the lab manual provided a variety of sample questions, most groups came up with their own, including:

  • Number of toothpicks picked up with one eye closed vs. the other eye closed (to investigate the potential role of a dominant eye)
  • Color preferences when toothpicks are placed on white or black surfaces
  • Role of contrast in the ability to pick up toothpicks of various colors against backgrounds of various colors
  • Durability of toothpicks of various colors
  • Toothpick-tossing accuracy with dominant hand versus non-dominant hand (with an empty beaker as the target)
  • Toothpick-gathering speed with dominant hand versus non-dominant hand

The amount of creativity was impressive. In addition, some groups learned the importance of choosing a specific, quantifiable dependent variable. For example, a group that wanted to test the strength of different-colored toothpicks gave each toothpick a durability score based on whether it survived pressure applied by a pinky and thumb, as well as by other finger combinations. But they ran into trouble applying that pressure consistently, which led them to question their results. In their post-lab report, the group’s members acknowledged that using a machine to measure pressure would have made them more confident in their results.

Two groups measured color preferences. Once they completed their tests, they learned the limitations of working with categorical data compared to numerical data. What is the “average” color preference? And how do we determine if one preference diverges from that average in a significant way? I talked to them about their observed results versus the results they would have expected if everyone had picked toothpick colors at random. Since this was their first taste of experimental design, it was too early to mention the chi-squared test by name, but they understood the concepts without the math.
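For instructors who do want the math, the observed-versus-expected comparison maps onto a chi-squared goodness-of-fit test. Here is a minimal sketch, using invented color counts purely for illustration:

```python
# Hypothetical color-preference counts: how many times students in a
# 40-pick trial chose each of four toothpick colors.
observed = {"red": 16, "green": 9, "blue": 8, "yellow": 7}

total = sum(observed.values())
expected = total / len(observed)  # 10 per color if choices were random

# Chi-squared statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((o - expected) ** 2 / expected for o in observed.values())
print(f"chi-squared = {chi_sq:.1f}")
# With 3 degrees of freedom, the 5% critical value is about 7.81;
# a statistic above that would suggest the choices were not random.
```

With these made-up numbers the statistic comes out below the critical value, which is itself a useful teaching moment: an apparent preference for red is not necessarily distinguishable from chance.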

My favorite part about this lab is that students had the freedom to make the experiment their own. They were invested in their chosen question, and so they worked hard to set up an experiment to answer it. They also had the freedom to make mistakes without much consequence, and that is the real power of this lab. One of the groups that wanted to measure color preferences got to the question about independent and dependent variables and summoned me to their table. “What would the independent variable be? Toothpick color?” I told them it might be, then asked what they would actually be manipulating. Could they test color preference differences among groups to expand the scope of the experiment? This line of questioning sent them back to their hypothesis, which they could then refine into something more specific and meaningful. If the lab manual had told them what question they were answering, then they would have missed an important lesson in stating a clear hypothesis and designing an effective experiment.

Many students had never constructed a graph before, so that part of the lab was also a challenge. They had to consider whether they would use a line graph or a bar graph (spoiler: a line graph would not have made sense for any of their experiments, but they did try to make it work), how to label the axes, how to combine data and take averages, etc. I helped them along the way, and again, these discussions were productive because they had already invested thought in what approaches might work.

Science often isn’t pretty, even when an experienced scientist* is carrying out the experiment. We come up with hypotheses, try to test them, gather data, realize that we’ve not accounted for several other explanations for what occurred, and start over again. This lab allows students to get a taste of that process, but in a simplified setting where they can feel confident in their ability to succeed.

I remember thinking when I first read the lab activity, “Wow… toothpicks. I wonder if they will find this at all interesting.” I can tell you that they absolutely did. Many students even told me, unprovoked, that they enjoyed the lab. They found creativity in their bags of colored toothpicks, they worked through the right amount of struggle with experimental design, and they left with a better understanding of how science works.

*A great example of the “Science isn’t perfect” idea is the FDA’s recent reversal on the effectiveness of oral phenylephrine, an ingredient in many popular cough and cold medicines. I have considered writing a blog post on this saga, but it’s pretty complicated and would be hard to summarize for a nonmajors biology audience. I did learn, however, that this drug was first approved in the 1970s, based on data that would not meet today’s quality standards. The FDA’s reversal came in 2023, yet a review and meta-analysis of the data in 2007 already questioned the effectiveness of oral phenylephrine. The real-world effects of scientific data sometimes accumulate very slowly.

This entry was posted in Active learning, Engaging students, Experimental design, Laboratory activities.
