# Lab Meeting Activity - Teaching about Uncertainty

I’m sure that I am not alone in having sat through some absolutely terrible lab meetings. All is fine and dandy when someone is practicing a job talk or workshopping a paper, but unless the lab is really large, it’s unrealistic to expect this to happen at every lab meeting. Thus, the problem of what to do in the other meetings. Different labs that I’ve been affiliated with have solved this in pretty different ways. In our lab, we have two separate meetings - one for grad students, postdocs, lab managers, and PIs, and one targeted more specifically to the undergraduate research assistants. This helps ensure that the conversation in each meeting takes place at the appropriate level. The RA lab meeting is held every other week, and the grad students, postdocs, lab managers, and PIs take turns leading it. Yesterday was my turn to lead.

I struggled for a while with what to do. I knew I wanted it to be something methodological, probably dealing with statistics - often a painful topic with psychology undergrads. However, it wasn’t entirely clear what kind of statistics ‘lesson’ I could really expect to effectively teach in an hour to people with varying degrees of interest and background (some of our RAs have not taken stats, others have pretty extensive backgrounds in quantitative methods).

Eventually, I settled on doing something with the idea of uncertainty. This appealed for a wide variety of reasons. Notably, it allowed for discussion with a certain degree of quantitative rigor (objective uncertainty, variability, etc.), while also giving us the ability to acknowledge some of the more subjective experiences associated with doing science (subjective uncertainty, perceived fraudulence/imposter syndrome, etc.).

Prior to lab meeting, I sent out a friendly email with a link to Howard Wainer’s short American Scientist article, The Most Dangerous Equation. I encouraged them to read it, but didn’t require it. Here’s a rough outline of what we did in the meeting:

### Outline

#### Why uncertainty?

• There is, arguably, an overemphasis on measures of central tendency, with a lack of attention paid to variability.
• As scientists, we spend a lot of time feeling very uncertain.

#### The most dangerous equation

• I briefly summarized the article.
• We examined the equation in question, that for the standard error of the mean: $\frac{\sigma}{\sqrt{n}}$. I gave a quick summary of what a standard deviation was, and what the equation described.
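Since not everyone in the room had seen the formula before, a toy computation can make it concrete. This is a minimal sketch of my own (not something we did in the meeting) showing that the same spread of scores produces a smaller standard error of the mean as $n$ grows:

```python
import math

def sem(values):
    """Standard error of the mean: sigma / sqrt(n).

    Uses the population standard deviation of `values` as sigma,
    so this illustrates the formula itself rather than the usual
    n-1 sample estimator.
    """
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in values) / n)
    return sigma / math.sqrt(n)

small = [2, 4, 6, 8]
large = small * 25  # 100 observations with the same sigma

# Identical spread, but the larger sample pins the mean down more tightly:
print(sem(small))  # sigma / sqrt(4)
print(sem(large))  # sigma / sqrt(100), i.e. one fifth as large
```

The point of the comparison is exactly the one Wainer makes: small samples produce wildly variable means, which is what makes the equation so easy to misread.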

#### Some activities

• ‘Most’ of the time
• ‘F’ test
• How many emails did you send yesterday?
• The first two of these came from Melton (2004). The third I made up.
• For each, I drew a histogram on the board to illustrate that we had variability.
• I had them come up with reasons why the observations varied in each case. I then used the ideas they generated to show that the three activities illustrate three very different sources of uncertainty/variability: uncertain operationalization, measurement error, and naturally occurring variability, respectively.
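On the board this was just tally marks, but the same tabulation translates to a few lines of code. A minimal sketch with invented answers to the email question (the `answers` list is hypothetical, not our lab’s data):

```python
from collections import Counter

# Hypothetical answers to "How many emails did you send yesterday?"
answers = [0, 2, 2, 3, 3, 3, 5, 5, 8, 12]

def text_histogram(values):
    """Return a crude text histogram, one line per distinct value."""
    counts = Counter(values)
    lines = []
    for value in sorted(counts):
        lines.append(f"{value:>3} | {'#' * counts[value]}")
    return "\n".join(lines)

print(text_histogram(answers))
```

Even a tally this crude makes the variability visible, which is all the whiteboard version needed to do.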

#### Group work

• We next split up into groups of 3 or 4 (the ideal group size for these purposes, I think). I instructed them to think of some psychological experiment they were familiar with (e.g., a classic study, something they were involved with personally as an experimenter, a group project for a research methods class, or something they had discussed in a recent seminar or lecture), and to identify where these three sources of variability could occur in that study.
• After discussion within the group, I had a few groups share their thoughts and what they came up with.

#### Closing thoughts

• I finished by emphasizing the difference between subjective and objective uncertainty within science. I also found a nice Feynman quote to share (source).

> Some people say, “How can you live without knowing?” I do not know what they mean. I always live without knowing. That is easy. How you get to know is what I want to know.

### What worked

**The three activities** Each of these was a really nice demonstration of a different source of variability within research. They were engaging, and didn’t seem too basic even for our more advanced research assistants. I should say that there were a few grad students in the group too, and even they seemed to like thinking about this.

**The group work** I think the key here was making sure that there was a range of ability within each group. I made sure that each group had either a grad student, a lab manager, or an advanced RA. Otherwise, I can see how this might not have worked out so well. They likely would have had a hard time even getting started (what experiment do we do???).

### What did not work

**The reading** I think it’s a good article, but the connection to what we were talking about was too tenuous. In the future, I would either make it mandatory and then make it more central, or just do away with it entirely.

**My mini-lecture** There wasn’t anybody in that room who wanted to hear me talk about a standard deviation or sampling variability. Again, I’d either do away with this bit entirely, or make it more central and find other ways of addressing the idea.

**Subjective vs. objective uncertainty** I think this felt too much like a haphazard add-on. I kind of mentioned it halfway through and then returned to it at the end. What I should have done was to introduce the idea first thing, illustrate it with the activities, and then have them reflect on it at the end of the meeting.

Lessons learned. Some resources:

• A helpful workshop run by Reyes Llopis-Garcia.
• Some great posts on the fantastic Dynamic Ecology blog.

Written on November 22, 2014