The following is a completely fake study that I've made up out of thin air.
Here are some of our questions:
- Do students learn better in mixed experience-level classes than they do in classes where all the students are at the same level?
- Do students learn better in classes with certain types of activities?
- Do teachers who have seen the video How to Avoid Death by PowerPoint tend to produce better-performing students?
- At what time of day do students learn best?
How exactly do you go about answering these questions? The world is at your fingertips. You can create surveys for teachers and surveys for students; you can access any data source collected by the institution, including grades, instructor comments about students, student evaluations of instructors, and (I am making this up) injury and death rates for previous students.
This may seem like a morass of information to slog through for quite a complicated study, and it is. I have come across numerous data sets, sets of questions, and study designs in my work for Anovisions. Over the years, I have developed a method for getting my head around a study like this, and you might find it useful as you take your study from research question to final design.
Here are the first steps:
- Find out what constraints the entity paying for the study has in terms of time and money. What are the deadlines? How much analysis time can they afford? In this case, we have three months and a budget that rules out any qualitative (read: "expensive and time-consuming") analysis.
- Define the main research question(s). In this case, the main research question is "How can we change our pedagogical methods to produce better outcomes for our students?" All possible answers should be explored, including the answer, "None of your considered changes will help much."
- Find out about any variables your client may be considering. In this case, our client is not thinking, "Hm, I wonder about this variable or that variable?" Clients rarely think like that. But they often have excellent questions that you can convert into variables. In this case, we have a few variables that come to mind:
  - Grades (we'll call this a scalar variable for now)
  - Survival time in the field (scalar)
  - Number of injuries in training (scalar)
  - Number of injuries in the field (scalar)
  - Student experience level (scalar or categorical, to be decided once our plan is further developed)
  - Instructor status regarding How to Avoid Death by PowerPoint (binary, Y/N)
  - Types of class activities (categorical)
  - Instructor evaluations (we'll need to find a way to take lots of string data and compress it to a numerical value, so let's call this scalar for now)
  - Time of day for class
  - . . .
"But wait," you say. "What kind of study are you doing? A randomized trial? A case study? Survival analysis, even? No, a cohort study, or--yes! It's a cross-sectional study, right?"
Actually, no, I don't say that, at least not out loud. I say, "Don't put the cart before the horse," which adage, though frequently used and perhaps stale, is difficult to replace with a more modern pithy saying while retaining the same wisdom. What I mean is: if you try to define the study type before you thoroughly examine all the client's questions and available data, you will never be able to deliver what the client needs (in the cart, ha ha).
At this point in your process, it is essential that you engage in one of the most overlooked and important steps in any study design.
You walk away, take a break, have a coffee, call someone you love, and otherwise stop thinking about it for a few minutes or even overnight (the process of sleep does wonders for analytical tasks). When you come back, you will perform better for having taken your break.
So now I'll take mine, and we'll continue this subject in the next blog post. Don't forget to sign up for notifications so you don't miss it.