Student-Derived Metrics

I’m still reading the Standards for Our Schools book. I don’t have a rigorous background in literature about learning and education, but what I see so far indicates that incorporating standards into education is analogous to incorporating evidence into your reasoning.

Although testing, grading, scoring, and other quantitative metrics don’t capture everything we would like (it’s hard to measure loosely defined concepts such as ‘understanding’ and ‘experience’), they do serve as a good first approximation. The danger, of course, is promoting the metric too far. For example, we don’t want teachers who hand out test answers just so their average class score looks better. Nor do we want learning focused on specific techniques for improving guessing odds on multiple-choice questions. These metrics measure what students think, but it’s more important that students learn how to think.

If all we can measure quantitatively is task-specific performance, how do we ensure that students learn the process of thinking rather than just the ‘right’ answer? How do we prevent over-optimization on the standard at the expense of general understanding?

The answer lies in using different teaching methodologies. We must choose methods that don’t allow the metrics to trump the learning experience. Within my field of CS, I see a lot of promise in the eXtreme Programming (XP) methodology, due to its iterative, self-reinforcing, and interactive nature.

The Standards book mentioned an interesting observation: In kindergarten, when kids are first learning how to read, they learn faster and retain more if the subject they are reading about interests them. That is, if they are personally interested in the story, they have more internal motivation to figure out what the new words are. They want to deepen their knowledge. (I’m not a child psychologist, so I don’t know the conceptual model young children have about the world around them and can’t accurately speculate on what stories they will find interesting. But, I’m willing to bet that it’s not what you or I find interesting!) The lesson here is that, if students are able to relate to the subject, they will pick up the material faster. So, our programming assignments must be interesting/exciting/cool!

Sometimes it’s hard to make captivating assignments. If that’s the case, then your second tool is to sell the reason why. Be clear about what the practice does for students’ skills, and provide a way to demonstrate that it paid off. For example, you can use a follow-up assessment, or revisit an old problem to show how easy it is with the new skill.

OK. So, once you’ve got an interesting assignment, or have motivated the students into feeling excited about it, how do you maintain the involvement? Easy answer, direct from XP: build interaction into the assignment! Do the work in groups/pairs. Even the Standards book observed that (a) students (even as young as 5 or 7) can teach each other and (b) the one tutoring benefits enormously (because they have to organize their thought process into words). Group work must be structured, though, so that one person doesn’t do all the work while everyone else rides the coattails. In pairs, structuring is straightforward: one person can be the ‘driver’, while the other is the ‘navigator’. We use this model in some of our undergraduate courses, but I think we should use it throughout the whole curriculum. A big difficulty is that, because students have limited time in the lab, they over-focus on churning out code. What’s really needed is for the student who has acquired the concepts to slow down and patiently explain them to their partner. Taking a brief turn as a tutor actively exercises communication and maintains student involvement.
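To make the driver/navigator structure concrete, here is a minimal sketch (in Python, with hypothetical names; the courses described above don’t prescribe any particular tooling) of a rotation schedule that forces both partners to spend time in each role during a lab session:

```python
def rotation_schedule(pair, session_minutes, swap_every=20):
    """Alternate driver/navigator roles every `swap_every` minutes.

    pair: a (name, name) tuple.  Returns a list of
    (start_minute, driver, navigator) tuples covering the session.
    """
    first, second = pair
    schedule = []
    for turn, start in enumerate(range(0, session_minutes, swap_every)):
        # Even-numbered turns: the first student drives; odd turns: swap.
        driver, navigator = (first, second) if turn % 2 == 0 else (second, first)
        schedule.append((start, driver, navigator))
    return schedule

# A 60-minute lab with 20-minute turns gives each partner time
# in both roles, so neither can simply ride the other's coattails.
plan = rotation_schedule(("Ana", "Ben"), 60)
```

The swap interval is a judgment call; the point is only that the swap is scheduled rather than left to whoever grabs the keyboard first.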

But if students don’t already know how to work effectively in a pair, how will good peer-based education get started? And how shall we keep it on track? This was a tough question, but I got a flash of insight this morning that held the answer. It’s a strange fusion of the two things I’ve been discussing: XP is an iterative methodology that stresses continuous evaluation, and standards in education provide metrics for evaluation. Why not have the students design their own partner evaluation forms? That way they are held to their own standards! (What participation!)

Some related observations from my brief time as an instructor: (a) students are uncomfortable without a clear idea of how they will be evaluated, preferring explicit rules over wishy-washy guidelines; (b) too many students lack study and interpersonal skills; (c) pretty much everyone dismisses explicit goal-setting as a mechanism for improving their performance. We can actually address all three problems in one fell swoop!

Here’s my idea: run a class where you first explain what pair programming is meant to accomplish, then ask the students, explicitly, to come up with a list of things they want in a good coding partner. I’m confident that an entire class of students will come up with a list that strongly resembles anything the instructor would think up on eir own. Put these items on the partner evaluation form. Voilà! Students have made up their own metric, and in doing so they were forced to think about their responsibilities, setting explicit goals and rules of behavior. Consequently, they will behave conscientiously with respect to these items.
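As a sketch of how the class’s suggestions might be merged into one shared form (the helper name and vote threshold here are my own invention, not anything the post prescribes), you could tally each proposed criterion and keep the ones with broad support:

```python
from collections import Counter

def build_evaluation_form(suggestions, min_votes=2):
    """Merge per-student criterion lists into one evaluation form.

    suggestions: a list of lists, one inner list of criteria per student.
    Returns the criteria (normalized to lowercase) proposed by at least
    `min_votes` students, most popular first.
    """
    votes = Counter()
    for student_list in suggestions:
        # Count each criterion at most once per student,
        # ignoring case and surrounding whitespace.
        for criterion in {c.strip().lower() for c in student_list}:
            votes[criterion] += 1
    return [c for c, n in votes.most_common() if n >= min_votes]

# Example: three students propose qualities of a good coding partner.
suggestions = [
    ["Shows up on time", "explains their reasoning", "shares the keyboard"],
    ["Explains their reasoning", "shares the keyboard"],
    ["shares the keyboard", "asks questions"],
]
form = build_evaluation_form(suggestions)
# Only the widely shared criteria survive onto the form.
```

The threshold matters less than the process: the surviving items are, by construction, standards the students themselves articulated.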

Of course, since it’s their first time thinking explicitly about this kind of interaction, the form will be incomplete in some respects. The discrepancy between what students expected and what they actually experienced can be handled easily: point out the inconsistency in class and discuss it, then do a modification round to refine the evaluation form before the second assignment. Explain how this activity also practices the iterative-design aspect of the XP methodology; only this time, instead of being about the code, it’s about the metrics of evaluation.