Members of the VCA will recall—if they have memories like
iron vises—a discussion between CP and me about the number of MJP categories.
I’ve gotten pretty used to 6 categories, with 6 being a strike and conflicts
being separate. Tabroom allows you to set as many categories as you want. I’ve
experimented with 5 for small pools, but found that unnecessary when I’ve
compared similar pools with 6.
Palmer’s argument in favor of more categories is simple,
that more categories allow for closer mutuality. Imagine 60 judges broken down
into 6 categories of 10. My 1 can be 9 away from your 1, and in a 1-2, it can
be up to 19 away. With 60 and 9 categories, my 1 is 6 or so away from your 1,
and in a 1-2, it can be up to 13. And keep in mind that the scale slides, as if you're reading the numbers off a literal slide rule; that sliding scale, taken to its extreme, is the entire basis of ordinal MJP, which is another thing entirely. (We'll get there shortly.) These
numbers are clearly mathematically better, and CP’s argument is based on the
undeniable math. In practice though, it may or may not work out that way. If
you have fewer 1s, you have less likelihood of mutual 1s, so you're more often
doing 1-2s (and 2-3s and 3-4s). At this point, you may or may not be getting
the benefit of the math anymore. I don’t know. You’d have to look at it knowing
not only the ranks in the 1-9 tournament but what the coaches would have ranked
in a 1-6 scenario. Impossible.
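For the curious, the worst-case arithmetic above can be sketched in a few lines of Python. This is my own illustration, not anything Tabroom actually computes; the function name and the equal-split assumption (categories of roughly pool size divided by category count) are mine.

```python
import math

def max_ordinal_gap(pool_size, categories, tier_diff=0):
    """Worst-case distance between two coaches' true ordinal ranks of the
    same judge, given the judge sits tier_diff categories apart on their
    sheets (0 = a mutual 1-1, 1 = a 1-2, and so on)."""
    size = math.ceil(pool_size / categories)  # judges per category
    return (tier_diff + 1) * size - 1

# 60 judges in 6 categories: a mutual 1 can hide a 9-spot gap, a 1-2 a 19-spot gap.
print(max_ordinal_gap(60, 6, 0), max_ordinal_gap(60, 6, 1))  # 9 19
# The same 60 judges in 9 categories: those gaps shrink to 6 and 13.
print(max_ordinal_gap(60, 9, 0), max_ordinal_gap(60, 9, 1))  # 6 13
```

With pure ordinals, every "category" has size 1 and the gap for a mutual rank collapses to zero, which is the slide-rule logic carried to its conclusion.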
So is it worth going against the norm? I mean, I wouldn’t do
ordinals which, following the math of the slide rule, probably gives you the
closest mutuality, because the field is not really familiar with the idea.
After all, we’ve only been doing MJP regularly for a couple of years, and there
is still a significant percentage of schools who simply don’t pref, for whatever
reason. I used to do a whole campaign trying to get them to do it, on the
assumption (a good one) that these were more conservative schools who believed
(wrongly) that MJP favored circuit styles, which it only did if the more
conservative schools didn't pref, a perfect example of a self-fulfilling
prophecy. I worked with a number of people to come up with elementary
categorization of judges as traditional, circuit or newly trained. We did
everything we could, but at some point, something becomes standard practice and
it’s no longer our responsibility to ensure that everyone understands what
we’re doing. Let’s face it: schools that don’t pref now certainly wouldn’t do
it if only we went to ordinals. We’d probably have about the same buy-in
eventually that we have now. Lord knows, I’ve really wanted to experiment with
ordinals because I do believe that it probably renders better mutuality. But
here’s the thing. In practice, if the difference between 6 and 9 isn’t all that
much and not really demonstrable (even though we know it has to be true that 9
is better), is the difference between ordinals and either 9 or 6 any more
demonstrable or, in fact, all that much? I ask this because you’ve got to take
into consideration the users. If I can prove in theory that ordinals is better,
does that really matter when I can’t prove it in practice? Users don’t like
change, unless they get a direct, measurable benefit. It doesn’t matter what
product the users are evaluating. If they don’t see something in it for themselves,
they won’t do whatever is necessary to take up the product. That’s why those
conservative schools remain resistant to MJP. They don’t see the benefit to
themselves of trying to figure out all these judges they’ve never heard of, even with
our little crib sheet of Trad/Circ/New. As for everyone else, we’ve got them on
board with MJP now, except for the ones who regularly query why they got a 4
and their opponent didn’t. Is the benefit of a different system—and
ordinals is a radically different system while 9 vs. 6 is only a slightly
different system—worth the hassle? Do we think users, i.e., debate coaches, are
clamoring for it?
Mutuality only promises one thing: that you and your
opponent think similarly about a judge. Ordinals probably gives you the closest
possible mutuality, but in the end is it all that much different from what
we’re already doing to warrant the havoc of change (and all change is havoc)?
So many coaches now seem to be convinced that better MJP numbers equate with
better results, as if their debaters aren’t good enough to just look at the
judge they’ve got and pick up that ballot, period. Everything else is just
playing with the data because we can. Should we nurture coaches’ worst
competitive instincts? Maybe this is what would happen: as we move into any newer,
deeper system, we lose the older, not-so-deep people. LD has already lost the
buy-in of a lot of folks because of its arcane, non-resolutional styles. Should
we add to that the most complicated ranking system possible, the one that
requires encyclopedic understanding of every pool every week (unless it’s the
same old deadbeat college judges traveling from circuit tournament to circuit
tournament, the familiarity with which is also in the $ircuit coach’s favor)?
You’ve got to draw the line somewhere.
I say, for the time being, we draw the line where we are
now. There may be theoretical ways of doing it better, but are they practical?
Aren’t we better off locking in, at least for a while, best practices that stay
put, rather than always throwing new stuff at people?
This is not a plea from a hyper-conservative for
hyper-conservatism. It’s just the ramblings of a realist suggesting that every
change made has repercussions, and we need to study and understand the
repercussions before we make the next change. 6-step MJP is settling in. How
has that affected the activity, if it has at all? I want to know the answer to
that before moving to something else.