
Reply to "The Perfect Storm of Measuring Errors from a Certifier's POV"

Lyman, that's an interesting idea, and it makes me think about what it is that I do as a certifier. While a small part of it is checking for completeness and checking math, I feel like a more important part is trying to get a feel for how someone has approached the measurement task, how they are thinking about their results, and how well their course map serves to instruct that imaginary race director from out of town who uses this map as a guide to set up and conduct the race correctly.

I just don't think we are at the stage where computers can take over very many of those tasks. For example, sometimes a measurer gives elevations that I have a hunch may be wrong; I can go check on Google Earth, or I can ask them how they got that information, but the checking starts with the hunch, which I'm not sure can be programmed. Another example would be submissions like the one Toni described to start this conversation, and ones that Mike W and I have received: lots of mistakes (or just one) but possibly still acceptable. I'd rather work with someone to help them get things right, for the course they've just measured but also for future ones, than risk having the course rejected over a math error. (The real risk isn't that a measurement is rejected but that a measurer might be discouraged and stop.)