
Grading Measurers?

This idea was also brought forth, and it too met with mixed acceptance. A strictly numerical approach, based on the amount of measurement activity done by the measurer, has drawbacks, but it has the advantage of being apolitical and avoiding hard feelings. It is easy to do, and I will be preparing a sample listing. When I do, discussion will be invited.

I am lukewarm to this idea, as any person wishing to check out the kind of work a measurer is doing can find it out by going to the measurer list on our web page, finding the measurers in their state, and looking at a listing of the courses measured. Also, they can see the maps and get an idea of the quality of the work.

It was not stated exactly to what use this listing would be put. Opinions are solicited.

Replies sorted oldest to newest

A preliminary list of measurers who might be graded under the proposed scheme has been created. The complete file may be found at:

http://members.aol.com/riegelpete/Performance2004_2006.pdf

Here is a summary:

2004 - 2006:
  Measured 10 or more: 99
  Measured 5 or more: 165
  Total measurers: 446

2006 only:
  Measured 10 or more: 37
  Measured 5 or more: 76
  Total measurers: 258

Question: What purpose does this serve aside from curiosity satisfaction? This information has appeared in somewhat different form each year for decades, in Measurement News, and lately online.

In addition, information concerning measurers is also presently available on the USATF search engine.

The information was prepared by downloading the latest course list from the USATF web site. The basic information is available to all.

Should a numerical limit be established to decide who is A, B, C or whatever?

Should appointments be perpetual or should they only be good for the last few years? What have you done for us lately?
Last edited by peteriegel
Scott,

I completely agree with you. The reason I used the course listing data was that it is unequivocal, while any listing by a certifier must necessarily contain subjective opinion. People who feel slighted can argue with that, and who needs that? We have a bellyful of those already.

When the idea of grading was proposed, nobody could state with any certainty just what the purpose of the list was.

To avoid questions of competence, Paul Hronjak proposed using the term "measurer experience list."

The thing could become a can of worms. Are you a better measurer than me? Who decides? Will I get all huffy if you are elevated above me? To whom do I appeal?

A great deal of energy could be expended on any list that relies on annual certifier polls for input data.

I think the whole idea is busywork to no useful end.
I analyzed Pete’s data to provide a bit more statistical description of “experience” as a factor to consider in determining whether, and if so what kind of, a scale should be used in “grading” (a loaded term) or classifying measurers, in response to requests for a way to assess measurers’ qualifications similar to the system currently in place for officials.

Pete’s sample includes 443 “active” measurers, that is, persons who measured at least one course in the past 35 months. The sample has some duplication due to typographical errors in measurers' last names, but not enough to substantially influence the results of this analysis.

In 2004, 111 measurers, or 25.1% of active measurers, measured at least one course.

In 2005, 189 (42.7%) measured one or more courses for certification.

Through Nov. 2006, 238 (53.7%) have measured at least one course.

The overall productivity of this sample was examined in terms of a 3-year average. The mean annual number of courses per measurer was 2.99, with a median of 1.0. The average annual minimum number of courses was 0.33 (the “active” threshold), and the average annual maximum was 38.67. Slightly more than 22% of the sample measured more than 3 courses per year (344 measured 3 or fewer), and fewer than 15% measured more than 5 courses per year (377 measured 5 or fewer).

The following chart displays the 3-year average number of course measurement certifications by the number of measurers. Not unexpectedly, the distribution is highly skewed toward a small number of active measurers. Approximately 16% of active measurers accounted for two-thirds of the 3,976 course certifications completed in the 35-month period.

[Chart omitted: distribution of the 3-year average number of course certifications by number of measurers.]

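For anyone who wants to reproduce this kind of summary from the downloaded USATF course listing, a minimal sketch in Python follows. The counts list is made-up example data standing in for the actual per-measurer 3-year totals.

    # Sketch: summary statistics for per-measurer certification counts
    # over a 3-year window. The 'counts' list is hypothetical example
    # data, not the actual USATF listing.
    from statistics import mean, median

    counts = [1, 1, 1, 2, 3, 5, 9, 12, 30, 116]  # 3-year totals per measurer

    YEARS = 3
    annual = [c / YEARS for c in counts]

    print(f"mean courses/yr:   {mean(annual):.2f}")
    print(f"median courses/yr: {median(annual):.2f}")
    print(f"min, max:          {min(annual):.2f}, {max(annual):.2f}")

    # Concentration: how small a share of measurers accounts for
    # two-thirds of all certifications?
    total = sum(counts)
    running = 0
    for rank, c in enumerate(sorted(counts, reverse=True), start=1):
        running += c
        if running >= 2 * total / 3:
            break
    print(f"{rank} of {len(counts)} measurers ({100 * rank / len(counts):.0f}%)"
          f" account for two-thirds of {total} certifications")
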
I don’t draw any conclusions at this time about the usefulness of these results for “classifying” measurers, preferring to leave that to further discussion. And, of course, this analysis sheds no light whatsoever on the subjective “quality” of the work; it addresses only the more objective, quantitative aspects of measurer productivity. One item of note, however, is that a substantial number of course certifications are performed by individuals who do not often engage in this activity. The contribution of these “low producers” to the overall productivity of course certification should not be ignored. As a counterpart to the small group of high-producing active measurers, this group supports the principle that course measurement can be (and is) done by a broad spectrum of people, not just a select few.
Last edited by jimgilmer
Hi all,

Yes, grading can open up a can of worms as Pete suggests. However, if we keep it simple then it could be very easy to maintain. For some reason this is wanted by USATF and it is not a difficult thing for us to do.

My suggestion is to classify all FS (final signatories), and anyone who has measured at least x courses, as an A. Anyone who has measured a course is a B, and all others are a C.

This could be posted on the Measurers list. We would not put any letter by a person's name, but have the above explanation at the top of this page. Just a thought!
The home office, USATF, wants us to grade measurers? Why? To what purpose?

In the end, since we aren't able to determine the accuracy of all the courses being measured, any grading lacks the most important ingredient for its intended use.

It's relatively easy to grade an official; you can see immediately how good they are. We can assess how many courses somebody measures and how good their maps are, but we're missing the most important element: how well they measure the course.
This post is for review and comment only; it is not a formal proposal.

Draft criteria for a course measurer classification system

The criteria proposed below for the classification of course measurers shall be in accord with the following fundamental principles of the Road Running Technical Council (RRTC):

Principle #1: Course measurement for USATF measurement certification follows a detailed protocol and procedures promulgated by the RRTC, as articulated in the Course Measurement and Certification Procedures Manual (http://www.usatf.org/events/courses/certification/manual/).

Principle #2: Course measurement protocol and procedures are designed to accommodate the wide geographic range of the road running community having need for USATF measurement certification services.

Principle #3: Course measurement protocol and procedures are designed to be accessible to the broad and diverse membership of the road running community.

Principle #4: The classification of course measurers is based on evidence-based criteria, and shall be consistent with the existing structure in place for the past 25 years, whereby regional certifiers review applications for the certification of road courses submitted by measurers who do not have final signatory status.

I. Rules for the classification of course measurers (a code sketch of these rules follows the list):

  • 1. The body of course measurers in the United States is divided into “classified” and “unclassified” categories.
    a. “Classified” measurers are persons who have ever submitted an application for certification of a road course that resulted in an approved measurement certificate by a regional certifier.
    b. “Unclassified” measurers are persons who have successfully completed an RRTC-sanctioned course measurement clinic but have not submitted an application for certification of a road course for review by a regional certifier.

  • 2. Classified measurers are graded on the basis of their performance and experience in the measurement of courses that successfully resulted in the awarding of a measurement certificate by a regional certifier.
    a. Grade “C” is accorded a measurer who has submitted an application for certification of a road course that resulted in an approved measurement certificate by a regional certifier.
    b. Grade “B” is accorded a measurer who in the three calendar years prior to the year in which the classification grade is assigned has submitted an application for certification of a road course that resulted in the awarding of a measurement certificate by a regional certifier.
    c. Grade “A” is accorded a measurer who:
    i. Has submitted an application for certification of a road course that resulted in the awarding of two or more measurement certificates by a regional certifier in the three calendar years prior to the year in which the classification grade is assigned; or
    ii. Is a regional certifier in the year in which the classification grade is assigned; or
    iii. Has final signatory status in the year in which the classification grade is assigned.
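
To make the draft rules concrete, here is a minimal sketch of the grading logic in Python, as mentioned above. The Measurer record and its fields are illustrative assumptions; the draft does not prescribe any data format.

    # Sketch of the draft A/B/C rules above. The Measurer fields are
    # hypothetical; the draft does not prescribe a data format.
    from dataclasses import dataclass, field

    @dataclass
    class Measurer:
        cert_years: list = field(default_factory=list)  # year of each approved certificate
        completed_clinic: bool = False                  # rule 1b
        is_regional_certifier: bool = False             # rule 2c.ii
        has_final_signatory: bool = False               # rule 2c.iii

    def classify(m, grading_year):
        """Apply rules 1 and 2 above for a given grading year."""
        window = range(grading_year - 3, grading_year)  # three prior calendar years
        recent = [y for y in m.cert_years if y in window]
        if m.is_regional_certifier or m.has_final_signatory or len(recent) >= 2:
            return "A"             # rule 2c
        if recent:
            return "B"             # rule 2b: a certificate within the window
        if m.cert_years:
            return "C"             # rule 2a: ever earned a certificate
        if m.completed_clinic:
            return "unclassified"  # rule 1b: clinic only, no application yet
        return "not listed"

    # Example: two certificates in 2004-2006 earn an A for 2007.
    print(classify(Measurer(cert_years=[2004, 2006]), 2007))  # -> A

The three-year window and the two-certificate threshold come straight from rule 2; everything else is scaffolding.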
Last edited by jimgilmer
Jim's draft proposal is well thought out, and, should an ABC list be something we need, it would do the job. Some discussion would undoubtedly ensue concerning the numbers employed to separate A from B from C, but these are details easily handled.

The main question is whether it is worth doing at all. The information contained in the new list would not be new; it already exists and is accessible. Moreover, it is not a one-time effort: the list would require periodic updating, which would become a new job for someone in RRTC.

Considering that it is not hard to get measurer information at present, including courses measured and when this was done, the end-purpose of the grading scheme remains unclear.
First problem: quantity vs. quality.

If you are going to measure output, shouldn't you rate by total mileage measured?

Quantity vs. quality: some people do a bunch of sloppy 5Ks, and others do a few difficult marathons.

Second problem: you have a system for anointing kings but not for chopping their heads off. What happens to your grading system if a course later fails verification?

Maybe the only way to grade a measurer is for someone else to verify some of their courses before a record depends on it.

I think visual map quality is not as important as the ability to use the map to lay the course back out on the ground. You can't find out how useful a map is until you are standing on the ground with the map in your hand. I have worked from some beautifully clear maps, only to find that there is not enough detail to accurately determine where the heck the course goes. Omitting detail for a better visual grade is counterproductive.

Should we have a random system like drug testing uses? Random tests keep most players honest. Would random verifications not make people double-check their numbers, and maybe do another ride just for better stats?

Should there be an anonymous reporting system, so that if we end up riding someone else's course and find it short, we can report it somehow and it gets assigned to someone to officially verify? Have you never come across marks that were... well, not in the same calibration range as yours?

Should there be the occasional checkup of work? In other fields I have been in, there have been checks on work product or knowledge, or both. I have had to pass tests, either open book or not, and do direct work that was either judged on output or monitored.

We could test people. The person taking the test could fly in with their bike or use one that was available. I could envision a permanent test course laid out over 13 miles with a varying scale painted all along the side. (The scale marks would be continuous but not consistent, 11 inches apart in places and 13 inches in others, so the person could not use the marks to correct from.)
The person taking the test would have to lay out a course using bean bags for split points. After corrections and adjustments, they would record the numbers marked at the roadside, and these would be compared by computer to the pre-measured course.
If their bean bags were within inches of the correct places, they get an A. If their finish is within feet of the correct place, they get a B, and so on. (A sketch of the comparison logic follows.)
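
As a rough illustration of the comparison step, the sketch below (in Python) grades a ride by the largest offset between the rider's bean-bag placements and the pre-measured reference points. The thresholds (6 inches for an A, 10 feet for a B) and the data layout are my own assumptions; the post leaves both open.

    # Sketch of the test-course comparison described above. Reference
    # positions and grading thresholds are illustrative assumptions.
    def grade_ride(reference_ft, measured_ft):
        """Grade by the worst split offset; positions are feet from the start."""
        worst = max(abs(measured_ft[s] - reference_ft[s]) for s in reference_ft)
        if worst * 12 <= 6:      # every bean bag within inches of its mark
            return "A"
        if worst <= 10:          # within feet of the correct place
            return "B"
        return "C"

    # Hypothetical splits on a 13-mile test course:
    reference = {"mile 1": 5280.0, "mile 2": 10560.0, "finish": 68640.0}
    ride      = {"mile 1": 5280.3, "mile 2": 10560.1, "finish": 68640.4}
    print(grade_ride(reference, ride))   # -> A (worst offset 0.4 ft = 4.8 in)
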

Also: Most people have a home cert course, something close by for use on local races. Mine is a half-mile cert right outside my front door. (It helps to live on a straight, flat road.) It would not be difficult for this nation to have one or two sets of very accurate distance-measuring systems that could be used to verify people's home cert courses.

Maybe to keep an A grade you have to go and verify X number of other people's courses.

Anyone good enough to catch their errors is also good enough to fudge their math. The key is verification of the work on the ground, not the paperwork.

ALSO: Your thinking about grades is all wrong. You are assuming you are an A and everyone works down from there. This is working in the wrong direction. The grades should start from 1 and go up. That way you have headroom as new technologies or other grades show up. So you start at level 1 and go up, just like school.

Measurement and verification of certification are different issues and should not be mixed.

I think you should have a better measurement grade based primarily on the accuracy of your work and secondarily on your experience.

Maybe you can combine the two, something like this:

  • Grade 0 - has taken the class
  • Grade 1 - has done a couple of courses
  • Grade 2 - has had at least one course verified
  • Grade 3 - has had 5 courses verified, at least one of 15K or more, all found to be within x% accuracy (at least 3 times better than the maximum allowed error)

  • Grade 3G - has Grade 3 and has passed a test on assembling and running a group measurement with multiple bikes over a course of at least a half marathon

  • Grade 3I - has Grade 3 and has done the course and practical for international events
