
Now that I have your attention, let me assure you I'm not talking about anything radical. Currently we say that a certified course
"must be at least as long as the advertised distance"

I would suggest changing that to
"must be at least as long as the advertised distance, but should not be more than X% longer than the advertised distance"
Original Post


Mark,

You have brought up an interesting point, which could be discussed at the convention. However, first let me give you a little history of the SCPF/SPR, as told by Bob Baumel to many of us a short time ago.

Here it is!

I'll provide some more background on these issues. As an interesting point, the AEVM (Allowance for Error in the Validation Measurement), which has never been used in any country except the U.S., didn't come into use until at least 7 years after adoption of the SCPF. The SCPF was introduced in the U.S. in 1982 (possibly the IAAF began using it earlier, but I'm not sure of that). To show you all the historic September 1982 letter in which Ted Corbitt announced the SCPF, I've looked through my (paper) files, found a copy, scanned it, and posted it at http://www.rrtc.net/SCPF_Adoption_1982.pdf (Note: the letter from Ted is the first page of this pdf, which also includes some material from Ken Young, as I'll describe below). This copy of Ted's letter was addressed personally to me; I assume he typed individual copies for each of the regional certifiers around the country (there were only a handful of us at the time).

The 2nd paragraph of Ted's letter, beginning "The one meter rule is dead," marked a new, strict interpretation of the SPR (The previous rule was to stay one meter from road edges). Ted's 3rd paragraph introduced the SCPF.

Pages 2 and 3 of this pdf include material on the same topic by Ken Young, taken from December 1982 NRDC News. This is actually part of a longer article that you can read in the Course Measurement archive at http://www.runscore.com/coursemeasurement/ (and you may also find it interesting to read Ken's comments in other NRDC News issues from that period).

The material I've included from Ken discusses changes agreed to at the 1982 TAC Convention a few months after Ted's letter. You'll notice one procedural difference: Whereas Ted said in Sept 1982 that if you do two measurements you can average them, the Dec 1982 agreement said to use the measurement that produces the longer race course (which is still the recommended procedure in our current Course Measurement Manual). Most importantly, Ken described a major change in course measuring philosophy, a paradigm shift: Whereas the goal had previously been to measure courses "as accurately as possible," the new philosophy, based on record keeping requirements, is to ensure that courses are at least the advertised distance.

Ken Young was RRTC's first Validation Chairman, and also played the major role in rewriting the rules for both course certification and road race record keeping in the early 1980s. Ted Corbitt had been the father of accurate course measurement in the U.S., but it was Ken who put certification on a rigorous basis. Ken's work enhanced the reputation of road racing, which had previously been considered a poor relative of track racing. Ken was keenly aware of the intimate relationship between an effective course certification program and an officially recognized system of road records. The certification program is clearly necessary for record keeping. But in the other direction, the record keeping program is necessary for establishment of a rigorous certification program with enforceable standards. Without the record keeping, we probably wouldn't have our current, unified certification program under the auspices of the national sport body. And, obviously, we wouldn't have validations, so we wouldn't have a Validation Chairman.

The record keeping philosophy asserts that for a mark to be approved as a record, we need high confidence that the runner ran at least the stated distance in a time at least as fast as stated. For course measurement, this leads to the concept of one-sided tolerances. Every measurement has uncertainty, but we use various techniques to try to push the range of uncertainty to one side of the advertised distance, so we feel confident that the course is at least the stated distance. Recall my previous observation that although Ted's Sept 1982 letter advised averaging two measurements of the course, the subsequent decision in Dec 1982 was to pick the measurement that produces the longer race course. This is another example of a technique intended to push the uncertainty range to one side.
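As a rough illustration of how the "longer race course" rule pushes the uncertainty to one side, here is a small sketch in Python (the function name and example numbers are mine, not anything from the rules; the 1.001 factor is the SCPF discussed above):

def required_adjustment(measurement_1, measurement_2, advertised):
    """Rough sketch of the post-1982 two-measurement rule.

    Each measurement is the length of the laid-out course as indicated by one
    ride. The course is judged against the ride that shows it as shortest, so
    any lengthening is based on that ride -- which is exactly the choice that
    produces the longer race course.
    """
    scpf = 1.001                        # Short Course Prevention Factor (+0.1%)
    target = advertised * scpf          # minimum laid-out length we want to see
    governing = min(measurement_1, measurement_2)
    shortfall = target - governing
    return max(shortfall, 0.0)          # meters to add; 0 if the course already qualifies

# Example: two rides of a nominal 10 km layout
print(required_adjustment(10_008.5, 10_011.2, 10_000.0))   # 1.5 m must be added

Averaging the same two rides would call for only about 0.15 m, which is why taking the shorter measurement as the governing one yields the longer final course.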

One-sided tolerances of this sort are common throughout sport. When laying out a 400 m track, it's standard to include a few extra centimeters, to ensure that the length is at least 400 m. Race times are routinely rounded up; distances of throws and jumps are rounded down. Throwing implements must weigh at least the weights specified in the rules.

Occasionally, we encounter a course measurer who doesn't feel comfortable with these one-sided tolerances, but would apparently prefer to return to the pre-1982 measurement strategy, where the goal was to produce measurements with a range of uncertainty centered on the stated distance. Unfortunately, such an approach isn't consistent with the record keeping philosophy that's been at the heart of course measuring since 1982.

Incidentally, among the various changes adopted in 1982, the one that had the biggest impact in most cases (bigger than the SCPF) was probably the stricter SPR interpretation. That's because, depending on the overall curviness of a course, the effect of measuring at 30 cm instead of 1 meter from road edges can easily exceed 1/1000 of the course length.
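To put a rough number on that, here is a back-of-the-envelope sketch (my own geometry, assuming each turn is an inside corner that actually constrains the shortest possible route):

import math

# Riding a bend at 1 m from the curb instead of 30 cm lengthens the path by
# (offset difference) x (turn angle in radians), for bends where the curb
# constrains the shortest possible route.
offset_difference = 1.0 - 0.3                            # meters
one_right_angle = math.pi / 2                            # a single 90-degree corner
extra_per_corner = offset_difference * one_right_angle   # about 1.10 m

tolerance_10k = 0.001 * 10_000                           # 1/1000 of a 10 km course = 10 m
corners_to_exceed = tolerance_10k / extra_per_corner
print(round(extra_per_corner, 2), round(corners_to_exceed, 1))   # ~1.1 m per corner, ~9 corners

So on this rough picture, something like nine right-angle bends is already enough to push the difference past 1/1000 on a 10 km course.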

Given all the changes in course measuring and record keeping adopted in the early 1980s, some US measurers began arguing for a negative tolerance in validation measurements, known as the AEVM (Allowance for Error in the Validation Measurement). I supported that position myself, to some extent, although Pete was the major advocate. The arguments were largely semantic, based on interpreting the words in the existing TAC rule: "shows that the actual course distance was shorter than the stated distance."

You can read many of the arguments on this topic in the pages of Measurement News which, like the NRDC News issues, is available in the Course Measurement archive at http://www.runscore.com/coursemeasurement/. The arguments reached a peak in 1988. Then, in the November 1989 issue (MN #38), Pete issued a proclamation titled "RRTC Guidelines to Interpretation of Validation Measurements" which made the AEVM an official RRTC policy.

In 1997, that policy was refined somewhat by specifying when a course that "passes" validation could be considered prevalidated for future races or would have to be adjusted before being considered prevalidated. This didn't change the previous policy (adopted in 1989) about when a course passes or fails validation, but only attempted to clarify when such a course could be considered prevalidated for future races.

Meanwhile, this RRTC policy on AEVM, and the related policies adopted in 1997, were never accepted internationally. Consequently, in 2007, RRTC reversed these practices in order to conform with the IAAF position.

Reviewing all the history, we see that the SCPF was adopted 27 years ago, as part of a change in philosophy to make course measurement consistent with record keeping. Since then, the SCPF has remained a standard procedure for measurements all around the world. The AEVM was accepted for a period of 18 years (from 1989 to 2007), but only in one country, i.e., the U.S. It would be interesting to know how many validations during that period fell in the "gray" area, where the measurement came up short, but by less than 0.05% (does anybody have that statistic?). For any validations that came out that way, the course would be said to "pass" by the RRTC interpretation, but "fail" by the IAAF interpretation. Of course, this discrepancy was untenable, so RRTC eventually had to change its policy to match IAAF.
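To make the "gray" area concrete, here's a small sketch (my own function and example numbers; the 0.05% figure is the AEVM value discussed above) of how a single validation result would come out under each interpretation:

def validation_outcome(measured_length, stated_distance, aevm=0.0005):
    """Classify one validation measurement (rough sketch, not official wording)."""
    shortfall = (stated_distance - measured_length) / stated_distance
    if shortfall <= 0:
        return "passes under both interpretations (measured at least the stated distance)"
    if shortfall < aevm:
        return "gray area: passes under the 1989 RRTC AEVM policy, fails under IAAF"
    return "fails under both interpretations"

# Example: a 10 km course that a validation ride finds 3 m short
print(validation_outcome(9_997.0, 10_000.0))   # gray area (0.03% short, less than 0.05%)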

Bob Baumel
Thanks for that history, Gene. I had heard some of it before, but not all.

To be clear, I'm not suggesting any procedural or rule change here, just a change in the description of what we already do.
This is an issue because whenever anyone suggests that some certified course is too long (why don't they ever complain they're short?), there is always someone close at hand to chime in, "Well ya know, USATF doesn't care if the course is too long. Their only requirement blah blah blah..." This would simply be a clear statement that says yes, the USATF and RRTC do care if the course is too long.
On the surface it may look attractive, but we really don’t know exactly how long a course is. To say a course should have a length somewhere within the SCPF limits implies knowledge we do not possess.
The SCPF has proven its worth as judged by how validations come out, but the jury is still out on exactly how long a given course is. Nobody knows. This does not apply only to roads. It’s the nature of measurement.

If a perfect layout is performed by a perfect measurer, a 10k will be exactly ten meters oversize. If any calibration change occurs, the larger constant will cause the course to be more than ten meters oversize. If the average is used, the water is muddied by the uncertainty of exactly what the most accurate constant is.
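A quick sketch of the arithmetic (my numbers; the 0.05% constant difference is just an assumed example, not a typical value):

advertised = 10_000.0                 # a 10 km course, in meters
scpf = 1.001                          # Short Course Prevention Factor, +0.1%

perfect_layout = advertised * scpf    # 10,010 m: exactly 10 m oversize
print(perfect_layout - advertised)    # 10.0

# If the constant used for the layout is, say, 0.05% larger than the constant
# that actually applied while riding the course, the laid-out course grows by
# roughly that same proportion on top of the SCPF.
constant_ratio = 1.0005
with_larger_constant = advertised * scpf * constant_ratio
print(round(with_larger_constant - advertised, 1))   # about 15.0 m oversize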

We have a working procedure in place, the interpretation of which remains somewhat inexact. I believe we should avoid writing rules which imply knowledge we don’t possess.
Pete, for all the reasons you mention I'm suggesting we NOT make a new rule. That's why I would use the phrase "should not be longer" rather than "must not be longer."
And the "X" would not be 0.1%. We add that because we believe we can't control 0.1%, so "X" would be at least 0.2%. Because we do other things that make the course longer, like using the larger cal constant, "X" would probably be a bit bigger than that. You have a bunch of validation data that gives a pretty good indication of how long courses actually end up being. That could also be used a way of selecting, and a way to justify, what is chosen for the value of "X."
Mark, 2 courses I've measured were validated to WAY more than .2% long. Does that mean they shouldn't be regarded as certified and any records are lost? (A note: I could comment on the validation performance - but won't in this space). I agree w/Pete and say we leave alone the wording on what qualifies as a certified course. I don't see why we should care that the general running public might think a certified course is more than .1% long.

Perhaps a different argument could be made for shortening validated courses that come up more than .2% long?
Again, I'm not suggesting any change to any of the rules. Perhaps "should not be longer" is too strong of a phrase since people seem to be interpreting that as a requirement. I just think there should be some statement of an estimate of the accuracy in there. "Must be at least as long as the advertised distance" says nothing about the accuracy of the course.
