
Reply to "ARE CALIBRATION COURSES UNNECESSARILY LONG?"

> Might be worth another experiment though.

Well, it took a while, but I finally got around to doing that other experiment.

I measured a 300-meter calibration course and included marks at the 100-meter and 200-meter locations. At each of the three marks (100m, 200m, 300m) I put 4 pieces of tape just before the mark and 4 pieces just after it, at random locations, and then measured the distance of each piece of tape from its nearby mark. Here's what it all looked like at the 200m mark.

The cones are at the 1st tape piece, the 200m mark, and the last tape piece, just to help me identify everything as I approach.

Now that the course was set up, I began my calibration rides. The plan was to ride to each of the 24 tape pieces once, and ride back from each of them once, for a total of 48 rides: 16 close to 100m, 16 close to 200m, and 16 close to 300m.

The rides were done in a mostly random order: I would ride to a random piece of tape and return from one of the other 7 pieces near that mark. The random ordering was all set up ahead of time.
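For anyone curious how a schedule like that can be laid out, here is a rough Python sketch. The tape labels and the seed are made up; the only real requirements are that each tape piece is ridden to once and ridden back from once, and that the whole order is fixed before the first ride.

```python
import random

random.seed(1)                                 # any seed; the point is the schedule is fixed in advance

marks = [100, 200, 300]                        # meters
tapes = {m: [f"{m}m-T{i}" for i in range(1, 9)] for m in marks}   # 8 tape pieces per mark (made-up labels)

schedule = []                                  # (mark, tape ridden to, tape ridden back from)
for m in marks:
    while True:
        out_order = random.sample(tapes[m], 8)     # each tape piece ridden *to* exactly once
        back_order = random.sample(tapes[m], 8)    # each tape piece ridden *back from* exactly once
        if all(o != b for o, b in zip(out_order, back_order)):   # never turn around at the same piece
            break
    schedule.extend((m, o, b) for o, b in zip(out_order, back_order))

random.shuffle(schedule)                       # mix the three distances together
for i, (m, out_tape, back_tape) in enumerate(schedule, 1):
    print(f"trip {i:2d} (~{m} m): out to {out_tape}, back from {back_tape}")
```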

This may seem like an awful lot of trouble to go through, but setting it up this way gave the experiment 3 things that would not have been possible otherwise:

1) A fairly large number of trials, 48.

2) Completely blind observations. As I was taking data, the numbers had no significance to me, which would not have been the case if I had simply repeated 100, 200, and 300m rides.

3) Compensation for the time factor. Because the trial order was randomized, I was able to factor out the effect of time (increasing temperature) in the analysis.
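One way to factor out a time trend afterwards is an ordinary least-squares fit of the cal constant against time of day, then subtracting the fitted drift. The sketch below uses simulated stand-in numbers just to show the idea; the real analysis is up to the statistician. Because the distances were visited in random order, any drift like this gets spread evenly across the 100m, 200m, and 300m rides instead of piling up on whichever distance was ridden last.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data, not the actual rides:
# time of each ride in minutes since the start, and the cal constant from that ride (counts/km).
t = np.sort(rng.uniform(0, 120, 48))                  # 48 rides over a couple of hours
const = 11030 - 0.02 * t + rng.normal(0, 1.5, 48)     # slow drift as the tire warms, plus noise

slope, intercept = np.polyfit(t, const, 1)            # fit: constant = intercept + slope * t
detrended = const - slope * (t - t.mean())            # remove the drift, keep the overall level

print(f"estimated drift: {slope * 60:+.2f} counts/km per hour")
print(f"mean cal constant, raw vs. detrended: {const.mean():.1f} vs. {detrended.mean():.1f} counts/km")
```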

The charts below show the PRELIMINARY results. I have sent my data to a statistician friend, and I'm sure the results will change somewhat after she gets a hold of them.


This chart shows the calibration constants calculated from each of the 48 rides. There is noticeably more scatter in the constants from the shorter calibration rides.
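For reference, each point on that chart is just the Jones reading for a ride divided by the ridden distance (the mark plus or minus the tape offset), scaled to counts per kilometer. Something like this, with made-up numbers:

```python
def cal_constant(counts, mark_m, tape_offset_m):
    """Cal constant in counts/km from one ride.
    counts        -- Jones counter reading for the ride
    mark_m        -- nominal mark ridden to (100, 200 or 300)
    tape_offset_m -- signed distance of the tape piece from the mark, in meters
    """
    distance_km = (mark_m + tape_offset_m) / 1000.0
    return counts / distance_km

# Made-up example: 1104 counts on a ride to a tape piece 0.37 m past the 100 m mark.
print(round(cal_constant(1104, 100, +0.37), 1), "counts/km")
```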


This chart shows the means of the cal constants calculated from the rides at the three distances. The differences are very small, on the order of 1 count/km, and I'm guessing they are statistically insignificant. For this experiment at least, there was little or no "wobble" effect.
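When the statistician gets to it, a standard check for whether those three means really differ is a one-way ANOVA (or a variant that tolerates the unequal scatter between distances). A rough sketch with simulated stand-in numbers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins: 16 cal constants per distance, same underlying mean,
# with more scatter at the shorter distances (as in the charts).
c100 = rng.normal(11030, 3.0, 16)
c200 = rng.normal(11030, 1.5, 16)
c300 = rng.normal(11030, 1.0, 16)

f, p = stats.f_oneway(c100, c200, c300)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.3f}")   # a large p means no evidence the means differ
```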


This chart shows the 1-sigma (~68%) and 3-sigma (~99.7%) errors. For example, you have about a 99.7% chance of your error being less than 0.029% if you ride a 300m calibration course 4 times and average the results. This error drops to 0.02% if you ride it 8 times and average the results.
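Those numbers behave like the usual averaging rule: if a single ride has a 1-sigma error of s, the average of n independent rides has a 1-sigma error of s/sqrt(n). Back-solving from the 4-ride figure above gives a single-ride 1-sigma of roughly 0.019% for the 300m course, which is the only assumption in this quick check:

```python
import math

def three_sigma_after_averaging(single_ride_sigma_pct, n_rides):
    """3-sigma error (in %) of the mean of n independent rides,
    assuming each ride has the given 1-sigma error in %."""
    return 3 * single_ride_sigma_pct / math.sqrt(n_rides)

single_ride_sigma = 0.019   # % -- roughly what back-solving the 4-ride figure gives
for n in (4, 8):
    print(f"{n} rides averaged: 3-sigma error ~ {three_sigma_after_averaging(single_ride_sigma, n):.3f} %")
```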

Because the means of the cal constants from the three course distances are nearly the same, I'm guessing the scatter in the individual readings comes mostly from the fact that you can't really discriminate less than 1/2 count on the Jones counter, which translates to 5 counts/km for a 100m cal course. Using a marked-up rim for the calibration rides, as Neville suggests, might reduce this scatter.
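The arithmetic behind that: half a count over 100m is 0.5 / 0.1 km = 5 counts/km, while the same half count over 300m is only about 1.7 counts/km. As a fraction of the constant (calling it ~11,000 counts/km, which is an assumed ballpark figure, not my measured value):

```python
HALF_COUNT = 0.5          # reading resolution of the counter, per ride
TYPICAL_CONSTANT = 11000  # counts/km -- assumed ballpark figure, not a measured value

for course_m in (100, 200, 300):
    per_km = HALF_COUNT / (course_m / 1000.0)     # counts/km of uncertainty
    pct = 100.0 * per_km / TYPICAL_CONSTANT       # as a % of the constant
    print(f"{course_m} m course: +/-{per_km:.1f} counts/km  (~{pct:.3f} % of the constant)")
```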