
I have a course that has been submitted 8 times and returned 7. There are myriad problems: missing data, collaborator data used in the wrong place, extra copies of maps, and one attachment for the map and another (extra data) for the second map sheet. I think the fact that we are two riders is beyond the capability of the system. Is anyone else collaborating? Is anyone else having problems with the online system?



Just as a side note... The LDE, or Limited Data Entry, option is available to regional certifiers and final signatories only; it is not available to measurers. It allows the certifier to create a course certificate manually, in much the same way certificates were processed before the online system. The certifier is responsible for manually checking the calibration and course measurement data and methods, as well as for manually calculating the drop and separation.
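For anyone unfamiliar with that last step, here is a minimal sketch of the drop and separation arithmetic a certifier would do by hand under LDE. The function names and all the numbers are hypothetical illustrations, not taken from the system:

```python
def drop_m_per_km(start_elev_m, finish_elev_m, course_km):
    """Net elevation loss from start to finish, in meters per kilometer."""
    return (start_elev_m - finish_elev_m) / course_km

def separation_pct(straight_line_km, course_km):
    """Straight-line start-to-finish distance as a percentage of course length."""
    return 100.0 * straight_line_km / course_km

# Example: a 10 km course starting at 250 m elevation, finishing at 245 m,
# with the start and finish 3.2 km apart as the crow flies.
print(drop_m_per_km(250.0, 245.0, 10.0))   # 0.5 (m/km)
print(separation_pct(3.2, 10.0))           # 32.0 (%)
```

The certifier then compares these values against the applicable record-eligibility limits.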

The LDE method would be used if the course had a small change at the start or finish, was renamed, had information on the certificate corrected, or was measured with a method the online system does not support. Hope this helps. -- Justin

After having gone through this recently, would it make sense to list the measurement strategies that do not align well with the online system, so that if a measurer measures a course in one of those ways, he or she can at least contact the certifier, explain what was done, and get the certifier's guidance on how the application should be submitted?

For a measurement with two riders, it appears that the system uses only the originator's calibration constant to compute the distance for both riders, instead of using each rider's own calibration constant. This can cause the difference between the two rides to exceed the 0.0008 allowable difference. This information has been sent to the application developer for review, with details from the Guido Brothers' input to the online system. We are waiting for a response from the application developer and will share any information as it becomes available. Thank you.
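To illustrate the reported behavior, here is a minimal Python sketch of the two-rider comparison. The 0.0008 tolerance is from this thread; the counts and calibration constants are made-up example values, not real data:

```python
TOLERANCE = 0.0008  # maximum allowed relative difference between the two rides

def ride_km(counts, constant_counts_per_km):
    """Course length implied by one ride: counts / that rider's constant."""
    return counts / constant_counts_per_km

def within_tolerance(d1_km, d2_km, tol=TOLERANCE):
    """True if the two rides agree within the allowable relative difference."""
    return abs(d1_km - d2_km) / min(d1_km, d2_km) <= tol

# Hypothetical data: rider A (originator) and rider B (collaborator).
a_counts, a_const = 110_050, 11_000.0
b_counts, b_const = 110_850, 11_080.0

# Correct: each rider's distance uses his own constant -> the rides agree.
print(within_tolerance(ride_km(a_counts, a_const),
                       ride_km(b_counts, b_const)))   # True

# Reported bug: originator's constant applied to both riders -> spurious failure.
print(within_tolerance(ride_km(a_counts, a_const),
                       ride_km(b_counts, a_const)))   # False
```

The second comparison shows why two perfectly consistent rides can appear to exceed the 0.0008 limit when a single constant is applied to both.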

This makes sense to me, Jay. 

I know Jim Gilmer's goal with V. 2 of "the system" is to incorporate more and more types of measurements into the options for entry. I have a peripheral understanding of the priorities and the complexities of building an "omnibus" data entry portal. This knowledge is sufficient to inform me that we'll likely never have a system that readily incorporates all scenarios.

I think Jim and his team have done a superb job of determining which kinds of measurements are "cookie cutter" in nature, and they have made it easy to submit courses that comprise the largest volume by percentage. There is an inverse relationship between the more complicated course submissions and the volume of them, so naturally these types of submissions will be made available in subsequent versions of the portal. It seems to me that there is also a corresponding relationship between the less-frequent types and the amount of development work that goes into creating the online forms that will handle them.

Jim and his team have put a monumental amount of work and time into creating and managing this system. Jim has done all of this as a volunteer. If you are of a mindset to give Jim and his team a shout out, please do so. They richly deserve it.

I think it's a classic application of the 80-20 rule.  You could put in 100% of the work necessary to cover 100% of the scenarios, but 20% of the work will cover 80% of the scenarios, and the return on effort/investment diminishes from there.  The system makes things smoother for far more scenarios than it doesn't, especially those scenarios that involve less experienced measurers.  If I were still scanning over 400 maps and/or certificates like I did for a couple of years, I would welcome this in a big way.

This whole thread is not meant to cast stones at Jim and his team. I understand that most of the programming involved with the new system goes well beyond the arithmetic of the calibration constants, elapsed counts, and percent-difference calculations, and that a lot of work is required to accommodate all the variations that can occur in obtaining the data and processing that simple arithmetic. The data for things like collaboration, segments, and course adjustments looks like all the normal measurement data, but requires special handling by the program. I am sure that Jim and his team will develop a way to accommodate these variations. As soon as they do, we will think of new twists for special cases that will require more programming.

In the meantime, I have 2 measured courses that crash the system and 5 more to measure.  A workaround would be appreciated. 

I also now understand that my single-data-entry-person idea may not work if the collaborator's data is somehow not used. As of now, the person who originates and finally submits the certification request must sign out of the system, the collaborator must sign in separately to enter his or her data and then sign out, and the originator/submitter then signs back in to complete the Measurement Comparison and Application for Certification screens. Maybe these two hand-off gaps have something to do with the loss of the collaborator's data; a single-data-entry-person capability would eliminate them.

As I understand the problem, there are known issues with the PDF forms (generated internally from the data entered on the screens the measurer sees and uses), especially when the input involves a collaborator. The measurer can't see these forms before submitting the course to the certifier. I'm not sure of the workaround, but it involves the certifier reviewing the raw forms (not the same as the PDF data or PDF forms).

My understanding is that there is a bug in the app that causes it to not work correctly if you are measuring a course with a collaborator. So if you have a course that was measured with a collaborator you should talk to your certifier about submitting it as LDE.

The developer or someone else can correct me here if I am wrong about this.

The worksheets for collaborators are correct, and the collaborator function should be used instead of an LDE if possible. What is incorrect is the summary PDF sheet that compares the two rides: it uses a single constant for both riders, which can cause the difference between the two rides to exceed the 0.0008 allowable difference. Certifiers and final signatories should use the worksheets to compare the rides and ignore the PDF summary sheet. The developer can possibly provide additional information, or remove the PDF summary sheet to avoid confusion. It is my understanding that no fix has been proposed at this time.

