13.0 QUALITY CONTROL METHODOLOGIES

This section provides suggested methodologies to ensure that the final, delivered point cloud meets the desired data collection category requirements and geometric accuracy described in the previous section.  Appendix F discusses classification accuracy and completeness evaluations, should those be necessary.  This section will focus solely on geometric accuracy and resolution evaluations.

Ideally, a validation dataset should be an order of magnitude more accurate than the network or local accuracy specification requested. However, this is a challenge for the highest accuracy MLS datasets since there are limited technologies that would meet this criterion. For example, control for a project will normally be set using GNSS (long-duration static occupations for highest accuracy (sub-cm) or RTK for faster evaluation (a few cm)), terrestrial scanning (cm), or total station/digital leveling (sub-cm), depending upon the required accuracy of the resultant product. Further, while instruments such as a total station provide very high local accuracy, coordinates still must be tied to control for network accuracy evaluations. Leveling can provide high vertical accuracy (sub-mm), but does not provide the ability to assess horizontal accuracy.

There are several differences between kinematic and conventional surveying. An important concept with MLS is that each point is individually geo-referenced; hence, each point will have a unique geo-referencing solution and associated accuracy value (although current methodologies do not actually provide precision information for each point as is commonly provided with total stations). In conventional surveying, however, multiple data points are acquired from a single setup and are geo-referenced together. Multiple setups can be linked together to complete the survey.

13.1    Control requirements for evaluation

This section provides guidance as to how validation points should be acquired for verifying the accuracy and data collection category specifications. More than 20 points should always be used for the QC evaluation in order to compute a 95% confidence (FGDC, 1998). However, to be statistically significant in sampling the large datasets obtained by MLS, many more points should be used in validation.  For MLS surveys in which geometric corrections are applied to control points, the validation points must be different from the control points used for the adjustment.  Additionally, these points should be widely distributed throughout the project in order to reflect variance across the project extents. For example, to consider variability in accuracy across the road, one can place validation points in pairs directly across from each other on each side of the road, or alternate sides along the road. In cases where the primary data of interest is not in the road, validation points should be acquired on the features of interest, when possible, since accuracy will degrade with range.  One should always examine the dataset for clusters of high-error validation points, which can indicate localized problems in the dataset.

The frequency at which these evaluation tests are performed depends on the desired data collection category. For example, a certification at 1A (highest DCC) would require more frequent validations than a certification at 3C (lowest DCC). The following intervals (which will vary depending on project requirements) are recommended as spacing for validation points:

Accuracy Level 1: Validation points spaced at 150 – 300 m (492 – 984 ft) along the highway.
Accuracy Level 2: Validation points spaced at 300 – 750 m (1,000 – 2,500 ft) along the highway.
Accuracy Level 3: Validation points spaced at 750 – 1,500 m (2,500 – 5,000 ft) along the highway.

The statement of work should discuss the frequency, type, and location of validation points along the highway.
Recommendation:  QC checks should be performed more frequently in locations with poor GNSS quality (PDOP > 5.0; e.g., dense urban areas or heavy tree canopy) or other known problem areas.

Evaluation surveys should be completed independently using methods with higher accuracy. For example, accuracy level 1 certification would require the evaluation control to be tied into rigorous control established via static GNSS observations (see NGS-58, Guidelines for Establishing GPS-Derived Ellipsoid Heights), whereas accuracy level 2 and 3 certifications for mapping and asset management purposes can generally be verified using faster methods such as RTK GPS.

Recommendation:  Validate the dataset using an independent data source of higher accuracy than any control used in acquiring or processing the dataset.

13.2    Suggested geometric accuracy evaluation procedures

13.2.1 Quantitative analyses

Currently, many MLS projects are geometrically corrected (adjusted) using control points and verified using discrete validation points.  This process can be very cumbersome, particularly for projects spanning long corridors or with complex ramp structures. Further, it is difficult to obtain sufficient density to appropriately evaluate horizontal accuracy on a validation point or target because there is no guarantee that the laser pulse will actually hit the center of the target or that the point will be detectable in the point cloud.  As such, often only vertical error is reported.  Although this may be acceptable for certain applications, others require more stringent horizontal accuracies.

Recommendation:  Require that a 3D accuracy (including both horizontal and vertical components) at 95% confidence be reported.

Validation points can be obtained through the use of artificial or natural targets that have been appropriately surveyed with an independent source.  Any artificial targets need to be placed prior to MLS acquisition.  Targets with fixed dimensions can often be incorporated into software as templates and fit to the point cloud, and several packages can automatically extract these objects from the point cloud.  The 3D error is calculated as the distance between the center (or other key point) of the target and the validation point coordinates (a computational sketch follows the target descriptions below).  Some suggested targets include:

  • Preplaced, non-reflective, patterned survey targets.  These targets can either be established directly above a control point or have their centers tied into a network via a total station.  These targets will generally be too small for lower resolution MLS acquisitions, but will work for higher resolution acquisitions where automated fitting and detection algorithms can extract the target centers.  Although in low (dm to m level) resolution point cloud datasets these targets may be identifiable in a higher resolution photograph, the fidelity of coordinates extracted from the photograph will depend on the accuracy of the camera calibration.  More complex (e.g., checkerboard) patterns can also be used to verify proper image calibration.

Figure 11: Examples of patterns for survey targets.
  • Reflective target measurements. Pre-placed reflective targets or reflective features (e.g., turn arrows, striping, etc.) can be easily detected in the point cloud because of their high intensity returns. One can acquire control coordinates on a defined part of the reflective object (e.g., a corner) and compare the distance between the MLS and control coordinates of that point. For specific examples of how to apply this method, see Toth et al. (2008). A chevron shape (Figure 12) is a popular target for MLS because it is easy to place and allows both horizontal (point of the chevron) and vertical accuracy validation.

Figure 12: Example of a painted chevron shape.
There are some important considerations for using reflective targets for mobile LIDAR:
  • The target should be modeled so that the desired comparative point (e.g., center or corner) location is improved by interpolation rather than requiring the selection of a single point in the point cloud.
  • Because the vehicle is moving, sampling on the target may not be sufficient to evaluate the desired point (typically the center or corner) of the target.
  • Reflective surfaces can be problematic when scanning:
  1. Saturation – At close distances (in some cases up to 50 m, depending on the material and scanner), the laser returns from reflective features will be very strong. As such, it can be very difficult for the laser scanner firmware to resolve the peak of the returning waveform to accurately determine range.
  2. Blooming – At far distances, reflective targets will be enlarged in the point cloud. This is because the laser spot size increases with distance. At far distances, a portion of the laser may hit the edge of the target, which would cause the point to have a higher intensity value. If the center of the object is of interest, then the effect is minimized if symmetric coverage of the target is obtained. However, if the edge or corner is desired, there may be a bias in the point cloud.
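To make the 3D error computation described above concrete, the following is a minimal sketch (in Python/NumPy; not part of the original guideline). The function name, arrays, and 1-cm noise level in the usage example are hypothetical; the 1.6166 factor anticipates Equations (7) and (8) later in this section.

```python
import numpy as np

def target_errors(mls_centers, validation_coords):
    """3D error per target: Euclidean distance between the target center
    extracted from the MLS point cloud and the independently surveyed
    validation coordinate (both arrays are N x 3, same datum and units)."""
    offsets = mls_centers - validation_coords     # per-axis offsets
    err3d = np.linalg.norm(offsets, axis=1)       # 3D error per target
    rmse3d = np.sqrt(np.mean(err3d ** 2))         # 3D RMSE over all targets
    acc95 = 1.6166 * rmse3d                       # 95% confidence statistic
    return err3d, rmse3d, acc95

# Hypothetical usage: 25 targets (more than the 20-point minimum noted in 13.1)
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, (25, 3))              # surveyed validation coords
measured = truth + rng.normal(0, 0.01, (25, 3))   # ~1 cm noise, for illustration
err3d, rmse3d, acc95 = target_errors(measured, truth)
print(f"3D RMSE = {rmse3d:.4f} m, 95% accuracy = {acc95:.4f} m")
```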

In addition to specific targets, feature modeling can be used for error assessment and provides a more rigorous check compared to single points.  In the above examples, when fitting procedures are performed to extract the target centers, the resulting error estimate is actually an error of the modeling process rather than the point cloud itself.  For example, in fitting a plane for a target shape, some systematic noise will be filtered in the process.

The methods presented below will not be cost-effective to implement across the entire project; however, they could be implemented at key locations.  In particular, they are more effective than the previous methods for evaluating calibration errors in the MLS system.

  • Iterative Closest Point (ICP) least-squares fitting analysis between mobile LIDAR and static Terrestrial Laser Scanning (sTLS). The strength of this approach is that thousands to hundreds of thousands of data points across an area are used for validation.  The disadvantage is that it cannot be implemented as frequently.  The results are also influenced by the network accuracy of the static scan.
  • ICP least-squares fit of cross sections. Cross sections obtained across the road surface (preferably in two directions, such as across an intersection) using another technique such as a total station can also be used for validation. In order to obtain a 3D error estimate, one should acquire at least two cross sections perpendicular to each other (e.g., one North-South and one East-West).  For examples of this method, see Williams (2012).
  • Planar, least-squares fitting approach (Skaloud and Lichti, 2006). For this approach, one acquires sample points, using another survey methodology, on planar features (at the desired interval) visible in the MLS data. Point density can be determined by dividing the number of MLS points on the plane by the total plane area. 3D accuracy can be assessed by measuring offsets of the MLS points from multiple planar surfaces facing different directions. The local accuracy can be determined by evaluating the residuals following a least squares fit of the MLS points to the plane (see the sketch after this list). A potential limitation of this method is that some surfaces may not actually be planar. Useful surfaces include:
  1. Road surfaces, sidewalks, etc. for vertical evaluation.
  2. Curbs, buildings or walls for horizontal evaluation.
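A minimal sketch of the planar fit and residual evaluation is given below (Python/NumPy; not part of the original guideline). The total least-squares plane fit via SVD is a standard technique, not necessarily the exact formulation of Skaloud and Lichti (2006); the patch dimensions and noise in the usage example are hypothetical.

```python
import numpy as np

def plane_fit_stats(pts):
    """Least-squares plane fit to MLS points on a nominally planar surface.
    Returns the unit normal, the signed point-to-plane residuals, and their
    RMS, which supports the local-accuracy evaluation described above."""
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the right singular vector with the smallest
    # singular value is the plane normal (total least squares fit).
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (pts - centroid) @ normal
    rms = np.sqrt(np.mean(residuals ** 2))
    return normal, residuals, rms

# Hypothetical usage on a 2 m x 2 m road-surface patch (units: meters)
rng = np.random.default_rng(1)
xy = rng.uniform(0, 2, (500, 2))
z = 0.02 * xy[:, 0] + rng.normal(0, 0.005, 500)   # gentle slope + 5 mm noise
pts = np.column_stack([xy, z])
normal, residuals, rms = plane_fit_stats(pts)
density = len(pts) / 4.0                           # points on plane / plane area
print(f"RMS residual = {rms:.4f} m, density = {density:.0f} pts/m^2")
```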

The ICP least squares fits for the previous techniques should be constrained to a rigid body translation (no rotation) using a copy of the data. This translation should generally not be applied to the point cloud, but used solely for accuracy evaluation purposes.  The 3D error estimates can then be reported with 95% confidence as:

Network Error Estimate = D3D + 1.6166 * (3D RMSE)                      (7)

Local Error Estimate = 1.6166 * (3D RMSE)                               (8)

where:
  • D3D = √(ΔX² + ΔY² + ΔZ²), the magnitude of the 3D translation vector provided by the rigid body translation, for evaluation purposes only. It is also the average 3D offset between the point pairs of the validation dataset and the MLS dataset.
  • Network Error ≥ Local Error

The Local Accuracy is calculated directly from the residuals of the fit, whereas the Network Accuracy accounts for the overall shift in the data and will always be equal to or worse (a higher error value) than the Local Accuracy.  Note that ICP fits should have appropriate outlier screening criteria to define matching point pairs, since the algorithm estimates correspondences that are not necessarily true point pairs in reality.  If the dataset is of poor quality, ICP will be difficult to implement.  Further, point-to-plane variants of the ICP algorithm will generally yield more realistic error estimates by removing some of the resolution bias.  ICP algorithms are available in commercial and open-source static LIDAR and some airborne LIDAR software packages.
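The following is a minimal sketch (Python with NumPy/SciPy; not part of the original guideline) of a translation-only, point-to-point ICP with simple distance-based outlier screening, reporting the estimates of Equations (7) and (8). The threshold, iteration count, and array names are hypothetical, and a production workflow would more likely use a point-to-plane variant, as noted above.

```python
import numpy as np
from scipy.spatial import cKDTree

def translation_only_icp(mls, validation, max_dist=0.5, iters=20):
    """ICP constrained to a rigid-body translation (no rotation), applied to
    a working copy of the MLS points for evaluation purposes only.
    `max_dist` (m) screens outlier point pairs; the datasets are assumed to
    be roughly aligned already, as is typical for geo-referenced MLS data."""
    work = mls.astype(float).copy()   # never shift the delivered point cloud
    tree = cKDTree(validation)
    shift = np.zeros(3)
    for _ in range(iters):
        dist, idx = tree.query(work)
        keep = dist < max_dist                      # outlier screening
        step = (validation[idx[keep]] - work[keep]).mean(axis=0)
        work += step
        shift += step
    dist, _ = tree.query(work)
    keep = dist < max_dist
    rmse3d = np.sqrt(np.mean(dist[keep] ** 2))      # residuals of the fit
    d3d = np.linalg.norm(shift)       # magnitude of 3D translation vector
    network = d3d + 1.6166 * rmse3d                 # Eq. (7)
    local = 1.6166 * rmse3d                         # Eq. (8)
    return network, local, shift
```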

Finally, to verify that the assumption of normally distributed error is valid, the errors for at least 95% of all matching point pairs should be below the contract-specified accuracies; no more than 5% may exceed that value.

13.2.2 Qualitative verification

In addition to the validation point quantification, additional visual quality control procedures that should be performed on the dataset when multiple passes, lasers, or overlap are available include:
  1. Coloring each pass and/or laser differently and evaluating the overall blending in the dataset.  If one color (pass) tends to dominate the view in areas that were covered by the other pass and/or laser, or if ghosting effects are visible, that is an indication of geo-referencing error.
  2. Cutting narrow-width (a few cm) cross sections through the data.  Misaligned data will show up as multiple cross sections, rather than a single, blended section (a sketch of checks 1 and 2 follows this list).
  3. Using software packages that can color code points by deviations between datasets.
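A minimal sketch of checks 1 and 2 is given below (Python with NumPy/matplotlib; not part of the original guideline). It assumes the X axis runs along the alignment and that the two passes are supplied as separate N x 3 arrays; the function name and slice width are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def cross_section_check(pass_a, pass_b, station, half_width=0.025):
    """Cut a narrow (few cm) cross section through two driving passes and
    plot each pass in its own color: misaligned data appears as two offset
    profiles rather than a single, blended section."""
    fig, ax = plt.subplots()
    for pts, color, label in [(pass_a, "tab:blue", "pass 1"),
                              (pass_b, "tab:red", "pass 2")]:
        sel = np.abs(pts[:, 0] - station) < half_width   # slice along X
        ax.scatter(pts[sel, 1], pts[sel, 2], s=2, c=color, label=label)
    ax.set_xlabel("offset across road (m)")
    ax.set_ylabel("elevation (m)")
    ax.legend()
    plt.show()
```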

These visual validations are an important part of the process and can provide additional insights on potential geo-referencing or distortion errors that may not be found in the numbers alone. However, in this process it is likely that one will find some noise points that are above the error tolerance.  It is important to remember that at 95% confidence, 5% of the dataset may exceed the error tolerance, so overemphasis should not be placed on stray points when the majority are satisfactory.  For example, in a dataset of 100 million points, 5 million can be above the error threshold!

13.3    Suggested point density evaluation procedures

Similar to accuracy, point density should be evaluated throughout the dataset, particularly for objects of interest.  Resolution at each location can be evaluated by:
  1. Drawing a polygon (e.g., a 1m x 1m square) on a planar feature,
  2. Selecting all points on the planar surface that are within the polygon extents (excluding points that do not belong to the surface),
  3. Calculating point density as the number of points found in step 2 divided by the 2D area of the polygon drawn in step 1 (a sketch of this computation follows).
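The following is a minimal sketch of steps 1–3 (Python with NumPy/matplotlib; not part of the original guideline). It assumes the surface points have already been selected (step 2) and projected to 2D; the polygon, point coordinates, and function name are hypothetical.

```python
import numpy as np
from matplotlib.path import Path

def polygon_point_density(points_xy, polygon_xy):
    """Count the points falling inside a polygon drawn on a planar feature
    and divide by the polygon's 2D area (shoelace formula).  `points_xy`
    should hold only points already judged to belong to the surface."""
    inside = Path(polygon_xy).contains_points(points_xy)
    x, y = polygon_xy[:, 0], polygon_xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return inside.sum() / area                     # points per square meter

# Hypothetical usage: a 1 m x 1 m square drawn on the road surface
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
pts = np.random.default_rng(2).uniform(-0.5, 1.5, (2000, 2))
print(f"{polygon_point_density(pts, square):.0f} pts/m^2")
```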

As with accuracy, point density checks should be conducted throughout the entire dataset.  The frequency of evaluations depends on the variability observed in the point density as well as the spatial frequency of the objects of interest.  In general, one could use similar intervals as those for accuracy evaluations (A: 150-300 m; B: 300-750 m; C: 750-1500 m).  The results for each test location can then be statistically evaluated to ensure that 95% of the samples meet the appropriate point density requirements for the features of interest.  If different point density requirements are established for various objects (e.g., pavement, signs, cliffs), the samples should be categorized and summary statistics for point densities reported for each feature category individually.

When surface or solid models are delivered, point densities can be calculated by the number of points on the model divided by the surface area of the model.

While it may be tempting to calculate a quick estimate of point density for the dataset by dividing the total number of points in the dataset by the 2D projected area of coverage, such an approach is not recommended because:

  1. The actual point density will vary substantially across the dataset, as described previously, and
  2. Such an approach does not account for the 3D nature of the data and does not account for vertical features.

A continuous, color-coded point density map with summary statistics should also be delivered, as it provides a general overview of point density quality throughout the dataset.  Note that if a 2D map is provided, point densities will be overestimated in locations with vertical features (e.g., walls, buildings).
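A minimal sketch of such a map is shown below (Python with NumPy/matplotlib; not part of the original guideline). The cell size and array names are hypothetical, and, per the caveat above, this 2D gridding overestimates density wherever vertical features stack many points into one cell.

```python
import numpy as np
import matplotlib.pyplot as plt

def density_map(points_xy, cell=1.0):
    """Grid the points into square cells and plot a color-coded 2D point
    density map (points per unit area), with a simple summary statistic."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    xbins = np.arange(x.min(), x.max() + cell, cell)
    ybins = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=[xbins, ybins])
    dens = counts / cell ** 2                      # points per m^2 (if meters)
    print("median density:", np.median(dens), "pts/m^2")
    plt.pcolormesh(xbins, ybins, dens.T, cmap="viridis")
    plt.colorbar(label="points per m^2")
    plt.xlabel("easting (m)")
    plt.ylabel("northing (m)")
    plt.show()
```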
