The Geolocation Accuracy of LiDAR Footprint

This paper describes the geometric geolocation accuracy of the LiDAR footprint for an aircraft configuration carrying a sensor designed to scan the surface of the Earth, together with a DGPS and an INS/IMU system. We present a review of LiDAR footprint accuracy as a relationship between the input parameters, which include errors of the trajectory state, the attitude information of the aircraft and the look-vector errors of the active sensor (the LiDAR scanner). These parameters determine the coordinates of the point of intersection of the scanning system's line of sight with the Earth's surface as a function of the terrestrial ellipsoid surface, aircraft position, aircraft velocity, aircraft attitude (spatial orientation) and the orientation of the LiDAR scanner. Using the derived error formulas, based on the accuracy of the navigation solution, the boresight misalignment angles, the ranging and scan-angle accuracy, and the laser beam divergence, the achievable point positioning accuracy can be computed for any given LiDAR system operating at flying heights between 70 m and 6,000 m.


Introduction
The principle of using lasers for range measurement has been known since the late 1960s. The idea of using an airborne laser to measure ground coordinates arose at the same time. However, it could not be realized until the late 1980s, because determining the location of the airborne laser sensor, which is a primary requirement, was not possible. The operationalization of GPS solved this problem, and its absence is one of the main reasons why laser mapping from airborne platforms could not be realized earlier.
LiDAR technology is known by several names in industry. One may regularly come across names such as laser altimetry, laser range finder, laser radar, laser mapper and airborne altimetry LiDAR. The term airborne altimetry LiDAR (LiDAR) is the most widely accepted name for this technology. The principle of LiDAR is similar to that of an Electronic Distance Measuring Instrument (EDMI), where a laser (pulsed or continuous wave) is fired from a transmitter and the reflected energy is captured. Using the round-trip time of this laser, the distance between the transmitter and the reflector is determined. The reflector can be a natural object or an artificial reflector such as a prism.
Lasers can be classified in many ways: pulsed and continuous; infrared, visible and ultraviolet; high-power and low-power; and so on. The most important classification is into solid-state, gas, liquid and semiconductor categories. For remote sensing purposes, lasers capable of emitting high-power, short-duration, narrow-bandwidth pulses of radiant energy with a low degree of divergence are required. Lasers can be used both for spectral analysis and for range measurement of a target.
There are three operational categories of LiDAR systems: continuous wave, light-striping video profiling, and pulsed. The distance to the object is calculated from the time the transmitted pulse takes to travel to the target and back, given the speed of light as a constant. Lasers used for collecting topographic information on land are in the NIR portion of the light spectrum, typically 1,064 nm, so we cannot see the light emitted during a LiDAR mission. The output power of the laser pulse is far too weak to cause blindness in people and animals, a common concern when people are first told about the technology. The pulse may intersect several objects on the way down, bouncing back multiple returns to the sensor, or it may produce only a single return, e.g. from a building roof or the ground.
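The round-trip range computation described above can be sketched as follows; this is a minimal illustration, and the function name and example timing are our own:

```python
# Time-of-flight ranging: the pulse travels to the target and back,
# so the one-way range is half the round-trip distance.
C = 299_792_458.0  # speed of light in vacuum, m/s

def slant_range(round_trip_time_s: float) -> float:
    """Convert a measured round-trip travel time into a slant range in metres."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~6.67 microseconds corresponds to a range of ~1,000 m.
print(slant_range(6.671e-6))
```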
Airborne LiDAR systems can be operated at altitudes between 70 m and 6,000 m, the choice of altitude depending on the strength of the laser, weather conditions and the specifications for the final deliverables. The basic concept of airborne LiDAR mapping is that a pulsed laser is optically coupled to a beam director which scans the laser pulses over a swath of terrain, usually centred on, and co-linear with, the flight path of the aircraft in which the system is mounted, the scan direction being orthogonal to the flight path. The round-trip travel times of the laser pulses from the aircraft to the ground are measured with a precise interval timer, and the time intervals are converted into range measurements using the known velocity of light. The position of the aircraft at the epoch of each

measurement is determined by phase-difference kinematic GPS. Rotational positions of the beam director are combined with aircraft roll, pitch and heading values, determined with an inertial navigation system (INS), and with the range measurements to obtain vectors from the aircraft to the ground points. When these vectors are added to the aircraft locations they yield accurate coordinates of points on the surface of the terrain.
All LiDAR systems use Differential GPS (DGPS) positioning technology. At least 4 satellites with precisely known orbits are needed to determine the position of the GPS receiver; for the very precise locations required for accurate positioning in LiDAR, a lock on at least six GPS satellites is desirable. Two GPS receivers are used during flight, one aboard the aircraft and the other at a well-surveyed location. The ground receiver should preferably be located at a survey station that has been precisely measured in the horizontal and vertical coordinate system the end user wants. Complications with GPS arise from irregularities in the Earth's gravity field, the shape of the Earth and the map projection used, as well as from inconsistencies in atmospheric conditions and other phenomena.
Figure 1 shows the diagram of the process of computing ground coordinates and the processes LiDAR data pass through before reaching the end user. Most of the initial uses of LiDAR were for measuring water depth. Depending upon the clarity of the water, LiDAR can measure depths from 0.9 m to 40 m with a vertical accuracy of 15 cm and a horizontal accuracy of 2.5 m. A laser pulse is transmitted to the water surface where, through Fresnel reflection, a portion of the energy is returned to the airborne optical receiver, while the remainder of the pulse continues through the water column to the bottom and is subsequently reflected back to the receiver. The elapsed time between the received surface and bottom pulses allows determination of the water depth. The maximum depth penetration for a given laser system is obviously a function of water clarity and bottom reflection; water turbidity plays the most significant role among these parameters.
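The depth computation from the elapsed time between the surface and bottom returns can be sketched as below. This is a simplified nadir-looking model: the refractive index value and all names are assumptions, and operational systems also correct for beam geometry and refraction at the water surface:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # approximate refractive index of water (assumed value)

def water_depth(dt_surface_bottom_s: float) -> float:
    """Depth from the elapsed time between the surface and bottom returns.

    Light travels slower in water (c / n), and the pulse makes a round trip
    through the water column, hence the division by 2.
    """
    speed_in_water = C / N_WATER
    return speed_in_water * dt_surface_bottom_s / 2.0

# Roughly 89 ns between the two returns corresponds to about 10 m of water.
print(round(water_depth(88.7e-9), 2))
```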
For bathymetric measurements, the wavelength used is blue or green, as these can propagate through the water body, thus maximizing the depth measurable by LiDAR. A hybrid LiDAR system employs both an infra-red (IR) and a green laser (concentric). While the IR laser is reflected from land or from the water surface, the green wavelength proceeds to, and is reflected from, the bottom of the water body. This makes it possible to capture both land topography and water-bed bathymetry simultaneously.

Materials and Methods
For the review of the geolocation accuracy of the LiDAR footprint, we used NGA standardization documents from 2009 and 2011. A laser footprint is the area on the ground illuminated by the laser pulse, due to its divergence and the finite size of the transmission aperture. In the geolocation process for the LiDAR footprint, the following measurements are obtained for each laser pulse fired:
- laser range, by measuring the travel time of the pulse;
- laser scan angle;
- aircraft roll, pitch and yaw;
- aircraft acceleration in three directions;
- GPS antenna coordinates.
Geolocation means determining the coordinates of the laser footprint in the ECEF (WGS-84) reference system by combining the aforesaid basic measurements. The transformation between the footprint on the image and the ECEF footprint is expressed as a series of consecutive matrix transformations applied to the line-of-sight vector of the LiDAR. Finally, for any scan footprint, we obtain ECEF coordinates (by intersecting the IFOV with the ellipsoid used to model the Earth) and then geodetic coordinates (geodetic longitude and latitude).
The World Geodetic System 1984 (WGS84) models the Earth's surface as an oblate spheroid (ellipsoid), which allows Cartesian ECEF positions on the Earth's surface to be represented using the angles longitude and geodetic latitude. WGS84 was developed by the National Imagery and Mapping Agency, now the National Geospatial-Intelligence Agency, and has been accepted as a standard for use in geodesy and navigation. [4] Csanyi N. and Toth C. developed in 2007 error formulas which can also facilitate the analysis of the effects of individual error sources on point positioning accuracy, although due to size limitations that analysis is not included in this paper. As seen in Figure 2, a LiDAR system consists of three main sensors: the LiDAR scanner, the IMU and the GPS. These sensors operate at their respective frequencies. The laser range vector, fired at a scan angle η in the reference frame of the laser instrument, must finally be transformed to the Earth-centred WGS-84 system to geolocate the laser footprint. This transformation is carried out through various rotations and translations, as shown below; first it is important to understand the coordinate systems involved in this process and their relationships. The range measurement is represented as a vector [0, 0, z] in a temporary scanning system. This vector is rotated into the instrument reference system using the scan angle (η), then rotated into the INS reference system, with origin at the instrument, using the mounting angle biases (α0, β0, γ0). The vector is then translated by the GPS offset vector [dx, dy, dz] measured in the INS reference system. The next step is to rotate the vector into the Earth tangential (ET) reference system using roll, pitch and yaw (α, β, γ); at this stage the vector is in the ET system with origin at the GPS antenna. The vector is then rotated into the WGS-84 Cartesian system, still with origin at the GPS antenna, using the antenna latitude and longitude (φ, λ), which are measured by GPS.
Finally, the vector is translated into the Earth-centred WGS-84 system using the Cartesian coordinates of the antenna (ax, ay, az), as observed by the GPS. The vector now gives the Cartesian coordinates of the laser footprint in WGS84, which can be converted into the ellipsoidal system. If Rx(θ) denotes a rotation about the x axis by angle θ, T(V) a translation by a vector V, [X] the final vector in the WGS-84 system, and φ and λ the latitude and longitude of the GPS antenna at the time of the laser shot, the aforesaid steps can be written (see Figure 2) as

[X] = T(ax, ay, az) · R(φ, λ) · R(α, β, γ) · T(dx, dy, dz) · R(α0, β0, γ0) · R(η) · [0, 0, z]   [5]

For any point measured by a LiDAR system, error propagation can be used to determine the error in the LiDAR-derived coordinates given the errors in the LiDAR input parameters. Another issue related to LiDAR error analysis is the nature of the errors resulting from random errors in the input system measurements. The error in the LiDAR-derived coordinates is affected by errors in the components of the LiDAR equation. These components, or input parameters, can either be measured or estimated from a system calibration procedure.
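The transformation chain described above can be sketched numerically as follows. This is an illustration only, not the production geolocation algorithm: the rotation conventions (axis order and signs) and all function names are assumptions that must be matched to the actual sensor and navigation conventions of a given system:

```python
import numpy as np

def rot_x(t):
    """Rotation matrix about the x axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    """Rotation matrix about the y axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    """Rotation matrix about the z axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def geolocate(z, eta, mount, lever_arm, attitude, lat, lon, antenna_ecef):
    """Chain the frame transformations from the scanner frame to ECEF."""
    v = np.array([0.0, 0.0, z])                 # range vector, scanner frame
    v = rot_x(eta) @ v                          # scan-angle rotation
    a0, b0, g0 = mount
    v = rot_z(g0) @ rot_y(b0) @ rot_x(a0) @ v   # boresight (mounting) biases
    v = v + np.asarray(lever_arm, dtype=float)  # GPS offset in INS frame
    a, b, g = attitude
    v = rot_z(g) @ rot_y(b) @ rot_x(a) @ v      # roll, pitch, yaw
    v = rot_z(lon) @ rot_y(-lat) @ v            # local tangential axes to ECEF
    return np.asarray(antenna_ecef, dtype=float) + v

# Nadir shot with all angles and offsets zero: the footprint lies 1,000 m
# along the local z axis from the antenna position.
print(geolocate(1000.0, 0.0, (0, 0, 0), (0, 0, 0), (0, 0, 0),
                0.0, 0.0, (6378137.0, 0.0, 0.0)))
```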

Results and Discussions
LiDAR accuracy is generally stated in the vertical direction, as the horizontal accuracy is indirectly controlled by the vertical accuracy. This is also due to the fact that determining the horizontal accuracy of LiDAR data is difficult, owing to the difficulty of locating Ground Control Points (GCPs) corresponding to the LiDAR coordinates.
The vertical accuracy is determined by comparing the Z coordinates of the data with the true elevations of a reference (generally a flat surface). The accuracy is stated as the RMSE (root mean square error), given by:

RMSE_z = sqrt( (1/n) · Σ (z_data,i − z_check,i)² )   (1)

It is assumed that systematic errors have been eliminated as far as possible. If the vertical error is normally distributed, the factor 1.9600 is applied to compute the linear error at the 95% confidence level (Greenwalt and Schultz, 1968; Andre Samberg, 2005). Therefore, the vertical accuracy, noted A_z, reported according to the American standard NSSDA (National Standard for Spatial Data Accuracy), shall be computed by the formula

A_z = 1.9600 · RMSE_z

and LiDAR accuracy is thus generally reported as 1.96·RMSE_z. This accuracy is called the fundamental vertical accuracy when the RMSE is determined for a flat, unobstructed and well-reflecting surface. The accuracy should also be stated for other types of surfaces; these are called the supplemental and consolidated vertical accuracies.
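The vertical accuracy computation can be sketched as below (function names are illustrative):

```python
import math

def rmse_z(z_data, z_check):
    """Vertical RMSE between dataset elevations and check-point elevations."""
    n = len(z_data)
    return math.sqrt(sum((zd - zc) ** 2 for zd, zc in zip(z_data, z_check)) / n)

def accuracy_z(z_data, z_check):
    """NSSDA vertical accuracy at the 95% confidence level: 1.9600 * RMSE_z."""
    return 1.9600 * rmse_z(z_data, z_check)

# Residuals of +/-0.10 m give RMSE_z = 0.10 m, hence A_z = 0.196 m.
print(accuracy_z([10.10, 9.90, 10.10, 9.90], [10.0, 10.0, 10.0, 10.0]))
```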
According to the NSSDA, the horizontal accuracy is computed from RMSE_xy:

RMSE_xy = sqrt( (1/n) · Σ [ (x_data,i − x_check,i)² + (y_data,i − y_check,i)² ] )

where:
- x_data,i, y_data,i are the coordinates of the i-th check point in the dataset;
- x_check,i, y_check,i are the coordinates of the i-th check point in the independent source of higher accuracy;
- n is the number of check points tested;
- i is an integer ranging from 1 to n.

The horizontal error of a point i is thus defined as:

sqrt( (x_data,i − x_check,i)² + (y_data,i − y_check,i)² )   (5)

It is assumed that systematic errors have been eliminated as far as possible. If the error is normally distributed and independent in the x and y components, the factor 2.4477 is used to compute the horizontal accuracy at the 95% confidence level (Greenwalt and Schultz, 1968; Andre Samberg, 2005). If we consider that RMSE_x = RMSE_y, the horizontal accuracy, noted A_xy, shall be computed according to the NSSDA by the formula:

A_xy = (2.4477/√2) · RMSE_xy = 1.7308 · RMSE_xy   (6)

The various sensor components fitted in a LiDAR instrument possess different precisions. For example, in a typical sensor the range accuracy is 1-5 cm, the GPS accuracy 2-5 cm, the scan-angle measurement accuracy 0.01 rad and the IMU accuracy < 0.005° for pitch/roll and < 0.008° for heading, with a beam divergence of 0.25 to 5 mrad. However, the final vertical and horizontal accuracies achieved in the data are of the order of 5 to 15 cm and 15-50 cm, respectively, at one sigma. The final data accuracy is affected by several sources in the process of LiDAR data capture; a few important LiDAR error sources are mentioned below. Errors due to sensor position arise from errors in GPS, IMU and GPS-IMU integration (shown in Figure 3). [5] Errors due to the angles of laser travel arise because the laser instrument is not perfectly aligned with the aircraft's roll, pitch and yaw axes; there may be differential shaking of the laser scanner and the IMU, and the measurement of the scanner angle itself may have error.
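The corresponding horizontal computation can be sketched as below (again, function names are illustrative):

```python
import math

def rmse_xy(data_pts, check_pts):
    """Horizontal RMSE between dataset points and check points ((x, y) pairs)."""
    n = len(data_pts)
    total = sum((xd - xc) ** 2 + (yd - yc) ** 2
                for (xd, yd), (xc, yc) in zip(data_pts, check_pts))
    return math.sqrt(total / n)

def accuracy_xy(data_pts, check_pts):
    """NSSDA horizontal accuracy at 95% confidence, assuming RMSE_x == RMSE_y."""
    return 2.4477 / math.sqrt(2.0) * rmse_xy(data_pts, check_pts)

data = [(100.3, 200.0), (100.0, 200.3)]
check = [(100.0, 200.0), (100.0, 200.0)]
print(rmse_xy(data, check), accuracy_xy(data, check))  # RMSE_xy = 0.3 m
```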
The vector from GPS antenna to instrument in IMU reference system is required in the geolocation process. This vector is observed physically and may have error in its observation. This could be variable from flight to flight and also within the beginning and end of the flight. This should be observed before and after the flight.
The total spatial accuracy of a LiDAR footprint is obtained by combining these horizontal and vertical components. There may also be error in the measured laser range due to time-measurement error, incorrect atmospheric correction and ambiguities in the target surface, which result in range walk.
Error is also introduced into LiDAR data by complexity in object space; e.g., sloping surfaces lead to more uncertainty in the X, Y and Z coordinates. Further, the accuracy of the laser range varies with different types of terrain cover.
The divergence of the laser results in a footprint of finite diameter, instead of a single point on the ground, thus leading to uncertainty in the coordinates. The footprint diameter is approximately Di = Ds + 2·h·tan(γ/2), where Ds is the aperture diameter, h the flying height and γ the divergence. For example, if the sensor aperture diameter is Ds = 0.1 cm, the divergence 0.25 mrad and the flying height 1,000 m, the size of the footprint on the ground is Di ≈ 25 cm. Varying reflective and geometric properties within the footprint also lead to uncertainty in the coordinates.
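The footprint diameter can be computed from the aperture diameter, divergence and flying height, as in this short sketch (a standard small-angle geometry; names are our own):

```python
import math

def footprint_diameter(aperture_d_m: float, divergence_rad: float,
                       height_m: float) -> float:
    """Ground footprint diameter: aperture size plus the spread from divergence."""
    return aperture_d_m + 2.0 * height_m * math.tan(divergence_rad / 2.0)

# 0.1 cm aperture, 0.25 mrad divergence, 1,000 m altitude -> ~25 cm footprint
print(round(footprint_diameter(0.001, 0.25e-3, 1000.0), 3))
```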
It is very important for quality control to check the relative consistency of the LiDAR data. This is usually conducted by checking the compatibility of LiDAR footprints in overlapping strips. On the other hand, the external QC measures verify the absolute quality of the LiDAR data by checking its compatibility with an independently collected and more accurate surface model.
A common quality control procedure is to assess the coincidence of conjugate features in overlapping strips. Such a procedure ensures the internal quality of the available LiDAR data. There are two main approaches to doing so:
- comparing interpolated range or intensity images from overlapping strips, and
- comparing conjugate features extracted from the strips.
The degree of coincidence of the extracted features can be used as a measure of the quality of the data and to detect the presence of systematic biases. In other words, conjugate features in overlapping strips will coincide if and only if the LiDAR data is quite accurate. Therefore, the separation between conjugate features can be used as a quality control measure (Ayman Habib, 2007).
Another common approach to external quality control involves check point analysis using specially designed LiDAR targets. The fixed targets in the field are extracted from range and intensity LiDAR imagery using a segmentation procedure. The coordinates of the extracted targets are then compared with the surveyed coordinates using a Root Mean Square Error (RMSE) analysis. The resulting RMSE value is a measure of the external/absolute quality of the LiDAR-derived surface (Csanyi and Toth, 2007).

Conclusions
LiDAR data are used for generating large-scale maps of urban areas. LiDAR facilitates the identification of buildings from the point cloud, which is important for mapping, revenue estimation and change detection studies. Drainage planning in urban areas needs accurate topographic data, which cannot be generated in busy streets using conventional methods. The ability of LiDAR to collect data even in narrow and shadowed lanes in cities makes it ideal for this purpose. Accurate, dense and fast collection of topographic data can prove useful for a variety of other GIS applications in urban areas, e.g. visualization, emergency route planning, etc.
The advantages of LiDAR technology in comparison with other methods of topographic data collection (land surveying, GPS, interferometry and photogrammetry) are described in detail by the American Center for Geospatial Intelligence Standards in 2009 [5] and are listed below:
- Vertical accuracy of 5-15 cm (typically 10 cm).
- Horizontal accuracy of 30-50 cm.
- Fast acquisition and processing (acquisition of 1,000 km² in 12 hours, DEM generation of 1,000 km² in 24 hours).
- Minimum human dependence, as most of the processes are automatic, unlike land surveying, photogrammetry or GPS.
- Independence of weather and light: data collection is independent of sun inclination, and possible at night and in slightly bad weather.
- Canopy penetration: LiDAR pulses can reach beneath the canopy, generating measurements of points there, unlike photogrammetry.
- Higher data density: up to 167,000 pulses per second, and more than 24 points per m² can be measured.
- Multiple returns, to collect data in 3D.
- GCP independence: only a few GCPs are needed, to keep a reference receiver for DGPS purposes; otherwise there is no need for GCPs. This makes LiDAR ideal for mapping inaccessible and featureless areas.
The advantages of LiDAR centre upon its relatively high accuracy of 5-15 cm in height and 30 cm to 60 cm in the horizontal, and upon the very high mass-point density of at least 1 point/m². This high point density greatly assists artefact removal in the DSM-to-DEM conversion. Moreover, LiDAR has a high productivity of around 300 km² of coverage per hour, and it can be operated day or night. In practice, data acquisition is generally confined to daylight hours, since most LiDAR units nowadays come with dedicated digital cameras (usually medium format), the resulting imagery being used for orthoimage production.

One of the most significant attributes of LiDAR is multi-pulse sensing, where the first returned pulse indicates the highest point encountered and the last the lowest point; there may also be mid pulses. As a consequence, LiDAR has the ability to 'see through' all but thick vegetation, and it can safely be assumed that a good number of the last returns will be from bare earth. This greatly simplifies the DSM-to-DEM conversion process in vegetated areas.
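The first/last-return logic can be sketched as a toy filter that keeps the last echo of each pulse as a bare-earth candidate (a deliberate simplification: real ground classification uses much more than this heuristic, and all names are illustrative):

```python
def last_returns(pulses):
    """Keep, for each pulse, only the last (lowest) return as a bare-earth candidate.

    `pulses` is a list of pulses, each a list of (x, y, z) returns ordered
    from first to last; multi-return pulses contribute only their final echo.
    """
    return [returns[-1] for returns in pulses if returns]

pulses = [
    [(0, 0, 25.0), (0, 0, 12.0), (0, 0, 1.2)],  # canopy, branch, ground
    [(1, 0, 30.0)],                             # single return (e.g. a roof)
]
print(last_returns(pulses))  # [(0, 0, 1.2), (1, 0, 30.0)]
```

Note that single-return pulses from roofs survive this filter, which is why the DSM-to-DEM conversion still needs further manual or automated cleaning.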
The advantages of lidargrammetry over high-resolution photogrammetry in urban and city environments are less pronounced, since reflections from surfaces such as the sides of buildings can complicate shape definition and obscure breaklines; moreover, LiDAR is a near-nadir sensing system. As with the photogrammetric DSM-to-DEM conversion, considerable manual post-processing of the filtered and thinned-out LiDAR DEM is required to 'clean' the bare-earth representation. The cost of this manual post-processing stage has been reduced in recent years as software systems have become more sophisticated; although manual intervention may account for 90% of the post-processing budget, it is now down to something in the order of 20%-30% of the overall project budget.
In many respects LiDAR data is similar to image acquisition from aerial photography: flights are carried out in strips, with a nominal side overlap of around 30%, depending upon terrain. Accuracy is a function of flying height, but in the case of LiDAR the height accuracy (ranging accuracy) remains reasonably constant whereas the ground sampling density varies. In general, LiDAR is less expensive than standard photogrammetry, with the cost advantages becoming more pronounced as project areas become larger.
The laser scanner creates highly dense point clouds even in areas with low texture, while image data is advantageous for edge measurement and texture mapping. The existing flight planning tools would have to be extended for the combined acquisition of LiDAR and image data. The most important factor for the combination of image and LiDAR data is the improvement of the accuracy of the flight trajectory of a UAV, for example, which would enable real-time data processing on such UAV platforms and the combined processing of the data. In this combination, the images first have to be oriented. In a second step, using the enhanced image orientation, the trajectory of the LiDAR data can be improved. Thirdly, the registered LiDAR point cloud can then be projected back into the images. Finally, in a combined adjustment the trajectory can be further improved and a DSM can be generated from both the LiDAR and the image data.