The Best Software For LIDAR Classification


LiDAR, which stands for Light Detection and Ranging, is a remote sensing technology that uses laser pulses to measure distances and create 3D models of surfaces. LiDAR data can be used for a variety of applications, including urban planning, forest management, and topographic mapping. One of the most important steps in processing LiDAR data is classification, which involves identifying and labeling different types of objects and terrain features in the LiDAR point cloud.

There are several software packages available for LiDAR classification, each with its own unique features and benefits.

1. LAStools:

 

LAStools is a popular software package for LiDAR processing and classification. It includes a suite of tools for filtering, quality control, and classification of LiDAR data. LAStools can classify LiDAR data into ground, non-ground, buildings, vegetation, and other features. It also includes tools for point cloud thinning, which can significantly reduce processing times.
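LAStools itself is a set of command-line tools, so its classification steps can be scripted. Below is a minimal sketch, assuming the LAStools binaries (lasground, lasheight, lasclassify) are installed and on the PATH; the file names are placeholders and the exact flags should be verified against the LAStools documentation for your version.

```python
# Minimal sketch of a LAStools classification pipeline driven from Python.
# Assumes the LAStools command-line binaries are installed and on the PATH;
# file names are placeholders and flags should be checked against the docs.
import subprocess

def run(cmd):
    """Run one LAStools command and raise if it fails."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Separate ground from non-ground points.
run(["lasground", "-i", "tile.laz", "-o", "tile_ground.laz"])

# 2. Compute each point's height above the classified ground surface.
run(["lasheight", "-i", "tile_ground.laz", "-o", "tile_height.laz"])

# 3. Classify buildings and vegetation using those heights.
run(["lasclassify", "-i", "tile_height.laz", "-o", "tile_classified.laz"])
```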

2. TerraScan:

 

TerraScan is another powerful software package for LiDAR classification. It includes a range of tools for point cloud management, filtering, and classification. TerraScan can classify LiDAR data into ground, buildings, vegetation, and other features, and can also perform advanced classification tasks such as building footprint extraction and power line detection. TerraScan is widely used in the forestry, transportation, and utility industries.

 

3. ArcGIS:

 

ArcGIS is a popular GIS software package that includes tools for LiDAR data processing and classification. It can classify LiDAR data into ground, vegetation, buildings, and other features, and can also perform advanced classification tasks such as tree species identification and canopy height modeling. This software is widely used in the urban planning, forestry, and environmental management industries.

 

4. CloudCompare:

 

CloudCompare is an open-source software package for LiDAR data processing and classification. It includes a range of tools for point cloud filtering, registration, and classification. CloudCompare can classify LiDAR data into ground, non-ground, buildings, vegetation, and other features. It is widely used in the surveying, archaeology, and geology industries.

5. TopoDOT:

 

TopoDOT is a software package for LiDAR processing and classification that is specifically designed for the transportation industry. It includes tools for point cloud management, filtering, and classification, as well as advanced features such as automated roadway extraction and sign detection. TopoDOT is widely used in the transportation industry for highway and rail planning, design, and maintenance.

6. Global Mapper:

 

Global Mapper, developed by Blue Marble Geographics, is a GIS application whose lidar toolset includes point cloud classification. One of the key features of Global Mapper's lidar classification tools is the ability to distinguish between ground and non-ground points. This is an essential step in many lidar applications, as it enables accurate terrain modeling and surface analysis. Global Mapper offers several algorithms for ground point classification, including the progressive morphological filter (PMF) and the triangulated irregular network (TIN) method.
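Global Mapper's PMF implementation is proprietary, but the same progressive-morphological-filter idea is available in open-source tools. The sketch below uses PDAL's Python bindings and its filters.pmf stage purely to illustrate the approach; it assumes PDAL is installed, the file names are placeholders, and it is not how Global Mapper itself is driven.

```python
# Sketch of ground/non-ground separation with a progressive morphological filter (PMF),
# using the open-source PDAL library rather than Global Mapper itself.
# Assumes `pip install pdal` and a PDAL build that includes filters.pmf.
import json
import pdal

pipeline_def = {
    "pipeline": [
        "input.laz",                        # reader is inferred from the file extension
        {"type": "filters.pmf"},            # PMF marks ground points as class 2 (ASPRS LAS)
        {"type": "filters.range",
         "limits": "Classification[2:2]"},  # keep only the ground-classified points
        "ground_only.laz",                  # writer is inferred from the file extension
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
print(f"{count} ground points written to ground_only.laz")
```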

 

In conclusion, LiDAR classification is an essential step in processing LiDAR data for a variety of applications. There are several software packages available for LiDAR classification, each with its own unique features and benefits. LAStools, TerraScan, ArcGIS, CloudCompare, TopoDOT, and Global Mapper are some of the most popular software packages for LiDAR classification, widely used in different industries for different purposes. It is important to select the appropriate software package based on the specific requirements of the project and the industry.

 

What is LIDAR? How does it work?


Introduction:

LIDAR, or Light Detection And Ranging, uses lasers to measure the elevation of things like the ground, forests, and even buildings. It is a lot like sonar, which uses sound waves to map things, or radar, which uses radio waves, but a LIDAR system uses light sent out from a laser.

For the record, there are different ways to collect LIDAR data: from the ground, from an airplane or even from space.

Airborne LIDAR data are the most commonly available LIDAR data and are freely distributed through sources such as the National Ecological Observatory Network (NEON); free datasets are becoming available for many other countries as well.

The four parts of a LIDAR system:

To understand how lasers are used to calculate height in airborne LIDAR, we need to focus on the four parts in the system.

1. LIDAR Unit – Scans the ground:

First, the airplane contains the LIDAR unit itself which uses a laser to scan the earth from side to side as the plane flies. The laser system uses either green or near infrared light because these wavelengths or types of light reflect strongly off of vegetation.

2. Global Positioning System – Tracks the plane's x, y, z position:

The next component of a LIDAR system is a GPS receiver that tracks the altitude and X,Y location of the airplane.

The GPS allows us to figure out where LIDAR reflections are on the ground.

3. Inertial Measurement Unit (IMU) – Tracks the plane's orientation:

The third component of the LIDAR system is what’s called an inertial measurement unit or IMU.

The IMU tracks the tilt of the plane as it flies, which is important for accurate elevation calculations.

4. Computer – Records Data:

Finally, the LIDAR system includes a computer which records all that important height information that the LIDAR collects as it scans the earth’s surface.

 

How do these four parts of the system work together to produce a fantastically useful LIDAR dataset?

 

The laser in the LIDAR system scans the earth actively emitting light energy towards the ground. Now before we go any farther, let us get two key LIDAR terms associated with this emitted light energy out of the way.

First, let's define the word “pulse”. A pulse simply refers to a burst of light energy that is emitted by the LIDAR system.

And second, let's define the word “return”. A return refers to the reflected light energy that has been recorded by the LIDAR sensor.

Pulses of light energy travel to the ground and return back to the LIDAR sensor.

To get height, the LIDAR system records the time that it takes for the light energy to travel to the ground and back. The system then uses the speed of light to calculate the distance between the top of that object and the plane.

To figure out ground elevation, the plane's altitude is taken from the GPS receiver, and then the distance that the light traveled to the ground is subtracted from it.

There are two more things in a LIDAR system to consider when calculating height. First, the plane rocks a bit in the sky as it flies due to turbulence in the air. These movements are recorded by the inertial measurement unit or IMU so that they can be accounted for when height values are calculated for each LIDAR return.

An airborne system scans the earth from side to side as it flies to cover a larger area on the ground. So while some light pulses travel vertically from the plane to the ground, directly at nadir, most pulses leave the plane at an angle, or off nadir. The system needs to account for the pulse angle when it calculates elevation.
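As a rough worked example of that calculation (assuming a perfectly measured round-trip time and scan angle, and ignoring the full trajectory corrections a real system applies), here is a short sketch; the numbers are made up for illustration:

```python
# Back-of-the-envelope illustration of the height calculation described above.
# Real systems apply full GPS/IMU trajectory corrections; this only shows the
# time-of-flight and off-nadir geometry.
from math import cos, radians

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def return_elevation(round_trip_time_s, aircraft_altitude_m, off_nadir_deg=0.0):
    """Elevation of the reflecting surface for one return."""
    slant_range = SPEED_OF_LIGHT * round_trip_time_s / 2.0     # one-way travel distance
    vertical_drop = slant_range * cos(radians(off_nadir_deg))  # project onto the vertical
    return aircraft_altitude_m - vertical_drop

# A pulse with a ~6.67 microsecond round trip, fired 10 degrees off nadir
# from a plane flying at 1,500 m above the datum:
print(round(return_elevation(6.67e-6, 1500.0, 10.0), 1), "m above the datum")
```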

How a LIDAR system works

The LIDAR system emits pulses of light energy towards the ground using a laser, it then records the time it takes for the pulse to travel to the ground and return back to the sensor. It converts this time to distance using the speed of light.

The system then uses the plane's altitude, tilt, and the angle of the pulse to calculate elevation. It also uses the GPS receiver to calculate the object's location on the ground.

All this information is recorded on that handy dandy computer also mounted on the airplane.

Top 10 Questions You May Have about LiDAR


1. Why Is LiDAR Such a Valuable Tool?

The value of LiDAR lies in the fact that it virtually places you on your target site without having to leave the office. Combining imagery with LiDAR point cloud data is the next best thing to being there. Plus, it’s fast, accurate, and affordable.

2. Can I Collect Other Information While I’m Gathering LiDAR Data?

You name it — cameras, video imaging systems, multi-spectral and hyper-spectral imaging systems can all be mounted on and operated with most current LiDAR systems. Because you can operate these multiple systems using the same components, you’ll save time and money, a big benefit to corridor mapping projects such as transportation, pipeline, and transmission mapping.

Mobile mapping systems typically include two or more cameras within the system. One drawback: These passive systems require a light source, which means your collection times are limited. Unlike LiDAR, they can't see in the dark.

3. Do I Need Breaklines If I’m Using LiDAR?

Not necessarily. It all depends on the requirements of the product you're generating. Typically, LiDAR is very good at defining the surface, as long as the sample spacing is adequate and there isn't too much vegetation. Most features and terrain are very well defined in LiDAR data.

The rule of thumb relative to breakline usage comes down to edge recognition needs in the surface. If you have key elements, such as a back of curb line or a lip of gutter line, you will want to collect breaklines. If you’re only producing large-scale contour maps, you may not need the accuracy that breaklines provide.

4. Where Can I Find an End-to-End Solution for LiDAR Data?

Try Autodesk. Specifically, you’ll find an end-to-end solution in AutoCAD 2011, AutoCAD Labs, AutoCAD Civil 3D, Map 3D, and Navisworks products. Another robust data extraction solution for feeding all these applications with classified LAS and featurized GIS is the Virtual Geomatics solution.

5. Classical Photogrammetric Data Collection Works for Me — How Does LiDAR Compare?

LiDAR is about 40 percent less expensive than classical photogrammetric collection. And it takes less time to collect, process, and extract the needed information from LiDAR compared with traditional methods.

6. Is LiDAR Data Accurate Enough to Use on Road Overlay Projects?

You bet, as long as you use the appropriate collection method with the sufficient survey control. Just be careful when you pick your method and plan the control.

7. What Is Corn-Rowing?

You can't eat it. The term corn-rowing refers to an artifact of LiDAR sampling that typically occurs at the edges of scans and in overlapping data areas. It's caused when LiDAR points are sampled close together and the elevation difference between the sampled points is greater than the relative accuracy of the system. With proper collection and filter processes that remove the points causing the trouble, you can minimize corn-rowing.

8. What Type of LiDAR Data Do I Really Need?

The best way to determine what LiDAR products you need is to really understand the application for the data. Call a qualified LiDAR collection agency for a recommendation on the appropriate collection methods based on your specific requirements. Here’s the info you’ll need to pass along:

✓ Accuracy requirements for data.

✓ Extraction requirements — do you just need points, or linear features, too?

✓ End products you’ll require. Do you need a triangulated surface, a classified LAS, and/or GIS features?

9. What Is an Intensity Image?

An intensity image is a monochromatic (shades of gray) image of the illumination (energy) returns from the LiDAR system. These images can be used for generating planimetric features and breaklines by using LiDARgrammetry. The intensity image is typically a GeoTIFF, and the accuracy of the image is a function of the horizontal accuracy of the LiDAR, along with the interpolation of the points to a raster image.

10. You Just Said LiDARgrammetry—What’s That?

LiDARgrammetry is the process of using intensity images to generate synthetic stereo pairs, much like the stereo pairs used in photogrammetry. The data generated from LiDARgrammetry tends to be only as accurate as the LiDAR from which it's generated. There are varying opinions regarding the usefulness of this information and how accurate it is. Still, it's a good byproduct of LiDAR, and whether it's useful to you just depends on the scope of your project.

The LIDAR Terms you must know


If you want to feel a lot more fluent in the language of LiDAR, you must know these terms:

Repetition rate: This is the rate at which the laser is pulsing, and it is measured in kilohertz (kHz). Fortunately, you don't have to count it yourself, because these are extremely quick pulses. If a vendor sells you a sensor operating at 200 kHz, the LiDAR will pulse 200,000 times per second. Not only does the laser transmitter put out 200,000 pulses per second, the receiver is fast enough to record the returns from all of those pulses.

Scan frequency: While the laser is pulsing, the scanner is oscillating, or moving back and forth. The scan frequency tells you how fast the scanner is oscillating. A mobile system has a scanner that rotates continuously in a 360 degree fashion, but most airborne scanners move back and forth.

Scan angle: This is measured in degrees and is the angular distance that the scanner sweeps from one end of its travel to the other. You'll adjust the angle depending on the application and the accuracy of the desired data product.

Flying altitude: It's no surprise that the farther the platform is from the target, the lower the accuracy of the data and the less dense the points will be that define the target area. That's why, for airborne systems, the flying altitude is so important.

Flight line spacing: This is another important measure for airborne systems, and it depends on the application, vegetation, and terrain of the area of interest.

Nominal point spacing (NPS): The rule is simple enough — the more points that are hit in your collection, the better you'll define the targets. The point sample spacing varies depending on the application. Keep in mind that LiDAR systems are random sampling systems. Although you can't determine exactly where the points are going to hit in the target area, you can decide how many times the target area is going to be hit, so you can choose a higher frequency of points to better define the targets.

Cross track resolution: This is the spacing of the pulses from the LiDAR system in the scanning direction, or perpendicular to the direction that the platform is moving, in the case of airborne and mobile systems.

Along track resolution: This, on the other hand, is the spacing of the pulses that are in the flight direction or driving direction of the platform.

Swath: This is the actual distance of the area of coverage for the LiDAR system. It can vary depending on the scan angle and flying height. If you’re flying higher, you’ll have a larger swath distance, and you’ll also get a larger swath distance if you increase the scan angle. Mobile LiDAR has a swath, too, but it is usually fixed and depends on the particular sensor. For these systems, though, you might not hear the word “swath;” it may instead be referred to as the “area of coverage,” and will vary depending on the repetition rate of the sensor.

Overlap: Just like it sounds. It’s the amount of redundant area that is covered between flight lines or swaths within an area of interest. Overlap isn’t a wasted effort, though — sometimes it provides more accuracy.
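To see how several of these terms interact, here is a simplified, back-of-the-envelope sketch that estimates swath width and average point density from repetition rate, flying altitude, scan angle, and ground speed. The formulas assume flat terrain, a uniform scan pattern, and no overlap, and the numbers are illustrative only.

```python
# Simplified worked example tying repetition rate, flying altitude, scan angle,
# and ground speed to swath width and average point density.
# Assumes flat terrain and a uniform scan pattern; numbers are illustrative.
from math import tan, radians

def swath_width_m(altitude_m, full_scan_angle_deg):
    """Ground swath covered by one flight line (nadir-centered scan over flat terrain)."""
    return 2.0 * altitude_m * tan(radians(full_scan_angle_deg) / 2.0)

def point_density_per_m2(pulse_rate_hz, altitude_m, full_scan_angle_deg, speed_m_s):
    """Average points per square meter along a single flight line."""
    area_covered_per_second = swath_width_m(altitude_m, full_scan_angle_deg) * speed_m_s
    return pulse_rate_hz / area_covered_per_second

# A 200 kHz sensor flown at 1,000 m with a 40-degree scan angle at 60 m/s:
swath = swath_width_m(1000, 40)
density = point_density_per_m2(200_000, 1000, 40, 60)
print(f"swath ≈ {swath:.0f} m, average density ≈ {density:.1f} pts/m²")
```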

How LiDAR is Being Used to Help With Natural Disaster Mapping and Management


Michael Shillenn, vice president and program manager with Quantum Spatial outlines three projects where LiDAR data from the USGS 3D Elevation Program (3DEP) has been used to assist in planning, disaster response and recovery, and emergency preparedness.  

This month the United States Geological Survey (USGS) kicks off the fourth year of its grant process that supports the collection of high-resolution topographic data using LiDAR under its 3D Elevation Program (3DEP). The 3DEP program stemmed from the growing national need for standards-based 3D representations of natural and constructed above-ground features, and it provides valuable data and insights to federal and state agencies, as well as municipalities and other organizations across the U.S. and its territories.

With geospatial data collected through 3DEP, these agencies and organizations can mitigate flood risk, manage infrastructure and construction projects, conserve natural resources, mitigate hazards and ensure they are prepared for natural and manmade disasters.

Here’s a look at three projects undertaken by Quantum Spatial Inc. on behalf of various government agencies, explaining how the LiDAR data collected has been used to support hurricane recovery and rebuilding efforts, provide risk assessments for potential flooding and address potential volcanic hazards.

Hurricane Sandy Disaster Response and Recovery

Hurricane Sandy was one of the deadliest and most destructive hurricanes of the 2012 Atlantic hurricane season, impacting 24 states, including the entire Eastern seaboard from Florida to Maine. The Disaster Relief Appropriations Act of 2013 enabled the USGS and National Oceanic and Atmospheric Administration (NOAA) to support response, recovery and mitigation of damages caused by Hurricane Sandy.

As a result, USGS and NOAA coordinated the collection of high-resolution topographic and bathymetric elevation data using LiDAR technology along the eastern seaboard from South Carolina to Rhode Island covering coastal and inland areas impacted by the storm. This integrated data is supporting scientific studies related to:

  • Hurricane recovery and rebuilding activities;
  • Vulnerability assessments of shorelines to coastal change hazards, such as severe storms, sea-level rise, and shoreline erosion and retreat;
  • Validation of storm-surge inundation predictions over urban areas;
  • Watershed planning and resource management; and
  • Ecological assessments.

The elevation data collected during this project has been included in the 3DEP repository, as well as NOAA’s Digital Coast — a centralized, user-friendly and cost-effective information repository developed by the NOAA Office for Coastal Management for the coastal managers, planners, decision-makers, and technical users who are charged to manage the nation’s coastal and ocean resources to sustain vibrant coastal communities and economies.

In this image, you'll see a 3D LiDAR surface model colored by elevation, centered on the inlet between Bear and Browns Island, part of North Carolina's barrier islands south of Emerald Isle in Onslow Bay. The Back Bay marshlands and Intracoastal Waterway also are clearly defined in this data.

[Image: 3D LiDAR surface model colored by elevation, centered on the inlet between Bear and Browns Island, part of North Carolina's barrier islands south of Emerald Isle in Onslow Bay.]

Flood Mapping and Border Security along the Rio Grande River

Not only is flooding one of the most common and costly disasters, flood risk also can change over time as a result of development, weather patterns and other factors. The Federal Emergency Management Agency (FEMA) works with federal, state, tribal and local partners across the nation to identify and reduce flood risk through the Risk Mapping, Assessment and Planning (Risk MAP) program. Risk MAP leverages 3DEP elevation data to create high-quality flood maps and models. The program also provides information and tools that help authorities better assess potential risk from flooding and supports planning and outreach to communities in order to help them take action to reduce (or mitigate) flood risk.

This image depicts a 3D LiDAR surface model, colored by elevation, for a portion of the City of El Paso, Texas. U.S. and Mexico territory, separated by the Rio Grande, is shown. Centered in the picture is the Cordova Port of Entry bridge crossing the Rio Grande. The U.S. Customs and Border Protection El Paso Port of Entry station is prominently shown on the north side of the bridge. Not only does this data show the neighborhoods and businesses that could be impacted by flooding, but it also provides up-to-date geospatial data that may be valuable to border security initiatives.

[Image: 3D LiDAR surface model, colored by elevation, for a portion of the City of El Paso, Texas, with U.S. and Mexico territory separated by the Rio Grande.]

Disaster Preparedness Around the Glacier Peak Volcano

The USGS has a Volcano Hazards Program designed to advance the scientific understanding of volcanic processes and lessen the harmful impacts of volcanic activity. This program monitors active and potentially active volcanoes, assesses their hazards, responds to volcanic crises and conducts research on how volcanoes work.

Through 3DEP, USGS acquired LiDAR of Glacier Peak, the most remote, and one of the most active, volcanoes in the state of Washington. The terrain information provided by LiDAR enables scientists to get an accurate view of the land, even in remote, heavily forested areas. This data helps researchers examine past eruptions, prepare for future volcanic activity and determine the best locations for installing real-time monitoring systems. The LiDAR data is also being used to design a real-time monitoring network at Glacier Peak in preparation for installation in subsequent years, at which time the USGS will be able to better monitor activity and forecast eruptions.

This image offers a view looking southeast at Glacier and Kennedy Peaks and was created from the gridded LiDAR surface, colored by elevation.

[Image: 3D LiDAR surface model of a view looking southeast at Glacier and Kennedy Peaks.]

 

Source: www.gislounge.com

USGS 3DEP Lidar Point Cloud now available as Amazon Public Dataset


The USGS 3D Elevation Program (3DEP) announced the availability of a new way to access and process lidar point cloud data from the 3DEP repository. 3DEP has been acquiring three-dimensional information across the United States using light detection and ranging (lidar) technology, an airborne laser-based remote sensing technology that collects billions of lidar returns while flying, and making the results available to the public.

The USGS has been strategically focused on providing new mechanisms to access 3DEP data beyond simple downloads. With 3DEP’s adoption of cloud storage and computing, users now have the option to work with massive lidar point cloud datasets without having to download them to local machines.

Currently, there are over 1.77 million ASPRS LAS tiles compressed using the LASzip compression encoding in the us-west-2 region, which equates to over 12 trillion lidar point cloud records available from over 1,254 projects across the United States. This resource provides users a mechanism to retrieve and work with 3DEP data that is quicker than the free FTP download protocol.

“The 3D Elevation Program was founded on the concept that high-resolution elevation data should be provided unlicensed, free and open to the public,” explained Kevin Gallagher, Associate Director for USGS Core Science System. “This agreement with Amazon helps to fulfill that promise by providing cloud-access to the trillions of data points collected through the Program.

“The democratization of elevation data is a tremendous achievement by the community of partners leading this effort and promises to revolutionize approaches to applications from flood forecasting and geologic assessments to precision agriculture and infrastructure development.”

Hobu, Inc. and the U.S. Army Corps of Engineers (USACE) Cold Regions Research and Engineering Laboratory (CRREL) collaborated with the Amazon Web Services (AWS) Public Datasets team to organize these data as Entwine Point Tile (EPT) resources, a lossless, streamable octree format based on LASzip (LAZ) encoding. The data are now part of the Open Data registry provided by AWS, similar to the Landsat archive.
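Because the tiles are organized as EPT, small windows of the data can be streamed directly rather than downloaded wholesale. The sketch below uses PDAL's readers.ept stage to pull a small bounding box; it assumes PDAL is installed, and the project name in the URL and the coordinates are placeholders, so check the AWS Open Data registry entry for the real resource names.

```python
# Sketch: streaming a small window of 3DEP points from the AWS-hosted Entwine
# Point Tile (EPT) resources with PDAL's readers.ept stage.
# The project name and bounding box below are placeholders, not real values.
import json
import pdal

pipeline_def = {
    "pipeline": [
        {
            "type": "readers.ept",
            # Each 3DEP project exposes an ept.json entry point like this (hypothetical name):
            "filename": "https://s3-us-west-2.amazonaws.com/usgs-lidar-public/ExampleProject/ept.json",
            # Request only a small bounding box so the full tile set is never downloaded.
            "bounds": "([-10425171, -10423171], [5164494, 5166494])",
        },
        "subset.laz",  # write the streamed window to a local LAZ file
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
print(pipeline.execute(), "points streamed from the 3DEP EPT resource")
```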

3D LiDAR technology brought to mass-market with Livox sensor


US: Livox is shifting the marketplace for LiDAR sensors by introducing a reliable, compact, ready-to-use solution for innovators, professionals and engineers, around the world working closely with 3D sensing technology. After years of intense R&D and exhaustive testing, Livox has released three high-performance LiDAR sensors: The Mid-40/Mid-100, Horizon, and Tele-15. All sensors are developed with a wide range of different industry applications in mind, offering customers a best-in-class combination of precision, range, price and size.

As the first available Livox sensor, the Mid-40/Mid-100 sensor can accurately sense three-dimensional spatial information under various environmental conditions, and plays an indispensable role in fields such as autonomous driving, robotics, mapping, logistics, security, search and rescue, to name a few.

Low Cost and Mass Production

Traditionally, high-performance mechanical LiDAR products usually demand highly-skilled personnel and are therefore prohibitively expensive and in short supply. To encourage the adoption of LiDAR technology in a number of different industries ranging from 3D mapping and surveying to robotics and engineering, Livox Mid-40/Mid-100 is developed with cost-efficiency in mind while still maintaining superior performance.

Instead of using expensive laser emitters or immature MEMS scanners, Mid-40/Mid-100 adopts lower cost semiconductor components for light generation and detection. The entire optical system, including the scanning units, uses proven and readily available optical components such as those employed in the optical lens industry. This sensor also introduces a uniquely-designed low cost signal acquisition method to achieve superior performance. All these factors contribute to an accessible price point – $599 for a single unit of Mid-40.

Livox Mid-40/Mid-100 adopts a large-aperture refractive scanning method that utilizes a coaxial design. This approach uses far fewer laser-detector pairs, yet maintains high point density and detection distances. This design dramatically reduces the difficulty of optical alignment during production and enables a significant increase in production yield.

Powerful and Compact

The Mid-40 sensor covers a circular FOV of 38.4 degrees with a detection range of up to 260 meters (for objects with 80% reflectivity). Meanwhile, the Mid-100 combines three Mid-40 units internally to form an expansive FOV of 98.4 degrees (horizontal) x 38.4 degrees (vertical). The point rate for the Mid-40 is 100,000 points/s, while for the Mid-100 it is 300,000 points/s. The range precision (1σ @ 25 m) of each sensor is 2 cm and the angular accuracy is < 0.1 degrees.

Livox sensors' advanced non-repetitive scanning patterns deliver highly accurate detail. These scanning patterns provide high point density in a short period of time and build up even higher density as the scan duration increases. The Mid series can achieve the same or greater point density as conventional 32-line LiDAR sensors.

With this level of 3D sensing capability, Livox has optimized the hardware and mechanical design, so that a compact body of Mid sensors enables users to easily embed units into existing designs.

Reliable and Safe

All Livox LiDAR sensors are individually and thoroughly tested and are proven to work in a variety of environments. Every single unit has a false detection rate of less than one in ten thousand, even in 100 klx sunlight conditions[3]. Each sensor's laser power meets the requirements for a Class 1 laser product per IEC 60825-1 (2014) and is safe for human eyes[4]. The Mid-40/Mid-100 operate in temperatures between -4 degrees F and 149 degrees F (-20 degrees C to 65 degrees C) and reliably output point cloud data for objects with different reflectivity. Livox LiDAR does not use any moving electronic components, thus avoiding challenges such as slip ring failures, a common problem in conventional, rotating LiDAR units. Livox has also optimized the optoelectronic system, including software, firmware, and algorithms, enhancing environmental adaptation in a wide variety of conditions including rain, smoke, and fog.

Livox Horizon and Tele-15

Besides the Mid-40/Mid-100 sensors, Livox is currently working on extending its product portfolio with two additional LiDAR sensors, the Horizon and Tele-15.

The Livox Horizon is a high-performance LiDAR which offers a broader FOV with much higher coverage ratio while retaining all the key advantages of the Mid-40, such as long detection range, high precision, and a compact size. Compared with the Mid-40, the Horizon has a similar measuring range, but features a more-rectangular-shaped FOV that is 81.7 degrees horizontal and 25.1 degrees vertical, highly suitable for autonomous driving applications. The Horizon also delivers real-time point cloud data that is three times denser than the Mid series LiDAR sensors.

Made for advanced long-distance detection, the Livox Tele-15 offers the compact size, high-precision, and durability of the Mid-40 while vastly extending the real-time mapping range. This allows users to detect and avoid obstacles well in advance when moving at higher speeds.

As for the Tele-15, it features an ultra-long measuring range of 500 meters when reflectivity is at 80%. Even with 20% reflectivity, the measuring range is still up to 250 meters. In addition, the Tele-15 has a circular FOV of 15 degrees and delivers a point cloud that is 17 times denser than the Mid-40. These key features enable the Tele-15 to see objects far ahead with great details.

Livox Hub

The Livox Hub is a streamlined way to integrate and manage Livox LiDAR sensors and their data outputs. When using Livox Hub with our LiDAR SDK, you will have unified access to software and hardware, making the development process simplified and efficient. The Livox Hub can access up to 9 LiDAR sensors simultaneously and supports an input range of 10-23V.

Livox SDK

To unlock the full potential of LiDAR, the Livox SDK offers a wide range of essential tools that help users develop unique applications and algorithms. The Livox SDK supports various development platforms, such as C and C++ on Linux/Windows/ROS, and applies to all existing products such as the Livox Mid-40, Mid-100, Horizon, Tele-15, and Hub.

Drone LiDAR or Photogrammetry?


With the recent developments in the drone surveying space, there have been a lot of myths and misconceptions around UAV LiDAR and photogrammetry. In fact, these two technologies have as many differences as similarities. It is therefore essential to understand that they offer significantly different products, generate different deliverables and require different capture conditions, but most importantly they should be used for different use cases.

There is no doubt that, compared to traditional land surveying methods, both technologies offer results much faster and with a much higher data density (both techniques measure all visible objects with no interpolation). However, the selection of the best technology for your project depends on the use case, environmental conditions, delivery terms, and budget, among other factors. This post aims to provide a detailed overview of the strengths and limitations of LiDAR and photogrammetry to help you choose the right solution for your project.

HOW DO BOTH TECHNOLOGIES WORK?

Let’s start from the beginning and have a closer look into the science behind the two technologies.

LiDAR, which stands for Light Detection and Ranging, is a technology based on laser beams. It shoots out laser pulses and measures the time it takes for the light to return. It is a so-called active sensor, as it emits its own energy rather than detecting energy emitted or reflected from objects on the ground.

Photogrammetry, on the other hand, is a passive technology, based on images that are transformed from 2D into 3D cartometric models. It uses the same principle that human eyes or 3D videos do to establish depth perception, allowing the user to view and measure objects in three dimensions. The limitation of photogrammetry is that it can only generate points based on what the camera sensor can detect, illuminated by ambient light.

In a nutshell, LiDAR uses lasers to make measurements, while photogrammetry is based on captured images that can be processed and combined to enable measurements.

OUTPUTS OF LIDAR AND PHOTOGRAMMETRY SURVEYS

The main product of a LiDAR survey is a 3D point cloud. The density of the point cloud depends on the sensor characteristics (scan frequency and repetition rate), as well as the flight parameters. Assuming that the scanner is pulsing and oscillating at a fixed rate, the point cloud density depends on the flight altitude and speed of the aircraft.

Various use cases might require different point cloud parameters, e.g., for power line modeling you might want a dense point cloud with over 100 points per square meter, while for creating a Digital Terrain Model of a rural area 10 pts/m² could be good enough.

It is also important to understand that a LiDAR sensor only samples positions, without RGB, creating a monochrome dataset which can be challenging to interpret. To make it more meaningful, the data is often visualized using false color based on reflectivity or elevation.

[Image: Example of a point cloud before and after adding a color attribute. Courtesy of TerraSolid.]

It is possible to overlay color on the LiDAR data in post-processing based on images or other data sources however this adds some complexity to the process. The color can also be added based on classification (classifying each point to a particular type/group of objects, e.g., trees, buildings, cars, ground, electric wires).

Photogrammetry, on the other hand, can generate full-color 3D and 2D models (in various light spectra) of the terrain that are easier to visualize and interpret than LiDAR. The main outputs of photogrammetric surveys are raw images, orthophotomaps, Digital Surface Models and 3D point clouds created from stitching and processing hundreds or thousands of images. The outputs are very visual, with a pixel size (or Ground Sampling Distance) even below 1 cm.

[Image: Aerotriangulated images and the generated 3D point cloud. Screenshot from Pix4D software.]

With that in mind, photogrammetry seems to be the technology of choice for use cases where visual assessment is required (e.g., construction inspections, asset management, agriculture). LiDAR, on the other hand, has certain characteristics that make it important for particular use cases.

Laser beams as an active sensor technology can penetrate vegetation. LiDAR is able to get through gaps in the canopy and reach the terrain and objects below, so it can be useful for generating Digital Terrain Models.

LiDAR is also particularly useful for modeling narrow objects such as power lines or telecom towers, as photogrammetry might not recognize narrow and poorly visible objects. Besides, LiDAR can work in poor lighting conditions and even at night. Photogrammetry point clouds are more visual (each pixel has RGB), but often with generalized details, so photogrammetry might be appropriate for objects where a lower level of geometric detail is acceptable but visual interpretation is essential.

ACCURACY

Let's start by defining what accuracy is. In surveying, accuracy always has two dimensions: relative and absolute. Relative accuracy is a measure of how objects are positioned relative to each other. Absolute accuracy refers to the difference between the location of the objects and their true position on the Earth (this is why a survey can have high relative but low absolute accuracy).

[Image: Example of a terrestrial LiDAR scanner.]

LiDAR is one of the most accurate surveying technologies. This is particularly the case for terrestrial lasers where the sensor is positioned on the ground, and its exact location is measured using geodetic methods. Such a setup allows achieving sub-centimeter level accuracies.

Achieving a high level of accuracy with aerial LiDAR is, however, much more difficult, as the sensor is on the move. This is why an airborne LiDAR sensor is always coupled with an IMU (inertial measurement unit) and a GNSS receiver, which provide information about the position, rotation, and motion of the scanning platform. All of these data are combined on the fly and allow achieving high relative accuracy (1-3 cm) out of the box. Achieving high absolute accuracy requires adding 1-2 Ground Control Points (GCPs) and several checkpoints for verification purposes. In some cases, when additional GNSS positioning accuracy is needed, one can use advanced RTK/PPK UAV positioning systems.

Photogrammetry also allows achieving 1-3 cm level accuracies; however, it requires significant experience to select appropriate hardware and flight parameters and to process the data appropriately. Achieving high absolute accuracy requires using RTK/PPK technology and additional GCPs, or can be based purely on a large number of GCPs. Nonetheless, using a $500 DJI Phantom-class drone with several GCPs, you can easily achieve 5-10 cm absolute accuracy for smaller survey areas, which might be good enough for most use cases.

DATA ACQUISITION, PROCESSING, AND EFFICIENCY

There are also significant differences in acquisition speed between the two. In photogrammetry, one of the critical parameters required to process the data accurately is image overlap, which should be at the level of 60-90% (front and side), depending on the terrain structure and hardware applied. A typical LiDAR survey requires only 20-30% overlap between flight lines, which makes the data acquisition operations much faster.
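To make the difference concrete, here is a small, simplified calculation of how the required overlap translates into flight-line spacing and line count for a survey block; the swath width, block size, and overlap values are assumptions chosen purely for illustration.

```python
# How overlap translates into flight-line spacing and line count for a survey block.
# Simplified: flat terrain, parallel lines, fixed swath; all numbers are illustrative.
import math

def flight_lines_needed(block_width_m, swath_m, overlap_fraction):
    """Return (number of parallel flight lines, line spacing in meters)."""
    line_spacing = swath_m * (1.0 - overlap_fraction)
    extra_width = max(0.0, block_width_m - swath_m)   # the first line covers one swath
    return 1 + math.ceil(extra_width / line_spacing), line_spacing

# A 5 km wide block with a 700 m swath: 25% overlap (typical LiDAR)
# versus 80% side overlap (typical photogrammetry).
for overlap in (0.25, 0.80):
    lines, spacing = flight_lines_needed(5000, 700, overlap)
    print(f"overlap {overlap:.0%}: line spacing {spacing:.0f} m, {lines} flight lines")
```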

Additionally, for high absolute accuracy, photogrammetry requires more Ground Control Points to achieve LiDAR-level accuracy. Measuring GCPs typically requires traditional land surveying methods, which means additional time and cost.

Moreover, LiDAR data processing is very fast. Raw data require just a few minutes of calibration (5-30 min) to generate the final product. In photogrammetry, data processing is the most time-consuming part of the overall process. In addition, it requires powerful computers that can handle operations on gigabytes of images. The processing takes on average between 5 and 10 times longer than the data acquisition in the field.

On the other hand, for many use cases such as power line inspections, LiDAR point clouds require additional classification which might be very labor intensive and often needs expensive software (e.g., TerraScan).

COST

When we look at the overall cost of LiDAR and photogrammetry surveys, there are multiple cost items to be considered. First of all, the hardware. UAV LiDAR sensor sets (scanner, IMU, and GNSS) cost between $50,000 and $300,000, and for most use cases the high-end devices are preferable. When you invest so much in a sensor, you don't want to crash it accidentally. With that in mind, most users spend an additional $25,000-$50,000 for an appropriate UAV platform. It all adds up to as much as $350,000 for a single surveying set, which is equivalent to five Tesla Model S cars. Quite pricey.

For photogrammetry, all you need is a camera-equipped drone, and these tend to be much cheaper. In the $2,000-$5,000 range, you can find a wide selection of professional multirotor devices such as the DJI Inspire. In the $5,000-$20,000 range, you can buy RTK/PPK-enabled sets such as the DJI Matrice 600 or fixed-wing devices like the senseFly eBee and PrecisionHawk Lancaster.

Another cost item is processing software. In the case of LiDAR, it is typically included for free by the sensor manufacturer. However, post-processing, e.g., point cloud classification, might require third-party software such as TerraScan, which costs $20,000-$30,000 for a single license. Photogrammetry software prices are closer to the level of $200 a month per license.

Obviously, another important factor that influences the cost of the service is labor and time. Here, LiDAR has a significant advantage over photogrammetry, as it requires significantly less time not only to process the data but also to lay out and mark GCPs.

Overall, depending on the use case and business model photogrammetry services are typically cheaper than LiDAR simply because the investment in the hardware has to be amortized. However, in some cases, the efficiency gains that come with LiDAR can compensate for the sensor cost.

CONCLUSIONS

When comparing LiDAR and photogrammetry, it is key to understand that both technologies have their applications as well as limitations, and in the majority of use cases they are complementary. Neither of these technologies is better than the other, and neither of them will cover all the use cases.

LiDAR should certainly be used for surveying narrow structures such as power lines or telecom towers and for mapping areas below the tree canopy. Photogrammetry will be the best option for projects that require visual data, e.g., construction inspections, asset management, agriculture. For many projects, both technologies can bring valuable data (e.g., mines or earthworks), and the choice of method depends on the particular use case as well as time, budget, and capturing conditions, among other factors.

LiDAR and photogrammetry are both powerful technologies if you use them the right way. It is clear that with decreasing prices of hardware and software they will become more and more accessible. Both technologies are still in their early days when it comes to UAV applications, and in the following years we will undoubtedly witness further disruptions (especially when it comes to hardware prices and machine learning software automation). Stay tuned. We will keep you posted.

How Accurate is LiDAR?


LiDAR is an acronym for Light Detection and Ranging. It is an active remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light. It is similar to RADAR, but instead of using radio signals it uses laser pulses. LiDAR uses infrared, ultraviolet and visible light to map out and image objects. By illuminating the target using a laser beam, a 3D point cloud of the target and its surroundings can be generated. Three types of information can be obtained using LiDAR:

• Range to target (Topographic LiDAR)

• Chemical Properties of target (Differential Absorption LIDAR)

• Velocity of Target (Doppler LiDAR)

History of LiDAR

The initial attempts were made in the early 1930s to measure air density profiles in the atmosphere by determining the scattering intensity from searchlight beams. LiDAR itself was first created in 1960, shortly after the invention of the laser. The first attempts combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. The first LiDAR application came in meteorology, where the National Center for Atmospheric Research used it to measure clouds.

LiDAR's accuracy and usefulness were first made apparent to the public in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the topography of the moon. The first commercial airborne LiDAR system was developed in 1995.

Accuracy of LiDAR

The accuracy of LiDAR technology is no longer in doubt. LiDAR is applied in a number of fields across many industries. The most common applications of LiDAR are in forestry and agriculture and, most recently, in autonomous cars. For driverless cars, for instance, the accuracy of LiDAR is relied upon in the sense that manufacturers of these cars trust the technology to maintain order and avoid incidents on the road. Autonomous cars depend on laser pulses to measure the distance between the vehicle and any nearby vehicle. The laser pulses are transmitted at the speed of light towards an object, and the time taken for the laser pulses to hit the target is recorded. The laser pulses are then reflected back to the sensor, and the time taken for the reflected pulse to return is also recorded.

This cycle is repeated a number of times, and the distance between the vehicle and the object can then be calculated. As the distance between the vehicle and the object decreases, the vehicle's onboard systems are able to decide whether or not to apply the brakes.
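As a toy illustration of that repeated-ranging logic (the numbers are invented for the example and this is not any manufacturer's implementation):

```python
# Toy illustration of repeated ranging: two successive time-of-flight measurements
# give range and closing speed, which can then be compared against a braking
# threshold. Purely illustrative; numbers are made up.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_m(round_trip_s):
    """One-way distance from a round-trip pulse time."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def closing_speed_m_s(first_trip_s, second_trip_s, dt_s):
    """Positive when the object is getting closer between the two measurements."""
    return (range_m(first_trip_s) - range_m(second_trip_s)) / dt_s

# Two pulses 0.1 s apart; the echo returns slightly sooner the second time.
r_now = range_m(3.86e-7)
v_closing = closing_speed_m_s(4.00e-7, 3.86e-7, 0.1)
print(f"range {r_now:.1f} m, closing at {v_closing:.1f} m/s")
```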

A better understanding of the accuracy of LiDAR is perhaps best illustrated by the speed guns often used by police. These speed guns employ LiDAR technology to accurately determine the speed of approaching vehicles. Previously, radar was used to acquire these speeds, but the accuracy of that system was always in doubt. Radar shoots out a short, high-intensity burst of high-frequency radio waves in a cone-shaped pattern. Officers who have been through the painfully technical 40-hour Doppler radar training course know it will detect a variety of objects within that cone pattern, such as the closest target, the fastest moving target or the largest target. Officers are trained to differentiate and properly match targets down range to the radar readings they receive. Under most conditions, skilled users get good results with radar, and it is found to be most effective for open stretches of roadway. But for more congested areas, locking radar on a specific target is more difficult.

Experts opine that laser systems are more accurate than other systems, including radar, when it comes to providing traffic and speed analysis. A laser can point at a specific vehicle in a group while radar cannot. A laser beam is a mere 18 inches wide at 500 feet, compared to a radar beam's width of some 150 feet.

Source: http://lidarradar.com

Can drones be utilized in construction for creating accurate BIM models?


Not many years ago, people who thought they were being constantly watched by someone or something were labeled paranoid. But that is not the case right now; times have changed and we live in a world where there are flying cameras watching over human activities. We have seen these flying cameras during sports events, concerts and even some wedding receptions.

You could call it a plane without a pilot or a flying remote-controlled toy camera, but how do we define them in a surveying and engineering context? Technically speaking, they may be identified as tools that capture useful digital data and images from a different perspective. These systematic images are then used to create a 3D model, point cloud or Digital Terrain Model (DTM). The DTM statistics are extremely useful for generating 3D renderings of any location in a described area, and they come in handy for engineers working in fields like geodesy and surveying, geophysics, and geography.

All this cumulatively contributes to elevated efficiency levels during the different phases of construction engineering. Construction is a one-of-a-kind industry, where even small gains in efficiency and flexibility can reap billions in savings. With that in mind, it's no real surprise that engineers are slowly embracing the so-called “Drone Revolution”. Now, UAVs are starting to dominate all four stages of architectural engineering, namely: the pre-construction stage, the construction stage, the post-construction stage and, finally and most significantly, the ongoing safety and maintenance stage.

Pre-construction Stage

During the pre-construction stage, the project is in its budding stage and the whole design is nourished slowly and carefully by the architects. The paramount activity during this stage is land survey documentation. Drones can provide precise and speedy overviews of large sites and high-risk areas, thereby ensuring that the documentation of land conditions is precise. This data can further be used for scheduling and planning the construction activities that are to happen at the location.

In conventional point cloud methods, there are possibilities of uneven topography due to certain occlusions in the line of sight, but the bird's-eye view advantage of drones ensures the generation of data across an entire region with consistent accuracy and density. This data can even be used to create a Building Information Model (BIM) which clearly shows how the building is going to look after the whole construction process is done, which is very beneficial from the designer's point of view.

Construction Stage

During the construction stage, there are innumerable difficulties to be dealt with. One such difficulty is the proper documentation of the project progress schedule. Usually, there would be a site manager traversing the site, capturing photographs at random points and then preparing the whole site report based on these limited photographs. Needless to say, the report would be defective and insufficient. But with the introduction of UAVs into the construction industry, a series of high-definition aerial shots and videos can be easily captured so as to get better insight into the progress that has occurred without actually being on-site. The real-time data acquired by light-detecting sensors mounted on the drones can help create point clouds or Building Information Models (BIMs) which can be directly fed into Autodesk products (such as BIM 360, Inventor, AutoCAD and Revit) for early damage detection procedures, quality management exercises and other asset evaluation techniques. The point clouds or BIM models can be further used to retrieve relevant information at will.

Post Construction Stage

The post-construction stage can be just as problematic as the construction stage. Evaluation of high-rise buildings and other complex structures is often a tedious task with the naked eye. Inspecting a building roof using a UAV multi-rotor system is more economical and safer than using conventional methods. Like laser systems, drones can also be used to capture aerial thermal images to locate potential hot and cold spots in a building, but their 4K image quality gives them an upper hand over lower-quality laser-scanned images. This aesthetic dominance that drones have over conventional laser methodologies is certainly a boon when considering a marketing angle as well. There's undeniably no better way to advertise a new project than a top-down view from a bird's-eye point of view. An engaging walk-through project video is a delightful way to introduce key personnel to the project and get them on board.

Ongoing Safety and Maintenance Stage

The role of UAVs in implementing a safe and secure work atmosphere is the one salient feature that stands out, and this is simply the reason why drones have become a household name for safety inspectors on large construction sites. Often, in multi-million-dollar projects, the officer in charge may not always be around, and this is where the live video coverage from drones strikes gold. The live feeds can be accessed by supervisors easily even from a remote location, thereby enabling routine asset inspections, fatigue and damage evaluations and condition surveys at all times. Keeping all this in mind, it is no wonder that UAVs are nicknamed the new onsite “BOSS”.

As research has demonstrated, in the coming years we will undoubtedly witness drones spearheading the construction industry. With our eyes and ears virtually in the sky, it's already quite effortless to identify contradictions in the ongoing process, and in addition we can see how aesthetically appealing the buildings are turning out. The control and planning aspects of the construction process have also witnessed considerable changes which were practically unfeasible a few years back. The money and time saved with the support of drones are going to be immeasurable in the future. In short,

Drones can be used for:

  1. Land survey and site inspection during pre-construction stage
  2. Building Information Modeling (BIM) and Point cloud scanning
  3. Marketing and promotional photography during and after construction
  4.  Monitoring and tracking onsite activities thereby ensuring accurate work flow
  5. Ensuring routine asset inspections and safety measures at all times

Thus, it is safe to claim that the notion of ‘technology-integrated construction’ has advanced by leaps and bounds with the introduction of drones into the construction industry!

 

Source: http://www.advenser.com/blog/
