Laser Scanning Technology and Its Advantages in Construction Industry
Laser scanning is a method of collecting external data using a laser scanner, which captures the actual distances of densely scanned points over a given object at very high speed. The process is often known as a point cloud survey or as light detection and ranging (LiDAR, a portmanteau of ‘light’ and ‘radar’).
Laser scanning is currently gaining momentum in the construction industry for its ability to help building teams collect large volumes of highly accurate information in a very short span of time. When done well, laser scanning can benefit all parties involved in the life cycle of a project.
The laser scanning method can be used to create 3D representations that can be converted for use in 3D CAD modeling or BIM (Building Information Modeling).
While the construction industry has been relatively slow to adopt newer technology, designers and construction professionals are challenging themselves to complete projects at a rapid pace using technologies like BIM and custom-designed apps. 3D laser scanning is still in its adoption phase and less widely promoted, though the AEC industry is now noticing the boost that laser scanning can bring to its projects.
Accuracy:
Laser scanning technology proves to be much quicker, more accurate and less expensive than traditional survey measurement. The accuracy of the process depends on the stability of the instrument base and the distance from the object.
Benefits of using the Laser Scanning Technology in Construction Industry
Laser scanning has been a boon to the construction industry, allowing a level of detail and accuracy that was not feasible with traditional methods. Let’s have a brief look at the benefits of implementing laser scanning technology to build in a smarter way.
Enhanced Planning and Designing
Using the laser scanning method brings a tremendous boost to planning and designing. Clashes between newly designed elements and existing conditions can be analyzed before construction begins. The exact dimensions obtained from laser scans can also improve planning by providing precise measurements for demolition and removal of components, as well as help minimize material waste.
Reduction in cost and Schedule
It has been reported that 3D scanning can cut total project cost by 5% to 7%. Scanning can be performed in a few hours to a few days, depending on the site, compared to the several weeks required by traditional data collection methods.
Safety and Regulatory Agreement
Laser scanning methods are often safer than manual data collection and are increasingly used to help comply with health, safety, and environmental responsibilities. Features such as remote sensing and quick data capture reduce teams’ exposure to harmful environments. For example, when used in nuclear power plants, laser scanners help reduce both the size of the crew and the time it spends in high-radiation areas.
Laser scanning provides effective methods for surveying remote surfaces, and complex geometrical surfaces can also be surveyed with ease. All the major providers of 3D CAD modeling and BIM software have built in compatibility that allows their systems to import point cloud data into 3D visual graphic material.
The use of drones with laser scanning has become a recognized method of capturing exact topographic detail. LiDAR has been widely used for surveys from rail and road vehicles. The instrument can easily operate at night, when the targeted surfaces suffer less interference from people, and can produce outstanding accuracy.
How LiDAR is Being Used to Help With Natural Disaster Mapping and Management
Michael Shillenn, vice president and program manager with Quantum Spatial outlines three projects where LiDAR data from the USGS 3D Elevation Program (3DEP) has been used to assist in planning, disaster response and recovery, and emergency preparedness.
This month the United States Geological Survey (USGS) kicks off the fourth year of its grant process that supports the collection of high-resolution topographic data using LiDAR under its 3D Elevation Program (3DEP). The 3DEP program stemmed from the growing national need for standards-based 3D representations of natural and constructed above-ground features, and provides valuable data and insights to federal and state agencies, as well as municipalities and other organizations across the U.S. and its territories.
With geospatial data collected through 3DEP, these agencies and organizations can mitigate flood risk, manage infrastructure and construction projects, conserve national resources, mitigate hazards and ensure they are prepared for natural and manmade disasters.
Here’s a look at three projects undertaken by Quantum Spatial Inc. on behalf of various government agencies, explaining how the LiDAR data collected has been used to support hurricane recovery and rebuilding efforts, provide risk assessments for potential flooding and address potential volcanic hazards.
Hurricane Sandy Disaster Response and Recovery
Hurricane Sandy was one of the deadliest and most destructive hurricanes of the 2012 Atlantic hurricane season, impacting 24 states, including the entire Eastern seaboard from Florida to Maine. The Disaster Relief Appropriations Act of 2013 enabled the USGS and National Oceanic and Atmospheric Administration (NOAA) to support response, recovery and mitigation of damages caused by Hurricane Sandy.
As a result, USGS and NOAA coordinated the collection of high-resolution topographic and bathymetric elevation data using LiDAR technology along the eastern seaboard from South Carolina to Rhode Island covering coastal and inland areas impacted by the storm. This integrated data is supporting scientific studies related to:
Hurricane recovery and rebuilding activities;
Vulnerability assessments of shorelines to coastal change hazards, such as severe storms, sea-level rise, and shoreline erosion and retreat;
Validation of storm-surge inundation predictions over urban areas;
Watershed planning and resource management; and
Ecological assessments.
The elevation data collected during this project has been included in the 3DEP repository, as well as NOAA’s Digital Coast — a centralized, user-friendly and cost-effective information repository developed by the NOAA Office for Coastal Management for the coastal managers, planners, decision-makers, and technical users who are charged to manage the nation’s coastal and ocean resources to sustain vibrant coastal communities and economies.
In this image, you’ll see a 3D LiDAR surface model colored by elevation centered on the inlet between Bear and Browns Island, part of North Carolina’s barrier islands south of Emerald Isle in Onslow Bay. The Back Bay marshlands and Intracoastal Waterway are also clearly defined in this data.
Flood Mapping and Border Security along the Rio Grande River
Not only is flooding one of the most common and costly disasters, flood risk also can change over time as a result of development, weather patterns and other factors. The Federal Emergency Management Agency (FEMA) works with federal, state, tribal and local partners across the nation to identify and reduce flood risk through the Risk Mapping, Assessment and Planning (Risk MAP) program. Risk MAP leverages 3DEP elevation data to create high-quality flood maps and models. The program also provides information and tools that help authorities better assess potential risk from flooding and supports planning and outreach to communities in order to help them take action to reduce (or mitigate) flood risk.
This image depicts a 3D LiDAR surface model, colored by elevation, for a portion of the City of El Paso, Texas. U.S. and Mexico territory, separated by the Rio Grande River, is shown. Centered in the picture is the Cordova Point of Entry Bridge crossing the Rio Grande. The US Customs and Border Protection, El Paso Port of Entry Station is prominently shown on the north side of the bridge. Not only does this data show the neighborhoods and businesses that could be impacted by flooding, but also it provides up-to-date geospatial data that may be valuable to border security initiatives.
Disaster Preparedness Around the Glacier Peak Volcano
The USGS has a Volcano Hazards Program designed to advance the scientific understanding of volcanic processes and lessen the harmful impacts of volcanic activity. This program monitors active and potentially active volcanoes, assesses their hazards, responds to volcanic crises and conducts research on how volcanoes work.
Through 3DEP, USGS acquired LiDAR data of Glacier Peak, the most remote, and one of the most active, volcanoes in the state of Washington. The terrain information provided by LiDAR enables scientists to get an accurate view of the land, even in remote, heavily forested areas. This data helps researchers examine past eruptions, prepare for future volcanic activity and determine the best locations for installing real-time monitoring systems. The LiDAR data is also being used to design a real-time monitoring network at Glacier Peak in preparation for installation in subsequent years, at which time the USGS will be able to better monitor activity and forecast eruptions.
This image offers a view looking southeast at Glacier and Kennedy Peaks and was created from the gridded LiDAR surface, colored by elevation.
In emergent tech sectors it’s common to find start-up companies which are not in command of the core skills required to do their work.
That’s certainly true in the scanning sector.
Like many new industries, the sector’s infrastructure barely exists, and for customers, finding a credible service provider can be challenging.
New sectors are like Wild West towns in the 1870s – there may or may not be a sheriff, a marshal, or a judge, and they may or may not have a grip on local law enforcement; they may or may not be well-versed in the law and they’re operating a long way from established civil society.
For customers, it can be a bit of a nightmare, so here are a few of the things to look out for.
1. Tech manufacturers need to sell their tech and software, and their key messages often emphasise ease of use and application. Most tech-competent people can get up and running at an extremely basic level, but not with the performance requirements of a company offering professional services for a fee.
It’s only easy to use if you already have high-level, intuitive tech skills! (We have in-house tech staff who have developed our own training and operational hubs and can do everything, including adapting software and writing code.)
2. There are no generic, industry-wide, approved training courses. Manufacturers (Leica, Z+F, Faro etc.) operate differently and manufacture different types of equipment that generate different outputs. Inexperienced companies get confused by this and overwhelmed by the need to learn different systems.
3. People are learning scanning in all sorts of ways – from YouTube and other online sources, and by trial and error (we train scanning operators and those doing post-production in-house, and we’re developing clear training protocols).
4. There’s a divide in the industry, between tech-led firms that have little understanding of BIM / FM / construction / design and build / project management etc; and firms led by people who have all these skills, but whose tech skills are deficient (we have employees with extensive skills and experience in both areas).
5. The entry level costs are peanuts by the standards of many industries; but for the individual sole-trader start-up entrepreneurs who so often pioneer new markets, they’re horrendous! No qualifications are required and most customers have little experience of the tech and haven’t yet acquired the skills to differentiate between a credible and non-credible offer.
Capital apart, the barriers to entry are, therefore, low, and many of the bottom-feeders are people with low-level business skills. (We have a successful, profitable, 17-year old company with great credit references and access to capital).
6. If you’re an individual sole-trader entrepreneur buying all the equipment and software you need to run a scanning company, you’ll need a fair number of customers – very quickly – to pay back your start-up costs, so you need to have the skills to grow a business to scale fast.
Why fast? Because the tech is developing so fast that within months / a year or so you’ll need to invest heavily again, to keep pace – and you can’t do that if you haven’t paid down your initial investment or can’t raise the capital (we were doing scanning projects across France, Germany and the UK within weeks of starting our own operation; we have access to capital, we have a healthy income stream, and we invested in our second generation of technology before even launching a dedicated scanning company).
7. New entrants sometimes fail to realise that one day of scanning could require multiple days of post-production in order to produce something that is useable by most clients.
If you don’t do that, you’ll give the client an output which is clunky, data-heavy, difficult to use and awkward. They won’t get the most out of it, they won’t use it, they’ll consider the money wasted, and they won’t come back for a second bite (we have skilled in-house staff who can process scanned data quickly and make it very easy for customers to use).
8. The software that ‘comes in the box’ with a scanner isn’t sufficient for a professional operation. You need to apply other software, and probably adapt or write software for your specific requirements.
Writing code is simply beyond many of the people who go into the business (Xmo Strata has always been an IT-literate company and we have introduced numerous IT-led customer solutions; writing code is second nature to our IT staff).
9. Scanning is essentially a professional services operation. Some of the companies in the field simply don’t have the customer-facing communications skills required; their founders have come out of corporates, are used to being on the ‘client’ side of the table, and the cultural shift is beyond them (we have run a professional Business-To-Business service company for 17 years).
10. The initial cost of a basic scanner and everything you need to set-up is below six figures, if you do it on the cheap. But some of those in the sector have no experience of running an SME (Small to Medium Sized Enterprise) and think that they can set-up for the cost of the equipment alone.
Undercapitalised companies don’t last very long. If they haven’t provided customers with useable outputs (which is frequently the case) all the work disappears when they go out of business (we have extensive business experience covering not only our own companies but major brands).
The truth is that providing 3D digital scanning and digital modelling as a professional service is not for amateurs; but a generation of amateurs may have to come, and go, along with their customers’ money and much of the work customers have paid for, before that lesson is properly understood.
By Steve Martin, Managing Director, Xmo Strata and Managing Director, SpectisGB
How long does it take to do a scan and provide a client with the output they want?
Like so many services … it depends.
A room that is a simple box can be scanned and processed quickly and easily, but a room (of the same size or volume) that is not a simple box, and which contains internal walls and pillars, permanent fixtures and fittings, equipment etc., may take longer.
Complexity takes time (but is often the reason why scanning is so important).
Here are some of the things that will affect the timing:
1. The size of the area to be scanned.
2. The shape and dimensions of the architecture / interior design / equipment being scanned.
3. Whether a building has to be scanned both inside and out.
4. Are colour scans required – or is black and white sufficient?
5. The number of floors, rooms, corridors, staircases and internal spaces.
6. Whether the roof has to be scanned; how that will be done; and whether the roof is a simple shape or a complex one.
7. Whether ceiling voids have to be scanned, and whether they can be scanned from a single location (internal ‘shaped’ ceilings, around dormer windows or with recesses, may require additional locations).
8. The number of different scan locations the technician has to use in order to cover the requirement.
9. The precise output required.
10. The distance between locations, if there is more than one.
Remember that the scanning is only one part of the operation.
In order to make the scans useable for a client, post-production work is required – and again, how long that takes will depend on the variables of the project (perhaps as little as a day or so, perhaps as much as five days for every day of scanning).
Your service provider will be able to give more information once they have a full brief.
By Steve Martin, Managing Director, Xmo Strata and Managing Director, SpectisGB
With the recent developments in the drone surveying space, there have been a lot of myths and misconceptions around UAV LiDAR and photogrammetry. In fact, these two technologies have as many differences as similarities. It is therefore essential to understand that they offer significantly different products, generate different deliverables and require different capture conditions; most importantly, they should be used for different use cases.
There is no doubt that, compared to traditional land surveying methods, both technologies offer results much faster and with much higher data density (both techniques measure all visible objects with no interpolation). However, the selection of the best technology for your project depends on the use case, environmental conditions, delivery terms, and budget, among other factors. This post aims to provide a detailed overview of the strengths and limitations of LiDAR and photogrammetry to help you choose the right solution for your project.
HOW DO BOTH TECHNOLOGIES WORK?
Let’s start from the beginning and have a closer look into the science behind the two technologies.
LiDAR, which stands for Light Detection and Ranging, is a technology based on laser beams. It shoots out laser pulses and measures the time it takes for the light to return. It is a so-called active sensor, as it emits its own energy source rather than detecting energy emitted from objects on the ground.
Photogrammetry, on the other hand, is a passive technology based on images that are transformed from 2D into 3D cartometric models. It uses the same principle that human eyes or 3D videos do to establish depth perception, allowing the user to view and measure objects in three dimensions. The limitation of photogrammetry is that it can only generate points based on what the camera sensor can detect under ambient light.
In a nutshell, LiDAR uses lasers to make measurements, while photogrammetry is based on captured images that can be processed and combined to enable measurements.
OUTPUTS OF LIDAR AND PHOTOGRAMMETRY SURVEYS
The main product of a LiDAR survey is a 3D point cloud. The density of the point cloud depends on the sensor characteristics (scan frequency and repetition rate) as well as the flight parameters. Assuming that the scanner is pulsing and oscillating at a fixed rate, the point cloud density depends on the flight altitude and speed of the aircraft.
Various use cases might require different point cloud parameters, e.g., for power line modeling you might want a dense point cloud with over 100 points per square meter, while for creating a Digital Terrain Model of a rural area 10 pts/m² could be good enough.
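To show how these parameters interact, the sketch below estimates average point density for a single flight line from pulse rate, speed, altitude and scanner field of view. All numbers are assumed for illustration; a real mission plan must also account for the scan pattern, multiple returns and line overlap.

```python
import math

def swath_width_m(altitude_m: float, fov_deg: float) -> float:
    """Width of the scan swath on flat ground for a given scanner field of view."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def point_density_ppm2(pulse_rate_hz: float, speed_mps: float,
                       altitude_m: float, fov_deg: float) -> float:
    """Average points per square metre along a single flight line.

    First-order estimate only: assumes one return per pulse and ignores
    the scan pattern, edge effects and overlap between lines.
    """
    return pulse_rate_hz / (speed_mps * swath_width_m(altitude_m, fov_deg))

# Illustrative mission: 300 kHz scanner, 60 m AGL, 8 m/s UAV, 70-degree FOV
density = point_density_ppm2(300_000, 8.0, 60.0, 70.0)  # roughly 450 pts/m^2
```

With these (assumed) settings the power-line target of 100+ pts/m² is comfortably exceeded; flying higher or faster lowers the density proportionally.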
It is also important to understand that a LiDAR sensor only samples positions, without RGB, creating a monochrome dataset which can be challenging to interpret. To make it more meaningful, the data is often visualized in false color based on reflectivity or elevation.
It is possible to overlay color on the LiDAR data in post-processing, based on images or other data sources; however, this adds some complexity to the process. Color can also be added based on classification (assigning each point to a particular type or group of objects, e.g., trees, buildings, cars, ground, electric wires).
Photogrammetry, on the other hand, can generate full-color 3D and 2D models (in various light spectra) of the terrain that are easier to visualize and interpret than LiDAR output. The main outputs of photogrammetric surveys are raw images, orthophotomaps, Digital Surface Models and 3D point clouds created by stitching and processing hundreds or thousands of images. The outputs are very visual, with a pixel size (or Ground Sampling Distance) even below 1 cm.
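Ground Sampling Distance follows directly from camera geometry and flight altitude. A minimal sketch using the standard GSD formula; the camera figures below are illustrative assumptions, not values from the article:

```python
def ground_sampling_distance_cm(sensor_width_mm: float, focal_length_mm: float,
                                altitude_m: float, image_width_px: int) -> float:
    """GSD in centimetres per pixel for a nadir (straight-down) photograph.

    GSD = (sensor width * altitude) / (focal length * image width in pixels),
    converted to centimetres.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative small-drone camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide
gsd_cm = ground_sampling_distance_cm(13.2, 8.8, 60.0, 5472)  # ~1.6 cm/px at 60 m
```

Halving the flight altitude halves the GSD, which is how sub-centimetre pixels are reached on low-level flights.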
With that in mind, photogrammetry seems to be the technology of choice for use cases where visual assessment is required (e.g., construction inspections, asset management, agriculture). LiDAR, on the other hand, has certain characteristics that make it important for particular use cases.
Because it is an active sensing technology, LiDAR’s laser beams can penetrate vegetation: pulses get through gaps in the canopy and reach the terrain and objects below, which makes LiDAR useful for generating Digital Terrain Models.
LiDAR is also particularly useful for modeling narrow objects such as power lines or telecom towers, as photogrammetry might not recognize narrow and poorly visible objects. Besides, LiDAR can work in poor lighting conditions and even at night. Photogrammetry point clouds are more visual (each point carries RGB), but often with generalized details, so photogrammetry is appropriate for objects where a lower level of geometric detail is acceptable but visual interpretation is essential.
ACCURACY
Let’s start by defining what accuracy is. In surveying, accuracy always has two dimensions: relative and absolute. Relative accuracy measures how objects are positioned relative to each other. Absolute accuracy refers to the difference between the location of the objects and their true position on the Earth (this is why a survey can have high relative but low absolute accuracy).
LiDAR is one of the most accurate surveying technologies. This is particularly the case for terrestrial lasers where the sensor is positioned on the ground, and its exact location is measured using geodetic methods. Such a setup allows achieving sub-centimeter level accuracies.
Achieving a high level of accuracy with aerial LiDAR is, however, much more difficult, as the sensor is on the move. This is why an airborne LiDAR sensor is always coupled with an IMU (inertial measurement unit) and a GNSS receiver, which provide information about the position, rotation, and motion of the scanning platform. All of these data are combined on the fly and allow achieving high relative accuracy (1-3 cm) out of the box. Achieving high absolute accuracy requires adding 1-2 Ground Control Points (GCPs) and several checkpoints for verification purposes. In some cases, when additional GNSS positioning accuracy is needed, one can use advanced RTK/PPK UAV positioning systems.
Photogrammetry also allows achieving 1-3 cm level accuracies; however, it requires significant experience to select appropriate hardware and flight parameters and to process the data appropriately. Achieving high absolute accuracy requires using RTK/PPK technology with additional GCPs, or can be based purely on a large number of GCPs. Nonetheless, using a $500 DJI Phantom-class drone with several GCPs, you can easily achieve 5-10 cm absolute accuracy for smaller survey areas, which might be good enough for most use cases.
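Absolute accuracy figures like the ones above are normally verified by comparing checkpoint elevations surveyed on the ground against the same points sampled from the delivered model and reporting the RMSE. A minimal sketch, with made-up checkpoint values:

```python
import math

def vertical_rmse(surveyed_z, model_z):
    """Root-mean-square error between checkpoint elevations measured on the
    ground and the same points sampled from the point cloud or DEM."""
    diffs = [m - s for s, m in zip(surveyed_z, model_z)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical checkpoints (metres): ground survey vs. delivered model
gcp_z   = [102.31, 98.75, 101.10, 99.42]
model_z = [102.29, 98.79, 101.06, 99.45]
rmse_m = vertical_rmse(gcp_z, model_z)  # a few centimetres in this example
```

In practice many more checkpoints are used, and horizontal RMSE is computed the same way from easting/northing differences.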
DATA ACQUISITION, PROCESSING, AND EFFICIENCY
There are also significant differences in acquisition speed between the two. In photogrammetry, one of the critical parameters required to process the data accurately is image overlap, which should be at the level of 60-90% (front and side), depending on the terrain structure and hardware applied. A typical LiDAR survey requires only 20-30% overlap between flight lines, which makes data acquisition operations much faster.
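The effect of overlap on mission length is simple geometry: the spacing between adjacent flight lines is the ground footprint reduced by the required side overlap. A minimal sketch with an assumed 100 m footprint:

```python
def line_spacing_m(footprint_width_m: float, side_overlap: float) -> float:
    """Distance between adjacent flight lines for a given side overlap (0-1)."""
    return footprint_width_m * (1.0 - side_overlap)

# Same 100 m ground footprint, typical overlap requirements from the text
photo_spacing = line_spacing_m(100.0, 0.70)  # photogrammetry: 30 m between lines
lidar_spacing = line_spacing_m(100.0, 0.25)  # LiDAR: 75 m between lines
# Over the same area the LiDAR mission needs roughly 2.5x fewer flight lines.
```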
Additionally, for high absolute accuracy, photogrammetry requires more Ground Control Points to achieve LiDAR-level accuracy. Measuring GCPs typically requires traditional land surveying methods, which means additional time and cost.
Moreover, LiDAR data processing is very fast. Raw data requires just a short calibration step (5-30 min) to generate the final product. In photogrammetry, data processing is the most time-consuming part of the overall process. In addition, it requires powerful computers that can handle operations on gigabytes of images. The processing takes on average between 5 and 10 times longer than the data acquisition in the field.
On the other hand, for many use cases such as power line inspections, LiDAR point clouds require additional classification which might be very labor intensive and often needs expensive software (e.g., TerraScan).
COST
When we look at the overall cost of LiDAR and photogrammetry surveys, there are multiple cost items to consider. First of all, the hardware. UAV LiDAR sensor sets (scanner, IMU, and GNSS) cost between $50,000 and $300,000, and for most use cases the high-end devices are preferable. When you invest so much in a sensor, you don’t want to crash it accidentally. With that in mind, most users spend an additional $25,000-$50,000 on an appropriate UAV platform. It can all add up to $350,000 for a single surveying set, the equivalent of five Tesla Model S sedans. Quite pricey.
For photogrammetry, all you need is a camera-equipped drone, and these tend to be much cheaper. In the $2,000-$5,000 range, you can find a wide selection of professional multirotor devices such as the DJI Inspire. At the $5,000-$20,000 price level, you can buy RTK/PPK-enabled sets such as the DJI Matrice 600, or fixed-wing devices such as the senseFly eBee and PrecisionHawk Lancaster.
Another cost item is processing software. In the case of LiDAR, it is typically included for free by the sensor manufacturer. However, post-processing, e.g., point cloud classification, might require 3rd-party software such as TerraScan, which costs $20,000-$30,000 for a single license. Photogrammetry software prices are closer to the level of $200 a month per license.
Obviously, another important factor that influences the cost of the service is labor and time. Here, LiDAR has a significant advantage over photogrammetry, as it requires significantly less time not only to process the data but also to lay out and mark GCPs.
Overall, depending on the use case and business model photogrammetry services are typically cheaper than LiDAR simply because the investment in the hardware has to be amortized. However, in some cases, the efficiency gains that come with LiDAR can compensate for the sensor cost.
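The amortization trade-off above can be made concrete with a toy per-project cost model. Every figure below is an assumption chosen for illustration, not data from the article:

```python
def cost_per_project(hardware_usd: float, projects_over_lifetime: int,
                     labour_usd_per_project: float) -> float:
    """Amortized hardware cost plus labour for a single survey project."""
    return hardware_usd / projects_over_lifetime + labour_usd_per_project

# Assumed figures: 200 projects over the hardware's life; the LiDAR kit is
# expensive, but faster processing and fewer GCPs mean cheaper labour.
lidar_cost = cost_per_project(350_000, 200, 1_500)
photo_cost = cost_per_project(10_000, 200, 4_000)
# Under these assumptions the labour saving more than offsets the amortized
# sensor cost; with fewer projects or smaller savings, the balance flips.
```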
CONCLUSIONS
When comparing LiDAR and photogrammetry, it is key to understand that both technologies have their applications as well as limitations, and in the majority of use cases they are complementary. Neither technology is better than the other, and neither will cover all use cases.
LiDAR should certainly be used for surveying narrow structures such as power lines or telecom towers and for mapping areas below the tree canopy. Photogrammetry will be the best option for projects that require visual data, e.g., construction inspections, asset management, agriculture. For many projects, both technologies can bring valuable data (e.g., mines or earthworks), and the choice of method depends on the particular use case as well as time, budget, and capture conditions, among other factors.
LiDAR and photogrammetry are both powerful technologies if you use them the right way. It is clear that, with decreasing prices of hardware and software, they will become more and more accessible. Both technologies are still in their early days when it comes to UAV applications, and in the following years we will undoubtedly witness further disruptions (especially when it comes to hardware prices and machine-learning software automation). Stay tuned. We will keep you posted.
LiDAR is an acronym for Light Detection and Ranging. It is an active remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light. It is similar to RADAR, but instead of using radio signals it uses laser pulses. LiDAR relies on infrared, ultraviolet and visible light to map out and image objects. By illuminating the target with a laser beam, a 3D point cloud of the target and its surroundings can be generated. Three types of information can be obtained using LiDAR:
• Range to target (Topographic LiDAR)
• Chemical Properties of target (Differential Absorption LIDAR)
• Velocity of Target (Doppler LiDAR)
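The ranging principle behind all three is time of flight: the pulse travels to the target and back, so the one-way range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """One-way distance to a target from a pulse's round-trip travel time.

    The pulse travels out and back, so the range is c * t / 2.
    """
    return C * t_seconds / 2.0

# A return detected 1 microsecond after emission puts the target ~150 m away
r = range_from_round_trip(1e-6)
```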
History of LiDAR
The initial attempts, made in the early 1930s, were to measure air density profiles in the atmosphere by determining the scattering intensity from searchlight beams. LiDAR proper was first created in 1960, shortly after the invention of the laser, by combining laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. The first LiDAR application came in meteorology, where the National Center for Atmospheric Research used it to measure clouds.
LiDAR’s accuracy and usefulness were first demonstrated to the public in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to obtain an accurate topographical representation of the moon. The first commercial airborne LiDAR system was developed in 1995.
Accuracy of LiDAR
The accuracy of LiDAR technology is no longer in doubt. LiDAR is applied in a number of fields across many industries, most commonly in forestry and agriculture and, most recently, in autonomous cars. In driverless cars, for instance, manufacturers trust the accuracy of LiDAR to maintain order and avoid incidents on the road. Autonomous cars depend on laser pulses to measure the distance between the vehicle and any nearby object. A laser pulse is transmitted at the speed of light toward an object, reflected back to the transmitter, and the round-trip time between emission and detection of the reflected pulse is recorded.
This cycle is repeated many times, and the distance between the vehicle and the object can then be calculated. As the distance between the vehicle and the object shrinks, the vehicle’s onboard systems are able to decide whether or not to apply the brakes.
The accuracy of LiDAR is perhaps best illustrated by the speed guns often used by police. Speed guns employ LiDAR technology to determine the speed of approaching vehicles accurately. Previously, radar was used to acquire these speeds, but the accuracy of that system was always in doubt. Radar shoots out a short, high-intensity burst of high-frequency radio waves in a cone-shaped pattern. Officers who have been through the painfully technical 40-hour Doppler radar training course know it will detect a variety of objects within that cone pattern, such as the closest target, the fastest-moving target or the largest target. Officers are trained to differentiate and properly match targets down range to the radar readings they receive. Under most conditions, skilled users get good results with radar, and it is most effective on open stretches of roadway. But in more congested areas, locking radar onto a specific target is more difficult.
Experts hold that laser systems are more accurate than radar and other systems when it comes to traffic and speed analysis. A laser can point at a specific vehicle in a group, while radar cannot: a laser beam is a mere 18 inches wide at 500 feet, compared to a radar beam’s width of some 150 feet at the same distance.
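The beam-width comparison above is small-angle arithmetic: beam width grows roughly linearly with distance at a fixed divergence angle. A minimal sketch (the ~3-milliradian divergence is inferred from the quoted figures, not a manufacturer specification):

```python
def beam_width(divergence_rad, distance):
    # Small-angle approximation: width grows linearly with distance.
    return divergence_rad * distance

# The quoted figures imply a divergence of roughly 3 milliradians:
# 18 inches (1.5 ft) at 500 ft -> 1.5 / 500 = 0.003 rad.
divergence = 1.5 / 500.0
print(beam_width(divergence, 500.0))   # 1.5 feet, i.e. 18 inches
print(beam_width(divergence, 1000.0))  # 3.0 feet at double the range
```

This is why a laser gun can isolate one vehicle in a group: even at 1,000 feet its spot is only a few feet wide, while a radar cone covers several lanes.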
Free Topography Data Sources – Digital Elevation Models
1. Space Shuttle Radar Topography Mission (SRTM)
NASA needed only 11 days to capture the data behind the Shuttle Radar Topography Mission (SRTM) 30-meter digital elevation model. Back in February 2000, the Space Shuttle Endeavour launched with the SRTM payload.
Using two radar antennas in a single pass, it collected sufficient data to generate a digital elevation model using a technique known as interferometric synthetic aperture radar (InSAR). The C-band radar penetrated canopy cover to the ground reasonably well, but SRTM still struggled in steep terrain with foreshortening, layover, and shadow.
In late 2014, the United States government released the highest-resolution SRTM DEM to the public. This 1-arc-second global digital elevation model has a spatial resolution of about 30 meters. It covers most of the world with an absolute vertical height accuracy of less than 16 meters.
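The "1 arc-second ≈ 30 meters" figure can be checked directly: an arc-second is 1/3600 of a degree, and its ground size follows from Earth's radius. A quick sketch, assuming a spherical Earth of radius 6,371 km (a common approximation):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, spherical approximation

def arcsec_to_meters(latitude_deg):
    """Approximate ground size of one arc-second at a given latitude."""
    arcsec_rad = math.radians(1.0 / 3600.0)
    ns = EARTH_RADIUS_M * arcsec_rad                # north-south extent
    ew = ns * math.cos(math.radians(latitude_deg))  # east-west shrinks toward the poles
    return ns, ew

ns, ew = arcsec_to_meters(0.0)
print(round(ns, 1))  # → 30.9 (meters at the equator, matching "about 30 meters")
```

Note that the east-west spacing of an arc-second cell shrinks with latitude, which is why "30 meters" is only a nominal figure for these global grids.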
Below, shaded relief images of deeply eroded volcanic terrain in northeast Tanzania demonstrate the improved nature of the highest-resolution SRTM data now being released. The image at left has data samples spaced every 90 meters (295 feet); the image at right has samples spaced every 30 meters (98 feet).
2. ASTER Global Digital Elevation Model
ASTER GDEM is an easy-to-use, highly accurate DEM covering all the land on Earth, available to all users regardless of the size or location of their target areas.
Anyone can easily use the ASTER GDEM to display a bird’s-eye-view map or run a flight simulation, producing visually sophisticated maps. By using the ASTER GDEM as a platform, institutions specializing in disaster monitoring, hydrology, energy, environmental monitoring, and similar fields can perform more advanced analysis.
The ASTER Global Digital Elevation Model (ASTER GDEM) is a joint product developed and made available to the public by the Ministry of Economy, Trade, and Industry (METI) of Japan and the United States National Aeronautics and Space Administration (NASA). It is generated from data collected from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a spaceborne earth observing optical instrument.
The first version of the ASTER GDEM, released in June 2009, was generated using stereo-pair images collected by the ASTER instrument onboard Terra. ASTER GDEM coverage spans from 83 degrees north latitude to 83 degrees south, encompassing 99 percent of Earth’s landmass.
The improved GDEM V2 (released October 17, 2011) adds 260,000 additional stereo-pairs, improving coverage and reducing the occurrence of artifacts. The refined production algorithm provides improved spatial resolution, increased horizontal and vertical accuracy, and superior water body coverage and detection. The ASTER GDEM V2 maintains the GeoTIFF format and the same gridding and tile structure as V1, with 30-meter postings and 1 x 1 degree tiles.
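Since GDEM V2 is distributed as 1 x 1 degree GeoTIFF tiles, finding the right download usually means computing the tile that contains your coordinates. A small sketch, assuming tiles are indexed by their south-west corner with the ASTGTM2-style prefix used for the V2 release (check the exact filename pattern against the archive you download from):

```python
import math

def aster_gdem_v2_tile(lat, lon):
    """Name of the 1 x 1 degree ASTER GDEM V2 tile containing (lat, lon).

    Assumes tiles are labeled by the latitude/longitude of their
    south-west corner, e.g. ASTGTM2_N35E138; verify against the archive.
    """
    lat_sw = math.floor(lat)
    lon_sw = math.floor(lon)
    ns = "N" if lat_sw >= 0 else "S"
    ew = "E" if lon_sw >= 0 else "W"
    return f"ASTGTM2_{ns}{abs(lat_sw):02d}{ew}{abs(lon_sw):03d}"

print(aster_gdem_v2_tile(35.36, 138.73))  # Mt. Fuji → ASTGTM2_N35E138
```

Using the floor of the coordinates (rather than truncation) keeps the south-west-corner convention correct in the southern and western hemispheres.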
Version 2 shows significant improvements over the previous release. However, users are advised that the data contains anomalies and artifacts that will impede effectiveness for use in certain applications. The data are provided “as is,” and neither NASA nor METI/Japan Space Systems (J-spacesystems) will be responsible for any damages resulting from use of the data.
3. JAXA’s Global ALOS 3D World
ALOS World 3D is a 30-meter resolution digital surface model (DSM) captured by the Japan Aerospace Exploration Agency (JAXA). Recently, this DSM has been made available to the public.
The neat thing about it is that it is currently the most precise global-scale elevation dataset. It is built from imagery from the Advanced Land Observing Satellite “DAICHI” (ALOS), based on stereo mapping from the PRISM instrument.
JAXA had been processing about 100 digital 3D maps per month as part of its engineering validation activities for DAICHI. After research and development on fully automatic mass processing of map compilations, it reached the capacity to process 150,000 maps per month. Applying those results, JAXA began 3D map processing in March 2014, with the goal of completing the global 3D map by March 2016, commissioning the compilation work and service provision to NTT DATA Corporation and the Remote Sensing Technology Center (RESTEC) of Japan.
To popularize the use of the 3D map data, JAXA also planned to prepare a global digital elevation model (DEM) with a lower spatial resolution (about 30 meters under the plan at the time) and to publish it, free of charge, as soon as it was ready. JAXA expects this data from Japan to become the base map for global digital 3D maps, and to contribute to the expansion of satellite data utilization, industrial promotion, science and research activities, and the Group on Earth Observations.
4. Light Detection and Ranging (LiDAR)
You might think that finding LiDAR is a shot in the dark.
But it’s not anymore.
Slowly and steadily, we are moving towards a global LiDAR map.
With Open Topography topping the list at #1, we’ve put together a list of the 6 best LiDAR data sources available online for free.
Because nothing beats LiDAR for spatial accuracy. After you filter ground returns, you can build an impressive DEM from LiDAR.
And if you still can’t find anything in the link above, try your local or regional government. If you tell them what you are using it for, they sometimes hand out LiDAR for free.
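The "filter ground returns, then build a DEM" step mentioned above can be sketched in a few lines. This is a toy gridding example (real pipelines use tools such as PDAL or LAStools, with proper ground-classification filters and gap interpolation):

```python
from collections import defaultdict

def grid_ground_returns(points, cell_size):
    """Grid classified ground returns (x, y, z) into a coarse DEM.

    Each cell stores the mean elevation of the ground points falling
    inside it; production tools would also interpolate empty cells.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(z)
    return {key: sum(zs) / len(zs) for key, zs in cells.items()}

# Toy ground returns spanning two 1 m cells.
pts = [(0.2, 0.3, 10.0), (0.8, 0.7, 10.5), (1.5, 0.4, 12.0)]
dem = grid_ground_returns(pts, cell_size=1.0)
print(dem[(0, 0)], dem[(1, 0)])  # → 10.25 12.0
```

The dense, irregular point cloud becomes a regular elevation grid, which is what makes LiDAR-derived DEMs so much sharper than the 30-meter satellite products above.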