Excel Sheet to Make a Gantt Chart in Microsoft Excel 2013
This Gantt Chart spreadsheet is designed to help you create a simple project schedule. You only need to know some basic spreadsheet operations, such as how to insert, delete, copy and paste rows and cells. For more advanced uses, such as defining task dependencies, you will need to know how to enter formulas.
In this sheet, we will apply basic finite element techniques to solve general two-dimensional truss problems. The technique is a little more complex than that originally used to solve truss problems, but it allows us to solve problems involving statically indeterminate structures.
This spreadsheet calculates the capacity of a cantilever sheet pile in English units, using common US sheeting sections. The geotechnical worksheet computes earth pressures and embedment. The structural worksheet uses the BEAMANAL spreadsheet by Alex Tomanovich, P.E., together with the geotechnical analysis worksheet, to compute stresses and deflections.
CSA S16, Design of Steel Structures, of the Canadian Standards Association (CSA) governs the design of the majority of steel structures in Canada. Clause 27 of the standard includes the earthquake design provisions for seismic force resisting systems for which ductile seismic response is expected. Technical changes and new requirements have been incorporated in the 2009 edition of CSA S16, including: modified expected material properties for HSS members; consideration of protected zones; definitions of brace probable compressive and tensile resistances for capacity design and special requirements for braces intersecting columns between floors in concentrically braced steel frames; new seismic provisions for buckling restrained braced steel frames; design and detailing requirements for built-up tubular ductile links in eccentrically braced steel frames; changes to the requirements for ductile steel plate walls and for plate walls with limited ductility, including allowances for perforations and corner cut-outs in infill plates; and special provisions for steel frames of the Conventional Construction category above 15 m in height. These modifications were developed in parallel with the 2010 National Building Code of Canada (NBCC). The paper summarizes the new CSA S16-09 seismic design requirements with reference to NBCC 2010.
Basic capacity design provisions are given in CSA S16 to ensure that a minimum strength hierarchy exists along the lateral load path, such that the intended ductile energy dissipation mechanism is mobilized and the integrity of the structure is maintained under strong ground shaking. In the design process, the yielding components of the SFRS may be oversized compared to the specified design seismic forces, as would be the case when drift limits, minimum member sizes or non-seismic load combinations govern the design. In this case, it is specified both in NBCC 2010 and CSA S16 that the design forces in capacity-protected elements need not exceed those induced by a storey shear determined with RdRo = 1.3. This upper bound essentially corresponds to the elastic seismic force demand reduced by 1.3, recognizing that non-yielding components will likely possess minimum overstrength. The 1.3 reduction factor only applies if the governing failure mode is ductile; RdRo = 1.0 must be used otherwise.
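As a rough illustration of this upper bound, here is a minimal Python sketch with hypothetical force values: it caps the design force for a capacity-protected element at the elastic demand divided by 1.3 when the governing failure mode is ductile. The function name and numbers are invented for illustration and are not taken from CSA S16.

```python
# Illustrative sketch (hypothetical numbers): the design force for a
# capacity-protected element need not exceed the elastic seismic demand
# divided by RdRo = 1.3, provided the governing failure mode is ductile.

def capacity_protected_design_force(probable_demand, elastic_demand,
                                    ductile_failure_mode=True):
    """Return the governing design force for a capacity-protected element.

    probable_demand -- force induced by the probable resistance of the
                       yielding members (capacity design demand)
    elastic_demand  -- force corresponding to the elastic (unreduced)
                       seismic storey shear
    """
    rd_ro = 1.3 if ductile_failure_mode else 1.0
    upper_bound = elastic_demand / rd_ro
    return min(probable_demand, upper_bound)

# Example with made-up forces in kN: the 1/1.3 cap governs here.
print(capacity_protected_design_force(probable_demand=950.0,
                                      elastic_demand=1040.0))  # -> 800.0
```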
This file contains formatted spreadsheets to perform the following calculations:
– Section 1: Area of equivalent diagonal brace for plate wall analysis (Walls).
– Section 2: Design of link in eccentrically braced frames (EBF).
– Section 3: Design of Bolted Unstiffened End Plate Connection (BUEP).
– Section 4: Design of Bolted Stiffened End Plate Connection (BSEP).
– Section 5: Design of Reduced Beam Section Connection (RBS).
– Section 6: Force reduction factor for friction-damped systems (Rd_friction).
Additionally, this file contains the following tables:
– Valid beam sections for moment-resisting connections (B_sections).
– Valid column sections for moment-resisting connections (C_sections).
– Valid bolt types for moment-resisting connections (Bolts).
– Database of properties of all sections (Sections Table).
Core-walls have been the most popular seismic force resisting system in western Canada for many decades, and recently have become popular on the west coast of the US for high-rise buildings up to 600 ft (180 m) high. Without the moment frames that have traditionally been used in high-rise concrete construction in the US, the system offers the advantages of lower cost and more flexible architecture.
In the US, such buildings are currently being designed using nonlinear response history analysis (NLRHA) at the Maximum Considered Earthquake (MCE) level of ground motion. In Canada, these buildings are designed using only linear dynamic (response spectrum) analysis at the MCE hazard level combined with various prescriptive design procedures.
This paper presents the background to some of the prescriptive design procedures that have recently been developed to permit the safe design of high-rise core-wall buildings using only the results of response spectrum analysis (RSA).
Within the series of European standards commonly known as “Eurocodes”, EN 1992 (Eurocode 2, in the following also referred to as EC2) deals with the design of reinforced concrete structures – buildings, bridges and other civil engineering works. EC2 allows the calculation of action effects and of resistances of concrete structures subjected to specific actions, and contains all the prescriptions and good practices for properly detailing the reinforcement.
In this spreadsheet, the principles of Eurocode 2, Part 1-1 are applied to the design of a core wall.
An essential spreadsheet for steel design. Thanks to its layout, easy input and clear output, it reduces the time required to design steel members. It includes a lateral torsional buckling check, making it a comprehensive and important tool for structural engineers.
Features:
– A clear and easy-to-read output (all on a single page);
– Quick summary of utilization factors;
– Change of steel grade: S275, S355, S460;
– Supported steel sections: UC, UB, PFC;
– Design for Lateral Torsional Buckling (LTB) based on effective length;
– Loading options: UDL, 2x Partial UDL, 2x Point Load;
– ‘Live’ loading diagram;
– Switch between deflection for Dead Load + Imposed Load or Imposed Load only;
– Changeable safety factors;
– Design based on British Standard BS 5950-1:2000.
Mix design plays an important role in civil construction projects. To obtain accurate quantities for any construction site, this user-friendly concrete mix design spreadsheet is a valuable tool. It supplies the mix design quantities for your construction site.
Concrete mix design refers to the technique of choosing suitable ingredients of concrete and establishing their relative proportions so as to produce a concrete of the required strength, durability and workability as economically as possible.
The following requirements form the basis for choosing and proportioning the mix ingredients (a minimal proportioning sketch follows the list):
– The minimum compressive strength required from structural considerations.
– Adequate workability necessary for full compaction with the available compacting equipment.
– Maximum water-cement ratio and/or maximum cement content to give adequate durability for the specific site conditions.
– Maximum cement content to avoid shrinkage cracking due to temperature cycles in mass concrete.
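As a rough illustration of how the limiting water-cement ratio and the cement-content limits interact, here is a minimal Python sketch with hypothetical numbers. It is only a sketch of the arithmetic, not a substitute for a full code-based mix design procedure such as ACI 211 or IS 10262.

```python
# Illustrative proportioning arithmetic with hypothetical values.

def cement_content(water_kg_per_m3, max_w_c_ratio,
                   min_cement=300.0, max_cement=450.0):
    """Cement content (kg/m3) from the free-water demand and the limiting
    water-cement ratio, checked against durability and shrinkage limits."""
    cement = water_kg_per_m3 / max_w_c_ratio
    if cement < min_cement:
        cement = min_cement  # durability floor on cement content
    if cement > max_cement:
        raise ValueError("Cement content exceeds the shrinkage-cracking limit; "
                         "revise the mix (e.g. reduce the water demand).")
    return cement

# Example: 185 kg/m3 of free water and a limiting w/c ratio of 0.50
print(cement_content(185.0, 0.50))  # -> 370.0 kg/m3
```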
With the recent developments in the drone surveying space, there have been a lot of myths and misconceptions around UAV LiDAR and photogrammetry. In fact, these two technologies have as many differences as similarities. It is therefore essential to understand that they offer significantly different products, generate different deliverables and require different capture conditions, but most importantly, they should be used for different use cases.
There is no doubt that, compared to traditional land surveying methods, both technologies offer results much faster and with a much higher data density (both techniques measure all visible objects with no interpolation). However, the selection of the best technology for your project depends on the use case, environmental conditions, delivery terms and budget, among other factors. This post aims to provide a detailed overview of the strengths and limitations of LiDAR and photogrammetry to help you choose the right solution for your project.
HOW DO BOTH TECHNOLOGIES WORK?
Let’s start from the beginning and have a closer look into the science behind the two technologies.
LiDAR, which stands for Light Detection and Ranging, is a technology based on laser beams. It shoots out laser pulses and measures the time it takes for the light to return. It is a so-called active sensor, as it emits its own energy rather than detecting energy emitted from objects on the ground.
Photogrammetry, on the other hand, is a passive technology based on images that are transformed from 2D into 3D cartometric models. It uses the same principle that human eyes or 3D videos use to establish depth perception, allowing the user to view and measure objects in three dimensions. The limitation of photogrammetry is that it can only generate points based on what the camera sensor can detect under ambient illumination.
In a nutshell, LiDAR uses lasers to make measurements, while photogrammetry is based on captured images that can be processed and combined to enable measurements.
OUTPUTS OF LIDAR AND PHOTOGRAMMETRY SURVEYS
The main product of a LiDAR survey is a 3D point cloud. The density of the point cloud depends on the sensor characteristics (scan frequency and repetition rate), as well as the flight parameters. Assuming that the scanner is pulsing and oscillating at a fixed rate, the point cloud density depends on the flight altitude and speed of the aircraft.
Various use cases might require different point cloud parameters, e.g., for power line modeling you might want a dense point cloud with over 100 points per square meter, while for creating a Digital Terrain Model of a rural area 10 pts/m2 could be good enough.
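To make the dependence on flight parameters concrete, here is a simple Python estimate of single-pass point density from pulse rate, flight altitude, speed and scanner field of view. All numbers are hypothetical, and real mission planning also accounts for the scan pattern, overlap between lines and multiple returns per pulse.

```python
import math

def point_density(pulse_rate_hz, altitude_m, speed_m_s, fov_deg):
    """Approximate points per square metre for a single pass."""
    # Swath width on the ground for a symmetric across-track scan.
    swath_width_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
    area_per_second_m2 = swath_width_m * speed_m_s  # ground area scanned each second
    return pulse_rate_hz / area_per_second_m2

# Example: 300 kHz pulse rate, 60 m above ground, 8 m/s, 70-degree field of view
print(round(point_density(300_000, 60.0, 8.0, 70.0)))  # ~ 446 pts/m2
```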
It is also important to understand that a LiDAR sensor only samples positions, without RGB, creating a monochrome dataset which can be challenging to interpret. To make it more meaningful, the data is often visualized using false color based on reflectivity or elevation.
It is possible to overlay color on the LiDAR data in post-processing, based on images or other data sources; however, this adds some complexity to the process. Color can also be added based on classification (assigning each point to a particular type/group of objects, e.g., trees, buildings, cars, ground, electric wires).
Photogrammetry, on the other hand, can generate full-color 3D and 2D models (in various light spectra) of the terrain that are easier to visualize and interpret than LiDAR data. The main outputs of photogrammetric surveys are raw images, orthophotomaps, Digital Surface Models and 3D point clouds created by stitching and processing hundreds or thousands of images. The outputs are very visual, with a pixel size (or Ground Sampling Distance) even below 1 cm.
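The Ground Sampling Distance follows directly from the camera pixel size, focal length and flight altitude. The short Python sketch below uses hypothetical values roughly typical of a small survey drone; check your own camera specifications before planning a flight.

```python
def gsd_cm(sensor_pixel_size_um, focal_length_mm, altitude_m):
    """Ground Sampling Distance in centimetres per pixel."""
    pixel_size_m = sensor_pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return (pixel_size_m * altitude_m / focal_length_m) * 100.0

# Example: 2.4 um pixels, 8.8 mm lens, flying 60 m above ground
print(round(gsd_cm(2.4, 8.8, 60.0), 2))  # ~ 1.64 cm/px
```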
With that in mind, photogrammetry seems to be the technology of choice for use cases where visual assessment is required (e.g., construction inspections, asset management, agriculture). LiDAR, on the other hand, has certain characteristics that make it important for particular use cases.
Laser beams as an active sensor technology can penetrate vegetation. LiDAR is able to get through gaps in the canopy and reach the terrain and objects below, so it can be useful for generating Digital Terrain Models.
LiDAR is also particularly useful for modeling narrow objects such as power lines or telecom towers, as photogrammetry might not recognize narrow and poorly visible objects. Besides, LiDAR can work in poor lighting conditions and even at night. Photogrammetry point clouds are more visual (each pixel has RGB), but often with generalized details, so photogrammetry might be appropriate for objects where a lower level of geometric detail is acceptable but visual interpretation is essential.
ACCURACY
Let’s start by defining what accuracy is. In surveying, accuracy always has two dimensions: relative and absolute. Relative accuracy is a measure of how objects are positioned relative to each other. Absolute accuracy refers to the difference between the location of the objects and their true position on the Earth (this is why any survey can have high relative but low absolute accuracy).
LiDAR is one of the most accurate surveying technologies. This is particularly the case for terrestrial lasers where the sensor is positioned on the ground, and its exact location is measured using geodetic methods. Such a setup allows achieving sub-centimeter level accuracies.
Achieving a high level of accuracy with aerial LiDAR is, however, much more difficult, as the sensor is on the move. This is why an airborne LiDAR sensor is always coupled with an IMU (inertial measurement unit) and a GNSS receiver, which provide information about the position, rotation and motion of the scanning platform. All of these data are combined on the fly and allow achieving high relative accuracy (1-3 cm) out of the box. Achieving high absolute accuracy requires adding 1-2 Ground Control Points (GCPs) and several checkpoints for verification purposes. In some cases, when additional GNSS positioning accuracy is needed, one can use advanced RTK/PPK UAV positioning systems.
Photogrammetry also allows achieving 1-3 cm level accuracies; however, it requires significant experience to select appropriate hardware and flight parameters and to process the data appropriately. Achieving high absolute accuracy requires using RTK/PPK technology with additional GCPs, or can be based purely on a large number of GCPs. Nonetheless, using a $500 DJI Phantom-class drone with several GCPs, you can easily achieve 5-10 cm absolute accuracy for smaller survey areas, which might be good enough for most use cases.
DATA ACQUISITION, PROCESSING, AND EFFICIENCY
There are also significant differences in acquisition speed between the two. In photogrammetry, one of the critical parameters required to process the data accurately is image overlap, which should be at the level of 60-90% (front and side), depending on the terrain structure and the hardware applied. A typical LiDAR survey requires only 20-30% overlap between flight lines, which makes the data acquisition operations much faster.
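The effect of overlap on acquisition effort can be illustrated with a simple calculation: flight-line spacing shrinks linearly as side overlap grows, so more lines are needed to cover the same site. The Python sketch below uses hypothetical site and footprint dimensions.

```python
import math

def flight_lines_needed(site_width_m, footprint_width_m, side_overlap):
    """Number of parallel flight lines needed to cover a site of the given width."""
    line_spacing_m = footprint_width_m * (1.0 - side_overlap)
    return math.ceil(site_width_m / line_spacing_m)

# Example: 1 km wide site, 150 m ground footprint per image (or LiDAR swath)
print(flight_lines_needed(1000.0, 150.0, 0.70))  # photogrammetry, 70% side overlap -> 23
print(flight_lines_needed(1000.0, 150.0, 0.25))  # LiDAR, 25% overlap between lines -> 9
```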
Additionally, photogrammetry requires more Ground Control Points to achieve LiDAR-level absolute accuracy. Measuring GCPs typically requires traditional land surveying methods, which means additional time and cost.
Moreover, LiDAR data processing is very fast: the raw data require only a short calibration step (5-30 min) to generate the final product. In photogrammetry, data processing is the most time-consuming part of the overall process. In addition, it requires powerful computers that can handle operations on gigabytes of images. Processing takes, on average, 5 to 10 times longer than the data acquisition in the field.
On the other hand, for many use cases such as power line inspections, LiDAR point clouds require additional classification which might be very labor intensive and often needs expensive software (e.g., TerraScan).
COST
When we look at the overall cost of LiDAR and photogrammetry surveys, there are multiple cost items to consider. First of all, the hardware. UAV LiDAR sensor sets (scanner, IMU and GNSS) cost between $50,000 and $300,000, but for most use cases the high-end devices are preferable. When you invest so much in a sensor, you don’t want to crash it accidentally. With that in mind, most users spend an additional $25,000-$50,000 on an appropriate UAV platform. It all adds up to $350,000 for a single surveying set, the equivalent of five Tesla Model S cars. Quite pricey.
For photogrammetry, all you need is a camera-equipped drone, and these tend to be much cheaper. In the $2,000-$5,000 range, you can find a wide selection of professional multirotor devices such as the DJI Inspire. At the $5,000-$20,000 price level, you can buy RTK/PPK-enabled sets such as the DJI Matrice 600, or fixed-wing devices such as the senseFly eBee and PrecisionHawk Lancaster.
Another cost item is processing software. In the case of LiDAR, it is typically included for free by the sensor manufacturer. However, post-processing, e.g., point cloud classification, might require third-party software such as TerraScan, which costs $20,000-$30,000 for a single license. Photogrammetry software prices are closer to the level of $200 a month per license.
Obviously, another important factor that influences the cost of the service is labor and time. Here, LiDAR has a significant advantage over photogrammetry, as it requires significantly less time not only to process the data but also to lay out and mark GCPs.
Overall, depending on the use case and business model, photogrammetry services are typically cheaper than LiDAR, simply because the investment in LiDAR hardware has to be amortized. However, in some cases the efficiency gains that come with LiDAR can compensate for the sensor cost.
CONCLUSIONS
When comparing LiDAR and photogrammetry, it is key to understand that both technologies have their applications as well as limitations, and in the majority of use cases they are complementary. Neither technology is better than the other, and neither will cover all use cases.
LiDAR should certainly be used for surveying narrow structures such as power lines or telecom towers and for mapping areas below the tree canopy. Photogrammetry will be the best option for projects that require visual data, e.g., construction inspections, asset management and agriculture. For many projects, both technologies can bring valuable data (e.g., mines or earthworks), and the choice of method depends on the particular use case as well as time, budget and capturing conditions, among other factors.
LiDAR and photogrammetry are both powerful technologies if you use them the right way. It is clear that with decreasing prices of hardware and software, they will become more and more accessible. Both technologies are still in their early days when it comes to UAV applications, and in the following years we will undoubtedly witness further disruptions (especially when it comes to hardware prices and machine learning software automation). Stay tuned. We will keep you posted.
LiDAR is an acronym for Light Detection and Ranging. It is an active remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light (a minimal time-of-flight sketch follows the list below). It is similar to RADAR, but instead of using radio signals it uses laser pulses. LiDAR relies on infrared, ultraviolet and visible light to map out and image objects. By illuminating the target with a laser beam, a 3D point cloud of the target and its surroundings can be generated. Three types of information can be obtained using LiDAR:
• Range to target (Topographic LiDAR)
• Chemical Properties of target (Differential Absorption LIDAR)
• Velocity of Target (Doppler LiDAR)
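The range measurement mentioned above is a time-of-flight calculation: the distance to the target is half the round-trip travel time of the pulse multiplied by the speed of light. A minimal Python sketch, with an illustrative return time:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_time_of_flight(round_trip_seconds):
    """Distance to the target in metres for a single return pulse."""
    # Divide by two because the pulse travels to the target and back.
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# Example: a return detected 400 nanoseconds after the pulse was emitted
print(round(range_from_time_of_flight(400e-9), 2))  # ~ 59.96 m
```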
History of LiDAR
Initial attempts were made in the early 1930s to measure air density profiles in the atmosphere by determining the scattering intensity from searchlight beams. LiDAR was first created in 1960, shortly after the invention of the laser. The very first LiDAR systems combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. The first LiDAR application came in meteorology, where the National Center for Atmospheric Research used it to measure clouds.
LiDAR’s accuracy and usefulness were first demonstrated to the public in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to obtain an accurate topographical map of the Moon. The first commercial airborne LiDAR system was developed in 1995.
Accuracy of LiDAR
The accuracy of LiDAR technology is no longer in doubt. LiDAR is applied in a number of fields across all industries in the world. The most common applications of LiDAR are in forestry and agriculture and, most recently, in autonomous cars. For driverless cars, manufacturers trust the accuracy of the technology to keep the vehicle safe and avoid incidents on the road. Autonomous cars depend on laser pulses to measure the distance between the vehicle and any nearby vehicle. Laser pulses are transmitted toward an object at the speed of light; the pulses are reflected back to the sensor, and the time between emission and detection of the return is recorded.
This cycle is repeated a number of times, and the distance between the vehicle and the object can then be calculated. As the distance between the vehicle and the object decreases, the vehicle’s onboard systems can decide whether or not to apply the brakes.
The accuracy of LiDAR is perhaps best illustrated by the speed guns often used by police. Speed guns employ LiDAR technology to accurately determine the speed of approaching vehicles. Previously, radar was used to acquire these speeds, but the accuracy of that system was always in doubt. Radar shoots out a short, high-intensity burst of high-frequency radio waves in a cone-shaped pattern. Officers who have been through the painfully technical 40-hour Doppler radar training course know it will detect a variety of objects within that cone pattern, such as the closest target, the fastest-moving target or the largest target. Officers are trained to differentiate and properly match targets down range to the radar readings they receive. Under most conditions, skilled users get good results with radar, and it is most effective on open stretches of roadway. But in more congested areas, locking radar onto a specific target is more difficult.
Experts opine that laser systems are more accurate than other systems, including radar, when it comes to providing traffic and speed analysis. A laser can be pointed at a specific vehicle in a group, while radar cannot. A laser beam is a mere 18 inches wide at 500 feet, compared to a radar beam’s width of some 150 feet.
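As a quick sanity check of those figures, a spread of 18 inches at 500 feet corresponds to a beam divergence of roughly 3 milliradians. The small Python sketch below reproduces that arithmetic; the 1,000 ft example is hypothetical.

```python
def divergence_mrad(beam_width_ft, range_ft):
    """Full beam divergence in milliradians (small-angle approximation)."""
    return (beam_width_ft / range_ft) * 1000.0

def beam_width_ft(divergence_mrad_value, range_ft):
    """Beam width at a given range for the stated divergence."""
    return divergence_mrad_value / 1000.0 * range_ft

print(round(divergence_mrad(1.5, 500.0), 1))  # 18 in = 1.5 ft at 500 ft -> 3.0 mrad
print(round(beam_width_ft(3.0, 1000.0), 1))   # ~ 3.0 ft wide at 1000 ft
```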
Collaboration is key: How BIM helps a project from concept to operations
Talking about collaboration and delivering a truly collaborative project through the use of BIM are two very different things. Ryan Simmonds of voestalpine Metsec Framing discusses the keys to success
At voestalpine Metsec, we recognise the fact that BIM is more than just a 3D modelling tool for design. BIM, at its core, and done correctly, is an integrated management system that brings 3D design together with onsite construction and information, enabling a handover that allows the client’s facility to be managed operationally. Metsec was the first company to achieve the BIM Kitemark for design and construction, and also for BIM objects.
Within BIM sit key elements for success. Coordination with other team members, or those working on a project, is crucial to ensure nothing is missed, as well as making sure there are no unnecessary duplications. Cooperation is another important area, and one where teams can often fall down through a lack of communication or sharing of vital information.
Together, cooperation and coordination contribute to true collaboration, with all parties working together to achieve a single goal, and BIM has proved to be an essential tool for enabling this approach.
Benefits of collaboration in construction projects
Collaboration is a method that the construction industry has historically struggled to adopt, but one that has been consistently demonstrated to greatly benefit the industry as a whole.
Collaborating on a project from the initial stages brings numerous benefits, including reducing time delays and the need for contingency funds. The appointed design team, contractors, manufacturers and installers all working collaboratively means designs, issues, priorities and construction methods are all agreed upon in the initial stages and fully understood by all parties.
While the theory of collaboration can seem abstract, it is a very real requirement for successful projects. If co-dependent elements of a project are executed in silos with no communication or coordination, projects can hit stumbling blocks.
For example, if the installer of the framing solution on a project has not communicated with the main contractor as to when they are required onsite, the project can either be delayed as the installer is not ready, or alternatively they’ll turn up onsite but not be able to gain access and begin the installation, resulting in wasted days and money.
Similarly, if the framing manufacturer and installer have not cooperated and communicated, the project could be delivered before it’s required, taking up valuable space onsite, or be delayed – again resulting in lost days.
BIM as a collaborative method
However, collaboration needs to go deeper, and this is where Building Information Modelling (BIM) is vital. A structured, measured and comprehensive approach to team working, BIM has a fixed set of processes and procedures to guide users and participants in how best to employ collaborative methods. Design coordination is an in-depth and involved process, and BIM’s regular data exchanges ensure that the whole team is working on the same, and most up-to-date, model.
At its heart, BIM is the process of designing, constructing or operating a building, infrastructure or landscape asset using electronic information. In practice, this means that a project can be designed and built digitally, using datasets and images, even before the first spade goes in the ground.
Detecting conflicts at early stages means they are addressed and resolved promptly, while still in the planning stages. Without BIM, issues are often only picked up at major project milestones, at which point they can be difficult and expensive to rectify.
The objective of BIM is to satisfy the three components of a successful project, namely time, cost and quality, by managing the project using an efficient, collaborative and reliable method of work.
Sharing a 3D model with all parties communicates the planned end result in a clear, concise and fully comprehensible way, helping the full project team to understand the requirements and see what they are working towards. The information held within the model can be extracted in the form of COBie files, which is also essential. Within these, if done to Level 2 standard, the manufacturer will host the correct file extensions and product parameters to allow asset management in future years.
However, another crucial element of BIM is the promotion, and adoption, of collaborative working. The digital designs, including product parameters, are shared with all parties to outline the work planned and give everyone the opportunity to fully understand what is proposed and all the requirements, including specifications such as fire and acoustic data. The BIM Execution Plan (BEP) is a critical document as it underpins project integration and is a written plan to bring together all of the tasks, processes and related information.
The BEP should be agreed at the outset and defines what BIM means for the project. It outlines the standards being adopted, outputs required, when these should be supplied and in what format, plus any supporting documentation.
As a working document, the BEP is regularly reviewed and evolves throughout the project, ensuring design teams, suppliers, manufacturers and all other stakeholders have all the relevant information, promoting collaboration between all parties.
The BIM Implementation Plan (BIP) is the blueprint for integrating BIM into an organisation’s working practices. This should align to the objectives and aspirations of the organisation, its business partners, its skill base, levels of investment and the nature and scale of projects that it wishes to undertake now and in the future.
Hosting both of these documents in a centrally coordinated Common Data Environment (CDE) means they can be updated, accessed or extracted at any time throughout the project. Adding all other BIM documents, including the 3D drawings, gives all of those involved in the overall project full visibility and input, promoting a collaborative approach throughout.
Conclusion
Talking about collaboration and delivering a fully collaborative project through the use of BIM are two very different things, and will have very different outcomes when it comes to a construction project.
While there have been moves to adopt a more collaborative approach, using BIM ensures that all stakeholders are consulted at all stages throughout the project and that the most up-to-date documents are hosted in one central location, reducing errors in file versions or timing plans.
In addition, the use of BIM means that a design and build is fixed from a certain, agreed point onwards, removing the need for additional contingency budget or project delays due to unplanned changes caused by a lack of communication, coordination, cooperation or collaboration.