I was curious if anyone knew whether I can feed lidar data into WebODM to stitch point cloud data?
Not directly, and not with WebODM. Using ODM you could run the pipeline until the odm_filterpoints step (--end-with odm_filterpoints), then replace the point cloud in dataset/odm_filterpoints with one generated from LIDAR (it should already be geographically aligned, and the vertices need to be offset by odm_georeferencing/coords.txt: subtract the X/Y offsets on the second line of coords.txt from every point's X/Y coordinates), then resume the pipeline with --rerun-from odm_meshing. The LIDAR point cloud will then be used for orthorectification, DSM generation, etc.
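In case it helps, here's a rough sketch of that offset step using PDAL's Python bindings (the input name lidar.las and the output name point_cloud.ply are assumptions; check what your ODM version actually writes in odm_filterpoints):

```python
# Hypothetical sketch: shift a LIDAR cloud by the ODM georeferencing
# offset and drop it where odm_filterpoints expects its output.
# Assumptions: PDAL Python bindings installed (pip install pdal),
# input file "lidar.las", output file "point_cloud.ply" (verify the
# filename your ODM version uses in odm_filterpoints).
import json
import pdal

# coords.txt: line 1 holds the CRS, line 2 the E/N offsets
with open("odm_georeferencing/coords.txt") as f:
    f.readline()  # skip the CRS line
    east, north = map(float, f.readline().split())

pipeline = {
    "pipeline": [
        "lidar.las",
        {
            # 4x4 row-major matrix translating by (-east, -north, 0)
            "type": "filters.transformation",
            "matrix": f"1 0 0 {-east}  0 1 0 {-north}  0 0 1 0  0 0 0 1",
        },
        {"type": "writers.ply", "filename": "odm_filterpoints/point_cloud.ply"},
    ]
}
pdal.Pipeline(json.dumps(pipeline)).execute()
# then resume ODM with: --rerun-from odm_meshing
```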
It’s an interesting use case. This wouldn’t have been possible until a few weeks ago, when we switched to a geographic CRS for the point cloud. We could look into adding this as a fully automated feature (interested in contributing a PR?).
Also, do you have a LIDAR dataset you could share with associated aerial images? I’ve heard this feature request often, but never got any data to try it out myself.
Yes, I have a direct use case and can contribute a dataset. It is a .las file.
We have a dataset for you:
Photos - https://drive.airborneinnovationsgroup.com/d/f/589753662253868535
LAS - https://drive.airborneinnovationsgroup.com/d/f/594326327480329437
In the photos folder there are singles shot at both nadir and oblique angles, as well as a zip file with them already stitched into a GeoTIFF ortho.
I’ve started looking into this dataset, btw. Exactly what I was looking for.
Can you share some insights on how the Lidar point cloud is georeferenced? The trickiest part of getting a hybrid mechanism to work is alignment between the photogrammetry results and those obtained from Lidar.
Edit: PDAL to the rescue: https://pdal.io/stages/filters.cpd.html#filters-cpd
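For what it's worth, a rough sketch of what that could look like (assuming a PDAL build that includes the CPD plugin; both filenames are placeholders):

```python
# Hypothetical sketch: register the LIDAR cloud against the
# photogrammetry cloud with PDAL's Coherent Point Drift filter
# (filters.cpd). Requires PDAL built with the CPD plugin; the
# input filenames are placeholders.
import json
import pdal

pipeline = {
    "pipeline": [
        "photogrammetry_cloud.ply",  # fixed reference cloud
        "lidar.las",                 # moving cloud, warped onto the first
        {"type": "filters.cpd", "method": "rigid"},
        {"type": "writers.las", "filename": "lidar_aligned.las"},
    ]
}
pdal.Pipeline(json.dumps(pipeline)).execute()
```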
Haha! I was about to suggest PDAL.
Also, I'm wondering if you could share information about the methodology that was used to capture both the images and the Lidar scan. The two seem to have been captured at different times? Certain items, such as cars, are missing from the Lidar dataset.
So both the photos and the LiDAR pull corrections via NTRIP, each separately, on the drone. Both should be WGS84. I would need to go back and look at the data, but they may be from different days. That could be the issue.
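For what it's worth, this is roughly how I'd verify which CRS actually ended up in the LAS header (a sketch assuming PDAL's Python bindings; lidar.las is a placeholder):

```python
# Hypothetical sketch: print the CRS stored in a LAS file, to verify
# the "both should be WGS84" assumption. Roughly equivalent to running
# `pdal info lidar.las --metadata` on the command line.
import json
import pdal

p = pdal.Pipeline(json.dumps({"pipeline": ["lidar.las"]}))
p.execute()
# metadata is a JSON string in older python-pdal releases, a dict in newer ones
meta = p.metadata if isinstance(p.metadata, dict) else json.loads(p.metadata)
srs = meta["metadata"]["readers.las"].get("srs", {})
print(srs.get("wkt") or "no CRS stored in the header")
```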
Thanks for the details!
It would be super-cool to have a dataset that has been captured within a short time span, or even at the same time. Not having a consistent elevation model would lead to some artifacts (although minor ones) in the orthorectification process.
Should be flying some this weekend.
Looking forward to it!!!
Apologies… I totally forgot about sending this. So I flew a different area. Both the lidar and photos were taken at the exact same time.
Awesome!
Any luck with that data?
I haven’t had the time to look at this yet; hoping to find some time in the upcoming weeks.
I hope this is not off topic: is there any way to utilize iPhone 12 images together with its lidar data? Or to strip out the lidar model?
I found a way to hook up my iPhone 12 to my Mavic 2 Zoom!
The lidar sensor on the iPhone isn’t strong enough to register points at the altitude you would be flying with the drone. If I had to guess, it would likely only be able to scan something within ten feet at most.
Any luck?
Still haven’t had time to look at this yet.