Georeference ODM with LiDAR

I was curious if it is possible to use a LiDAR-derived point cloud or DEM (DTM, DSM) to help augment the ODM process? I'm still fairly new to ODM in general, but it seems like the MVS and DEM stages are fairly heavy processes.

This topic might be a bit philosophical or even theoretical… but co-collected, directly referenced imagery + LiDAR is becoming more commonplace for UAV and manned acquisition alike. All these systems are tightly coupled and boresighted/calibrated pretty exhaustively.

Lastly, it is my understanding that ODM only produces DSM-based orthophotos (true orthos)? Or is there a way to generate non-true DTM-based orthos as well?

My apologies if this is a topic previously discussed, but my searches didn’t pull up much. Thanks!



I am sorry not to provide you with a solution, but I am working with the exact same issue.

My current workflow is using Pix4D, which supports importing a point cloud for DSM generation. I still need to run the calibration steps for camera position (makes sense, as even with a great IMU and RTK GNSS, the camera positions will not match 100%). (I import image position and orientation into Pix4D, but I believe ODM also supports this now.) My processing time is reduced to 1/3 (or less, depending on the level of densification) of a traditional run.

That makes sense… we use Correlator in the same way you described Pix4D. We can import a DTM or point cloud into Correlator, and this makes ortho generation very quick. We use direct georeferencing because we are using corrected PPK EO with omega-phi-kappa.

This workflow works great if the goal is simply a 2D product. Obviously Correlator and Pix4D are much more limited in other ways, hence the attraction to ODM. But currently Correlator is orders of magnitude faster, unfortunately.

Yes, I agree. I did use Correlator for a period, but I was happier with the results from Pix4D, and the processing time was not that different. Correlator supports more photos, but as long as my projects are fewer than 1,000–1,500 pictures (24 MP), Pix4D works fine.

I use direct georeferencing, including the omega-phi-kappa from the post-processed trajectory of an Applanix APX-20. Pix4D will still adjust my camera positions and orientations slightly, in particular the yaw.
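For anyone unfamiliar with omega-phi-kappa: the three angles define the camera's rotation as sequential rotations about the X, Y, and Z axes. A minimal sketch, assuming the common photogrammetric convention R = Rx(ω)·Ry(φ)·Rz(κ) (conventions vary by vendor, so check your sensor's documentation):

```python
import math

def rotation_from_opk(omega, phi, kappa):
    """Build a 3x3 rotation matrix from omega-phi-kappa angles (radians).

    Assumes the convention R = Rx(omega) @ Ry(phi) @ Rz(kappa) often used
    in photogrammetry; verify against your trajectory export before use.
    """
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    rx = [[1, 0, 0], [0, co, -so], [0, so, co]]   # rotation about X by omega
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]   # rotation about Y by phi
    rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]   # rotation about Z by kappa

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(rx, matmul(ry, rz))
```

This is why a small residual adjustment in yaw (kappa) shows up even with a good post-processed trajectory: kappa is the innermost rotation, so any boresight error there maps directly into the image orientation.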

I found this in another thread:
“If the LIDAR point cloud is in the same coordinate reference frame, you could technically stop the ODM process after the odm_filterpoints stage, replace the point cloud (from multiview stereo, stored in the odm_filterpoints directory ) with your LIDAR scan, then resume the process from odm_meshing.”

I have not tested it, but this is close to what I was hoping for, except I would like to omit the point cloud densification step to save (the majority of the) processing time. Assuming the GCP step has already been done at this stage, this might work. I normally end up 0–30 cm off without GCPs, so I would like to close this gap before orthorectification starts.

And yes, we will get nothing but the orthomosaic using the Pix4D + Lidar approach.


That is really interesting and makes sense…

We are using an AP-20, so virtually the same performance as your APX-20. Curious what you like better about Pix4D? I usually get better products out of Correlator, but I haven't used Pix4D in quite some time.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.