After a 3D modelling mission flown over the course of two days, I got a fairly good but distorted 3D model.
WebODM seemed to create two models, a small one below the first. The smaller one was only partial and appeared to use only the images captured on day 2 from the rear of the property.
I can only assume it's due to differences in brightness and/or shadows, which made it unable to link them to the first day's images.
Hopefully you can view the model via the link below to see what I mean.
I took 294 images: 232 were taken from the same height using an autonomous pre-planned flight, and the remaining 62 in free flight at multiple heights at the front and rear of the property. Both missions were flown using Pix4D Capture. The processing settings were the default 3D Model preset, as follows:
26/09/2022, 01:07:45
Processing Node: node-odm-1 (auto)
Options: auto-boundary: true, mesh-octree-depth: 12, use-3dmesh: true, pc-quality: high, mesh-size: 300000
Average GSD: 0.55 cm
Area: 17,544.66 m²
Reconstructed Points: 42,909,734
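For reference, the same task should be reproducible from the command line with ODM's own flags, which mirror the options WebODM reported above. This is just a sketch; the dataset path and project name are placeholders:

```shell
# Hypothetical rerun of the task above using the ODM Docker image.
# /my/datasets and my-survey are placeholders for your own paths.
docker run -ti --rm -v /my/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets my-survey \
  --auto-boundary \
  --mesh-octree-depth 12 \
  --use-3dmesh \
  --pc-quality high \
  --mesh-size 300000
```

Running it this way makes it easy to tweak one option at a time between attempts.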
Manual free flight at various altitudes won't be an issue; I almost always include some of that in addition to autonomous flights.
Have you got enough RAM to go with ultra feature quality and point cloud quality? It might be worth trying, along with increasing the minimum number of features to, say, 25,000.
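In flag terms, that suggestion would look something like this (just the options I'd change from the defaults, not a full command):

```shell
# Sketch: raise feature extraction/matching effort and point cloud density.
# feature-quality and pc-quality are independent options in ODM,
# so you can adjust one without the other.
--feature-quality ultra \
--pc-quality ultra \
--min-num-features 25000
```

More features gives the matcher a better chance of tying the two days' images together, at the cost of RAM and processing time.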
I've had a similar issue, but that was due to setting sfm-algorithm: triangulation rather than incremental with an M2P drone, which isn't supported.
Hi Gordon,
I have 16GB of DDR4 RAM, so quite a bit, although it could always be more. Maybe my i5 is a bottleneck, as that definitely needs an upgrade.
I'll try again on a lower quality setting to rule that out. Thanks for your suggestion. Is there a way to leave the point cloud quality alone and just go with high-res feature quality?