I have a big dataset of 30,000 images and wish to generate an orthophoto. I tried the split and merge flags on some of the images, but it is not generating a report or a 3D model for the same?
What should the strategy be to generate the orthomosaic for a larger dataset? I also read about ClusterODM/NodeODM. But can I use plain ODM and see the results?
Not generating the 3D model is expected for split/merge. That is a known limitation.
I’m uncertain about the report, however.
ClusterODM will have the same limitation as local split/merge in regard to not producing a 3D model.
You can try tuning some quality parameters, resizing images, or potentially using our planar reconstruction pipeline (under the “Field” preset in WebODM).
Report generation is also not supported with split/merge at this time.
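For reference, split/merge is enabled with the `--split` and `--split-overlap` flags. A minimal Docker invocation might look like the sketch below; the mount path and project name are placeholders, and the split values are just examples to adjust for your hardware:

```shell
# Sketch, not a verified recipe: run ODM with split/merge via Docker.
# /my/datasets and "project" are placeholder paths/names.
# --split sets the target images per submodel; --split-overlap is in meters.
docker run -ti --rm \
  -v /my/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets project \
  --split 3000 --split-overlap 150
```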
OK, what exactly happens in split and merge?
Are these the steps followed?
If yes, then how will it work for 30,000 images? It may require more memory to execute all these steps.
Hopefully, I can represent the process for multiple-node split-merge accurately enough. Where I’m wrong, I welcome corrections. You can glean much of this from:
Processing starts on the lead node with some basic prep work: in addition to taking inventory of available data (photos, GCPs, geo.txt files, etc.) and settings, the lead node will typically proceed to submodel splitting, with overlap between submodels. The datasets then get shipped off to the other nodes (and processed on the lead node as well) to do feature extraction, matching, and reconstruction. (If local, then the feature extraction, matching, and reconstruction are all done locally instead, of course.)
Those products are then returned to the lead node for alignment. OpenSfM traditionally could do this on the sparse point clouds or the camera poses, finding a sensible simple transform between the submodels from one or the other. I don’t know what the approach is now. At this stage much of the OpenSfM work is done, and the submodels are sent back out to the nodes for dense matching (OpenMVS), meshing, texturing, georeferencing, DEM generation, and orthophoto generation.
Once those stages are complete on the nodes, the data are transferred back to the lead node one final time, where they are combined: point clouds are merged (maybe clipped and merged, not sure), blended cutlines are generated so the orthophotos mosaic smoothly, Euclidean-distance-weighted blending completes the elevation models, and the meshes and final report are ignored.
Historically, the align step was the biggest memory problem, but I haven’t seen any issues with that recently. Most of the rest is quite lightweight: not really any heavier in resource usage than processing a dataset roughly equal to the split size plus the images in the overlap.
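To put rough numbers on that last point for this thread’s dataset, here is a back-of-the-envelope sketch (assuming a perfectly even split, which real flight geometry won’t give you):

```shell
# Rough sizing sketch for a 30000-image split-merge run.
# Assumes an even split; actual submodel sizes depend on --split-overlap
# and flight geometry, so treat these as ballpark figures only.
TOTAL=30000
SPLIT=3000
# Ceiling division gives the number of submodels.
SUBMODELS=$(( (TOTAL + SPLIT - 1) / SPLIT ))
echo "submodels: $SUBMODELS"   # prints "submodels: 10"
```

Each node then sees a workload close to one submodel plus its overlap images, rather than the full 30,000.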
Thank you so much! Fantastically explained! Can you please tell me how GCPs will work in this case?
I will split the dataset, run the parts separately on different parallel nodes, and then combine them together to form a single orthophoto (actually I am planning to split it into 3,000 images per dataset, so I will require 10 different nodes). In this case, how do I split the GCP file? Or will it be done automatically?
I have 5 local machines ready!
What if I split the images using the split command and, once all the submodels are ready, process the submodels individually and in parallel on 5 different local machines? After all the orthophotos are ready, I will merge them on a local machine.
can I do this?
That is handled by ClusterODM and the split/merge pipeline. No need to do anything special aside from setting up ClusterODM and using split/merge.
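In other words, you keep a single gcp_list.txt for the whole dataset, and the pipeline assigns GCPs to the right submodels for you. The file follows the standard ODM format: the first line is the projection, and each following line is `geo_x geo_y geo_z image_x image_y image_name`. The coordinates and filenames below are made-up placeholders:

```
EPSG:4326
-91.99425 46.84363 198.609 1355 1422 DJI_0177.JPG
-91.99312 46.84411 198.912 2288 901 DJI_0181.JPG
```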