I am trying to get split and merge to work on my PC. I chose a dataset with 425 images from the ODM data and tried to run it on my PC. I knew my system would not be able to handle it at the default settings, which is why I chose that particular dataset: it seemed like a good opportunity to learn how to use split and merge to process datasets larger than I normally can. I read the ODM cookbook and some other posts on how to use split and merge and got to work.
I chose the split size to be 40 and the overlap to be 30, thinking that would be small enough for my system to handle.
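For reference, the command I'm using looks roughly like this (the dataset path and project name are placeholders, not my exact setup):

```
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
    --project-path /datasets project --split 40 --split-overlap 30
```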
As I started processing the dataset I ran into an error and opened the task output to see what was happening. I read through all the output lines and found a particular line that said "2021-07-07 14:54:29,289 INFO: Expecting to process 127 images." This is odd: if it decided to process 127 images, then what happened to my input where I instructed it to run just 40 images at a time?
My system configuration is: Ryzen 5 1500X, 24 GB RAM, Windows 10 Pro, Docker.
Update: even though the process crashed a long time ago, ODM is still using a lot of system resources. If it's not processing anything, then why does it need so much of my system's resources?
ODM is still running in Docker; that's what's taking up my system resources. I don't know what's happening in the pipeline once it has thrown an error at me and shown me that the process has failed.
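In case it helps anyone else hitting this, the leftover container can be found and stopped by hand (the container ID/name is whatever docker ps reports, not a specific value):

```
docker ps               # lists the ODM container that is still running
docker stats            # shows what CPU/RAM it is currently using
docker stop <container> # stop it using the ID or name from docker ps
```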
I thought I had replied. My apologies for the delay: when you specify the split size plus the overlap for the splits, that total area gives you what I'll call the batch size. So a split of 40 images plus 30 meters of overlap results in a batch of around ~140 images, which is larger than your machine can handle.
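To make that concrete, here is a toy sketch of the geometry (made-up image spacing, and not ODM's actual clustering code) showing how the overlap ring around a ~40-image submodel pulls in many of its neighbours:

```python
# Toy sketch, not ODM's actual clustering: image centres on a grid,
# one ~40-image submodel, and a 30 m overlap ring pulling in neighbours.
import math

spacing = 10   # assumed ground distance between image centres, in metres
overlap = 30   # --split-overlap radius, in metres

# lay out 425 image centres on a roughly square grid
side = math.ceil(math.sqrt(425))
images = [(x * spacing, y * spacing) for x in range(side) for y in range(side)][:425]

# take a compact block of images in one corner as a ~40-image submodel
submodel = [p for p in images if p[0] <= 60 and p[1] <= 60][:40]

# the overlap ring adds every other image within `overlap` metres of the submodel
def in_batch(p):
    return any(math.dist(p, q) <= overlap for q in submodel)

batch = [p for p in images if in_batch(p)]
print(len(submodel), "core images ->", len(batch), "images in the batch")
```

The exact numbers depend on how densely your photos were captured, but the point is that the overlap radius, not the split value alone, decides how many images end up in each batch.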
I would recommend reducing your split size and potentially your overlap so that your batches are small enough to process.
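For example, something along these lines (illustrative values only; you will need to experiment with what your hardware tolerates) should keep each batch much smaller:

```
--split 25 --split-overlap 15
```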
All of them are at default values. I don't mind scaling back the feature quality, but I don't see how it correlates with the number of images it is choosing to process.