Processing Conundrum

Hi there,

I have a conundrum in connection with my attempt to complete a 3D model of a friend’s field for a biodiversity project.

Due to its size, I imaged the field with a mapping (grid pattern) flight in two parts of roughly equal size; one required 840 images and the other 977.

Also, a little while ago, as a practice run, I imaged part of the field with 1,101 images and achieved a reasonable model. I then used exactly the same settings on the later survey with its set of 977 images, and it resulted in ‘Not enough memory’. Both sets of images were resampled to 1,250 px. The smaller 840-image set processed OK.

These two attempts, one successful and the other not, are shown in this screen grab.

So my query is: what could possibly be causing the smaller set of images to fail and run out of memory when the larger set was successful, given that the individual image size is the same and both were taken by the same drone, etc.?

I have also tried lowering the feature quality and resizing down to 1,000 px, and got some strange results where the camera positions seem chaotic and bunched up in one particular area, despite an almost identical mapping flight plan being used.

There clearly seem to be other factors involved in the amount of memory used in processing, but I have no idea what they might be.

Thank you

2 Likes

I’d drop PC Geometric, since the fields look reasonably flat (no buildings etc.).
Is the texture of the ground similar for both areas? Are the actual number of features detected in each image similar?
Can you increase virtual memory?

1 Like

I will try without PC Geometric, although I presume that will have no effect on the memory needed, which is where it seems to fail.

Yes, the texture and sorts of features in both areas are the same, although the ground slopes up more in the section that produced 977 images (requiring a terrain-following AGL flight plan, which was used for both sections).

When you say features detected, do you mean the Min Features at 30,000? This was set the same for both.

By virtual memory I take it you’re referring to the Swap file setting. I have set this to a max of 4 GB in Docker, as shown in this grab:

2 Likes

No, the actual number detected, from the console log:

2022-05-29 15:31:46,181 DEBUG: Found 248288 points in 14.906344175338745s
Even a minimum number of features set to 50,000 would be superfluous in this case.
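
If you want to compare the two datasets directly, here is a minimal sketch (assuming you’ve saved the console output to a file; the path is hypothetical) that pulls the per-image feature counts out of those DEBUG lines:

import re
import statistics

# Hypothetical path to a saved console log; point this at wherever you
# keep the output from each task.
LOG_PATH = "console.txt"

# Matches the DEBUG lines shown above, e.g.
# "2022-05-29 15:31:46,181 DEBUG: Found 248288 points in 14.9s"
pattern = re.compile(r"DEBUG: Found (\d+) points")

counts = []
with open(LOG_PATH) as log:
    for line in log:
        match = pattern.search(line)
        if match:
            counts.append(int(match.group(1)))

if counts:
    print(f"lines matched: {len(counts)}")
    print(f"min/median/max features: "
          f"{min(counts)}/{statistics.median(counts)}/{max(counts)}")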

I would think more points = more memory required.

If swap is limited to only 4 GB, then that could be a problem. I don’t know anything about how Macs operate, but is there any way to increase that?
I’ve seen datasets of under 1,000 images, but with large numbers of features, require all of my 96 GB RAM plus some virtual memory, although that was with ultra/high feature quality and PC quality.
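
If you want to double-check what the Docker VM actually has from the host side, here’s a quick sketch using the standard docker CLI (MemTotal is a stock ‘docker info’ field, reported in bytes; swap itself is set in Docker Desktop’s Resources pane):

import subprocess

# Query the memory visible inside the Docker Desktop VM. 'MemTotal' is a
# standard 'docker info' field (bytes); swap is configured separately in
# Docker Desktop's Resources settings.
mem_bytes = int(subprocess.check_output(
    ["docker", "info", "--format", "{{.MemTotal}}"]).strip())
print(f"Docker VM memory: {mem_bytes / 2**30:.1f} GiB")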

1 Like

Not currently, unfortunately. Docker on macOS is looking to switch to a different virtualization backend that will use the system swap, but they’ve not mainlined that yet. From what I gather, you don’t have much control over the macOS system swap size anyway, but it should be more intelligent about it than just a static max of 4 GB.

1 Like

I assume you mean the Processing Log (from the last successful model using the larger number of images):

The number of sparse reconstructed points is 1,038,091, and reconstructed features 3,796, so I can see why leaving it at the default 20,000 would be OK, but it doesn’t take more memory, just time perhaps.

macOS uses an automated swap system and allocates virtual memory as it’s required but, as Saijin says, the Docker system is not accessing that, just a fixed 4 GB. I will post on the Apple communities to see if anything can be done to adjust that at the moment, before hopefully one day Docker amends the Mac version to access the flexible arrangement.
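
In the meantime, it’s easy to watch what macOS itself is doing with swap while a task runs. Here’s a small sketch wrapping the standard sysctl call (this reports the host’s swap, not Docker’s fixed 4 GB):

import subprocess

# 'vm.swapusage' is a standard macOS sysctl key reporting total/used/free
# swap for the host (not for the Docker VM's fixed allocation).
out = subprocess.check_output(["sysctl", "vm.swapusage"], text=True)
print(out.strip())
# e.g. vm.swapusage: total = 2048.00M  used = 1322.75M  free = 725.25M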

All that aside, and going back to my original query, I can’t see why a smaller number of images (977 compared to 1,101), using the same settings, fails whereas the larger number completes. Is there something about the GPS data, for example, where some images complicate things and require more working memory?

1 Like

It would also be useful to know whether the JSON boundary file comes into operation before or after all the images have been processed. If before, then the boundary file would prevent certain images from being used and so reduce the memory load.

Thanks

1 Like

It’s after the image calibration. We have a note about this in the docs for boundary/auto-boundary.

Piero is investigating the idea of throwing out images or features beyond the area of interest:
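
In the meantime, you could pre-filter the image set by hand before submitting a task. A rough sketch of the idea, assuming shapely and Pillow are installed, a single-feature GeoJSON boundary in lon/lat (the file name here is hypothetical), and the usual GPS EXIF tags in the images:

import json
from pathlib import Path

from PIL import Image
from shapely.geometry import Point, shape

def dms_to_deg(dms, ref):
    # EXIF stores GPS as degrees/minutes/seconds rationals plus N/S/E/W ref
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

def get_lon_lat(image_path):
    gps = Image.open(image_path)._getexif()[34853]  # GPS IFD tag
    lat = dms_to_deg(gps[2], gps[1])
    lon = dms_to_deg(gps[4], gps[3])
    return lon, lat

with open("boundary.json") as f:  # hypothetical boundary file name
    boundary = shape(json.load(f)["features"][0]["geometry"])

keep = [img for img in Path("images").glob("*.JPG")
        if boundary.contains(Point(*get_lon_lat(img)))]
print(f"{len(keep)} images fall inside the boundary")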

That’s the Quality Report, produced at the end of the task.
The console log updates intermittently while the task is in progress; I’m sure Docker on a Mac produces something similar to what I see with the native Windows version.

It looks like this:
[INFO] DTM is turned on, automatically turning on point cloud classification
[INFO] Initializing ODM 2.8.4 - Tue Jun 07 11:03:04 2022
[INFO] ==============
[INFO] auto_boundary: True
[INFO] boundary: {}
[INFO] build_overviews: False
[INFO] camera_lens: auto
[INFO] cameras: {}
[INFO] cog: True
[INFO] copy_to: None
[INFO] crop: 3
[INFO] debug: False
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 10.0
[INFO] depthmap_resolution: 640
[INFO] dsm: True
[INFO] dtm: True
[INFO] end_with: odm_postprocess

etc., and it updates through the various stages of the task, starting with feature extraction, with lines like these showing the number of features extracted:

2022-06-07 11:09:27,677 INFO: Extracting ROOT_SIFT features for image DJI_0001_10.JPG
2022-06-07 11:09:27,739 DEBUG: Computing sift with threshold 0.066
2022-06-07 11:09:27,858 INFO: Reading data for image DJI_0744_18.JPG (queue-size=1
2022-06-07 11:09:27,858 INFO: Extracting ROOT_SIFT features for image DJI_0744_17.JPG
2022-06-07 11:09:27,889 DEBUG: Found 8100 points in 0.46240663528442383s
2022-06-07 11:09:27,889 DEBUG: reducing threshold
2022-06-07 11:09:27,889 DEBUG: Computing sift with threshold 0.044000000000000004
2022-06-07 11:09:27,905 DEBUG: Found 9224 points in 0.4780292510986328s
2022-06-07 11:09:27,905 INFO: Reading data for image DJI_0249_20.JPG (queue-size=1
2022-06-07 11:09:27,905 DEBUG: reducing threshold
2022-06-07 11:09:27,905 DEBUG: Computing sift with threshold 0.044000000000000004
2022-06-07 11:09:27,905 INFO: Extracting ROOT_SIFT features for image DJI_0249_2.JPG
2022-06-07 11:09:27,920 DEBUG: Computing sift with threshold 0.066
2022-06-07 11:09:27,936 DEBUG: Found 12637 points in 0.5092840194702148s
2022-06-07 11:09:27,936 INFO: Reading data for image DJI_0499_17.JPG (queue-size=1
2022-06-07 11:09:27,936 DEBUG: done
2022-06-07 11:09:27,936 INFO: Extracting ROOT_SIFT features for image DJI_0499_16.JPG
2022-06-07 11:09:27,999 DEBUG: Computing sift with threshold 0.066
2022-06-07 11:09:27,999 DEBUG: Found 14145 points in 0.5717837810516357s
2022-06-07 11:09:27,999 DEBUG: done
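
For context, those ‘reducing threshold’ lines are the feature extractor retrying with a lower SIFT contrast threshold until it reaches min-num-features. A simplified sketch of that loop using OpenCV (not OpenSfM’s actual code, just the same idea):

import cv2

MIN_FEATURES = 30000  # the min-num-features setting

def extract_sift(path, threshold=0.066, min_features=MIN_FEATURES):
    # Retry SIFT with progressively lower contrast thresholds until enough
    # keypoints are found, mirroring the log output above.
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    while True:
        sift = cv2.SIFT_create(contrastThreshold=threshold)
        keypoints = sift.detect(gray, None)
        print(f"threshold {threshold:.3f}: found {len(keypoints)} points")
        if len(keypoints) >= min_features or threshold < 0.001:
            return keypoints
        threshold *= 2 / 3  # matches the 0.066 -> 0.044 step in the log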

1 Like

Hi Gordon, OK, I will try to find such a log in Docker, which I presume will remain there until I start a new task, even if the task failed.

I’m still confused about why a section of the 3D map (in the first post) was all distorted and the camera positions all bunched up. Is that anything to do with the original drone photos, or is it an artefact of the processing?

Thanks

2 Likes

Both, kinda. It normally means poor matching in that area. So you might need slightly tighter transects, more frequent photos, slightly higher flight altitude, etc. Or crank up feature-quality and min-num-features.
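
If it’s easier to script than to click through, the same settings can be sent with pyodm (a sketch, assuming a NodeODM instance on the default port and a hypothetical image list):

from pyodm import Node

node = Node("localhost", 3000)  # assumed local NodeODM instance

task = node.create_task(
    ["DJI_0001.JPG", "DJI_0002.JPG"],  # ...the rest of the dataset
    {
        "feature-quality": "high",     # or "ultra", memory permitting
        "min-num-features": 30000,
    },
)
task.wait_for_completion()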

1 Like

I’ve seen that with perfectly good datasets too, but I was able to prevent it by using ultra feature quality rather than high. Of course that may be an issue for you with memory limitations.
Do the bunched-up camera positions mostly consist of images of tree foliage? If it was windy and the leaves were moving around a lot, finding more matches in the non-moving parts of the scene becomes more important.

1 Like

Thank you both for the suggestions. Sounds like I’m caught between a shrub and a soft squidgy place!

There are no trees on the site, but there are low-level shrubs and plants up to 1 m high. If I increase the flight height I get fewer photos (good) but less detail and resolution (bad). So after practice tests I settled on 50 ft AGL on this sloping hillside and designed the flight plan with paths close enough to give good overlap.

If one particular area is confusing for match finding, can I change any of the other variables to help, given that I can’t go above medium or high Feature Quality? ‘Matcher Neighbours’, for example?

2 Likes

For comparison, here is the first half of the field, made from 840 images using medium Feature Quality and PC Quality, with the images resized to 1,250 px.

2 Likes

Wow, that would give me a GSD of 3 millimetres (and I’d have to fly at under 4 km/h). What GSD do you have on your full-size images at that height?

It might be worth going significantly higher so that you can use ultra feature quality on fewer images.

I think that is determined automatically now, overriding the value you enter.

1 Like

I fly at a max of 4 mph, and the WebODM details say the average GSD is 1.73 cm.

1 Like

1.73 cm must be after resizing. Any resizing involves adding noise, so I suspect you may get better results with a 150 ft flight height and no resizing.

I think you said you were using a Mavic Mini? 150 ft gives 1.7 cm GSD with it, no resizing required. You can fly faster too: 18 km/h (10 mph) :)
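
For reference, that number falls straight out of the usual GSD formula. A quick sketch, assuming nominal Mavic Mini parameters (1/2.3-inch sensor of about 6.17 mm width, 4.49 mm focal length, 4000 px image width):

def gsd_cm(height_m, sensor_width_mm=6.17, focal_mm=4.49, image_width_px=4000):
    # Ground sample distance in cm/px for a nadir shot
    return (sensor_width_mm * height_m * 100) / (focal_mm * image_width_px)

print(f"{gsd_cm(50 * 0.3048):.2f} cm/px at 50 ft")    # ~0.52 cm/px
print(f"{gsd_cm(150 * 0.3048):.2f} cm/px at 150 ft")  # ~1.57 cm/px

That lands near the 1.7 cm quoted above; the small difference comes down to the exact sensor specs assumed.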

1 Like

Those are some interesting thoughts. I think the Mini’s images are 3000x4000. The idea of flying faster to reduce the overall flight time of around 1.5 hrs is appealing, but the images are still taken at about 1.5-2 s intervals. Does that factor into your calculations to give sufficient overlap?
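
Working it through roughly for myself (a sketch, assuming the camera’s 3000 px side points along track and the ~1.57 cm/px GSD at 150 ft mentioned above):

def forward_overlap(speed_ms, interval_s, gsd_cm, along_track_px=3000):
    # Fraction of each frame shared with the next one along track
    spacing_m = speed_ms * interval_s
    footprint_m = along_track_px * gsd_cm / 100
    return 1 - spacing_m / footprint_m

# 10 mph = ~4.47 m/s, 2 s between photos, ~1.57 cm/px GSD at 150 ft
print(f"{forward_overlap(4.47, 2.0, 1.57):.0%}")  # ~81% forward overlap

If that arithmetic holds, the worst case is roughly 80% forward overlap, but does that match your calculations?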

1 Like

One of the reasons I chose to fly slowly is so that the images would be sharp, since the drone does not pause to take an image. If I fly at 10 mph, even at 150 ft, will the images have no motion blur? All would be for nothing if the raw data is fuzzy.

1 Like

Yes, the timing would still be the same; just make sure your drone’s motion during the exposure time is around the same as your GSD.
Keeping images sharp is very important!
See Incomplete Orthomosaic - #17 by Gordon
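
To put numbers on that rule of thumb, a sketch (1/1000 s is just an assumed bright-day shutter speed; check what your EXIF actually reports):

def blur_px(speed_ms, exposure_s, gsd_cm):
    # Ground distance travelled during the exposure, in pixels (GSD units)
    return (speed_ms * exposure_s * 100) / gsd_cm

# 10 mph = ~4.47 m/s at an assumed 1/1000 s exposure and 1.57 cm/px GSD
print(f"{blur_px(4.47, 1 / 1000, 1.57):.2f} px of motion blur")  # ~0.28 px

Well under one GSD, so at 10 mph the images should stay sharp as long as the shutter stays fast.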

1 Like