Creating 3D model from phone photos - best settings

I took a set of photos of an object out in the garden, but the resulting model had some blurred parts, which I assumed were due to insufficient overlap on some of the images. So I took another 15 or so pictures of the relevant area, added them to the previous set, and reprocessed.

The resulting model now has a chunk missing, and some of the added images show as offset from the model, so they appear like a separate item (see image).

If I am right in thinking that it does not matter if images are out of sequence, then what has caused this result?



Sequence doesn’t matter, so long as the scene is the same.

In a big mapping job of 23500 images covering ~1400ha last year, the photos were done in 3 separate multi-day visits over a month, and some areas were revisited on the 2nd or 3rd visit due to the late afternoon shadows being a bit too prominent on the first visit.
They were all successfully stitched in Agisoft Metashape by my client.

As to what causes your problem (which I have often seen too), I don’t know, but using Ultra settings, not resizing images for feature extraction, and perhaps increasing the minimum number of features (if you notice images with relatively low feature counts in the console log) should help.


As I’m relatively new to WebODM, I’m not sure what Ultra settings are yet. For image resizing, the default seems to be around 2000 px, or are you referring to something else?

Where do I increase the minimum number of features?



Default for resizing is 2048, but set that to -1 to prevent resizing.
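To put that default in perspective, here’s a rough Python sketch of how much image detail resizing to 2048 px discards. The 12 MP image dimensions are assumed for illustration, and my understanding is that ODM scales the largest dimension down to the resize-to value:

```python
# Sketch: detail discarded by the default resize-to of 2048 for a
# typical 12 MP phone image (4000 x 3072 assumed for illustration).
orig_w, orig_h = 4000, 3072
target = 2048  # WebODM's default resize-to for feature extraction

# As I understand it, ODM scales the largest dimension down to
# `target`, preserving the aspect ratio.
scale = target / max(orig_w, orig_h)
new_w, new_h = round(orig_w * scale), round(orig_h * scale)

pixels_kept = (new_w * new_h) / (orig_w * orig_h)
print(new_w, new_h)          # the resized dimensions
print(f"{pixels_kept:.0%}")  # fraction of original pixels kept, ~26%
```

In other words, roughly three quarters of the pixels never get seen by the feature extractor, which is why -1 (no resizing) can make such a difference on fine detail.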

You need to go into EDIT, where lots of parameters (see below) can be changed:

[INFO] Initializing ODM 2.8.0 - Fri Apr 08 18:58:53 2022
[INFO] ==============
[INFO] auto_boundary: True
[INFO] boundary: {}
[INFO] build_overviews: False
[INFO] camera_lens: auto
[INFO] cameras: {}
[INFO] cog: True
[INFO] copy_to: None
[INFO] crop: 3
[INFO] debug: False
[INFO] dem_decimation: 1
[INFO] dem_euclidean_map: False
[INFO] dem_gapfill_steps: 3
[INFO] dem_resolution: 5
[INFO] depthmap_resolution: 640
[INFO] dsm: True
[INFO] dtm: False
[INFO] end_with: odm_postprocess
[INFO] fast_orthophoto: False
[INFO] feature_quality: ultra
[INFO] feature_type: sift
[INFO] force_gps: False
[INFO] gcp: None
[INFO] geo: None
[INFO] gps_accuracy: 10
[INFO] ignore_gsd: False
[INFO] matcher_neighbors: 10
[INFO] matcher_type: bruteforce
[INFO] max_concurrency: 16
[INFO] merge: all
[INFO] mesh_octree_depth: 12
[INFO] mesh_size: 300000
[INFO] min_num_features: 12000
[INFO] name: c04c0c80-2f04-42c3-a02e-36eb89a59422
[INFO] optimize_disk_space: False
[INFO] orthophoto_compression: DEFLATE
[INFO] orthophoto_cutline: False
[INFO] orthophoto_kmz: False
[INFO] orthophoto_no_tiled: False
[INFO] orthophoto_png: False
[INFO] orthophoto_resolution: 0.05
[INFO] pc_classify: True
[INFO] pc_csv: False
[INFO] pc_ept: True
[INFO] pc_filter: 0.0
[INFO] pc_geometric: True
[INFO] pc_las: False
[INFO] pc_quality: ultra
[INFO] pc_rectify: False
[INFO] pc_sample: 0
[INFO] pc_tile: False
[INFO] primary_band: auto
[INFO] project_path: E:\WebODM\resources\app\apps\NodeODM\data
[INFO] radiometric_calibration: none
[INFO] rerun: None
[INFO] rerun_all: False
[INFO] rerun_from: None
[INFO] resize_to: -1
[INFO] sfm_algorithm: incremental
[INFO] skip_3dmodel: False
[INFO] skip_band_alignment: False
[INFO] skip_orthophoto: False
[INFO] skip_report: False
[INFO] sm_cluster: None
[INFO] smrf_scalar: 1.25
[INFO] smrf_slope: 0.15
[INFO] smrf_threshold: 0.5
[INFO] smrf_window: 18.0
[INFO] split: 999999
[INFO] split_image_groups: None
[INFO] split_overlap: 150
[INFO] texturing_data_term: gmi
[INFO] texturing_keep_unseen_faces: False
[INFO] texturing_outlier_removal_type: gauss_clamping
[INFO] texturing_skip_global_seam_leveling: False
[INFO] texturing_skip_local_seam_leveling: False
[INFO] texturing_tone_mapping: none
[INFO] tiles: False
[INFO] time: False
[INFO] use_3dmesh: True
[INFO] use_exif: False
[INFO] use_fixed_camera_params: False
[INFO] use_hybrid_bundle_adjustment: False
[INFO] verbose: False


Thanks. I can see a setting for ‘feature quality’ and ‘PC quality’ so they should both be ultra?
Also, should I increase ‘min number of features’ from 12000 to, say, 15000, whatever the amount of detail in the subject matter?


They don’t have to be; ultra takes significantly more time, but gives a better result with more points.
Min number of features, in my experience, only helps sometimes; it is highly dependent on the scene. Usually the number of points detected is well in excess of the default 10000. If the number detected is below the setting, the threshold for detection is lowered and feature detection re-runs to hopefully detect more**. I’m not sure of the limit on repeats, but the number of re-runs with lowered thresholds probably increases with a higher min_num_features.

** I don’t think this is necessarily the case when GPU feature extraction is used, at this time.
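The re-run behaviour described above can be sketched in a few lines of Python. This is a toy illustration, not ODM’s actual implementation; `detect` is a stand-in for a real feature detector:

```python
# Toy sketch of the re-run logic: if fewer than min_num_features are
# found, lower the detection threshold and run detection again.

def extract_features(image, detect, min_num_features=10000,
                     threshold=0.3, lower_factor=2/3, max_retries=5):
    features = detect(image, threshold)
    retries = 0
    while len(features) < min_num_features and retries < max_retries:
        threshold *= lower_factor  # more permissive -> more (weaker) features
        features = detect(image, threshold)
        retries += 1
    return features

# Stand-in detector: pretend that lowering the threshold finds more points.
fake_detect = lambda image, t: list(range(int(1000 / t)))

features = extract_features("img.jpg", fake_detect, min_num_features=12000)
print(len(features))  # >= 12000 after a few threshold reductions
```

The trade-off is visible in the sketch: a higher min_num_features forces more re-runs at lower thresholds, which costs time and admits weaker features, which is presumably why it only helps on some scenes.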


Thanks. I will report back if the problem is resolved or improved :blush:


Ok, I have tried two variations: first setting the Feature quality to Ultra with 2000 px images, and secondly those settings plus the Number of features set to 15000.

The ‘anomaly’ persists but in the second case it’s of higher quality. See pics.

I can see I’m going to have to start the whole process all over again :expressionless:


How does your camera distribution look?
Have you looked at the report, it can show up problem areas.
Are you able to share the images for others to have a go at?


I can download a Camera Report as a Geojson file - is that what you mean?

Here is the Quality report but I can’t say it means too much to me at the moment :slight_smile:
Kerrow-09-04-2022-report.pdf (1.8 MB)

I can try uploading them to my Mega account and get back with a link.


Here is a link to the 92 images that should work:



Not that file. The quality report showing camera positions isn’t very useful in cases like this, where the camera was facing near horizontal, but if you display 3D and show cameras, you can rotate the view to get a better idea of the distribution and any holes that might exist.
I’m downloading the images now so will see how it looks soon.


I tried including a DSM, which was a mistake, as processing failed, perhaps because no images look down at the ground from above. With the DSM eliminated, processing was able to complete.
The point cloud had holes, but textures were almost complete; see the screen grabs of the 3D model from the front and sides below.

If the vegetation hasn’t changed too much, I’d suggest photos with similar good coverage around the back and sides, plus some looking down across the stone and a metre or two of ground around it.

Camera positions, looking into the hollow back side of the model

Above model created with these settings:
Kerrow - 09/04/2022

91 images (I removed 1 duplicated image), processing time 01:22:14

Options: auto-boundary: true, dem-resolution: .1, mesh-octree-depth: 12, mesh-size: 300000, min-num-features: 20000, pc-filter: 0, pc-geometric: true, pc-quality: ultra, resize-to: -1, use-3dmesh: true
Average GSD: 0.03 cm
Area: 16.93 m²
Reconstructed Points: 29,767,137
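As a quick back-of-the-envelope check relating those figures, the point count and area imply an average point spacing (the numbers are just the ones reported above; the spacing formula assumes a roughly uniform square grid of points):

```python
# Relate the reported point count and area to an average point spacing.
points = 29_767_137
area_m2 = 16.93

density = points / area_m2           # points per square metre
spacing_cm = 100 / density ** 0.5    # average spacing between points, in cm

print(f"{density:,.0f} pts/m^2, ~{spacing_cm:.3f} cm spacing")
```

That comes out at roughly 0.075 cm between points, a couple of times coarser than the reported 0.03 cm GSD, which seems plausible for a filtered dense cloud.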


Thanks. That looks very good. I can’t take photos around the back, as the ground level at the back is up to three quarters of its height. The Neolithic rock markings are all on the front and top areas of this stone. Maybe if it’s excavated around the back one day. And yes, there are no shots from above except those taken holding my arm up.

So should I set DSM to False in settings?

And can you say which of the settings you used were different from the default ones, and which removed those problems I had? You used a higher ‘min num features’ of 20000 and no resizing.

Interesting that your run automatically removed one duplicate image whereas mine didn’t.

My processing of a large area heritage site from drone images (Boscawen Un) didn’t produce any of the problems I encountered here, so I’m still trying to pinpoint why the default settings all worked fine there but not for this smaller subject using phone pics. That would help for future work. Thanks again.


That won’t be a problem, so long as you have enough overlap for plenty of common features between images. Standing on the higher ground at the back might allow you to grab some imagery from above with a selfie stick or similar? It would be good to have a little surrounding ground for context.

False is the default when you choose 3D model, so yes for the current set of images; I suspect just one side of an upright rock with no surrounding ground is unlikely to produce anything useful.

No resizing (set to -1 instead of 2048) should help with feature extraction. However, because the GPU is used for feature extraction, the number of features isn’t displayed (maybe it is if Verbose is used, but I haven’t tried that), so I don’t really know which particular parameter was most helpful in improving the result. Perhaps Saijin_Naib can offer some insights?

I manually removed that duplicate ‘A’ image, as I couldn’t see any obvious difference from the original.

The very impressive model you produced of Boscawen Un perhaps benefitted from the better coverage from more angles, particularly I think having a large area of ground helps. Finding the right settings for small areas taken with a phone is often quite tricky in my experience.

Just having a play with animations and I can see that the surface of the stone and lichen has holes in the 3D model where some parts are not in the plane of the surface, but are lifted out a little bit, most noticeable when the surface is viewed nearly side on.
So rather than having all the photos with the camera pointing straight at the rock face, angle some at, say, 70 degrees to the rock face, equivalent to non-nadir photos taken from a drone, which should better define the stone and lichen surface in the 3D model.


Animation here-


Thank you for the insights.

With the drone-based model I used an 80-degree gimbal angle for the vertical mapping shots rather than 90, so that there is a bit more of a side view when going back and forth over an area.

I agree that smaller ground-based objects introduce their own issues, especially when one is taking hand-held, i.e. not automated, shots of them. Clearly some more experimentation is required at both the imaging and processing stages, but at least you have demonstrated that adjusting some of the settings can significantly improve the end result :blush:

When it comes to handheld-sized objects, I can use a motorised rotating platform and my phone, as here: 'Cross' Artefact

The even, regular shots all round, at 3 or 4 angles, work well. This model was made with 48 shots, I think.


Hi Gordon, nice! That is one of the features I have been wondering about: how to do a visual tour or animation? Is that done within the WebODM viewer, or are you using some other means to rotate and move the object in the way you want and then screen recording? (I have Debut software for recording.)


Hi Gordon,

This is what happened when I used your settings - looks like a bomb has gone off! Much worse than my original model, so heaven only knows what is going on . . . .

Here’s another view:


I only recently figured it out myself. The animation is done in WebODM, then I used OBS Studio to record the screen, and in Windows Video Editor I saved it in a 720-pixel-wide format to reduce the file size.

Ideally we’d be able to export the animation from WebODM, but since that isn’t possible I tried OBS Studio to record the whole screen; I’ve yet to find a video recorder/editor that will allow just a part of the screen to be recorded.
Under Navigation, select camera animation with the green lines and red dots; then it’s a matter of aligning them to get the views you want to start and end at, and how it moves in between. It does take some practice!

Having looked through the console error in another attempt that failed, I now think it is pc-rectify: true that makes the task fail, rather than dsm: true as I thought earlier, though for the same reason I mentioned before.

Ouch! Can you paste in your settings directly from the task? i.e.
Options: auto-boundary: true etc.