YAML settings for a more 2D, straight-down perspective in a forest survey ortho

I've been using ODM to generate orthos of my forest research surveys. The orthos produced so far have all given me a fairly top-down 2D view that I then use to locate infested trees.

The latest survey, from near Lake Tahoe, is over a more widely spaced stand, and the ortho shows the tree stems leaning, with 2-3 meters of offset between a tree's base and its canopy. As a result, none of my ground plots align with the ortho, even after georeferencing.

My settings.yaml has been tweaked from the defaults to produce imagery that works better for my research. Any suggestions for settings that would give a less 3D, more directly downward-facing perspective for my survey ortho?

I’m not exactly sure this is what you’re going for, but something to try is lowering --mesh-octree-depth and --mesh-solver-divide, both to 6 or so. I’ve found it makes really clean-looking orthos.
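In settings.yaml terms (assuming the keys match the listing further down in this thread), that suggestion would look something like:

#mesh_octree_depth: 9 #default
mesh_octree_depth: 6
#mesh_solver_divide: 9 #default
mesh_solver_divide: 6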

What I’m after is a more overhead image that doesn’t include the parts of the source images far from the centre, which are the parts that capture side angles of the tree stems. I have > 80% overlap in both directions, so I’m thinking that cropping the images would eliminate the outlying trees. I’m not worried about 3D accuracy.

Current settings are:
#resize_to: 2400
resize_to: 2000
#start_with: 'resize'
#end_with: 'odm_orthophoto'
#rerun_all: False
#zip_results: False
#verbose: False
verbose: True
#time: False
time: True
#min_num_features: 4000 #default
min_num_features: 15000
#matcher_threshold: 2.0 # zombie code, does nothing
#matcher_ratio: 0.6
#matcher_neighbors: 8
matcher_neighbors: 12
#matcher_distance: 0
matcher_distance: 250
#use_pmvs: False # The cmvs/pmvs settings only matter if 'Enabled' is set to True
#cmvs_maximages: 500
#pmvs_level: 1
#pmvs_csize: 2
#pmvs_threshold: 0.7
#pmvs_wsize: 7
#pmvs_min_images: 3
#pmvs_num_cores: 4 # by default this is set to $(nproc)
#mesh_size: 100000
mesh_size: 30000
#mesh_octree_depth: 9 #default - when pulls images off
mesh_octree_depth: 5
#mesh_samples: 1.0
#mesh_solver_divide: 9
mesh_remove_outliers: 3
#texturing_data_term: 'gmi'
#texturing_outlier_removal_type: 'gauss_clamping'
#texturing_skip_visibility_test: False
#texturing_skip_global_seam_leveling: False
#texturing_skip_local_seam_leveling: False
#texturing_skip_hole_filling: False
#texturing_keep_unseen_faces: False #default
#texturing_keep_unseen_faces: True #? adds shelf to mesh around textured section
#texturing_tone_mapping: 'none'
#gcp: !!null # YAML tag for None
#use_exif: False # Set to True if you have a GCP file (it auto-detects) and want to use EXIF
use_exif: True
#dtm: False # Use this tag to build a DTM (Digital Terrain Model)
#dsm: False # Use this tag to build a DSM (Digital Surface Model)
#dem-gapfill-steps: 4
#dem-resolution: 0.1
#dem-maxangle: 20
#dem-maxsd: 2.5
#dem-approximate: False
#dem-decimation: 1
dem-terrain-type: ComplexForest
#orthophoto_resolution: 20.0 # default Pixels/meter 20.0px/m = 5.0cm/px
orthophoto_resolution: 30.0
#orthophoto_target_srs: !!null # Currently does nothing
#orthophoto_no_tiled: False
orthophoto_no_tiled: True
#orthophoto_compression: DEFLATE # Options are [JPEG, LZW, PACKBITS, DEFLATE, LZMA, NONE] Don’t change unless you know what you are doing

Because I have 80-90% overlap, I want to try cropping the pictures. The idea is to limit the parts of the images that ODM uses to build the texture to trees that were captured from a more vertical angle. Does resize crop from the centre outwards, or from the widest side inwards?

Hopefully the results will show a more 2D overhead perspective, reducing tree lean and the misalignment between each crown and its base.

Is this possible in ODM, or do I need to perform the crop beforehand and then import the cropped images?
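If pre-cropping outside ODM turns out to be the way to go, here is a minimal sketch of what that could look like with Pillow, assuming JPEG inputs, a centred crop, and that carrying the original EXIF block across is sufficient; the folder names and the 60% keep ratio are placeholders:

from pathlib import Path
from PIL import Image

SRC = Path("images")           # original survey photos (assumed JPEG)
DST = Path("images_cropped")   # cropped copies to feed to ODM
KEEP = 0.6                     # keep the central 60% of width and height

DST.mkdir(exist_ok=True)
for src in sorted(SRC.glob("*.JPG")):
    img = Image.open(src)
    w, h = img.size
    dx = int(w * (1 - KEEP) / 2)
    dy = int(h * (1 - KEEP) / 2)
    # A centred crop keeps the principal point in the middle of the frame
    cropped = img.crop((dx, dy, w - dx, h - dy))
    exif = img.info.get("exif")  # raw EXIF bytes (GPS, focal length), if present
    kwargs = {"exif": exif} if exif else {}
    cropped.save(DST / src.name, "JPEG", quality=95, **kwargs)

One caveat: the EXIF focal length and sensor metadata still describe the full frame, so the camera model ODM estimates for the cropped images may be slightly off.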

There’s work from @pierotofy that weights nadir images: https://github.com/OpenDroneMap/mvs-texturing/commit/6b53bdef3e3a4bb111119bbfc08bf1446f054705

Can you test how it works from the master branch?