Problem calculating tree height

Hey ODM community!

First, let me start by introducing myself. I’m Nicolas from Argentina, and I’m currently working on my thesis with ODM. The thesis is about detecting changes in structural parameters in forests in Argentina, using UAVs. The plan is to implement the project in Argentina’s national parks.

I’ve added a few plugins / features to WebODM on a fork I created, such as an elevation map, integration with a Piwigo server, and integration with a LabelMe server we have. If someone thinks that any of these features would be useful in ODM, I would be more than happy to talk about integrating them.

But going back to my question, as part of forest classification, it’s super important to have a precise height measurement of the trees. Sadly, I’m currently failing to do so. First, I’m not sure what the best flight parameters would be.

For example, should the camera be facing down (90 degrees) or slightly tilted (like 75 degrees)? Should the mission be a simple or double grid? Would GCPs help, even if they are placed on the ground?

And once I have the images, would you recommend any special parameters when running ODM? I’ve used the ‘Forest’ preset, but I haven’t seen any significant difference.

I have a few runs where the height is off. For example, I have this run (no special params, only dsm and dtm) with GCP (camera was facing down, 90 degrees):
Elevation map (since the reference is the floor, I’m comparing the DSM and DTM and creating layers every 2m).
Highest point in the elevation map:

You can see that the highest point is between 40 and 42 m, and that’s not correct: the trees are shorter than 20 m. At first I thought the problem could be with the DTM calculation, since it is sometimes incorrect, but even if I only use the DSM I get this big 40 m gap between the lowest and highest levels.
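For reference, the DSM/DTM comparison I mention is essentially a canopy height model binned into 2 m layers. A minimal NumPy sketch (the arrays here are toy stand-ins for the GeoTIFF rasters ODM produces, which you’d normally read with something like rasterio):

```python
import numpy as np

# Toy stand-ins for the DSM and DTM rasters (in practice, read the
# GeoTIFFs ODM produces into 2-D arrays of the same shape).
dsm = np.array([[105.0, 118.0], [102.0, 141.0]])  # surface elevation (m)
dtm = np.array([[100.0, 100.0], [101.0, 100.0]])  # terrain elevation (m)

# Canopy height model: height above ground at each cell.
chm = dsm - dtm

# Bin heights into 2 m layers, as done for the elevation map.
layers = np.floor(chm / 2.0).astype(int)

print(chm.max())  # highest point above ground
print(layers)
```

With real data, a `chm.max()` of ~41 m over trees you know are under 20 m immediately flags the problem described above.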

I read some other posts where people recommended using another camera angle. Since I couldn’t go back to where the images were taken, and since it will help me experiment, I wrote a small drone simulator in Unity that flies over a predefined terrain, taking pictures. I haven’t added any GCPs yet.

At 90 degrees, I got a highest level between 44 and 46 m, when the trees were 30 m tall (pictures and info in the link below). At 75 degrees, I got the same problem:
Elevation map:
Highest point:

Anyway, does anyone have any recommendations? I’m not sure how to fix this problem.

Links to Task images & console:


  • I’m using the docker images of both ODM and NodeODM (without any modifications).

Thank you so much for taking the time to read this!

You’ll need GCPs for that kind of accuracy. I’d also recommend flying a crosshatch pattern with larger overlap (>80%) and seeing if that helps.

I’ve generally found that oblique produces better 3D models and DSMs, though I have a client that does volumetric calculations for aggregate companies who swears that he gets more accurate results using nadir. I’d recommend that you try both.

Hey @nchamo :hand: that’s awesome work! Both additions are very interesting (I also looked at your dev mode changes, which are a good improvement). If you open a PR there are a few things that might need adjustments (like not using additional docker volumes, removing certain hard-coded paths, perhaps adding plugin entrypoints for image import vs. hard-coding a new pending_action state), but we welcome contributions.

I’d be very interested in seeing your drone simulator too! Is that available as open source?

Onto your question: maximizing scale variation (flying at different elevations), varying the camera angle (flying both nadir and non-nadir in a cross pattern), and flying higher are all things that should improve your reconstruction. Aside from that, trees in general are tricky to reconstruct because they don’t have many identifiable features, which is why lidar still outperforms photogrammetry on these tasks.
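To put rough numbers on the altitude point: ground sample distance (GSD) scales linearly with flight height, which is one way scale variation enters the reconstruction. A quick sketch (the camera numbers below are hypothetical, roughly a small 1/2.3"-class sensor):

```python
def gsd_cm(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sample distance in cm/pixel for a nadir image."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical camera: 6.17 mm sensor width, 4.5 mm focal length, 4000 px wide.
print(gsd_cm(6.17, 4.5, 100, 4000))  # ~3.4 cm/px at 100 m AGL
print(gsd_cm(6.17, 4.5, 50, 4000))   # ~1.7 cm/px at 50 m AGL
```

Flying passes at, say, 50 m and 100 m gives the matcher the same scene at two different scales, which helps self-calibration.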

Flying nadir-only, at the same elevation in a grid pattern is bad for self-calibration and results will suffer.

GCPs will give you better georeferencing accuracy, but will not improve height estimation by much (if any).

Hey @pierotofy and @ITWarrior!

Thank you so much for your answers.

I already created a PR for the elevation map plugin, but integrating Piwigo and LabelMe will be harder, since they are completely separate systems with their own HTTP servers. That’s why I’m currently using Docker images. I’ll see what I can do about it :slight_smile:

Also, about the trees problem itself, I’ll take your input, add some different flight plans to the simulator and then I’ll upload it to a github repo so you can take a look.

I’ll keep you posted :smile:

Thank you so much!


Hey @pierotofy!

One more question. I see that ODM reads lat, long, and altitude from each image with exiftool. Does it also read other properties, such as camera pitch and yaw, somewhere else in the code?

I’m asking so I know if I should tag those properties with the simulator.


Good question, it does not. Only lat/lon/alt (and a few others such as focal length, make and model) are used (if they are available).

77-83% cross grid would be where I’d start with vegetation.

If it’s possible to share even the simulated data, that’d be great for testing and verification purposes, with bonus points if the simulator itself can be shared.

Awesome work!

Hey @smathermather-cm!

I just uploaded the project to github:

It would still need a lot of love, but that’s what I’ve got so far :slight_smile:

Hope it helps!

PS: If you happen to make any changes, please don’t hesitate to create a PR!


Hey everyone!

So, a year after this post, I wanted to share what I learnt about this topic, since it might help someone (cc @Nicholas_K).

I’m presenting my thesis next month, but I thought I would sum up the findings.

First, I want to start by explaining a little bit about the process. I used the simulator I built (described above) to test ODM and see how it works. We found some parameters (some mission-related and others not) that improved the results. We then tested the non-mission parameters on images from missions we had already executed, since the pandemic made it impossible for us to fly new ones.

Ok, so the first experiments just confirmed some usual recommendations you hear on forums. For example, the best 3D reconstruction came from flying a double grid, using GCPs (which need to be well distributed across the whole area), high overlap (~85%), and a different camera angle on each pass (the best result for me was 90 and 60 degrees). These parameters reduced the average error of the 3D reconstruction by 65% compared to a single grid, no GCPs, nadir flight.
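For mission planning, the ~85% overlap translates into a trigger spacing you can compute from the camera footprint. A small sketch (the camera numbers are illustrative, not from my actual drone):

```python
def footprint_m(sensor_dim_mm, focal_length_mm, altitude_m):
    """Ground footprint of one image dimension for a nadir camera."""
    return sensor_dim_mm * altitude_m / focal_length_mm

def trigger_spacing_m(footprint, overlap):
    """Distance between consecutive shots for a given forward overlap."""
    return footprint * (1.0 - overlap)

# Hypothetical camera: 4.55 mm sensor height, 4.5 mm focal length, 100 m AGL.
fp = footprint_m(4.55, 4.5, 100)    # ~101 m along-track footprint
print(trigger_spacing_m(fp, 0.85))  # ~15 m between photos
```

The same formula with the sensor width and the side overlap gives the flight-line spacing for the grid.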

Now, even with all that, I still had problems detecting tree height. The main problem was that the ground level could not be calculated correctly because of the dense canopy. That’s when we built a ground rectification algorithm that re-classifies the point cloud and even adds points where needed. That algorithm was added to ODM here:

With it, the average error of the DTM was reduced by 45%. On one of the simulations, the expected canopy cover percentage was 46.80%; with the algorithm, the estimate improved from 36.56% to 40.86%.
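To give a feel for what re-classifying ground points means, here is a toy grid-minimum filter. This is only an illustration of the general concept, not the algorithm that was merged into ODM:

```python
import numpy as np

def classify_ground(points, cell_size=5.0, tolerance=0.5):
    """Toy ground classifier: within each grid cell, points within
    `tolerance` meters of the cell's minimum elevation are labeled
    ground (True); everything above is treated as canopy (False)."""
    xy = np.floor(points[:, :2] / cell_size).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    # Group points by cell and compare each to the per-cell minimum z.
    for cell in {tuple(c) for c in xy}:
        mask = (xy[:, 0] == cell[0]) & (xy[:, 1] == cell[1])
        z_min = points[mask, 2].min()
        ground[mask] = points[mask, 2] <= z_min + tolerance
    return ground

# Synthetic cloud: two ground points and one canopy point in one cell.
pts = np.array([[1.0, 1.0, 100.0],
                [2.0, 2.0, 100.3],
                [3.0, 3.0, 118.0]])
print(classify_ground(pts))  # [ True  True False]
```

Under dense canopy the hard part is that some cells contain no true ground returns at all, which is why the real algorithm also has to add points where needed.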

I think that is a pretty short summary of everything, but let me know if you have any more questions.

I hope it helps someone!