Aerial Arborist

I’m an aerial arborist (I climb trees to prune and remove them) in the Midwest portion of the United States. I got my start in drones by modifying a racing drone and then building a cinewhoop to set throwlines (basically a string with a weight on the end for pulling a climb line back around a solid branch).

I got into ODM by realizing that I might be able to use a drone to create a map of worksites and even individual trees. I currently use an ANAFI for photo capture, WebODM (in conjunction with the Lightning node) for processing, and Blender for post-processing of 3d models.

I’ve only been using the software for a couple weeks, but I’ve been able to have a degree of success getting basic canopy heights and dimensions, as well as canopy position relative to landscape and structures; I’ve also been able to get a decent model of a major defect in a dead tree.
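
For anyone curious about the canopy-height part: numbers like these can be pulled from the DSM and DTM that ODM generates (with --dsm and --dtm enabled) by simply differencing the two rasters. A minimal sketch, assuming rasterio is available and the two rasters share the same grid; file names are placeholders:

```python
# Minimal sketch: canopy height model (CHM) = DSM minus DTM from an ODM run.
# Assumes the task was processed with --dsm and --dtm and that both rasters
# share the same grid; file names below are placeholders.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1, masked=True)
    dtm = dtm_src.read(1, masked=True)
    profile = dsm_src.profile

chm = dsm - dtm                     # per-pixel height above ground
chm = np.ma.masked_less(chm, 0)     # drop spurious negative heights

print("Max canopy height (m):", float(chm.max()))

# Save the CHM so it can be inspected in QGIS or similar
profile.update(dtype="float32", nodata=-9999.0)
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm.filled(-9999.0).astype("float32"), 1)
```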

I’m hoping, eventually, to be able to accurately model leafed-out trees. I’ve gotten to the point where I can see where the major leads/branches are in generated point clouds but am still working on getting this information to accurately render in the 3d model.


The problem is that trees move around, and things that move don’t work well with SfM.

If you REALLY need to get a model of a tree, the only way is LiDAR.


Thank you for your response. I’ll have to look into LiDAR a bit more. I decided to start with photogrammetry and SfM because of cost, and because a couple of recent scientific articles reported good results, especially on calm, overcast days.

I know, from my small amount of experience, that getting tree structure from overhead is definitely not possible for most live trees during spring and summer, but I have had a small degree of success by flying overlapping circles and patterns at lower altitudes with the camera angled upward (which is why I bought a somewhat outdated ANAFI). Using this method, I can pick out major structural features out of the generated point cloud with a fair degree of accuracy.

As I mentioned above, the main issue I’m dealing with right now is getting a better model out of the point cloud. There are two problems that I’m working on related to this: the first is how to get the sky out of point cloud models; the second is how to have the algorithm generating the mesh recognize partial cylindrical and conical shapes (of branches and leads) and either complete them or close them off.
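
For the first problem, a simple color filter on the exported cloud might be enough as a stopgap, since stray sky points tend to be bright and blue. A rough sketch of the idea with Open3D; the thresholds and file names are just placeholders to be tuned:

```python
# Rough sketch: strip likely sky points from a point cloud by color.
# Assumes the WebODM point cloud has been exported/converted to PLY;
# thresholds are guesses to tune per data set.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("point_cloud.ply")
colors = np.asarray(pcd.colors)            # RGB values in 0..1

# Sky candidates: bright overall and noticeably more blue than red/green.
brightness = colors.mean(axis=1)
blueness = colors[:, 2] - colors[:, :2].max(axis=1)
is_sky = (brightness > 0.6) & (blueness > 0.05)

cleaned = pcd.select_by_index(np.where(~is_sky)[0])
o3d.io.write_point_cloud("point_cloud_nosky.ply", cleaned)
print(f"Removed {int(is_sky.sum())} of {len(colors)} points as sky")
```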

I’m not sure I’ll succeed, but I’m enjoying the attempt right now. In the meantime, if you have any recommendations regarding drone-based lidar systems, I’d be interested.


Thanks for using ODM. If you want a detailed tree, I think LiDAR would be more suitable than photogrammetry.

BTW, please have a look at the DJI ZenMuse payloads. I have never used one, but as I remember, they have LiDAR capability.

It is quite feasible to get a good tree with photogrammetry. It does help if it’s not windy, but a small breeze is fine.

The comparative instantaneousness of LiDAR is nice. But with a little persistence, trees make good enough photogrammetric models.



This is like a tree, only smaller: created from 55 phone photos, with the Z axis set as the up vector.


Those are really neat SfM examples. I’ve been slowly getting better at my flying and photography attempts and can show the following two examples of how far I’ve been able to get on the same tree.
This is where I started:


And this is where I am at now:

I’m still having difficulty transitioning from point cloud to 3d model, though.

Here is an example of a point cloud from a branch lead from the same tree I’ve been working on:

It appears alright in the 3d model until it’s rotated to a certain point, at which point the whole object turns into modern art:


The more I’ve read and messed around, the more I’ve been feeling that I need to do one or more of the following:

  1. Mask around major leads and branches at the photo stage
  2. Use machine learning at the point cloud stage (it would be tedious, but pretty much how most articles I’ve read have been handling similar issues, even for LIDAR data)
  3. Use a different algorithm for modeling from the point cloud (or software that lets me tweak something similar to what ODM is using); see the sketch after this list

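As a sketch of what option 3 could look like: screened Poisson reconstruction, run via Open3D on the cleaned cloud outside of ODM, with its low-density vertices trimmed away to cut down the stretched “bubble” geometry. The depth and cutoff values below are guesses, not tuned values:

```python
# Sketch of option 3: mesh the (cleaned) point cloud outside ODM with
# screened Poisson reconstruction via Open3D. The depth and the density
# cutoff are starting points, not tuned values.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("point_cloud_nosky.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)
pcd.orient_normals_consistent_tangent_plane(30)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10
)

# Trim low-density vertices: these tend to be the hallucinated "bubbles"
# Poisson stretches across gaps (sky, thin twigs, canopy openings).
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("tree_poisson.ply", mesh)
```
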
I’ll have to start adding programs at different stages in the process (GIMP, CloudCompare, and MeshLab) to see what I can come up with and what workflow makes sense, but I’m happy to see how far I’ve gotten already. I’ll still be working on better flight patterns and photo overlap, since solving these kinds of puzzles is often best done at the collection stage and not in the processing stage.


@sprucetorch,
I have some terrestrial LiDAR scans that include some trees. The intent was not detailed scans of the trees but of the property before it was fully reclaimed.
Willing to share if it helps you, even if it is just to see what poorly scanned trees look like. The scans are CC compatible.

Cheers,
Jeff


I’ve had many similar tasks where large “bubbles” of incorrect texture (often sky, but not always) become attached to the object of interest.

The point cloud under the canopy is clear of any extraneous points.

That would be awesome, Jeff! As I said above, LiDAR is out of my price range right now, but I’d be really interested in knowing what the possibilities and toolsets are for the future.

Take a look at the sky-removal flag. It’s not perfect: I think it’s probably too aggressive at removing sky pixels, but it uses an AI technique to pre-process the photos and mask sky pixels. Feedback on outputs from it is most welcome.
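
Just to illustrate what a per-image sky mask is (the flag itself uses a learned model, not a simple threshold like this), a crude stand-in might look like the sketch below. File names are placeholders, and you’d want to check the docs for the exact mask naming ODM expects:

```python
# Illustration only: a crude HSV-threshold sky mask for a single photo.
# The sky-removal flag uses a learned model internally, NOT this heuristic;
# this just shows what a per-image sky mask is (white = keep, black = sky).
# File names are placeholders.
import cv2

img = cv2.imread("IMG_0001.JPG")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Flag bright pixels with a blue-ish hue as sky candidates.
sky = cv2.inRange(hsv, (90, 0, 140), (140, 255, 255))
keep_mask = cv2.bitwise_not(sky)

cv2.imwrite("IMG_0001_mask.png", keep_mask)
```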

As far as meshing thin structures goes: we don’t have a great approach for this yet. So the point cloud may look amazing, while the mesh may be meh. The more examples you provide, the more incentive the devs may have to look into these challenges.

Also: wow! Cool tree work! You are way ahead of me. Any screen shots of your camera positions with those models you can share?

I’ve tried the sky-removal flag, but each time I do, I have to lower the min-num-features for it not to generate an error. The couple of times I’ve done that, I’ve had to drop it so low, the data hasn’t lined up correctly. This could also be because my overlap is still not quite where it needs to be. On the other hand, I’m wondering if the sky removal function might work better if it was done after the point cloud was generated and before the mesh. Of course, that could wind up creating a whole host of new problems.
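
Something along these lines is what I mean by doing it afterwards: a statistical outlier filter in Open3D (or the equivalent tool in CloudCompare) strips a lot of the sparse floating points without touching min-num-features. Parameters and file names are only guesses:

```python
# Sketch: clean the cloud after generation instead of masking photos,
# so min-num-features can stay put. A statistical outlier filter removes
# sparse floating points (often leftover sky); values are starting points.
# Assumes the georeferenced LAZ from WebODM was converted to PLY first
# (CloudCompare can do this); file names are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("odm_georeferenced_model.ply")
cleaned, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"Kept {len(kept_idx)} of {len(pcd.points)} points")
o3d.io.write_point_cloud("tree_cleaned.ply", cleaned)
```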

All my meshes of fully leafed trees have turned out like the above example. It’s almost like the meshing algorithm is trying to treat the tree like a building. I’ve gotten a little closer to making a good model with CloudCompare and Meshroom; however, I don’t think I’ll be able to get anything really nice until I can segment the points that represent the leaves and twigs from the leads and branches, mesh them separately, and put them back together.
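
A very rough prototype of that segment-and-recombine idea, sketched with Open3D; the color-based leaf/wood split and the Poisson depths are only assumptions to be refined:

```python
# Rough prototype of "segment, mesh separately, recombine":
# split foliage from wood by color, mesh each part, then merge the meshes.
# The color test and Poisson depths are guesses to tune per tree.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("tree_cleaned.ply")
colors = np.asarray(pcd.colors)

# Crude foliage test: green channel dominates red and blue.
is_leaf = (colors[:, 1] > colors[:, 0]) & (colors[:, 1] > colors[:, 2])
leaves = pcd.select_by_index(np.where(is_leaf)[0])
wood = pcd.select_by_index(np.where(~is_leaf)[0])

def poisson_mesh(cloud, depth):
    cloud.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        cloud, depth=depth
    )
    return mesh

# Coarser surface for the foliage mass, finer for the structural wood.
combined = poisson_mesh(leaves, depth=8) + poisson_mesh(wood, depth=10)
o3d.io.write_triangle_mesh("tree_segmented.ply", combined)
```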

Thanks for the kind words on my model! Here are the camera positions for this point cloud:


I definitely went a bit overboard with the lateral overlap and have even found that deleting several of the extraneous pictures has helped the detail a little bit. It also would have helped to have one more vertical layer.
