Mission planning and some basic concept questions for 3D models

Hi all, I’m trying to get my mind straight.

Previously I’ve tried lots of datasets with orthomosaics from ODM without any issue, but I’ve never really looked at the point cloud and 3D mesh outputs. So I might be wrong about what I’m going to say below.

In my mind, the orthomosaic result is better if the input drone images are all at nadir angle. But for a 3D model, it’s better to have more varied input, like the image below: flying at a few different altitudes with different camera angles.

I’m also speaking from past experience: non-nadir images have always given me nasty results (e.g. very overly stretched trees, weird artifacts around seam lines).

If that’s the case, my question is: can a single dataset be good for both purposes (generating an orthophoto and producing a 3D model)?

Or will I always have to split the dataset into nadir and non-nadir images, use the nadir images to process the orthophoto, and use all images to process the 3D model (unless there’s a secret flag in ODM that can do that for me that I’m not aware of)?

Or, worst case, do I have some fundamental misunderstanding about 3D model mission planning?

Thanks in advance for any help.
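As an aside, the nadir/oblique split described above can be scripted. Here’s a minimal sketch in Python: the pitch values are assumed to follow the DJI convention (−90° means pointing straight down), the 10-degree tolerance is an arbitrary choice, and reading the actual pitch out of image metadata (it’s usually in XMP tags that vary by manufacturer) is left out.

```python
# Sketch: split a mixed dataset into nadir and oblique subsets by camera
# pitch angle. Assumptions: DJI-style convention where -90 degrees means
# straight down, and a 10-degree tolerance for "close enough to nadir".
# Reading the real pitch from image metadata is omitted here.

def classify_by_pitch(pitch_deg, nadir_tolerance=10.0):
    """Return 'nadir' if the camera points straight down (pitch ~ -90
    degrees) within the tolerance, else 'oblique'."""
    off_nadir = abs(-90.0 - pitch_deg)
    return "nadir" if off_nadir <= nadir_tolerance else "oblique"

def split_dataset(images):
    """images: list of (filename, pitch_deg) tuples.
    Returns (nadir_list, oblique_list): nadir images would go into the
    orthophoto run, and the full set into the 3D-model run."""
    nadir, oblique = [], []
    for name, pitch in images:
        if classify_by_pitch(pitch) == "nadir":
            nadir.append(name)
        else:
            oblique.append(name)
    return nadir, oblique

# Example: two near-nadir shots and one 45-degree oblique shot
shots = [("img-0001.jpg", -90.0), ("img-0002.jpg", -85.0),
         ("img-0003.jpg", -45.0)]
nadir, oblique = split_dataset(shots)
print(nadir)    # ['img-0001.jpg', 'img-0002.jpg']
print(oblique)  # ['img-0003.jpg']
```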


Slightly off-nadir will actually give you the best results, as it prevents the lens calibration from getting wonky.

So, you should have a fine orthophoto result even from semi-oblique imagery, provided you keep overlap/sidelap sufficiently high.
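To put numbers on “sufficiently high overlap”: with the usual pinhole-camera relation, the image footprint and the exposure spacing needed for a target overlap can be computed as below. This is a sketch with assumed camera numbers (a 6.3 × 4.7 mm sensor and a 4.5 mm lens, roughly a small-drone camera), not any specific model.

```python
# Sketch: photo spacing and flight-line spacing for a target overlap at
# nadir, using the pinhole model. Sensor and focal-length numbers in the
# example are illustrative assumptions, not a specific drone camera.

def footprint_m(sensor_dim_mm, focal_length_mm, altitude_m):
    """Ground footprint of one image dimension for a nadir shot."""
    return sensor_dim_mm / focal_length_mm * altitude_m

def spacing_m(sensor_dim_mm, focal_length_mm, altitude_m, overlap):
    """Distance between exposures (or between flight lines) that yields
    the given fractional overlap (e.g. 0.80 for 80%)."""
    return footprint_m(sensor_dim_mm, focal_length_mm, altitude_m) * (1.0 - overlap)

# Assumed 6.3 x 4.7 mm sensor, 4.5 mm lens, 80% forward / 70% side
# overlap at 50 m altitude:
print(round(spacing_m(4.7, 4.5, 50, 0.80), 1))  # along-track spacing, m -> 10.4
print(round(spacing_m(6.3, 4.5, 50, 0.70), 1))  # flight-line spacing, m -> 21.0
```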


I agree about slightly oblique images. I’ve tried images 20 degrees off nadir and they all seem fine.

But I’m curious about images that are 45 degrees oblique, or even 80 degrees (almost horizontal): they are surely bad for the orthophoto, but would they be good for 3D modeling? Does ODM make that distinction?


If you fly multiple missions (slightly off from nadir and oblique) and combine the datasets, ODM should pick the appropriate images/pixels to texture things from and you should end up with both a good orthophoto and good pointcloud/models.

This has improved over time, so if you last tried it in 1.x or early 2.x ODM, try again now :)


I suspect your Set 4 would be superfluous, as it will only cover features already seen in Sets 1 and 2, only with a larger GSD.


Yeah, I find that one a little weird as well, but somehow I’ve heard a lot of people recommend it. I’ve even had people recommend capturing images along concentric circles (i.e. pointing at the object while moving from a 10 m to a 20 m to a 30 m radius), which makes no sense to me at all.


It shouldn’t make sense to anybody! Do the people recommending it have any basis for the recommendation, or any evidence that it helps?

Moving outwards to capture the same area at a larger GSD can’t help. You should really concentrate on recording from around the distance that gives the desired resolution/GSD, making sure every point is recorded in three, and preferably more, images from different angles.

Of course there are circumstances where recording from around the same distance isn’t possible, due to obstructions like trees etc., but flying around them manually, or even using an SLR to fill in where necessary, can help.
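The distance-to-GSD relation mentioned above is the standard pinhole one, and it’s easy to sketch both directions of it. The camera numbers in the example (6.3 mm sensor width, 4.5 mm focal length, 4000 px wide) are assumptions roughly matching a small-drone camera, not a specific model.

```python
# Sketch: ground sample distance (GSD) from camera parameters and flight
# height, and the inverse (the altitude needed to hit a target GSD).
# The example camera numbers are illustrative assumptions.

def gsd_cm_per_px(sensor_width_mm, focal_length_mm, image_width_px, altitude_m):
    """GSD in cm/pixel for a nadir shot at the given altitude."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

def altitude_for_gsd(sensor_width_mm, focal_length_mm, image_width_px, target_gsd_cm):
    """Invert the relation: flight height (m) needed for a target GSD."""
    return (target_gsd_cm * focal_length_mm * image_width_px) / (sensor_width_mm * 100.0)

# Assumed camera: 6.3 mm sensor width, 4.5 mm lens, 4000 px image width
print(round(gsd_cm_per_px(6.3, 4.5, 4000, 50), 2))     # cm/px at 50 m -> 1.75
print(round(altitude_for_gsd(6.3, 4.5, 4000, 2.0), 1)) # m for 2 cm/px -> 57.1
```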


I’ve also seen this example online before. In this case, I would expect a dataset full of oblique images (almost all horizontal, since they are all frame grabs). In the ODM CLI, they didn’t pass any flag or option. Would it just fail at the orthomosaic step but still output a textured mesh and point cloud?

Also, this might be a stupid question, since I’m not really familiar with 3D models. Of all the outputs ODM generates (texture, mesh, point cloud), which file is most commonly used as the 3D model?

My understanding is point cloud → mesh → textured mesh → 3D model, and I’m trying to correlate that with the output structure below:

|-- images/
    |-- img-1234.jpg
    |-- ...
|-- opensfm/
    |-- see mapillary/opensfm repository for more info
|-- odm_filterpoints/
    |-- point_cloud.ply                    # the very first point cloud? might be classified but not georeferenced?
|-- odm_meshing/
    |-- odm_mesh.ply                    # A 3D mesh
|-- odm_texturing/
    |-- odm_textured_model.obj          # Textured mesh
    |-- odm_textured_model_geo.obj      # Georeferenced textured mesh
|-- odm_georeferencing/
    |-- odm_georeferenced_model.laz     # LAZ format point cloud, also georeferenced as the name indicating
|-- odm_orthophoto/
    |-- odm_orthophoto.tif              # Orthophoto GeoTiff

So normally, people should grab odm_textured_model_geo.obj (the georeferenced textured mesh) to build the 3D model in other software?
If that’s the case, what are the other common uses of the point cloud, the 2.5D mesh, etc.? Just to make a better textured mesh?


Here’s a good result: an ortho and a 3D model from overhead and side-on imagery combined.

