Previously I’ve tried lots of datasets with ODM and the orthomosaics came out fine, but I never really looked at the point cloud and 3D mesh outputs. So I might be wrong about what I’m going to say below.
In my mind, the orthomosaic result is better if the input drone images are all at nadir angle. But for a 3D model, it’s better to have some additional input like the image below: flying at a few different altitudes with different camera angles.
I’m also speaking from past experience, where non-nadir images always gave me a nasty result (e.g. very overly stretched trees, weird artifacts around seam lines).
If that’s the case, then my question would be: is it possible to have a single dataset that is good for both purposes (generating an orthophoto and producing a 3D model)?
Or is it more like, no matter what, I will have to separate the dataset into nadir and non-nadir images, use the nadir images to process the orthophoto, and use all images to process the 3D model (unless there’s a secret flag in ODM that can do that for me that I’m not aware of)?
Or, worst case, do I have some fundamental misunderstanding about 3D model mission planning?
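To make the split I’m describing concrete, here’s a toy Python sketch (the filenames, pitch values, and tolerance threshold are all made up for illustration; in a real workflow the gimbal pitch would come from each image’s EXIF/XMP metadata, not a hardcoded list):

```python
# Partition images into near-nadir and oblique sets by gimbal pitch.
# Convention assumed here: pitch of -90 deg means the camera points
# straight down (nadir). All values below are illustrative.
NADIR_TOLERANCE_DEG = 10  # assumed threshold, tune for your workflow

def split_by_pitch(images):
    """images: list of (filename, gimbal_pitch_deg) tuples."""
    nadir, oblique = [], []
    for name, pitch in images:
        # within tolerance of -90 deg counts as nadir
        if abs(pitch + 90) <= NADIR_TOLERANCE_DEG:
            nadir.append(name)
        else:
            oblique.append(name)
    return nadir, oblique

shots = [("img-0001.jpg", -90.0), ("img-0002.jpg", -85.0),
         ("img-0003.jpg", -45.0), ("img-0004.jpg", -10.0)]
nadir, oblique = split_by_pitch(shots)
# The nadir subset could feed an orthophoto-only run,
# while the full set feeds a 3D reconstruction run.
```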
I agree about the slightly oblique ones. I’ve tried images that are 20 degrees off nadir and they all seem fine.
But I’m curious about images like 45-degree oblique, or even 80-degree (almost horizontal) ones: they are surely bad for an orthophoto, but would they be good for 3D modeling? Does ODM make that distinction?
If you fly multiple missions (slightly off nadir and oblique) and combine the datasets, ODM should pick the appropriate images/pixels to texture things from, and you should end up with both a good orthophoto and a good point cloud/model.
This has improved over time, so if you last tried it in 1.x or early 2.x ODM, try again now.
Yeah, I find that one a little weird as well, but somehow I’ve heard a lot of people recommending it. I’ve even had people recommend capturing images along concentric circles (i.e. pointing at the object, but moving from a 10 m to a 20 m and then a 30 m radius), which makes no sense to me at all.
It shouldn’t make sense to anybody! Do the people recommending it have any basis for the recommendation, or any evidence that it helps?
Moving outwards to capture the same area at a larger GSD can’t help; you should really concentrate on recording properly from around the distance that gives the desired resolution/GSD, making sure every point is recorded in 3, and preferably more, images from different angles.
Of course there are circumstances when recording from around the same distance is not possible, due to obstructions like trees etc., but manually flying around them, or even using an SLR to fill in where necessary, can help.
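As a concrete illustration of the distance/GSD relationship, here is the standard GSD formula in Python (the camera numbers below are made up to resemble a typical small-drone camera with a 1-inch-class sensor, not taken from any specific model):

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    # Ground sample distance: ground width covered by one pixel,
    # in cm/px. Doubling the distance doubles the GSD (halves detail).
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative numbers only (assumed, not a real camera spec sheet):
gsd = gsd_cm_per_px(sensor_width_mm=13.2, focal_length_mm=8.8,
                    altitude_m=60, image_width_px=5472)
print(round(gsd, 2))  # roughly 1.64 cm/px
```

The point of the formula is that flying further out only coarsens the GSD; it adds no detail that closer passes don’t already capture better.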
I’ve also seen this example online before. In this case, I would think it will be a dataset full of oblique images (almost all horizontal, since they are all frame grabs). In the ODM CLI, they didn’t put any flag or option. In that case, would it just fail at the orthomosaic step, but still give an output of the texture and point cloud?
Also, this might be a stupid question, since I’m not really familiar with 3D models. Of all the outputs ODM is generating (texture, mesh, point cloud), which is the most common file to use to generate a 3D model?
My understanding is that point cloud → mesh → texture → 3D model, and I’m trying to correlate that with the output structure below:
|-- images/
|   |-- img-1234.jpg
|   |-- ...
|-- opensfm/
|   |-- see mapillary/opensfm repository for more info
|-- odm_filterpoints/
|   |-- point_cloud.ply  # the very first point cloud? might be classified but not georeferenced?
|-- odm_meshing/
|   |-- odm_mesh.ply  # A 3D mesh
|-- odm_texturing/
|   |-- odm_textured_model.obj  # Textured mesh
|   |-- odm_textured_model_geo.obj  # Georeferenced textured mesh
|-- odm_georeferencing/
|   |-- odm_georeferenced_model.laz  # LAZ-format point cloud, georeferenced as the name indicates
|-- odm_orthophoto/
|   |-- odm_orthophoto.tif  # Orthophoto GeoTIFF
So normally, people should grab odm_textured_model_geo.obj (the georeferenced textured mesh) to build a 3D model in other software?
If that’s the case, what are the other common uses of the point cloud, the 2.5D mesh, etc.? Just to make a better textured mesh?
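For anyone else trying to correlate those outputs, here is a toy Python sketch of what is inside an OBJ file like odm_textured_model_geo.obj (the sample data is inline and made up, not a real ODM output): the `v` lines are mesh vertices, the `f` lines are faces, and the `mtllib` line is what links the mesh to its texture materials.

```python
# Minimal Wavefront OBJ inspection sketch: count vertices and faces,
# and report the material library that ties the mesh to its textures.
SAMPLE_OBJ = """\
mtllib odm_textured_model_geo.mtl
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def summarize_obj(text):
    verts = faces = 0
    mtllib = None
    for line in text.splitlines():
        if line.startswith("v "):       # vertex position
            verts += 1
        elif line.startswith("f "):     # face (indices into the vertex list)
            faces += 1
        elif line.startswith("mtllib "):  # material library reference
            mtllib = line.split(None, 1)[1]
    return verts, faces, mtllib

print(summarize_obj(SAMPLE_OBJ))  # -> (3, 1, 'odm_textured_model_geo.mtl')
```

So when another tool loads the .obj, it follows the .mtl reference to find the texture images, which is why the textured-model files travel together as a set.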