During the isolation, I’m playing with WebODM trying to create some 3D models of small rock samples that I have at home (my aerial datasets are at the office, and my laptop wouldn’t handle them anyway).
I’m sorry, I’m a little confused. Are you saying that in Metashape you could process the top and bottom images and then merge the two mesh models/point clouds? And what do you mean by “normal view”? For context, what view(s) does Metashape give you that ODM doesn’t? I think if I understand that I can be of more help!
Also, I’m always a fan of ODM being used for stuff like this… BUT, just for fun, consider taking AliceVision’s Meshroom for a spin. It’s another awesome free, open-source package that also uses SfM for reconstruction. It has some other magic bits in the background that I can’t explain, but sometimes it produces amazing models.
It also has an interesting flowchart-style GUI for changing parameters (also has command line if that’s your jam).
In Metashape you can divide your projects into “chunks”. You can use this to process a large dataset in parts, for instance. Or you can use it to make a 3D solid in two parts. Imagine you have a statue and you take photos for the 3D reconstruction with the statue in the “upright” position and again in the “upside-down” position. Then you make one chunk for the top half of the statue and another for the bottom half (so you have photos of the statue’s feet). After you have your two meshes, you can merge them into one single model. This YouTube video shows the process: Creating Artifact Models in Agisoft Photoscan Part 1 - YouTube
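Just to illustrate the idea (this is NOT Metashape’s actual pipeline, which aligns chunks using markers or cloud-to-cloud registration): conceptually, the “upside-down” capture has to be re-oriented back into the same frame as the “upright” one before the two halves can be combined. A minimal NumPy sketch of that flip-then-merge step, assuming the second cloud only needs a 180° rotation about the X axis to line up:

```python
import numpy as np

def flip_upside_down(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud 180 degrees about the X axis,
    undoing the upside-down capture orientation (illustrative only)."""
    rot = np.array([[1.0,  0.0,  0.0],
                    [0.0, -1.0,  0.0],
                    [0.0,  0.0, -1.0]])
    return points @ rot.T

def merge_clouds(top: np.ndarray, bottom_captured_upside_down: np.ndarray) -> np.ndarray:
    """Re-orient the bottom-half cloud, then concatenate it with the top half."""
    bottom = flip_upside_down(bottom_captured_upside_down)
    return np.vstack([top, bottom])
```

In real software the two chunks rarely differ by a clean 180° flip, so the alignment is solved with shared markers or iterative closest point rather than hard-coded, and the merged result is a fused mesh rather than a simple concatenation.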
For my sample, this is what I would call “normal” or “upside-up”: