3D model with 1070 pics

Hello,
I just discovered WebODM after realizing I could use my drone to make some photogrammetry.
My primary goal would be to make 3D models to print with my 3D printer.
My operating system is Linux (Ubuntu 20.04) and my browser is Firefox. My computer is a 6-core i5; it had 16 GB of RAM, now upgraded to 48 GB.

I shot a largish set of pictures of the area around my workplace: 1070 pictures, taken from two different heights (35 m and 70 m), from different directions, and with a camera inclination of about 15° from vertical.

I rendered the model with the standard settings, and it worked. It took a very short time (about 1 hour) and the dense point cloud was beautiful, but the actual 3D mesh was a bit disappointing: the buildings were not well defined with sharp edges, but had a round-ish appearance. Not so good for a 3D print.
So I tried again with higher settings, setting the following options (a rough command-line equivalent is sketched after the list):
depthmap-resolution: 1024
feature-quality: ultra
mesh-size: 400000
min-num-features: 40000
pc-quality: high
No resizing of original photos
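For reference, here is roughly the ODM command line those options correspond to when running the Docker image directly (a sketch; paths and the project name are placeholders, and I actually set everything through the WebODM UI):

```bash
# Rough CLI equivalent of the WebODM options above (paths are placeholders).
docker run -ti --rm \
  -v /path/to/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets my-project \
  --feature-quality ultra \
  --min-num-features 40000 \
  --mesh-size 400000 \
  --pc-quality high \
  --depthmap-resolution 1024 \
  --resize-to -1   # -1 disables resizing of the original photos
```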

With these settings, I got an “out of memory” error. So I upgraded the RAM from 16 GB to 48 GB, but I still got an “out of memory” error after about 6 hours.
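One way to confirm the failure really was the kernel's OOM killer (and to watch headroom during the next attempt) is a couple of standard Linux commands, sketched here:

```bash
# After a failed task: did the kernel OOM-killer terminate a process?
dmesg -T | grep -i "out of memory"

# During the next run: watch memory and swap headroom every 10 s.
watch -n 10 free -h
```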

Now I’m trying to reduce the settings; I have set:
depthmap-resolution: 1000
feature-quality: ultra
mesh-size: 400000
min-num-features: 40000
pc-quality: high

Basically I let it resize the original photos, and decreased the depthmap resolution a bit. Will this significantly decrease memory usage, or should I change something else?
What would be good settings for my purpose - the creation of a sharp and detailed 3D model of terrain and buildings (trees aren’t as important) for 3D printing?

Also, I got the message “nvidia-smi not found in PATH, using CPU”, but if I run nvidia-smi from a terminal it works, so it is installed and on my PATH…
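(My guess, and it is only an assumption: WebODM runs its processing node inside Docker, so the PATH being checked is the container’s, not my host’s. A sketch of what I could test; the CUDA image tag is just an example:)

```bash
# The host's nvidia-smi working doesn't mean the container can see the GPU;
# test GPU passthrough into Docker (requires the NVIDIA container toolkit).
docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

# WebODM's helper script has a --gpu flag to start GPU-enabled processing nodes.
./webodm.sh restart --gpu
```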

Thanks
Cristian

Welcome!

Can you provide more information about your images? What MP are they?

48 GB of RAM is quite slim for that image count. Most folks with Tasks of that size would be working with 64 GB+ (128 GB+ ideally for ultra).

Do you have a swapfile that is 2-4x the size of your physical RAM available to help?
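You can check what is currently active with a couple of standard commands, sketched here:

```bash
# Show active swap areas and overall memory/swap headroom.
swapon --show
free -h
```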

As for your parameters, depthmap-resolution and pc-quality control the same thing. Ideally you should use pc-quality, unless you specifically need to set the resolution in pixels rather than as relative sizing from the input image dimensions.
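As a rough illustration of the relative sizing (the scale factors below are assumptions for the example, not ODM's exact internals):

```bash
# Illustrative only: assumed per-quality scale factors, not ODM's exact numbers.
IMAGE_WIDTH=4000   # width of a 12 MP 4:3 image
for q in ultra:1.0 high:0.5 medium:0.25 low:0.125; do
  name=${q%%:*}; scale=${q##*:}
  # Effective depthmap width = image width * scale factor.
  echo "$name -> ~$(awk -v w="$IMAGE_WIDTH" -v s="$scale" 'BEGIN{print int(w*s)}') px"
done
```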

Resizing will help with memory consumption, but it may work counter to your goal of having the sharpest building reconstruction.

Hi,
Thank you for your reply.
The photos were taken with a DJI Mini 2 drone, at 12 MP, 4:3 aspect ratio. A few are the tiniest bit blurry due to drone motion and low light. I have not selected the camera model anywhere; I assumed the EXIF data would be adequate. Is that correct?
Since I use Linux, I don’t have a swap file but a swap partition, which is not as easy to enlarge as a swap file. It was set to 16 GB when I had 16 GB of RAM, and it has remained 16 GB. I can try to resize the partition, but that could prove complicated, since there are several other partitions on the disk.
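One thing I might try instead, since (if I understand correctly) Linux can activate a swap file in addition to a swap partition; a sketch, with the size only as an example:

```bash
# Add a swap file alongside the existing swap partition
# (Linux supports multiple active swap areas; 32G is just an example size).
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# To keep it across reboots, append this line to /etc/fstab:
#   /swapfile none swap sw 0 0
```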
So, what could my options be… I’d like to avoid buying even more RAM, since my budget is limited. Should I remove some of the pictures, possibly the low-altitude ones? This would decrease the overlap, which is already less than stellar (it’s a large area).
Or should I split the job in separate areas, to be then merged in Blender? (I would have no idea how, but I suppose it’s feasible)…
I don’t need super detail. Just nice square buildings. What is the most memory-hungry option?
Would it help if I posted the report from the first run?

Cristian

Yep! I was asking for our diagnostic purposes, as memory usage increases quite sharply with higher-MP images.

Understood. Is your setup on LVM2, so you can resize volumes, or not? I have a dedicated 120 GB SSD as my swap volume at this point.
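A quick way to check whether LVM is in play, sketched:

```bash
# If these report physical/logical volumes, the disk is on LVM.
sudo pvs && sudo vgs && sudo lvs

# 'lvm' in the TYPE column also indicates LVM volumes.
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
```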

If you’re already low on overlap/sidelap, removing images may not help.

Have you added the --auto-boundary option, or explored possibly adding the --boundary GeoJSON to limit things a bit?
https://docs.opendronemap.org/arguments/auto-boundary/
https://docs.opendronemap.org/arguments/boundary/
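If you go the --boundary route, it accepts either a path to a GeoJSON file or an inline JSON string. A minimal sketch of the file (the coordinates here are placeholders, not your site):

```bash
# Write a minimal boundary polygon to a file (coordinates are placeholders).
cat > boundary.geojson <<'EOF'
{"type":"FeatureCollection","features":[{"type":"Feature","properties":{},
 "geometry":{"type":"Polygon","coordinates":[[[12.40,42.29],[12.41,42.29],
 [12.41,42.30],[12.40,42.30],[12.40,42.29]]]}}]}
EOF
# Then pass it to ODM with: --boundary boundary.geojson
```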

We do have the split/merge pipeline, but note that it will not generate a textured mesh.
https://docs.opendronemap.org/large/#local-split-merge
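For reference, the split/merge options look roughly like this (the values are illustrative; tune them to your dataset):

```bash
# Illustrative split/merge invocation: ~250 images per submodel,
# 150 m of overlap between submodels (see the docs linked above).
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
  --project-path /datasets my-project \
  --split 250 \
  --split-overlap 150
```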

Probably pc-quality, depending.

Report and Task log are always welcome 🙂

I don’t think I’m using LVM. I remember manually partitioning the disk with gparted when I installed this system, a few years ago. So unless the system changed things by itself, I still have hard partitions. I can still resize and move them around, it’s just going to be slower and more complicated…

I honestly don’t know these options. I started with the basic default settings and just “pumped them up” a bit. I’ll read the links you gave me, thanks.

But isn’t the point cloud exactly what I need to be good in order to get good buildings?

Here’s the report from the first, short run with default settings:
Catalano-27-5-2022-report.pdf (6.4 MB)

Thanks again
Cristian

The last job failed too; right after the depthmap filtering phase, memory use skyrocketed.
I was able to increase the swap from 16 GB to 70 GB, which should help. Now I’ve started another job using the “building” preset, modified like this:

auto-boundary: true
boundary: {"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[12.400678396224976,42.29665526301039],[12.40055501461029,42.296417188492384],[12.401665449142456,42.29611959407923],[12.403451800346375,42.29550852914243],[12.405919432640076,42.294826034022],[12.406128644943237,42.29479825790116],[12.406337857246399,42.29531806613193],[12.404755353927612,42.29636163764201],[12.40382730960846,42.29699650158329],[12.402019500732422,42.29693698336067],[12.40081787109375,42.29670684570399],[12.400678396224976,42.29665526301039]]]}}]}
feature-quality: ultra
mesh-size: 400000
min-num-features: 40000
pc-geometric: true
pc-quality: high

By the way, the “boundary” option is amazing; I actually needed it, because there’s a big part of the pictured area that I don’t need and would have had to trim away later anyway. But I read that this option only “kicks in” after everything has been calculated, and thus does not affect memory usage?

Cristian

I’ll need to do more profiling and might need to adjust the documentation for --boundary. It should, like --auto-boundary, help constrain memory usage a bit, especially if you really limit the area of reconstruction.
