Is there any clear guidance on what settings should be enabled for a high-resolution orthophoto? I found some threads and posts, but my feeling is that the information is quite scattered, and it would be nice if there were some central guidance for a base configuration.
I guess like anything, what's your definition of High Resolution? What's your use case? What kind of hardware do you have available to process with? All of these things put various hard/soft constraints on what parameters are viable and what level/setting for each is viable, which is likely why things are so fuzzy.
That being said, I hope to expand the documentation in such a way that you will have more guidance on this matter.
Lol. We should have done that months ago. No one should be using that flag, unless they have an old 1TB Dell blade sitting on their kid's LEGO table in their spouse's office. Then… maybe.
edit:
Hmm, I have to admit they look pretty similar here on the forum, but when I look in the viewer, ODM looks a bit more pixelated and Agisoft seems to smooth things out.
You can also drop --depthmap-resolution as that is set by --pc-quality.
Any particular reason you're skipping the 3D model and choosing gauss_damping?
As Gordon noted, you can't get better resolution than what you actually captured, but with --ignore-gsd dropped, having --orthophoto-resolution 0.5 doesn't hurt anything.
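If it helps to see it end to end, here's a minimal sketch of that trimmed option set submitted to a NodeODM instance via pyodm. The host, port, and image paths are placeholders for illustration, not recommended values:

```python
# Sketch: submit a trimmed option set to a local NodeODM via pyodm.
# Host/port and image paths below are placeholders.
from pyodm import Node

node = Node("localhost", 3000)
task = node.create_task(
    ["images/DJI_0001.JPG", "images/DJI_0002.JPG"],  # placeholder paths
    {
        "pc-quality": "high",          # also drives depthmap resolution,
                                       # so no separate depthmap-resolution
        "feature-quality": "high",
        "orthophoto-resolution": 0.5,  # cm/px; harmless with ignore-gsd off
        # "ignore-gsd" intentionally left at its default (false)
    },
)
task.wait_for_completion()
```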
As for the display, it looks like Agisoft might be using biquad/bicubic or something similar for their image resizing algorithm. We might look into changing our default behavior, but it can come at a significant time cost when generating pyramids/overviews.
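In the meantime, if the default overviews look too blocky, you can rebuild them yourself with a smoother kernel after processing. A sketch using the GDAL Python bindings; the filename and overview levels are assumptions:

```python
# Sketch: regenerate internal overviews with cubic resampling via the
# GDAL Python bindings. Filename and overview levels are assumptions.
from osgeo import gdal

ds = gdal.Open("odm_orthophoto.tif", gdal.GA_Update)
ds.BuildOverviews("CUBIC", [2, 4, 8, 16])  # GDAL's default kernel is NEAREST
ds = None  # close the dataset to flush the new overviews to disk
```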
I am kind of not following on the --ignore-gsd flag, haha. If I understand right, you will remove the option so it is always true?
Great point about --depthmap-resolution; I read that it's an old setting and --pc-quality will do the job. That's great!
Not really any reason for gauss_damping, just trying things out. The 3D model is skipped mostly because I am looking to make an orthophoto. Will the 3D model improve the quality of the orthophoto?
Haha, nope! I would want to either put it behind a nag prompt that makes you confirm like, 30 times, or maybe do something like Desktop Goose and have it run away when people try to turn it on.
In other words, it should be false 99.99% of the time, and even that last 0.01% of the time, it should also most likely be false. People turning it on without supercomputer-sized hardware is a recipe for disaster and leads to a lot of our "failed to process" errors. And 99.99% of the time, it isn't going to help what they're trying to do.
Fantastic! Keep track of your screenshots/exports so you can compare what works better for you in this situation!
It really depends on what the purpose of your exports is, and what you've imaged. It can lead to some weirdness at building edges, but it can also generate products that are much more precise for natural features.
Amazing input, and yes, I see what you mean. When running this I get an error that I did not have enough RAM on a machine with 64GB; however, I am running WebODM and the node on the same machine.
In an idealized sense, you gain quite a significant amount:
--pc-quality:
ultra: 0.50
high: 0.25
--feature-quality:
ultra: 1.00
high: 0.50
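To make those numbers concrete, here's a back-of-the-envelope sketch: the working width scales with the multiplier, so pixel counts (and roughly RAM and time) scale with its square. The 4000 px source width is an assumption for illustration:

```python
# Back-of-the-envelope: quality multipliers scale the working image
# width, so pixel counts scale with the square of the factor.
# The 4000 px source width is an assumption for illustration.
source_width = 4000

multipliers = {
    "pc-quality ultra": 0.50,
    "pc-quality high": 0.25,
    "feature-quality ultra": 1.00,
    "feature-quality high": 0.50,
}

for setting, factor in multipliers.items():
    width = source_width * factor
    pixel_fraction = factor ** 2  # fraction of full-res pixel count
    print(f"{setting}: ~{width:.0f} px working width, "
          f"~{pixel_fraction:.2f}x full-res pixel count")
```

So stepping from high to ultra doubles the working width and roughly quadruples the pixels being processed.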
In reality, it depends upon the data and how it was collected, as well as your desired output. You're getting a finer point cloud, but in many cases it doesn't meaningfully improve your output products. It also carries a significant cost in RAM, processing time, and potentially storage.
If you have the RAM/swap, the processing power, the storage, and the time, I'd see no reason not to use ultra for both. You can always refine/simplify the products in post, but you can't enhance, no matter what Blade Runner taught us.