New methods for model decimation/smoothing

Hi all,

I just saw this brilliant work by NVIDIA where they auto-decimate complex leafy tree models from >5bn triangles down to 20M using some computer-vision black magic. Since photogrammetry outputs are notoriously messy and far too high in polygon count, I was wondering whether a method like this could be used as a tool to improve those outputs. This is far outside my skillset, though, so I'm not equipped to evaluate the technical constraints of the method or whether it would be suitable for use with photogrammetry outputs.

Two Minute Papers video: NVIDIA's New Technique: Beautiful Models For Less! 🌲 (YouTube)
Project page: https://research.nvidia.com/publication/2021-04_Appearance-Driven-Automatic-3D
Code on GitHub: https://github.com/NVlabs/nvdiffmodeling (differentiable rasterization applied to 3D model simplification tasks)
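For context, the conventional approach (which is exactly what mangles thin, leafy geometry) is plain quadric decimation. Here's a minimal sketch with Open3D, purely as a baseline for comparison; the file name and triangle budget are placeholders:

```python
# Baseline: purely geometric quadric decimation with Open3D.
# This is NOT the NVIDIA appearance-driven method; "scan.obj" is a
# placeholder for a high-poly photogrammetry mesh.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.obj")
mesh.compute_vertex_normals()

# Collapse edges until the target budget is hit. Because the error metric is
# purely geometric, thin structures like leaves tend to get destroyed,
# which is the failure mode the appearance-driven approach is meant to avoid.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
simplified.remove_degenerate_triangles()
simplified.remove_duplicated_vertices()

o3d.io.write_triangle_mesh("scan_decimated.obj", simplified)
```

The NVIDIA method instead optimizes the low-poly mesh (and its textures) so its renders match the original's appearance, which is presumably why it holds up so much better on foliage.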

Thoughts?

Cheers

Tim


That’s really neat!

Perhaps; it's just too bad we can't touch it, because it's released under a proprietary license: https://github.com/NVlabs/nvdiffmodeling/blob/main/LICENSE.txt


The closed-source IP monsters strike again 🙁

They publish a bunch of their algorithms in the paper… how does that work with IP? Can they protect the method (i.e. the algorithms) in addition to the implementation (the code)?


IANAL, but I suspect reimplementation would be legal, assuming they don’t have any software patents on it.


Not just a proprietary license, but one that looks somewhat viral, as if someone were weaponizing copyleft. Whew! Hopefully I'm just reading it wrong.

Anyway, re-implementation would be the way to go, probably after testing the original to make sure it actually does what's needed.
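For anyone tempted: the core appearance-driven loop is simple enough to sketch against the BSD-licensed PyTorch3D rather than NVIDIA's code. Below is a much-simplified, silhouette-only variant of the idea (the actual paper also fits materials and normal maps using full shaded renders); all file names, camera ranges, and hyperparameters are placeholder guesses:

```python
# Hedged sketch: deform a low-poly proxy so its renders match renders of the
# high-poly reference from random viewpoints. Silhouette-only toy version of
# the appearance-driven idea, built on PyTorch3D (BSD-licensed).
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer,
    RasterizationSettings, SoftSilhouetteShader, look_at_view_transform,
)

device = torch.device("cuda")

# Placeholder file names: the photogrammetry scan and a crude decimated proxy.
reference = load_objs_as_meshes(["scan_highpoly.obj"], device=device)
proxy = load_objs_as_meshes(["scan_decimated.obj"], device=device)

# Optimize per-vertex offsets on the proxy instead of raw vertex positions.
offsets = torch.zeros_like(proxy.verts_packed(), requires_grad=True)
optimizer = torch.optim.Adam([offsets], lr=1e-3)

raster_settings = RasterizationSettings(
    image_size=256, blur_radius=1e-4, faces_per_pixel=10
)

def silhouette(mesh, cameras):
    # Soft silhouette render; the alpha channel carries the soft coverage.
    renderer = MeshRenderer(
        rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
        shader=SoftSilhouetteShader(),
    )
    return renderer(mesh)[..., 3]

for step in range(1000):
    # Random viewpoint each step, mimicking the paper's multi-view supervision.
    R, T = look_at_view_transform(
        dist=2.5,
        elev=float(torch.rand(()) * 60.0),
        azim=float(torch.rand(()) * 360.0),
    )
    cameras = FoVPerspectiveCameras(R=R, T=T, device=device)

    with torch.no_grad():
        target = silhouette(reference, cameras)  # no gradients into the reference

    loss = torch.nn.functional.mse_loss(
        silhouette(proxy.offset_verts(offsets), cameras), target
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Swap the silhouette loss for an image loss on textured renders (plus a learnable texture) and you start approaching what the paper actually does; a clean-room reimplementation from the paper should also sidestep the license, patents permitting.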
