What is your biggest pain with ODM / OpenSfM?

Hi everyone,

Looks like I am the new maintainer of ODM, so as part of my onboarding, I want to gather a big list of TODOs that will be later discussed and compiled into a sensible roadmap.

Now, while ODM is of course the bestest software ever, nothing is perfect. So let’s use this thread to share the main frictions, frustrations, pains, or general rants induced by using OpenSfM / ODM, and hopefully we can address them soon enough.

7 Likes

First of all thanks for your support and role as a maintainer for ODM!

I will try to form a list. I’m not sure whether all the ideas / wishes / bugs are OpenSfM-related or belong to another part of (Web)ODM. Feel free to filter out the ‘noise’.

To be continued :slight_smile:

3 Likes

I frequently need to check processing quality and investigate intermediate data to understand an issue, which is a bit of a pain to do: I need to use multiple tools and sometimes write custom code.

For example, to review the original image GPS values in 3D space, I have a script that converts them into xyz format. Viewing the sparse reconstruction relies on OpenSfM’s viewer (which also has an issue when loading a reconstruction.json whose file size exceeds a certain limit, 500 MB or so I think: it will only load part of the model). I use CloudCompare to load the point cloud and mesh model, QGIS to load the raster data, and then for other things like feature detection, feature matching, and depth maps, I need to either use OpenSfM’s command-line tools (not well maintained, sometimes hard to use) or write custom code to display the binary depth map data, and so on.

I don’t expect a full-fledged ODM GUI, which would be quite complex (and WebODM already exists), but a visualizer (or some helper tools) to help a data processor understand data quality at the different stages would be very helpful.
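For the GPS-to-xyz step, here is a minimal sketch of the kind of helper script described above. The function names and the flat-earth (equirectangular) approximation are my own illustration, not ODM/OpenSfM code; the approximation is fine for the few-hundred-metre extent of a typical drone survey:

```python
import math

# WGS84 mean Earth radius in metres (good enough for a quick visual check).
EARTH_RADIUS = 6371000.0

def geodetic_to_local(lat, lon, alt, ref):
    """Project (lat, lon, alt) into a local x/y/z frame (metres) around
    the reference point `ref` = (ref_lat, ref_lon, ref_alt), using a
    flat-earth approximation."""
    ref_lat, ref_lon, ref_alt = ref
    x = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS
    z = alt - ref_alt
    return x, y, z

def write_xyz(positions, path):
    """Dump camera GPS positions as an .xyz point file that CloudCompare
    can open, using the first position as the local origin."""
    ref = positions[0]
    with open(path, "w") as f:
        for lat, lon, alt in positions:
            x, y, z = geodetic_to_local(lat, lon, alt, ref)
            f.write(f"{x:.3f} {y:.3f} {z:.3f}\n")
```

Loading the resulting .xyz file in CloudCompare next to the reconstructed point cloud makes gross georeferencing errors easy to spot.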

1 Like

I wonder if Yann’s experiments with preparing OpenSfM output to work in Rerun would help your investigations?

1 Like

Yes, something like this. I guess we could build on that to extend the data coverage with Rerun? It looks like a very promising tool to use.

1 Like

Welcome @DodgySpaniard! Good to have you onboard.

My biggest pain with ODM is the lack of unit and integration tests. This makes it difficult to know if changes I want to contribute are going to break something for someone else.

I would like to see a rule applied to the ODM repo “No new features without accompanying test coverage” to encourage contributors to start making progress on test coverage. The plus side for me as a contributor is that if I add a sweet feature that is covered by tests, I can be confident that no one else will break it going forward (assuming we also have rules against breaking existing tests :slight_smile: ).

Adding some good examples of “how we do tests” would also be super helpful, along with a few sentences in the documentation that points out the examples that future tests should mimic.
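To make that concrete, a unit test in this style might look like the following. Note that `utm_zone_from_lonlat` is a hypothetical stand-in for any small pure function in the codebase, not an actual ODM API:

```python
# A pytest-style example of the kind of small, fast unit test that could
# anchor a "no new features without tests" rule. The helper under test is
# hypothetical -- a placeholder for any pure function in the ODM codebase.

def utm_zone_from_lonlat(lon, lat):
    """Return the UTM zone number (1-60) for a longitude/latitude pair."""
    if not -180.0 <= lon < 180.0:
        raise ValueError(f"longitude out of range: {lon}")
    return int((lon + 180.0) / 6.0) + 1

def test_utm_zone_known_values():
    assert utm_zone_from_lonlat(-122.4, 37.8) == 10  # San Francisco
    assert utm_zone_from_lonlat(8.5, 47.4) == 32     # Zurich

def test_utm_zone_rejects_bad_longitude():
    import pytest
    with pytest.raises(ValueError):
        utm_zone_from_lonlat(200.0, 0.0)
```

Tests of this shape run in milliseconds, so they are cheap enough to gate every PR in CI.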

4 Likes

5000 series NVidia support is number 1 for me. One of the biggest negatives about WebODM for me is the speed. I’ve got a reasonably capable computer (9950X, 128 GB RAM), but the processing time is terribly slow compared to alternative software. I have a 5070 GPU and have to disable the GPU to be able to successfully complete a task.

Something I’d also like to see is an end-user-friendly viewing application for the deliverables: a way to provide the client with the GeoTIFF etc. in a simple piece of desktop software that can run on modest hardware. I wonder if it’s just a case of stripping the software down to a viewer-only variant?

I also seem to have an issue where the orthophoto output resolution isn’t nearly as high as the source data would allow, but that may possibly be my settings. Putting the same photo set through DJI Terra, I get higher-resolution results.

3 Likes

Cameron_Morison:

> 5000 series NVidia support is number 1 for me. One of the biggest negatives about WebODM for me is the speed. I’ve got a reasonably capable computer (9950X, 128 GB RAM), but the processing time is terribly slow compared to alternative software. I have a 5070 GPU and have to disable the GPU to be able to successfully complete a task.

Yep, another vote for this, been patiently waiting since last August

3 Likes

Biggest issues:

  • The latest version is broken; the last stable version was v3.5.6. I think a thorough review of the v3.6.0 update needs to be done.
  • Memory and speed: competing software is able to process larger datasets with less memory, in less time (e.g. Matic).
  • RTX 50** series support
  • Lack of native macOS support. Requiring people to use docker is… a pain (but so is making a native port of ODM).
7 Likes

Fixing the latest release is probably top priority imo - something I went some way to helping with, but have run out of time.

I agree with @NathanMOlson - ODM has many moving parts, so having a good test suite in place would ensure upgrades still produce good results. Ideally these run via a CI/CD system based on PRs.

[oats](https://github.com/OpenDroneMap/oats) (the OpenDroneMap Automated Testing Suite) goes some way towards solving that, but could probably do with a bit of an update. We have quite a few good imagery datasets for testing.

Developer documentation (for contribution) and general dev experience could probably be improved a bit too (currently the setup, building, testing, etc is a bit of a slow process).

3 Likes

Welcome to your new role! You’ve got some big shoes to fill. :slight_smile:

The pain points I’ve hit with ODM-the-actual-processing have mostly been hit upon already in the thread: macOS support being the biggest. The only thing I want more than native tools is support for the M-class chips that takes advantage of all their CPU/GPU goodness and doesn’t just “run the code” blindly. I recognize that from a certain perspective ODM is eighteen libraries in a trench coat, but every little bit helps.

That being said, most of my recent rough spots have been with other tools in the ecosystem – if WebODM or ClusterODM are now part of your scene, I definitely have a few ideas.

Thank you for taking on the role, and I wish you well in it!

2 Likes

As far as I see, we are all friends here, so I am happy to support the community to improve the Dev/User experience wherever it is most needed. That said, my interests and skillset are more aligned with OpenSfM and 3D CV. In any case, if changes/fixes need to span other repos, I am happy to help or drive such changes.

1 Like

If you require someone to help with beta testing RTX50** series support, then I have a spare drive that I’m more than willing to put a native windows test version on

3 Likes

I’m a relatively new user of ODM, and my use has been exclusively through WebODM. Here are some things that have caused issues for me:

Models fail due to out-of-memory conditions, and it took a while for me to learn that this was the case. Solution: a prominent display/log of resource use, plus warnings/logs when out-of-memory kills happen on the system.
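For surfacing OOM kills, one cheap approach is scanning the kernel log (`dmesg` / `journalctl -k`) for OOM-killer lines after a task fails. A sketch; the regex targets the modern kernel message format, and the function name is my own:

```python
import re

# Modern kernels log a line like this when the OOM killer fires:
#   Out of memory: Killed process 31337 (odm_orthophoto) total-vm:...
OOM_RE = re.compile(r"Out of memory: Killed process (\d+) \(([^)]+)\)")

def find_oom_kills(log_text):
    """Return (pid, process_name) pairs for every OOM kill in a kernel
    log dump, so a failed task can report 'killed by the OOM killer'
    instead of dying silently."""
    return [(int(pid), name) for pid, name in OOM_RE.findall(log_text)]
```

Older kernels use a slightly different wording ("Kill process ..."), so a production version would need a couple of regex variants.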

Issues with ghost features caused by ambiguous matches in high-density oblique data. I assumed RTK data would help with this issue, but it doesn’t appear to. Is it possible to put checks in place to eliminate this? I have added min_track_length and depthmap_min_consistent_views to my workflow and am currently testing larger values. I think these two parameters ought to be exposed in WebODM.

But is it possible to also add verification to the matcher that considers GPS priors? For example, if a match causes an RTK position to be off by a meter, then it’s a false match. The way I understand it, with the incremental algorithm the camera poses are built after matching takes place, so it would be a filtering process. But I think it would be helpful nonetheless.
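A post-hoc version of that GPS-prior check could be as simple as comparing each reconstructed camera position against its RTK prior once poses exist. A sketch, with hypothetical names and a 1 m threshold picked purely for illustration:

```python
import math

def position_residual(estimated, prior):
    """Euclidean distance (metres) between an SfM camera position and its
    RTK GPS prior, both given as (x, y, z) in the same local frame."""
    return math.dist(estimated, prior)

def flag_suspect_cameras(poses, priors, max_residual_m=1.0):
    """Return IDs of cameras whose reconstructed position disagrees with
    the RTK prior by more than `max_residual_m` -- a hint that bad matches
    (e.g. ghost features from repeated structure) dragged the pose away.

    poses / priors: dicts mapping camera ID -> (x, y, z)."""
    return [cam_id for cam_id, est in poses.items()
            if cam_id in priors
            and position_residual(est, priors[cam_id]) > max_residual_m]
```

Flagged cameras (and the tracks they contribute to) would then be candidates for re-matching or removal before the final bundle adjustment.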

2 Likes

This may not belong in this thread, but I use the Windows Native version of WebODM, and wish that it was more regularly updated to track the main ODM changes.

5 Likes

Great thread. Agreed that fixing 3.6.0 is a high priority, and hopefully a reasonable lift. @NathanMOlson spent a lot of time troubleshooting it, @spwoodcock did no small amount of lifting on it, and it’s seen extensive testing both by me and others. Near the end of testing it was passing all oats tests and then some. My hope is that much of the hard work is done, but it needs a push over the finish line by someone with more expertise than me. The payoff will be great: lots of library upgrades, an OS upgrade, etc., which hopefully will make further updates easier in the near term.

Echoing that I like the themes of testing/tests, benchmarking, developer documentation. And I’m excited to reap the benefits of the OpenSfM update from @yannoun especially improved GCP handling and optimization that will raise all ships. And agreed that the 5000 series Nvidia support is a consistent request.

On the theme of benchmarking/testing: memory profiling is probably a gap. Not sure the best pathway to that. One brute force way to handle it might be to expose and enforce hard memory limits through containerization in oats, but I hope there are more elegant ways that capture quick memory pressure events like what we see in MVS-Texturing.

3 Likes

Compared with 2D outputs, generating 3D models is too slow. It takes me 142 minutes to get 3D models, but when I set the options ‘fast-orthophoto’ and ‘skip-3dmodel’ to true, it only takes 46 minutes.

My environment is Ubuntu 22.04, 32 cores @ 2.2 GHz, 32 GB RAM, without GPU.
My dataset contains 358 photos at 4000×3000 px.

1 Like

When’s your start date @DodgySpaniard?

Technically, I already started; however, I am doing a slow ramp-up as I have limited availability at the moment. I expect to be up to speed by the end of March.

Regarding the roadmap discussions, I am expecting to start the conversation for long-term plans then. There seems to be enough short-term work to keep me going for now (e.g. broken builds).

4 Likes

Same here: thank you all for your great effort on this toolchain, and especially thank you for taking on the burden of maintaining this project in the future :slightly_smiling_face:
As a hobby user who just makes the occasional flight over his farmyard (when he has the time), I’ve noticed some points that could be improved:

  • GCP editing - To be honest, editing visual markers and giving them a georeference is currently pretty unintuitive. While everybody should be able to do this job, it’s currently only possible if you know what you are doing (how many markers, found in how many photos, …) and how to do it (strange behavior of the web editor, the GCP list being ignored when starting processing, …).
    I would love an editor so easy that even a child could add the info, while keeping the ability to do it fast (e.g. “Please add 3 landmarks distributed across the whole area and mark each in at least 4 photos”). Perfect would be a detector for ArUco markers, where you only need to provide the coordinates for each detected marker.
  • Custom TMS / WMS integration - Currently we only offer a limited set of maps / aerial imagery. How about extending it, e.g. with the OSM JOSM / iD imagery catalogue? Why not make it easier with an assistant like the one in QGIS to add custom WMS background layers and bookmark them?
  • (Re)processing a project step by step - Most projects I start with a simple visualization to see if the reconstruction works at all, and only later do all the processing. Unfortunately, I never manage to get it to reuse the previous steps; it always starts from 0. Maybe this just needs to be made easier (or bulletproof?) via the UI, or does it need more work under the hood?
  • Adjusting project settings over time - As I said, I adjust a project over time (different GCPs, tuned params, …) and would love to keep track of my changes, with the ability to go back and see my adjustments. Maybe some versioning, with results kept, is possible in the long run?
  • Housekeeping on projects - This is something I usually can’t do within the tool itself: keeping track of older projects over time (keep only results, free temp files, …).

While I’ve added these ideas, please let me say that ODM has come a long way. Everything (deployment, usability of the pipeline, …) has become so much better in the past years. As a result, it already brings 3D / DOP reconstruction with consumer drones to consumers running consumer hardware. Something really impressive :slight_smile:

1 Like