I had 3 degenerate hyperplanes, but now have a 3D model after removing GPS from EXIF

I’m attempting to produce a 3D model of a vertebra from 80 images, but keep running into a failure to process. The images are in good focus, and I have removed those with saturated areas to ensure plenty of features are detectable in all of them.

I’ve tried with and without GPU feature extraction, using SIFT and brute-force matching, but I’m not having any success so far.

I’ll try bumping up feature numbers, but apart from that, any ideas on what the issue might be?

Auto-boundary: true, feature-quality: ultra, matcher-type: bruteforce, mesh-octree-depth: 13, mesh-size: 500000, min-num-features: 12000, orthophoto-resolution: 0.05, pc-filter: 5, pc-quality: ultra, resize-to: -1, skip-orthophoto: true, use-3dmesh: true

End of the console log:

2022-09-17 15:07:32,762 DEBUG: No segmentation for 20220917_144102.jpg, no features masked.
2022-09-17 15:07:33,094 DEBUG: No segmentation for 20220917_144109.jpg, no features masked.
[INFO] running "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm" match_features "D:\WebODM\resources\app\apps\NodeODM\data\13ce8fd6-2fa1-4f8c-ae62-fa77028bcb44\opensfm"
Traceback (most recent call last):
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm_main.py", line 25, in
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\command_runner.py", line 38, in command_runner
command.run(data, args)
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\command.py", line 13, in run
self.run_impl(data, args)
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\match_features.py", line 13, in run_impl
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\actions\match_features.py", line 14, in run_dataset
pairs_matches, preport = matching.match_images(data, {}, images, images)
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\matching.py", line 46, in match_images
pairs, preport = pairs_selection.match_candidates_from_metadata(
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\pairs_selection.py", line 630, in match_candidates_from_metadata
g = match_candidates_by_graph(
File "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\pairs_selection.py", line 253, in match_candidates_by_graph
triangles = spatial.Delaunay(points).simplices
File "qhull.pyx", line 1840, in scipy.spatial.qhull.Delaunay.init
File "qhull.pyx", line 356, in scipy.spatial.qhull._Qhull.init
scipy.spatial.qhull.QhullError: QH6154 Qhull precision error: Initial simplex is flat (facet 1 is coplanar with the interior point)

While executing: | qhull d Qz Qbb Qt Qc Q12
Options selected for Qhull 2019.1.r 2019/06/21:
run-id 674689332 delaunay Qz-infinity-point Qbbound-last Qtriangulate
Qcoplanar-keep Q12-allow-wide _pre-merge _zero-centrum Qinterior-keep
Pgood _max-width 4.4e-16 Error-roundoff 1.2e-15 _one-merge 8.2e-15
Visible-distance 2.4e-15 U-max-coplanar 2.4e-15 Width-outside 4.7e-15
_wide-facet 1.4e-14 _maxoutside 9.4e-15

precision problems (corrected unless 'Q0' or an error)
3 degenerate hyperplanes recomputed with gaussian elimination
3 nearly singular or axis-parallel hyperplanes
3 zero divisors during back substitute
3 zero divisors during gaussian elimination

The input to qhull appears to be less than 3 dimensional, or a
computation has overflowed.

Qhull could not construct a clearly convex simplex from points:

  • p2(v4): 0.64 0.85 0
  • p1(v3): 0.64 0.85 0
  • p80(v2): 0.64 0.85 0.85
  • p0(v1): 0.64 0.85 0

The center point is coplanar with a facet, or a vertex is coplanar
with a neighboring facet. The maximum round off error for
computing distances is 1.2e-15. The center point, facets and distances
to the center point are as follows:

center point 0.6446 0.848 0.212

facet p1 p80 p0 distance= 0
facet p2 p80 p0 distance= 0
facet p2 p1 p0 distance= -0.12
facet p2 p1 p80 distance= 0

These points either have a maximum or minimum x-coordinate, or
they maximize the determinant for k coordinates. Trial points
are first selected from points that maximize a coordinate.

The min and max coordinates for each dimension are:
0: 0.6446 0.6446 difference= 4.441e-16
1: 0.848 0.848 difference= 0
2: 0 0.848 difference= 0.848

If the input should be full dimensional, you have several options that
may determine an initial simplex:

  • use 'QJ' to joggle the input and make it full dimensional
  • use 'QbB' to scale the points to the unit cube
  • use 'QR0' to randomly rotate the input for different maximum points
  • use 'Qs' to search all points for the initial simplex
  • use 'En' to specify a maximum roundoff error less than 1.2e-15.
  • trace execution with 'T3' to see the determinant for each point.

If the input is lower dimensional:

  • use 'QJ' to joggle the input and make it full dimensional
  • use 'Qbk:0Bk:0' to delete coordinate k from the input. You should
    pick the coordinate with the least range. The hull will have the
    correct topology.
  • determine the flat containing the points, rotate the points
    into a coordinate plane, and delete the other coordinates.
  • add one or more points to make the input full dimensional.

[INFO] running "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm" create_tracks "D:\WebODM\resources\app\apps\NodeODM\data\13ce8fd6-2fa1-4f8c-ae62-fa77028bcb44\opensfm"
2022-09-17 15:07:35,387 INFO: reading features
2022-09-17 15:07:38,623 DEBUG: Merging features onto tracks
2022-09-17 15:07:38,623 DEBUG: Good tracks: 0
[INFO] running "D:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm" reconstruct "D:\WebODM\resources\app\apps\NodeODM\data\13ce8fd6-2fa1-4f8c-ae62-fa77028bcb44\opensfm"
2022-09-17 15:07:39,525 INFO: Starting incremental reconstruction
2022-09-17 15:07:39,532 INFO: 0 partial reconstructions in total.
[ERROR] The program could not process this dataset using the current settings. Check that the images have enough overlap, that there are enough recognizable features and that the images are in focus. You could also try to increase the --min-num-features parameter. The program will now exit.

I’ve tried a number of dataset sizes and numerous different parameter sets, but all fail with the same sort of error.

Not understanding most of the error messages, I wondered whether the EXIF GPS info was the cause of the problem. Checking, I saw that all the images had the same latitude and longitude, as expected since position is only recorded to the nearest arc second, and all had 0 for altitude, which should have been 693 m.
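That hunch matches the qhull output: if every camera position collapses to (nearly) the same point, the 3D point set handed to the Delaunay triangulation is lower-dimensional, and no initial simplex exists. Here is a minimal pure-Python sketch of the degeneracy using the coordinates printed in the log (the helper function is illustrative only, not OpenSfM's code):

```python
# Camera positions as reported in the qhull log: p0-p2 are identical,
# and p80 differs only in the third (altitude) coordinate.
points = [
    (0.64, 0.85, 0.0),
    (0.64, 0.85, 0.0),
    (0.64, 0.85, 0.0),
    (0.64, 0.85, 0.85),
]

def coordinate_ranges(pts):
    """Spread (max - min) of each coordinate across all points."""
    return [max(p[i] for p in pts) - min(p[i] for p in pts) for i in range(3)]

ranges = coordinate_ranges(points)
flat_axes = sum(1 for r in ranges if r == 0.0)
print(ranges, flat_axes)  # two axes have zero spread: the input is 1-D, not 3-D
```

A 3D Delaunay triangulation needs four points that span all three axes; here two of the three dimensions have zero spread (matching the `difference= 4.441e-16 / 0 / 0.848` lines in the log), which is exactly the "Initial simplex is flat" condition. That is presumably why stripping the identical GPS tags avoids the crash.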

Desperate to remove all traces of degeneracy from my computer I set about removing the GPS info from EXIF, and 800 mouse clicks later I was ready to run the dataset through again!

There is the usual problem of the textured model having parts of the background stuck onto it, and also the ‘up’ axis is incorrect, but otherwise it turned out quite well.

80 images 00:20:04
Options: auto-boundary: true, feature-quality: ultra, mesh-octree-depth: 13, mesh-size: 500000, min-num-features: 15000, pc-quality: ultra, resize-to: -1, skip-orthophoto: true, sky-removal: true, use-3dmesh: true


Hi Gordon,

I hope that whosever spine that comes from died peacefully :joy:

That’s a nice result. I haven’t been able to apply myself to the task of sorting out the best way of imaging small objects yet, but clearly I would have come up against this issue in due course.

Is there a simple way to remove the GPS data from a batch of images, or does it have to be done one by one? Is it done on the phone?

What about the suggestions of using a ‘random dark background’ or a ‘non-repeating pattern’ to help with orientation? What do these look like in practice? I tend towards using a small rotating platform with the object held in place by some putty or similar.

Good stuff


Gordon, can you also remind me of the difference between skipping resizing using -1 in the settings and the yes/no resizing option (in px) on the dashboard before the upload starts? Thanks


I did it on my computer, one image at a time, each taking 10 mouse clicks to complete. There may be other ways to do it which I am unaware of.

The random dark background, i.e. a dark cloth to hide background objects, does work, with or without a rotating platform, but in this case I did it in a darkened room, using the phone’s “flash” for illumination. This did cause some dark shadows at times, and I can see evidence of that not being smoothed out in one area. Next time I’ll use a DSLR and diffused flash to see if I can improve on the result.

The initial yes/no resizing before any processing reduces the size of the images, so any quality settings you choose (ultra, high, medium, etc.) will apply to the reduced-size images.
The resize-to setting, as far as I understand, only affects the images used for feature extraction, and setting it to -1 disables that resizing.

Hi Gordon,

I can feel some experimentation coming on when I can get the time.

My eldest son used to work for Apple, so I will see if there is a batch program option for removing the EXIF data. I assume you are referring to just the location specified in the image info?


Yes, I only removed latitude, longitude and altitude, leaving everything else. That solved the failure-to-process problem for me.
You might want to leave the image name, file dimensions info etc., rather than removing everything, although I’m not sure whether WebODM absolutely needs them in EXIF.

Thanks Gordon.

I will see if there’s a quick way to do just that for several hundred images.

OK, here is a quick way. Assuming you took the originals on an iPhone, they will be synced to the Photos app on an iPad or Mac, and you can see them all there.

Then in the Photos app you select all the photos you want to remove the location info from, go to ‘Export’ under ‘File’ – ‘Export x items’, and deselect ‘Location Information’. Then click ‘Export’ and choose where you want them saved.

Hey presto, all the location info has been removed. It took 40 seconds and about 6 mouse clicks to export 45 photos, so it’s very quick.

Answer courtesy of my son Lorin Perry :slight_smile:


It’s fun to see a feature designed to improve privacy when sharing content via social media reused to improve the quality of 3D models. :slight_smile:


For those who have similar issues but don’t run this specific configuration, exiftool is king for any kind of batch EXIF modification. It should be a simple one-liner:

exiftool -gps:all= "FILE or Directory"


I added 33 images taken with my old Nikon D200, the first task I’ve done using two different cameras. I ran the task with a few more parameter tweaks and I’m quite happy with the result.

I played around with the pc-filter parameter in an effort to remove extraneous bits of background, which turned out to be quite successful, even removing most of the skewer support. I don’t really understand gauss damping as a type of outlier removal, but it sounded like something that might be useful, and it clearly didn’t hurt :slight_smile:

Kangaroo Vertebra 113 images 01:23:41

Options: auto-boundary: true, feature-quality: ultra, mesh-octree-depth: 14, mesh-size: 1000000, min-num-features: 20000, pc-filter: 0.1, pc-quality: ultra, resize-to: -1, skip-orthophoto: true, texturing-outlier-removal-type: gauss_damping, use-3dmesh: true


This looks fantastic.


Hi Gordon,

I’m not sure if the technique I shared for removing location data was of much value to you, given your particular camera, but your latest result is fabulous.

Did you use a plain black background cloth and a revolving platter for that?

Also, did you have to do a bit of editing to tidy up the part of the image where the support held the structure, assuming you flipped it over to get the underneath images?


Hi Julian, It looks like ExifTool should be useful for me in future cases like this.

Previously I did use a black backing cloth, but found that due to its size I had to have it fairly close to block out all the background, which resulted in bits of it being a bit too obvious in numerous photos. This time I went back to a technique I’ve used when photographing flowers: make sure the illumination on the subject is many stops brighter than the background, which can then just be a darkened room. Reducing the pc-filter value a lot then appears to remove the remaining extraneous “features” (i.e. noise) in the background.
No tidying up has been applied at all; the model above and the animation I posted in the showroom are direct outputs from WebODM. I didn’t flip the bone, as the skewer mount wasn’t very tight and would have moved, so I just got down low and photographed looking up from underneath.

Thanks Gordon. That’s great info for me to explore. I expect with something like a small flint tool I would have to make sure the point of contact and support is as small as possible, or it will appear in the result.

I haven’t used ‘pc-filter’ before. The default is off, I presume, but this variable should also help remove the support from the final result.

With flint, watch your light position and exposures to make sure you don’t get any reflective areas that are near saturation, as saturation hides potential features. I find it better to underexpose to avoid that problem.

pc-filter’s default is 2.5 standard deviations; in many cases I increase it to 5 in an attempt not to lose features, but in cases like this I do want to lose them :wink:
Setting it to zero means nothing gets filtered out, and any patchy bits of background will stay in the point cloud. Large patches of illuminated background probably won’t be filtered out either way.
See p. 110 of OpenDroneMap: The Missing Guide, which explains a bit about how it works.
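The idea behind this kind of filtering can be sketched as a simple statistical outlier test. This is an illustrative toy with made-up numbers, not ODM's actual implementation (which works on neighbour distances in the dense point cloud, and treats a value of 0 as "disabled" rather than "filter everything"):

```python
def keep_mask(neighbour_dists, k):
    """Keep points whose mean neighbour distance is within k standard
    deviations above the average; smaller k filters more aggressively.
    (Toy sketch only; in ODM, pc-filter 0 disables filtering entirely.)"""
    n = len(neighbour_dists)
    mean = sum(neighbour_dists) / n
    std = (sum((d - mean) ** 2 for d in neighbour_dists) / n) ** 0.5
    return [d <= mean + k * std for d in neighbour_dists]

# 18 well-supported points, one mildly stray point, one far outlier
dists = [0.9, 1.0, 1.1] * 6 + [3.0, 10.0]
print(sum(keep_mask(dists, 2.5)))  # default-like: only the far outlier goes
print(sum(keep_mask(dists, 0.1)))  # aggressive: the stray point goes too
```

This illustrates why dropping pc-filter from 2.5 to 0.1 strips isolated background speckle (like the skewer support) that a looser threshold would keep.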

In any case, the smaller the support you can use, the better; make it black, or perhaps very shiny, to minimise the chances of features being found on it.

Hi Gordon,

I have had a good go at using WebODM to produce a 3D model of a small object using the ideas from your previous posts, but to no avail. Even with the ultra feature-quality setting I get exploded results, and when trying to remove the signs of the support by flipping the object, I get equally strange results.

Let me elaborate on what I have tried:

Firstly, I built a support device to go on my rotating platen. Made of wood and a piece of dowel, and then sprayed matt black, it allowed different support pieces made of putty to be added so I could rotate the item around while keeping the camera still.

I also used the ‘Lapse It’ app to automate exposures at 800 ms each with no location recorded (see grab of interface), which makes it very easy to keep the camera still while it takes all the images. One rotation takes 16 s, and 20 images are taken at each of 5 angles/positions, so I have 100 images for the flint in this case.

I used this in a darkened room with a localized light to illuminate just the item. Here is an example, with settings, of what resulted.

I tried flipping the object and recording another set of images and that produced this:

So instead I used a PC app called ‘3D Photos’, which produced a good result except for the hole where the support blocked the imaging. Some editing in Blender made a modest repair, and so I ended up with the following.

The full model of this can be seen on Sketchfab at: Flint - Download Free 3D model by Kerrowman [a818fba] - Sketchfab

While I can certainly live with that, I find it frustrating that I can only get a good model with WebODM if the object is large, like the stone in my garden or a large area of land from drone imaging. I’m sure there is a solution, but it escapes me at the moment.


It’s strange that you can’t get a good result with WebODM. I had a good result with the vertebra, which is probably of a similar size to your flint, and also with a larger gourd, which I posted in The Showroom a few hours ago.

With your flint, is one side really quite smooth and the other quite rough in texture, or is that an artefact of the processing?

Yes, one side is smooth and the other hand-worked. It’s an accurate model except for the support area, but unlike in Photoshop, you can’t simply clone over the area, as the textures are mixed up and cover various parts of the model.

Would setting ‘pc-filter’ to 0.1 help?