Impact of EXIF data on Point cloud


Hi All,

A naive question here: how does the EXIF data of images impact the point cloud?
Just for testing, I removed the EXIF data from the images and processed the dataset. The point cloud from SMVS looked better than the one produced when the EXIF data was present (I mean the georeferenced point cloud).

Here is a comparison of the point clouds; the parameters for both runs were almost the same.
The left one is with EXIF data, while the right one is without EXIF:

Any suggestions on how to correct this?

Thank you.


This is interesting. How do the sparse point clouds compare?


When there’s EXIF information, pre-matching using GPS reduces the number of image pairs that OpenSfM compares when finding matching features across images. This can be disabled with --matcher-neighbors 0 --matcher-distance 0.

--matcher-neighbors <integer>
                      Number of nearest images to pre-match based on GPS
                      exif data. Set to 0 to skip pre-matching. Neighbors
                      works together with Distance parameter, set both to 0
                      to not use pre-matching. OpenSFM uses both parameters
                      at the same time, Bundler uses only one which has
                      value, preferring the Neighbors parameter. Default: 8
--matcher-distance <integer>
                      Distance threshold in meters to find pre-matching
                      images based on GPS exif data. Set both matcher-
                      neighbors and this to 0 to skip pre-matching. Default:


Sorry, I do not know where I can find the sparse point cloud. The one above is the dense point cloud in the smvs folder.
Could you please help me locate the sparse point cloud in the odm output folders?

Thank you.


Thank you @pierotofy for the explanation.

I thought the GPS EXIF data would help reconstruct the point cloud more accurately, but it was making the point cloud wavy.

I am still unsure whether this is because of a fault in the GPS data of my images, or whether the same happens with other datasets.

Regarding pre-matching: yes, when the EXIF data was removed from the images, … /bin/opensfm match_features took longer to complete because of the many image pairs.


I don’t think that’s what this is. Look at the structure of the cloud around the windows. It looks to me like an issue with the alignment of the cameras. Given OpenSfM’s tendency to favor GPS over SfM to avoid global position drift, I think this is a tuning issue with the appropriate parameters for reconstruction with EXIF.

I’ve noticed this effect in more subtle ways with ordinary datasets, but this is the most obvious example.

@garlac — is this a dataset you can share?


@smathermather-cm Here is the link:

It was taken using a DJI Phantom 4. I think it is a circular plus double-grid mission flight.

Thank you.


I’ll do some testing this week, but @pierotofy is right – it’d be helpful to try --matcher-neighbors 0 --matcher-distance 0 to see if this is a function of how many neighbors are being matched. @garlac – can you rerun with these parameters on the data with GPS EXIF to test Piero’s theory? I’ll work on testing my theory in the meantime.


Hmm, initial testing with one of my own datasets suggests that the issue is an alignment issue, not a matching issue. On the left is the version run through WebODM with EXIF; on the right is the version run through OpenDroneMap on the command line with the EXIF data stripped out.

@garlac – I look forward to diving into your example though, as it looks to be more clear cut.


I tried the same dataset with EXIF data, but passed the parameters --matcher-neighbors 0 and --matcher-distance 0.

The dense point cloud now looks the same as the one that came from the images without EXIF data. Below is the screenshot:
So, is it just a matching issue?

But shouldn’t additional information like GPS data help build the point cloud properly? If I remove the EXIF data, matching takes a lot of time, which increases processing time.


I got some feedback from Pau of Mapillary/OpenSfM on this. What is happening is that pre-matching uses XY and Z to determine neighbors, so since the two flight paths are far apart, two independent point clouds are generated, with the larger one (the one from the gridded flight) being retained while the orbit is discarded.
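That explanation can be illustrated with a toy sketch (my own code, not OpenSfM's): when the match graph is built from 3D nearest neighbors, a grid pass and a higher orbit over the same area fall into two disconnected groups, i.e. two partial reconstructions, while ignoring Z keeps them in one group:

```python
import math

def match_edges(positions, k, use_z):
    """Link each image to its k nearest neighbors, either in full
    3D (XY and Z) or in 2D (ignoring Z)."""
    def dist(a, b):
        dz = (a[2] - b[2]) if use_z else 0.0
        return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + dz * dz)
    edges = set()
    for name, pos in positions.items():
        nearest = sorted((n for n in positions if n != name),
                         key=lambda n: dist(pos, positions[n]))[:k]
        for n in nearest:
            edges.add(frozenset((name, n)))
    return edges

def count_components(names, edges):
    """Number of connected groups in the match graph (union-find);
    each disconnected group ends up as its own partial reconstruction."""
    parent = {n: n for n in names}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for edge in edges:
        a, b = tuple(edge)
        parent[find(a)] = find(b)
    return len({find(n) for n in names})

# Toy scene: a grid pass at 30 m and an orbit at 100 m over the same area.
grid = {"g%d" % i: (10.0 * i, 0.0, 30.0) for i in range(6)}
orbit = {"o%d" % i: (10.0 * i, 0.0, 100.0) for i in range(6)}
scene = {**grid, **orbit}
```

With k=2 neighbors, including Z splits this scene into two components, while ignoring Z leaves a single connected match graph.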


The way around this is either to use the matching flags as listed or, if we need to be more efficient, to get OpenSfM to match while ignoring Z. @pierotofy – do we allow the user to pass a flag for this or not?

Oh ya. And Piero was right, I was wrong. :slight_smile:


Great analysis. What option in OpenSfM tells it to ignore Z? If there is one, I don’t think we currently expose it.


That’s what I thought – I don’t remember addressing this at all in the past. Hopefully there’s a flag in OpenSfM: I am awaiting Pau’s response.


Maybe it’s use_altitude_tag; we currently always enable it whenever there are altitude values in the EXIF. That could be changed.
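As a sketch of the decision being described (a hypothetical helper, not ODM's actual code; the real logic lives in ODM's OpenSfM configuration step), the choice could look like:

```python
def build_sfm_config(photos, ignore_altitude=False):
    """Hypothetical helper: enable OpenSfM's use_altitude_tag only when
    altitude values are present in the EXIF, with an opt-out so the
    altitude can be ignored without stripping the EXIF."""
    config = []
    has_altitude = any(p.get("gps_altitude") is not None for p in photos)
    if has_altitude and not ignore_altitude:
        config.append("use_altitude_tag: True")
    else:
        config.append("use_altitude_tag: False")
    return config
```

The opt-out branch corresponds to forcing "use_altitude_tag: False", which is the edit tried later in this thread.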


I’ll test that.


I’m thinking we want that off by default, but we may want to pass a flag for it (although I hate to add more flags).

Anyway, I should know in an hour or so how that affects this…


Strange. Virtually no change when I changed that flag. We’ll see what Pau says…


I understand the above comments more clearly now.

I tested another similar dataset, i.e. one having both single-grid (69 images) and circular-mission (114 images) flight paths.

My observations:

  1. First, I combined both the circular-mission and single-grid-mission images, and gave

--matcher-neighbors and --matcher-distance as 0

==> The point cloud came out wavy and distorted. From the terminal output I read that it was matching 5310 image pairs,
and only 1 partial reconstruction was formed:
Reconstruction 0: 183 images, 404636 points

  2. Next, I separated out the circular-mission images alone and processed them, again giving

--matcher-neighbors and --matcher-distance as 0

==> Only the side views of the building came out well, and it took more time since it had to match 1686 image pairs.
This time too, only 1 partial reconstruction was formed:
Reconstruction 0: 114 images, 241832 points

  3. This time I again combined the circular-mission and single-grid-mission images, but gave the default values for

--matcher-neighbors (8) and --matcher-distance (0).

ALSO, I edited the file at line no. 87: config.append(“use_altitude_tag: False”)

==> The process completed fast and the side views of the building also came out well;
it had only 767 image pairs to match,
but it formed 2 partial reconstructions:
Reconstruction 0: 114 images, 245994 points
Reconstruction 1: 69 images, 171194 points

In the end, I think it took reconstruction 0 with 114 images forward to the next stage. Some lines from the terminal output:

…Running SMVS Cell
…Writing bundle (183 cameras, 245994 features)
…it skipped some views : Invalid camera
…Created 183 views with 114 valid cameras.
…Imported 114 undistorted images.

…Initialized 114 views (max ID is 182), took 5ms.
…Reading Photosynther file (183 cameras, 245994 features)
…Generating Pointcloud, Cutting surfaces for 114 views …

Is there any way to let it use both partial reconstructions, so that the roofs of the buildings also get included in the point cloud? Below is a screenshot of the roofs of the buildings:



I tried another similar dataset and got exactly what you said: 2 partial reconstructions were formed, with the larger being retained and the smaller one discarded.

I even tried changing use_altitude_tag to False: there were still 2 partial reconstructions formed.

I also tried setting the GPS altitude of all images to some constant number, but it was still treating the images of the two flight paths as separate and keeping only one of the two partial reconstructions.

Any leads on how to make ODM use both reconstructions to create the dense point cloud?
Is this something we need to change in OpenSfM, or can it be handled in ODM itself?

Thank you.


Still looking into it; stand by. It does appear to be an OpenSfM issue.