I am trying to project bounding boxes defined in camera coordinates onto world coordinates, using the parameters stored in /opensfm after orthoimage generation with ODM and precise GCP information.
The issue is that I observe a slight global rotation of the projection compared with the (seemingly) more accurate projection based on Metashape (see the following animation).
I suspect the cause is some bias in how the camera origins and bearing vectors are extracted, but I am not sure.
I would highly appreciate any help in finding the cause and sharing possible solutions.
The detailed procedure of the projection is as follows:
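At its core, it does something like the sketch below (a simplified illustration rather than my exact code: the shot name, pixel coordinates, and ground height are placeholders, lens distortion is ignored, a "perspective" camera model is assumed, and the resulting point is still in OpenSfM's local topocentric frame, i.e. it still has to be georeferenced with reference_lla.json afterwards to obtain EPSG:32654 coordinates):

import json
import numpy as np
import cv2  # only used for Rodrigues (axis-angle -> rotation matrix)

# Load the first reconstruction produced by OpenSfM.
with open('opensfm/reconstruction.json') as f:
    reconstruction = json.load(f)[0]

shot_name = 'DJI_0001.JPG'                 # placeholder image name
shot = reconstruction['shots'][shot_name]
camera = reconstruction['cameras'][shot['camera']]

# OpenSfM stores the world-to-camera rotation as axis-angle plus a translation.
R, _ = cv2.Rodrigues(np.asarray(shot['rotation'], dtype=float))
t = np.asarray(shot['translation'], dtype=float)
origin = -R.T @ t                          # camera origin in the topocentric frame

# Pixel -> bearing vector (simple pinhole, no distortion correction).
w, h = camera['width'], camera['height']
size = max(w, h)
px, py = 2000.0, 1500.0                    # placeholder bbox corner in pixels
x = (px + 0.5 - w / 2.0) / size            # normalized image coordinates
y = (py + 0.5 - h / 2.0) / size
f_norm = camera['focal']                   # focal length normalized by max(w, h)
bearing_cam = np.array([x / f_norm, y / f_norm, 1.0])
bearing_cam /= np.linalg.norm(bearing_cam)
bearing_world = R.T @ bearing_cam          # rotate the bearing into the world frame

# Intersect the viewing ray with a horizontal plane at the plant height.
z_ground = 0.0                             # placeholder ground elevation in the local frame
s = (z_ground - origin[2]) / bearing_world[2]
ground_point = origin + s * bearing_world
print(ground_point)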
Let me add some more materials for (hopefully) a more active discussion.
The objective of our project is to locate rice plants in world coordinates.
The following image shows examples of the mapped bounding boxes in world coordinates (EPSG:32654).
As you can see, Metashape’s projections (red polygons) accurately surround rice plants while OpenSfM’s projections (blue polygons) are out of position.
As for the accuracy of the georeferencing, please see the following GCP error details and the projections.
Thanks for your suggestion.
I downloaded the latest source code (v2.8.4) from https://github.com/OpenDroneMap/ODM/archive/refs/tags/v2.8.4.zip and built a docker image on Windows 11.
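The image was built roughly as follows (assuming the gpu.Dockerfile shipped in the repository root; the exact command and tag may differ):

docker build -t odm284:gpu -f gpu.Dockerfile .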
The result was more or less identical to the previous … (see the following image. Red polygons: Metashape’s projection, dashed blue polygons: OpenSfM’s projections).
This time, the project was built by the following command with some customizations.
docker run -ti --rm -v F:/ODM/datasets:/datasets --gpus all odm284:gpu --project-path /datasets Project --pc-quality high --orthophoto-resolution 1.0 --ignore-gsd
I also compared the result with a run without any customization (dashed orange polygons in the image) to confirm that these options did not cause the difference.
And as a final run, would you be able to test it without GPU acceleration as well?
We’ve noticed some differences in reconstruction behavior between the CPU and GPU pipelines in the past, and I don’t know whether that influences the rotation you’re seeing or not.
I’ve just been looking at Google Earth images to see what date images are being used in WebODM, and noticed a slight rotation between GE and what is shown in WebODM. Could that be the issue here?
It looks like the trial plot boundaries, which appear to be surveyed, are rotated when reconstructed via OpenSFM/OpenDroneMap, irrespective of the background. Or at least I hope the boundaries were surveyed rather than traced from Google Earth imagery.
But yes, it’s interesting that the canvas has a slight difference between Google Earth and our basemap…
@kuniakiuto, can you please provide a bit more detail about how the boundaries of the plots were established? RTK survey for the corners of the plots? I’m trying to see how we can approach looking further into this, but in order to do so, we need to be confident that what we’re looking at is pretty unambiguous ground-truth data.
FYI, I generated the gcp file for ODM based on the marker position file exported by Metashape.
The marker position file is in XML format, exported via [Export]-[Export Markers] in Metashape.
Then, I converted the xml file to gcp_list.txt by using the following code.
Finally, I added ‘+proj=utm +zone=54 +ellps=WGS84 +datum=WGS84 +units=m +no_defs’ at the top of gcp_list.txt.
import xml.etree.ElementTree as ET
import csv

from pyproj import CRS, Transformer

filename_xml = 'marker_metashape.xml'
filename_out_csv = './gcp_list.txt'
label_list = ['#Label', 'X/Longitude', 'Y/Latitude', 'Z/Altitude']  # (unused) column labels

# Transform marker references from WGS84 lat/lon (EPSG:4326) to UTM zone 54N (EPSG:32654).
# With the default axis order, EPSG:4326 expects (lat, lon) as input.
crs_4326 = CRS.from_epsg(4326)
crs_32654 = CRS.from_epsg(32654)
transformer = Transformer.from_crs(crs_4326, crs_32654)

tree = ET.parse(filename_xml)
root = tree.getroot()

# Collect camera ids and labels (image names without extension).
camera_id_list = []
camera_label_list = []
for a in root.findall('./chunk/cameras/camera'):
    camera_id_list.append(a.attrib['id'])
    camera_label_list.append(a.attrib['label'])

# Collect marker ids, labels, and their reference coordinates converted to EPSG:32654.
marker_id_list = []
marker_label_list = []
marker_x_list = []
marker_y_list = []
marker_z_list = []
for a in root.findall('./chunk/markers/marker'):
    marker_id_list.append(a.attrib['id'])
    marker_label_list.append(a.attrib['label'])
    for b in a.findall('./reference'):
        print(b.attrib)
        lon = b.attrib['x']
        lat = b.attrib['y']
        z = b.attrib['z']
        gcp = transformer.transform(lat, lon)
        marker_x_list.append(gcp[0])
        marker_y_list.append(gcp[1])
        marker_z_list.append(z)

print(marker_id_list)
print(marker_label_list)
print(marker_x_list)
print(marker_y_list)
print(marker_z_list)

# Write one tab-separated row per marker observation:
# geo_x  geo_y  geo_z  im_x  im_y  image_name
# newline='' avoids blank lines from the csv module on Windows.
with open(filename_out_csv, 'w', newline='') as f:
    writer = csv.writer(f, delimiter='\t')
    for a in root.findall('./chunk/frames/frame/markers/marker'):
        m_id = a.attrib['marker_id']
        index = marker_id_list.index(m_id)
        m_label = marker_label_list[index]
        for b in a.findall('./location'):
            if 'valid' not in b.attrib:  # only use locations without a 'valid' attribute
                c_id = b.attrib['camera_id']
                x = b.attrib['x']
                y = b.attrib['y']
                m = marker_id_list.index(m_id)
                c = camera_id_list.index(c_id)
                temp = [marker_x_list[m], marker_y_list[m], marker_z_list[m],
                        x, y, f'{camera_label_list[c]}.JPG']
                writer.writerow(temp)
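The resulting gcp_list.txt (with the proj string added at the top) then looks roughly like this; the coordinates, pixel positions, and image names below are only illustrative:

+proj=utm +zone=54 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
384412.123	3946201.456	12.345	2103.4	1388.7	DJI_0012.JPG
384412.123	3946201.456	12.345	987.1	2210.3	DJI_0034.JPG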