Following from the post,
Hello.
I am interested in mapping 2D pixel values from the original images to the point cloud. The inverse mapping (point cloud to pixel values) is quite straightforward using the built-in methods in OpenSfM:
shot = rec.shots[image]
cam = shot.camera
pt2D = shot.project(pt3D)  # 3D point -> normalized image coordinates
pt2D_px = cam.normalized_to_pixel_coordinates(pt2D)  # -> pixel coordinates
However, I did not manage to find suitable methods to map a 2D pixel in the original image to the corresponding point in the 3D point cloud. From my limited experience in Metashape, …
pose = shot.pose
but shot does not have an attribute called pose. Here are the attributes of shot:
{'rotation': [1.5854896963247729, -0.12476936903940479, 0.008493908445915507],
 'translation': [47.36082101311921, 537.9664018217298, 13.741220414358704],
 'camera': 'v2 5760 2880 spherical 0.85',
 'orientation': 1,
 'capture_time': 1639826222.0,
 'gps_dop': 10.0,
 'gps_position': [-8.199110207800036, -2.62234557906595, 540.1094722272828],
 'vertices': [],
 'faces': [],
 'scale': 1.0,
 'covariance': [],
 'merge_cc': 0}
ThorZ
8 July 2022 15:21
#2
The Shot class contains a pose member, defined by a translation and a rotation.
Alternatively, you can check an older OpenSfM version, which implements Shot and Pose in pure Python; that code is easier to follow (I'm not sure whether it still works with a newly created reconstruction.json, though).
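For reference, here is a rough sketch of how you might go from a pixel back towards the point cloud using that pose. A single pixel only defines a viewing ray, so this simply picks the sparse reconstruction point closest to that ray. It assumes the reconstruction is loaded as a Reconstruction object (e.g. via DataSet(...).load_reconstruction()), so shots expose .pose and .camera, and that the camera methods pixel_to_normalized_coordinates() and pixel_bearing() are available as in recent OpenSfM versions; treat it as a starting point, not a tested recipe.

import numpy as np

def pixel_to_world_ray(shot, pixel_px):
    # Viewing ray of a pixel in world coordinates: (origin, unit direction).
    cam = shot.camera
    norm = cam.pixel_to_normalized_coordinates(pixel_px)  # assumed camera method
    bearing_cam = cam.pixel_bearing(norm)        # unit vector in the camera frame (assumed method)
    R = shot.pose.get_rotation_matrix()          # world-to-camera rotation
    direction = R.T.dot(bearing_cam)             # rotate the bearing into the world frame
    origin = shot.pose.get_origin()              # camera centre in world coordinates
    return origin, direction / np.linalg.norm(direction)

def closest_point_to_ray(rec, origin, direction):
    # Reconstruction point with the smallest perpendicular distance to the ray.
    best_id, best_dist = None, np.inf
    for point_id, point in rec.points.items():
        v = np.array(point.coordinates) - origin
        dist = np.linalg.norm(v - v.dot(direction) * direction)
        if dist < best_dist:
            best_id, best_dist = point_id, dist
    return best_id, best_dist

shot = rec.shots[image]
origin, direction = pixel_to_world_ray(shot, np.array([u, v]))  # u, v: pixel coordinates
point_id, dist = closest_point_to_ray(rec, origin, direction)
pt3D = np.array(rec.points[point_id].coordinates)

If you need the actual surface point rather than the nearest sparse point, you would intersect the ray with the dense point cloud or the mesh instead.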