I am interested in mapping 2D pixel values from the original images to the point cloud. The inverse mapping (point cloud to pixel values) is quite straightforward using the built-in methods in OpenSfM:
```python
shot = rec.shots[image]
pt2D = shot.project(pt3D)
pt2D_px = cam.normalized_to_pixel_coordinates(pt2D)
```
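For context, the geometry behind that forward projection can be sketched in plain NumPy for a simple perspective (pinhole, no distortion) model. The names `R`, `t`, and `f` below are my own (world-to-camera rotation, translation, and normalized focal length), not OpenSfM API; the normalized-to-pixel step assumes OpenSfM's convention of scaling normalized coordinates by the larger image dimension:

```python
import numpy as np

def project(pt3d_world, R, t, f, w, h):
    """Project a world point to pixel coordinates (pinhole, no distortion)."""
    p_cam = R @ np.asarray(pt3d_world, dtype=float) + t  # world -> camera
    x = p_cam[0] / p_cam[2]                              # perspective divide
    y = p_cam[1] / p_cam[2]
    xn, yn = f * x, f * y                                # normalized image coords
    # normalized -> pixel: scale by the larger image dimension, shift to centre
    s = max(w, h)
    return np.array([xn * s + (w - 1) / 2.0,
                     yn * s + (h - 1) / 2.0])
```

A point on the optical axis, e.g. `project([0, 0, 5], np.eye(3), np.zeros(3), 0.85, 640, 480)`, lands at the image centre `(319.5, 239.5)`, which is a quick sanity check for the conventions. Inverting this chain is exactly the problem below: the perspective divide discards depth, so it cannot be undone without recovering a depth value per pixel.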
However, I have not managed to find suitable methods to map a 2D pixel in an original image to the corresponding point in the 3D point cloud. From my limited experience with Metashape, there is a direct approach for this mapping there; in OpenSfM, I have been unable to accomplish it even indirectly.
I have tried the following:
```python
import open3d as o3d
from opensfm import dataset

nube = o3d.io.read_point_cloud('./opensfm/undistorted/openmvs/scene_dense_dense_filtered.ply')
data = dataset.DataSet('./opensfm')
rec = data.load_reconstruction()[0]  # load_reconstruction() returns a list
shot = rec.shots[image]
pose = shot.pose
cam = shot.camera
pt2D = cam.pixel_to_normalized_coordinates(pt2D_px)
bearing = cam.pixel_bearing(pt2D)
t3D_world = pose.inverse().transform(bearing)
I understand I need to scale the bearing by the depth of the point before transforming it into world coordinates, but the documentation gives no details on how to obtain the correct depth for a given pixel.
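One way to recover a depth without a rendered depth map is to cast the camera ray into the dense cloud and take the nearest cloud point lying close to the ray. The sketch below is plain NumPy, not an OpenSfM API; it assumes `origin` is the camera centre in world coordinates (e.g. from the shot's pose) and `bearing` is the unit view direction already rotated into world coordinates, and the `max_perp_dist` threshold is an arbitrary tolerance you would tune to your scene scale:

```python
import numpy as np

def depth_along_ray(points, origin, bearing, max_perp_dist=0.05):
    """Estimate depth at a pixel from cloud points lying near the camera ray.

    points:  (N, 3) world-space point cloud (e.g. np.asarray(nube.points))
    origin:  (3,) camera centre in world coordinates
    bearing: (3,) unit view direction in world coordinates
    """
    v = points - origin                    # vectors from camera to each point
    depths = v @ bearing                   # signed depth along the ray
    perp = np.linalg.norm(v - np.outer(depths, bearing), axis=1)
    mask = (depths > 0) & (perp < max_perp_dist)  # in front of camera, near ray
    if not mask.any():
        return None                        # ray misses the cloud
    return depths[mask].min()              # first (closest) surface hit

# With a depth in hand, the world point is simply:
# pt3D_world = origin + depth * bearing
```

A brute-force pass like this is O(N) per pixel; for many pixels a KD-tree or Open3D's ray-casting facilities would be faster, but the geometry is the same. Alternatively, if the OpenSfM/OpenMVS pipeline has written per-shot depth maps, reading the depth at `pt2D_px` directly would avoid the search entirely.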