Mapping 2D pixel values to 3D point cloud coordinates

Hello.

I am interested in mapping 2D pixel values from the original images to the point cloud. The inverse mapping (point cloud to pixel values) is quite straightforward using the built-in methods in OpenSfM:

# Forward mapping: 3D world point -> 2D pixel in a given shot.
shot = rec.shots[image]
cam = shot.camera
pt2D = shot.project(pt3D)                             # normalized image coordinates
pt2D_px = cam.normalized_to_pixel_coordinates(pt2D)   # pixel coordinates

However, I have not managed to find suitable methods to map a 2D pixel in an original image back to the corresponding point in the 3D point cloud. From my limited experience with Metashape, there is a direct approach for this mapping there, but in OpenSfM I have not been able to accomplish it even indirectly.

I have tried the following:

import open3d as o3d
from opensfm import dataset

nube = o3d.io.read_point_cloud('./opensfm/undistorted/openmvs/scene_dense_dense_filtered.ply')
data = dataset.DataSet('./opensfm')
rec = data.load_reconstruction()[0]
shot = rec.shots[image]
pose = shot.pose
cam = shot.camera
pt2D = cam.pixel_to_normalized_coordinates(pt2D_px)   # pixel -> normalized image coordinates
bearing = cam.pixel_bearing(pt2D)                     # unit viewing ray in camera coordinates
t3D_world = pose.inverse().transform(bearing)         # world point at depth 1 along the ray

I understand that the bearing needs to be scaled by the depth of the point, but the documentation gives no details on how to obtain the correct depth for a given pixel.
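
Just to make the missing step concrete: if I could obtain a per-pixel depth from somewhere (a depthmap, or an intersection with the dense cloud), I assume the back-projection would look roughly like this untested sketch, reusing pose and bearing from above (depth is a hypothetical scalar in the reconstruction's units):

# Untested sketch: back-projection once a per-pixel depth is known.
# 'depth' is a hypothetical distance along the viewing ray, in reconstruction units.
pt3D_camera = depth * bearing                        # point on the ray, camera coordinates
pt3D_world = pose.inverse().transform(pt3D_camera)   # camera -> world coordinates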

Thanks.

Welcome!

This seems like very interesting work. May I ask what you’re looking to accomplish with this “backwards” mapping from 2D to Point Cloud?

Thank you for the interest 🙂

I have several ideas in mind. One of them is to annotate a single image, using a bounding box for example, and use the point cloud to annotate the overlapping images automatically.
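
Roughly, what I have in mind for the bounding box idea is something like this untested sketch (it reuses rec and nube from my first post; the image names, box_a and box_b are hypothetical):

import numpy as np

# Untested sketch: propagate a 2D bounding box from one image to another via the dense cloud.
shot_a = rec.shots['image_a.jpg']                 # hypothetical image names
shot_b = rec.shots['image_b.jpg']
box_a = (100, 100, 400, 300)                      # hypothetical (xmin, ymin, xmax, ymax) in shot_a pixels
xmin, ymin, xmax, ymax = box_a

# Keep the dense points whose projection in shot_a falls inside the box.
selected = []
for p in np.asarray(nube.points):
    if shot_a.pose.transform(p)[2] <= 0:          # skip points behind the camera
        continue
    x, y = shot_a.camera.normalized_to_pixel_coordinates(shot_a.project(p))
    if xmin <= x <= xmax and ymin <= y <= ymax:
        selected.append(p)

# Re-project the selected points into shot_b and take their bounding box there.
proj_b = np.asarray([shot_b.camera.normalized_to_pixel_coordinates(shot_b.project(p))
                     for p in selected if shot_b.pose.transform(p)[2] > 0])
if len(proj_b):
    box_b = (proj_b[:, 0].min(), proj_b[:, 1].min(),
             proj_b[:, 0].max(), proj_b[:, 1].max())

Occluded points and outliers would of course leak into the box, which is exactly why I need a reliable per-pixel 2D-to-3D mapping first.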

That’s really powerful stuff! Unfortunately, I’m not going to be a great resource at this point as my understanding of the code is pretty poor. Maybe someone more knowledgeable will pop in soon!

Would you be interested in pushing these abilities upstream once you get them working? I think they’d be a huge asset for our community!

I will be happy to.

Awesome!

From a cursory search, it looks like this was asked of OpenSfM a while back, and @Yannoun provided some nice pointers. Reading the above, though, this may all be old news to you.

Indeed, I had already found that answer. Unfortunately, it offers no guidelines or efficient implementation for the depth estimation, nor for the intersection with the point cloud.
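
For what it is worth, the most naive intersection I can think of is a brute-force sketch (untested, certainly not efficient): project every dense point into the image and keep the one that lands closest to the query pixel. It reuses nube, shot and cam from my snippets above, with pt2D_px as the query pixel:

import numpy as np

# Brute-force sketch (slow, untested): associate a pixel with the dense point
# whose projection falls closest to it in the image.
points = np.asarray(nube.points)                  # Nx3 world coordinates
best_pt, best_err = None, np.inf

for p in points:
    if shot.pose.transform(p)[2] <= 0:            # skip points behind the camera
        continue
    px = cam.normalized_to_pixel_coordinates(shot.project(p))
    err = np.linalg.norm(np.asarray(px) - np.asarray(pt2D_px))
    if err < best_err:
        best_err, best_pt = err, p

if best_err < 3.0:                                # pixel threshold, arbitrary choice
    depth = shot.pose.transform(best_pt)[2]       # z-depth of the matched point
    print('matched 3D point:', best_pt, 'depth:', depth)
else:
    print('no dense point projects near this pixel')

This is O(N) per pixel and ignores occlusions (the closest projection could belong to a farther, occluded point), so a real implementation would need a spatial index and a visibility check, but it shows the kind of intersection I was hoping existed as a built-in.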

Architecturally, do you think that everything you need to accomplish is doable from within OpenSfM? That is, you would just need to implement new functions?

I believe so; however, I have not yet managed to implement a working proof of concept.
