I’m going to visit a future office space to rent, and we didn’t want to bother with measurements; instead we’d simply take a cellphone (Samsung S23 FE + gimbal) and film around the office space so we can create a point cloud, take the measurements from it, and design/organize the office space.
My question is :
Is it possible to create a point cloud with a video from a Samsung S23 FE ?
Thank you in advance !
Trying to learn the limits of photogrammetry with WebODM
What if the camera locations were “faked”? Just compute relatively precise GPS locations from a single “faked” GPS location: measure 2, 3 or 4 feet in a known direction and compute a second (or a third) relatively accurate(?) GPS location, then take overlapping photos from those “fake” GPS positions. It would just be a matter of editing the EXIF data, right?
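The arithmetic behind that trick is just offsetting a base coordinate by a measured distance. A minimal sketch, using the usual small-offset approximation (about 111,320 metres per degree of latitude, longitude scaled by the cosine of the latitude); the base coordinate and offsets below are invented for illustration, and you’d still need a tool like exiftool or the piexif library to write the results into the photos’ EXIF:

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def offset_gps(lat, lon, north_m, east_m):
    """Return a new (lat, lon) shifted by measured offsets in metres.

    Small-offset approximation: fine for a few feet/metres, which is
    all the "faked EXIF" trick needs.
    """
    new_lat = lat + north_m / M_PER_DEG_LAT
    new_lon = lon + east_m / (M_PER_DEG_LAT * math.cos(math.radians(lat)))
    return new_lat, new_lon

# Example: arbitrary base point, second camera position 3 ft (0.9144 m) north
base = (52.0, 5.0)
cam2 = offset_gps(*base, north_m=0.9144, east_m=0.0)
print(cam2)  # latitude increases by ~8.2e-6 degrees
```

The relative spacing between the faked positions is what gives the reconstruction its scale; the absolute position can be anything plausible.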
Processing back then was done in a commercial application (I switched to WebODM later), but the results were very impressive. This gave a nicely scaled point cloud, though not georeferenced.
With CloudCompare I could place the “indoor point cloud” inside the “outdoor point cloud”. So although I’m not sure what the results would look like in WebODM, I know there are possibilities. It might be nice to test this again with WebODM’s current capabilities.
Unfortunately I’m not at liberty to share the inside results.
Since this could be a very common use case (solvable with good but expensive pro instruments, with questionable cheaper ones, or just the traditional way with a tape measure), it would be nice to draw on your experience, sort out a kind of workflow, and find an acceptable compromise. That could be useful for other ODM users, me first.
Forgive me if I try to recap in a few random thoughts:
for this use case, being georeferenced may not be important, but relative precision is
topo/survey instruments should be avoided, as this should be a relatively quick indoor process
math headaches should also be avoided, as those would lead to buying a good instrument instead
the final result should have an accuracy of, let’s say, below 0.5 cm; too much to ask?
as I read from your replies, there can be 3 approaches:
the first: compute in ODM and then scale as needed; this surely works in most cases, but you lose control of the precision, and sometimes scaling one side corrupts the other
the second: faking the EXIF from measured points; this may put a lot of work into measuring, taking pictures from the measured points, and editing the EXIF data
the third: using a “local” GCP grid; I’ve also heard recently from a colleague using Metashape with sparse GCPs measured locally
The problem then maybe becomes how to measure the GCPs and how to pass them to ODM.
What approach would you recommend?
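On passing GCPs to ODM: if I’m reading the ODM docs right, ground control points go in a plain-text `gcp_list.txt` whose first line names a coordinate system (a proj string or EPSG code) and whose remaining lines are `geo_x geo_y geo_z pixel_x pixel_y image_name`, with each GCP ideally marked in several images. For a purely local, tape-measured grid, one option is to pick any convenient projected CRS and treat your measurements as metre offsets from an arbitrary origin; every value below is invented for illustration:

```
EPSG:32633
100.000 200.000 0.000 1250 2100 IMG_0001.JPG
103.000 200.000 0.000 3310 2080 IMG_0001.JPG
103.000 200.000 0.000  840 1990 IMG_0002.JPG
100.000 204.000 0.000 1120  650 IMG_0002.JPG
```

Scale then comes from the tape-measured distances between the GCPs (here 3 m and 4 m along the two walls), even though the absolute position is meaningless.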
I’ve done a partial bedroom point cloud with my old Samsung S8, so I’d think it should be possible with your more modern phone too.
Georeferencing was an issue though: it did come out with the correct orientation, but I didn’t attempt any measurements, which would no doubt have been incorrect.
3D short-range LiDAR is not very expensive, and it is much more precise than 0.5 cm.
I have taken low-level photos of my project (30-40 feet) with a tilted camera, and I can see under a porch in WebODM, and through the void of the building interior to the other sides. But I’ve never had good luck walking around the building and having WebODM include those photos to get all the way around it. I gave up years ago; I should try it again.
Do I understand this thread correctly: you can use your phone to take “ground view” images, like street view, and feed them into the process along with RTK-corrected drone images, and WebODM can use the ground-view images to generate even better point cloud data? So you don’t only get a point cloud generated from above, but can also use the street-level views for proper facade generation?
You can go the photogrammetry route with a phone, but even if you get a good-quality scan there are often issues with orientation that are quite a pain: they often render sideways or upside down, and Potree won’t let you rotate them into the correct orientation (see the discussion here). This isn’t as much of an issue if you only need the model and are planning to post-process it on your computer.
For a quick scan from your phone that you can measure, I’d recommend looking at Polycam. On the iPhone at least, it is quite easy to scan a large space and then measure things on it or output a blueprint. There is even a “room scan” feature (on the iPhone at least) that outputs a nice clean floor plan with measurements and all.
I’d note, though, that I have an iPhone with LiDAR and I haven’t used the Android version of Polycam, but I’m pretty sure it works as well, and their options for doing stuff with the model are really good.
There are a bunch of other apps like Scaniverse and Sitescan, but I haven’t used them enough to provide an opinion and polycam works really well (and is mostly free).
Polycam also has a pretty good Gaussian splat editor that now lets you measure things within the splat, so that’s definitely worth looking into as well (it often makes super nice-looking models and is fun to play with).
One last note: if you do want to make a Gaussian splat that is scale-accurate, make sure you measure a few (obvious and straight) things in the scene while you are on site. Polycam now has a feature that lets you rescale the splat to accurate dimensions, but you need to know the real length of things in the scan to do that correctly.
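The rescaling step itself (whether Polycam does it for you or you do it in post) is just a uniform scale: multiply every coordinate by true length divided by the length measured in the model. A minimal sketch, with invented points and lengths:

```python
def rescale(points, measured_len, true_len):
    """Uniformly scale a point cloud so a known distance comes out right.

    points: list of (x, y, z) tuples in model units
    measured_len: length of a reference object as measured in the model
    true_len: its real-world length, in the unit you want the output in
    """
    s = true_len / measured_len
    return [(x * s, y * s, z * s) for x, y, z in points]

# Example: a doorway that is really 0.90 m wide measures 1.20 model
# units, so every coordinate shrinks by a factor of 0.75
cloud = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (1.2, 2.8, 0.0)]
print(rescale(cloud, measured_len=1.2, true_len=0.9))
```

This is also why measuring a couple of independent reference lengths on site is worth the minute it takes: a second reference lets you sanity-check that one uniform factor really fits the whole scan.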