Hi
I am new to WebODM.
I am interested in creating 3D models of interior spaces, perhaps similar to Matterport. I have also experimented with benaco.com, which is similar (though not as impressive) and allows you to use your own images. I have also tried Metashape, but the texture building takes too long to be worthwhile.
Using WebODM, I managed to create a model of a small house from 27 spherical 360º images taken at close intervals, and I was very impressed by the short amount of time it took to render. The model looked great from the outside; however, I couldn’t really navigate inside the space.
I am wondering if anyone has mastered this or at least has some best practice rendering tips to share.
Also, is there a gallery page on here so we can see some masterpieces?
Thanks
Rich
This is fantastic. I did a bit of testing inside my house in advance of doing the interior of an orangutan exhibit, and got pretty good results too.
The trick for viewing the model is a good viewer: WebODM as a viewer is OK, but it is meant for validation, not necessarily as a fully navigable space. You can try meshlab.org as an alternative, or the best FOSS tool, blender.org. Blender has a steep but worthwhile learning curve, and there are plenty of great videos on YouTube that will take you through every aspect in detail.
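If you want a quick way to pull the WebODM output into Blender without clicking through menus, something like this from the Scripting workspace works. This is a minimal sketch only, assuming Blender 3.x (where the built-in OBJ import operator is available) and that you've downloaded the textured model as an OBJ; the path is just a placeholder.

```python
# Minimal sketch: run from Blender's Scripting workspace (Blender 3.x).
# Assumes you downloaded the WebODM textured model as an OBJ;
# the path below is a placeholder, point it at your own file.
import bpy

obj_path = "/path/to/odm_textured_model.obj"  # placeholder path

# Import the textured mesh (materials/UVs come in via the .mtl sitting next to it)
bpy.ops.import_scene.obj(filepath=obj_path)

# After import, switch the viewport to Material Preview shading to see the
# textures, then use View > Navigation > Walk Navigation to move through
# the interior like a first-person viewer.
```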
Thanks for the response. I use MeshLab as a viewer. I'll take a deep breath and take a look at Blender.
I’d love to see what you accomplished with the orangutan enclosure.
Just collected it yesterday and processed it today. Not all the photos are integrated yet, but it turned out pretty well:

Hi Smathermather
…I got distracted…
How did you stitch/blend the orangutan enclosure? It looks awesome. Did you use a rectilinear lens, and how many images did you combine?
I am a Mac user (with Parallels and Boot Camp), which isn't great for 3D applications. I did get an Nvidia eGPU but couldn't get it to work properly. I might have to get a PC to move forward.
Ha! Totally understand. So far I’ve just integrated the images from a small drone. We also have floor-level wide-angle shots.
As to the number question, it was more than we needed: probably 300+ from the air, and maybe 500 on the floor. That's the challenge of not flight planning, but it's better to overdo it than underdo it.
I will post a followup when we integrate the floor images.
Hi,
Thanks for the support…
I'm able to generate interior sparse 3D models from spherical images taken with a handheld 360 camera, but the scale of those 3D models is very different from the actual scale of the captured area. Maybe this is because the images don't have GPS values, but I do have local coordinates, i.e. X Y Z positions, for each image. Using this info, how can I achieve the actual scale for the generated 3D models?
We’d need to implement Scale Constraints to fix this during reconstruction…
Maybe you can do it afterwards using MeshLab or Blender?
Or maybe you can tweak the cloud using CloudCompare and then let the rest of the reconstruction pipeline go off of that tweaked cloud?
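If you'd rather script the scaling than eyeball it, here's a rough sketch of the idea. This is my own sketch, not something ODM does for you: the file names are placeholders, and it assumes you can export the reconstructed camera centres and match them one-to-one with your known local positions.

```python
# Rough sketch (placeholder file names): estimate a uniform scale factor by
# comparing distances between your known per-image positions (e.g. ARCore/SLAM
# odometry) and the corresponding reconstructed camera centres, then apply that
# factor to the model in CloudCompare or MeshLab.
from itertools import combinations

import numpy as np

# One row of X, Y, Z per image, in the SAME order in both files.
known_positions = np.loadtxt("local_positions.csv", delimiter=",")
reconstructed_positions = np.loadtxt("reconstructed_cams.csv", delimiter=",")

def mean_pairwise_distance(points: np.ndarray) -> float:
    """Average distance over all pairs of points."""
    return float(np.mean([np.linalg.norm(a - b) for a, b in combinations(points, 2)]))

# Ratio of real-world distances to reconstructed distances = uniform scale factor.
scale = mean_pairwise_distance(known_positions) / mean_pairwise_distance(reconstructed_positions)
print(f"Multiply the model by {scale:.4f} to match your local coordinates")
```

If you also need the rotation and translation, not just the scale, a full similarity alignment (Umeyama/Procrustes) between the two camera-position sets would give you that, and CloudCompare's point-pair alignment tool can do much the same interactively.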
Thanks for the reply.
Currently I'm using CloudCompare to scale the cloud. I wanted to know whether local coordinate X Y Z values, i.e. odometry from ARCore or SLAM, can solve this problem or not.
Thanks…