I was looking at ODM360 and was quite excited to see what's going on in the project. The picture and the parts list show that multiple Raspberry Pi Zero boards are required to run ODM360, so I had a question I wanted to ask:
Can a multi-camera adapter be used to make things simpler? If not, please point me to something I can read to enhance my knowledge in this domain.
Mmm, I’m not sure how well it works in practice. They seem to be hedging quite conservatively in their writing about it.
Further, only one camera at a time is active so the image acquisition time per camera is a concern. How long does it take to acquire, store, deactivate, switch module, activate, acquire, etc?
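To put rough numbers on that concern, here's a minimal sketch of how the per-camera steps add up on a sequential adapter. Every timing constant is a made-up placeholder, since I don't have measurements for these boards:

```python
# Back-of-envelope timing for a sequential multi-camera adapter: only one
# camera is active at a time, so a full 360 set is the sum of per-camera
# steps. All timings here are illustrative assumptions, not measurements.

SWITCH_S = 0.05   # assumed time to deactivate, switch the mux, reactivate
ACQUIRE_S = 0.20  # assumed exposure + sensor readout time
STORE_S = 0.30    # assumed time to write the image out

def full_set_seconds(num_cameras: int) -> float:
    """Time to capture one frame from every camera, one after another."""
    return num_cameras * (SWITCH_S + ACQUIRE_S + STORE_S)

# A six-camera rig no longer captures simultaneously: the first and last
# frames of one "set" are separated by seconds, so anything that moves
# in between is inconsistent across views.
print(f"6-camera set: {full_set_seconds(6):.2f} s")
```

Even with optimistic numbers, the set is spread over seconds rather than being simultaneous, which is exactly why timing matters here.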
If there’s a good way to reduce the number of Pi Zero boards, that would be great (though Pi Zeros are cheap). That particular board is discontinued and not compatible with the High Quality Pi camera. This one is intriguing, but sequential:
Timing is why we have a dedicated board per camera. Solutions like this are pretty exciting: uctronics.com/arducam-12mp-2-synchronized-stereo-camera-bundle-kit-for-raspberry-pi-two-12-3mp-imx477-camera-modules-with-cs-lens-and-arducam-camarray-stereo-camera-hat.html
But very expensive. They’re good engineering solutions, but outside the scope of the project for now.
Also, the 360 configuration is just one configuration. There are lots of other options…
Thank you so much for letting me know about all this. Furthermore, is there anything I can help with on the project? It would be a good learning experience for me.
Testing and help with documentation are always a good place to start, and you don’t need to outlay all the hardware to test: just a parent Pi and a child Pi with a camera are enough to start testing.
Does it work with a point-and-shoot 360 camera, like those from Ricoh or Insta360?
For sure, you just need to change OpenDroneMap’s camera lens to
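(For anyone reading along: as I understand ODM's CLI, the relevant setting is the `--camera-lens` option, which accepts a `spherical` value for equirectangular 360 imagery. The paths and project name below are placeholders; verify the flag and accepted values against your installed ODM version.)

```shell
# Illustrative only -- dataset path and project name are placeholders.
# Forcing the lens model to "spherical" tells ODM to treat the images
# as equirectangular 360 frames rather than perspective photos.
docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
    --project-path /datasets my-360-project \
    --camera-lens spherical
```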
Here is an elevation model of my front lawn, sidewalk, and driveway derived from 360 cameras:
There isn’t that much slope in reality: that’s a function of a small dataset with relatively large GPS errors, but that’s one of the problems we are solving. The other is that the optics and overall quality of these cameras are pretty trash unless you put a lot of money into it, or unless you build it yourself (our approach).
With the low-cost COTS solution, though, we need a lot of shots to get even this result:
Is it possible to use GCPs?
A friend of mine owns a property with some small old underground mine workings, and for the last few weeks we have been trying to figure out how to build a model from the surface to the underground.
So far, we think the best approach is to build the surface model with a drone and a separate model of the underground structures with a 360 camera, then merge both models in Blender. With GCPs on the surface and underground, we don’t expect that to be an impossible task. That’s why I’m asking whether it’s possible to use GCPs with 360 cameras.
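For reference, ODM reads GCPs from a `gcp_list.txt` file: the first line is the projection, and each following line is `geo_x geo_y geo_z pixel_x pixel_y image_name`. Nothing in that format is tied to a particular lens model, so in principle the same mechanism should apply to 360 frames, though I haven't verified that myself. A sketch with made-up coordinates and filenames:

```text
EPSG:4326
-71.06010 42.35810 12.0 1520.0  880.0 surface_001.jpg
-71.06025 42.35818 12.1 2310.0 1105.0 pano_014.jpg
```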
A second option is to carry the same drone by hand into the mine. By using the same camera, we believe it is possible to build the model in a single process.
Every option will require even illumination, and that’s a challenging task. So there is a lot to be solved before we start this project.
I hope someone can bring new ideas.
The DEM looks nice, I think it can be useful in places where drones cannot do the job.
I think, for some uses, a sparse point cloud will be sufficient.
With the whole array built out, I expect a sparse cloud will be enough for most purposes.
I assume so. I cannot see why not, though I haven’t actually tried it.