It has been suggested that I share one of the first 3D models I have made using the WebODM software, together with some detail on how I got started in this area and how the whole process was done, from data capture to finished result.
Firstly, some info on viewing the model in the P3D viewer:
The images below indicate which is the correct render setting and then the optional rotate feature once the model has loaded. The model link is below that.
You want to use the ‘Shadeless’ setting as shown in the first picture, with the second chequered ball selected. The rotate feature in the next image is then a nice way to be carried clockwise around the model at any angle you choose. It takes a second or so to start, and the spacebar pauses it, as does using the mouse to change the viewing position.
The left mouse button is tilt and rotate, the scroll wheel is zoom and the right button is drag. Double left-click will take you back to the starting position.
The link to the model is here:
Regarding how I captured the 400+ images required to make this model: I used a Mavic Mini drone that I acquired in 2020, which has a camera that shoots 2.7K video and 12 MP stills, modest by the latest standards.
I have been using this to make videos of the local landscape around where I live in far west Cornwall, UK. An example of one of these is here called ‘This Enchanted Land’:
I have also become interested in archaeology, since this area has the highest density of Neolithic and later monuments of any area in the UK, and there are many stones and marks on the landscape attesting to that. So I started to explore photogrammetry: extracting useful information and measurements from photographs and building 3D models of single small structures. This led to the notion of creating models of much larger areas and sites that would require hundreds of images to construct.
Enter the drone, which was starting to feel a bit neglected. The only way to gather suitable imagery for a large site is to use an automated system that will ensure a suitable overlap (approx. 80%) between the images for the 3D model-building process. There were two contenders for that with my Mavic Mini: Drone Harmony or DroneLink. Both are good systems, but DH was not fully realised for use with my iPad or iPhone, so in the end I went for DroneLink, which turned out for me to be the better choice.
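To get a feel for what that 80% overlap means on the ground, here is a rough back-of-envelope sketch of my own; these are not DroneLink's internals, and it assumes a simple pinhole camera over flat terrain, treating the quoted field of view (about 83 degrees for the Mavic Mini; check your own drone's spec) as the horizontal FOV:

```python
import math

def footprint(altitude_m, fov_deg=83, aspect=4 / 3):
    """Approximate ground footprint (width, height) of one nadir photo.
    fov_deg is treated as the horizontal field of view."""
    width = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return width, width / aspect

def photo_spacing(altitude_m, overlap=0.8, fov_deg=83):
    """Spacing between successive shots (and between flight lines)
    needed to achieve the given fractional overlap."""
    w, h = footprint(altitude_m, fov_deg)
    return w * (1 - overlap), h * (1 - overlap)
```

At 80% overlap only a fifth of each footprint is new ground, so the drone needs a shot every few metres; that regularity is exactly why an automated planner beats flying by hand.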
This lets you create a flight plan on a desktop in a very comprehensive way involving waypoints, map layouts or orbitals, or combinations of those, as well as a range of other more advanced mapping options (such as KML file imports from Google Earth Pro).
Then there’s the DL mobile app, in which you run the missions and flight plans you have made, and which also lets you edit the plans in the field.
So, to run a mission, you set up the drone in the DJI drone app in the normal way: load your geofencing licence if required, set camera exposure settings and do any calibrations, etc.
Then open up the DL app and select your flight plan. In most cases it is then best to close down the DJI app, although that seems to vary with which phone or tablet you are using.
Hit Run and the drone will take off and fly to its start point. My stone circle map flight plan had a duration of about 22 minutes, so at any time (or when Return to Home triggers at 20% battery) you can pause the mission, fly the drone back manually (or with auto RTH), swap over the battery and take off again (using the DJI app to initiate), then tap Resume. The drone will fly back to exactly where it left off and continue its plan and image capture.
I tested out a plan on a local burial chamber to get used to the software, combining vertical shots, where the gimbal is almost vertically down (80 degrees, in fact), with orbital shots taken around a series of circles of differing heights, diameters and gimbal angles to give greater detail to the model.
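For anyone curious about the geometry of those rings, here is a minimal sketch of how they might be generated. The coordinates are local metres around the subject, not DroneLink's waypoint format, and the gimbal maths simply aims the camera at the centre of the ring on the ground:

```python
import math

def orbit_ring(radius_m, altitude_m, n_points=12):
    """One orbital ring: n_points waypoints on a circle, each carrying the
    gimbal pitch (degrees, negative = down) that aims the camera at the
    ring's centre on the ground. Coordinates are local metres, not GPS."""
    pitch = -math.degrees(math.atan2(altitude_m, radius_m))
    return [
        {
            "x": radius_m * math.cos(2 * math.pi * i / n_points),
            "y": radius_m * math.sin(2 * math.pi * i / n_points),
            "alt": altitude_m,
            "gimbal_pitch": round(pitch, 1),
        }
        for i in range(n_points)
    ]

# Three rings of differing height, diameter and (hence) gimbal angle:
rings = [orbit_ring(r, a) for r, a in [(25, 10), (18, 16), (12, 22)]]
```

The higher, tighter rings look more steeply down at the subject, which is what fills in the tops of stones that near-horizontal shots miss.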
The DroneLink web-based application lets you preview the plan and here is a screen video of such a preview for my burial chamber test flight running at 4x speed:
I have used my results in the field to refine the plan for optimum output. Lowering the drone height gives better image resolution of the structures being captured, but a smaller coverage area per shot, so covering the whole area will require more images. So there’s a balance to be found between the level of detail in the final result and the number of photos your computer will need to crunch. Be aware of lighting too. Some say that diffuse lighting from cloud cover gives better results, in that it reduces contrast that might be too high for the algorithms to handle. In my case there was bright sunlight and shadows, but the model seems to have captured it all OK, with acceptable contrast and shadow detail in the final result.
Having run the flight mission, I quickly scan through the resulting images and remove any that might produce strange results in the model. Typically these are ones where an important object is in shadow or blocked by another (an overhanging tree, for example), so that a lower object has not got sufficient images to render it well. The model would end up showing the higher object transplanted onto the lower one. In such cases it might be necessary to take a few images from ground level of the obscured structure so that there is at least something for the algorithms to work with.
Using a new Mac Mini (with its M1 chip), the image limit for processing is around 500 images so far, if I set the Docker resources to 14 GB (of 16) RAM and a 4 GB swap file and also reduce the image size in WebODM to 1500 px from a default value of about 2000. I could probably go lower, since the result is screen-based, and would happily reduce to 1000 px if required, which would significantly increase the number of images I can crunch. Note that your disk image will fill quite quickly with gigabytes of data, so it’s worth purging every so often, but seek guidance on the best way to do this and don’t use the Clean/Purge option in Docker unless you have good reason to, as that removes more than one needs!
The WebODM output I need from the download options is the Textured Model, which consists of an Obj, an Mtl and a Conf file, plus a folder of texture images in PNG format.
I then import only the Obj file into Blender, for the purpose of trimming unwanted parts of the model. In the stone circle example, there were sections around the edges that were not needed, so I trimmed it to a circle, although in fact the stone circle is not a precise circle.
After editing, I need to export a range of files and make a compressed zip file from them to be imported into a viewer like P3D or Sketchfab. For this, one needs to save the edited Blender file and then, to the same folder, export from Blender an Obj file together with its material file, and also a folder of textures (see pic). This is done using External Data - pack and unpack to the same folder.
This collection of files and the texture folder are then compressed into a zip file, resulting in a size considerably less than 100 MB, which is handy as the free viewer account limit is typically 100 MB.
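The zipping step can be scripted if you do it often. Here is a minimal sketch (the folder layout and file names are assumptions based on my own export; adjust to match yours) that bundles the Obj, Mtl and PNG textures and reports whether the result fits under the viewer's limit:

```python
import zipfile
from pathlib import Path

def pack_model(folder, out_zip="model.zip", max_mb=100):
    """Zip the exported Obj/Mtl files plus any PNG textures (including
    those in a textures subfolder) for upload to a viewer."""
    folder = Path(folder)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in folder.rglob("*"):
            if f.is_file() and f.suffix.lower() in {".obj", ".mtl", ".png"}:
                zf.write(f, f.relative_to(folder))
    size_mb = Path(out_zip).stat().st_size / 1e6
    print(f"{out_zip}: {size_mb:.1f} MB "
          f"({'OK' if size_mb < max_mb else 'over limit'})")
    return size_mb
```

Keeping the paths relative to the export folder preserves the layout the viewer expects, with the texture folder alongside the Obj and Mtl files.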
So, to summarise, the steps required to obtain my 3D model are:
Design flight plan in DroneLink
Capture images at the site using the flight plan
Remove or replace any poor images
Run WebODM with suitable settings
Export the ‘Textured model’
Import Obj file to Blender and edit model
Save Blender file and export Obj, Mtl, and Texture files
Compress to a zip file
Import to viewer
There you have it. If anyone is enticed into this sort of work, have fun; the results are worth the effort!