I’m working on a multispectral dataset, but the camera we used is from a firm named Hiphen, and each image consists of six different spectral bands, which differ from the cameras listed in the docs.
I uploaded and processed the MS images, but in the report I couldn’t actually see any ortho image.
Later, I opened the file in QGIS and was able to see the processed image. However, I noticed multiple overlays on the right side of the image, and the DSM and DEM outputs were not accurate.
I also tried to generate contours from the MS ortho, but the results were not accurate.
I’m attaching the MS image I processed. Do you have any suggestions for processing a better ortho, or is the problem due to insufficient data?
I do think it might be a problem with the waypoints, but I’m not quite sure. For reference, I’m attaching the flight path details.
I tried to attach the links directly but couldn’t, so I’m listing them here:
[https://ibb.co/x6B5sjB https://ibb.co/G0cSq3W https://ibb.co/zNb5HH2]
I hope I can get a few suggestions or corrections so I can process a better ortho. If the problem is with the flight path or something else, please let me know so I can be prepared for the next flight.
Your work has been amazing. Thank you!
I was able to get this info. Also, is there a place or link where I can find more information about multispectral image analysis in WebODM, so that I can distinguish the layers based on their spectral ranges?
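In case it helps to show what I mean by distinguishing the layers: once the individual band layers are available, a vegetation index like NDVI can be computed from the band arrays. A minimal sketch with made-up band values (in practice they would be read from the multispectral orthophoto, e.g. with rasterio):

```python
import numpy as np

# Hypothetical red and near-infrared band arrays; real values would come
# from the exported orthophoto bands.
red = np.array([[0.10, 0.20], [0.15, 0.25]])
nir = np.array([[0.50, 0.60], [0.45, 0.55]])

# NDVI = (NIR - Red) / (NIR + Red), with a guard against division by zero.
ndvi = np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))
print(ndvi)
```

The same pattern works for other band-ratio indices; only the band selection changes.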
We have processed a few more multispectral datasets using WebODM, and the results weren’t satisfactory. I tried another piece of software to process the images, and it was able to produce a decent ortho.
During this process, I noticed that the problem with stitching orthos in ODM may be due to one of the following reasons.
The existing algorithms might be matching the tie points of the images in a random order rather than a sorted one.
Since a multispectral capture produces multiple spectral images per shot, a lack of coordination when aligning the bands might be a reason.
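To illustrate what I mean by aligning the bands: a common first step is estimating the translation between two bands with phase correlation. A toy sketch on synthetic data (this is just the general idea, not a claim about how ODM does it internally):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the (dy, dx) to roll `moving` by so it aligns with `ref`,
    using phase correlation."""
    f1, f2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = f1 * np.conj(f2)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak location into a signed shift.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                   # stand-in for one band
moving = np.roll(ref, shift=(3, -5), axis=(0, 1))  # same band, offset
print(estimate_shift(ref, moving))           # correction to apply to `moving`
```

Real band misalignment also includes rotation and lens differences, so a full solution would need more than a translation estimate, but even this would catch gross offsets.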
I’ve also noticed a problem with georeferencing: during processing, the geo-coordinates of the multispectral images weren’t as accurate as those of the RGB data. This might be the critical issue preventing the ortho from being built.
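As a rough way to quantify this, the per-band geotags for the same capture can be converted to decimal degrees and the spread compared. The DMS values below are made up for illustration (real ones would come from each band image’s EXIF):

```python
def dms_to_decimal(deg, minutes, seconds):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return deg + minutes / 60.0 + seconds / 3600.0

# Hypothetical latitude tags for the six band images of one capture event.
band_lats = [
    (17, 23, 45.12),  # band 1
    (17, 23, 45.11),  # band 2
    (17, 23, 45.30),  # band 3: outlier hinting at a geotag problem
]
decimals = [dms_to_decimal(*t) for t in band_lats]

# ~111,320 m per degree of latitude; a multi-metre spread between bands of
# the same shot would explain poor georeferencing.
spread_m = (max(decimals) - min(decimals)) * 111_320
print(f"latitude spread across bands: {spread_m:.2f} m")
```

If the spread between bands of a single capture is on the order of metres, that alone could account for the inaccurate ortho.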
Apart from this, an additional feature such as aligning the images beforehand and splitting the process into multiple steps could also save processing time. With that approach, we would be able to identify where the problem is.
These are just suggestions based on my observations during processing; I hope they help. I’d be glad to hear any suggestions or iterations for me to follow during processing.
Additionally, it would be better to manually or semi-automatically adjust the reflectance parameters based on the actual reflectance on the day the images were acquired, rather than fully automating it. Most multispectral sensor providers supply reflectance boards for calibrating the reflectance data. To semi-automate: take the images that contain the reflectance board, extract the calibration parameters from them, and substitute those values during calibration.
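For example, the semi-automated step could look roughly like this: sample the pixels covering the board, then scale the band so the board reads its known reflectance. The panel reflectance and DN values below are assumed, not from a real sensor:

```python
import numpy as np

# Assumed values for illustration: panel reflectance from its data sheet,
# and raw digital numbers (DNs) sampled over the panel in one image.
PANEL_REFLECTANCE = 0.50
panel_pixels = np.array([12000.0, 12100.0, 11900.0])

# Scale factor that maps raw DNs to reflectance for this band.
scale = PANEL_REFLECTANCE / panel_pixels.mean()

raw_band = np.array([[6000.0, 18000.0], [12000.0, 9000.0]])
reflectance_band = np.clip(raw_band * scale, 0.0, 1.0)
print(reflectance_band)
```

A real workflow would also account for exposure settings and vignetting, but even this single-point correction ties the data to the conditions on acquisition day.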
Do you want me to share the datasets and processing parameters I worked on? As for the console log, I’m not sure I can do that, because I was processing at the institute and we have to delete the files and logs we create after use.