Multispectral - odm_orthophoto bad lens alignments


First of all, thank you for this open source software! It has been really great to work with.

I am currently using ODM with a multispectral RedEdge camera, and I got an RGB map like this:

The CIR image:

They look pretty good. However, when I tried to generate the NDVI output I got:

Bad alignment between some of the bands can be observed in some spots on the map (shown in red). This is the command I used:

`bash /code/ --mesh-size 100000 --max-concurrency 8 --orthophoto-cutline --crop 0.5 --min-num-features 16000 --rerun-all`

Can you please help me find a solution to improve these alignment errors? I can share the dataset if you want.

Thank you very much!


Hello again!

I solved the problem. Tomorrow I can share with you all what I did and maybe we could improve ODM :slight_smile:


Sounds great!

Hello again,

Thanks again for developing ODM, which for me is the best open source platform for building UAV maps. However, every platform can always be improved, and I think I can help with this specific issue (multispectral images). Currently, ODM does multispectral mapping by comparing the different multispectral bands. I think this is the biggest flaw when trying to compare different bands over agricultural fields, because it is very difficult to achieve a perfect overlay of the different multispectral images. An example follows in the images below:

As shown above, you can see that it is really difficult to match the different images (different bands). So my approach was to build a full map for each band, i.e. a separate blue, green, red, NIR and red-edge map (but before building the five maps, I calibrate each image with the MicaSense code to produce a reflectance image instead of a DN image). The next two images show an example of a full single-band map (to keep it simple, I only show two bands, green and blue):

As you can see, the maps are too dark, so I just applied a gamma correction:
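For anyone curious, gamma correction on a dark reflectance band can be sketched like this (a minimal numpy sketch; the gamma value here is just an illustrative choice, not necessarily the one used above):

```python
import numpy as np

def gamma_correct(band, gamma=2.0):
    """Brighten a float band scaled to [0, 1] via out = in ** (1 / gamma).

    With gamma > 1, dark values are lifted toward the visible range.
    """
    band = np.clip(band, 0.0, 1.0)
    return np.power(band, 1.0 / gamma)

# A dark band becomes visibly brighter (gamma=2.0 is a square root).
dark = np.array([[0.04, 0.09],
                 [0.16, 0.25]])
bright = gamma_correct(dark, gamma=2.0)
print(bright)  # -> [[0.2 0.3]
               #     [0.4 0.5]]
```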

Next, I applied the following steps:

  1. Extract features with SIFT, SURF or ORB
  2. Find the homography matrices between the five band maps
  3. Apply a perspective warp to overlay the maps

The results now are these:

And the biggest improvement NDVI:

PS - the white points are due to the terrible Ubuntu image viewer :sweat_smile:

Besides that, you can observe the improved alignment:
Before (rgb map with zoom):

After (rgb map with zoom):

To finish, I just want to say that I will clean up my code and then share it with the whole community :slight_smile: Do you have a specific place to share it?


Very cool approach! GitHub is the preferred place for all code sharing activities. I’d be interested to re-run a few datasets with this method.


@pierotofy I have already cleaned up the code here.
The steps to work with this project:

  1. git clone the project
  2. Inside the odm folder, run `docker-compose build`
  3. Create a folder called "data" in your Documents path
  4. Inside the data folder, create another folder called "1"
  5. Inside "1", create another folder called "data_raw"
  6. Inside the "data_raw" folder, create two folders: "Mission" and "Panel"
  7. Put all mission images in the Mission folder and the panel images in the Panel folder (to run multispectral images)
  8. If you don't want to run multispectral images, just put the images you want to map in the "Mission" folder
  9. After creating the folders and adding the data, go to the odm folder and run `docker-compose up`
  10. Now just send a GET or POST to start mapping: `curl -XPOST localhost:5001/postjson -d '{"id":"1","is3d":true,"bands":"--rgb --ndvi --cir --ndwi --ndre","isMultispectral":true}' -H "Content-Type: application/json"` (to run multispectral images)
  11. `curl -XPOST localhost:5001/postjson -d '{"id":"1","is3d":true,"bands":"--rgb","isMultispectral":false}' -H "Content-Type: application/json"` (to run 'normal' images)
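The folder layout from steps 3-6 can be created in one go (a sketch assuming the Documents path from the steps above; the `DATA_BASE` override is my own addition so you can point it elsewhere):

```shell
#!/bin/sh
# Create the layout the project expects:
#   Documents/data/1/data_raw/{Mission,Panel}
BASE="${DATA_BASE:-$HOME/Documents}"
mkdir -p "$BASE/data/1/data_raw/Mission" "$BASE/data/1/data_raw/Panel"
echo "Created layout under $BASE/data/1/data_raw"
```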

Hope this helps, and if you need anything just ping me.
