TIFF/Multispectral support

First screenshot of 16bit TIFFs from Mapir camera, stitched in ODM:

[image]

Multi-camera support (Parrot Sequoia, MicaSense, etc.) coming up next!!

Whoa ho ho! Fantastic!

Sparse reconstruction: [image]

Different bands textured onto the mesh. (Perfectly aligned.)

[animation]

I need to stitch geotagged 16-bit TIFF images captured with a MicaSense Altum. I have installed WebODM on my system. How do I stitch them to create an orthomosaic now?

You can’t yet; this is a work in progress. Keep an eye out for announcements in the upcoming months.

First-ever multispectral 16-bit rendering of a MicaSense dataset with additional NIR/RedEdge bands.

[image]

Each band is shot with a different camera (5 cameras, 5 output bands).

Whoa! Looking great.

It looks like those are DN, not reflectance values.

What do you mean @kikislater?

There are typically three kinds of numbers used for quantitative spectral data.

Digital Number

A digital number is the value as stored onboard the sensor, typically an 8-, 12-, or 16-bit value.

Radiance

Radiance is the quantitative strength of the signal. It can be thought of as brightness, but since brightness depends on distance, it is measured per solid angle to compensate, e.g. as watts per square meter per steradian (or watts per square meter per steradian per nanometer when dealing with hyperspectral data).

Reflectance

With DN as the stored value, and radiance as the quantified value in measurable units, reflectance is the ratio of reflected light to incident light for a given spectrum. So it can be thought of as the inherent brightness of the object in a given spectrum, independent of, e.g., how brightly the sun is shining.

In a way, reflectance is the closest of the three to the concept of color, or at least color independent of lighting. As such, it is the useful quantity for indices like NDVI, because it represents the object's color independent of illumination.

(All that said, the model is complicated by the fact that in addition to reflectance we have scattering, absorption, and fluorescence (at least in plants). So in certain spectra, the apparent reflectance can be increased by light absorbed at one frequency being re-emitted as fluorescence at another.)

As a ratio, it should be a value between 0 and 1.
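Since reflectance is independent of illumination, indices like NDVI are computed from reflectance bands. A minimal sketch of NDVI as it's usually defined (the reflectance values below are made up for illustration):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance values (0-1)."""
    if nir + red == 0:
        return 0.0  # avoid division by zero over e.g. deep shadow
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so NDVI comes out strongly positive; bare soil stays near zero.
veg = ndvi(0.50, 0.08)   # hypothetical vegetation pixel
soil = ndvi(0.30, 0.25)  # hypothetical bare-soil pixel
```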

Ah, great explanation, thanks @smathermather-cm. Currently the values are Digital Numbers. Would it be ideal to store the values as reflectance? Any good resources on this? I also need to look into calibration and calibration targets.

It is better to operate on radiance and reflectance values than on DNs, as the equations to convert DNs to radiance or reflectance vary per sensor type, per channel, and indeed per individual sensor, as well as over time. Hence the need for calibration targets.

On the senseFly eBee with the Parrot Sequoia, there is a calibration target that is scanned before each flight. I assume other systems have similar setups, such as this reflectance panel for the Sentera 6X sensor: https://sentera.com/product/6x-sensor/
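With a panel of known reflectance imaged before the flight, the radiance-to-reflectance conversion reduces to a per-band scale factor. A minimal sketch, with hypothetical panel values (a real workflow would average radiance over the panel area of the panel image, per band):

```python
def radiance_to_reflectance_factor(panel_reflectance, panel_radiance_mean):
    """Per-band scale factor so that: reflectance = radiance * factor.

    panel_reflectance: known reflectance of the calibration panel in this
        band (from the panel's calibration sheet, often around 0.5).
    panel_radiance_mean: mean radiance measured over the panel area in
        the pre-flight panel image.
    """
    return panel_reflectance / panel_radiance_mean

# Hypothetical numbers: a ~49% reflective panel measured at 0.85 radiance units
factor = radiance_to_reflectance_factor(0.49, 0.85)
pixel_reflectance = 0.30 * factor  # convert a pixel radiance of 0.30
```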

Thank you Stephen, you got it!
Reflectance is what real-world usage requires if you want to compare across time… Radiance is needed to compute reflectance from DN.

A calibration target is like a GCP for spectral data. It's always needed, even if Parrot says otherwise for the latest Sequoia revision.

In fact, several processing steps are needed to get good spectral data:

  • Band registration: what you showed above, @pierotofy
  • Reflectance estimation
  • Correction of the sunshine sensor angle effect, if a sunshine sensor is present (the sunshine sensor corrects reflectance errors due to lighting variations)
  • Correction of the vignetting effect
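Leaving aside band registration (a geometric step), the radiometric steps above can be sketched as a simplified per-pixel pipeline. This is illustrative only, not any vendor's exact model; real sensors publish their own calibration equations, and the bit-depth normalization and parameter names here are assumptions:

```python
def dn_to_reflectance(dn, dark_level, gain, exposure_s,
                      vignette_correction, reflectance_factor):
    """Illustrative per-pixel pipeline: DN -> radiance -> reflectance.

    dn: raw digital number from the sensor (16-bit assumed here)
    dark_level: sensor black level (e.g. from a Dark Row Value EXIF tag)
    gain, exposure_s: camera gain and exposure time for this frame
    vignette_correction: multiplicative correction for this pixel's location
    reflectance_factor: per-band scale factor, e.g. from a calibration panel
    """
    # 1. Subtract the dark level, normalize by bit depth, gain and exposure
    radiance = (dn - dark_level) / 65536.0 / (gain * exposure_s)
    # 2. Undo lens vignetting (pixels far from the center are darkened)
    radiance *= vignette_correction
    # 3. Scale radiance to reflectance using the panel-derived factor
    return radiance * reflectance_factor
```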

Ah, vignetting, which with narrow-band filters is also a spectral filter (that we usually ignore). Are there known vignetting profiles for sensors from the manufacturers?

Yes! On the Parrot Sequoia they are in the EXIF data, if I remember correctly.
I could share a non-public document in a private message if you're interested…

OK, lots of stuff to model, obviously. But starting with something simple: are there any references (mathematical/algorithmic) for going from DN to reflectance in the absence of a calibration target?

Do any of these values from EXIFs help?

Spectral Irradiance             : 0.93760901689529419
Center Wavelength               : 475
Bandwidth                       : 20
Irradiance Exposure Time        : 0.10100000351667404
Irradiance Gain                 : 16
Irradiance                      : 0.93760901689529419
Irradiance Yaw                  : 94.023407785859874
Irradiance Pitch                : -8.2713926549523134
Irradiance Roll                 : 6.156254310576478
Vignetting Center               : 542.73635777406855, 483.77113826908686
Vignetting Polynomial           : -0.00017257827506974826, 2.3668617696751355e-06, -1.4159742230455686e-08, 3.6290807856318675e-11, -4.3053827057826296e-14, 1.8860011572793591e-17
Dark Row Value                  : 5314, 5330, 5275, 5241
Radiometric Calibration         : 0.00014568261619008727, 1.0865645691885123e-07, -8.9156511369331752e-06
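The Vignetting Center and Vignetting Polynomial tags do help: following the model described in MicaSense's image processing tutorial, the polynomial is evaluated in the pixel's distance r from the vignette center (powers 1 through 6), and the multiplicative correction is 1 / (1 + poly(r)). A sketch using the coefficients from the EXIF dump above:

```python
import math

# Values copied from the EXIF dump above (Vignetting Center / Polynomial)
CENTER = (542.73635777406855, 483.77113826908686)
POLY = [-0.00017257827506974826, 2.3668617696751355e-06,
        -1.4159742230455686e-08, 3.6290807856318675e-11,
        -4.3053827057826296e-14, 1.8860011572793591e-17]

def vignette_correction(x, y):
    """Multiplicative vignette correction for pixel (x, y).

    Per the MicaSense tutorial model: evaluate the polynomial in r
    (distance from the vignette center) with powers 1..6, then the
    correction is 1 / (1 + poly(r)).
    """
    r = math.hypot(x - CENTER[0], y - CENTER[1])
    poly = sum(k * r ** (i + 1) for i, k in enumerate(POLY))
    return 1.0 / (1.0 + poly)

# The correction is exactly 1 at the vignette center (r = 0)
# and grows above 1 toward the image corners.
center_corr = vignette_correction(*CENTER)
corner_corr = vignette_correction(0, 0)
```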

There are some confusing notes (confusing perhaps because they keep their partnership with Pix4D) here: https://forum.developer.parrot.com/t/parrot-announcement-release-of-application-notes/5455?source_topic_id=6558
Parrot is the worst company in the drone area, especially employees like the original poster of the link above. The Sequoia is the worst product I have ever bought… It's not a plug-and-play sensor (with ultra-cheap materials inside; only the center of the image is good, which is one reason it needs 80% side/front overlap), and they tend to hide information. It was a struggle to get the information in that link published. Furthermore, the Parrot team doesn't seem to be aware of remote sensing processes: it's not funny when you put 3500 bucks into a simple sensor!

MicaSense has published some docs here:

https://micasense.github.io/imageprocessing/MicaSense%20Image%20Processing%20Tutorial%201.html

Maybe other things are available on GitHub…

I repeat: I could share one private document (not mine) with you guys.
