Yes, but remember that it’s a consumer camera: the manufacturer (and I’d guess other manufacturers as well) embeds algorithms to produce a good-looking picture, not accurate sensor values.
Multispectral sensors, like satellite data, tend to look somewhat ugly: high contrast in RGB mode (mono-band combinations), some blur and a lack of sharpness. But their value lies elsewhere: good pixel values for producing indices and other analyses …
For sure it is a compromise. I can’t easily mess with their embedded algorithm, and so when I calibrate against one set of conditions, I don’t know how that algorithm affects the stored pixel values.
The multispectral photos are .tifs. I can’t upload them here as .tifs are not accepted. I will email you the subset.
Good tutorial for DN to reflectance: imageprocessing/MicaSense Image Processing Tutorial 1.ipynb at e3744671c521056de46c3d05c3e9446a971662c9 · micasense/imageprocessing · GitHub
Also more reference formulas: How to convert Landsat DNs to Top of Atmosphere (ToA) Reflectance | Center for Earth Observation
Those apply only to satellite data, which is not the same as a mono-band or RGB camera sensor on a drone, even if both are passive sensors …
Right: TOA issues are just that: tens of kilometers of atmosphere between the sensor and the sensed.
If we disregard atmosphere for the drone use case (which is nearly tenable, especially for any wavelength longer than blue), then reflectance is the ratio of reflected radiation (in watts / m^2 / steradian or similar) to the incident radiation. If you measure both with your sensor, voila, it’s easy to calculate (a minimal sketch follows the list below).
This doesn’t take into account the following:
- atmospheric attenuation, scattering etc.
- anisotropic reflection
- fluorescence in plants
- etc.
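To make that concrete, here’s a minimal sketch of that ratio, assuming you already have co-registered per-band arrays of reflected and incident radiation in matching units (all names here are hypothetical, not ODM’s API):

import numpy as np

def naive_reflectance(reflected, incident):
    # Plain ratio of reflected to incident radiation, per pixel and band.
    # Inputs are numpy arrays in the same radiometric units; this
    # deliberately ignores every caveat in the list above.
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.true_divide(reflected, incident)
    # Replace NaN/inf from zero irradiance and clamp to the physical range
    return np.clip(np.nan_to_num(ratio), 0.0, 1.0)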
@pierotofy Excellent work, I’m really looking forward to this. How far into development is this? Will we have the option to output both a standard orthomosaic, and reflectance maps for each band? If you need a hand with this I would be happy to help.
Also, I have data from the following camera which should be useful in testing the “camera only” calibration: https://sentera.com/product/inspire-double-4k-upgrade-crop-health-sensor/
- Adam
Hey @adamjson, we’d welcome help testing and adding more features (like the ability to output both orthomosaics and reflectance; currently you can choose one or the other, but not both at the same time, and not per band).
I’d recommend building from Radiometric Calibration, Sentera6X sensor support by pierotofy · Pull Request #1082 · OpenDroneMap/ODM · GitHub and helping us test it.
Sounds good. I’ll take a look today.
@pierotofy I am trying to build and run locally using docker (I’m on OSX so start-dev-env.sh isn’t supported). Should the following work?
I have images in $(pwd)/images and I run the following command which results in an error:
(base) [email protected] OpenDroneMap % docker run -it --rm -v "$(pwd)/images:/code/images" -v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" my_odm_image
[ERROR] You need to set the project path in the settings.yaml file before you can run ODM, or use --project-path <path>. Run python run.py --help for more information.
According to the GitHub readme, that’s all I should need to run it. I haven’t had trouble with WebODM, but this is my first time running ODM directly from the command line.
Sorry for asking such a 101 question here but I’ve spent hours trying to figure this out and I’d like to test the new feature.
Check your quotes when mounting the volumes with the -v flags. Make sure to read up on Docker too at docs.docker.com (it’s not straightforward).
@pierotofy Thanks. Should running the docker command from the github readme verbatim work (I’m trying sanity check myself before building the branch)?
docker run -it --rm \
-v "$(pwd)/images:/code/images" \
-v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
-v "$(pwd)/odm_texturing:/code/odm_texturing" \
opendronemap/odm
I’ve used Docker quite a few times before which is why I’m scratching my head
I got it to work. I needed to specify --project-path=/ at the end of the docker run command.
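For reference, the full command that worked for me (assuming the locally built my_odm_image from earlier):

docker run -it --rm \
-v "$(pwd)/images:/code/images" \
-v "$(pwd)/odm_orthophoto:/code/odm_orthophoto" \
-v "$(pwd)/odm_texturing:/code/odm_texturing" \
my_odm_image \
--project-path=/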
@pierotofy
Is this the place to discuss test results? I don’t want to spam this thread if possible.
I did find an issue in testing. It only happens when I have radiometric-calibration=camera set. With that flag removed, everything processes normally. Note that I had --fast-orthophoto set for both test runs. I’m running now with --fast-orthophoto removed to see if that circumvents the issue. Note that I only fed RGB images in for this test (this was not a multicamera run).
Stacktrace
Traceback (most recent call last):
File "/code/SuperBuild/src/opensfm/bin/opensfm", line 34, in <module>
command.run(args)
File "/code/SuperBuild/src/opensfm/opensfm/commands/export_visualsfm.py", line 43, in run
self.export(reconstructions[0], graph, udata, args.points, export_only)
File "/code/SuperBuild/src/opensfm/opensfm/commands/export_visualsfm.py", line 60, in export
shot_size_cache[shot.id] = udata.undistorted_image_size(shot.id)
File "/code/SuperBuild/src/opensfm/opensfm/dataset.py", line 622, in undistorted_image_size
return io.image_size(self._undistorted_image_file(image))
File "/code/SuperBuild/src/opensfm/opensfm/io.py", line 707, in image_size
image = imread(filename)
File "/code/SuperBuild/src/opensfm/opensfm/io.py", line 662, in imread
with Image.open(filename) as f:
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 2822, in open
raise IOError("cannot identify image file %r" % (filename if filename else fp))
IOError: cannot identify image file u'/code/opensfm/undistorted/images/IMG_00194.jpg.tif'
Traceback (most recent call last):
File "/code/run.py", line 57, in <module>
app.execute()
File "/code/stages/odm_app.py", line 92, in execute
self.first_stage.run()
File "/code/opendm/types.py", line 344, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 344, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 344, in run
self.next_stage.run(outputs)
File "/code/opendm/types.py", line 325, in run
self.process(self.args, outputs)
File "/code/stages/run_opensfm.py", line 97, in process
octx.run('export_visualsfm --points')
File "/code/opendm/osfm.py", line 23, in run
(context.opensfm_path, command, self.opensfm_project_path))
File "/code/opendm/system.py", line 76, in run
raise Exception("Child returned {}".format(retcode))
Exception: Child returned 1
Image sample & Full Debug Log
I’m trying to zero in on a root cause but haven’t been successful. It looks like it could be related to a Pillow dependency issue but I haven’t been able to confirm that.
Also, my Sentera camera has the following relevant tags when it comes to camera calibration settings:
(base) [email protected] RGB % exiftool IMG_00195.jpg -ColorTransform -ISO -FNumber -ExposureTime -BitsPerSample
Color Transform : 1.150, -0.110, -0.034, -0.329, 1.421, -0.199, -0.061, -0.182, 1.377
ISO : 506
F Number : 2.5
Exposure Time : 1/609
Bits Per Sample : 8
I think the only change needed to support my camera would be to extend the conditional at ODM/photo.py at reflectance · pierotofy/ODM · GitHub with:
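# Sentera images report ISO under the bare 'EXIF ISO' key (see the exiftool output above)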
elif 'EXIF ISO' in tags:
    self.iso_speed = self.int_value(tags['EXIF ISO'])
Everything else seems to match.
Finally, I’m not sure if these have been shared already but I found them to be useful:
-Adam
Might be better to open a separate topic; I don’t see IMG_00194.jpg in the drive link (might be easier to share the entire dataset to reproduce this error).
The EXIF tag name may or may not be at fault here (have you tried changing it to see if it fixes the issue?)
Cool paper! I’d post it also in Research Papers - OpenDroneMap Community
@adamjson I encountered the issue on a dataset of mine; it’s due to Pillow’s inability to read multiband float32 TIFFs. It’s not related to EXIFs (unfortunately, that would have been an easier fix).
@pierotofy Glad you were able to reproduce it. I’ve been quite busy, but I should be able to circle back with you soon.
Hey, I feel like you got feedback that the reference formulas for highly calibrated satellite imagery are similar to, but not the same as, those for UAV-based imagery. But I’m not sure anyone actually pointed you in the right direction?
Rant begins
Regardless, I’m a researcher who has been manually creating calibrated reflectance maps from non-commercial reflectance panels for a couple years. Something that has been bugging me for a while now is that seemingly every single UAV photogrammetry software company acts as if the only calibration targets they can receive input for are the $$$ commercially provided options from each drone company.
This is incredibly incorrect.
Perfectly effective calibration is achieved via the “Empirical Line Method” applied to a reasonably spaced grey-scale gradient. The empirical method is a linear regression of calibration target reflectance values that have been transformed with the negative natural log, plus a few simple extra steps. A calibration equation derived from the empirical method will need to be calculated for each band. Here’s the awesome thing that matters for ODM: the software doesn’t have to know the expected values of each target to calibrate the images before processing (yes, knowing them is helpful and allows for some accuracy calculations, but it’s not needed for the essential steps). All that matters is that the user inputs the locations of each calibration gradient. From there the steps are simple:
- Calculate reflectance with associated irradiance data
- Transform reflectance values with -ln()
- Run a linear regression of -ln(reflectance) against DN (y, x)
- Calculate separate means of -ln(reflectance) and DN
- Calculate the slope m (of y = mx + b) for the empirical line method with: m = ( Mean(-ln(reflectance)) - linear regression constant b ) / Mean(DN)
- Assemble the new equation: -ln(reflectance) = m * DN + b

This gives a y = mx + b formula where x = DN and y = -ln(reflectance) (see the code sketch below).
There are other ways to calculate m, but the transformed linear regression above is perfectly valid for comparing data across time. This method will work for both the “industry-standard” calibration targets and user-created calibration targets.
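For what it’s worth, here’s a minimal Python sketch of those steps for a single band, assuming you already have the target DNs and their reflectances (names are illustrative only, not an ODM API):

import numpy as np

def empirical_line_fit(target_dn, target_reflectance):
    # Fit one band's empirical line from calibration-target samples.
    # target_dn: digital numbers sampled over the grey-scale targets
    # target_reflectance: matching reflectance values (0-1), e.g. derived
    # from irradiance data
    dn = np.asarray(target_dn, dtype=float)
    y = -np.log(np.asarray(target_reflectance, dtype=float))  # -ln(reflectance)
    # Linear regression y = slope * DN + b; keep only the intercept b
    _, b = np.polyfit(dn, y, 1)
    # Slope from the means, as in the steps above:
    # m = (Mean(-ln(reflectance)) - b) / Mean(DN)
    m = (y.mean() - b) / dn.mean()
    return m, b

def dn_to_reflectance(dn, m, b):
    # Invert -ln(reflectance) = m * DN + b over a whole band array
    return np.exp(-(m * np.asarray(dn, dtype=float) + b))

# Example: five grey targets in one band
m, b = empirical_line_fit([30, 70, 120, 180, 230], [0.05, 0.12, 0.25, 0.50, 0.75])
calibrated = dn_to_reflectance(np.array([30, 120, 230]), m, b)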
It would also be awesome if ODM were an industry leader in integrated calibration, not just for multispec but also for independent RGB imagery. I’m personally doing research on the efficacy of average ground-irradiance measurements and the accuracy of calibrating native RGB data from them, and many RGB drones can mount irradiance sensors anyway. This would be accomplished by having a thorough and flexible process for applying the empirical line method, as opposed to the current “pre-baked” versions Pix4D and Agisoft use.
I have no idea if you already knew this stuff, but if not I would be happy to assist with its integration into ODM. My programming skills are weak but existent. I would be happy to contribute however I can, at the very least by outlining the mathematical steps involved and the general structure.
Rant over