P4M processing issue (Multispectral processing option) - possibly since the update?

Hi folks,

Has anyone had issues processing P4M datasets since the upgrade? I have attempted to process a new dataset over the last couple of days and keep getting “unable to process dataset” after 2 to 3 minutes, regardless of actions or settings.
The dataset is ~7800 images captured with the P4M through GS Pro and is ~28 GB. I’m running 128 GB of RAM.

So far I have run the following:

  • standard Multispectral option
  • Multispectral + split/merge @ 500 and 1000
  • standard Multispectral option with all TIFF files moved to a dedicated index and renamed
  • Multispectral + split/merge @ 500 and 1000, with all TIFF files moved to a dedicated index and renamed
  • Multispectral + split/merge @ 500 and 1000, with all TIFF files moved to a dedicated index, renamed, and resized by a factor of 0.8 (1843)

I have also gone through the TIFF dataset looking for stray JPG files (none found), and I’m now running the JPG dataset as a fast ortho, which seems to be processing fine (one hour into processing).

I have updated, and have also restarted, shut down, and reassigned the node, with no luck.

I last processed sets on the 19th, 13th, and 11th of November with the standard Multispectral option, and all processed fine. I have just queued a sample dataset of ~500 images from those surveys to see whether the problem lies somewhere in the new dataset, but I’m interested to know if anyone else is having the same issue.

Thanks,
dp


Hmm… Odd for sure. If you try full runs again, could you grab the logs for us?

I have narrowed it down to something in my dataset, or perhaps the volume of images. I ran a test dataset overnight (500 images) from a job I did in October and it ran without any issue. I have a smaller dataset queued up from the last job (around 2,200 images, split/merge 500) and will see how it goes. If it falls over, I’ll grab the log.


Are you 100% sure that each capture has all five images? I’ve noticed our P4M randomly fails to capture images quite often, so a complete set isn’t assured.

Have a look at this conversation for a bit more detail on binning images by capture if you don’t already have a workflow for that (if you do and can post it, let us know! :slight_smile: )


Thanks Tim,
Yes, that occurred to me overnight (one of those thoughts that wake you up at 2 am!). I’m yet to allocate time to go through and check; I’d love to see that link if you can post it.

dp


Coming back to this,
Tim, you were correct. I went through the dataset and found that a single image had not been written to the card in the 101 index of my second flight (how I managed to spot it by eye in 7000+ images within a couple of hours I don’t know! I’m attributing it to an almighty fluke).

I’m not sure how it would work, but perhaps it’s worth developing a solution in which processing can drop incomplete image subsets from the written files (i.e. if the subset DJI_xxx1 - DJI_xxx5 does not exist in full, omit it from the processing set), noting that this would almost certainly require a specific DJI/Sentera/Sequoia workflow and a fair bit of development.
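To make the idea concrete, here is a very rough, untested python sketch of that filename-based check; the folder name is a placeholder, and it assumes the DJI_xxx1 - DJI_xxx5 naming holds (the 100/101 card index would need folding into the key if file numbers repeat across indexes):

```python
import collections
import pathlib

# Group the band TIFFs by everything except the trailing digit, so the five
# files DJI_xxx1 .. DJI_xxx5 of one capture land under the same key, then
# flag any capture that is missing files. "p4m_tifs" is a placeholder path.
tif_dir = pathlib.Path("p4m_tifs")
captures = collections.defaultdict(list)

for tif in sorted(tif_dir.glob("*.TIF")):
    captures[tif.stem[:-1]].append(tif.name)   # "DJI_0011" -> key "DJI_001"

for prefix, files in sorted(captures.items()):
    if len(files) != 5:
        print(f"Incomplete capture {prefix}x: {files}")
```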

dp


The quickest solution, outside of something being built into ODM, would probably be to update our code snippet (see discussion here) to bin the images for each capture using the DJI CaptureID in the EXIF data (see the end of page 2 here) and then omit the captures that have fewer than the expected number of images. I could whip this up in MATLAB in a few minutes, but alas my Python skills are next to non-existent. I can probably get someone from my lab to do it at some point, but we’re all pretty busy at the moment (and this is Australia, so come the end of December everyone here is on summer holiday until February :slight_smile:), so it might be a while. If anyone here can do it sooner, have at it!
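If you want to confirm what the capture and band tags are actually called on your files before anyone writes the binning code, exiftool will tell you. Something like this (python, untested; DJI_0011.TIF is just a stand-in for one of your band TIFFs):

```python
import json
import subprocess

# Dump the metadata (including the XMP block where DJI keeps its tags) for a
# single band TIFF. Requires exiftool on the PATH; the filename is a stand-in.
result = subprocess.run(
    ["exiftool", "-json", "-G", "DJI_0011.TIF"],
    capture_output=True, text=True, check=True,
)
metadata = json.loads(result.stdout)[0]

# Show anything that looks like a capture ID or band name tag, so we know the
# exact tag names to use when binning.
for tag, value in metadata.items():
    if "capture" in tag.lower() or "band" in tag.lower():
        print(f"{tag} = {value}")
```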

Cheers


If you write it in MATLAB, I can probably port it to Python.

Also, if you need me to move to Australia, I could really use a proper holiday…


Come visit! The beaches here are fabulous, and no doubt once our three-year La Niña ends we’ll be back to drought and fires, which is not nearly as nice as it is now.

The CaptureID is a unique number shared by the set of TIF images recorded from each band for a given capture. That is, for each “capture” the camera records, say, five images (R, G, B, NIR, RedEdge) that share a unique ID. Since we already have to read the EXIF data to get the band name, the approach I would take would be to use the CaptureID as a dictionary key or array index (whichever array-type structure you can easily index this way) and add the band and filepath to that entry. Then, when you’re done parsing the images, you’d loop through the dictionary and, for each CaptureID key, flag any set that doesn’t have the expected number of images and write that list out to a log file so the user can review it; for sets that do have the expected number of images, rename them by adding the band name to the file name (this is what the code currently does) and copy them to the output folder.
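In rough (and untested) python, that might look something like the sketch below. The tag names (CaptureUUID, BandName), folder names, and log filename are all guesses/placeholders to check against what exiftool actually reports on your own files:

```python
import collections
import json
import pathlib
import shutil
import subprocess

# Untested sketch of the workflow described above. Assumptions to verify on
# your own data: exiftool is on the PATH, each band TIFF carries a per-capture
# ID tag and a band name tag (guessed here as XMP:CaptureUUID and XMP:BandName),
# and a complete capture has five images.
SRC = pathlib.Path("p4m_tifs")      # input folder (placeholder name)
DST = pathlib.Path("p4m_binned")    # renamed copies end up here
EXPECTED_BANDS = 5
ID_TAG, BAND_TAG = "XMP:CaptureUUID", "XMP:BandName"

# One exiftool call over the whole folder is much faster than per-file calls.
result = subprocess.run(
    ["exiftool", "-json", "-ext", "tif", f"-{ID_TAG}", f"-{BAND_TAG}", str(SRC)],
    capture_output=True, text=True, check=True,
)

# Bin (filepath, band) pairs under their CaptureID.
captures = collections.defaultdict(list)
for meta in json.loads(result.stdout):
    cap_id = meta.get("CaptureUUID")
    band = meta.get("BandName", "unknown")
    captures[cap_id].append((meta["SourceFile"], band))

# Copy and rename complete captures; keep the rest aside for review.
DST.mkdir(exist_ok=True)
incomplete = []
for cap_id, images in captures.items():
    if cap_id is None or len(images) != EXPECTED_BANDS:
        incomplete.append((cap_id, [path for path, _ in images]))
        continue
    for path, band in images:
        src = pathlib.Path(path)
        shutil.copy2(src, DST / f"{src.stem}_{band}{src.suffix}")

# Save the list of short captures so the user can review them.
with open("incomplete_captures.log", "w") as log:
    for cap_id, files in incomplete:
        log.write(f"{cap_id}: {', '.join(files)}\n")

print(f"{len(captures) - len(incomplete)} complete captures copied; "
      f"{len(incomplete)} flagged (see incomplete_captures.log)")
```

You would then point the Multispectral run at the output folder rather than the raw card dump.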

I’ll ask the person in our lab to put it on their to-do list, but it might be a while, so anyone else should feel free to tackle it first.


It would be good to visit before the continent catches fire again.

OK, that’s pretty straightforward; probably an hour or so. If I get to it before February/March, that’ll be fun to pound out. We’ll see what January looks like.

