Python script to process a large number of datasets

I am developing a Python script to process a large number of small-to-medium-sized datasets.
I am getting processing errors, apparently related to the processing options not being received by the processing node.

The datasets give perfect results when processed with the same options using WebODM.

Here is the code I'm using:

    import glob, os
    from pyodm import Node, exceptions

    try:
        print("Uploading images...")
        task = node.create_task(glob.glob(Input_Dir), {
            "smrf-scalar": "1.28", "mesh-octree-depth": "5", "min-num-features": "15000",
            "smrf-slope": "0.17", "use-3dmesh": "true", "texturing-nadir-weight": "29",
            "ignore-gsd": "true", "matcher-distance": "12", "dsm": "true", "smrf-threshold": "0.6"})
        try:
            task.wait_for_completion()
            print("Task completed, downloading results...")
            task.download_zip(".")
            os.rename(os.listdir()[0], Output_File_name + '.zip')
            print("Assets saved in (%s)" % os.listdir(Output_Dir))
        except exceptions.TaskFailedError as e:
            print("Task failed: %s" % e)
    except exceptions.NodeConnectionError as e:
        print("Cannot connect: %s" % e)
    except exceptions.NodeResponseError as e:
        print("Error: %s" % e)
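A side note on the snippet: `os.rename(os.listdir()[0], …)` is fragile, because `os.listdir()` returns entries in arbitrary order and can pick up unrelated files. A small stdlib-only sketch (the helper name `newest_zip` is mine, not part of pyodm) that grabs the most recently modified zip instead:

```python
import glob
import os

def newest_zip(directory):
    """Return the most recently modified .zip file in directory, or None."""
    zips = glob.glob(os.path.join(directory, "*.zip"))
    return max(zips, key=os.path.getmtime) if zips else None

# e.g.: os.rename(newest_zip("."), Output_File_name + '.zip')
```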

Any ideas will be appreciated!

Hey @Mauricio :hand: welcome!

What errors are you getting?

Thanks for answering

I'm getting "errorMessage":"Process exited with code 1" for all the processes.

How about the task output?


These are the last lines …

    /code/SuperBuild/src/opensfm/bin/opensfm create_tracks "/var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm"
    2019-12-31 18:04:21,196 INFO: reading features
    2019-12-31 18:04:24,052 DEBUG: Merging features onto tracks
    2019-12-31 18:04:24,139 DEBUG: Good tracks: 11872
    [INFO] running /code/SuperBuild/src/opensfm/bin/opensfm reconstruct "/var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm"
    2019-12-31 18:04:25,302 INFO: Starting incremental reconstruction
    2019-12-31 18:04:25,414 INFO: 0 partial reconstructions in total.
    [INFO] Updating /var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm/config.yaml
    [INFO] undistorted_image_max_size: 5472
    [INFO] running /code/SuperBuild/src/opensfm/bin/opensfm undistort "/var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm"
    [INFO] running /code/SuperBuild/src/opensfm/bin/opensfm export_visualsfm --undistorted --points "/var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm"
    Traceback (most recent call last):
      File "/code/SuperBuild/src/opensfm/bin/opensfm", line 34, in <module>
      File "/code/SuperBuild/src/opensfm/opensfm/commands/", line 30, in run
        reconstructions = data.load_undistorted_reconstruction()
      File "/code/SuperBuild/src/opensfm/opensfm/", line 585, in load_undistorted_reconstruction
        filename='undistorted_reconstruction.json')
      File "/code/SuperBuild/src/opensfm/opensfm/", line 575, in load_reconstruction
        with io.open_rt(self._reconstruction_file(filename)) as fin:
      File "/code/SuperBuild/src/opensfm/opensfm/", line 555, in open_rt
        return, 'r', encoding='utf-8')
    IOError: [Errno 2] No such file or directory: '/var/www/data/bfa16a14-bf06-4433-a91c-0799c3f49023/opensfm/undistorted_reconstruction.json'
    Traceback (most recent call last):
      File "/code/", line 57, in <module>
        app.execute()
      File "/code/stages/", line 92, in execute
      File "/code/opendm/", line 370, in run
      File "/code/opendm/", line 370, in run
      File "/code/opendm/", line 370, in run
      File "/code/opendm/", line 351, in run
        self.process(self.args, outputs)
      File "/code/stages/", line 70, in process
        'export_visualsfm --undistorted --points')
      File "/code/opendm/", line 21, in run
        (context.opensfm_path, command, self.opensfm_project_path))
      File "/code/opendm/", line 76, in run
        raise Exception("Child returned {}".format(retcode))
    Exception: Child returned 1

The code seems to be working as expected. The problem seems to be with the input images. If you think it's linked to the parameters not being passed properly, check the first lines of the task output; they contain the parameters used for processing.

The images process successfully using WebODM with the same options.

I have tried many different datasets, all proven to give excellent results. They all fail with the same error.

Therefore I am lost

I managed to process the images without errors using the Python API, as follows:

I tried to pass a different set of processing options, and by mistake did it using the wrong format.

The node complained with an error message saying that only 11 parameters were allowed and the request had 12.

I simply removed one of the parameters and, using the right format, the process ran OK.

That limitation does not surface when calling the container directly via the command line or when using the cloud utility, ODM.
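For anyone else hitting this: pyodm's `create_task()` takes the options as a plain Python dict, while the NodeODM REST API (`POST /task/new`) expects its `options` field as a JSON-encoded array of `{"name": …, "value": …}` objects — as far as I can tell from the NodeODM docs, that is the "format" difference I stumbled into. A minimal stdlib sketch of the conversion (option names copied from the script above, trimmed for brevity):

```python
import json

# Options in the form pyodm's create_task() accepts: a plain dict.
options = {
    "smrf-scalar": "1.28",
    "mesh-octree-depth": "5",
    "min-num-features": "15000",
    "dsm": "true",
}

# What the NodeODM REST API expects in the "options" form field:
# a JSON array of {"name": ..., "value": ...} objects.
api_options = json.dumps([{"name": k, "value": v} for k, v in options.items()])
print(api_options)
```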

Appreciate your comments.