Running ODM container on a read-only file system

Hello everyone!

We have been trying to get OpenDroneMap working in a high-performance computing environment. The environment uses Singularity (https://sylabs.io/singularity/) to run Docker containers. In this kind of shared HPC environment, containers may only run with read-only access, so all files must be written to folders mounted into the container.

This is where we run into problems. The ODM container writes files to folders inside the container that cannot be mounted over, because they already contain files. For example, the application writes files to the main /code/ folder.

Files like these, for example (there may be more):
/code/img_list.txt
/code/images.json

which then produces an error in our read-only environment:

Traceback (most recent call last):
  File "/code/run.py", line 57, in <module>
    app.execute()
  File "/code/stages/odm_app.py", line 92, in execute
    self.first_stage.run()
  File "/code/opendm/types.py", line 351, in run
    self.process(self.args, outputs)
  File "/code/stages/dataset.py", line 93, in process
    save_images_database(photos, images_database_file)
  File "/code/stages/dataset.py", line 13, in save_images_database
    with open(database_file, 'w') as f:
IOError: [Errno 30] Read-only file system: '/code/images.json'

Do you think that, at least for now, it is impossible to run this container on a read-only file system? Or can you think of a fix for this issue? I guess modifying the source code to create a 'tmp' folder into which all temporary files (like the ones above) would be written would solve this, since that folder could be mounted.

Thanks for your time already!

Cheers,

Johannes Nyman


Hey @johannesnyman :hand: you can mount a folder and then use --project-path to specify the location where you want to write the files.

docker run -ti --rm -v /folder:/tmp/code opendronemap/odm --project-path /tmp
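Since the original question was about Singularity, a rough equivalent of the Docker command above might look like the following. This is a sketch only: the exact image URI and flag behavior depend on your Singularity version, so treat those details as assumptions.

```shell
# Docker (as in the reply above): bind a writable host folder into the
# container and point ODM's --project-path at a location inside that mount.
docker run -ti --rm -v /folder:/tmp/code opendronemap/odm --project-path /tmp

# Hypothetical Singularity equivalent (assumption: flags/URI may vary by
# version): --bind mounts the host folder read-write, and the ODM arguments
# are passed through unchanged.
singularity run --bind /folder:/tmp/code docker://opendronemap/odm \
    --project-path /tmp
```

With either runner, the key point is the same: all of ODM's output lands under the bind-mounted folder, so nothing needs to be written inside the read-only container image.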


Thank you!

That solved it!
