Restore not working

Hi,

I was running WebODM in Docker on my Mac laptop, created some projects, all fine … then wanted to move everything to an AWS box, also running Docker.

I used the backup commands from GitHub:

mkdir -v backup
docker run --rm --volume webodm_dbdata:/temp --volume `pwd`/backup:/backup ubuntu tar cvf /backup/dbdata.tar /temp
docker run --rm --volume webodm_appmedia:/temp --volume `pwd`/backup:/backup ubuntu tar cvf /backup/appmedia.tar /temp
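(Side note for anyone copying these commands: it's worth listing each archive right after creating it, because tar will happily archive an empty mount without complaint. A minimal illustration of what a healthy listing looks like, using stand-in /tmp paths rather than the real volumes:)

```shell
# Stand-in for a real backup: a volume directory with actual data in it
# (all paths here are hypothetical demo paths, not the WebODM volumes).
mkdir -p /tmp/demo/backup /tmp/demo/voldata/temp/pgdata
echo "data" > /tmp/demo/voldata/temp/pgdata/somefile

# Archive it the same way the backup command does (tar cf ... temp).
tar cf /tmp/demo/backup/dbdata.tar -C /tmp/demo/voldata temp

# List the archive: a healthy dbdata.tar shows many files under temp/,
# not just the lone "temp/" directory entry.
tar tf /tmp/demo/backup/dbdata.tar
```

If the listing shows only `temp/`, the source volume was empty (or the volume name didn't match), and the backup is worthless.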

I moved the two resulting files to my AWS box and ran the restore commands there:

ls backup # --> appmedia.tar  dbdata.tar
./webodm.sh start && ./webodm.sh down # Create volumes
docker run --rm --volume webodm_dbdata:/temp --volume `pwd`/backup:/backup ubuntu bash -c "rm -fr /temp/* && tar xvf /backup/dbdata.tar"
docker run --rm --volume webodm_appmedia:/temp --volume `pwd`/backup:/backup ubuntu bash -c "rm -fr /temp/* && tar xvf /backup/appmedia.tar"
./webodm.sh start
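(In hindsight, the risky part of the restore commands is that `rm -fr /temp/*` wipes the volume before tar has proven the archive contains anything. A safer ordering can be sketched in plain shell with stand-in /tmp paths; the `restore` helper here is hypothetical, not part of WebODM:)

```shell
# Build stand-in data mimicking a real backup (hypothetical demo paths).
mkdir -p /tmp/safe/voldata/temp
echo "x" > /tmp/safe/voldata/temp/f
tar cf /tmp/safe/dbdata.tar -C /tmp/safe/voldata temp
mkdir -p /tmp/safe/volume

# Safer ordering: verify the archive holds real files BEFORE wiping the
# target, instead of "rm -fr ... && tar xvf ..." which destroys the
# volume contents even when the archive turns out to be empty.
restore() {
  archive=$1
  target=$2
  # count entries that are regular files (lines not ending in "/")
  if [ "$(tar tf "$archive" | grep -cv '/$')" -eq 0 ]; then
    echo "refusing: $archive contains no files" >&2
    return 1
  fi
  rm -rf "${target:?}"/*
  tar xf "$archive" -C "$target"
}

restore /tmp/safe/dbdata.tar /tmp/safe/volume && echo "restored"
```

The same guard could be wrapped around the `docker run … bash -c` restore one-liners.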

But my tasks and projects are not there; I just get the initial “welcome to WebODM, create account” screen.

Here’s the startup log from after running the “restore” docker commands (I removed the full extraction listing; it was a 6 GB file!):

— BEGIN LOG —

temp/project/2/task/7ba2d72b-5598-4223-800c-801d32affe0c/DJI_0010.JPG
temp/project/2/task/7ba2d72b-5598-4223-800c-801d32affe0c/DJI_0015.JPG
temp/project/2/task/7ba2d72b-5598-4223-800c-801d32affe0c/DJI_0024.JPG
temp/project/2/task/7ba2d72b-5598-4223-800c-801d32affe0c/DJI_0043.JPG
temp/project/2/task/7ba2d72b-5598-4223-800c-801d32affe0c/DJI_0022.JPG
temp/CACHE/
temp/CACHE/images/
temp/CACHE/images/settings/
temp/CACHE/images/settings/logo512/
temp/CACHE/images/settings/logo512/731db2a2858e254fed9caa3ec4170dc4.png
temp/CACHE/images/settings/logo512/50f6009f0bdc5be2c8894895181fb226.png
temp/settings/
temp/settings/logo512.png
temp/.gitignore
[[email protected] ~]$ cd /opt/WebODM/
[[email protected] WebODM]$ ls
app         CONTRIBUTING.md           docker-compose.dev.yml         docker-compose.yml  LICENSE.md  package.json      screenshots  wait-for-it.sh        webpack.config.js
build       db                        docker-compose.nodeodm.yml     Dockerfile          manage.py   plugins           service      wait-for-postgres.sh  webpack-server.js
CONDUCT.md  devenv.sh                 docker-compose.ssl-manual.yml  ISSUE_TEMPLATE.md   nginx       README.md         slate        webodm                worker
contrib     docker-compose.build.yml  docker-compose.ssl.yml         jest.config.js      nodeodm     requirements.txt  start.sh     webodm.sh             worker.sh
[[email protected] WebODM]$ ./webodm.sh start
Checking for docker...   OK
Checking for git...   OK
Checking for python...   OK
Checking for pip...   OK
Checking for docker-compose...   OK
Starting WebODM...

Using the following environment:
================================
Host: localhost
Port: 8000
Media directory: appmedia
SSL: NO
SSL key:
SSL certificate:
SSL insecure port redirect: 80
Celery Broker: redis://broker
================================
Make sure to issue a ./webodm.sh down if you decide to change the environment.

docker-compose -f docker-compose.yml -f docker-compose.nodeodm.yml start || docker-compose -f docker-compose.yml -f docker-compose.nodeodm.yml up
Starting db         ... done
Starting node-odm-1 ... done
Starting broker     ... done
Starting worker     ... done
Starting webapp     ... done
ERROR: No containers to start
Creating network "webodm_default" with the default driver
Creating broker     ... done
Creating db         ... done
Creating node-odm-1 ... done
Creating worker     ... done
Creating webapp     ... done
Attaching to broker, db, node-odm-1, worker, webapp
broker        | 1:C 25 Jul 13:18:13.689 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
broker        | 1:C 25 Jul 13:18:13.695 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=1, just started
broker        | 1:C 25 Jul 13:18:13.695 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
broker        | 1:M 25 Jul 13:18:13.696 * Running mode=standalone, port=6379.
broker        | 1:M 25 Jul 13:18:13.696 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
broker        | 1:M 25 Jul 13:18:13.696 # Server initialized
broker        | 1:M 25 Jul 13:18:13.696 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
broker        | 1:M 25 Jul 13:18:13.696 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
broker        | 1:M 25 Jul 13:18:13.696 * Ready to accept connections
db            | The files belonging to this database system will be owned by user "postgres".
db            | This user must also own the server process.
db            |
db            | The database cluster will be initialized with locale "en_US.utf8".
db            | The default database encoding has accordingly been set to "UTF8".
db            | The default text search configuration will be set to "english".
db            |
db            | Data page checksums are disabled.
db            |
db            | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db            | creating subdirectories ... ok
db            | selecting default max_connections ... 100
db            | selecting default shared_buffers ... 128MB
db            | selecting dynamic shared memory implementation ... posix
db            | creating configuration files ... ok
db            | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
db            | initializing pg_authid ... ok
worker        | psql: could not connect to server: Connection refused
worker        | 	Is the server running on host "db" (172.22.0.3) and accepting
worker        | 	TCP/IP connections on port 5432?
worker        | Postgres is unavailable - sleeping
db            | initializing dependencies ... ok
db            | creating system views ... ok
db            | loading system objects' descriptions ... ok
db            | creating collations ... ok
webapp        | psql: could not connect to server: Connection refused
webapp        | 	Is the server running on host "db" (172.22.0.3) and accepting
webapp        | 	TCP/IP connections on port 5432?
webapp        | Postgres is unavailable - sleeping
db            | creating conversions ... ok
db            | creating dictionaries ... ok
db            | setting privileges on built-in objects ... ok
db            | creating information schema ... ok
db            | loading PL/pgSQL server-side language ... ok
node-odm-1    | info: Authentication using NoTokenRequired
node-odm-1    | info: No tasks dump found
node-odm-1    | info: Checking for orphaned directories to be removed...
node-odm-1    | info: Server has started on port 3000
db            | vacuuming database template1 ... ok
worker        | psql: could not connect to server: Connection refused
worker        | 	Is the server running on host "db" (172.22.0.3) and accepting
worker        | 	TCP/IP connections on port 5432?
worker        | Postgres is unavailable - sleeping
db            | copying template1 to template0 ... ok
db            | copying template1 to postgres ... ok
db            | syncing data to disk ... ok
db            |
db            | Success. You can now start the database server using:
db            |
db            |     pg_ctl -D /var/lib/postgresql/data -l logfile start
db            |
db            |
db            | WARNING: enabling "trust" authentication for local connections
db            | You can change this by editing pg_hba.conf or using the option -A, or
db            | --auth-local and --auth-host, the next time you run initdb.
db            | ****************************************************
db            | WARNING: No password has been set for the database.
db            |          This will allow anyone with access to the
db            |          Postgres port to access your database. In
db            |          Docker's default configuration, this is
db            |          effectively any other container on the same
db            |          system.
db            |
db            |          Use "-e POSTGRES_PASSWORD=password" to set
db            |          it in "docker run".
db            | ****************************************************
db            | waiting for server to start....LOG:  could not bind IPv6 socket: Cannot assign requested address
db            | HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
db            | LOG:  database system was shut down at 2018-07-25 13:18:18 UTC
db            | LOG:  MultiXact member wraparound protections are now enabled
db            | LOG:  database system is ready to accept connections
db            | LOG:  autovacuum launcher started
webapp        | psql: could not connect to server: Connection refused
webapp        | 	Is the server running on host "db" (172.22.0.3) and accepting
webapp        | 	TCP/IP connections on port 5432?
webapp        | Postgres is unavailable - sleeping
worker        | psql: could not connect to server: Connection refused
worker        | 	Is the server running on host "db" (172.22.0.3) and accepting
worker        | 	TCP/IP connections on port 5432?
worker        | Postgres is unavailable - sleeping
db            |  done
db            | server started
db            | ALTER ROLE
db            |
db            |
db            | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init-db.sql
db            | ALTER ROLE
db            | CREATE DATABASE
db            | ALTER DATABASE
db            | ALTER DATABASE
db            |
db            |
db            | LOG:  received fast shutdown request
db            | LOG:  aborting any active transactions
db            | waiting for server to shut down....LOG:  autovacuum launcher shutting down
db            | LOG:  shutting down
db            | LOG:  database system is shut down
webapp        | psql: could not connect to server: Connection refused
webapp        | 	Is the server running on host "db" (172.22.0.3) and accepting
webapp        | 	TCP/IP connections on port 5432?
webapp        | Postgres is unavailable - sleeping
worker        | psql: could not connect to server: Connection refused
worker        | 	Is the server running on host "db" (172.22.0.3) and accepting
worker        | 	TCP/IP connections on port 5432?
worker        | Postgres is unavailable - sleeping
db            |  done
db            | server stopped
db            |
db            | PostgreSQL init process complete; ready for start up.
db            |
db            | LOG:  database system was shut down at 2018-07-25 13:18:19 UTC
db            | LOG:  MultiXact member wraparound protections are now enabled
db            | LOG:  database system is ready to accept connections
db            | LOG:  autovacuum launcher started
webapp        | Postgres is up - executing command
webapp        | wait-for-it.sh: waiting for broker:6379 without a timeout
webapp        | wait-for-it.sh: broker:6379 is available after 0 seconds
webapp        |
webapp        |  _       __     __    ____  ____  __  ___
webapp        | | |     / /__  / /_  / __ \/ __ \/  |/  /
webapp        | | | /| / / _ \/ __ \/ / / / / / / /|_/ /
webapp        | | |/ |/ /  __/ /_/ / /_/ / /_/ / /  / /
webapp        | |__/|__/\___/_.___/\____/_____/_/  /_/
webapp        |
webapp        |
webapp        | Checking python version... 3.x, good!
worker        | Postgres is up - executing command
worker        | wait-for-it.sh: waiting for broker:6379 without a timeout
worker        | wait-for-it.sh: broker:6379 is available after 0 seconds
worker        | wait-for-it.sh: waiting for webapp:8000 without a timeout
webapp        | Checking GDAL version... GDAL 2.3.0, released 2018/05/04, excellent!
webapp        | Running migrations
webapp        | Generated secret key
webapp        | INFO Initializing GRASS engine using /usr/bin/grass74
webapp        | INFO Booting WebODM 0.5.2
db            | ERROR:  relation "auth_group" does not exist at character 52
db            | STATEMENT:  SELECT "auth_group"."id", "auth_group"."name" FROM "auth_group" WHERE "auth_group"."name" = 'Default'
webapp        | WARNING Could not touch the database. If running a migration, this is expected.
webapp        | Operations to perform:
webapp        |   Apply all migrations: admin, app, auth, contenttypes, guardian, nodeodm, sessions
webapp        | Running migrations:
webapp        |   Applying contenttypes.0001_initial... OK
webapp        |   Applying auth.0001_initial... OK
webapp        |   Applying admin.0001_initial... OK
webapp        |   Applying admin.0002_logentry_remove_auto_add... OK
webapp        |   Applying contenttypes.0002_remove_content_type_name... OK
webapp        |   Applying auth.0002_alter_permission_name_max_length... OK
webapp        |   Applying auth.0003_alter_user_email_max_length... OK
webapp        |   Applying auth.0004_alter_user_username_opts... OK
webapp        |   Applying auth.0005_alter_user_last_login_null... OK
webapp        |   Applying auth.0006_require_contenttypes_0002... OK
webapp        |   Applying auth.0007_alter_validators_add_error_messages... OK
webapp        |   Applying auth.0008_alter_user_username_max_length... OK
webapp        |   Applying nodeodm.0001_initial... OK
webapp        |   Applying app.0001_initial... OK
webapp        |   Applying app.0002_task_auto_processing_node... OK
webapp        |   Applying app.0003_auto_20170615_1300... OK
webapp        |   Applying app.0004_auto_20170707_1014... OK
webapp        |   Applying app.0005_auto_20170707_1014... OK
webapp        |   Applying app.0006_task_available_assets... OK
webapp        |   Applying app.0007_auto_20170712_1319... OK
webapp        |   Applying app.0008_preset... OK
webapp        |   Applying app.0009_auto_20170721_1332... OK
webapp        |   Applying app.0010_auto_20170725_1324... OK
webapp        |   Applying app.0011_auto_20171109_1237... OK
webapp        |   Applying app.0012_public_task_uuids... OK
webapp        |   Applying app.0013_public_task_uuids... OK
webapp        |   Applying app.0014_public_task_uuids... OK
webapp        |   Applying app.0015_public_task_uuids... OK
webapp        |   Applying app.0016_public_task_uuids... OK
webapp        |   Applying app.0017_auto_20180219_1446... OK
webapp        |   Applying app.0018_auto_20180311_1028... OK
webapp        |   Applying app.0019_remove_task_processing_lock... OK
webapp        |   Applying auth.0009_alter_user_last_name_max_length... OK
webapp        |   Applying guardian.0001_initial... OK
webapp        |   Applying nodeodm.0002_processingnode_token... OK
webapp        |   Applying nodeodm.0003_auto_20180625_1230... OK
webapp        |   Applying sessions.0001_initial... OK
webapp        | INFO Initializing GRASS engine using /usr/bin/grass74
webapp        | Checking for celery...   OK
webapp        | Scheduler has shutdown.
webapp        | Generating nginx configurations from templates...
webapp        | - nginx/nginx-ssl.conf
webapp        | - nginx/nginx.conf
webapp        | celery beat v4.1.0 (latentcall) is starting.
worker        | wait-for-it.sh: webapp:8000 is available after 11 seconds
worker        | Checking for celery...   OK
worker        | Starting worker using broker at redis://broker
webapp        | [2018-07-25 13:18:33 +0000] [86] [INFO] Starting gunicorn 19.7.1
webapp        | [2018-07-25 13:18:33 +0000] [86] [INFO] Listening at: unix:/tmp/gunicorn.sock (86)
webapp        | [2018-07-25 13:18:33 +0000] [86] [INFO] Using worker: sync
webapp        | [2018-07-25 13:18:33 +0000] [96] [INFO] Booting worker with pid: 96
webapp        | INFO Initializing GRASS engine using /usr/bin/grass74
webapp        | __    -    ... __   -        _
webapp        | LocalTime -> 2018-07-25 13:18:33
webapp        | Configuration ->
webapp        |     . broker -> redis://broker:6379//
webapp        |     . loader -> celery.loaders.app.AppLoader
webapp        |     . scheduler -> celery.beat.PersistentScheduler
webapp        |     . db -> celerybeat-schedule
webapp        |     . logfile -> [stderr]@%WARNING
webapp        |     . maxinterval -> 5.00 minutes (300s)
worker        | INFO Initializing GRASS engine using /usr/bin/grass74
worker        | /usr/local/lib/python3.6/site-packages/celery/platforms.py:795: RuntimeWarning: You're running the worker with superuser privileges: this is
worker        | absolutely not recommended!
worker        |
worker        | Please specify a different user using the -u option.
worker        |
worker        | User information: uid=0 euid=0 gid=0 egid=0
worker        |
worker        |   uid=uid, euid=euid, gid=gid, egid=egid,
webapp        |
webapp        |
webapp        | Congratulations! └@(・◡・)@┐
webapp        | ==========================
webapp        |
webapp        | If there are no errors, WebODM should be up and running!
webapp        |
webapp        | Open a web browser and navigate to http://localhost:8000
webapp        |
webapp        | NOTE: Windows users using docker should replace localhost with the IP of their docker machine's IP. To find what that is, run: docker-machine ip
webapp        | INFO Initializing GRASS engine using /usr/bin/grass74
webapp        | INFO Booting WebODM 0.5.2
webapp        | INFO Created default group
webapp        | INFO Added view_processingnode permissions to default group
webapp        | INFO Regenerate cache for app/static/app/css/theme.scss
webapp        | INFO Created default theme
webapp        | INFO Regenerate cache for app/static/app/css/theme.scss
webapp        | INFO Regenerate cache for app/static/app/css/theme.scss
webapp        | INFO Created settings
webapp        | INFO Registered [plugins.measure.plugin]
webapp        | INFO Registered [plugins.osm-quickedit.plugin]
webapp        | INFO Running npm install for posm-gcpi
webapp        | npm notice created a lockfile as package-lock.json. You should commit this file.
webapp        | npm WARN [email protected] No description
webapp        | npm WARN [email protected] No repository field.
webapp        |
webapp        | added 1 package in 3.57s
webapp        | INFO Registered [plugins.posm-gcpi.plugin]
webapp        | INFO Added admin to default group
broker        | 1:M 25 Jul 13:23:14.079 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:23:14.080 * Background saving started by pid 15
broker        | 15:C 25 Jul 13:23:14.084 * DB saved on disk
broker        | 15:C 25 Jul 13:23:14.084 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:23:14.180 * Background saving terminated with success
broker        | 1:M 25 Jul 13:28:15.020 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:28:15.020 * Background saving started by pid 16
broker        | 16:C 25 Jul 13:28:15.023 * DB saved on disk
broker        | 16:C 25 Jul 13:28:15.024 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:28:15.121 * Background saving terminated with success
broker        | 1:M 25 Jul 13:33:16.084 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:33:16.084 * Background saving started by pid 17
broker        | 17:C 25 Jul 13:33:16.087 * DB saved on disk
broker        | 17:C 25 Jul 13:33:16.088 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:33:16.185 * Background saving terminated with success
broker        | 1:M 25 Jul 13:38:17.026 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:38:17.026 * Background saving started by pid 18
broker        | 18:C 25 Jul 13:38:17.029 * DB saved on disk
broker        | 18:C 25 Jul 13:38:17.030 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:38:17.126 * Background saving terminated with success
broker        | 1:M 25 Jul 13:43:18.068 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:43:18.068 * Background saving started by pid 19
broker        | 19:C 25 Jul 13:43:18.072 * DB saved on disk
broker        | 19:C 25 Jul 13:43:18.072 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:43:18.168 * Background saving terminated with success
broker        | 1:M 25 Jul 13:48:19.003 * 100 changes in 300 seconds. Saving...
broker        | 1:M 25 Jul 13:48:19.003 * Background saving started by pid 20
broker        | 20:C 25 Jul 13:48:19.007 * DB saved on disk
broker        | 20:C 25 Jul 13:48:19.007 * RDB: 0 MB of memory used by copy-on-write
broker        | 1:M 25 Jul 13:48:19.104 * Background saving terminated with success

— END LOG —

Mm, from your log it’s apparent that the backup was not restored. (The database migrations would not have run if the data had been restored correctly.)

On the AWS machine, what’s the output of:

docker volume ls

?

Hi,

thanks for the reply …

[[email protected] WebODM]$ docker volume ls
DRIVER              VOLUME NAME
local               0068348af6cd09af861759ba61558309cf1f8341e56223838c303b0e2d638008
local               7ce57dd7dfdc5060d6f4b7e166b77b168d347a0e6c14a83f1f691b1cee0b5539
local               8e58c2b155b5ee7983be213420b0f4520f23a5fd1b2dd915cf745b64a8dae77f
local               a79fc77b062bb0c62d3071866219d15588b529efb10c0b21380f8c68a78f5a58
local               c5b37def229c965ab477dcda7f8ef8f07a7d68d4d1ec722f068c365125737e32
local               webodm_appmedia
local               webodm_dbdata
local               webodm_letsencrypt
[[email protected] WebODM]$

Volumes look OK.

Did you run the restore commands from the WebODM directory?

Oops … I think I know my problem: my dbdata.tar is empty … it just has a “/temp” folder in it :frowning:
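(This failure mode is easy to reproduce: if the named volume didn't exist under that exact name on the Mac, `docker run --volume webodm_dbdata:/temp …` silently creates a fresh empty volume, and tar then archives the empty directory without any error. A self-contained demonstration with stand-in /tmp paths:)

```shell
# Tarring an empty directory "succeeds" without complaint, which is how
# an empty dbdata.tar gets produced when the volume name doesn't match
# an existing volume (docker auto-creates a fresh empty one instead).
mkdir -p /tmp/empty-vol/temp
tar cf /tmp/empty-vol/dbdata.tar -C /tmp/empty-vol temp
tar tf /tmp/empty-vol/dbdata.tar    # prints only: temp/
```

`docker volume ls` on the source machine is the quick way to confirm the exact volume names before backing up.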

So, if I don’t have a backup of my Postgres DB but I do have my media … I suppose I can just re-submit the original images and run the processing again?


Mm yeah, unfortunately you need both. It wouldn’t be impossible to manually retype all the information in the database, but that would require a fairly in-depth knowledge of WebODM’s internals. It might be quicker to just reprocess.