I saw some topics similar to this, but not exactly the same.
I’m running the server install on Ubuntu 16.04, with lots of memory, disk, and CPU.
Yesterday I ran a job at high resolution with 169 images; the job completed in 44 minutes.
Today I ran another job with 739 images. I set the options to “HighRes” and “Do not resize images”. The upload to AWS took forever… but the progress bar was fine. After the progress bar hit 100%, the screen changed to just “uploading” and has been hung like that for a long time now. top on the server shows that pretty much nothing is going on. I checked the directory /webodm/app/media/project/2/task/XXXXX/ and all the images are actually there at their real size, but WebODM still shows uploading - see screenshots
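For reference, here’s how I’m checking that the images landed (just counting files and totalling the size of the task directory; XXXXX stands in for the task ID, which I’m not posting):

$ ls /webodm/app/media/project/2/task/XXXXX/ | wc -l
$ du -sh /webodm/app/media/project/2/task/XXXXX/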
Yes, I restarted the services and rebooted the machine - still the same status, it just says uploading… but the images are definitely there. Is there any way I can resolve this without re-uploading the images?
Hey @pierotofy I’m actually running the purchased server install. I’ve tried restarting those services and rebooting the machine. Is there a way to create a new task on the server and copy the images in? Maybe add 1 image to a new project and then manually copy the rest, something like the sketch below? I have lots of disk space (a few TBs).
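To sketch what I have in mind (purely a guess at a workaround, not something I’ve confirmed works - the project/task IDs below are placeholders, and I don’t know whether WebODM would pick the files up or what ownership it expects):

$ # copy the already-uploaded images into the new task's directory (IDs are placeholders)
$ sudo cp /webodm/app/media/project/2/task/OLD_TASK_ID/*.JPG /webodm/app/media/project/3/task/NEW_TASK_ID/
$ # match ownership to whatever the new task's directory already has
$ sudo chown -R --reference=/webodm/app/media/project/3/task/NEW_TASK_ID /webodm/app/media/project/3/task/NEW_TASK_ID/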
I just noticed something on the server regarding webodm-celerybeat: it was not running. I restarted it, and for a few seconds it’s OK, but then it dies - here is the output:
$ sudo service webodm-celerybeat restart
$ sudo service webodm-celerybeat status
● webodm-celerybeat.service - Start WebODM Celery Scheduler Service Container
Loaded: loaded (/webodm/service/webodm-celerybeat.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-11-03 18:08:22 UTC; 687ms ago
Process: 2305 ExecStop=/bin/kill -s QUIT $MAINPID (code=exited, status=0/SUCCESS)
Main PID: 2309 (celery)
Tasks: 1
Memory: 43.4M
CPU: 685ms
CGroup: /system.slice/webodm-celerybeat.service
└─2309 /webodm/python3-venv/bin/python3 /webodm/python3-venv/bin/celery -A worker beat
Nov 03 18:08:22 ip-XXXXXXX systemd[1]: Started Start WebODM Celery Scheduler Service Container.
$ sudo service webodm-celerybeat status
● webodm-celerybeat.service - Start WebODM Celery Scheduler Service Container
Loaded: loaded (/webodm/service/webodm-celerybeat.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Sat 2018-11-03 18:08:26 UTC; 8s ago
Process: 2343 ExecStop=/bin/kill -s QUIT $MAINPID (code=exited, status=0/SUCCESS)
Process: 2336 ExecStart=/webodm/python3-venv/bin/celery -A worker beat (code=exited, status=73)
Main PID: 2336 (code=exited, status=73)
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: webodm-celerybeat.service: Unit entered failed state.
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: webodm-celerybeat.service: Failed with result 'exit-code'.
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: webodm-celerybeat.service: Service hold-off time over, scheduling restart.
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: Stopped Start WebODM Celery Scheduler Service Container.
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: webodm-celerybeat.service: Start request repeated too quickly.
Nov 03 18:08:26 ip-XXXXXXX systemd[1]: Failed to start Start WebODM Celery Scheduler Service Container.
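In case it’s useful, I’m also pulling the journal for the unit to see why celery exits with status 73 (assuming the journal has kept those lines):

$ sudo journalctl -u webodm-celerybeat -n 50 --no-pager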
Interesting question, which actually has to do with another feature request I have… but that’s for later.
When I’m only doing the uploading I save cost and allocate 4GB - this particular server I started with 8GB - and when I actually run the ODM processing I restart on a new server with 144GB of RAM.
So this error is happening with 8GB.
Odd, I didn’t shut it down, but now, after killing that PID, celerybeat seems fine and doesn’t shut down anymore. But I’m still in the same status as before:
What do you mean by “upload”? As I said in the OP, the images had completed uploading (it took 12 hours…); it went into this state after the progress bar completed.
All the images (739 of them) are already in the directory on the server.
Understood, I hope that’s what it’s doing…
Is there any way to see the progress of this, or whether it’s really happening? Because right now, at least from top, it seems that things are pretty idle…
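The only crude checks I can think of are standard tools, nothing WebODM-specific - watching the task directory and looking for any celery/gunicorn activity (I’m assuming those are the relevant processes on this install):

$ watch -n 30 du -sh /webodm/app/media/project/2/task/XXXXX/
$ ps aux | grep -E 'celery|gunicorn' | grep -v grep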
This actually has to do with another feature request which I will detail on GitHub for you, but the server install on AWS would be much better if we could spin up two separate servers as a default: one for WebODM (which can run on something like a t3.medium, which is quite cheap) and one for NodeODM/ODM, which usually needs a much bigger machine. The second machine could be turned on only when a job needs to be done. This would save a lot of money.
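To make that concrete, this is roughly what I do by hand today with the AWS CLI (the instance ID is a placeholder for the big processing node):

$ # bring up the 144GB processing node only when a job is ready to run
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0
$ # ...and shut it back down afterwards so it stops billing
$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0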