Lightning - Tasks stuck in queue

I have the Lightning tier that allows 3,000 images and 4 tasks. I've had some tasks entering the queue unexpectedly when using Lightning.

I assumed the top tier meant I could have 4 tasks, each with up to 3,000 images, running simultaneously.

Is that correct? Or is it 3,000 images max across all tasks combined?

I've also noticed that on occasion tasks will move in and out of the queue, or get stuck in the queue and never go back to running. Is this just the nature of the beast?

Large tasks (> 1,500 images) might get temporarily queued if there's a lot of processing happening and the network is at full capacity. Usually this doesn't take too long. It's also normal to sometimes see a task "running" and then switch back to "queued" when that full-capacity scenario occurs.


Thanks for that info. The network must have been busy the last day or two.

I had three projects running overnight: 444 images, 1,700 images, and 2,400 images.

The two larger ones failed with an error related to checking my tokens.

I cleared out all tasks, refreshed Lightning, restarted my PC, and then started the 1,700-image set again.

It uploaded but is now sitting in the queue for some reason.


It is now processing 🙂


I'm also in the process of buying new servers. They should be set up by next week and will help reduce congestion and queueing during peak times.


Thanks for that news. I'm getting great value out of Lightning; just a bit of trouble the last few days, as it's a busy time of year for mapping.

Have you ever considered an additional price tier? I could pay more if it meant a smoother ride in terms of queueing and higher maximum image resolution/capacity.

This reminds me of another question. I've been reading about how to deal with extra-large datasets, up to 10,000 images. From my reading it sounds like split-merge takes the limits off. Is this something Lightning might handle in the future? Or will it require me to build my own machine?
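For context on split-merge: on a self-hosted ODM install (not Lightning) it is typically driven by the `--split` and `--split-overlap` options, which partition a large dataset into submodels that are processed separately and then merged. A rough sketch of that kind of invocation, where the dataset path, project name, and values are illustrative placeholders:

```shell
# Illustrative split-merge run on self-hosted ODM via Docker.
# --split: rough target number of images per submodel.
# --split-overlap: overlap between submodels, in meters.
# Paths, project name, and values here are examples only.
docker run -ti --rm \
  -v /my/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets big-survey \
  --split 1000 \
  --split-overlap 150
```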


It's a possibility; we just haven't had much demand up to this point.

Maybe, for the same reason: datasets this large are rare, and setting up the infrastructure for them is very costly. Because few people need to process datasets this large, it's difficult to take advantage of economies of scale, and the cost becomes too high.

