Offer of help, if needed

My main idea is to let people use these machines as your programming model progresses, namely the distributed compute model, CUDA capability, and enough automation that I don’t have to do much, if anything. :slight_smile:

I have some computers, mostly running BOINC, and I was wondering whether they could be useful for the ODM project. There is also a NAS server with a few unused terabytes. :slight_smile:
They aren’t the absolute latest and greatest, but they have lots of idle time:

  • The first is an AMD FX-8350 with 16 GB RAM and a GTX 960. It runs Windows 7 (gack) with about 5 TB of free space, but could easily run Debian 9 (stretch) instead.
  • The second, an AMD FX-9370 with 16 GB, runs Debian 9 full-time (down for repair at the moment), has lots of free space (I forget how much while it’s down), and uses an R9 video card.
  • The main system runs a Ryzen 7 with 32 GB, about 8 TB of free disk space, a GTX 1050 Ti, and two NVMe cards as the primary drive. It can be upgraded to 64 GB of memory as needed.

I also have residential gigabit to the house.

If it starts getting crazy, I might have to ask for help with the electricity, which is about 12 cents per kWh here. I haven’t checked their consumption lately, but it’s not all that much, and still cheaper than AWS (maybe).

So, my idea is to use the NAS as a gateway and image repository, with all the heavy lifting performed on the desktops. The NAS has a whopping i3 (yeah, buddy!), so it could carry some light loads as necessary, but it is so difficult to program for…

There are quite a few older, much less powerful machines that I just can’t seem to throw away, so if needed, they could serve as gatekeepers, etc. :slight_smile:

Would any of this be useful to the ODM project?

(I plan to upgrade the Ryzen system to a GTX 1080 Ti as soon as (if?) the cards actually become affordable.)


Hey @skypuppy thanks for offering the use of your resources!

This brings back an idea we floated some time ago about organizing a network of processing nodes managed by volunteers.

It would essentially be a network of volunteer nodes with a central repository for temporary S3 data storage.

This network would then be integrated (which is a work in progress) and offered as a choice for processing datasets.
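To make the idea a bit more concrete, here is a rough sketch of the claim step a volunteer node might perform against the central repository. Everything here is illustrative: `claim_task` and the dataset names are made up for this post, not part of any existing ODM API.

```python
# Hypothetical sketch (illustrative names, not real ODM code): the
# central repository queues datasets, and volunteer nodes claim them.
import queue

# The central repository holds datasets waiting for a processing node.
tasks = queue.Queue()
for dataset in ["survey_farm_a", "survey_farm_b"]:
    tasks.put(dataset)

def claim_task(q):
    """A node polls for the next available dataset; None if idle."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

first = claim_task(tasks)  # oldest queued dataset
```

In a real deployment the queue would live behind a network API rather than in memory, but the shape is the same: nodes pull work when they have idle cycles, which fits machines that already spend their spare time on BOINC.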

There are a few things still to plan out / solve. For example:

  • Who’s going to manage such a network?
  • How does the network fund itself / motivate node operators to keep operating (pay for electricity, at the very least)?
  • How do we know a processing node hasn’t been compromised and isn’t stealing or altering data without the users’ consent or knowledge? Most people are good actors, but in every distributed system these are concerns to think about.
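On that last point, a checksum handshake is one standard building block. A minimal sketch, with made-up helper names (nothing here is current ODM code), and note this only detects data altered along the way; it cannot stop a dishonest node from fabricating results:

```python
# Illustrative sketch of tamper detection between submitter and node.
# sha256_digest is a hypothetical helper, not part of any ODM API.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest used to detect bytes altered in transit or at rest."""
    return hashlib.sha256(data).hexdigest()

# The submitter records the digest before handing images to a node.
images = b"raw drone imagery"
expected = sha256_digest(images)

# Anyone downstream re-hashes what they received; a mismatch means
# the data changed somewhere between submitter and node.
assert sha256_digest(b"raw drone imagery") == expected
```

Verifying the *outputs* of a node is the harder problem; schemes like redundant processing on independent nodes (the approach BOINC itself uses for result validation) are one direction worth discussing.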

In short, we could use the resources sometime in the near future. What we need most right now, however, is help figuring out the organizational details of such an operation and help implementing it.

I’d be happy to give guidance and be part of such an effort, but I currently can’t lead it.

How about the BOINC tool?