Okay I have to admit, I love Subspace Clusters. With Docker it is pretty straightforward to get going, and you should not have any issues migrating from a non-Cluster farmer to a Cluster farmer. There are official docs here, which are a bit generic. I will have an entire Subspace Cluster article coming out right after this, so hold tight if you want to implement it entirely on Docker!
The docs specify four components to Cluster Farming but I think it’s more like six if we are talking about the entire Subspace ecosystem.
NATS: This is where the magic of Cluster Farming happens. All of your components (except the Node) will connect to the NATS server.
Node: This runs like normal, and I prefer to keep it in its own stack file. You only need a single node. The Controller (described below) connects to the node.
Controller: This is sort of the brains of the operation, and it connects to the Node (and NATS).
Cache: This is where your Piece Cache will go. You need to hook up a directory to this, and the docs say 200GB is recommended. Don’t worry, this actually saves you space, as you won’t need to reserve space on your farming drives anymore.
Farmer: This will contain your reward address and the path to the farming disk.
Plotter: This will do the plotting, and it will help plot any farm that is connected to NATS.
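To make those six pieces a bit more concrete before the diagrams, here is a minimal sketch of the four cluster components as a single docker-compose stack. Treat it as a shape reference only: the image tag, flag names, sizes, and paths are placeholders based on my reading of the official docs, and nats-host, node-host, and the reward address are stand-ins for your own values, so double check all of it against the current release.

```yaml
# Sketch only - image tag, flags, sizes and paths are illustrative
# placeholders, not copy-paste config. Check the official Cluster
# Farming docs for the exact syntax of the release you are running.
services:
  controller:                              # the brains; talks to the Node and NATS
    image: ghcr.io/subspace/farmer:latest  # placeholder tag
    volumes:
      - /srv/subspace/controller:/controller
    command: >
      cluster --nats-server nats://nats-host:4222
      controller --base-path /controller --node-rpc-url ws://node-host:9944

  cache:                                   # holds the Piece Cache (~200GB recommended)
    image: ghcr.io/subspace/farmer:latest
    volumes:
      - /srv/subspace/cache:/cache
    command: >
      cluster --nats-server nats://nats-host:4222
      cache path=/cache,size=200GiB

  farmer:                                  # reward address + paths to the farm disks
    image: ghcr.io/subspace/farmer:latest
    volumes:
      - /mnt/farm01:/farm01
    command: >
      cluster --nats-server nats://nats-host:4222
      farmer --reward-address YOUR_REWARD_ADDRESS path=/farm01,size=4TB

  plotter:                                 # plots for any farm connected to NATS
    image: ghcr.io/subspace/farmer:latest
    command: >
      cluster --nats-server nats://nats-host:4222
      plotter
```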
So there are a lot of things going on - here are a few diagrams to help visualize everything:
Okay that’s kind of small. I’d recommend watching me explain this on my YT channel if this isn’t clicking. The basic idea is that you have a NATS server that allows you to connect a bunch of plotters up to a bunch of farmers. You can have a server be JUST a farmer or plotter depending on how it’s set up. This really does change the game quite a bit. A quick warning though, this is bandwidth intensive! So I recommend keeping this on your local network, and if you can run 10 or 40GbE, even better. You only need a single NATS server, Node, Farmer Controller and Farmer Cache.
Here is one more diagram before we jump into getting things set up. In this one I remove references to specific servers so you can see a more general flow:
You can see that the Cache, Controller and Node each have only one instance (though you can actually run more instances of the Cache if you want). Then you can attach as many Farmers as you want (with as many Farms), and as many Plotters as you want.
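To make the “as many Farmers, with as many Farms” part concrete, a second farm-only box would just run the farmer component pointed at the same NATS server, with one path=,size= entry per disk. Same caveat as above: the image tag and flags are placeholders based on my reading of the docs, so verify them before using.

```yaml
# Sketch of a farm-only machine joining an existing cluster.
# A plotter-only box would instead run the `plotter` subcommand,
# with no disks or reward address at all.
services:
  farmer:
    image: ghcr.io/subspace/farmer:latest  # placeholder tag
    volumes:
      - /mnt/farm02:/farm02
      - /mnt/farm03:/farm03
    command: >
      cluster --nats-server nats://nats-host:4222
      farmer --reward-address YOUR_REWARD_ADDRESS
      path=/farm02,size=4TB
      path=/farm03,size=8TB
    restart: unless-stopped
```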
Running this in Docker is easy. I’ll be releasing a tutorial here and on my YT channel with details on getting it set up. Here are a few key things for running a Farming Cluster in Docker:
I create a dedicated stack file for my Node (sketched after this list). It doesn’t connect to NATS and you only need a single Node. It just felt more natural to do it this way.
You only need a single NATS instance, so I also keep this in a separate stack file (also sketched below).
NATS must be running in order to start your Controller and other components. If NATS goes down, your other components will be very grumpy. Not to get too inceptiony on you, but you can also run a NATS cluster to have a failover for your Farming Cluster. I’ll have a tutorial on this at some point.
You only need a single Controller and Cache. You can hook up more Caches, though I’m not entirely sure in what scenario you would want to. The Cache is recommended to be 200GB.
You can run your Farmer and Plotter on the same server as all your other stuff, or dedicate a specific server to host it all.
You can run just a Farmer or just a Plotter; you don’t have to run both. Technically you could put all your disks on a single low-powered server and then hook up a bunch of high-powered plotters. Keep in mind you would need to keep the plotters hooked up for replots, but this is great if you have some extra resources on the network and want to speed up plotting.
From what I can tell this will use a lot of bandwidth. 10-40GbE is recommended, but you should probably be fine with 1GbE, and that is really the minimum. And I don’t think it would be useful to try setting this up over the internet.
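Since the Node and NATS each live in their own stack file, here are sketches of what those two files might look like. As with the earlier examples, the image tags, chain name, ports, and the max_payload value are assumptions based on my reading of the official docs at the time of writing, so check them against the current release.

```yaml
# Node stack sketch - image tag, chain and flags are placeholders.
services:
  node:
    image: ghcr.io/subspace/node:latest    # placeholder tag
    volumes:
      - /srv/subspace/node:/var/subspace
    ports:
      - "30333:30333"                      # p2p
      - "9944:9944"                        # RPC, used by the Controller
    command: >
      run --chain gemini-3h --farmer --base-path /var/subspace
      --rpc-listen-on 0.0.0.0:9944 --rpc-methods unsafe --rpc-cors all
    restart: unless-stopped
```

And the NATS stack, which (per the official docs) gets a small config file to raise the message size limit:

```yaml
# NATS stack sketch - nats.config is assumed to contain something
# like `max_payload = 2MB`, per the official Cluster Farming docs.
services:
  nats:
    image: nats:latest
    ports:
      - "4222:4222"
    volumes:
      - ./nats.config:/nats.config:ro
    command: ["-c", "/nats.config"]
    restart: unless-stopped
```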
Stay tuned for the next article where I will go over my Farming Cluster setup.