This tutorial will cover setting up NATS, a Subspace Node, and a Farming Cluster. I will cover everything in three sections:
Cluster Architecture - Diagram and explanation of what I'm hoping to achieve
NATS & Subspace Node - Getting the NATS server and Node going
Farming Cluster - Connecting the Farming Cluster
Cluster Architecture
Alpha Server
I use this server for a variety of tasks, so I don't want a Plotter or Farmer running on it. As a result, I will only be running the four containers required to set up Farming Clusters on my other servers. These will be divided into three stacks:
NATS Stack: This will run the NATS server.
Node Stack: This will run the Subspace Node.
Cluster Stack: This will run the Controller and the Cache.
Bravo & Charlie Servers
These servers were previously running just a Farmer, but with clusters they will be running a Farmer + Plotter. The Farmer will point to all the disks available on the PC.
Without clusters, Bravo would only be able to plot on the disks physically connected to Bravo, and the same for Charlie. Now both Bravo and Charlie can help plot all the disks connected to the Cluster.
NATS & Subspace Node
Now that the plan is clear, let's implement it. First, create a new directory on the server you will be using for NATS. I am going to make a generic "subspace_cluster" directory that I will also store the cache files in. If you are going to run the Farming Cache on a different server, you will have to make a directory on that server as well. But as shown in my system architecture diagram, the Cache and the NATS server will be on the same server.
mkdir ~/subspace_cluster && mkdir ~/subspace_cluster/nats && mkdir ~/subspace_cluster/cache && mkdir ~/subspace_cluster/controller
Set Ownership to nobody:nogroup:
sudo chown -R nobody:nogroup ~/subspace_cluster/cache/ && sudo chown -R nobody:nogroup ~/subspace_cluster/controller/
There is now a folder within subspace_cluster for 'nats', 'cache', and 'controller'.
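If you want to double-check the layout and ownership before moving on, a quick listing should show the cache and controller folders owned by nobody:nogroup (exact output varies by system):
ls -ld ~/subspace_cluster/*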
Next, create the nats.config file:
nano ~/subspace_cluster/nats/nats.config
Then paste in the following:
max_payload = 2MB
Press CTRL+X, then Y, and then ENTER to save and exit nano.
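Optionally, you can sanity-check the config before building the stack. This is just a sketch: it assumes the official nats image's entrypoint is the nats-server binary and that your config lives at the path created above, and it uses nats-server's -t option to test the configuration and exit:
docker run --rm -v ~/subspace_cluster/nats/nats.config:/nats.config:ro nats -c /nats.config -t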
Now go to Portainer and create a new stack on the server you will run NATS on. I call this stack 'nats'. In the web editor, paste in:
version: '3.8'

services:
  nats:
    image: nats
    container_name: nats
    restart: unless-stopped
    ports:
      - "4222:4222"
    volumes:
      - /home/hakedev/subspace_cluster/nats/nats.config:/nats.config:ro
    command: ["-c", "/nats.config"]
    networks:
      cluster_network:
        ipv4_address: 172.25.0.2

networks:
  cluster_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/16
In this stack we:
Create the NATS service
Open port 4222 (change this if there is a conflict)
Pass in the nats.config
Assign NATS the IP address 172.25.0.2
Create the "cluster_network" network
Note that Portainer prefixes the Docker network name with the stack name, so "cluster_network" in a stack called "nats" ends up as "nats_cluster_network", which is the name the later stacks reference as an external network. Any other containers that need to connect to NATS on this server can use the IP address 172.25.0.2, but if another server needs to connect to NATS it will have to use the host IP; you will see this later.
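Once the stack is deployed, you can quickly confirm NATS came up from the host. The exact log wording varies by version, but you should see a line about listening for client connections on 4222, and the published port should answer (the second check assumes netcat is installed):
docker logs nats
nc -zv 127.0.0.1 4222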
Now set up the Node by creating another stack called subspace_node. If you already have a node, there should be no issue leaving it as it is; you will just need to make sure your Controller can connect to it. Here is my node stack:
version: "3.8"

services:
  node:
    container_name: subspace_node
    image: ghcr.io/subspace/node:gemini-3h-2024-may-24
    volumes:
      - node-data:/var/subspace:rw
    ports:
      - "0.0.0.0:30334:30334/tcp"
      - "0.0.0.0:30434:30434/tcp"
      - "9944:9944"
    restart: unless-stopped
    command:
      [
        "run",
        "--chain", "gemini-3h",
        "--base-path", "/var/subspace",
        "--listen-on", "/ip4/0.0.0.0/tcp/30334",
        "--dsn-listen-on", "/ip4/0.0.0.0/tcp/30434",
        "--rpc-cors", "all",
        "--rpc-methods", "unsafe",
        "--rpc-listen-on", "0.0.0.0:9944",
        "--farmer",
        "--name", "hakehardware"
      ]
    networks:
      nats_cluster_network:
        ipv4_address: 172.25.0.3
    labels:
      com.hakedev.name: "Subspace Alpha Node"
    environment:
      - TZ=America/Phoenix

volumes:
  node-data:

networks:
  nats_cluster_network:
    external: true
In this stack file we:
Mapped the node-data volume
Exposed port 9944 so our controller can connect
Set the network to use the nats_cluster_network with IP 172.25.0.3
Assigned a label (change to your liking)
Set the environment's TZ to America/Phoenix
Created a volume for the Node
Imported the nats_cluster_network
NATS and the Subspace Node should now be running.
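Before moving on, it's worth confirming the Node's RPC is reachable, since the Controller will need it. Here is a minimal sketch using the standard Substrate system_health method, run on the Alpha host where port 9944 is published; recent Substrate-based nodes accept plain HTTP JSON-RPC on the same port as WebSocket, so you should get back a small JSON object with peer and sync info:
curl -s -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}' http://127.0.0.1:9944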
Farming Cluster
I will be running ONLY the Controller and the Cache on the Alpha server; keep in mind you can also run the Farmer and/or Plotter there as well. This last part is composed of two steps: first, the Controller and Cache will be deployed; then the Plotter and Farmer will be deployed on the Bravo and Charlie servers.
Create a new stack called ‘cluster_stack’. Then paste in:
version: '3.8'

services:
  farmer_controller:
    container_name: subspace_farmer_controller
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    volumes:
      - /home/hakedev/subspace_cluster/controller:/controller
    command:
      [
        "cluster",
        "--nats-server", "nats://172.25.0.2:4222",
        "controller",
        "--base-path", "/controller",
        "--node-rpc-url", "ws://172.25.0.3:9944"
      ]
    labels:
      com.hakedev.name: "Alpha Farmer Controller"
    environment:
      - TZ=America/Phoenix
    networks:
      nats_cluster_network:
        ipv4_address: 172.25.0.4

  farmer_cache:
    container_name: subspace_farmer_cache
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    volumes:
      - /home/hakedev/subspace_cluster/cache:/cache
    command:
      [
        "cluster",
        "--nats-server", "nats://172.25.0.2:4222",
        "cache",
        "path=/cache,size=200GB"
      ]
    labels:
      com.hakedev.name: "Alpha Farmer Cache"
    environment:
      - TZ=America/Phoenix
    networks:
      nats_cluster_network:
        ipv4_address: 172.25.0.5

networks:
  nats_cluster_network:
    external: true
In this stack file we:
Deploy the Farmer Controller
Bind the controller folder to the container
Specify our NATS IP, base-path, and Node RPC URL
Set the Label to Alpha Farmer Controller (change to your liking)
Set the TZ
Set the Controller's IP to 172.25.0.4
Deploy the Farmer Cache
Bind the cache folder to the container
Specify our NATS IP and cache path/size
Set the Label to Alpha Farmer Cache (change to your liking)
Set the TZ
Assign the IP address 172.25.0.5 to the Cache
Import the nats_cluster_network
There is a lot happening here, but this is the core of a Farming Cluster. You should see a log on your Controller stating that a Cache has been discovered.
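If you are watching from the command line instead of Portainer's log viewer, tailing the Controller container (named as in the stack above) is the quickest way to catch that message:
docker logs -f subspace_farmer_controller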
With this deployed, Plotters and Farmers can be hooked up from anywhere that can reach the NATS server. In my case I will deploy them on Bravo and Charlie. As mentioned a few times now, you could also deploy a Farmer and Plotter on the same server as the Controller and Cache; in that case you would just add them to the "cluster_stack". Simply paste this in below the farmer_cache and above the networks:
  farmer:
    container_name: subspace_farmer
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    volumes:
      - /media/subspace/subspace00:/subspace00
    command:
      [
        "cluster",
        "--nats-server", "nats://172.25.0.2:4222",
        "farmer",
        "--reward-address", "st9uEvR9ZqnovgwmZ5s7rWAh2ACNZzBdVsBNHdjmqBUgXHS9B",
        "path=/subspace00,size=3.6T"
      ]
    environment:
      - TZ=America/Phoenix
    labels:
      com.example.name: "Delta Farmer"
    networks:
      nats_cluster_network:
        ipv4_address: 172.25.0.6

  farmer_plotter:
    container_name: subspace_farmer_plotter
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    command:
      [
        "cluster",
        "--nats-server", "nats://172.25.0.2:4222",
        "plotter"
      ]
    environment:
      - TZ=America/Phoenix
    labels:
      com.example.name: "Delta Cluster Plotter"
    networks:
      nats_cluster_network:
        ipv4_address: 172.25.0.7
Here we are:
Deploying the Farmer
Binding the farming disk I mounted to subspace00 (update for your case)
Specifying the NATS server using the local IP/port
Specifying the reward address and disk path/size (update for your case; see the quick disk check after this list)
Setting the TZ & Label (update for your case)
Assigning it the IP 172.25.0.6 on the nats_cluster_network
Deploying the Plotter
Specifying the NATS server using the local IP/port
Setting the TZ & Label (update for your case)
Assigning it the IP 172.25.0.7 on the nats_cluster_network
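Before settling on a path=...,size=... value, it doesn't hurt to confirm how much space the disk you are bind-mounting actually has. Checking the host path (the path below is mine, update for your case) will show it:
df -h /media/subspace/subspace00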
You would only do this if you want to farm and plot a disk on the same server as your Controller and Cache. In addition (or in lieu of this), you can also deploy the Plotter and Farmer on other servers, which I will now do.
On each of those servers, create a stack called "cluster_stack". Then paste in:
version: '3.8'

services:
  farmer:
    container_name: subspace_farmer
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    volumes:
      - /media/subspace/subspace00:/subspace00
    command:
      [
        "cluster",
        "--nats-server", "nats://192.168.69.101:4222",
        "farmer",
        "--reward-address", "st9uEvR9ZqnovgwmZ5s7rWAh2ACNZzBdVsBNHdjmqBUgXHS9B",
        "path=/subspace00,size=3.6T"
      ]
    environment:
      - TZ=America/Phoenix
    labels:
      com.example.name: "Bravo Farmer"

  farmer_plotter:
    container_name: subspace_farmer_plotter
    image: ghcr.io/subspace/farmer:gemini-3h-2024-may-24
    command:
      [
        "cluster",
        "--nats-server", "nats://192.168.69.101:4222",
        "plotter"
      ]
    environment:
      - TZ=America/Phoenix
    labels:
      com.example.name: "Bravo Cluster Plotter"
This is similar to deploying the Farmer and Plotter on Alpha, but there are important differences now that we are on a different server.
Deploy the Farmer
Bind mount the farming disks
For the NATS server, I am now using the Host IP for Alpha (192.168.69.101). Update this for your case.
Specify the reward address and disk path/size (update for your case)
Set the TZ and Label (update for your case)
Deploy the Plotter
For the NATS server, I am now using the Host IP for Alpha (192.168.69.101). Update this for your case.
Set the TZ and Label (update for your case)
The main takeaway is that you now need to use the Host IP of your NATS server. You can repeat this for every server. Also, note that you don't have to run both the Farmer and the Plotter; you can run one or the other depending on your setup.
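If you are not sure what the Host IP of your NATS server is, you can check it on that machine; either of these works on most Linux systems (the output format differs):
hostname -I
ip -4 addr show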
For each Farmer that hooks up to NATS, you can check your Controller for logs saying the Farm was discovered.
At this point everything should be connected. The logs are not super verbose, so keep an eye on everything. Here are some of the logs I got for each of the Farming Cluster components:
Controller
Cache
(not a whole lot here - but you can see in the Controller the Cache was discovered)
Farmer
Plotter
Note that for each component there was at least a message saying that NATS was connected.
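If you don't want to scroll through each container, a quick grep over the logs is enough to spot those connection messages (the exact wording varies by release, so match loosely):
docker logs subspace_farmer_controller 2>&1 | grep -i nats
Repeat with the other container names (subspace_farmer_cache, subspace_farmer, subspace_farmer_plotter) as needed.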
That's it! Hopefully you were able to get everything working. If not, feel free to leave a comment on the YT channel or find me in the Subspace Discord. Happy (cluster) Farming.
Alex,
I'm trying to set up your cluster concept for Autonomys, but I keep getting an error and I haven't been able to find out what is causing it. I've set up the two containers, "nats" and "node", but cannot get the node to run without error. I'm not sure where the error is coming from, but this is in the node error log (I don't see the Subspace node actually running). I had previously run the node from the CLI and it still runs fine. I've tried my own setup with names and IP addresses, and I've also gone back and duplicated your setup exactly. Here is the error, repeated over and over:
Error: SubstrateService(Other("Failed to convert network keypair: Os { code: 13, kind: PermissionDenied, message: \"Permission denied\" }"))
I thought it ran as root, so I don't see why there should be a permission issue. Error 13 is a permission denied error, but the node data is on /home/subspace, which is always mounted, and the actual error is a keypair error. It appears that the node starts over and over, so I guess the error is coming from the node starting up. It starts up fine with my CLI setup. I'm still looking, but if you have seen this before or have any suggestions, I would appreciate them.
Thanks