GitHub: Hake Hardware
Okay so one thing is obvious. We haven’t even gone over how to create postdata yet! I know. That is coming. But I think it’s important to get your nodes up and running first. If you survive that, then it’s time to create some postdata! But actually before that, I want to cover a few things about this setup.
How do I query gRPC?
Typically, when you run a node directly on the host, you can query the gRPC server for various things like the event stream. This gets you access to important information like your layers. Well, it turns out it's the same here, but different, but better! Just like usual, you first need Go and grpcurl on the host.
Download Go:
wget https://go.dev/dl/go1.21.0.linux-amd64.tar.gz
Extract:
tar -xf go1.21.0.linux-amd64.tar.gz
Remove any previous installation:
sudo rm -rf /usr/local/go
Set extracted folder ownership:
sudo chown -R root:root ./go
Move the folder:
sudo mv -v go /usr/local
Update your profile:
sudo nano /etc/profile
Go to the bottom and paste:
# Go Lang Path
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
Reload your profile:
source /etc/profile
Install grpcurl:
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
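Before moving on, it's worth confirming both tools are actually on your PATH (grpcurl ships a -version flag for exactly this):
# Confirm Go and grpcurl resolve from the updated PATH
go version
grpcurl -version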
So far everything has been the same. What's different is that we need to use the container's IP address. If you followed my guide exactly, your first node should be assigned '172.18.0.101', so to query it with grpcurl do the following:
grpcurl --plaintext -d "{}" 172.18.0.101:9092 spacemesh.v1.NodeService.Status
In my mind this is better! All you have to do is switch out the IP and you can easily query all your nodes. NOTE: You must have modified your config.mainnet.json file so that 'grpc-private-listener' is set to '0.0.0.0:9093'. If you used my config, this has already been changed.
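And since the nodes sit on sequential IPs, a small shell loop can check them all in one go. This is just a sketch, assuming your nodes were assigned 172.18.0.101 through 172.18.0.115 as in my setup:
# Query the status of every node (adjust the range to match your node count)
for i in $(seq 101 115); do
  echo "--- Node 172.18.0.$i ---"
  grpcurl --plaintext -d "{}" "172.18.0.$i:9092" spacemesh.v1.NodeService.Status
done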
Public and Private Nodes
Okay, this is a good one. I have 15 nodes, and with 30 peers each that is A LOT of connections. This will need to be its own article. I haven't actually implemented this yet and I need to do some testing, but it should be totally possible! I prefer to put my public nodes on a separate host, so it will be a little more complicated, but I'll make a tutorial for how to do it on the same host as well. Stay tuned!
Updates
So when a new version of go-spacemesh launches, all you need to do is go into your stack and update the version listed for each node. If you only want to update one node, then only update that one node in the stack. When you redeploy, only the changed containers are recreated, so you can update incrementally or all at once. This is WAY faster than managing them independently.
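If you also keep a copy of the stack as a compose file on disk, you can bump every node in one shot. A minimal sketch, assuming a file named docker-compose.yml and hypothetical version tags v1.3.0 and v1.3.1:
# Swap the old image tag for the new one across every service
sed -i 's|spacemeshos/go-spacemesh:v1.3.0|spacemeshos/go-spacemesh:v1.3.1|g' docker-compose.yml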
Logs
Yes, there are logs. By default they are handled by Docker. This is… okay. It's fine. You can view the logs in Portainer by going to the Containers tab and then clicking the little document icon:
But this is really a pain. You can also access the logs from the host system, but first you need to know your container ID. On the same screen as above, you can get the container ID by clicking the node you want; it will be listed there.
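If you'd rather stay on the command line, you can also list every container's ID and name straight from the host:
# List running containers with their IDs and names
docker ps --format "table {{.ID}}\t{{.Names}}"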
Armed with your container ID, enter the following on the host:
docker logs --tail 1000 -f <your container id>
This will print the logs going back 1000 lines and then follow new output. You can also run more complex queries for specific times; see the official docs for the full list of options.
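For example, docker logs accepts --since and --until to narrow the output to a time window:
# Show only the last hour of logs
docker logs --since 1h <your container id>
# Show logs between two specific timestamps
docker logs --since 2024-01-01T00:00:00 --until 2024-01-01T06:00:00 <your container id>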
If you need the whole log for diagnosing issues (for example, to upload it to Discord), you can also dump it into a text file:
docker logs <your container id> > logs.txt 2>&1
Note that the '>' after the container ID is part of the command; do not delete it when adding your container ID (the trailing '2>&1' just makes sure anything the container writes to stderr ends up in the file too). Also, it's important to note that Docker can rotate your log files, and you can override those settings in the Docker compose file if you want. In fact, there are containers you can download that do a bunch of crazy awesome stuff with logs, so feel free to explore!
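If you're curious what log settings a container is currently running with before you override anything, docker inspect will show the active driver and options:
# Print the log driver and its options for a container
docker inspect --format '{{.HostConfig.LogConfig}}' <your container id>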
Conclusion
So, Docker is pretty cool, right? If nothing else, it's a good technology to learn. If I missed anything, shoot me a message on YT, Discord, or here. We have one last article, and that will cover generating postdata. See you there!