Home Lab
By Adrian Sutton
One of the downsides of moving on from working on the Ethereum consensus layer is that you often need a real execution node synced, and execution clients don’t have the near-instantaneous checkpoint sync that consensus clients do. So recently I bit the bullet and built a custom PC to run a whole bunch of different Ethereum chains on. I’m really quite happy with the result.
There’s actually a really good variety of public endpoints available for loads of Ethereum-based chains these days, so while running your own is maximally decentralised, it’s not just a choice between Infura or your own node now. Public Node provides very good free JSON-RPC and consensus APIs. Alchemy and QuickNode both have quite usable free tiers too. The downside with all of them though is that their servers are in the Americas or Europe, and that’s a whole lot of latency away from Australia. When you’re syncing L2 nodes, or particularly when running fault proof systems, you wind up making a lot of requests and that latency becomes very painful very quickly. More than anything, it was wanting to avoid that latency that drove me to run my own nodes locally.
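To put rough numbers on it: at ~250ms round trip from Australia to a US server, a sync that makes 100,000 sequential RPC requests spends around 250ms × 100,000 ≈ 7 hours just waiting on the network. Against a node on the local network at ~1ms, those same requests cost under two minutes.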
To be useful though, I really want it to run quite a few different chains. Currently it’s running:
- Ethereum MainNet
- Ethereum Sepolia
- OP Mainnet
- OP Sepolia
- Base Mainnet
- Base Sepolia
I’m quite tempted to add a Holesky node just so I can run some validators again - it’s a shame most of the L2 stacks and apps use Sepolia, since it has a locked-down validator set.
Hardware-wise, running this many nodes is primarily about disk space, so I wound up with an MSI Pro Z790-P motherboard which has a rather ridiculous number of ports you can plug SSDs into - not all at full speed, but plenty at fast enough speeds. It’s been nearly 20 years since I built a custom PC, so there are likely a bunch of things that aren’t the perfect trade-offs, but I’m quite happy with the overall result. One mistake which I’m actually happy about: I misread the case size names and wound up with a much larger case than I expected. That does give it capacity to shove a heap of spinning-rust drives into for things like historic data that doesn’t need the fast disk. It’s got an Intel Core i7 CPU which is barely being used. I had wanted 128GB of RAM since Ethereum nodes do like to cache stuff, but apparently using 4 sticks of RAM can cause instability, so I’ve stuck to 64GB for now. It seems to be plenty so far, but is probably the main limiting factor. For disk it currently has two 4TB NVMe drives.
For software, the L1 consensus nodes are obviously all Teku, and they’re doing great. The team has done a great job continuing to improve things since I left, so even with the significant growth in the validator set, it’s running very happily with less memory and CPU than it needed “back in my day”. The L1 Mainnet execution client is a reth archive node, which has been quite successful. I did try a reth node for Sepolia but hit a few issues (which I think have now been fixed), so I’ve wound up running executionbackup with both geth and reth behind it for Sepolia.
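As a rough sketch of what one of those L1 pairings looks like in compose form (image tags, paths and flags here are illustrative rather than my exact config):

```yaml
# Illustrative sketch only - images, volume paths and flags are
# assumptions, not the exact config from this setup.
services:
  reth:
    image: ghcr.io/paradigmxyz/reth:latest
    command: >
      node --chain mainnet
      --http --http.addr 0.0.0.0
      --authrpc.addr 0.0.0.0 --authrpc.jwtsecret /jwt/jwt.hex
    volumes:
      - ./reth-data:/root/.local/share/reth
      - ./jwt:/jwt
  teku:
    image: consensys/teku:latest
    command: >
      --network=mainnet
      --data-path=/data
      --ee-endpoint=http://reth:8551
      --ee-jwt-secret-file=/jwt/jwt.hex
      --checkpoint-sync-url=https://beaconstate.info
    volumes:
      - ./teku-data:/data
      - ./jwt:/jwt
```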
The L2 nodes are all op-node and op-geth - it’s always good to actually run the software I’m helping build. For OP Sepolia, I’m also running op-dispute-mon and op-challenger, both to monitor the fault proof system and to participate in games to ensure correct outcomes. I really do like that OP fault proofs are fully permissionless, so anyone can participate in the process just like my home lab now does.
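Wiring-wise, the rollup node just gets pointed at the local L1 endpoints instead of a remote provider - roughly like this (service names and the image tag are illustrative):

```yaml
# Illustrative sketch only - the interesting part is that --l1 and
# --l1.beacon point at the local L1 nodes, not a hosted provider.
services:
  op-node:
    image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:latest
    command: >
      op-node
      --network=op-sepolia
      --l1=http://l1-geth:8545
      --l1.beacon=http://l1-teku:5052
      --l2=http://op-geth:8551
      --l2.jwt-secret=/jwt/jwt.hex
    volumes:
      - ./jwt:/jwt
```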
For coordination, everything is running in docker via docker-compose, which made it much easier to avoid all the port conflicts that would otherwise occur. Each network has its own docker-compose file, though there are a bunch of docker networks shared between chains so the L2s can connect to the L1s and everything can connect to metrics. All the compose files and other config live in a local git repo with a hook set up to automatically apply any changes, so I’ve wound up with a home-grown gitops kind of setup. I did try using k8s with ArgoCD to “do it properly” at one point, but it just made everything far more complex and less reliable, so I switched back to simple docker compose.
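The shared networks are plain docker-compose: the cross-cutting ones are created once with docker network create and declared as external in each compose file. Roughly (names invented for illustration):

```yaml
# In an L2 compose file - "l1-sepolia" and "metrics" are created
# externally (docker network create ...) and shared across stacks.
services:
  op-node:
    networks:
      - default
      - l1-sepolia   # reach the L1 nodes by service name
      - metrics      # let the metrics stack scrape this container
networks:
  l1-sepolia:
    external: true
  metrics:
    external: true
```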
For monitoring, I’ve got VictoriaMetrics capturing metrics and Loki capturing logs - both automatically pick up any new containers. Then there’s a Grafana instance to visualise it all. I even went as far as running ethereum-metrics-exporter to give a unified view of metrics across the different clients.
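The automatic pickup is just Docker service discovery on the scrape side. Roughly, in a vmagent/Prometheus-style scrape config (the opt-in label is invented for illustration):

```yaml
# Illustrative sketch only - label names and conventions are assumptions.
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      # Only scrape containers that opt in via a label
      - source_labels: [__meta_docker_container_label_metrics_enabled]
        regex: "true"
        action: keep
      - source_labels: [__meta_docker_container_name]
        target_label: container
```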
The final piece is an nginx instance that exposes all the different RPC endpoints at easy-to-remember URLs, i.e. /eth/mainnet/el, /eth/mainnet/cl, /op/mainnet/el etc. All the web UIs for the other services like Grafana are exposed through the same nginx instance. My initial build exposed all the RPCs on different ports and it was a nightmare trying to remember which chain was on which port, so the friendly URLs have been a big win.
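Under the hood that’s just path-based reverse proxying - something like this, with illustrative service names and ports:

```nginx
# Illustrative only - upstream names and ports are assumptions.
server {
    listen 80;

    location /eth/mainnet/el/ {
        proxy_pass http://reth:8545/;
    }
    location /eth/mainnet/cl/ {
        proxy_pass http://teku:5052/;
    }
    location /op/mainnet/el/ {
        proxy_pass http://op-geth-mainnet:8545/;
    }
    location /grafana/ {
        proxy_pass http://grafana:3000/;
    }
}
```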
Overall I’m really very happy with the setup, and it is lightning fast even for quite expensive queries like listing every dispute game ever created. Plus it was fun to play with some “from scratch” sysadmin work again, instead of doing everything in the cloud with already-existing templates and services set up.
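For a taste of that, the dispute game enumeration boils down to something like this with Foundry’s cast (hypothetical sketch - the factory address is a placeholder, and the factory lives on L1 so it’s the L1 endpoint that gets hammered):

```sh
# Hypothetical sketch: enumerate dispute games via the DisputeGameFactory.
# FACTORY is a placeholder - look up the address for the chain in question.
FACTORY=0x...
RPC=http://homelab/eth/mainnet/el

COUNT=$(cast call $FACTORY "gameCount()(uint256)" --rpc-url $RPC)
for i in $(seq 0 $((COUNT - 1))); do
  cast call $FACTORY "gameAtIndex(uint256)(uint32,uint64,address)" $i --rpc-url $RPC
done
```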