It is hard to write an introduction for a project like this. I wanted to get familiar with Kubernetes (kubernetes.io), but decided that minikube was not what I wanted.
I will be pleased if you subscribe to my Instagram (instagram.com/uptime.lab) and Twitter (twitter.com/Merocle). I don't post any spam, only my own projects, and quite rarely. Your subscription means a lot to me!
I chose an option with four Raspberry Pi 4 boards with 4 GB of RAM (the 8 GB version had not been announced at the time). This setup had enough power to deploy a full K8s cluster.
Next, I wanted to design a custom cooling system for the Raspberry Pis. Developing the idea, I came up with the following model:
I won't dive deep into the software part of the project. The cluster was assembled at the end of 2019, and by now the setup is quite trivial.
For the first three months, everything ran without any additional cooling. The cluster throttled under heavy load but kept working.
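The article doesn't show how throttling was monitored; on Raspberry Pi OS the firmware reports it via `vcgencmd get_throttled`, which returns a bitmask. As an illustrative sketch (not part of the original setup), here is a small Python helper that decodes that bitmask, with bit meanings taken from the official Raspberry Pi documentation:

```python
# Decode the bitmask printed by `vcgencmd get_throttled` on a Raspberry Pi.
# Bit positions are documented by the Raspberry Pi Foundation.
FLAGS = {
    0: "under-voltage detected",
    1: "ARM frequency capped",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}

def decode_throttled(raw: str) -> list[str]:
    """Parse output like 'throttled=0x50000' into human-readable flags."""
    value = int(raw.split("=")[1], 16)
    return [msg for bit, msg in FLAGS.items() if value & (1 << bit)]

# Example: 0x50000 sets bits 16 and 18.
print(decode_throttled("throttled=0x50000"))
# → ['under-voltage has occurred', 'throttling has occurred']
```

On the cluster itself you would feed it the real output, e.g. `decode_throttled(subprocess.check_output(["vcgencmd", "get_throttled"], text=True))`.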
The cooling was delayed because most of the parts were ordered from AliExpress, and unfortunately the pictures often didn't reflect reality. The hardest part was finding a heat sink that would fit. I wanted a copper heat sink with four 6 mm holes. Talking to sellers led nowhere, as they weren't interested in making one or two custom pieces, and the available options weren't a good fit for my project either.
I gave up, started looking for aluminum options instead, and chose the ones you see in the final product:
Another complication was bending the heat pipes. I knew it wouldn't be easy, so I ordered extras. Before bending, I printed plastic models that reflect all the complex curves. Heat pipes are quite fragile, so I had to order a special bending tool (which had a larger bending radius than expected), and it was used only once for each pipe.
The next step was figuring out how to connect the heat pipe to the CPU. I ordered a few copper heat sinks from AliExpress and machined them with a Dremel so that the pipe sits neatly and doesn't interfere with the PoE HAT. The PoE HAT is non-standard as well; here is the Amazon listing: DSLRKIT Raspberry Pi 3B+ 3B Plus Power Over Ethernet PoE HAT. I chose it for its size and its "open" structure.
At first, I wanted to solder the pipes to the heat sinks, but decided it wasn't worth it: difficult, risky, and hard to disassemble if a Raspberry Pi ever needed replacing. So I ordered ten tubes of thermally conductive glue, and one of them turned out to be enough. From experience, passive cooling alone is enough to keep the system working without overheating. To hold the pipes in place while the glue dried, I used hand tools. It would have been a shame if the glue had turned out brittle or viscous after curing, but it proved a perfect fit.
Then I assembled everything according to the Fusion 360 model.
List of items/materials used:
- 4 pcs. – Raspberry Pi 4 (4 GB)
- 4 pcs. – DSLRKIT Raspberry Pi 3B+ 3B Plus Power Over Ethernet PoE HAT
- 2 pcs. – Aluminum heat sink (aliexpress.com)
- 4 pcs. – Raspberry Pi 4 Model B Copper Heat Sink (aliexpress.com)
- 2 pcs. – Heat pipe 6x240mm (aliexpress.com)
- 2 pcs. – Heat pipe 6x220mm (aliexpress.com)
- 1 pc. – Thermal Pad Heatsink (10 pcs) (aliexpress.com)
- 1 pc. – Tube Bender (aliexpress.com)
- 1 pc. – Thermal Glue
- 1 pc. – M2.5 Threaded Rod
This article was written after my post on Reddit (link), where many people showed interest in the project and asked for details.
At the moment, the cluster sits on my desk and has been running continuously for nine months. It hosts my own application, which I will cover in a future article.
Thanks for your attention and please subscribe to my Instagram page instagram.com/uptime.lab.
The models are available (as is):
Kubernetes Cluster Uptime Labs
P.S. This project took a lot of effort, and I wouldn't say it is 100% finished. The very first model was meant to have kinematics: a motor that would lift the block of four Raspberry Pis above the switch, opening access to the USB-C and Micro HDMI ports. The switch wasn't chosen on the first try either; I tested three different models (thanks to the return policy in Germany), since I wanted a compact model that would let me disable ports and control power consumption. I also wanted a setup that would let me reach the cluster via my personal DNS name from any network with internet access. The Ubiquiti switch was almost perfect, but it was a bit oversized and didn't allow all the ideas without extra configuration. In the end I went with the simplest option that fit the size, and I don't regret it.
If you like what I do, you can always support me with PayPal:
I'm a systems engineer at JetBrains and the founder of Uptime Lab. I'm glad to see you on my website! I hope you find my content useful. Please subscribe to my Instagram and Twitter, where I post the latest updates.
My RPi4 is running a single-node k3s and I plan to add more nodes, where your post comes in very handy. Would you mind sharing the effect of your cooling design on the performance of your RPis, i.e. your cluster?
btw, great work. also your Mark III looks amazing for a home-made cluster. Thanks.
Thank you very much!
It runs without overclocking, at an average of about 50 degrees (low-to-medium load).
Mark II, in a server room with air conditioning (20 degrees), runs overclocked (2 GHz, if I'm not mistaken) at about 60 degrees under very high load. But that's 14 nodes in 2U with active ventilation.