How to Build a Turing Pi 2 Home Cluster



This content originally appeared on HackerNoon and was authored by Tomas Sirio


Table of Contents

  • The Story
  • The Plan
  • Turing Pi 2
  • The Setup
  • The Flashing
  • The Storage
  • The Kubernetes Cluster
  • The Applications
  • The Final Build
  • The Future
  • The End

The Story

In June 2022, I came across a Kickstarter for a board capable of hosting up to four Raspberry Pi 4 Compute Modules at once.

Without much deliberation, I decided to back the project and purchase the board.

Fast-forward two years, and my Turing Pi 2 was still sitting unopened in its box on a shelf. Life had gotten in the way, and I was unsure why I had bought it in the first place.

[Image: Turing Pi Board]

However, I finally decided to give it a try. I wanted to learn more about clustering and had never built a complete Kubernetes cluster from scratch before. So, I went into spending mode and acquired three Raspberry Pi 4 Compute Modules (8 GB RAM, 8 GB internal storage) and one Nvidia Jetson Nano (4 GB).

[Image: Raspberry Pi 4 CMs]

Given the board's versatility, I could mix different Compute Modules. I decided to include a Jetson Nano, thinking it might allow me to experiment with CUDA drivers in the future and delve into machine learning. Who knows? I might even end up hosting my own GPT assistant on this Kubernetes cluster. (Spoiler: it didn't happen.)

[Image: Jetson Nano]

The Plan

My initial plan was to host the three Pi 4 Compute Modules and the Jetson Nano on the board, along with a 1TB SSD for storage and a Wi-Fi card for internet access. However, after encountering numerous difficulties with the Jetson Nano's setup process and its poor documentation, I decided to return it and opted for a fourth Raspberry Pi 4 instead.

Additionally, I had an old Raspberry Pi 4 with 4GB of RAM lying around, so I decided to incorporate it as a fifth node.

[Image: Old Raspberry]

Turing Pi 2

The Turing Pi 2 is a Mini ITX form factor board that can accommodate up to four Raspberry Pi Compute Modules (it is also compatible with Jetson Nanos and the Turing Compute Module). It features a PCI Express port, two NVMe ports, two SATA ports, and a USB port for flashing the Compute Modules.

[Image: Plan 0]

  • Node 1:
    • USB 2.0 port (for flashing the Compute Modules)
    • HDMI port (for debugging)
    • PCI Express port (for the Wi-Fi card)

  • Node 2:
    • I would have used this one for NVMe storage, but it's not compatible with Raspberry Pi 4s.

  • Node 3:
    • The SATA ports, however, can be used here, so this node hosts the NFS shared drive.

  • Node 4:
    • USB 3.0 ports (if I ever need them).

  • My old Raspberry Pi:
    • The Kubernetes master node. There is no special reason; I just find it easier to reason about my setup this way.

[Image: Plan 2]

Ultimately, the idea is to host a Media Server with some add-ons.

[Image: Flow Chart]

The Setup

It had been a while since I'd put a computer together, and it was my first time playing around with Compute Modules and their adapters, so that made for plenty of weekend fun. Since my wallet was hot but still not burning, I thought, why the hell not add a nice case for it?

[Image: Unboxing 1]

Given the Mini ITX form factor of the board, I could fit it into whatever fancy ITX case I could find on Amazon. The Qube 500 got me through and through: I was already building a DIY cluster, so the best case for such a thing was a DIY one as well.

[Image: Qube 500]

I also added a 650W power supply (total overkill), one small Wi-Fi Mini PCI Express card, and a 1TB SATA SSD.

Putting the 'thing' together was fairly simple: a bit of thermal paste between the Compute Modules and their heat sinks, fastening them to their adapters, and then seating them in order on the Turing board.

[Image: Turing Pi Board 1]

I mention the order because it was a significant part of the project. The Turing Pi 2 distributes the management of its ports across the compute module slots: in this layout, the PCI Express 1 slot is managed by the first node, while the SSD drive is managed by the third. The second node could handle the NVMe port and the fourth the other SATA port, if I recall correctly, but I had no use for them right now.

The Flashing

I've installed Raspberry Pis before but never Compute Modules. The Turing Pi 2 has a USB port in the back which is used for flashing the Compute Modules.

Unfortunately, I tried using a USB-A to USB-A cable that was not a data transfer cable, so while I waited for Amazon to deliver a proper one, I found another way of flashing the Compute Modules.

The Turing Pi 2 has a CLI tool that can be used not only to flash the Compute Modules but also to manage their power, reset them, check some stats, and so on.
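For example, power management looks roughly like this (a sketch from memory rather than a reference; treat the exact subcommands and flags as assumptions and check tpi --help on your BMC firmware version):

tpi power on -n 2     # power on node 2
tpi power off -n 2    # power it off again
tpi power status      # report the power state of the nodes
tpi info              # basic board / BMC information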

The command used for flashing the Compute Modules was:


tpi flash -i /path/to/image -n {nodes 1 to 4}

A pretty straightforward process, I thought to myself, before realizing that the Raspbian image doesn't come with SSH enabled by default.

This, of course, is not Turing's responsibility. I should have waited for that cable, but oh well.

To fix this, I had to mount the image on my local machine and add an empty file named ssh to the boot partition. This enables SSH by default.


sudo mkdir /mnt/pi-boot
sudo mount /dev/sdX1 /mnt/pi-boot

sudo touch /mnt/pi-boot/ssh
sudo umount /mnt/pi-boot
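Depending on the Raspberry Pi OS release, SSH alone may not be enough: images from 2022 onwards no longer ship with a default pi user, so a userconf.txt in the same boot partition may also be needed. A minimal sketch, assuming you want a user called pi with the placeholder password changeme:

# Creates the default user on first boot; the hash comes from openssl passwd -6
echo "pi:$(openssl passwd -6 'changeme')" | sudo tee /mnt/pi-boot/userconf.txt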

Now all my Pis were ready to be used. I connected them to the network and started configuring them. There was little to configure since I was going to use them as Kubernetes nodes.

But things like installing vim and updating the system were necessary.

[Image: Turing 1]

This also gave me the chance to learn how to use tmux, the best tool I've picked up in a while.
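One trick that makes it shine for this kind of setup is synchronized panes: one pane per node, type once, run everywhere. Roughly (standard tmux, nothing cluster-specific):

tmux new -s cluster              # start a named session
# split into panes with Ctrl-b % / Ctrl-b " and ssh into a different node in each
tmux setw synchronize-panes on   # now every keystroke goes to all panes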

The Storage

If you recall a few paragraphs above, I mentioned that the 3rd node would be used for the NFS shared drive. I had the 1TB SSD drive that I was going to use for this purpose; I had to format it and mount it on the 3rd node.
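For the formatting itself, assuming the SSD shows up as /dev/sda and is used as a single unpartitioned ext4 filesystem (which is what the lsblk output further down shows), something like this does the job:

# WARNING: this erases the drive; double-check the device name with lsblk first
sudo mkfs.ext4 /dev/sda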

But I also needed to install the NFS server on this node and configure the clients on the other nodes. Is this recommended for a production environment? Hell no, but it's a home cluster, so I'm not too worried about it.

[Image: Tmux 1]

Here are the steps I took to configure the NFS server:

pi@turing-03:/mnt/ssd/data $ lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda            8:0    0 953.9G  0 disk /mnt/ssd
mmcblk0      179:0    0   7.3G  0 disk
├─mmcblk0p1  179:1    0   512M  0 part /boot/firmware
└─mmcblk0p2  179:2    0   6.8G  0 part /
mmcblk0boot0 179:32   0     4M  1 disk
mmcblk0boot1 179:64   0     4M  1 disk

First, I checked that the drive was detected and mounted correctly. Then I created the mount point and mounted the drive:

sudo mkdir /mnt/ssd
sudo mount /dev/sda /mnt/ssd

Then I added a line to the /etc/fstab file to make sure the drive gets mounted on boot:

echo '/dev/sda /mnt/ssd ext4 defaults 0 0' | sudo tee -a /etc/fstab

Now, installing nfs-kernel-server:

sudo apt update
sudo apt install nfs-kernel-server

And adding my drive to the /etc/exports file:

echo '/mnt/ssd *(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
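After editing /etc/exports, the export has to be applied (or the NFS service restarted) before clients can mount it:

sudo exportfs -ra                         # re-read /etc/exports
sudo systemctl restart nfs-kernel-server  # or simply restart the service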

Now, on the other nodes, I had to install nfs-common:

sudo apt update
sudo apt install nfs-common

And mount the shared drive on each node:

sudo mount -t nfs {IP-for-the-drives-node}:/mnt/ssd /mnt
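To make that mount survive a reboot, an equivalent line can go into each node's /etc/fstab (same placeholder for the NFS server's IP):

echo '{IP-for-the-drives-node}:/mnt/ssd /mnt nfs defaults 0 0' | sudo tee -a /etc/fstab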

[Image: Tmux 0]

Neofetch is installed on all nodes because I'm fancy.

The Kubernetes Cluster

I had never set up a Kubernetes cluster from scratch before, but I've been watching a lot of Jeff Geerling's videos on the subject… This is experience enough, right?

Jeff led me to K3s, a lightweight Kubernetes distribution that is perfect for my home cluster, installed via Ansible as a pre-defined way of setting it up, since I had no special requirements and no real idea of how to do it otherwise.

The installation was pretty straightforward. I had to install it on all nodes, making sure the master node was the first one to be set up.

So first, I cloned the k3s-ansible repository:

git clone https://github.com/k3s-io/k3s-ansible.git

Then I had to configure the inventory file. My master node, as I mentioned before, is my old Raspberry Pi 4, so I had to make sure it was the host under the server group, with the other nodes listed as agents:

k3s_cluster:
  children:
    server:
      hosts:
        192.168.2.105:
    agent:
      hosts:
        192.168.2.101:
        192.168.2.102:
        192.168.2.103:
        192.168.2.104:

In that same file, I had to set up an encryption token. The file indicates how to do this, so I won't go into details here.

Then I had to run the playbook:

cd k3s-ansible
ansible-playbook playbooks/site.yml -i inventory.yml

That's it. As far as the installation goes, I had a Kubernetes cluster up and running. I then installed K9s on my local machine to manage the cluster and pointed it at the cluster through my ~/.kube/config file.
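Binding it is roughly a matter of copying the kubeconfig over, assuming the default K3s location on the master (/etc/rancher/k3s/k3s.yaml), the pi user, and the master's IP from the inventory above:

# Copy the kubeconfig from the master node to the local machine
ssh pi@192.168.2.105 sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
# Point kubectl/K9s at the master instead of localhost
sed -i 's/127.0.0.1/192.168.2.105/' ~/.kube/config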

The Applications

Lastly, I had to install the applications I wanted to run in the cluster. I had some ideas on what I wanted:

  • A Media Server with scheduled downloads.

  • A Pi-hole instance to act as my network's DNS and block ads on every device at home.

  • A RetroArch instance to play some old games and share the save files across my home network (looking at you, Mega Man Battle Network 6, on all my devices).

That's where my repository comes in.

[Image: Repository 0]

For the Media Server, I decided to use a handful of applications; the full list lives in the repository above. As an example, I'll show you how I installed Sonarr using kubectl. The other applications were installed in a similar fashion.

For each application, I created three files:

  • deployment.yaml is the configuration for each of the pods running the application
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: sonarr
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: sonarr
    template:
      metadata:
        labels:
          app: sonarr
      spec:
        containers:
        - name: sonarr
          image: linuxserver/sonarr
          ports:
          - containerPort: 8989
          env:
          - name: PUID
            value: "911"
          - name: PGID
            value: "911"
          - name: TZ
            value: "Europe/Amsterdam"
          volumeMounts:
          - mountPath: /data
            name: data
          - name: config
            mountPath: /config
        volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc
        - name: config
          persistentVolumeClaim:
            claimName: nfs-config-pvc


  • service.yaml is the configuration for the service that will expose the application to the cluster
  apiVersion: v1
  kind: Service
  metadata:
    name: sonarr
  spec:
    selector:
      app: sonarr
    ports:
      - port: 80
        targetPort: 8989
    type: ClusterIP
  • ingress.yaml is the configuration for the ingress that exposes the application to my home network (a sketch is shown below).
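A minimal sketch of such an ingress, assuming the Traefik ingress controller that K3s ships by default and a made-up hostname for my network:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonarr
spec:
  ingressClassName: traefik      # K3s ships Traefik by default
  rules:
  - host: sonarr.home.lan        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sonarr
            port:
              number: 80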

Then we deploy all of them using kubectl:

kubectl apply -f sonarr/deployment.yaml
kubectl apply -f sonarr/service.yaml
kubectl apply -f sonarr/ingress.yaml
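To check that everything came up, the usual kubectl commands do the trick:

kubectl get pods,svc,ingress    # everything in the default namespace
kubectl logs deploy/sonarr      # logs for a specific application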

As you can see, I'm using NFS-backed persistent storage for the data and the configuration of the applications.

In the repository, you can find the nfs-pv.yaml and nfs-pvc.yaml files that I used to create the NFS storage.
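As a rough sketch of what that pair looks like, assuming a hypothetical IP for node 3 (the NFS server) and the /mnt/ssd export from earlier (the real files are in the repository):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 900Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.2.103   # assumption: node 3's address
    path: /mnt/ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: nfs-pv
  resources:
    requests:
      storage: 900Gi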

Additionally, I created another persistent volume claim for the configuration of the applications.

[Image: K9s 0]

The Final Build

Even though the case looks amazing, it's a bit too big for a Raspberry Pi cluster. A smaller Mini ITX case would have suited my needs as well, but I have to admit, I'm a sucker for DIY stuff.

[Image: Full Build 0]

I'm also a sucker for LEDs in general. I didn't add any more lights to the case, but I think the board does a nice job on its own. Unfortunately, the fan pins were not compatible with the board, and I didn't buy a fan controller or an adapter pin for the motherboard. I might in the future.

[Image: Working 0]

Sometimes, you just have to sit back and enjoy the view.

[Image: Full Build 1]

And finally, the Turing Pi 2 Home Cluster is up and running, and my house is not a mess anymore.

[Image: Full Build 2]

The Future

Only time will tell what I'll do with this cluster.

However, I've been thinking of adding Prometheus and Grafana to have some metrics and nice graphs to check on the cluster.

Migrating all my Kubernetes files to Helm would be a good idea as well.

Lastly, the RetroArch instance is still in the works. Maybe 'in the works' is a bit too optimistic, given that the pod lives in a CrashLoopBackOff state. But I'll get there.

The End

If you've reached the end of this post, thank you for your time. I hope you've enjoyed it as much as I did, both putting the cluster together and writing about it.

