
Introduction

Hi everyone! Setting up a home file server that syncs with family devices and backs up regularly can be done in an evening. All you need are basic Bash and k8s skills, plus some familiarity with the related software. I wrote these steps down for myself and am sharing them here to help others with similar goals.

I’ve only tested these steps on a Raspberry Pi with the default “Raspbian GNU/Linux 11 (bullseye)” OS, but I think they should work on most Linux-based systems with only a few tweaks. Please leave a comment if you have any tips on how to simplify or improve the steps.

Installation

Notes

  1. To make things clearer for your specific setup, I used ${FOO_BAR}-style variables to mark the places where you might need different devices, folder names, etc.
  2. This gist comment has saved me from having to reinstall the OS several times.
  3. I am using the Nextcloud Files software. It is open-source, regularly updated, and supports desktop and mobile clients. It also offers office collaboration features and other addons.
  4. I am using two external USB drives — a smaller one for storing files and a larger one for backups.
  5. I am using this more or less compact case, which fits my Raspberry Pi and both USB drives. Please share a link if you know a smaller case that can hold them plus a fan.
  6. I am using this fan control module to run the fan only when my Raspberry Pi gets too hot.
  7. I am using this USB hub with a power adapter so the external USB drives do not overload the Raspberry Pi's power supply.

Fresh Raspberry Pi

  1. Connect a mouse, a keyboard, and a monitor to the Raspberry Pi. Follow the prompts.
  2. Install Helm.
  3. [Optional] Set up a VNC instead of using peripherals per this article.
  4. [Optional] Set up an SSH connection per this article.
  5. [Optional] Update security settings per this article.

Fan setup via GUI

  1. Follow the fan control module manual.
  2. Open Raspberry > Preferences > Raspberry Pi Configuration > Performance tab
    1. Set Fan = true
    2. Set Fan GPIO = 4
    3. Set Fan Temperature = 60. With this setting the fan turns on when the temperature hits 60C; set a higher threshold if needed.
  3. [Optional] Add CPU temperature gauge per this article with the following colors:
    1. Normal color #163c8c
    2. Warning1 color #d68549
    3. Warning2 color #e3001f
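
If you prefer checking from a terminal, the CPU temperature can be read directly; this is a minimal sketch assuming the standard sysfs thermal zone path on Raspberry Pi OS:

```shell
# The kernel exposes the SoC temperature in millidegrees Celsius.
# Falls back to 0 if the thermal zone path is missing.
temp_milli=$(cat /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo 0)
echo "CPU temperature: $((temp_milli / 1000))C"
```

On Raspberry Pi OS, vcgencmd measure_temp reports the same sensor.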

USB drives

Device names

Get the device names (like /dev/sda1) via lsblk. For example:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
...
sda           8:0    0 465.8G  0 disk
└─sda1        8:1    0 465.8G  0 part /media/500
sdb           8:16   0   1.8T  0 disk
├─sdb1        8:17   0   200M  0 part
└─sdb2        8:18   0   1.8T  0 part /media/2000
...

New variables (you will probably have different values)
${MAIN_USB} = "/dev/sda1"
${BACKUP_USB} = "/dev/sdb2"
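
To paste the later commands verbatim, you can define these as real shell variables first; the device paths below are examples and must match your own lsblk output:

```shell
# Example values only - substitute the partitions from your own lsblk output.
MAIN_USB=/dev/sda1
BACKUP_USB=/dev/sdb2
echo "main: ${MAIN_USB}, backup: ${BACKUP_USB}"
```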

Format USB

I have decided to format USB drives in ext4:

$ sudo umount ${MAIN_USB}
$ sudo mkfs.ext4 ${MAIN_USB}

$ sudo umount ${BACKUP_USB}
$ sudo mkfs.ext4 ${BACKUP_USB}

Mount USB drives

New variables
${BACKUP_USB_MOUNT} - path to the folder with the backup USB.
${BACKUP_USB_UUID} - UUID from the blkid ${BACKUP_USB}.
${MAIN_USB_MOUNT} - path to the folder with the main USB.
${MAIN_USB_UUID} - UUID from the blkid ${MAIN_USB}.
${USB_MOUNT_SCRIPT} - path to file with mount script.

Sometimes USB drives did not mount after a restart, so I took a few extra steps:

Note: First, I tried mounting via fstab:

$ sudo cp /etc/fstab /etc/fstab.backup

Add the following to the /etc/fstab:

UUID="${MAIN_USB_UUID}" ${MAIN_USB_MOUNT} ext4 defaults,nofail,x-systemd.device-timeout=1,noatime  0       0
UUID="${BACKUP_USB_UUID}" ${BACKUP_USB_MOUNT} ext4 defaults,nofail,x-systemd.device-timeout=1,noatime  0       0
$ sudo reboot

But the USB drives often failed to mount during boot. So, inspired by this comment and this article, I used the following approach:

Create a ${USB_MOUNT_SCRIPT} file with the following content:

#!/bin/sh

echo "mounting ${MAIN_USB} to ${MAIN_USB_MOUNT} folder:"
while ! lsblk | grep "${MAIN_USB_MOUNT}"; do
   echo "5 sec break..."; sleep 5
   sudo mount ${MAIN_USB} ${MAIN_USB_MOUNT}
done
echo "Done for the main USB"

echo "mounting ${BACKUP_USB} to ${BACKUP_USB_MOUNT} folder:"
while ! lsblk | grep "${BACKUP_USB_MOUNT}"; do
   echo "5 sec break..."; sleep 5
   sudo mount ${BACKUP_USB} ${BACKUP_USB_MOUNT}
done
echo "Done for the backup USB"

echo "Set 777 permissions for the ${MAIN_USB_MOUNT} folder to avoid NextCloud errors"
sudo chmod -R 777 ${MAIN_USB_MOUNT}
echo "Permissions were set"
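
Before wiring the script into cron, it is worth running it once by hand and confirming both mount points; findmnt exits non-zero when the path is not a mount point:

```shell
# Run the mount script manually, then verify both folders are real mount points.
sudo sh ${USB_MOUNT_SCRIPT}
findmnt ${MAIN_USB_MOUNT} && findmnt ${BACKUP_USB_MOUNT} && echo "both drives mounted"
```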

Update a crontab

$ sudo chmod +x ${USB_MOUNT_SCRIPT}
$ sudo crontab -e

Add the following:

@reboot ${USB_MOUNT_SCRIPT} >/tmp/usb_mount.log 2>&1

Backup

I have decided to use borg for backups.

New variables
${BACKUP_LOG} - path to file with backup logs.
${BACKUP_NAME} - the name of the backup to use in borg.
${BACKUP_PATH} - path to the folder that will contain backups (you should create an empty folder).
${BACKUP_SCRIPT} - path to file with the backup script.
${MAX_BACKUP_SPACE} - how much space to allocate for backups. It should be at least equal to, and preferably several times larger than, the main storage; for example 1.5T when the main storage is 500G.
${NEXTCLOUD_PATH} - path to the folder that will contain NextCloud and all related files (you should create an empty folder).

Set up borg repository

$ touch ${BACKUP_LOG}

$ sudo apt install borgbackup
$ sudo su -

# Setup
$ borg init --storage-quota ${MAX_BACKUP_SPACE} --encryption=none ${BACKUP_PATH} 2> ${BACKUP_LOG}
$ borg create --compression auto,zstd ${BACKUP_PATH}::${BACKUP_NAME}`date +%Y-%m-%d_%H:%M` ${NEXTCLOUD_PATH} 2> ${BACKUP_LOG}

# To restore a backup (archive names include the creation date):
$ borg list ${BACKUP_PATH}  # See available archives
$ cd / && borg extract ${BACKUP_PATH}::${BACKUP_NAME}2022-01-01_12:00  # borg extracts relative to the current directory

$ exit

Note: I have decided to set up backups without encryption, but borg recommends using it. You can find more details here and here.
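
Note that the archive name passed to borg create is ${BACKUP_NAME} with the date appended, so every run creates a unique archive. A sketch of how the names come out (the mybackup value is hypothetical):

```shell
BACKUP_NAME=mybackup  # hypothetical value
ARCHIVE="${BACKUP_NAME}$(date +%Y-%m-%d_%H:%M)"
echo "This run would create: ${ARCHIVE}"
# Inspect a single archive later with: borg list ${BACKUP_PATH}::${ARCHIVE}
```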

Set up periodic backups

Create a ${BACKUP_SCRIPT} file with the following content:

#!/bin/sh

echo "Backup started on `date +%Y-%m-%d_%H:%M`" >> ${BACKUP_LOG} 2>&1
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 ${BACKUP_PATH} >> ${BACKUP_LOG} 2>&1
borg create --compression auto,zstd ${BACKUP_PATH}::${BACKUP_NAME}`date +%Y-%m-%d_%H:%M` ${NEXTCLOUD_PATH} >> ${BACKUP_LOG} 2>&1
echo "Backup finished on `date +%Y-%m-%d_%H:%M`" >> ${BACKUP_LOG} 2>&1

Update the crontab

$ sudo chmod +x ${BACKUP_SCRIPT}
$ sudo crontab -e

Add the following:

0 0 * * * /usr/bin/sh ${BACKUP_SCRIPT}
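
A quick way to confirm the nightly job is actually completing is to look for the "Backup finished" lines the script writes:

```shell
# The log should end with a recent "Backup finished on ..." entry.
tail -n 20 ${BACKUP_LOG} | grep "Backup finished" || echo "WARNING: no finished backup found"
```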

Install the k8s

New variables
${NAMESPACE} - Kubernetes namespace for NextCloud.

I did not plan to learn to install vanilla k8s. So I looked for alternatives:

  1. Minikube does not support mounts with more than 600 files by default (link). That was too few for me.
  2. I got the following error when trying to install MicroK8s and could not fix it:
    # error: snap "microk8s" is not available on 1.21/stable for this architecture (armhf) but exists on
    #        other architectures (amd64, arm64).
    
  3. I did not face any blocking issues with k3s:
    $ curl -sfL https://get.k3s.io | sh -
    

Configure k3s

  1. Add cgroup_memory=1 cgroup_enable=memory at the end of /boot/cmdline.txt, on the same line (the file must stay a single line).
  2. Reboot the Raspberry Pi.
  3. Confirm install was successful:
    $ sudo kubectl get nodes
    # Expect the output similar to
    NAME          STATUS   ROLES                  AGE   VERSION
    raspberrypi   Ready    control-plane,master   56d   v1.25.7+k3s1
    
  4. Add the following to the /etc/rancher/k3s/config.yaml to allow running kubectl without sudo:
    write-kubeconfig-mode: "0644"
    
  5. Reboot the Raspberry Pi.
  6. Confirm the change was successful:
    $ kubectl get nodes
    
  7. Create a namespace for the NextCloud:
    $ kubectl create namespace ${NAMESPACE}
    $ kubectl get namespace
    NAME              STATUS   AGE
    default           Active   56d
    kube-system       Active   56d
    kube-public       Active   56d
    kube-node-lease   Active   56d
    ${NAMESPACE}      Active   48d
    

High CPU Usage

Apply this fix if you see high CPU usage:

$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Storage

You will need to create a PersistentVolume and a PersistentVolumeClaim for NextCloud.

New variables
${PERSISTENT_VOLUME_CLAIM_NAME} - PersistentVolumeClaim name. For example next-cloud-volume-claim.
${PERSISTENT_VOLUME_CLAIM_YAML} - Path to a .yaml file with a PersistentVolumeClaim manifest.
${PERSISTENT_VOLUME_NAME} - PersistentVolume name. For example next-cloud-volume.
${PERSISTENT_VOLUME_YAML} - Path to a .yaml file with a PersistentVolume manifest.
${STORAGE_SIZE} - NextCloud (main) storage size, for example 123Gi. As a reminder, it should be equal to or less than ${MAX_BACKUP_SPACE}.

PersistentVolume

Create a ${PERSISTENT_VOLUME_YAML} file with the following content:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: "${NAMESPACE}"
  name: "${PERSISTENT_VOLUME_NAME}"
  labels:
    type: "local"
spec:
  storageClassName: "manual"
  capacity:
    storage: "${STORAGE_SIZE}"
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "${NEXTCLOUD_PATH}"
---

Apply the manifest:

$ kubectl apply -f ${PERSISTENT_VOLUME_YAML}
$ kubectl get pv

PersistentVolumeClaim

Create a ${PERSISTENT_VOLUME_CLAIM_YAML} file with the following content:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: "${NAMESPACE}"
  name: "${PERSISTENT_VOLUME_CLAIM_NAME}"
spec:
  storageClassName: "manual"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "${STORAGE_SIZE}"
---

Apply the manifest:

$ kubectl apply -f ${PERSISTENT_VOLUME_CLAIM_YAML}
$ kubectl get pvc -n "${NAMESPACE}"

NextCloud

New variables
${ADMIN_NAME} - Nextcloud admin name.
${ADMIN_PASSWORD} - Nextcloud admin password.
${NEXTCLOUD_VALUES_YAML} - Path to .yaml file for NextCloud config values.
${RASPBERRY_PI_IP} - IP of your Raspberry Pi in the local network.

Install the NextCloud

Prepare for the NextCloud installation:

$ helm repo add nextcloud https://nextcloud.github.io/helm/
$ helm show values nextcloud/nextcloud > ${NEXTCLOUD_VALUES_YAML}

Update the following lines in the ${NEXTCLOUD_VALUES_YAML}:

---
...
nextcloud:
  username: ${ADMIN_NAME}
  password: ${ADMIN_PASSWORD}
...
persistence:
  enabled: true
  existingClaim: "${PERSISTENT_VOLUME_CLAIM_NAME}"
  accessMode: ReadWriteMany
  size: "${STORAGE_SIZE}"
...
---

Install the NextCloud:

$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
$ helm install nextcloud nextcloud/nextcloud \
  --namespace ${NAMESPACE} \
  --values ${NEXTCLOUD_VALUES_YAML}

Check that the installation was successful:

$ kubectl get services -n ${NAMESPACE}

Open http://<CLUSTER-IP>:<PORT(S)> using the Raspberry Pi web browser locally or via VNC.
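
The values for that URL can be pulled straight from the service; this is a sketch using kubectl's jsonpath output:

```shell
# Print the local URL of the NextCloud service.
kubectl get service nextcloud -n ${NAMESPACE} \
  -o jsonpath='http://{.spec.clusterIP}:{.spec.ports[0].port}'
```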

Access

Config

Append the Raspberry Pi IP to trusted_domains in ${NEXTCLOUD_PATH}/config/config.php as follows:

...
'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => 'nextcloud.kube.home', # You might have a different default value in this line
    2 => '${RASPBERRY_PI_IP}',
  ),
...
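
If you prefer not to edit config.php by hand, the same change can be made with the occ tool; this is a sketch assuming the chart's default layout, where occ lives at /var/www/html/occ and runs as www-data (get the pod name via kubectl get pods -n ${NAMESPACE}):

```shell
# Index 2 matches the position used in the config.php snippet above.
kubectl exec -n ${NAMESPACE} ${NEXTCLOUD_POD_NAME} -- \
  su -s /bin/sh www-data -c \
  "php /var/www/html/occ config:system:set trusted_domains 2 --value=${RASPBERRY_PI_IP}"
```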

Expose locally

$ kubectl expose service nextcloud \
  --target-port 80 \
  --port 8080 \
  --name nextcloud-exp \
  --type=LoadBalancer \
  -n ${NAMESPACE}

Create Users

  1. Open http://${RASPBERRY_PI_IP}:8080
  2. Login with ${ADMIN_NAME} and ${ADMIN_PASSWORD}
  3. Create non-admin users
  4. Now, at home, you can log in to NextCloud in a browser at http://${RASPBERRY_PI_IP}:8080 or via the supported clients.

Maintenance

New variables
${NEXTCLOUD_POD_NAME} - the NextCloud pod name; get it with kubectl get pods -n ${NAMESPACE}.

I ran into some issues when uploading and deleting many files at once. I fixed them by running a few commands in the NextCloud pod:

$ kubectl exec --stdin --tty ${NEXTCLOUD_POD_NAME} -n ${NAMESPACE} -- bash
$ su -s /bin/bash www-data
$ cd /var/www/html

File Locks

Sometimes I could not view, update, or delete files and/or folders. I fixed this by running:

$ php occ files:scan --all
$ php occ files:cleanup

Deleted Files Errors

Sometimes an error was thrown when I opened the “Deleted files” page. I fixed it by running:

$ php occ trashbin:cleanup --all-users

Other References

This is it. Here are a few more references that I used:
