Helm & Kubernetes
This guide explains how to deploy Rafiki using Helm charts on a Kubernetes cluster. Helm is a package manager for Kubernetes that allows you to define, install, and upgrade complex Kubernetes applications through Helm charts.
Rafiki uses the following key components:
- TigerBeetle: High-performance accounting database used for financial transaction processing and ledger management
- PostgreSQL: Used for storing application data and metadata
- Redis: Used for caching and messaging between components
Before you begin, ensure you have the following:
- Kubernetes cluster deployed
- kubectl installed and configured
- Helm installed
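You can confirm the tooling is in place before proceeding:
# Verify kubectl is installed and can reach the cluster
kubectl cluster-info
# Verify Helm is installed
helm version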
Add the official Interledger Helm repository which contains the Rafiki charts:
helm repo add interledger https://interledger.github.io/charts
helm repo update
Create a values.yaml file to customize your Rafiki deployment.
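The settings you can override are defined by the chart itself; list them with helm show values interledger/rafiki. As a minimal sketch, assuming the chart follows the common pattern of one values section per dependency (the keys below are illustrative, not the chart's authoritative schema):
# values.yaml — illustrative only; run `helm show values interledger/rafiki`
# for the chart's real schema
postgresql:
  auth:
    password: "change-me"   # replace all default passwords in production
redis:
  auth:
    password: "change-me"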
Install Rafiki using the following command:
helm install rafiki interledger/rafiki -f values.yaml
This will deploy all Rafiki components to your Kubernetes cluster with the configuration specified in your values.yaml file.
If you want to install to a specific namespace:
kubectl create namespace rafiki
helm install rafiki interledger/rafiki -f values.yaml -n rafiki
Check the status of your deployment with the following commands:
# Check all resources deployed by Helm
helm status rafiki
# Check the running pods
kubectl get pods
# Check the deployed services
kubectl get services
Configure ingress with NGINX Ingress Controller
To expose Rafiki services outside the cluster using NGINX Ingress Controller:
If you don’t already have NGINX Ingress Controller installed, you can install it using Helm:
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install the ingress-nginx controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true
Wait for the Load Balancer to be provisioned:
kubectl get services -w nginx-ingress-ingress-nginx-controller
Once the Load Balancer has an external IP or hostname assigned, create DNS records:
- auth.example.com pointing to the Load Balancer IP/hostname
- backend.example.com pointing to the Load Balancer IP/hostname
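The hostnames themselves are set in your values.yaml. As a hedged sketch, assuming the chart exposes per-service ingress settings (the key names are assumptions; verify them with helm show values interledger/rafiki):
# Illustrative only — confirm the actual keys in the chart's values schema
auth:
  ingress:
    enabled: true
    host: auth.example.com
backend:
  ingress:
    enabled: true
    host: backend.example.com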
Apply your updated configuration:
helm upgrade rafiki interledger/rafiki -f values.yaml
Check if your ingress resources were created correctly:
kubectl get ingress
You should find entries for the auth server and backend API ingress resources.
If you don’t want to use ingress to access Rafiki services, you can use port forwarding to directly access the services:
| Service | Port-Forward Command |
| --- | --- |
| Auth Server | kubectl port-forward svc/rafiki-auth-server 3000:3000 |
| Backend API | kubectl port-forward svc/rafiki-backend-api 3001:3001 |
| Admin UI | kubectl port-forward svc/rafiki-backend-api 3001:3001 |
| PostgreSQL | kubectl port-forward svc/rafiki-postgresql 5432:5432 |
| Redis | kubectl port-forward svc/rafiki-redis-master 6379:6379 |
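With a port-forward running, the service is reachable on localhost. For example, to spot-check the auth server (using the same /health endpoint referenced in the troubleshooting section below):
# In one terminal: forward the auth server port
kubectl port-forward svc/rafiki-auth-server 3000:3000
# In another terminal: hit the health endpoint
curl http://localhost:3000/health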
To upgrade your Rafiki deployment to a newer version:
# Update the Helm repository
helm repo update
# Upgrade Rafiki
helm upgrade rafiki interledger/rafiki -f values.yaml
To uninstall Rafiki from your cluster:
helm uninstall rafiki
Note that this won’t delete the Persistent Volume Claims (PVCs) created by the PostgreSQL and Redis deployments. If you want to delete them as well:
kubectl delete pvc -l app.kubernetes.io/instance=rafiki
If a component isn’t working correctly, you can check its logs:
# List pods and their status
kubectl get pods
# Check logs for a specific pod
kubectl logs pod/rafiki-auth-server-0
# Get details about a pod
kubectl describe pod/rafiki-auth-server-0
# Check services and their endpoints
kubectl get services
# Check Persistent Volume Claims
kubectl get pvc
# Check ingress resources
kubectl get ingress
- Check if PostgreSQL pods are running:
kubectl get pods -l app.kubernetes.io/name=postgresql
- Check PostgreSQL logs:
kubectl logs pod/rafiki-postgresql-0
- Verify that the database passwords match those in your values.yaml file
- Check TigerBeetle logs:
kubectl logs pod/tigerbeetle-0
- Ensure that the PVC for TigerBeetle has been created correctly:
kubectl get pvc -l app.kubernetes.io/name=tigerbeetle
- Verify that the cluster ID is consistent across all components; one way to check this is sketched below
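A quick way to compare the configured cluster ID across the TigerBeetle pods, assuming the chart passes it in through an environment variable (the name CLUSTER_ID is an assumption; confirm it with kubectl describe pod tigerbeetle-0):
# Print each TigerBeetle pod together with the value of its CLUSTER_ID env var
# (CLUSTER_ID is an assumed variable name — inspect the pod spec to confirm)
kubectl get pods -l app.kubernetes.io/name=tigerbeetle \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="CLUSTER_ID")].value}{"\n"}{end}'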
- Verify NGINX Ingress Controller is running:
kubectl get pods -n ingress-nginx
- Check if your DNS records are correctly pointing to the ingress controller’s external IP
- Check the ingress resource:
kubectl get ingress
- Check ingress controller logs:
kubectl logs -n ingress-nginx deploy/nginx-ingress-ingress-nginx-controller
- Verify that TLS secrets exist if HTTPS is enabled:
kubectl get secrets
- If using cert-manager, check if certificates are properly issued:
kubectl get certificates
- Check certificate status:
kubectl describe certificate [certificate-name]
- Check cert-manager logs:
kubectl logs -n cert-manager deploy/cert-manager
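If cert-manager is not issuing certificates at all, make sure an issuer exists. This generic Let’s Encrypt ClusterIssuer is a common starting point (standard cert-manager configuration, not Rafiki-specific; the issuer name and email are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # replace with your contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx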
- Check if services are running:
kubectl get services
- Verify pod health:
kubectl describe pod [pod-name]
- Check for resource constraints:
kubectl top pods
- Ensure all required services are running:
kubectl get services
- Verify service endpoints:
kubectl get endpoints
- Test connectivity between pods using temporary debugging pods:
kubectl run -it --rm debug --image=busybox -- sh
# Inside the pod
wget -q -O- http://rafiki-auth-server:3000/health
When deploying Rafiki in production, consider the following security practices:
- Use secure passwords: Replace all default passwords with strong, unique passwords
- Enable TLS: Use HTTPS for all external communications
- Implement network policies: Use Kubernetes network policies to restrict traffic between pods (see the sketch after this list)
- Use RBAC: Use Kubernetes Role-Based Access Control to limit access to your cluster
- Use secrets management: Consider using a secrets management solution
- Perform regular updates: Keep your Rafiki deployment updated
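As a minimal sketch of the network-policy recommendation above — the pod labels and port are assumptions based on the service names used in this guide, so verify them with kubectl get pods --show-labels before applying:
# Illustrative NetworkPolicy: only pods belonging to the rafiki release may
# reach the auth server on port 3000. The label selectors are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: rafiki-auth-server-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: auth-server   # assumed label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: rafiki
      ports:
        - protocol: TCP
          port: 3000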
To create a backup of your PostgreSQL database:
# Forward PostgreSQL port to local machine
kubectl port-forward svc/rafiki-postgresql 5432:5432
# Use pg_dump to create a backup
pg_dump -h localhost -U rafiki -d rafiki > rafiki_pg_backup.sql
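If you want recurring backups instead of a one-off dump, the same pg_dump can be scheduled inside the cluster. The following is a hedged sketch: the secret name, secret key, and backup PVC are assumptions, so point them at whatever your deployment actually uses.
# Illustrative CronJob running a nightly pg_dump; secret and PVC names are assumptions
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rafiki-pg-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16     # match your PostgreSQL server version
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: rafiki-postgresql   # assumed secret name
                      key: password             # assumed key
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump -h rafiki-postgresql -U rafiki -d rafiki > /backup/rafiki_$(date +%F).sql
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: rafiki-pg-backups    # assumed pre-existing PVC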
TigerBeetle is designed to be fault tolerant through its replication mechanism. However, to create a backup of TigerBeetle data, you can use the following approach:
# Export the TigerBeetle PVC manifest for reference
kubectl get pvc tigerbeetle-data-tigerbeetle-0 -o yaml > tigerbeetle-pvc.yaml
# Create a volume snapshot
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: tigerbeetle-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: tigerbeetle-data-tigerbeetle-0
EOF
To restore from a PostgreSQL backup:
# Forward PostgreSQL port to local machine
kubectl port-forward svc/rafiki-postgresql 5432:5432
# Use psql to restore from backup
psql -h localhost -U rafiki -d rafiki < rafiki_pg_backup.sql
To restore TigerBeetle from a snapshot:
# Create a new PVC from the snapshot
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tigerbeetle-data-restored
spec:
  dataSource:
    name: tigerbeetle-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
# Update the TigerBeetle StatefulSet to use the restored PVC
kubectl patch statefulset tigerbeetle -p '{"spec":{"template":{"spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"tigerbeetle-data-restored"}}]}}}}'