libre.sh
libre.sh is a platform to manage many instances of different applications at scale.
Use Cases
The use cases directory lists things we try to achieve with libre.sh.
Glossary
- Application: a web application that is usable by an end user (for instance: HedgeDoc, Discourse, …).
- Object Store (S3 API “standard”): an HTTP API to store and retrieve objects.
- PITR: Point-in-Time Recovery.
Personas
Cluster Operator
A Cluster Operator is a System Administrator or Site Reliability Engineer who transforms raw machines (physical or virtual) into a production Kubernetes cluster. This person is typically root on the servers and on the Kubernetes API.
Application Operator
An Application Operator is less technical than a Cluster Operator and doesn't necessarily understand the command-line interface. Through a user interface, this person is able to manipulate the high-level objects that represent an application.
End User
A user who interacts only with an application.
Architecture decision records
Systems
libre.sh runtime
A collection of controllers and services required to deploy application instances.
libre.sh runtime manager
The controller in charge of installing/configuring/upgrading the runtime.
Development
Requirements
- at least 16GB of RAM
- nix-shell installed
- enough file watchers (see below)
- allow unprivileged users to bind ports 80 and 443 (see below)
Quickstart
# To increase the numbers of file watchers:
# https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
sudo sed -i -n -e '/^fs\.inotify\.max_user_watches/!p' -e '$afs.inotify.max_user_watches = 1048576' /etc/sysctl.conf
sudo sed -i -n -e '/^fs\.inotify\.max_user_instances/!p' -e '$afs.inotify.max_user_instances = 1024' /etc/sysctl.conf
# https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/5
sudo sed -i -n -e '/^kernel\.keys\.maxkeys/!p' -e '$akernel.keys.maxkeys = 5000' /etc/sysctl.conf
# Allows unprivileged users to bind ports as low as 80 instead of the standard 1024
# Kind needs to bind ports 80 and 443 when running rootless with podman
sudo sed -i -n -e '/^net\.ipv4\.ip_unprivileged_port_start/!p' -e '$anet.ipv4.ip_unprivileged_port_start = 80' /etc/sysctl.conf
# Immediately apply kernel configuration changes (or reboot if you prefer)
sudo sysctl --system
# Adds records to /etc/hosts to allow us to access our hosted services
# It is required to have s3.dev.local defined to bring up our cluster
sudo sed -i '/dev.local$/d' /etc/hosts
cat << EOF | sudo tee -a /etc/hosts
127.0.0.1 minio.dev.local
127.0.0.1 s3.dev.local
127.0.0.1 keycloak.dev.local
127.0.0.1 nuage.dev.local
127.0.0.1 decidim.dev.local
127.0.0.1 mobilizon.dev.local
127.0.0.1 mailhog.dev.local
127.0.0.1 forgejo.dev.local
EOF
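You can check that the entries resolve as expected:
# Each dev.local host should resolve to 127.0.0.1
getent hosts s3.dev.local minio.dev.local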
Enter the shell:
nix-shell
Create a Kubernetes cluster and start deploying libre.sh using Tilt:
kind create cluster --config kind-config.yaml
tilt up
Ensure rootCA.pem is populated (cat rootCA.pem). If not, relaunch the mkcert Tilt job. Then you can install the certificate in the local trust store:
# After this, Minio Config job should pass
CAROOT=. mkcert -install
If needed, relaunch each job that is still failing; in the end, the lsh operator job should pass.
Then we can start using libre.sh.
# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml
# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml
# Add libre.sh's GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml
# You can check the `repositories` kustomization reconciliation status:
flux -n libresh-system get kustomizations
Then, we can install other tools (observability, etc.). Please note that you shouldn't install ingress-nginx or cert-manager using flux (as stated below) if you provisioned them with Tilt: you may encounter some conflicts.
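Before enabling a component, you can list what flux already manages to spot overlaps:
# HelmReleases and Kustomizations currently reconciled by flux
flux -n libresh-system get helmreleases
flux -n libresh-system get kustomizations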
Cleaning up
When you're done, you can cleanup your environment.
Deleting the cluster:
kind delete cluster --name libresh-dev
Uninstalling CA:
CAROOT=. mkcert -uninstall
Minimal install
kubectl create ns libresh-system
kubectl create -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: cluster-settings
  namespace: libresh-system
type: Opaque
stringData:
  CLUSTER_DOMAIN: my-cluster.my-domain.fr
  CLUSTER_EMAIL: admin@my-domain.fr
  CLUSTER_NAME: my-cluster
  DEFAULT_CLUSTERISSUER: letsencrypt
EOF
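You can verify that the secret was created:
kubectl -n libresh-system get secret cluster-settings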
# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml
# Add libre.sh's GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml
# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml
kubectl create -f ./cluster/priorityclasses/flux-ks.yml
kubectl create -f ./cluster/components/networking/cert-manager/flux-ks.yml # only if not using Tilt
kubectl create -f ./cluster/components/networking/ingress-nginx/flux-ks.yml # only if not using Tilt
kubectl create -f ./cluster/components/databases/postgres-zalando/flux-ks.yaml # only if not using Tilt
Then, you can use libre.sh objects. There are some examples in the config/samples directory.
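For instance, once reviewed and adapted to your cluster, the samples can be applied as a whole (kubectl accepts a directory):
# Review the manifests before applying them
kubectl apply -f ./config/samples/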
Deploy CertManager ClusterIssuer
kubectl apply -f ./cluster/components/networking/cert-manager-issuers/self-signed.yaml
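You can confirm the issuer was registered:
# READY should report True once cert-manager has picked it up
kubectl get clusterissuers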
Deploy MinIO Tenant
# deploy minio operator
cd ./cluster/components/objectstore/minio/
kubectl create -f ./flux-ks.yml
cd tenant-example
cp ./config-example.env ./config.env
vi ./config.env
kubectl -n minio create secret generic --from-file=./config.env prod-storage-configuration
# deploy minio tenant - This part is given as a rough example; read and modify carefully, but you can get the idea
export CLUSTER_DOMAIN=my-cluster.my-domain.fr
envsubst < ./tenant-example.yaml > ./tenant.yaml
vi ./tenant.yaml
kubectl apply -f ./tenant.yaml
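You can then watch the tenant come up; the MinIO operator manages it through a Tenant custom resource:
# Tenant status and pods live in the minio namespace
kubectl -n minio get tenants
kubectl -n minio get pods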
Configure libresh-system
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libresh-config
  namespace: libresh-system
type: Opaque
stringData:
  object-storage.yml: |
    apiVersion: objectstorage.libre.sh/v1alpha1
    kind: ObjectStorageConfig
    mapping:
      data: my-s3
      pitr: my-s3
    providers:
    - name: my-s3
      host: CHANGE_ME
      insecure: false
      accessKey: CHANGE_ME
      secretKey: CHANGE_ME
  mailbox.yml: |
    apiVersion: config.libre.sh/v1alpha1
    kind: MailboxConfig
    spec:
      providers: []
  keycloak.yml: |
    default: ""
    providers: []
EOF
make install
IMG=registry.libre.sh/operator:v1.0.0-alpha.1 make deploy
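You can verify that the CRDs are installed and the operator is running:
kubectl get crds | grep libre.sh
kubectl -n libresh-system get pods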
Deploy observability
We'll deploy components in this order:
- Loki
- Thanos
- Prometheus (with alertmanager)
- Promtail
- Grafana
# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml
# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml
# Add libre.sh' GitRepository ressource
kubectl create -f ./cluster/libresh-cluster.yml
# Create loki admin credentials and install Loki (https://grafana.com/oss/loki/, using flux)
kubectl create secret generic loki.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/urandom | tr -dc '[:alnum:]' | head -c 40)
kubectl apply -f ./cluster/components/observability/loki/flux-ks.yaml
# Install prometheus CRDs, needed by Thanos and Prometheus
kubectl apply -f ./cluster/crds/prometheus-crds/flux-ks.yaml
# Install Thanos (https://thanos.io/, using flux)
kubectl apply -f ./cluster/components/observability/thanos/flux-ks.yaml
# Install Prometheus (https://prometheus.io, via flux)
kubectl apply -f ./cluster/components/observability/prometheus-stack/flux-ks.yaml
# Install promtail (https://grafana.com/docs/loki/latest/send-data/promtail/, via flux)
kubectl apply -f ./cluster/components/observability/promtail/flux-ks.yaml
# Create grafana admin credentials, and install grafana (https://grafana.com/grafana/, via flux)
kubectl create secret generic grafana.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/urandom | tr -dc '[:alnum:]' | head -c 40)
kubectl apply -f ./cluster/components/observability/grafana/flux-ks.yaml
You can use the following command to watch flux reconciliation progress:
flux get kustomizations -n libresh-system
Grafana is available at grafana.my-cluster.my-domain.fr.
Grafana credentials can be found using scripts/lsh-get-secrets.sh
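Alternatively, the password can be decoded straight from the secret created above (assuming your current namespace is the one it was created in):
# Decode the generated grafana admin password
kubectl get secret grafana.admin.creds -o jsonpath='{.data.password}' | base64 -d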
Deploy velero backups
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/helm-charts/main/charts/velero/crds/schedules.yaml
kubectl apply -f ./cluster/components/backups/velero/flux-ks.yaml
# to sync creds of the bucket:
flux -n libresh-system suspend ks velero
flux -n libresh-system resume ks velero
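You can then check that reconciliation succeeded and the backup schedules exist (assuming the chart installs into the velero namespace):
flux -n libresh-system get ks velero
kubectl -n velero get schedules.velero.io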
Upgrade
The Renovate bot runs regularly; it will create MRs against the main branch. Currently a human has to accept them.
Once you are happy with a state of the main branch, you can tag a release.
Then, to update your cluster, you just need to edit the tag in the gitrepository:
kubectl -n libresh-system edit gitrepositories libresh-cluster
This will update all components managed by libre.sh.
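If you prefer a non-interactive update, the same change can be done with a patch; the tag below is only a placeholder:
# Point the GitRepository at the new release tag
kubectl -n libresh-system patch gitrepository libresh-cluster --type merge -p '{"spec":{"ref":{"tag":"v1.0.0"}}}'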
Update libre.sh operator on configured cluster
If you use tilt, a new build of the operator will be triggered and deployed automatically.
If you prefer to go the manual way, you should build a new image (via make docker-build), then install the CRDs (via make install) and deploy (via make deploy):
# Build a docker image which will be deployed in `lsh-operator`
make docker-build && make install && make deploy
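You can then wait for the rollout to finish; the deployment name below is an assumption, so list the actual deployments with kubectl -n libresh-system get deploy first:
# Deployment name assumed to be lsh-operator
kubectl -n libresh-system rollout status deployment/lsh-operator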
Use a local cache for container images
See docs/development/using-local-caches.md.
Add personal customizations/tools to your dev env using nix-shell
You can obtain a shell with libre.sh tooling (shell.nix) plus your personal tools/customizations by using a local.nix file (there is an example: local.nix.example). Using nix-shell local.nix you'll have both libre.sh's and your own tooling/configuration.
Known issues
MountVolume.SetUp failed for volume "config" : secret "libresh-config" not found
The libresh-config secret is created in the libresh-system namespace by a Tilt job named Minio Config.
Ensure the following builds are successful: cert-manager and uncategorized, then trigger Minio Config again.
MinioConfig job loops on "Waiting for minio"
Ensure s3.dev.local is present in your /etc/hosts for the 127.0.0.1 IP address.
Check whether the self-signed root CA certificate has been written (cat rootCA.pem), then install it with CAROOT=. mkcert -install (Tilt creates this certificate from the cert-manager/root-ca-secret secret).