
libre.sh

libre.sh is a platform to manage many instances of different applications at scale.

Use Cases

The use cases directory lists things we try to achieve with libre.sh.

Glossary

  • Application: a web application that is usable by an end user (for instance: HedgeDoc, Discourse, …).
  • Object Store (S3 API “standard”): an HTTP API to store and retrieve objects.
  • PITR: Point-in-Time Recovery.

Personas

Cluster Operator

A Cluster Operator is a System Administrator or Site Reliability Engineer who transforms raw machines (physical or virtual) into a production Kubernetes cluster. This person is typically root on the servers and on the Kubernetes API.

Application Operator

An Application Operator is less technical than a Cluster Operator and doesn't necessarily understand the command-line interface. Through a user interface, this person is able to manipulate the high-level objects that represent an application.

End User

A user who interacts only with an application.

Architecture decision records

Systems

libre.sh runtime

A collection of controllers and services that are required to deploy application instances.

libre.sh runtime manager

The controller in charge of installing/configuring/upgrading the runtime.

Development

Requirements

  • at least 16GB of RAM
  • nix-shell installed
  • enough file watchers (see below)
  • allow binding ports 80 and 443 for non-privileged users (see below)

Quickstart

# To increase the number of file watchers:
# https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
sudo sed -i -n -e '/^fs\.inotify\.max_user_watches/!p' -e '$afs.inotify.max_user_watches = 1048576' /etc/sysctl.conf
sudo sed -i -n -e '/^fs\.inotify\.max_user_instances/!p' -e '$afs.inotify.max_user_instances = 1024' /etc/sysctl.conf

# https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/5
sudo sed -i -n -e '/^kernel\.keys\.maxkeys/!p' -e '$akernel.keys.maxkeys = 5000' /etc/sysctl.conf

# Allow unprivileged users to bind ports as low as 80 instead of the standard 1024
# Kind needs to bind ports 80 and 443 when running rootless with podman
sudo sed -i -n -e '/^net\.ipv4\.ip_unprivileged_port_start/!p' -e '$anet.ipv4.ip_unprivileged_port_start = 80' /etc/sysctl.conf

# Immediately apply the kernel configuration changes (or reboot if you prefer)
sudo sysctl --system
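
# (Optional sanity check) read the values back to confirm they were applied
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances kernel.keys.maxkeys net.ipv4.ip_unprivileged_port_start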

# Add records to /etc/hosts to allow us to access our hosted services
# It is required to have s3.dev.local defined to bring up our cluster
sudo sed -i '/dev.local$/d' /etc/hosts
cat << EOF | sudo tee -a /etc/hosts
127.0.0.1 minio.dev.local
127.0.0.1 s3.dev.local
127.0.0.1 keycloak.dev.local
127.0.0.1 nuage.dev.local
127.0.0.1 decidim.dev.local
127.0.0.1 mobilizon.dev.local
127.0.0.1 mailhog.dev.local
127.0.0.1 forgejo.dev.local
EOF
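
# (Optional) confirm a record resolves locally, e.g.:
getent hosts s3.dev.local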

Enter the shell:

nix-shell

Create a Kubernetes cluster and start deploying libre.sh using Tilt:

kind create cluster --config kind-config.yaml
tilt up
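
While Tilt deploys, you can check that the cluster is up (the libresh-dev cluster name is an assumption taken from kind-config.yaml, matching the delete command below):

kubectl cluster-info --context kind-libresh-dev
kubectl get nodes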

Ensure rootCA.pem is populated (cat rootCA.pem); if not, relaunch the mkcert Tilt job. Then you can install the certificate in the local trust store:

# After this, Minio Config job should pass
CAROOT=. mkcert -install

If needed, relaunch each job that is still failing; in the end, the lsh operator job should pass.

Then we can start using libre.sh.

# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml

# Add the libre.sh GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# You can check the `repositories` kustomization reconciliation status:
flux -n libresh-system get kustomizations

Then, we can install other tools (observability, etc.). Please note that you shouldn't install ingress-nginx or cert-manager using flux (as stated below) if you provisioned them with Tilt: you may encounter conflicts.
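
For example, you can watch all kustomizations until they reconcile, or wait on a specific one (a sketch using the repositories kustomization from the check above):

flux -n libresh-system get kustomizations --watch
kubectl -n libresh-system wait kustomization/repositories --for=condition=Ready --timeout=10m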

Cleaning up

When you're done, you can cleanup your environment.

Deleting the cluster:

kind delete cluster --name libresh-dev

Uninstalling CA:

CAROOT=. mkcert -uninstall

Minimal install

kubectl create ns libresh-system

kubectl create -f -  << EOF
apiVersion: v1
kind: Secret
metadata:
  name: cluster-settings
  namespace: libresh-system
type: Opaque
stringData:
  CLUSTER_DOMAIN: my-cluster.my-domain.fr
  CLUSTER_EMAIL: admin@my-domain.fr
  CLUSTER_NAME: my-cluster
  DEFAULT_CLUSTERISSUER: letsencrypt
EOF

# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add the libre.sh GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml
kubectl create -f ./cluster/priorityclasses/flux-ks.yml
kubectl create -f ./cluster/components/networking/cert-manager/flux-ks.yml      # only if not using Tilt
kubectl create -f ./cluster/components/networking/ingress-nginx/flux-ks.yml     # only if not using Tilt
kubectl create -f ./cluster/components/databases/postgres-zalando/flux-ks.yaml  # only if not using Tilt
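
Before going further, you can verify that the secret and the kustomizations are in place:

kubectl -n libresh-system get secret cluster-settings
flux -n libresh-system get kustomizations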

Then, you can use libre.sh objects. There are some examples in the config/samples directory.

Deploy CertManager ClusterIssuer

kubectl apply -f ./cluster/components/networking/cert-manager-issuers/self-signed.yaml
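
ClusterIssuer is a cluster-scoped resource, so you can confirm it was created (and is ready) with:

kubectl get clusterissuers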

Deploy MinIO Tenant

# deploy minio operator
cd ./cluster/components/objectstore/minio/
kubectl create -f ./flux-ks.yml
cd tenant-example
cp ./config-example.env ./config.env
vi ./config.env
kubectl -n minio create secret generic --from-file=./config.env prod-storage-configuration
# deploy minio tenant - this part is given as a rough example; read and modify carefully, but you get the idea
export CLUSTER_DOMAIN=my-cluster.my-domain.fr
envsubst < ./tenant-example.yaml > ./tenant.yaml
vi ./tenant.yaml
kubectl apply -f ./tenant.yaml
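
You can then watch the tenant come up (the minio namespace matches the secret created above; the tenants resource is served by the MinIO operator CRDs):

kubectl -n minio get tenants
kubectl -n minio get pods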

Configure libresh-system

kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libresh-config
  namespace: libresh-system
type: Opaque
stringData:
  object-storage.yml: |
    apiVersion: objectstorage.libre.sh/v1alpha1
    kind: ObjectStorageConfig
    mapping:
      data: my-s3
      pitr: my-s3
    providers:
    - name: my-s3
      host: CHANGE_ME
      insecure: false
      accessKey: CHANGE_ME
      secretKey: CHANGE_ME
  mailbox.yml: |
    apiVersion: config.libre.sh/v1alpha1
    kind: MailboxConfig
    spec:
      providers: []
  keycloak.yml: |
    default: ""
    providers: []
EOF

make install
IMG=registry.libre.sh/operator:v1.0.0-alpha.1 make deploy
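
The operator should then come up; a quick check (assuming it deploys into the libresh-system namespace, as configured above):

kubectl -n libresh-system get deployments,pods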

Deploy observability

We'll deploy components in this order:

  • Loki
  • Thanos
  • Prometheus (with alertmanager)
  • Promtail
  • Grafana

# Install flux (doc: https://fluxcd.io/flux/installation/)
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml

# Add Helm repositories
kubectl create -f ./cluster/libresh-repositories.yaml

# Add the libre.sh GitRepository resource
kubectl create -f ./cluster/libresh-cluster.yml

# Create loki admin credentials and install Loki (https://grafana.com/oss/loki/, using flux)
kubectl create secret generic loki.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/random | tr -dc '[:alnum:]' | head -c 40)
kubectl apply -f ./cluster/components/observability/loki/flux-ks.yaml

# Install prometheus CRDs, needed by Thanos and Prometheus
kubectl apply -f ./cluster/crds/prometheus-crds/flux-ks.yaml

# Install Thanos (https://thanos.io/, using flux)
kubectl apply -f ./cluster/components/observability/thanos/flux-ks.yaml

# Install Prometheus (https://prometheus.io, via flux)
kubectl apply -f ./cluster/components/observability/prometheus-stack/flux-ks.yaml

# Install promtail (https://grafana.com/docs/loki/latest/send-data/promtail/, via flux)
kubectl apply -f ./cluster/components/observability/promtail/flux-ks.yaml

# Create grafana admin credentials, and install grafana (https://grafana.com/grafana/, via flux)
kubectl create secret generic grafana.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/random | tr -dc '[:alnum:]' | head -c 40)
kubectl apply -f ./cluster/components/observability/grafana/flux-ks.yaml

You can use the following command to watch flux reconciliation progress:

flux get kustomizations -n libresh-system

Grafana is available at grafana.my-cluster.my-domain.fr. Grafana credentials can be found using scripts/lsh-get-secrets.sh.
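
If you prefer raw kubectl over the helper script, the credentials created above can be read back directly (the secret was created in the current namespace by the command above):

kubectl get secret grafana.admin.creds -o jsonpath='{.data.password}' | base64 -d; echo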

Deploy velero backups

kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/helm-charts/main/charts/velero/crds/schedules.yaml
kubectl apply -f ./cluster/components/backups/velero/flux-ks.yaml
# to sync creds of the bucket:
flux -n libresh-system suspend ks velero
flux -n libresh-system resume ks velero
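
You can confirm the velero kustomization reconciled after the resume:

flux -n libresh-system get kustomizations velero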

Upgrade

Renovate bot runs regularly; it will create MRs against the main branch.

Currently a human has to accept them.

Once you are happy with a state of the main branch, you can tag a release.

Then, to update your cluster, you just need to edit the tag in the GitRepository:

kubectl -n libresh-system edit gitrepositories libresh-cluster
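
If you prefer a non-interactive update, the same change can be done with a patch (a sketch; v1.0.1 is a placeholder tag, and spec.ref.tag assumes the GitRepository pins a tag):

kubectl -n libresh-system patch gitrepository libresh-cluster --type merge -p '{"spec":{"ref":{"tag":"v1.0.1"}}}'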

This will update all components managed by libre.sh.

Update libre.sh operator on configured cluster

If you use tilt, a new build of the operator will be triggered and deployed automatically.

If you prefer to go the manual way, you should build a new image (via make docker-build), then install the CRDs (via make install) and deploy (via make deploy):

# Build a Docker image which will be deployed in `lsh-operator`
make docker-build && make install && make deploy
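
As in the Minimal install section, the image reference can be pinned with the IMG variable (the tag shown is only an example):

export IMG=registry.libre.sh/operator:v1.0.0-alpha.1
make docker-build && make install && make deploy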

Use a local cache for container images

See docs/development/using-local-caches.md.

Add personal customizations/tools to your dev env using nix-shell

You can obtain a shell with the libre.sh tooling (shell.nix) plus your personal tools/customizations by using a local.nix file (there is an example: local.nix.example). Using nix-shell local.nix you'll have both libre.sh's and your own tooling/configuration.
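
For instance, starting from the provided example:

cp local.nix.example local.nix
# edit local.nix to add your tools, then:
nix-shell local.nix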

Known issues

MountVolume.SetUp failed for volume "config" : secret "libresh-config" not found

The libresh-config secret is created in the libresh-system namespace by a Tilt job named Minio Config.

Ensure the following builds are successful: cert-manager and uncategorized; then trigger Minio Config.

MinioConfig job loops on "Waiting for minio"

Ensure s3.dev.local is present in your /etc/hosts and points to the 127.0.0.1 IP address.
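
A quick way to check both the hosts file entry and name resolution:

grep 'dev.local' /etc/hosts
getent hosts s3.dev.local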

Check that the self-signed root CA certificate has been written (cat rootCA.pem); then install it with CAROOT=. mkcert -install (Tilt creates this certificate from the cert-manager/root-ca-secret secret).