libre.sh

libre.sh is a platform to manage many instances of different applications at scale.

Use Cases

The use cases directory lists things we try to achieve with libre.sh.

Glossary

  • Application: a web application usable by an end user (for instance HedgeDoc, Discourse, …).
  • Object Store (S3 API “standard”): an HTTP API to store and retrieve objects.
  • PITR: Point-in-Time Recovery.

Personas

Cluster Operator

A Cluster Operator is a System Administrator or Site Reliability Engineer who transforms raw machines (physical or virtual) into a production Kubernetes cluster. This person typically has root access on the servers and on the Kubernetes API.

Application Operator

An Application Operator is less technical than a Cluster Operator and doesn't necessarily understand the command line interface. Through a user interface, this person is able to manipulate high-level objects that represent the application.

End User

A user who interacts only with an application.

Architecture decision records

Systems

libre.sh runtime

A collection of controllers and services that are required to deploy application instances.

libre.sh runtime manager

The controller in charge of installing/configuring/upgrading the runtime.

Development

Requirements

  • nix-shell

Enter the shell

nix-shell

Creating the cluster

kind create cluster --config kind-config.yaml
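
kind-config.yaml in the repository defines the local development cluster. Purely as an illustrative sketch (the actual file may configure more, such as port mappings), a minimal kind config matching the libresh-dev cluster name used further down would look like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: libresh-dev
nodes:
  - role: control-plane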

Running tilt

tilt up

Installing CA

CAROOT=. mkcert -install
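
With CAROOT=., mkcert uses (or creates) the rootCA.pem / rootCA-key.pem pair in the repository root and installs that CA into your local trust stores. To double-check which CA directory is in use:

CAROOT=. mkcert -CAROOT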

Deleting the cluster

kind delete cluster --name libresh-dev

Uninstalling CA

CAROOT=. mkcert -uninstall

Minimal install

kubectl create ns libresh-system

kubectl create -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: cluster-settings
  namespace: libresh-system
type: Opaque
stringData:
  CLUSTER_DOMAIN: my-cluster.my-domain.fr
  CLUSTER_EMAIL: admin@my-domain.fr
  CLUSTER_NAME: my-cluster
  DEFAULT_CLUSTERISSUER: letsencrypt
EOF

kubectl create -f ./cluster/libresh-cluster.yml
kubectl create -f ./cluster/priorityclasses/flux-ks.yml
kubectl create -f ./cluster/components/networking/cert-manager/flux-ks.yml
kubectl create -f ./cluster/components/networking/ingress-nginx/flux-ks.yml
kubectl create -f ./cluster/components/databases/postgres-zalando/flux-ks.yaml
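
The flux-ks files appear to create Flux Kustomizations, so the actual installation happens asynchronously. To watch reconciliation (assuming the Kustomizations live in libresh-system, as the velero commands further down suggest):

flux -n libresh-system get kustomizations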

Deploy CertManager ClusterIssuer

kubectl apply -f ./cluster/components/networking/cert-manager-issuers/self-signed.yaml
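
The self-signed issuer is enough for local development. The cluster-settings Secret above sets DEFAULT_CLUSTERISSUER to letsencrypt; if your cluster does not already have such an issuer, a standard cert-manager ACME ClusterIssuer looks roughly like this (illustrative only, adjust the email and solver to your setup):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@my-domain.fr
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx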

Deploy MinIO Tenant

# deploy minio operator
cd ./cluster/components/objectstore/minio/
kubectl create -f ./flux-ks.yml
cd tenant-example
cp ./config-example.env ./config.env
vi ./config.env
kubectl -n minio create secret generic --from-file=./config.env prod-storage-configuration
# deploy minio tenant - this is only a rough example; read it and modify it carefully for your setup
export CLUSTER_DOMAIN=my-cluster.my-domain.fr
envsubst < ./tenant-example.yaml > ./tenant.yaml
vi ./tenant.yaml
kubectl apply -f ./tenant.yaml
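
To check that the tenant comes up (assuming it is deployed in the minio namespace, like the configuration secret above):

kubectl -n minio get tenants
kubectl -n minio get pods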

Configure libresh-system

kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libresh-config
  namespace: libresh-system
type: Opaque
stringData:
  object-storage.yml: |
    apiVersion: objectstorage.libre.sh/v1alpha1
    kind: ObjectStorageConfig
    mapping:
      data: my-s3
      pitr: my-s3
    providers:
    - name: my-s3
      host: CHANGE_ME
      insecure: false
      accessKey: CHANGE_ME
      secretKey: CHANGE_ME
  mailbox.yml: |
    apiVersion: config.libre.sh/v1alpha1
    kind: MailboxConfig
    spec:
      providers: []
  keycloak.yml: |
    default: ""
    providers: []
EOF

make install
IMG=registry.libre.sh/operator:v1.0.0-alpha.1 make deploy
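
make install installs the CRDs and make deploy rolls out the operator with the given image. To confirm the controller is running (assuming it is deployed into libresh-system like the rest of the system components):

kubectl -n libresh-system get pods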

Deploy observability

We'll deploy components in this order:

  • Loki
  • Thanos
  • Prometheus (with alertmanager)
  • Promtail
  • Grafana

# note: "k" is used below as a shorthand for kubectl

# Loki
kubectl create secret generic loki.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/random | tr -dc '[:alnum:]' | head -c 40)
k apply -f ./cluster/components/observability/loki/flux-ks.yaml

# Thanos
k apply -f ./cluster/components/observability/thanos/flux-ks.yaml

# Prometheus (with alertmanager)
k apply -f ./cluster/crds/prometheus-crds/flux-ks.yaml
k apply -f ./cluster/components/observability/prometheus-stack/flux-ks.yaml

# Promtail
k apply -f ./cluster/components/observability/promtail/flux-ks.yaml

# Grafana
kubectl create secret generic grafana.admin.creds --from-literal=username=admin --from-literal=password=$(cat /dev/random | tr -dc '[:alnum:]' | head -c 40)
k apply -f ./cluster/components/observability/grafana/flux-ks.yaml
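
The Loki and Grafana admin passwords are generated randomly above. To read one back later (from whatever namespace your kubectl context pointed at when the secret was created):

kubectl get secret grafana.admin.creds -o jsonpath='{.data.password}' | base64 -d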

Deploy velero backups

k apply -f https://raw.githubusercontent.com/vmware-tanzu/helm-charts/main/charts/velero/crds/schedules.yaml
k apply -f ./cluster/components/backups/velero/flux-ks.yaml
# to sync creds of the bucket:
flux -n libresh-system suspend ks velero
flux -n libresh-system resume ks velero
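
Backups themselves are driven by Velero Schedule resources. If you want to add a schedule on top of what the component ships, a minimal, purely illustrative example (name, namespace and retention are assumptions):

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 3 * * *"
  template:
    includedNamespaces:
      - "*"
    ttl: 720h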

Upgrade

The Renovate bot runs regularly and creates merge requests against the main branch.

Currently, a human has to accept them.

Once you are happy with the state of the main branch, you can tag a release.

Then, to update your cluster, edit the tag in the GitRepository resource:

kubectl -n libresh-system edit gitrepositories libresh-cluster

This will update all components managed by libre.sh.
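
Concretely, the field to change is spec.ref.tag on the Flux GitRepository; schematically (other fields omitted, the apiVersion depends on your Flux version and the tag value is only an example):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: libresh-cluster
  namespace: libresh-system
spec:
  ref:
    tag: v1.0.0-alpha.1  # set to the release tag you want to roll out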

Update libre.sh operator on configured cluster

make install && make deploy