Welcome to Knowledge Base!

KB at your fingertips

This is a one-stop global knowledge base where you can learn about all the products, solutions, and support features.

Configure the GitLab chart with persistent volumes | GitLab





  • Locate the GitLab Volumes
  • Before making storage changes
  • Making storage changes
    • Changes to an existing Volume
      • Update the volume to bind to the claim
    • Switching to a different Volume
  • Make changes to the PersistentVolumeClaim
  • Apply the changes to the GitLab chart

Configure the GitLab chart with persistent volumes

Some of the included services require persistent storage, configured through
Persistent Volumes that specify which disks your cluster has access to.
Documentation on the storage configuration necessary to install this chart can be found in our
Storage Guide.

Storage changes after installation need to be manually handled by your cluster
administrators. Automated management of these volumes after installation is not
handled by the GitLab chart.

Examples of changes not automatically managed after initial installation
include:


  • Mounting different volumes to the Pods
  • Changing the effective accessModes or Storage Class
  • Expanding the storage size of your volume*

* In Kubernetes 1.11, expanding the storage size of your volume is supported
if you have allowVolumeExpansion configured to true in your Storage Class.
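
For reference, volume expansion is only honored when the StorageClass that provisioned the volume allows it. The following is a minimal sketch, using the GCE PD provisioner that appears in the examples later in this document:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
# Allows PersistentVolumeClaims created from this class to be expanded later.
allowVolumeExpansion: true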

Automating these changes is complicated because:


  1. Kubernetes does not allow changes to most fields in an existing PersistentVolumeClaim
  2. Unless manually configured, the PVC is the only reference to dynamically provisioned PersistentVolumes

  3. Delete is the default reclaimPolicy for dynamically provisioned PersistentVolumes

This means that in order to make changes, we need to delete the PersistentVolumeClaim
and create a new one with our changes. But due to the default reclaimPolicy,
deleting the PersistentVolumeClaim may also delete the PersistentVolume
and its underlying disk. And unless the chart is configured with appropriate volumeNames and/or
labelSelectors, it doesn’t know which volume to attach to.

We will continue to look into making this process easier, but for now a manual
process needs to be followed to make changes to your storage.

Locate the GitLab Volumes

Find the volumes/claims that are being used:

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command prints a list of the volume names, followed by the name of the
service they are for.

For example:

$ kubectl --namespace helm-charts-win get PersistentVolumeClaims -l release=review-update-app-h8qogp -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'
pvc-6247502b-8c2d-11e8-8267-42010a9a0113 gitaly
pvc-61bbc05e-8c2d-11e8-8267-42010a9a0113 minio
pvc-61bc6069-8c2d-11e8-8267-42010a9a0113 postgresql
pvc-61bcd6d2-8c2d-11e8-8267-42010a9a0113 prometheus
pvc-61bdf136-8c2d-11e8-8267-42010a9a0113 redis

Before making storage changes

The person making the changes needs to have administrator access to the cluster, and appropriate access to the storage
solutions being used. Often the changes will first need to be applied in the storage solution, then the results need to
be updated in Kubernetes.

Before making changes, you should ensure your PersistentVolumes are using
the Retain reclaimPolicy so they don’t get removed while you are
making changes.

First, find the volumes/claims that are being used.

Next, edit each volume and change the value of persistentVolumeReclaimPolicy
under the spec field from Delete to Retain.

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  # Changed the following line
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound
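
If you prefer not to edit interactively, the same change can be applied with a patch. A sketch, using the example volume name above:

kubectl --namespace helm-charts-win patch PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'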

Making storage changes

First, make the desired changes to the disk outside of the cluster (for example, resize the
disk in GKE, or create a new disk from a snapshot or clone).

How you do this, and whether or not it can be done live, without downtime, is
dependent on the storage solutions you are using, and can’t be covered by this
document.
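
For example, on GKE a disk resize might look like the following. This is a sketch only: the disk name and zone are taken from the examples in this document, and other storage providers have their own tooling.

gcloud compute disks resize gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113 \
  --size=100GB --zone=europe-west2-b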

Next, evaluate whether you need these changes to be reflected in the Kubernetes
objects. For example: with expanding the disk storage size, the storage size
settings in the PersistentVolumeClaim will only be used when a new volume
resource is requested. So you would only need to increase the values in the
PersistentVolumeClaim if you intend to scale up more disks (for use in
additional Gitaly pods).

If you do need to have the changes reflected in Kubernetes, be sure that you’ve
updated your reclaim policy on the volumes as described in the Before making storage changes
section.

The paths we have documented for storage changes are:


  • Changes to an existing Volume
  • Switching to a different Volume

Changes to an existing Volume

First locate the volume name you are changing.

Use kubectl edit to make the desired configuration changes to the volume. (These changes
should only be updates that reflect the real state of the attached disk.)

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    # Updated the storage size
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound

Now that the changes have been reflected in the volume, we need to update
the claim.

Follow the instructions in the Make changes to the PersistentVolumeClaim section.

Update the volume to bind to the claim

In a separate terminal, start watching to see when the claim has its status change to bound,
and then move onto the next step to make the volume available for use in the new claim.

kubectl --namespace <namespace> get --watch PersistentVolumeClaim <claim name>

Edit the volume to make it available to the new claim. Remove the .spec.claimRef section.

kubectl --namespace <namespace> edit PersistentVolume <volume name>

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Released
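
Removing the claimRef can also be done non-interactively with a JSON patch. A sketch, using the example volume name above:

kubectl --namespace helm-charts-win patch PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113 \
  --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'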

Shortly after making the change to the Volume, the terminal watching the claim status should show Bound.

Finally, apply the changes to the GitLab chart.

Switching to a different Volume

If you want to switch to using a new volume, using a disk that has a copy of the
appropriate data from the old volume, then first you need to create the new
Persistent Volume in Kubernetes.

In order to create a Persistent Volume for your disk, you will need to
locate the driver-specific documentation
for your storage type.

There are a couple of things to keep in mind when following the driver documentation:


  • You need to use the driver to create a Persistent Volume, not a Pod object with a volume, as shown in much of that documentation.
  • You do not want to create a PersistentVolumeClaim for the volume; we will edit the existing claim instead.

The driver documentation often includes examples for using the driver in a Pod, for example:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4

What you actually want is to create a Persistent Volume, like so:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

You normally create a local yaml file with the PersistentVolume information,
then issue a create command to Kubernetes to create the object using the file.

kubectl --namespace <your namespace> create -f <local-pv-file>.yaml

Once your volume is created, you can move on to Make changes to the PersistentVolumeClaim.

Make changes to the PersistentVolumeClaim

Find the PersistentVolumeClaim you want to change.

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command will print a list of the PersistentVolumeClaim names, followed by the name of the
service they are for.

Then save a copy of the claim to your local filesystem:

kubectl --namespace <namespace> get PersistentVolumeClaim <claim name> -o yaml > <claim name>.bak.yaml

Example Output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:38Z
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
  resourceVersion: "48362433"
  selfLink: /api/v1/namespaces/helm-charts-win/persistentvolumeclaims/repo-data-review-update-app-h8qogp-gitaly-0
  uid: 6247502b-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

Create a new YAML file for a new PVC object. Use the same metadata.name, metadata.labels, metadata.namespace, and spec fields (with your updates applied), and drop the other settings:

Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # This is our updated field
      storage: 100Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Now delete the old claim:

kubectl --namespace <namespace> delete PersistentVolumeClaim <claim name>

Create the new claim:

kubectl --namespace <namespace> create -f <new claim yaml file>

If you are binding to the same PersistentVolume that was previously bound to
the claim, then proceed to update the volume to bind to the claim.

Otherwise, if you have bound the claim to a new volume, move on to apply the changes to the GitLab chart.

Apply the changes to the GitLab chart

After making changes to the PersistentVolumes and PersistentVolumeClaims,
you will also want to issue a Helm update with the changes applied to the chart
settings.

See the installation storage guide
for the options.


Note: If you made changes to the Gitaly volume claim, you will need to delete the
Gitaly StatefulSet before you can issue a Helm update, because the StatefulSet’s
volume template is immutable and cannot be changed.

You can delete the StatefulSet without deleting the Gitaly Pods:
kubectl --namespace <namespace> delete --cascade=false StatefulSet <release-name>-gitaly
The Helm update command will recreate the StatefulSet, which will adopt and
update the Gitaly pods.

Update the chart, and include the updated configuration:

Example:

helm upgrade --install review-update-app-h8qogp gitlab/gitlab \
--set gitlab.gitaly.persistence.size=100Gi \
<your other config settings>

Configure the GitLab chart with UBI-based images | GitLab





  • Sample values
  • Known Limitations

Configure the GitLab chart with UBI-based images

GitLab offers Red Hat UBI
versions of its images, allowing you to replace the standard images with UBI-based
images. These images use the same tag as the standard images, with a -ubi8 suffix.

The GitLab chart uses third-party images that are not based on UBI. These images
mostly provide external services to GitLab, such as Redis, PostgreSQL, and so on.
If you wish to deploy a GitLab instance that is based purely on UBI, you must
disable the internal services and use external deployments or services.

The services that must be disabled and provided externally are:


  • PostgreSQL
  • MinIO (Object Store)
  • Redis

The services that must be disabled are:


  • CertManager (Let’s Encrypt integration)
  • Prometheus
  • Grafana
  • GitLab Runner

Sample values

We provide example GitLab chart values in examples/ubi/values.yaml,
which can help you build a pure UBI GitLab deployment.
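
For orientation, a pure UBI deployment combines the UBI image tags with the bundled services disabled. The snippet below is a sketch only: the key names are illustrative assumptions, and examples/ubi/values.yaml remains the authoritative reference.

# Sketch only: refer to examples/ubi/values.yaml for the exact, supported values.
global:
  image:
    tagSuffix: -ubi8     # assumption: UBI images reuse the standard tag plus -ubi8
  minio:
    enabled: false       # object storage provided externally
postgresql:
  install: false         # PostgreSQL provided externally
redis:
  install: false         # Redis provided externally
certmanager:
  install: false
prometheus:
  install: false
gitlab-runner:
  install: false
# Grafana also needs to be disabled; see the sample values file for the exact key.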

Known Limitations


  • Currently there is no UBI version of GitLab Runner, so we disable it.
    However, that does not prevent you from attaching your own runner to your UBI-based
    GitLab deployment.
  • GitLab relies on the official Docker Registry image, which is based on alpine.
    At the moment we do not maintain or release a UBI-based version of the Registry. Since
    this functionality is crucial, we do not disable this service.
Architecture | GitLab






  • Docker Container Images

    • GitLab Docker Images
    • Official Docker Images

  • The GitLab chart

    • Structure of these charts
    • Components list
  • Design Decisions

Architecture

We plan to support three tiers of components:


  1. Docker Containers
  2. Scheduler (Kubernetes)
  3. Higher level configuration tool (Helm)

The main method customers would use to install would be the Helm chart in this repository.
At some point in the future, we may also offer other deployment methods like
Amazon CloudFormation or Docker Swarm.

Docker Container Images

As a foundation, we will be creating a Docker container for each service.
This will allow easier horizontal scaling with reduced image size and complexity.
Configuration should be passed in a standard way for Docker, perhaps environment
variables or a mounted file. This provides a clean common interface with the
scheduler software.

GitLab Docker Images

The GitLab application is built using Docker images that contain GitLab
specific services. The build environments for these images can be found in
the CNG repository.

The following GitLab components have images in the CNG repository.


  • Gitaly
  • GitLab Elasticsearch Indexer
  • mail_room
  • GitLab Exporter
  • GitLab Shell
  • Sidekiq
  • GitLab Toolbox
  • Webservice
  • Workhorse

The following are forked charts which also use GitLab-specific Docker images.

Docker images that are used for initContainers and various Jobs:


  • alpine-certificates
  • kubectl

Official Docker Images

We leverage the following existing official containers for
underlying services:


  • Docker Distribution (Docker Registry 2.0)
  • Prometheus
  • NGINX Ingress
  • cert-manager
  • Redis
  • PostgreSQL
  • Grafana

The GitLab chart

This is the top level GitLab chart (gitlab), which configures all necessary resources
for a complete configuration of GitLab. This includes GitLab, PostgreSQL, Redis,
Ingress, and certificate management charts.

At this high level, a customer can make decisions like:


  • Whether they want to use the embedded PostgreSQL chart, or to use an external
    database like Amazon RDS for PostgreSQL.
  • To bring their own SSL certificates, or leverage Let’s Encrypt.
  • To use a load balancer, or a dedicated Ingress.

Customers who would like to get started quickly and easily should begin with this chart.
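
As an illustration of the first decision, using an external database generally means disabling the bundled PostgreSQL chart and pointing the global settings at your own instance. The following is a sketch; the host and Secret names are placeholders:

postgresql:
  install: false                        # do not deploy the bundled PostgreSQL chart
global:
  psql:
    host: my-rds-instance.example.com   # placeholder: your external database host
    port: 5432
    password:
      secret: gitlab-postgres-password  # placeholder: pre-created Kubernetes Secret
      key: password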

Structure of these charts

The main GitLab chart is an umbrella chart, made up of many other charts. Each sub-chart is
documented individually, and laid out in a structure that matches the
charts directory structure.

Non-GitLab components are packaged and documented on the top level. GitLab
component services are documented under the GitLab chart:


  • NGINX
  • MinIO
  • Registry
  • GitLab/Gitaly
  • GitLab/GitLab Exporter
  • GitLab/GitLab Grafana
  • GitLab/GitLab Shell
  • GitLab/Migrations
  • GitLab/Sidekiq
  • GitLab/Webservice

Components list

A list of which components are deployed when using the chart, and configuration instructions if needed,
is available on the architecture components list page.

Design Decisions

Documentation of the decisions made regarding the architecture of these charts can
be found in the Design Decisions documentation.

Backup and restore | GitLab





  • Toolbox pod
  • Backup utility
    • Backups
      • Sequence of execution
      • Command line arguments
      • GitLab backup bucket
      • Backing up to Google Cloud Storage
    • Restore

Backup and restore

This document explains the technical implementation of the backup and restore into/from CNG.

Toolbox pod

The toolbox chart deploys a pod into the cluster. This pod will act as an entry point for interaction with other containers in the cluster.

Using this pod, a user can run commands with kubectl exec -it <pod name> -- <arbitrary command>

The Toolbox runs a container from the Toolbox image.

The image contains some custom scripts that are to be called as commands by the user. Those scripts are for running Rake tasks, backup, restore, and some helper scripts for interacting with object storage.

Backup utility

The backup utility is one of the scripts
in the Toolbox container. As the name suggests, it is used for creating backups, but it also handles restoring an existing backup.

Backups

When run without any arguments, the backup utility script creates a backup tar and uploads it to object storage.
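
In practice this means executing the script inside the Toolbox pod, for example (a sketch; the label selector and pod name are illustrative):

# Find the Toolbox pod, then run a full backup inside it.
kubectl --namespace <namespace> get pods -l app=toolbox
kubectl --namespace <namespace> exec -it <toolbox pod name> -- backup-utility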

Sequence of execution

Backups are made using the following steps, in order:


  1. Backup the database (if not skipped) using the GitLab backup Rake task
  2. Backup the repositories (if not skipped) using the GitLab backup Rake task
  3. For each of the object storage backends:

    1. If the object storage backend is marked for skipping, skip this storage backend.
    2. Tar the existing data in the corresponding object storage bucket, naming it <bucket-name>.tar
    3. Move the tar to the backup location on disk
  4. Write a backup_information.yml file which contains metadata identifying the version of GitLab, the time of the backup, and the skipped items.
  5. Create a tar file containing the individual tar files along with backup_information.yml
  6. Upload the resulting tar file to the gitlab-backups bucket in object storage.

Command line arguments



  • --skip <component>

    You can skip parts of the backup process by using --skip <component> for every component that you want to skip in the backup process. Skippable components are the database (db), repositories (repositories), and any of the object storages (registry, uploads, artifacts, lfs, packages, external_diffs, terraform_state, or ci_secure_files).


  • -t <timestamp-override-value>

    This gives you partial control over the name of the backup: when you specify this flag the created backup will be named <timestamp-override-value>_gitlab_backup.tar. The default value is the current UNIX timestamp, postfixed with the current date formatted to YYYY_mm_dd.


  • --backend <backend>

    Configures the object storage backend to use for backups. Can be either s3 or gcs. Default is s3.


  • --storage-class <storage-class-name>

    It is also possible to specify the storage class in which the backup is stored using --storage-class <storage-class-name>, allowing you to save on backup storage costs. If unspecified, this will use the default of the storage backend.


    note
    This storage class name is passed through as-is to the storage class argument of your specified backend.
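
Putting these flags together, an invocation from the Toolbox pod might look like the following sketch; the skipped components, timestamp, and storage class are illustrative:

backup-utility --skip registry --skip lfs -t nightly-2018-07-20 --storage-class COLDLINE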

GitLab backup bucket

The default name of the bucket that will be used to store backups is gitlab-backups . This is configurable
using the BACKUP_BUCKET_NAME environment variable.

Backing up to Google Cloud Storage

By default, the backup utility uses s3cmd to upload and download artifacts from object storage. While this can work with Google Cloud Storage (GCS),
it requires using the Interoperability API which makes undesirable compromises to authentication and authorization. When using Google Cloud Storage
for backups you can configure the backup utility script to use the Cloud Storage native CLI, gsutil , to do the upload and download
of your artifacts by setting the BACKUP_BACKEND environment variable to gcs .
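
As a sketch, the backend and bucket can be selected through these environment variables when invoking the script manually (they are normally configured on the Toolbox deployment; the bucket name is illustrative):

BACKUP_BACKEND=gcs BACKUP_BUCKET_NAME=my-gitlab-backups backup-utility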

Restore

The backup utility, when given the --restore argument, attempts to restore from an existing backup to the running instance. This
backup can be from either an Omnibus GitLab or a CNG Helm chart installation, provided that the instance that was
backed up and the running instance run the same version of GitLab. The restore expects a file in the backup bucket, specified with -t <backup-name>, or a remote URL, specified with -f <url>.

When given a -t parameter, it looks in the backup bucket in object storage for a backup tar with that name. When
given a -f parameter, it expects the given URL to be a valid URI of a backup tar in a location accessible from the container.
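
For example, a restore might be invoked like this from inside the Toolbox pod (a sketch; the backup name and URL are illustrative):

# Restore from a named backup tar in the backup bucket:
backup-utility --restore -t 1532098718_2018_07_20
# Or restore from a remote URL accessible from the container:
backup-utility --restore -f https://example.com/backups/1532098718_2018_07_20_gitlab_backup.tar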

After fetching the backup tar the sequence of execution is:


  1. For the repositories and database, run the GitLab backup Rake task
  2. For each of the object storage backends:

    • tar the existing data in the corresponding object storage bucket, naming it <backup-name>.tar
    • upload it to the tmp bucket in object storage
    • clean up the corresponding bucket
    • restore the backup content into the corresponding bucket

note
If the restore fails, the user will need to revert to the previous backup using the data in the tmp directory of the backup bucket. This is currently a manual process.
Design Decisions | GitLab





  • Attempt to catch problematic configurations
  • Breaking changes via deprecation
  • Preference of Secrets in initContainer over Environment
  • Sub-charts are deployed from global chart
  • Template partials for gitlab/* should be global whenever possible

  • Forked charts

    • Redis
    • Redis HA
    • MinIO
    • registry
    • NGINX Ingress
  • Kubernetes version used throughout chart
  • Image variants shipped with CNG

Design Decisions

This documentation collects reasoning and decisions made
regarding the design of the Helm charts in this repository.

Attempt to catch problematic configurations

Due to the complexity of these charts and their level of flexibility, there are some
overlaps where it is possible to produce a configuration that would lead to an
unpredictable, or entirely non-functional deployment. In an effort
to prevent known problematic settings combinations, we have implemented template logic
designed to detect and warn the user that their configuration will not work.

This replicates the behavior of deprecations, but is specific to ensuring functional configuration.

Introduced in !757 checkConfig: add methods to test for known errors

Breaking changes via deprecation

During the development of these charts, we occasionally make improvements that require
alterations to the properties of existing deployments. Two examples were the centralization
of configuring the use of MinIO, and the migration of external object storage configuration
from properties to secrets (in observance of our preference).

As a means of preventing a user from accidentally deploying an updated version of these
charts which includes a breaking change against a configuration that would not function, we
have chosen to implement deprecation notifications. These are designed to detect
properties that have been relocated, altered, replaced, or removed entirely, and then inform
the user of what changes need to be made to the configuration. This may include pointing
the user to documentation on how to replace a property with a secret. These notifications
cause the Helm install or upgrade commands to stop with a parse error and output a complete
list of items that need to be addressed. We have taken care to ensure a user will not be
placed into a loop of error, fix, repeat.

All deprecations must be addressed in order for a successful deployment to occur. We believe
the user would prefer to be informed of a breaking change over experiencing unexpected
behavior or complete failure that requires debugging.

Introduced in !396 Deprecations: implement buffered list of deprecations

Preference of Secrets in initContainer over Environment

Much of the container ecosystem has, or expects, the capability to be configured
through environment variables. This configuration practice
stems from the concept of The Twelve-Factor App. This
greatly simplifies configuration across multiple deployment environments, but there
remains a security concern with passing connection secrets such as passwords and
private keys via the container’s environment.

Most container ecosystems provide a simple method to inspect the state of a running
container, which usually includes the environment. Using Docker
as an example, any process capable of communicating with the daemon can query the
state of all running containers. This means that if you have a privileged container
such as dind, that container can then inspect the environment of any container
on a given node, and expose all secrets contained within.
As a part of the complete DevOps lifecycle, dind is regularly
used for building containers that will be pushed to a registry and subsequently
deployed.

This concern is why we’ve decided to prefer the population of sensitive information
via initContainers.

Related issues:


  • #90
  • #114

Sub-charts are deployed from global chart

All sub-charts of this repository are designed to be deployed via the global chart.
Each component can still be deployed individually, but make use of a common set of
properties facilitated by the global chart.

This decision simplifies both the use and maintenance of the repository as a whole.

Related issue:


  • #352

Template partials for gitlab/* should be global whenever possible

All template partials of the gitlab/* sub-charts should be a part of the global or
GitLab sub-chart templates/_helpers.tpl whenever possible. Templates from
forked charts will remain a part of those charts. This reduces
the maintenance impact of these forks.

The benefits of this are straightforward:


  • Increased DRY behavior, leading to easier maintenance. There should be no reason
    to have duplicates of the same function across multiple sub-charts when a single
    entry will suffice.
  • Reduction of template naming conflicts. All partials throughout a chart are compiled together,
    and thus we can treat them like the global behavior they are.

Related issue:


  • #352

Forked charts

The following charts have been forked or re-created in this repository following
our guidelines for forking.

Redis

With the 3.0 release of the GitLab Helm chart, we no longer fork the upstream Redis chart,
and instead include it as a dependency.

Redis HA

Redis-HA was a chart we included in our releases prior to 3.0. It has now been removed
and replaced with the upstream Redis chart,
which has added optional HA support.

MinIO

Our MinIO chart was altered from the upstream MinIO.


  • Make use of pre-existing Kubernetes secrets instead of creating new ones from properties.
  • Remove providing the sensitive keys via Environment.
  • Automate the creation of multiple buckets via defaultBuckets in place of
    defaultBucket.* properties.

registry

Our registry chart was altered from the upstream docker-registry.


  • Enable the use of in-chart MinIO services automatically.
  • Automatically hook authentication to the GitLab services.

NGINX Ingress

Our NGINX Ingress chart was altered from the upstream NGINX Ingress.


  • Add feature to allow for the TCP ConfigMap to be external to the chart
  • Add feature to allow Ingress class to be templated based on release name

Kubernetes version used throughout chart

To maximize support for different Kubernetes versions, use a kubectl that’s
one minor version lower than the current stable release of Kubernetes.
This should allow support for at least three, and quite possibly more
Kubernetes minor versions. For further discussion on kubectl versions, see
issue 1509.

Related Issues:


  • charts/gitlab#1509
  • charts/gitlab#1583

Related Merge Requests:


  • charts/gitlab!1053
  • build/CNG!329
  • gitlab-build-images!251

Image variants shipped with CNG

Date: 2022-02-10

The CNG project ships images based on both Debian and UBI. The decision to maintain configuration
for both distributions was based upon the following:


  • Why we ship Debian-based images:

    • Track record, precedent
    • Familiarity with distribution
    • Community vs “enterprise”
    • Lack of perceived vendor lock-in
  • Why we ship UBI-based images:

    • Required in some customer environments
    • Required for RHEL certification and inclusion into the OpenShift Marketplace / RedHat Catalog

Further discussion on this topic can be found in issue #3095.

Goals | GitLab





  • Scheduler
  • Helm charts

Goals

We have a few core goals with this initiative:


  1. Easy to scale horizontally
  2. Easy to deploy, upgrade, maintain
  3. Wide support of cloud service providers
  4. Initial support for Kubernetes and Helm, with flexibility to support other
    schedulers in the future

Scheduler

We will launch with support for Kubernetes, which is mature and widely supported
across the industry. As part of our design however, we will try to avoid decisions
which will preclude the support of other schedulers. This is especially true for
downstream Kubernetes projects like OpenShift and Tectonic. In the future other
schedulers may also be supported like Docker Swarm and Mesosphere.

We aim to support the scaling and self-healing capabilities of Kubernetes:


  • Readiness and Health checks to ensure pods are functioning, and if not to recycle them
  • Tracks to support canary and rolling deployments
  • Auto-scaling

We will try to leverage standard Kubernetes features:


  • ConfigMaps for managing configuration. These will then get mapped or passed to
    Docker containers
  • Secrets for sensitive data

Since we might also be using Consul, it may be utilized instead for consistency with other installation methods.

Helm charts

A Helm chart will be created to manage the deployment of each GitLab specific container/service. We will then also include bundled charts to make the overall deployment easier. This is particularly
important for this effort, as there will be significantly more complexity in
the Docker and Kubernetes layers than the all-in-one Omnibus based solutions.
Helm can help to manage this complexity, and provide an easy top level interface
to manage settings via the values.yaml file.

We plan to offer a three-tiered set of Helm charts:

Helm chart Structure
