Backing up a GitLab installation | GitLab





  • Create the backup
  • Cron based backup
  • Backup utility extra arguments
  • Backup the secrets
  • Additional Information

Backing up a GitLab installation

GitLab backups are taken by running the backup-utility command in the Toolbox pod provided in the chart. Backups can also be automated by enabling the Cron based backup functionality of this chart.

Before running the backup for the first time, you should ensure the
Toolbox is properly configured for access to object storage.

Follow these steps to back up a GitLab Helm chart-based installation:

Create the backup



  1. Ensure the Toolbox pod is running by executing the following command:


    kubectl get pods -lrelease=RELEASE_NAME,app=toolbox

  2. Run the backup utility


    kubectl exec <Toolbox pod name> -it -- backup-utility

  3. Visit the gitlab-backups bucket in the object storage service and ensure a tarball has been added. It will be named in <timestamp>_<version>_gitlab_backup.tar format.


  4. This tarball is required for restoration.

Cron based backup


note
The Kubernetes CronJob created by the Helm chart
sets the cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
annotation on the jobTemplate. Some Kubernetes environments, such as
GKE Autopilot, don’t allow this annotation to be set and will not create
Job Pods for the backup.

Cron based backups can be enabled in this chart to happen at regular intervals as defined by the Kubernetes schedule.

You need to set the following parameters:



  • gitlab.toolbox.backups.cron.enabled: Set to true to enable cron based backups

  • gitlab.toolbox.backups.cron.schedule: Set as per the Kubernetes schedule docs

  • gitlab.toolbox.backups.cron.extraArgs: Optionally set extra arguments for backup-utility (like --skip db)
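
For example, cron-based backups could be enabled when installing or upgrading the chart with --set flags along the following lines; the release name, schedule, and extra argument shown here are illustrative, not defaults:

helm upgrade --install gitlab gitlab/gitlab \
--set gitlab.toolbox.backups.cron.enabled=true \
--set gitlab.toolbox.backups.cron.schedule="0 1 * * *" \
--set gitlab.toolbox.backups.cron.extraArgs="--skip db"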

Backup utility extra arguments

The backup utility can take some extra arguments. See what those are with:

kubectl exec <Toolbox pod name> -it -- backup-utility --help
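
For instance, a run that skips one of the components might look like the following; the flag value shown is illustrative, so check the --help output above for the components supported by your chart version:

kubectl exec <Toolbox pod name> -it -- backup-utility --skip registry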

Backup the secrets

You also need to save a copy of the rails secrets as these are not included in the backup as a security precaution. We recommend keeping your full backup that includes the database separate from the copy of the secrets.



  1. Find the object name for the rails secrets:


    kubectl get secrets | grep rails-secret

  2. Save a copy of the rails secrets:


    kubectl get secrets <rails-secret-name> -o jsonpath="{.data['secrets\.yml']}" | base64 --decode > gitlab-secrets.yaml

  3. Store gitlab-secrets.yaml in a secure location. You need it to restore your backups.
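
As an optional sanity check, you can confirm that the decoded copy contains the expected top-level keys; the keys shown are typical examples rather than an exhaustive list:

grep -E 'db_key_base|secret_key_base|otp_key_base' gitlab-secrets.yaml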

Additional Information


  • GitLab chart Backup/Restore Introduction
  • Restoring a GitLab installation
Backup and restore a GitLab instance | GitLab





  • Prerequisites
  • Backup and Restoring procedures

  • Object storage

    • Backups to S3
    • Backups to Google Cloud Storage (GCS)

  • Troubleshooting

    • Pod eviction issues
    • “Bucket not found” errors
    • “AccessDeniedException: 403” errors in GCP

Backup and restore a GitLab instance

GitLab Helm chart provides a utility pod from the Toolbox sub-chart that acts as an interface for the purpose of backing up and restoring GitLab instances. It is equipped with a backup-utility executable which interacts with other necessary pods for this task.
Technical details for how the utility works can be found in the architecture documentation.

Prerequisites



  • Backup and Restore procedures described here have only been tested with S3 compatible APIs. Support for other object storage services, like Google Cloud Storage, will be tested in future revisions.


  • During restoration, the backup tarball needs to be extracted to disk. This means the Toolbox pod should have disk of necessary size available.


  • This chart relies on the use of object storage for artifacts, uploads, packages, registry and lfs objects, and does not currently migrate these for you during restore. If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage before taking the backup. See issue 646.

Backup and Restoring procedures


  • Backing up a GitLab installation
  • Restoring a GitLab installation

Object storage

We provide a MinIO instance out of the box when using this chart unless an external object storage is specified. The Toolbox connects to the included MinIO by default, unless specific settings are given. The Toolbox can also be configured to back up to Amazon S3 or Google Cloud Storage (GCS).

Backups to S3

The Toolbox uses s3cmd to connect to object storage. To configure connectivity to external object storage, gitlab.toolbox.backups.objectStorage.config.secret should be specified and point to a Kubernetes secret containing a .s3cfg file. gitlab.toolbox.backups.objectStorage.config.key should be specified if different from the default of config; it points to the key containing the contents of the .s3cfg file.

It should look like this:

helm install gitlab gitlab/gitlab \
--set gitlab.toolbox.backups.objectStorage.config.secret=my-s3cfg \
--set gitlab.toolbox.backups.objectStorage.config.key=config

Documentation for the s3cmd .s3cfg file format can be found in the s3cmd documentation.
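
If you have a working .s3cfg file locally, a minimal sketch of creating the referenced Kubernetes secret looks like this; the secret name my-s3cfg and the key config must match the values passed to the chart above, and the local file path is illustrative:

kubectl create secret generic my-s3cfg --from-file=config=.s3cfg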

In addition, two bucket locations need to be configured, one for storing the backups, and one temporary bucket that is used
when restoring a backup.

--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage

Backups to Google Cloud Storage (GCS)

To back up to GCS you must set gitlab.toolbox.backups.objectStorage.backend to gcs. This ensures that the Toolbox uses the gsutil CLI when storing and retrieving
objects. Additionally, you must set gitlab.toolbox.backups.objectStorage.config.gcpProject to the project ID of the GCP project that contains your storage buckets.
You must create a Kubernetes secret with the contents of an active service account JSON key where the service account has the storage.admin role for the buckets
you will use for backup. Below is an example of using gcloud and kubectl to create the secret.

export PROJECT_ID=$(gcloud config get-value project)
gcloud iam service-accounts create gitlab-gcs --display-name "Gitlab Cloud Storage"
gcloud projects add-iam-policy-binding --role roles/storage.admin ${PROJECT_ID} --member=serviceAccount:gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com
gcloud iam service-accounts keys create --iam-account gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com storage.config
kubectl create secret generic storage-config --from-file=config=storage.config

Configure your Helm chart as follows to use the service account key to authenticate to GCS for backups:

helm install gitlab gitlab/gitlab \
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config \
--set gitlab.toolbox.backups.objectStorage.config.key=config \
--set gitlab.toolbox.backups.objectStorage.config.gcpProject=my-gcp-project-id \
--set gitlab.toolbox.backups.objectStorage.backend=gcs

In addition, two bucket locations need to be configured, one for storing the backups, and one temporary bucket that is used
when restoring a backup.

--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage

Troubleshooting

Pod eviction issues

As the backups are assembled locally outside of the object storage target, temporary disk space is needed. The required space might exceed the size of the actual backup archive.
The default configuration uses the Toolbox pod’s file system to store the temporary data. If you find the pod being evicted due to low resources, you should attach a persistent volume to the pod to hold the temporary data.
On GKE, add the following settings to your Helm command:

--set gitlab.toolbox.persistence.enabled=true

If your backups are being run as part of the included backup cron job, then you will want to enable persistence for the cron job as well:

--set gitlab.toolbox.backups.cron.persistence.enabled=true

For other providers, you may need to create a persistent volume. See our Storage documentation for possible examples on how to do this.

“Bucket not found” errors

If you see Bucket not found errors during backups, check that the
credentials for your bucket are configured correctly.

The command depends on the cloud service provider:



  • For AWS S3, the credentials are stored on the toolbox pod in ~/.s3cfg. Run:


    s3cmd ls

  • For GCP GCS, run:


    gsutil ls

You should see a list of available buckets.
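
If you are running these checks from your workstation rather than from a shell inside the pod, the same commands can be wrapped in kubectl exec, using the Toolbox pod name found with kubectl get pods:

kubectl exec <Toolbox pod name> -it -- s3cmd ls
kubectl exec <Toolbox pod name> -it -- gsutil ls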

“AccessDeniedException: 403” errors in GCP

An error like [Error] AccessDeniedException: 403 <GCP Account> does not have storage.objects.list access to the Google Cloud Storage bucket.
usually happens during a backup or restore of a GitLab instance, because of missing permissions.

The backup and restore operations use all buckets in the environment, so
confirm that all buckets in your environment have been created, and that the GCP account can access (list, read, and write) all buckets:



  1. Find your toolbox pod:


    kubectl get pods -lrelease=RELEASE_NAME,app=toolbox

  2. Get all buckets in the pod’s environment. Replace <toolbox-pod-name> with your actual toolbox pod name, but leave "BUCKET_NAME" as it is:


    kubectl describe pod <toolbox-pod-name> | grep "BUCKET_NAME"

  3. Confirm that you have access to every bucket in the environment:


    # List
    gsutil ls gs://<bucket-to-validate>/

    # Read
    gsutil cp gs://<bucket-to-validate>/<object-to-get> <save-to-location>

    # Write
    gsutil cp -n <local-file> gs://<bucket-to-validate>/
Restoring a GitLab installation | GitLab






  • Restoring the secrets

    • Restore the rails secrets
    • Restart the pods

  • Restoring the backup file

    • Restore the runner registration token
  • Enable Kubernetes related settings
  • Restart the pods
  • (Optional) Reset the root user’s password
  • Additional Information

Restoring a GitLab installation

To obtain a backup tarball of an existing GitLab instance that used other installation methods like an Omnibus GitLab
package or the Omnibus GitLab Helm chart, follow the instructions
given in the documentation.

If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage
before taking the backup. See issue 646.

It is recommended that you restore a backup to the same version of GitLab on which it was created.

GitLab backup restores are taken by running the backup-utility command on the Toolbox pod provided in the chart.

Before running the restore for the first time, you should ensure the Toolbox is properly configured for
access to object storage.

The backup utility provided by the GitLab Helm chart supports restoring a tarball from any of the following locations:


  1. The gitlab-backups bucket in the object storage service associated to the instance. This is the default scenario.
  2. A public URL that can be accessed from the pod.
  3. A local file that you can copy to the Toolbox pod using kubectl cp

Restoring the secrets

Restore the rails secrets

The GitLab chart expects rails secrets to be provided as a Kubernetes Secret with content in YAML. If you are restoring the rails secret from an Omnibus GitLab instance, secrets are stored in JSON format in the /etc/gitlab/gitlab-secrets.json file. To convert the file and create the secret in YAML format:



  1. Copy the file /etc/gitlab/gitlab-secrets.json to the workstation where you run kubectl commands.


  2. Install the yq tool (version 4.21.1 or later) on your workstation.


  3. Run the following command to convert your gitlab-secrets.json to YAML format:


    yq -P '{"production": .gitlab_rails}' gitlab-secrets.json >> gitlab-secrets.yaml

  4. Check that the new gitlab-secrets.yaml file has the following contents:

    production:
      db_key_base: <your key base value>
      secret_key_base: <your secret key base value>
      otp_key_base: <your otp key base value>
      openid_connect_signing_key: <your openid signing key>
      ci_jwt_signing_key: <your ci jwt signing key>

To restore the rails secrets from a YAML file:



  1. Find the object name for the rails secrets:


    kubectl get secrets | grep rails-secret

  2. Delete the existing secret:


    kubectl delete secret <rails-secret-name>

  3. Create the new secret using the same name as the old, passing in your local YAML file:


    kubectl create secret generic <rails-secret-name> --from-file=secrets.yml=gitlab-secrets.yaml

Restart the pods

In order to use the new secrets, the Webservice, Sidekiq and Toolbox pods
need to be restarted. The safest way to restart those pods is to run:

kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
kubectl delete pods -lapp=toolbox,release=<helm release name>

Restoring the backup file

The steps for restoring a GitLab installation are:



  1. Make sure you have a running GitLab instance by deploying the charts. Ensure the Toolbox pod is enabled and running by executing the following command:


    kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
  2. Get the tarball ready in any of the above locations. Make sure it is named in the <timestamp>_<version>_gitlab_backup.tar format.

  3. Run the backup utility to restore the tarball


    kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <timestamp>_<version>

    Here, <timestamp>_<version> is from the name of the tarball stored in the gitlab-backups bucket. If you want to provide a public URL instead, use the following command:


    kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f <URL>

    You can provide a local path as a URL as long as it’s in the format file:///<path> (see the kubectl cp sketch after this list).

  4. This process will take time depending on the size of the tarball.
  5. The restoration process will erase the existing contents of the database, move existing repositories to temporary locations, and extract the contents of the tarball. Repositories will be moved to their corresponding locations on disk, and other data, like artifacts, uploads, and LFS objects, will be uploaded to the corresponding buckets in object storage.
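
As a minimal sketch of the local-file option from step 3, the tarball can first be copied into the Toolbox pod with kubectl cp and then referenced with a file:// URL; the destination directory /tmp is illustrative and must have enough free space for the tarball:

kubectl cp <timestamp>_<version>_gitlab_backup.tar <Toolbox pod name>:/tmp/<timestamp>_<version>_gitlab_backup.tar
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f file:///tmp/<timestamp>_<version>_gitlab_backup.tar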

note
During restoration, the backup tarball needs to be extracted to disk.
This means the Toolbox pod should have disk of necessary size available.
For more details and configuration please see the Toolbox documentation.

Restore the runner registration token

After restoring, the included runner will not be able to register to the instance because it no longer has the correct registration token.
Follow these troubleshooting steps to get it updated.

If the restored backup was not from an existing installation of the chart, you will also need to enable some Kubernetes-specific features after the restore, such as
incremental CI job logging.



  1. Find your Toolbox pod by executing the following command:


    kubectl get pods -lrelease=RELEASE_NAME,app=toolbox

  2. Run the instance setup script to enable the necessary features:


    kubectl exec <Toolbox pod name> -it -- gitlab-rails runner -e production /scripts/custom-instance-setup

Restart the pods

In order to use the new changes, the Webservice and Sidekiq pods need to be restarted. The safest way to restart those pods is to run:

kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>

(Optional) Reset the root user’s password

The restoration process does not update the gitlab-initial-root-password secret with the value from the backup. To log in as root, use the original password included in the backup. If the password is no longer accessible, follow the steps below to reset it.



  1. Attach to the Webservice pod by executing the following command:


    kubectl exec <Webservice pod name> -it -- bash

  2. Run the following command to reset the password of the root user. Replace #{password} with a password of your choice:


    /srv/gitlab/bin/rails runner "user = User.first; user.password='#{password}'; user.password_confirmation='#{password}'; user.save!"

Additional Information


  • GitLab chart Backup/Restore Introduction
  • Backing up a GitLab installation
Using certmanager-issuer for CertManager Issuer creation | GitLab





  • Configuration
  • Installation parameters

Using certmanager-issuer for CertManager Issuer creation

This chart is a helper for Jetstack’s CertManager Helm chart.
It automatically provisions an Issuer object, used by CertManager when requesting TLS certificates for
GitLab Ingresses.

Configuration

We describe all the major sections of the configuration below. When configuring
from the parent chart, these values are:

certmanager-issuer:
  # Configure an ACME Issuer in cert-manager. Only used if global.ingress.configureCertmanager is true.
  server: https://acme-v02.api.letsencrypt.org/directory

  # Provide an email to associate with your TLS certificates
  # email:

  rbac:
    create: true

  resources:
    requests:
      cpu: 50m

  # Priority class assigned to pods
  priorityClassName: ""

  common:
    labels: {}

Installation parameters

This table contains all the possible chart configurations that can be supplied
to the helm install command using the --set flags:

Parameter Default Description
server https://acme-v02.api.letsencrypt.org/directory Let’s Encrypt server for use with the ACME CertManager Issuer.
email You must provide an email to associate with your TLS certificates. Let’s Encrypt uses this address to contact you about expiring certificates, and issues related to your account.
rbac.create true When true , creates RBAC-related resources to allow for manipulation of CertManager Issuer objects.
resources.requests.cpu 50m Requested CPU resources for the Issuer creation Job.
common.labels Common labels to apply to the ServiceAccount, Job, ConfigMap, and Issuer.
priorityClassName Priority class assigned to pods.
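
As an illustration, the email can be supplied when installing the parent chart with --set; the release name and address below are placeholders, not defaults:

helm upgrade --install gitlab gitlab/gitlab \
--set certmanager-issuer.email=admin@example.com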
Using the GitLab-Gitaly chart | GitLab





  • Requirements
  • Design Choices

  • Configuration

    • Installation command line options

  • Chart configuration examples

    • extraEnv
    • extraEnvFrom
    • image.pullSecrets
    • tolerations
    • annotations
    • priorityClassName
    • git.config
    • Altering security contexts

  • External Services

    • Workhorse

  • Chart settings

    • Git Repository Persistence
    • Running Gitaly over TLS
    • Global server hooks

Using the GitLab-Gitaly chart

The gitaly sub-chart provides a configurable deployment of Gitaly Servers.

Requirements

This chart depends on access to the Workhorse service, either as part of the
complete GitLab chart or provided as an external service reachable from the Kubernetes
cluster this chart is deployed onto.

Design Choices

The Gitaly container used in this chart also contains the GitLab Shell codebase in
order to perform the actions on the Git repositories that have not yet been ported into Gitaly.
The Gitaly container includes a copy of the GitLab Shell container within it, and
as a result we also need to configure GitLab Shell within this chart.

Configuration

The gitaly chart is configured in two parts: external services,
and chart settings.

Gitaly is by default deployed as a component when deploying the GitLab
chart. If deploying Gitaly separately, global.gitaly.enabled needs to
be set to false and additional configuration will need to be performed
as described in the external Gitaly documentation.

Installation command line options

The table below contains all the possible chart configurations that can be supplied to
the helm install command using the --set flags.

Parameter Default Description
annotations Pod annotations
common.labels {} Supplemental labels that are applied to all objects created by this chart.
podLabels Supplemental Pod labels. Will not be used for selectors.
external[].hostname - "" hostname of external node
external[].name - "" name of external node storage
external[].port - "" port of external node
extraContainers List of extra containers to include
extraInitContainers List of extra init containers to include
extraVolumeMounts List of extra volumes mounts to do
extraVolumes List of extra volumes to create
extraEnv List of extra environment variables to expose
extraEnvFrom List of extra environment variables from other data sources to expose
gitaly.serviceName The name of the generated Gitaly service. Overrides global.gitaly.serviceName , and defaults to <RELEASE-NAME>-gitaly
image.pullPolicy Always Gitaly image pull policy
image.pullSecrets Secrets for the image repository
image.repository registry.gitlab.com/gitlab-org/build/cng/gitaly Gitaly image repository
image.tag master Gitaly image tag
init.image.repository initContainer image
init.image.tag initContainer image tag
internal.names[] - default Ordered names of StatefulSet storages
serviceLabels {} Supplemental service labels
service.externalPort 8075 Gitaly service exposed port
service.internalPort 8075 Gitaly internal port
service.name gitaly The name of the Service port that Gitaly is behind in the Service object.
service.type ClusterIP Gitaly service type
securityContext.fsGroup 1000 Group ID under which the pod should be started
securityContext.fsGroupChangePolicy Policy for changing ownership and permission of the volume (requires Kubernetes 1.23)
securityContext.runAsUser 1000 User ID under which the pod should be started
tolerations [] Toleration labels for pod assignment
persistence.accessMode ReadWriteOnce Gitaly persistence access mode
persistence.annotations Gitaly persistence annotations
persistence.enabled true Gitaly enable persistence flag
persistence.matchExpressions Label-expression matches to bind
persistence.matchLabels Label-value matches to bind
persistence.size 50Gi Gitaly persistence volume size
persistence.storageClass storageClassName for provisioning
persistence.subPath Gitaly persistence volume mount path
priorityClassName Gitaly StatefulSet priorityClassName
logging.level Log level
logging.format json Log format
logging.sentryDsn Sentry DSN URL - Exceptions from Go server
logging.rubySentryDsn Sentry DSN URL - Exceptions from gitaly-ruby
logging.sentryEnvironment Sentry environment to be used for logging
ruby.maxRss Gitaly-Ruby resident set size (RSS) that triggers a memory restart (bytes)
ruby.gracefulRestartTimeout Graceful period before a force restart after exceeding Max RSS
ruby.restartDelay Time that Gitaly-Ruby memory must remain high before a restart (seconds)
ruby.numWorkers Number of Gitaly-Ruby worker processes
shell.concurrency[] Concurrency of each RPC endpoint Specified using keys rpc and maxPerRepo
packObjectsCache.enabled false Enable the Gitaly pack-objects cache
packObjectsCache.dir /home/git/repositories/+gitaly/PackObjectsCache Directory where cache files get stored
packObjectsCache.max_age 5m Cache entries lifespan
git.catFileCacheSize Cache size used by Git cat-file process
git.config[] [] Git configuration that Gitaly should set when spawning Git commands
prometheus.grpcLatencyBuckets Buckets corresponding to histogram latencies on GRPC method calls to be recorded by Gitaly. A string form of the array (for example, "[1.0, 1.5, 2.0]" ) is required as input
statefulset.strategy {} Allows one to configure the update strategy utilized by the StatefulSet
statefulset.livenessProbe.initialDelaySeconds 30 Delay before liveness probe is initiated
statefulset.livenessProbe.periodSeconds 10 How often to perform the liveness probe
statefulset.livenessProbe.timeoutSeconds 3 When the liveness probe times out
statefulset.livenessProbe.successThreshold 1 Minimum consecutive successes for the liveness probe to be considered successful after having failed
statefulset.livenessProbe.failureThreshold 3 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
statefulset.readinessProbe.initialDelaySeconds 10 Delay before readiness probe is initiated
statefulset.readinessProbe.periodSeconds 10 How often to perform the readiness probe
statefulset.readinessProbe.timeoutSeconds 3 When the readiness probe times out
statefulset.readinessProbe.successThreshold 1 Minimum consecutive successes for the readiness probe to be considered successful after having failed
statefulset.readinessProbe.failureThreshold 3 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
metrics.enabled false If a metrics endpoint should be made available for scraping
metrics.port 9236 Metrics endpoint port
metrics.path /metrics Metrics endpoint path
metrics.serviceMonitor.enabled false If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the prometheus.io scrape annotations
metrics.serviceMonitor.additionalLabels {} Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig {} Additional endpoint configuration for the ServiceMonitor
metrics.metricsPort DEPRECATED: Use metrics.port

Chart configuration examples

extraEnv

extraEnv allows you to expose additional environment variables in all containers in the pods.

Below is an example use of extraEnv :

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom :

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be
found in the Kubernetes documentation.

Below is an example use of pullSecrets:

image:
  repository: my.gitaly.repository
  tag: latest
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations :

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Gitaly pods.

Below is an example use of annotations :

annotations:
  kubernetes.io/example-annotation: annotation-value

priorityClassName

priorityClassName allows you to assign a PriorityClass
to the Gitaly pods.

Below is an example use of priorityClassName :

priorityClassName: persistence-enabled


git.config

git.config allows you to add configuration to all Git commands spawned by
Gitaly. Accepts configuration as documented in git-config(1) in key /
value pairs, as shown below.

git:
  config:
    - key: "pack.threads"
      value: 4
    - key: "fsck.missingSpaceBeforeDate"
      value: ignore

Altering security contexts

Gitaly StatefulSet performance may suffer when repositories have large
numbers of files.
Mitigate the issue by changing or fully deleting the settings for the
securityContext .

gitlab:
  gitaly:
    securityContext:
      fsGroup: ""
      runAsUser: ""

note
The example syntax eliminates the securityContext setting entirely.
Setting securityContext: {} or securityContext: does not work due
to the way Helm merges default values with user provided configuration.

Starting from Kubernetes 1.23 you can instead set the fsGroupChangePolicy to OnRootMismatch to mitigate the issue.

gitlab:
  gitaly:
    securityContext:
      fsGroupChangePolicy: "OnRootMismatch"

From the documentation,
this setting “could help shorten the time it takes to change ownership and permission of a volume.”

External Services

This chart should be attached to the Workhorse service.

Workhorse

workhorse:
  host: workhorse.example.com
  serviceName: webservice
  port: 8181

Name Type Default Description
host String The hostname of the Workhorse server. This can be omitted in lieu of serviceName .
port Integer 8181 The port on which to connect to the Workhorse server.
serviceName String webservice The name of the service which is operating the Workhorse server. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name ) in place of the host value. This is convenient when using Workhorse as a part of the overall GitLab chart.

Chart settings

The following values are used to configure the Gitaly Pods.


note
Gitaly uses an Auth Token to authenticate with the Workhorse and Sidekiq
services. The Auth Token secret and key are sourced from the global.gitaly.authToken
value. Additionally, the Gitaly container has a copy of GitLab Shell, which has some configuration
that can be set. The Shell authToken is sourced from the global.shell.authToken
values.

Git Repository Persistence

This chart provisions a PersistentVolumeClaim and mounts a corresponding persistent
volume for the Git repository data. You’ll need physical storage available in the
Kubernetes cluster for this to work. If you’d rather use emptyDir, disable PersistentVolumeClaim
with: persistence.enabled: false .


note
The persistence settings for Gitaly are used in a volumeClaimTemplate
that should be valid for all your Gitaly pods. You should not include settings
that are meant to reference a single specific volume (such as volumeName ). If you want
to reference a specific volume, you need to manually create the PersistentVolumeClaim.

note
You can’t change these settings once you’ve deployed. In a StatefulSet,
the VolumeClaimTemplate is immutable.

persistence:
  enabled: true
  storageClass: standard
  accessMode: ReadWriteOnce
  size: 50Gi
  matchLabels: {}
  matchExpressions: []
  subPath: "/data"
  annotations: {}

Name Type Default Description
accessMode String ReadWriteOnce Sets the accessMode requested in the PersistentVolumeClaim. See Kubernetes Access Modes Documentation for details.
enabled Boolean true Sets whether or not to use a PersistentVolumeClaims for the repository data. If false , an emptyDir volume is used.
matchExpressions Array Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation.
matchLabels Map Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation.
size String 50Gi The minimum volume size to request for the data persistence.
storageClass String Sets the storageClassName on the Volume Claim for dynamic provisioning. When unset or null, the default provisioner will be used. If set to a hyphen, dynamic provisioning is disabled.
subPath String Sets the path within the volume to mount, rather than the volume root. The root is used if the subPath is empty.
annotations Map Sets the annotations on the Volume Claim for dynamic provisioning. See Kubernetes Annotations Documentation for details.

Running Gitaly over TLS


note
This section refers to Gitaly being run inside the cluster using
the Helm charts. If you are using an external Gitaly instance and want to use
TLS for communicating with it, refer to the external Gitaly documentation.

Gitaly supports communicating with other components over TLS. This is controlled
by the settings global.gitaly.tls.enabled and global.gitaly.tls.secretName .
Follow the steps to run Gitaly over TLS:



  1. The Helm chart expects a certificate to be provided for communicating over
    TLS with Gitaly. This certificate should apply to all the Gitaly nodes that
    are present. Hence all hostnames of each of these Gitaly nodes should be
    added as a Subject Alternate Name (SAN) to the certificate.

    To know the hostnames to use, check the /srv/gitlab/config/gitlab.yml
    file in the Toolbox pod and look at the various
    gitaly_address fields specified under the repositories.storages key within it.


    kubectl exec -it <Toolbox pod> -- grep gitaly_address /srv/gitlab/config/gitlab.yml

note
A basic script for generating custom signed certificates for
internal Gitaly pods can be found in this repository.
Users can use or refer to that script to generate certificates with proper
SAN attributes.


  2. Create a Kubernetes TLS secret using the certificate created.


    kubectl create secret tls gitaly-server-tls --cert=gitaly.crt --key=gitaly.key

  3. Redeploy the Helm chart by passing --set global.gitaly.tls.enabled=true.
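
Putting the settings together, the redeploy might look like the following sketch; the release name is illustrative and the secret name matches the one created in the previous step:

helm upgrade gitlab gitlab/gitlab \
--set global.gitaly.tls.enabled=true \
--set global.gitaly.tls.secretName=gitaly-server-tls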

Global server hooks

The Gitaly StatefulSet has support for Global server hooks. The hook scripts run on the Gitaly pod, and are therefore limited to the tools available in the Gitaly container.

The hooks are populated using ConfigMaps, and can be used by setting the following values as appropriate:


  1. global.gitaly.hooks.preReceive.configmap
  2. global.gitaly.hooks.postReceive.configmap
  3. global.gitaly.hooks.update.configmap

To populate the ConfigMap, you can point kubectl to a directory of scripts:

kubectl create configmap MAP_NAME --from-file /PATH/TO/SCRIPT/DIR
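
The resulting ConfigMap can then be referenced through the values listed above, for example (the ConfigMap name is the one created by the previous command, and the release name is illustrative):

helm upgrade gitlab gitlab/gitlab \
--set global.gitaly.hooks.preReceive.configmap=MAP_NAME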
Using the GitLab-Exporter chart | GitLab





  • Requirements
  • Configuration
  • Installation command line options

  • Chart configuration examples

    • image.pullSecrets
    • extraEnv
    • extraEnvFrom
    • annotations
  • Global settings

  • Chart settings

    • metrics.enabled

Using the GitLab-Exporter chart

The gitlab-exporter sub-chart provides Prometheus metrics for GitLab
application-specific data. It talks to PostgreSQL directly to perform
queries to retrieve data for CI builds, pull mirrors, etc. In addition,
it uses the Sidekiq API, which talks to Redis to gather different
metrics around the state of the Sidekiq queues (e.g. number of jobs).

Requirements

This chart depends on Redis and PostgreSQL services, either as part of
the complete GitLab chart or provided as external services reachable
from the Kubernetes cluster on which this chart is deployed.

Configuration

The gitlab-exporter chart is configured in two parts:
Global settings and Chart settings.

Installation command line options

The table below contains all the possible chart configurations that can be supplied
to the helm install command using the --set flags.

Parameter Default Description
annotations Pod annotations
common.labels {} Supplemental labels that are applied to all objects created by this chart.
podLabels Supplemental Pod labels. Will not be used for selectors.
deployment.strategy {} Allows one to configure the update strategy utilized by the deployment
enabled true GitLab Exporter enabled flag
extraContainers List of extra containers to include
extraInitContainers List of extra init containers to include
extraVolumeMounts List of extra volumes mounts to do
extraVolumes List of extra volumes to create
extraEnv List of extra environment variables to expose
extraEnvFrom List of extra environment variables from other data sources to expose
image.pullPolicy IfNotPresent GitLab image pull policy
image.pullSecrets Secrets for the image repository
image.repository registry.gitlab.com/gitlab-org/build/cng/gitlab-exporter GitLab Exporter image repository
image.tag image tag
init.image.repository initContainer image
init.image.tag initContainer image tag
metrics.enabled true If a metrics endpoint should be made available for scraping
metrics.port 9168 Metrics endpoint port
metrics.path /metrics Metrics endpoint path
metrics.serviceMonitor.enabled false If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the prometheus.io scrape annotations
metrics.serviceMonitor.additionalLabels {} Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig {} Additional endpoint configuration for the ServiceMonitor
metrics.annotations DEPRECATED: Set explicit metrics annotations. Replaced by template content.
priorityClassName Priority class assigned to pods.
resources.requests.cpu 75m GitLab Exporter minimum CPU
resources.requests.memory 100M GitLab Exporter minimum memory
serviceLabels {} Supplemental service labels
service.externalPort 9168 GitLab Exporter exposed port
service.internalPort 9168 GitLab Exporter internal port
service.name gitlab-exporter GitLab Exporter service name
service.type ClusterIP GitLab Exporter service type
securityContext.fsGroup 1000 Group ID under which the pod should be started
securityContext.runAsUser 1000 User ID under which the pod should be started
tolerations [] Toleration labels for pod assignment
psql.port Set PostgreSQL server port. Takes precedence over global.psql.port

Chart configuration examples

extraEnv

extraEnv allows you to expose additional environment variables in all containers in the pods.

Below is an example use of extraEnv :

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom :

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be
found in the Kubernetes documentation.

Below is an example use of pullSecrets :

image:
  repository: my.image.repository
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

annotations

annotations allows you to add annotations to the GitLab Exporter pods. For example:

annotations:
  kubernetes.io/example-annotation: annotation-value

Global settings

We share some common global settings among our charts. See the Globals Documentation
for common configuration options, such as GitLab and Registry hostnames.

Chart settings

The following values are used to configure the GitLab Exporter pod.

metrics.enabled

By default, the pod exposes a metrics endpoint at /metrics . When
metrics are enabled, annotations are added to each pod allowing a
Prometheus server to discover and scrape the exposed metrics.
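
If you run the Prometheus Operator, a ServiceMonitor can be requested instead of relying on the scrape annotations. A minimal sketch, assuming the exporter values are set under the gitlab.gitlab-exporter key of the parent chart:

helm upgrade --install gitlab gitlab/gitlab \
--set gitlab.gitlab-exporter.metrics.enabled=true \
--set gitlab.gitlab-exporter.metrics.serviceMonitor.enabled=true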
