Using the GitLab-Migrations chart | GitLab






  • Requirements
  • Design Choices
  • Configuration
  • Installation command line options

  • Chart configuration examples

    • extraEnv
    • extraEnvFrom
    • image.pullSecrets
  • Using the Community Edition of this chart

  • External Services


    • Redis

      • host
      • serviceName
      • port
      • password
      • sentinels

    • PostgreSQL

      • host
      • serviceName
      • port
      • database
      • preparedStatements
      • username
      • password

Using the GitLab-Migrations chart

The migrations sub-chart provides a single migration Job that handles seeding/migrating the GitLab database. The chart runs using the GitLab Rails codebase.

After migrating, this Job also edits the application settings in the database to turn off writes to the authorized_keys file. In the charts, we only support using the GitLab Authorized Keys API with the SSH AuthorizedKeysCommand, instead of writing to an authorized_keys file.

Requirements

This chart depends on Redis and PostgreSQL, either as part of the complete GitLab chart or provided as external services reachable from the Kubernetes cluster this chart is deployed onto.
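
If the bundled services are not used, the connection details are typically supplied through the charts' global settings. Below is a rough, hedged sketch of what that might look like; the key names assume the standard global.psql and global.redis settings, and the hostnames and Secret names are placeholders only:

global:
  psql:
    host: postgresql.example.com   # placeholder external PostgreSQL host
    port: 5432
    password:
      secret: gitlab-postgres      # assumed pre-created Secret
      key: psql-password
  redis:
    host: redis.example.com        # placeholder external Redis host
    password:
      secret: gitlab-redis         # assumed pre-created Secret
      key: redis-password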

Design Choices

The migrations sub-chart creates a new migration Job each time the chart is deployed. To prevent Job name collisions, we append the chart revision and a random alphanumeric value to the Job name each time it is created. The purpose of the random text is described further in this section.

For now, the Jobs remain as objects in the cluster after they complete, so that the migration logs can be observed. Currently this means these Jobs persist even after a helm uninstall . This is one of the reasons we append random text to the Job name: future deployments using the same release name won't cause conflicts. Once some form of log-shipping is in place, we can revisit the persistence of these objects.

The container used in this chart has some additional optimizations that we are not currently using, mainly the ability to quickly skip running migrations if they are already up to date, without needing to boot up the Rails application to check. This optimization requires persisting the migration status, which this chart does not currently do. In the future, we will introduce storage support for the migration status to this chart.

Configuration

The migrations chart is configured in two parts: external services, and chart settings.

Installation command line options

The table below contains all the possible chart configurations that can be supplied to the helm install command using the --set flags.

Parameter Description Default
common.labels Supplemental labels that are applied to all objects created by this chart. {}
image.repository Migrations image repository registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ee
image.tag Migrations image tag
image.pullPolicy Migrations pull policy Always
image.pullSecrets Secrets for the image repository
init.image initContainer image busybox
init.tag initContainer image tag latest
enabled Migrations enable flag true
tolerations Toleration labels for pod assignment []
annotations Annotations for the job spec {}
podAnnotations Annotations for the pod spec {}
podLabels Supplemental Pod labels. Will not be used for selectors.
redis.serviceName Redis service name redis
psql.serviceName Name of Service providing PostgreSQL release-postgresql
psql.password.secret psql secret gitlab-postgres
psql.password.key key to psql password in psql secret psql-password
psql.port Set PostgreSQL server port. Takes precedence over global.psql.port
resources.requests.cpu GitLab Migrations minimum CPU 250m
resources.requests.memory GitLab Migrations minimum memory 200Mi
securityContext.fsGroup Group ID under which the pod should be started 1000
securityContext.runAsUser User ID under which the pod should be started 1000
extraInitContainers List of extra init containers to include
extraContainers List of extra containers to include
extraVolumes List of extra volumes to create
extraVolumeMounts List of extra volumes mounts to do
extraEnv List of extra environment variables to expose
extraEnvFrom List of extra environment variables from other data sources to expose
bootsnap.enabled Enable the Bootsnap cache for Rails true
priorityClassName Priority class assigned to pods.

Chart configuration examples

extraEnv

extraEnv allows you to expose additional environment variables in all containers in the pods.

Below is an example use of extraEnv :

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom :

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods
can be found in the Kubernetes documentation.

Below is an example use of pullSecrets :

image:
  repository: my.migrations.repository
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

Using the Community Edition of this chart

By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can instead use the Community Edition. Learn more about the difference between the two.

To use the Community Edition, set image.repository to registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ce .
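
For example, a minimal values snippet for this chart could look like the following (only the repository is changed; the tag is left to the chart default):

image:
  repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ce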

External Services

Redis

redis:
  host: redis.example.com
  serviceName: redis
  port: 6379
  sentinels:
    - host: sentinel1.example.com
      port: 26379
  password:
    secret: gitlab-redis
    key: redis-password

host

The hostname of the Redis server with the database to use. This can be omitted in lieu of serviceName . If using Redis Sentinels, the host attribute needs to be set to the cluster name as specified in the sentinel.conf .

serviceName

The name of the service which is operating the Redis database. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name ) in place of the host value. This is convenient when using Redis as a part of the overall GitLab chart. This defaults to redis .

port

The port on which to connect to the Redis server. Defaults to 6379 .

password

The password attribute for Redis has two sub keys:



  • secret defines the name of the Kubernetes Secret to pull from

  • key defines the name of the key in the above secret that contains the password.

sentinels

The sentinels attribute allows for a connection to a Redis HA cluster.
The sub keys describe each Sentinel connection.



  • host defines the hostname for the Sentinel service

  • port defines the port number to reach the Sentinel service, defaults to 26379

Note: The current Redis Sentinel support only supports Sentinels that have
been deployed separately from the GitLab chart. As a result, the Redis
deployment through the GitLab chart should be disabled with redis.install=false .
The Secret containing the Redis password will need to be manually created
before deploying the GitLab chart.
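
As a hedged sketch, the relevant top-level values for an externally managed Sentinel setup might look like the following; the key names assume the charts' global Redis settings, and the hostnames, cluster name, and Secret name are placeholders:

redis:
  install: false                    # do not deploy the bundled Redis
global:
  redis:
    host: gitlab-redis              # cluster name from sentinel.conf
    sentinels:
      - host: sentinel1.example.com
        port: 26379
    password:
      secret: gitlab-redis          # must be created manually beforehand
      key: redis-password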

PostgreSQL

psql:
  host: psql.example.com
  serviceName: pgbouncer
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres
    key: psql-password

host

The hostname of the PostgreSQL server with the database to use. This can be omitted if postgresql.install=true (default non-production).

serviceName

The name of the service which is operating the PostgreSQL database. If this is present, and host is not, the chart will template the hostname of the service in place of the host value.

port

The port on which to connect to the PostgreSQL server. Defaults to 5432 .

database

The name of the database to use on the PostgreSQL server. This defaults to gitlabhq_production .

preparedStatements

If prepared statements should be used when communicating with the PostgreSQL server. Defaults to false .

username

The username with which to authenticate to the database. This defaults to gitlab .

password

The password attribute for PostgreSQL has two sub keys:



  • secret defines the name of the Kubernetes Secret to pull from

  • key defines the name of the key in the above secret that contains the password.

Using the Praefect chart (alpha) | GitLab






  • Known limitations and issues
  • Requirements

  • Configuration

    • Replicas
    • Multiple virtual storages
    • Persistence
    • Migrating to Praefect
    • Creating the database
    • Running Praefect over TLS
    • Installation command line options

Using the Praefect chart (alpha)


caution
The Praefect chart is still under development. The alpha version is not yet suitable for production use. Upgrades may require significant manual intervention.
See our Praefect GA release Epic for more information.

The Praefect chart is used to manage a Gitaly cluster inside a GitLab installation deployed with the Helm charts.

Known limitations and issues


  1. The database has to be manually created.
  2. The cluster size is fixed: Gitaly Cluster does not currently support autoscaling.
  3. Using a Praefect instance in the cluster to manage Gitaly instances outside the cluster is not supported.
  4. Upgrades to version 4.8 of the chart (GitLab 13.8) will encounter an issue that makes it appear that repository data is lost. Data is not lost, but requires manual intervention.

Requirements

This chart consumes the Gitaly chart. Settings from global.gitaly are used to configure the instances created by this chart. Documentation of these settings can be found in Gitaly chart documentation.

Important : global.gitaly.tls is independent of global.praefect.tls . They are configured separately.

By default, this chart will create 3 Gitaly Replicas.

Configuration

The chart is disabled by default. To enable it as part of a chart deploy set global.praefect.enabled=true .

Replicas

The default number of replicas to deploy is 3. This can be changed by setting global.praefect.virtualStorages[].gitalyReplicas with the desired number of replicas. For example:

global:
  praefect:
    enabled: true
    virtualStorages:
    - name: default
      gitalyReplicas: 4
      maxUnavailable: 1

Multiple virtual storages

Multiple virtual storages can be configured (see Gitaly Cluster documentation). For example:

global:
  praefect:
    enabled: true
    virtualStorages:
    - name: default
      gitalyReplicas: 4
      maxUnavailable: 1
    - name: vs2
      gitalyReplicas: 5
      maxUnavailable: 2

This will create two sets of resources for Gitaly. This includes two Gitaly StatefulSets (one per virtual storage).

Administrators can then configure where new repositories are stored.

Persistence

It is possible to provide persistence configuration per virtual storage.

global:
  praefect:
    enabled: true
    virtualStorages:
    - name: default
      gitalyReplicas: 4
      maxUnavailable: 1
      persistence:
        enabled: true
        size: 50Gi
        accessMode: ReadWriteOnce
        storageClass: storageclass1
    - name: vs2
      gitalyReplicas: 5
      maxUnavailable: 2
      persistence:
        enabled: true
        size: 100Gi
        accessMode: ReadWriteOnce
        storageClass: storageclass2

Migrating to Praefect


note
Group-level wikis cannot be moved using the API at this time.

When migrating from standalone Gitaly instances to a Praefect setup, global.praefect.replaceInternalGitaly can be set to false .
This ensures that the existing Gitaly instances are preserved while the new Praefect-managed Gitaly instances are created.

global:
  praefect:
    enabled: true
    replaceInternalGitaly: false
    virtualStorages:
    - name: virtualStorage2
      gitalyReplicas: 5
      maxUnavailable: 2

note
When migrating to Praefect, none of Praefect’s virtual storages can be named default .
This is because there must be at least one storage named default at all times,
therefore the name is already taken by the non-Praefect configuration.

The instructions to migrate to Gitaly Cluster
can then be followed to move data from the default storage to virtualStorage2 . If additional storages
were defined under global.gitaly.internal.names , be sure to migrate repositories from those storages as well.

After the repositories have been migrated to virtualStorage2 , replaceInternalGitaly can be set back to true if a storage named
default is added in the Praefect configuration.

global:
  praefect:
    enabled: true
    replaceInternalGitaly: true
    virtualStorages:
    - name: default
      gitalyReplicas: 4
      maxUnavailable: 1
    - name: virtualStorage2
      gitalyReplicas: 5
      maxUnavailable: 2

The instructions to migrate to Gitaly Cluster
can be followed again to move data from virtualStorage2 to the newly-added default storage if desired.

Finally, see the repository storage paths documentation
to configure where new repositories are stored.

Creating the database

Praefect uses its own database to track its state. This has to be manually created in order for Praefect to be functional.


note
These instructions assume you are using the bundled PostgreSQL server. If you are using your own server,
there will be some variation in how you connect.


  1. Log into your database instance:


    kubectl exec -it $(kubectl get pods -l app=postgresql -o custom-columns=NAME:.metadata.name --no-headers) -- bash

    PGPASSWORD=$(cat $POSTGRES_POSTGRES_PASSWORD_FILE) psql -U postgres -d template1

  2. Create the database user:


    CREATE ROLE praefect WITH LOGIN;

  3. Set the database user password.

    By default, the shared-secrets Job will generate a secret for you.



    1. Fetch the password:


      kubectl get secret RELEASE_NAME-praefect-dbsecret -o jsonpath="{.data.secret}" | base64 --decode

    2. Set the password in the psql prompt:


      \password praefect

  4. Create the database:


    CREATE DATABASE praefect WITH OWNER praefect;

Running Praefect over TLS

Praefect supports communicating with client and Gitaly nodes over TLS. This is
controlled by the settings global.praefect.tls.enabled and global.praefect.tls.secretName .
To run Praefect over TLS follow these steps:



  1. The Helm chart expects a certificate to be provided for communicating over
    TLS with Praefect. This certificate should apply to all the Praefect nodes that
    are present. Hence all hostnames of each of these nodes should be added as a
    Subject Alternate Name (SAN) to the certificate or alternatively, you can use wildcards.

    To know which hostnames to use, check the /srv/gitlab/config/gitlab.yml file
    in the Toolbox Pod and look at the various gitaly_address fields specified
    under the repositories.storages key within it.


    kubectl exec -it <Toolbox Pod> -- grep gitaly_address /srv/gitlab/config/gitlab.yml

note
A basic script for generating custom signed certificates for internal Praefect Pods
can be found in this repository.
Users can use or refer to that script to generate certificates with proper SAN attributes.


  2. Create a TLS Secret using the certificate created.


    kubectl create secret tls <secret name> --cert=praefect.crt --key=praefect.key

  3. Redeploy the Helm chart by passing --set global.praefect.tls.enabled=true .

When running Gitaly over TLS, a secret name must be provided for each virtual storage.

global:
  gitaly:
    tls:
      enabled: true
  praefect:
    enabled: true
    tls:
      enabled: true
      secretName: praefect-tls
    virtualStorages:
    - name: default
      gitalyReplicas: 4
      maxUnavailable: 1
      tlsSecretName: default-tls
    - name: vs2
      gitalyReplicas: 5
      maxUnavailable: 2
      tlsSecretName: vs2-tls

Installation command line options

The table below contains all the possible charts configurations that can be supplied to
the helm install command using the --set flags.

Parameter Default Description
common.labels {} Supplemental labels that are applied to all objects created by this chart.
failover.enabled true Whether Praefect should perform failover on node failure
failover.readonlyAfter false Whether the nodes should be in read-only mode after failover
autoMigrate true Automatically run migrations on startup
electionStrategy sql See election strategy
image.repository registry.gitlab.com/gitlab-org/build/cng/gitaly The default image repository to use. Praefect is bundled as part of the Gitaly image
podLabels {} Supplemental Pod labels. Will not be used for selectors.
ntpHost pool.ntp.org Configure the NTP server Praefect should ask for the current time.

service.name praefect The name of the service to create
service.type ClusterIP The type of service to create
service.internalPort 8075 The internal port number that the Praefect pod will be listening on
service.externalPort 8075 The port number the Praefect service should expose in the cluster
init.resources
init.image
extraEnvFrom List of extra environment variables from other data sources to expose
logging.level Log level
logging.format json Log format
logging.sentryDsn Sentry DSN URL - Exceptions from Go server
logging.rubySentryDsn Sentry DSN URL - Exceptions from gitaly-ruby
logging.sentryEnvironment Sentry environment to be used for logging
metrics.enabled true If a metrics endpoint should be made available for scraping
metrics.port 9236 Metrics endpoint port
metrics.separate_database_metrics true If true, metrics scrapes will not perform database queries; setting to false may cause performance problems
metrics.path /metrics Metrics endpoint path
metrics.serviceMonitor.enabled false If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the prometheus.io scrape annotations
metrics.serviceMonitor.additionalLabels {} Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig {} Additional endpoint configuration for the ServiceMonitor
securityContext.runAsUser 1000 User ID under which the pod should be started
securityContext.fsGroup 1000 Group ID under which the pod should be started
serviceLabels {} Supplemental service labels
statefulset.strategy {} Allows one to configure the update strategy utilized by the statefulset

Using the GitLab-Sidekiq chart | GitLab






  • Requirements
  • Design Choices
  • Configuration
  • Installation command line options

  • Chart configuration examples

    • resources
    • extraEnv
    • extraEnvFrom
    • extraVolumes
    • extraVolumeMounts
    • image.pullSecrets
    • tolerations
    • annotations
  • Using the Community Edition of this chart

  • External Services

    • Redis
    • PostgreSQL
    • Gitaly
  • Metrics
  • Chart-wide defaults

  • Per-pod Settings

    • queues
    • negateQueues
    • Example pod entry

  • Configuring the networkpolicy

    • Example Network Policy

Using the GitLab-Sidekiq chart

The sidekiq sub-chart provides configurable deployment of Sidekiq workers, explicitly
designed to provide separation of queues across multiple Deployment s with individual
scalability and configuration.

While this chart provides a default pods: declaration, if you provide an empty definition,
you will have no workers.
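
For illustration, a minimal pods: declaration with a single catch-all worker might look like the sketch below; the pod name is a placeholder, and leaving queues unset means all queues are processed:

pods:
  - name: all-in-1     # placeholder name
    concurrency: 25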

Requirements

This chart depends on access to Redis, PostgreSQL, and Gitaly services, either as
part of the complete GitLab chart or provided as external services reachable from
the Kubernetes cluster this chart is deployed onto.

Design Choices

This chart creates multiple Deployment s and associated ConfigMap s. It was decided
that it would be clearer to make use of ConfigMap behaviours instead of using environment
attributes or additional arguments to the command for the containers, in order to
avoid any concerns about command length. This choice results in a large number of
ConfigMap s, but provides very clear definitions of what each pod should be doing.

Configuration

The sidekiq chart is configured in three parts: chart-wide external services,
chart-wide defaults, and per-pod definitions.

Installation command line options

The table below contains all the possible charts configurations that can be supplied
to the helm install command using the --set flags:

Parameter Default Description
annotations Pod annotations
podLabels Supplemental Pod labels. Will not be used for selectors.
common.labels Supplemental labels that are applied to all objects created by this chart.
concurrency 20 Sidekiq default concurrency
deployment.strategy {} Allows one to configure the update strategy utilized by the deployment
deployment.terminationGracePeriodSeconds 30 Optional duration in seconds the pod needs to terminate gracefully.
enabled true Sidekiq enabled flag
extraContainers List of extra containers to include
extraInitContainers List of extra init containers to include
extraVolumeMounts String template of extra volume mounts to configure
extraVolumes String template of extra volumes to configure
extraEnv List of extra environment variables to expose
extraEnvFrom List of extra environment variables from other data sources to expose
gitaly.serviceName gitaly Gitaly service name
health_checks.port 3808 Health check server port
hpa.behavior {scaleDown: {stabilizationWindowSeconds: 300 }} Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher)
hpa.customMetrics [] Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization )
hpa.cpu.targetType AverageValue Set the autoscaling CPU target type, must be either Utilization or AverageValue
hpa.cpu.targetAverageValue 350m Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization Set the autoscaling CPU target utilization
hpa.memory.targetType Set the autoscaling memory target type, must be either Utilization or AverageValue
hpa.memory.targetAverageValue Set the autoscaling memory target value
hpa.memory.targetAverageUtilization Set the autoscaling memory target utilization
hpa.targetAverageValue DEPRECATED Set the autoscaling CPU target value
minReplicas 2 Minimum number of replicas
maxReplicas 10 Maximum number of replicas
maxUnavailable 1 Limit of maximum number of Pods to be unavailable
image.pullPolicy Always Sidekiq image pull policy
image.pullSecrets Secrets for the image repository
image.repository registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ee Sidekiq image repository
image.tag Sidekiq image tag
init.image.repository initContainer image
init.image.tag initContainer image tag
logging.format default Set to json for JSON-structured logs
metrics.enabled true If a metrics endpoint should be made available for scraping
metrics.port 3807 Metrics endpoint port
metrics.path /metrics Metrics endpoint path
metrics.log_enabled false Enables or disables metrics server logs written to sidekiq_exporter.log
metrics.podMonitor.enabled false If a PodMonitor should be created to enable Prometheus Operator to manage the metrics scraping
metrics.podMonitor.additionalLabels {} Additional labels to add to the PodMonitor
metrics.podMonitor.endpointConfig {} Additional endpoint configuration for the PodMonitor
metrics.annotations DEPRECATED Set explicit metrics annotations. Replaced by template content.
metrics.tls.enabled false TLS enabled for the metrics/sidekiq_exporter endpoint
metrics.tls.secretName {Release.Name}-sidekiq-metrics-tls Secret for the metrics/sidekiq_exporter endpoint TLS cert and key
psql.password.key psql-password key to psql password in psql secret
psql.password.secret gitlab-postgres psql password secret
psql.port Set PostgreSQL server port. Takes precedence over global.psql.port
redis.serviceName redis Redis service name
resources.requests.cpu 900m Sidekiq minimum needed CPU
resources.requests.memory 2G Sidekiq minimum needed memory
resources.limits.memory Sidekiq maximum allowed memory
timeout 25 Sidekiq job timeout
tolerations [] Toleration labels for pod assignment
memoryKiller.daemonMode true If false , uses the legacy memory killer mode
memoryKiller.maxRss 2000000 Maximum RSS before delayed shutdown triggered expressed in kilobytes
memoryKiller.graceTime 900 Time to wait before a triggered shutdown expressed in seconds
memoryKiller.shutdownWait 30 Amount of time after triggered shutdown for existing jobs to finish expressed in seconds
memoryKiller.hardLimitRss Maximum RSS before immediate shutdown triggered expressed in kilobyte in daemon mode
memoryKiller.checkInterval 3 Amount of time between memory checks
livenessProbe.initialDelaySeconds 20 Delay before liveness probe is initiated
livenessProbe.periodSeconds 60 How often to perform the liveness probe
livenessProbe.timeoutSeconds 30 When the liveness probe times out
livenessProbe.successThreshold 1 Minimum consecutive successes for the liveness probe to be considered successful after having failed
livenessProbe.failureThreshold 3 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
readinessProbe.initialDelaySeconds 0 Delay before readiness probe is initiated
readinessProbe.periodSeconds 10 How often to perform the readiness probe
readinessProbe.timeoutSeconds 2 When the readiness probe times out
readinessProbe.successThreshold 1 Minimum consecutive successes for the readiness probe to be considered successful after having failed
readinessProbe.failureThreshold 3 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
securityContext.fsGroup 1000 Group ID under which the pod should be started
securityContext.runAsUser 1000 User ID under which the pod should be started
priorityClassName "" Allow configuring pods' priorityClassName ; this is used to control pod priority in case of eviction

Chart configuration examples

resources

resources allows you to configure the minimum and maximum amount of resources (memory and CPU) a Sidekiq
pod can consume.

Sidekiq pod workloads vary greatly between deployments. Generally speaking, it is understood that each Sidekiq
process consumes approximately 1 vCPU and 2 GB of memory. Vertical scaling should generally align to this 1:2
ratio of vCPU:Memory .

Below is an example use of resources :

resources:
  limits:
    memory: 5G
  requests:
    memory: 2G
    cpu: 900m

extraEnv

extraEnv allows you to expose additional environment variables in the dependencies container.

Below is an example use of extraEnv :

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

You can also set extraEnv for a specific pod:

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
pods:
  - name: mailers
    queues: mailers
    extraEnv:
      SOME_POD_KEY: some_pod_value
  - name: catchall
    negateQueues: mailers

This will set SOME_POD_KEY only for application containers in the mailers
pod. Pod-level extraEnv settings are not added to init containers.

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom :

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

extraVolumes

extraVolumes allows you to configure extra volumes chart-wide.

Below is an example use of extraVolumes :

extraVolumes: |
  - name: example-volume
    persistentVolumeClaim:
      claimName: example-pvc

extraVolumeMounts

extraVolumeMounts allows you to configure extra volumeMounts on all containers chart-wide.

Below is an example use of extraVolumeMounts :

extraVolumeMounts: |
  - name: example-volume-mount
    mountPath: /etc/example

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be
found in the Kubernetes documentation.

Below is an example use of pullSecrets :

image:
  repository: my.sidekiq.repository
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations :

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Sidekiq pods.

Below is an example use of annotations :

annotations:
  kubernetes.io/example-annotation: annotation-value

Using the Community Edition of this chart

By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you
can use the Community Edition instead. Learn more about the
differences between the two.

In order to use the Community Edition, set image.repository to
registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce .

External Services

This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances
as the Webservice chart. The values of external services will be populated into a ConfigMap
that is shared across all Sidekiq pods.

Redis

redis:
  host: rank-racoon-redis
  port: 6379
  sentinels:
    - host: sentinel1.example.com
      port: 26379
  password:
    secret: gitlab-redis
    key: redis-password

Name Type Default Description
host String The hostname of the Redis server with the database to use. This can be omitted in lieu of serviceName . If using Redis Sentinels, the host attribute needs to be set to the cluster name as specified in the sentinel.conf .
password.key String The password.key attribute for Redis defines the name of the key in the secret (below) that contains the password.
password.secret String The password.secret attribute for Redis defines the name of the Kubernetes Secret to pull from.
port Integer 6379 The port on which to connect to the Redis server.
serviceName String redis The name of the service which is operating the Redis database. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name ) in place of the host value. This is convenient when using Redis as a part of the overall GitLab chart.
sentinels.[].host String The hostname of Redis Sentinel server for a Redis HA setup.
sentinels.[].port Integer 26379 The port on which to connect to the Redis Sentinel server.

note
The current Redis Sentinel support only supports Sentinels that have
been deployed separately from the GitLab chart. As a result, the Redis
deployment through the GitLab chart should be disabled with redis.install=false .
The Secret containing the Redis password needs to be manually created
before deploying the GitLab chart.

PostgreSQL

psql:
  host: rank-racoon-psql
  serviceName: pgbouncer
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres
    key: psql-password

Name Type Default Description
host String The hostname of the PostgreSQL server with the database to use. This can be omitted if postgresql.install=true (default non-production).
serviceName String The name of the service which is operating the PostgreSQL database. If this is present, and host is not, the chart will template the hostname of the service in place of the host value.
database String gitlabhq_production The name of the database to use on the PostgreSQL server.
password.key String The password.key attribute for PostgreSQL defines the name of the key in the secret (below) that contains the password.
password.secret String The password.secret attribute for PostgreSQL defines the name of the Kubernetes Secret to pull from.
port Integer 5432 The port on which to connect to the PostgreSQL server.
username String gitlab The username with which to authenticate to the database.
preparedStatements Boolean false If prepared statements should be used when communicating with the PostgreSQL server.

Gitaly

gitaly:
  internal:
    names:
      - default
      - default2
  external:
    - name: node1
      hostname: node1.example.com
      port: 8079
  authToken:
    secret: gitaly-secret
    key: token

Name Type Default Description
host String The hostname of the Gitaly server to use. This can be omitted in lieu of serviceName .
serviceName String gitaly The name of the service which is operating the Gitaly server. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name ) in place of the host value. This is convenient when using Gitaly as a part of the overall GitLab chart.
port Integer 8075 The port on which to connect to the Gitaly server.
authToken.key String The name of the key in the secret below that contains the authToken.
authToken.secret String The name of the Kubernetes Secret to pull from.

Metrics

By default, a Prometheus metrics exporter is enabled per pod. Metrics are only available
when GitLab Prometheus metrics
are enabled in the Admin area. The exporter exposes a /metrics endpoint on port
3807 . When metrics are enabled, annotations are added to each pod allowing a Prometheus
server to discover and scrape the exposed metrics.
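
Restated as a values sketch (these simply mirror the defaults from the table above; adjust them if your Prometheus setup expects a different port or path):

metrics:
  enabled: true
  port: 3807
  path: /metrics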

Chart-wide defaults

The following values will be used chart-wide, in the event that a value is not presented
on a per-pod basis.

Name Type Default Description
concurrency Integer 25 The number of tasks to process simultaneously.
timeout Integer 4 The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes.
memoryKiller.checkInterval Integer 3 Amount of time in seconds between memory checks
memoryKiller.maxRss Integer 2000000 Maximum RSS before delayed shutdown triggered expressed in kilobytes
memoryKiller.graceTime Integer 900 Time to wait before a triggered shutdown expressed in seconds
memoryKiller.shutdownWait Integer 30 Amount of time after triggered shutdown for existing jobs to finish expressed in seconds
minReplicas Integer 2 Minimum number of replicas
maxReplicas Integer 10 Maximum number of replicas
maxUnavailable Integer 1 Limit of maximum number of Pods to be unavailable

note

Detailed documentation of the Sidekiq memory killer is available
in the Omnibus documentation.

Per-pod Settings

The pods declaration provides for the declaration of all attributes for a worker
pod. These will be templated to Deployment s, with individual ConfigMap s for their
Sidekiq instances.


note
The settings default to including a single pod that is set up to monitor
all queues. Making changes to the pods section will overwrite the default pod with
a different pod configuration. It will not add a new pod in addition to the default.
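
To make that behaviour concrete, the sketch below sets a chart-wide concurrency and declares two pods, one of which overrides it; the pod names and queue lists are illustrative only, and once pods: is set the default catch-all pod is replaced:

concurrency: 20
pods:
  - name: urgent                      # overrides the chart-wide concurrency
    concurrency: 30
    queues: merge,post_receive
  - name: catchall                    # inherits concurrency: 20
    negateQueues: merge,post_receive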

Name Type Default Description
concurrency Integer The number of tasks to process simultaneously. If not provided, it will be pulled from the chart-wide default.
name String Used to name the Deployment and ConfigMap for this pod. It should be kept short, and should not be duplicated between any two entries.
queues String
See below.
negateQueues String
See below.
queueSelector Boolean false Use the queue selector.
timeout Integer The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. If not provided, it will be pulled from the chart-wide default. This value must be less than terminationGracePeriodSeconds .
resources Each pod can present its own resource requirements, which will be added to the Deployment created for it, if present. These match the Kubernetes documentation.
nodeSelector Each pod can be configured with a nodeSelector attribute, which will be added to the Deployment created for it, if present. These definitions match the Kubernetes documentation.
memoryKiller.checkInterval Integer 3 Amount of time between memory checks
memoryKiller.maxRss Integer 2000000 Overrides the maximum RSS for a given pod.
memoryKiller.graceTime Integer 900 Overrides the time to wait before a triggered shutdown for a given Pod
memoryKiller.shutdownWait Integer 30 Overrides the amount of time after triggered shutdown for existing jobs to finish for a given Pod
minReplicas Integer 2 Minimum number of replicas
maxReplicas Integer 10 Maximum number of replicas
maxUnavailable Integer 1 Limit of maximum number of Pods to be unavailable
podLabels Map {} Supplemental Pod labels. Will not be used for selectors.
strategy {} Allows one to configure the update strategy utilized by the deployment
extraVolumes String Configures extra volumes for the given pod.
extraVolumeMounts String Configures extra volume mounts for the given pod.
priorityClassName String "" Allow configuring pods' priorityClassName ; this is used to control pod priority in case of eviction
hpa.customMetrics Array [] Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization )
hpa.cpu.targetType String AverageValue Overrides the autoscaling CPU target type, must be either Utilization or AverageValue
hpa.cpu.targetAverageValue String 350m Overrides the autoscaling CPU target value
hpa.cpu.targetAverageUtilization Integer Overrides the autoscaling CPU target utilization
hpa.memory.targetType String Overrides the autoscaling memory target type, must be either Utilization or AverageValue
hpa.memory.targetAverageValue String Overrides the autoscaling memory target value
hpa.memory.targetAverageUtilization Integer Overrides the autoscaling memory target utilization
hpa.targetAverageValue String DEPRECATED Overrides the autoscaling CPU target value
extraEnv Map List of extra environment variables to expose. The chart-wide value is merged into this, with values from the pod taking precedence
extraEnvFrom Map List of extra environment variables from other data sources to expose
terminationGracePeriodSeconds Integer 30 Optional duration in seconds the pod needs to terminate gracefully.

queues

The queues value is a string containing a comma-separated list of queues to be
processed. By default, it is not set, meaning that all queues will be processed.

The string should not contain spaces: merge,post_receive,process_commit will
work, but merge, post_receive, process_commit will not.

Any queue to which jobs are added but are not represented as a part of at least
one pod item will not be processed . For a complete list of all queues, see
these files in the GitLab source:


  1. app/workers/all_queues.yml
  2. ee/app/workers/all_queues.yml

negateQueues

negateQueues is in the same format as queues , but it represents
queues to be ignored rather than processed.

The string should not contain spaces: merge,post_receive,process_commit will
work, but merge, post_receive, process_commit will not.

This is useful if you have a pod processing important queues, and another pod
processing other queues: they can use the same list of queues, with one being in
queues and the other being in negateQueues .


note

negateQueues should not be provided alongside queues , as it will have no effect.

Example pod entry

pods:
  - name: immediate
    concurrency: 10
    minReplicas: 2    # defaults to inherited value
    maxReplicas: 10   # defaults to inherited value
    maxUnavailable: 5 # defaults to inherited value
    queues: merge,post_receive,process_commit
    extraVolumeMounts: |
      - name: example-volume-mount
        mountPath: /etc/example
    extraVolumes: |
      - name: example-volume
        persistentVolumeClaim:
          claimName: example-pvc
    resources:
      limits:
        cpu: 800m
        memory: 2Gi
    hpa:
      cpu:
        targetType: Value
        targetAverageValue: 350m

Configuring the networkpolicy

This section controls the
NetworkPolicy.
This configuration is optional and is used to limit Egress and Ingress of the
Pods to specific endpoints.
Name Type Default Description
enabled Boolean false This setting enables the network policy
ingress.enabled Boolean false When set to true , the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified.
ingress.rules Array [] Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
egress.enabled Boolean false When set to true , the Egress network policy will be activated. This will block all egress connections unless rules are specified.
egress.rules Array [] Rules for the Egress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below

Example Network Policy

The Sidekiq service requires Ingress connections for only the Prometheus
exporter if enabled, and normally requires Egress connections to various
places. This example adds the following network policy:


  • All Ingress requests from the network on TCP 10.0.0.0/8 port 3807 are allowed for metrics exporting
  • All Egress requests to the network on UDP 10.0.0.0/8 port 53 are allowed for DNS
  • All Egress requests to the network on TCP 10.0.0.0/8 port 5432 are allowed for PostgreSQL
  • All Egress requests to the network on TCP 10.0.0.0/8 port 6379 are allowed for Redis
  • Other Egress requests to the local network on 10.0.0.0/8 are restricted
  • Egress requests outside of the 10.0.0.0/8 are allowed

Note that the policy provided is only an example and may not be complete.

Note that the Sidekiq service requires outbound connectivity to the public
internet for images on external object storage.

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
        - ipBlock:
            cidr: 10.0.0.0/8
        ports:
        - port: 3807
  egress:
    enabled: true
    rules:
      - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        ports:
        - port: 53
          protocol: UDP
      - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        ports:
        - port: 5432
          protocol: TCP
      - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        ports:
        - port: 6379
          protocol: TCP
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 10.0.0.0/8

Using the GitLab-Spamcheck chart | GitLab






  • Requirements

  • Configuration

    • Enable Spamcheck
    • Configure GitLab to use Spamcheck
  • Installation command line options

  • Chart configuration examples

    • tolerations
    • annotations
    • resources
    • livenessProbe/readinessProbe

Using the GitLab-Spamcheck chart

The spamcheck sub-chart provides a deployment of Spamcheck, an anti-spam engine developed by GitLab, originally to combat the rising amount of spam in GitLab.com and later made public for use in self-managed GitLab instances.

Requirements

This chart depends on access to the GitLab API.

Configuration

Enable Spamcheck

spamcheck is disabled by default. To enable it on your GitLab instance, set the Helm property global.spamcheck.enabled to true , for example:

helm upgrade --force --install gitlab . \
  --set global.hosts.domain='your.domain.com' \
  --set global.hosts.externalIP=XYZ.XYZ.XYZ.XYZ \
  --set certmanager-issuer.email='me@example.com' \
  --set global.spamcheck.enabled=true

Configure GitLab to use Spamcheck


  1. On the top bar, select Menu > Admin .
  2. On the left sidebar, select Settings > Reporting .
  3. Expand Spam and Anti-bot Protection .
  4. Update the Spam Check settings:

    1. Check the Enable Spam Check via external API endpoint checkbox
    2. For URL of the external Spam Check endpoint use grpc://gitlab-spamcheck.default.svc:8001 , where default is replaced with the Kubernetes namespace where GitLab is deployed.
    3. Leave Spam Check API key blank.
  5. Select Save changes .

Installation command line options

The table below contains all the possible charts configurations that can be supplied to the helm install command using the --set flags.

Parameter Default Description
annotations {} Pod annotations
common.labels {} Supplemental labels that are applied to all objects created by this chart.
deployment.livenessProbe.initialDelaySeconds 20 Delay before liveness probe is initiated
deployment.livenessProbe.periodSeconds 60 How often to perform the liveness probe
deployment.livenessProbe.timeoutSeconds 30 When the liveness probe times out
deployment.livenessProbe.successThreshold 1 Minimum consecutive successes for the liveness probe to be considered successful after having failed
deployment.livenessProbe.failureThreshold 3 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
deployment.readinessProbe.initialDelaySeconds 0 Delay before readiness probe is initiated
deployment.readinessProbe.periodSeconds 10 How often to perform the readiness probe
deployment.readinessProbe.timeoutSeconds 2 When the readiness probe times out
deployment.readinessProbe.successThreshold 1 Minimum consecutive successes for the readiness probe to be considered successful after having failed
deployment.readinessProbe.failureThreshold 3 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
deployment.strategy {} Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used.
hpa.behavior {scaleDown: {stabilizationWindowSeconds: 300 }} Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher)
hpa.customMetrics [] Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization )
hpa.cpu.targetType AverageValue Set the autoscaling CPU target type, must be either Utilization or AverageValue
hpa.cpu.targetAverageValue 100m Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization Set the autoscaling CPU target utilization
hpa.memory.targetType Set the autoscaling memory target type, must be either Utilization or AverageValue
hpa.memory.targetAverageValue Set the autoscaling memory target value
hpa.memory.targetAverageUtilization Set the autoscaling memory target utilization
hpa.targetAverageValue DEPRECATED Set the autoscaling CPU target value
image.repository registry.gitlab.com/gitlab-com/gl-security/engineering-and-research/automation-team/spam/spamcheck Spamcheck image repository
logging.format json Log format
logging.level info Log level
metrics.enabled true Toggle Prometheus metrics exporter
metrics.port 8003 Port number to use for the metrics exporter
metrics.path /metrics Path to use for the metrics exporter
maxReplicas 10 HPA maxReplicas
maxUnavailable 1 HPA maxUnavailable
minReplicas 2 HPA minReplicas
podLabels {} Supplemental Pod labels. Not used for selectors.
resources.requests.cpu 100m Spamcheck minimum CPU
resources.requests.memory 100M Spamcheck minimum memory
securityContext.fsGroup 1000 Group ID under which the pod should be started
securityContext.runAsUser 1000 User ID under which the pod should be started
serviceLabels {} Supplemental service labels
service.externalPort 8001 Spamcheck external port
service.internalPort 8001 Spamcheck internal port
service.type ClusterIP Spamcheck service type
serviceAccount.enabled false Flag for using ServiceAccount
serviceAccount.create false Flag for creating a ServiceAccount
tolerations [] Toleration labels for pod assignment
extraEnvFrom {} List of extra environment variables from other data sources to expose
priorityClassName Priority class assigned to pods.

Chart configuration examples

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations :

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Spamcheck pods. For example:

annotations:
  kubernetes.io/example-annotation: annotation-value

resources

resources allows you to configure the minimum and maximum amount of resources (memory and CPU) a Spamcheck pod can consume.

For example:

resources:
  requests:
    memory: 100M
    cpu: 100m

livenessProbe/readinessProbe

deployment.livenessProbe and deployment.readinessProbe provide a mechanism to help control the termination of Spamcheck Pods in certain scenarios,
such as when a container is in a broken state.

For example:

deployment:
  livenessProbe:
    initialDelaySeconds: 10
    periodSeconds: 20
    timeoutSeconds: 3
    successThreshold: 1
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 2
    successThreshold: 1
    failureThreshold: 3

Refer to the official Kubernetes Documentation
for additional details regarding this configuration.


Toolbox | GitLab






  • Configuration
  • Configuring backups

  • Persistence configuration

    • Backup considerations
    • Restore considerations
  • Toolbox included tools

Toolbox

The Toolbox Pod is used to execute periodic housekeeping tasks within
the GitLab application. These tasks include backups, Sidekiq maintenance,
and Rake tasks.

Configuration

The following configuration settings are the default settings provided by the
Toolbox chart:

gitlab:
  ## doc/charts/gitlab/toolbox
  toolbox:
    enabled: true
    replicas: 1
    backups:
      cron:
        enabled: false
        concurrencyPolicy: Replace
        failedJobsHistoryLimit: 1
        schedule: "0 1 * * *"
        successfulJobsHistoryLimit: 3
        suspend: false
        backoffLimit: 6
        restartPolicy: "OnFailure"
        resources:
          requests:
            cpu: 50m
            memory: 350M
        persistence:
          enabled: false
          accessMode: ReadWriteOnce
          size: 10Gi
      objectStorage:
        backend: s3
        config: {}
    persistence:
      enabled: false
      accessMode: 'ReadWriteOnce'
      size: '10Gi'
    resources:
      requests:
        cpu: '50m'
        memory: '350M'
    securityContext:
      fsGroup: '1000'
      runAsUser: '1000'

Parameter Description Default
annotations Annotations to add to the Toolbox Pods and Jobs {}
common.labels Supplemental labels that are applied to all objects created by this chart. {}
antiAffinityLabels.matchLabels Labels for setting anti-affinity options
backups.cron.activeDeadlineSeconds Backup CronJob active deadline seconds (if null, no active deadline is applied) null
backups.cron.backoffLimit Backup CronJob backoff limit 6
backups.cron.concurrencyPolicy Kubernetes Job concurrency policy Replace
backups.cron.enabled Backup CronJob enabled flag false
backups.cron.extraArgs String of arguments to pass to the backup utility
backups.cron.failedJobsHistoryLimit Number of failed backup jobs list in history 1
backups.cron.persistence.accessMode Backup cron persistence access mode ReadWriteOnce
backups.cron.persistence.enabled Backup cron enable persistence flag false
backups.cron.persistence.matchExpressions Label-expression matches to bind
backups.cron.persistence.matchLabels Label-value matches to bind
backups.cron.persistence.size Backup cron persistence volume size 10Gi
backups.cron.persistence.storageClass StorageClass name for provisioning
backups.cron.persistence.subPath Backup cron persistence volume mount path
backups.cron.persistence.volumeName Existing persistent volume name
backups.cron.resources.requests.cpu Backup cron minimum needed CPU 50m
backups.cron.resources.requests.memory Backup cron minimum needed memory 350M
backups.cron.restartPolicy Backup cron restart policy ( Never or OnFailure ) OnFailure
backups.cron.schedule Cron style schedule string 0 1 * * *
backups.cron.startingDeadlineSeconds Backup cron job starting deadline, in seconds (if null, no starting deadline is applied) null
backups.cron.successfulJobsHistoryLimit Number of successful backup jobs list in history 3
backups.cron.suspend Backup cron job is suspended false
backups.objectStorage.backend Object storage provider to use ( s3 or gcs ) s3
backups.objectStorage.config.gcpProject GCP Project to use when backend is gcs ""
backups.objectStorage.config.key Key containing credentials in secret ""
backups.objectStorage.config.secret Object storage credentials secret ""
deployment.strategy Allows one to configure the update strategy utilized by the deployment { type : Recreate }
enabled Toolbox enablement flag true
extra YAML block for extra gitlab.yml configuration {}
image.pullPolicy Toolbox image pull policy IfNotPresent
image.pullSecrets Toolbox image pull secrets
image.repository Toolbox image repository registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ee
image.tag Toolbox image tag master
init.image.repository Toolbox init image repository
init.image.tag Toolbox init image tag
init.resources Toolbox init container resource requirements { requests : { cpu : 50m }}
nodeSelector Toolbox and backup job node selection
persistence.accessMode Toolbox persistence access mode ReadWriteOnce
persistence.enabled Toolbox enable persistence flag false
persistence.matchExpressions Label-expression matches to bind
persistence.matchLabels Label-value matches to bind
persistence.size Toolbox persistence volume size 10Gi
persistence.storageClass StorageClass name for provisioning
persistence.subPath Toolbox persistence volume mount path
persistence.volumeName Existing PersistentVolume name
podLabels Labels for running Toolbox Pods {}
priorityClassName Priority class assigned to pods.
replicas Number of Toolbox Pods to run 1
resources.requests Toolbox minimum requested resources { cpu : 50m , memory : 350M }
securityContext.fsGroup Group ID under which the pod should be started 1000
securityContext.runAsUser User ID under which the pod should be started 1000
serviceAccount.annotations Annotations for ServiceAccount {}
serviceAccount.enabled Flag for using ServiceAccount false
serviceAccount.create Flag for creating a ServiceAccount false
serviceAccount.name Name of ServiceAccount to use
tolerations Tolerations to add to the Toolbox
extraEnvFrom List of extra environment variables from other data sources to expose

Configuring backups

Information concerning configuring backups can be found in the
backup and restore documentation. Additional
information about the technical implementation of how the backups are
performed can be found in the
backup and restore architecture documentation.

Persistence configuration

The persistent stores for backups and restorations are configured separately.
Please review the following considerations when configuring GitLab for
backup and restore operations.

Backups use the backups.cron.persistence.* properties and restorations
use the persistence.* properties. Further descriptions concerning the
configuration of a persistence store will use just the final property key
(e.g. .enabled or .size ) and the appropriate prefix will need to be
added.

The persistence stores are disabled by default, thus .enabled needs to
be set to true for a backup or restoration of any appreciable size.
In addition, either .storageClass needs to be specified for a PersistentVolume
to be created by Kubernetes or a PersistentVolume needs to be manually created.
If .storageClass is specified as ‘-‘, then the PersistentVolume will be
created using the default StorageClass
as specified in the Kubernetes cluster.
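
As a sketch, enabling both stores might look like the following; the sizes and the storage class name are placeholders and should be chosen according to the considerations below:

gitlab:
  toolbox:
    persistence:              # used when restoring backups
      enabled: true
      size: 50Gi
      storageClass: standard  # placeholder StorageClass
    backups:
      cron:
        persistence:          # used by the scheduled backup Job
          enabled: true
          size: 50Gi
          storageClass: standard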

If the PersistentVolume is created manually, then the volume can be specified
using the .volumeName property or by using the selector .matchLabels /
.matchExpressions properties.

In most cases the default value of .accessMode will provide adequate
controls for only Toolbox accessing the PersistentVolumes. Please consult
the documentation for the CSI driver installed in the Kubernetes cluster to
ensure that the setting is correct.

Backup considerations

A backup operation needs an amount of disk space to hold the individual
components that are being backed up before they are written to the backup
object store. The amount of disk space depends on the following factors:


  • Number of projects and the amount of data stored under each project
  • Size of the PostgreSQL database (issues, MRs, etc.)
  • Size of each object store backend

Once the rough size has been determined, the backups.cron.persistence.size
property can be set so that backups can commence.

Restore considerations

During the restoration of a backup, the backup needs to be extracted to disk
before the files are replaced on the running instance. The size of this
restoration disk space is controlled by the persistence.size property. Be
mindful that as the size of the GitLab installation grows the size of the
restoration disk space also needs to grow accordingly. In most cases the
size of the restoration disk space should be the same size as the backup
disk space.
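
For example, once a rough backup size has been estimated, both stores could be
sized accordingly (the 50Gi values below are purely illustrative):

gitlab:
  toolbox:
    backups:
      cron:
        persistence:
          size: 50Gi   # space for assembling the backup
    persistence:
      size: 50Gi       # space for extracting a backup during restore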

Toolbox included tools

The Toolbox container contains useful GitLab tools such as the Rails console and
Rake tasks. These allow you to check the status of database
migrations, run Rake tasks for administrative work, and interact with
the Rails console:

# locate the Toolbox pod
kubectl get pods -lapp=toolbox

# Launch a shell inside the pod
kubectl exec -it <Toolbox pod name> -- bash

# open Rails console
gitlab-rails console -e production

# execute a Rake task
gitlab-rake gitlab:env:info
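
For example, to check the status of database migrations from inside the Toolbox
pod (using the standard GitLab Rake task):

# list database migrations and whether they have been applied
gitlab-rake db:migrate:status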

Using the GitLab Webservice chart | GitLab






  • Requirements
  • Configuration
  • Installation command line options

  • Chart configuration examples

    • extraEnv
    • extraEnvFrom
    • image.pullSecrets
    • tolerations
    • annotations
    • strategy

    • TLS

      • gitlab-workhorse
      • webservice
  • Using the Community Edition of this chart
  • Global settings

  • Deployments settings

    • Deployments Ingress

  • Ingress settings

    • annotations
    • proxyBodySize

  • Resources

    • Memory requests/limits

  • External Services

    • Redis
    • PostgreSQL
    • Gitaly
    • MinIO
    • Registry

  • Chart settings

    • Metrics
    • GitLab Shell
    • WebServer options

  • Configuring the networkpolicy

    • Example Network Policy
    • LoadBalancer Service

Using the GitLab Webservice chart

The webservice sub-chart provides the GitLab Rails webserver with two Webservice workers
per pod, which is the minimum necessary for a single pod to be able to serve any web request in GitLab.

The pods of this chart make use of two containers: gitlab-workhorse and webservice .
GitLab Workhorse listens on
port 8181 , and should always be the destination for inbound traffic to the pod.
The webservice houses the GitLab Rails codebase,
listens on 8080 , and is accessible for metrics collection purposes.
webservice should never receive normal traffic directly.

Requirements

This chart depends on Redis, PostgreSQL, Gitaly, and Registry services, either as
part of the complete GitLab chart or provided as external services reachable from
the Kubernetes cluster this chart is deployed onto.

Configuration

The webservice chart is configured as follows: Global settings,
Deployments settings, Ingress settings, External services, and
Chart settings.

Installation command line options

The table below contains all the possible chart configurations that can be supplied
to the helm install command using the --set flags.

Parameter Default Description
annotations Pod annotations
podLabels Supplemental Pod labels. Will not be used for selectors.
common.labels Supplemental labels that are applied to all objects created by this chart.
deployment.terminationGracePeriodSeconds 30 Seconds that Kubernetes will wait for a pod to exit, note this must be longer than shutdown.blackoutSeconds
deployment.livenessProbe.initialDelaySeconds 20 Delay before liveness probe is initiated
deployment.livenessProbe.periodSeconds 60 How often to perform the liveness probe
deployment.livenessProbe.timeoutSeconds 30 When the liveness probe times out
deployment.livenessProbe.successThreshold 1 Minimum consecutive successes for the liveness probe to be considered successful after having failed
deployment.livenessProbe.failureThreshold 3 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
deployment.readinessProbe.initialDelaySeconds 0 Delay before readiness probe is initiated
deployment.readinessProbe.periodSeconds 10 How often to perform the readiness probe
deployment.readinessProbe.timeoutSeconds 2 When the readiness probe times out
deployment.readinessProbe.successThreshold 1 Minimum consecutive successes for the readiness probe to be considered successful after having failed
deployment.readinessProbe.failureThreshold 3 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
deployment.strategy {} Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used.
enabled true Webservice enabled flag
extraContainers List of extra containers to include
extraInitContainers List of extra init containers to include
extras.google_analytics_id nil Google Analytics ID for frontend
extraVolumeMounts List of extra volumes mounts to do
extraVolumes List of extra volumes to create
extraEnv List of extra environment variables to expose
extraEnvFrom List of extra environment variables from other data sources to expose
gitlab.webservice.workhorse.image registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ee Workhorse image repository
gitlab.webservice.workhorse.tag Workhorse image tag
hpa.behavior {scaleDown: {stabilizationWindowSeconds: 300 }} Behavior contains the specifications for up- and downscaling behavior (requires autoscaling/v2beta2 or higher)
hpa.customMetrics [] Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in targetAverageUtilization )
hpa.cpu.targetType AverageValue Set the autoscaling CPU target type, must be either Utilization or AverageValue
hpa.cpu.targetAverageValue 1 Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization Set the autoscaling CPU target utilization
hpa.memory.targetType Set the autoscaling memory target type, must be either Utilization or AverageValue
hpa.memory.targetAverageValue Set the autoscaling memory target value
hpa.memory.targetAverageUtilization Set the autoscaling memory target utilization
hpa.targetAverageValue DEPRECATED Set the autoscaling CPU target value
sshHostKeys.mount false Whether to mount the GitLab Shell secret containing the public SSH keys.
sshHostKeys.mountName ssh-host-keys Name of the mounted volume.
sshHostKeys.types [dsa,rsa,ecdsa,ed25519] List of SSH key types to mount.
image.pullPolicy Always Webservice image pull policy
image.pullSecrets Secrets for the image repository
image.repository registry.gitlab.com/gitlab-org/build/cng/gitlab-webservice-ee Webservice image repository
image.tag Webservice image tag
init.image.repository initContainer image
init.image.tag initContainer image tag
metrics.enabled true If a metrics endpoint should be made available for scraping
metrics.port 8083 Metrics endpoint port
metrics.path /metrics Metrics endpoint path
metrics.serviceMonitor.enabled false If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the prometheus.io scrape annotations
metrics.serviceMonitor.additionalLabels {} Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig {} Additional endpoint configuration for the ServiceMonitor
metrics.annotations DEPRECATED Set explicit metrics annotations. Replaced by template content.
metrics.tls.enabled false TLS enabled for the metrics/web_exporter endpoint
metrics.tls.secretName {Release.Name}-webservice-metrics-tls Secret for the metrics/web_exporter endpoint TLS cert and key
minio.bucket git-lfs Name of storage bucket, when using MinIO
minio.port 9000 Port for MinIO service
minio.serviceName minio-svc Name of MinIO service
monitoring.ipWhitelist [0.0.0.0/0] List of IPs to whitelist for the monitoring endpoints
monitoring.exporter.enabled false Enable webserver to expose Prometheus metrics, this is overridden by metrics.enabled if the metrics port is set to the monitoring exporter port
monitoring.exporter.port 8083 Port number to use for the metrics exporter
psql.password.key psql-password Key to psql password in psql secret
psql.password.secret gitlab-postgres psql secret name
psql.port Set PostgreSQL server port. Takes precedence over global.psql.port
puma.disableWorkerKiller true Disables Puma worker memory killer
puma.workerMaxMemory The maximum memory (in megabytes) for the Puma worker killer
puma.threads.min 4 The minimum amount of Puma threads
puma.threads.max 4 The maximum amount of Puma threads
rack_attack.git_basic_auth {} See GitLab documentation for details
redis.serviceName redis Redis service name
registry.api.port 5000 Registry port
registry.api.protocol http Registry protocol
registry.api.serviceName registry Registry service name
registry.enabled true Add/Remove registry link in all projects menu
registry.tokenIssuer gitlab-issuer Registry token issuer
replicaCount 1 Webservice number of replicas
resources.requests.cpu 300m Webservice minimum CPU
resources.requests.memory 1.5G Webservice minimum memory
service.externalPort 8080 Webservice exposed port
securityContext.fsGroup 1000 Group ID under which the pod should be started
securityContext.runAsUser 1000 User ID under which the pod should be started
serviceLabels {} Supplemental service labels
service.internalPort 8080 Webservice internal port
service.type ClusterIP Webservice service type
service.workhorseExternalPort 8181 Workhorse exposed port
service.workhorseInternalPort 8181 Workhorse internal port
service.loadBalancerIP IP address to assign to LoadBalancer (if supported by cloud provider)
service.loadBalancerSourceRanges List of IP CIDRs allowed access to LoadBalancer (if supported) Required for service.type = LoadBalancer
shell.authToken.key secret Key to shell token in shell secret
shell.authToken.secret {Release.Name}-gitlab-shell-secret Shell token secret
shell.port nil Port number to use in SSH URLs generated by UI
shutdown.blackoutSeconds 10 Number of seconds to keep Webservice running after receiving shutdown, note this must be shorter than deployment.terminationGracePeriodSeconds
tls.enabled false Webservice TLS enabled
tls.secretName {Release.Name}-webservice-tls Webservice TLS secrets. secretName must point to a Kubernetes TLS secret.
tolerations [] Toleration labels for pod assignment
trusted_proxies [] See GitLab documentation for details
workhorse.logFormat json Logging format. Valid formats: json , structured , text
workerProcesses 2 Webservice number of workers
workhorse.keywatcher true Subscribe workhorse to Redis. This is required by any deployment servicing request to /api/* , but can be safely disabled for other deployments
workhorse.shutdownTimeout global.webservice.workerTimeout + 1 (seconds) Time to wait for all Web requests to clear from Workhorse. Examples: 1min , 65s .
workhorse.trustedCIDRsForPropagation A list of CIDR blocks that can be trusted for propagating a correlation ID. The -propagateCorrelationID option must also be used in workhorse.extraArgs for this to work. See the Workhorse documentation for more details.
workhorse.trustedCIDRsForXForwardedFor A list of CIDR blocks that can be used to resolve the actual client IP via the X-Forwarded-For HTTP header. This is used with workhorse.trustedCIDRsForPropagation . See the Workhorse documentation for more details.
workhorse.livenessProbe.initialDelaySeconds 20 Delay before liveness probe is initiated
workhorse.livenessProbe.periodSeconds 60 How often to perform the liveness probe
workhorse.livenessProbe.timeoutSeconds 30 When the liveness probe times out
workhorse.livenessProbe.successThreshold 1 Minimum consecutive successes for the liveness probe to be considered successful after having failed
workhorse.livenessProbe.failureThreshold 3 Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
workhorse.monitoring.exporter.enabled false Enable workhorse to expose Prometheus metrics, this is overridden by workhorse.metrics.enabled
workhorse.monitoring.exporter.port 9229 Port number to use for workhorse Prometheus metrics
workhorse.monitoring.exporter.tls.enabled false When set to true , enables TLS on metrics endpoint. It requires TLS to be enabled for Workhorse.
workhorse.metrics.enabled true If a workhorse metrics endpoint should be made available for scraping
workhorse.metrics.port 8083 Workhorse metrics endpoint port
workhorse.metrics.path /metrics Workhorse metrics endpoint path
workhorse.metrics.serviceMonitor.enabled false If a ServiceMonitor should be created to enable Prometheus Operator to manage the Workhorse metrics scraping
workhorse.metrics.serviceMonitor.additionalLabels {} Additional labels to add to the Workhorse ServiceMonitor
workhorse.metrics.serviceMonitor.endpointConfig {} Additional endpoint configuration for the Workhorse ServiceMonitor
workhorse.readinessProbe.initialDelaySeconds 0 Delay before readiness probe is initiated
workhorse.readinessProbe.periodSeconds 10 How often to perform the readiness probe
workhorse.readinessProbe.timeoutSeconds 2 When the readiness probe times out
workhorse.readinessProbe.successThreshold 1 Minimum consecutive successes for the readiness probe to be considered successful after having failed
workhorse.readinessProbe.failureThreshold 3 Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
workhorse.imageScaler.maxProcs 2 The maximum number of image scaling processes that may run concurrently
workhorse.imageScaler.maxFileSizeBytes 250000 The maximum file size in bytes for images to be processed by the scaler
workhorse.tls.verify true When set to true forces NGINX Ingress to verify the TLS certificate of Workhorse. For custom CA you need to set workhorse.tls.caSecretName as well. Must be set to false for self-signed certificates.
workhorse.tls.secretName {Release.Name}-workhorse-tls The name of the TLS Secret that contains the TLS key and certificate pair. This is required when Workhorse TLS is enabled.
workhorse.tls.caSecretName The name of the Secret that contains the CA certificate. This is not a TLS Secret, and must have only ca.crt key. This is used for TLS verification by NGINX.
webServer puma Selects web server (Webservice/Puma) that would be used for request handling
priorityClassName "" Allow configuring pods priorityClassName , this is used to control pod priority in case of eviction

Chart configuration examples

extraEnv

extraEnv allows you to expose additional environment variables in all containers in the pods.

Below is an example use of extraEnv :

extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value

When the container is started, you can confirm that the environment variables are exposed:

env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value

extraEnvFrom

extraEnvFrom allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of extraEnvFrom :

extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean

image.pullSecrets

pullSecrets allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be
found in the Kubernetes documentation.

Below is an example use of pullSecrets :

image:
  repository: my.webservice.repository
  pullPolicy: Always
  pullSecrets:
  - name: my-secret-name
  - name: my-secondary-secret-name

tolerations

tolerations allow you to schedule pods on tainted worker nodes.

Below is an example use of tolerations :

tolerations:
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node_label"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"

annotations

annotations allows you to add annotations to the Webservice pods. For example:

annotations:
  kubernetes.io/example-annotation: annotation-value

strategy

deployment.strategy allows you to change the deployment update strategy. It defines how the pods will be recreated when deployment is updated. When not provided, the cluster default is used.
For example, if you don’t want to create extra pods when the rolling update starts and change max unavailable pods to 50%:

deployment:
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 50%

You can also change the type of update strategy to Recreate , but be careful as it will kill all pods before scheduling new ones, and the web UI will be unavailable until the new pods are started. In this case, you don’t need to define rollingUpdate , only type :

deployment:
  strategy:
    type: Recreate

For more details, see the Kubernetes documentation.

TLS

A Webservice pod runs two containers:


  • gitlab-workhorse
  • webservice


gitlab-workhorse

Workhorse supports TLS for both web and metrics endpoints. This will secure the
communication between Workhorse and other components, in particular nginx-ingress ,
gitlab-shell , and gitaly . The TLS certificate should include the Workhorse
Service host name (e.g. RELEASE-webservice-default.default.svc ) in the Common
Name (CN) or Subject Alternate Name (SAN).

Note that multiple deployments of Webservice can exist,
so you need to prepare the TLS certificate for different service names. This
can be achieved with either multiple SANs or a wildcard certificate.

Once the TLS certificate is generated, create a Kubernetes TLS Secret for it. You also need to create
another Secret that only contains the CA certificate of the TLS certificate
with ca.crt key.
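
For example, the two Secrets could be created as follows; the Secret names and
file paths here are illustrative and match the configuration sketch below:

# TLS certificate and key for Workhorse
kubectl create secret tls gitlab-workhorse-tls --cert=path/to/workhorse.crt --key=path/to/workhorse.key
# CA certificate only, stored under the ca.crt key
kubectl create secret generic gitlab-workhorse-ca --from-file=ca.crt=path/to/ca.crt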

TLS can be enabled for the gitlab-workhorse container by setting global.workhorse.tls.enabled
to true . You can pass custom Secret names to gitlab.webservice.workhorse.tls.secretName and
global.certificates.customCAs accordingly.

When gitlab.webservice.workhorse.tls.verify is true (it is by default), you
also need to pass the CA certificate Secret name to gitlab.webservice.workhorse.tls.caSecretName .
This is necessary for self-signed certificates and custom CA. This Secret is used
by NGINX to verify the TLS certificate of Workhorse.

global:
  workhorse:
    tls:
      enabled: true
  certificates:
    customCAs:
    - secret: gitlab-workhorse-ca
gitlab:
  webservice:
    workhorse:
      tls:
        verify: true
        # secretName: gitlab-workhorse-tls
        caSecretName: gitlab-workhorse-ca
      monitoring:
        exporter:
          enabled: true
          tls:
            enabled: true

TLS can be enabled on the metrics endpoint of the gitlab-workhorse container by setting
gitlab.webservice.workhorse.monitoring.exporter.tls.enabled to true . Note that TLS on the
metrics endpoint is only available when TLS is enabled for Workhorse. The metrics
listener uses the same TLS certificate that is specified by gitlab.webservice.workhorse.tls.secretName .


webservice

The primary use case for enabling TLS is to provide encryption via HTTPS
for scraping Prometheus metrics.
For this reason, the TLS certificate should include the Webservice
hostname (ex: RELEASE-webservice-default.default.svc ) in the Common
Name (CN) or Subject Alternate Name (SAN).


note

The Prometheus server bundled with the chart does not yet
support scraping of HTTPS endpoints.

TLS can be enabled on the webservice container with the setting gitlab.webservice.tls.enabled :

gitlab:
  webservice:
    tls:
      enabled: true
      # secretName: gitlab-webservice-tls

secretName must point to a Kubernetes TLS secret.
For example, to create a TLS secret with a local certificate and key:

kubectl create secret tls <secret name> --cert=path/to/puma.crt --key=path/to/puma.key

Using the Community Edition of this chart

By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you
can use the Community Edition instead. Learn more about the
differences between the two.

In order to use the Community Edition, set image.repository to
registry.gitlab.com/gitlab-org/build/cng/gitlab-webservice-ce and workhorse.image
to registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce .
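
A minimal values sketch for this, assuming the umbrella chart layout where these
keys are set under gitlab.webservice :

gitlab:
  webservice:
    image:
      repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-webservice-ce
    workhorse:
      image: registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce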

Global settings

We share some common global settings among our charts. See the Globals Documentation
for common configuration options, such as GitLab and Registry hostnames.

Deployments settings

This chart has the ability to create multiple Deployment objects and their related
resources. This feature allows requests to the GitLab application to be distributed between multiple sets of Pods using path based routing.

The keys of this map ( default in the example below) are the “name” for each deployment. default
will have a Deployment, Service, HorizontalPodAutoscaler, PodDisruptionBudget, and
optional Ingress created, named RELEASE-webservice-default .

Any property not provided will inherit from the gitlab-webservice chart defaults.

deployments:
  default:
    ingress:
      path: # Does not inherit or default. Leave blank to disable Ingress.
      pathType: Prefix
      provider: nginx
      annotations:
        # inherits `ingress.annotations`
      proxyConnectTimeout: # inherits `ingress.proxyConnectTimeout`
      proxyReadTimeout: # inherits `ingress.proxyReadTimeout`
      proxyBodySize: # inherits `ingress.proxyBodySize`
    deployment:
      annotations: # map
      labels: # map
      # inherits `deployment`
    pod:
      labels: # additional labels to .podLabels
      annotations: # map
      # inherit from .Values.annotations
    service:
      labels: # additional labels to .serviceLabels
      annotations: # additional annotations to .service.annotations
      # inherits `service.annotations`
    hpa:
      minReplicas: # defaults to .minReplicas
      maxReplicas: # defaults to .maxReplicas
      metrics: # optional replacement of HPA metrics definition
      # inherits `hpa`
    pdb:
      maxUnavailable: # inherits `maxUnavailable`
    resources: # `resources` for `webservice` container
      # inherits `resources`
    workhorse: # map
      # inherits `workhorse`
    extraEnv: #
      # inherits `extraEnv`
    extraEnvFrom: #
      # inherits `extraEnvFrom`
    puma: # map
      # inherits `puma`
    workerProcesses: # inherits `workerProcesses`
    shutdown:
      # inherits `shutdown`
    nodeSelector: # map
      # inherits `nodeSelector`
    tolerations: # array
      # inherits `tolerations`

Deployments Ingress

Each deployments entry inherits from the chart-wide Ingress settings. Any value provided here overrides those. Outside of path , all settings are identical to the chart-wide ones.

webservice:
  deployments:
    default:
      ingress:
        path: /
    api:
      ingress:
        path: /api

The path property is directly populated into the Ingress’s path property, and allows one to control URI paths which are directed to each service. In the example above,
default acts as the catch-all path, and api receives all traffic under /api .

You can disable a given Deployment from having an associated Ingress resource created by setting path to empty. See below, where internal-api will never receive external traffic.

webservice:
  deployments:
    default:
      ingress:
        path: /
    api:
      ingress:
        path: /api
    internal-api:
      ingress:
        path:

Ingress settings

Name Type Default Description
ingress.apiVersion String Value to use in the apiVersion field.
ingress.annotations Map See below These annotations will be used for every Ingress. For example: ingress.annotations."nginx\.ingress\.kubernetes\.io/enable-access-log"=true .
ingress.configureCertmanager Boolean Toggles Ingress annotation cert-manager.io/issuer . For more information see the TLS requirement for GitLab Pages.
ingress.enabled Boolean false Setting that controls whether to create Ingress objects for services that support them. When false , the global.ingress.enabled setting value is used.
ingress.proxyBodySize String 512m See below.
ingress.tls.enabled Boolean true When set to false , you disable TLS for GitLab Webservice. This is mainly useful for cases in which you cannot use TLS termination at Ingress-level, like when you have a TLS-terminating proxy before the Ingress Controller.
ingress.tls.secretName String (empty) The name of the Kubernetes TLS Secret that contains a valid certificate and key for the GitLab URL. When not set, the global.ingress.tls.secretName value is used instead.
ingress.tls.smartcardSecretName String (empty) The name of the Kubernetes TLS Secret that contains a valid certificate and key for the GitLab smartcard URL if enabled. When not set, the global.ingress.tls.secretName value is used instead.

annotations

annotations is used to set annotations on the Webservice Ingress.

We set one annotation by default: nginx.ingress.kubernetes.io/service-upstream: "true" .
This helps balance traffic to the Webservice pods more evenly by telling NGINX to directly
contact the Service itself as the upstream. For more information, see the
NGINX docs.

To override this, set:

gitlab:
  webservice:
    ingress:
      annotations:
        nginx.ingress.kubernetes.io/service-upstream: "false"

proxyBodySize

proxyBodySize is used to set the NGINX proxy maximum body size. This is commonly
required to allow a larger Docker image than the default.
It is equivalent to the nginx['client_max_body_size'] configuration in an
Omnibus installation.
As an alternative, you can set the body size with either of the following two
parameters (see the sketch after this list):


  • gitlab.webservice.ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size"
  • global.ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size"
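
A brief sketch of raising the limit through the chart-wide annotation; the 1g
value is illustrative only:

global:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "1g"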

Resources

Memory requests/limits

Each pod spawns a number of workers equal to workerProcesses , each of which uses
a baseline amount of memory. We recommend:


  • A minimum of 1.25GB per worker ( requests.memory )
  • A maximum of 1.5GB per worker, plus 1GB for the primary ( limits.memory )

Note that required resources are dependent on the workload generated by users
and may change in the future based on changes or upgrades in the GitLab application.

Default:

workerProcesses: 2
resources:
  requests:
    memory: 2.5G # = 2 * 1.25G
  # limits:
  #   memory: 4G # = (2 * 1.5G) + 950M

With 4 workers configured:

workerProcesses: 4
resources:
  requests:
    memory: 5G # = 4 * 1.25G
  # limits:
  #   memory: 7G # = (4 * 1.5G) + 950M

External Services

Redis

The Redis documentation has been consolidated in the globals
page. Please consult this page for the latest Redis configuration options.

PostgreSQL

The PostgreSQL documentation has been consolidated in the globals
page. Please consult this page for the latest PostgreSQL configuration options.

Gitaly

Gitaly is configured by global settings. Please see the
Gitaly configuration documentation.

MinIO

minio:
  serviceName: 'minio-svc'
  port: 9000

Name Type Default Description
port Integer 9000 Port number to reach the MinIO Service on.
serviceName String minio-svc Name of the Service that is exposed by the MinIO pod.

Registry

registry:
  host: registry.example.com
  port: 443
  api:
    protocol: http
    host: registry.example.com
    serviceName: registry
    port: 5000
  tokenIssuer: gitlab-issuer
  certificate:
    secret: gitlab-registry
    key: registry-auth.key

Name Type Default Description
api.host String The hostname of the Registry server to use. This can be omitted in lieu of api.serviceName .
api.port Integer 5000 The port on which to connect to the Registry API.
api.protocol String The protocol Webservice should use to reach the Registry API.
api.serviceName String registry The name of the service which is operating the Registry server. If this is present, and api.host is not, the chart will template the hostname of the service (and current .Release.Name ) in place of the api.host value. This is convenient when using Registry as a part of the overall GitLab chart.
certificate.key String The name of the key in the Secret which houses the certificate bundle that will be provided to the registry container as auth.token.rootcertbundle .
certificate.secret String The name of the Kubernetes Secret that houses the certificate bundle to be used to verify the tokens created by the GitLab instance(s).
host String The external hostname to use for providing Docker commands to users in the GitLab UI. Falls back to the value set in the registry.hostname template, which determines the registry hostname based on the values set in global.hosts . See the Globals Documentation for more information.
port Integer The external port used in the hostname. Using port 80 or 443 will result in the URLs being formed with http / https . Other ports will all use http and append the port to the end of hostname, for example http://registry.example.com:8443 .
tokenIssuer String gitlab-issuer The name of the auth token issuer. This must match the name used in the Registry’s configuration, as it is incorporated into the token when it is sent. The default of gitlab-issuer is the same default we use in the Registry chart.

Chart settings

The following values are used to configure the Webservice Pods.

Name Type Default Description
replicaCount Integer 1 The number of Webservice instances to create in the deployment.
workerProcesses Integer 2 The number of Webservice workers to run per pod. You must have at least 2 workers available in your cluster in order for GitLab to function properly. Note that increasing the workerProcesses will increase the memory required by approximately 400MB per worker, so you should update the pod resources accordingly.

Metrics

Metrics can be enabled with the metrics.enabled value and use the GitLab
monitoring exporter to expose a metrics port. Pods are either given Prometheus
annotations or if metrics.serviceMonitor.enabled is true a Prometheus
Operator ServiceMonitor is created. Metrics can alternatively be scraped from
the /-/metrics endpoint, but this requires GitLab Prometheus metrics
to be enabled in the Admin area. The GitLab Workhorse metrics can also be
exposed via workhorse.metrics.enabled but these can’t be collected using the
Prometheus annotations so either require
workhorse.metrics.serviceMonitor.enabled to be true or external Prometheus
configuration.
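
For example, a values sketch that enables ServiceMonitors for both the webservice
and Workhorse metrics endpoints, assuming the Prometheus Operator is installed and
the keys are set under gitlab.webservice in the umbrella chart:

gitlab:
  webservice:
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
    workhorse:
      metrics:
        enabled: true
        serviceMonitor:
          enabled: true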

GitLab Shell

GitLab Shell uses an Auth Token in its communication with Webservice. Share the token
with GitLab Shell and Webservice using a shared Secret.

shell:
  authToken:
    secret: gitlab-shell-secret
    key: secret
  port:

Name Type Default Description
authToken.key String Defines the name of the key in the secret (below) that contains the authToken.
authToken.secret String Defines the name of the Kubernetes Secret to pull from.
port Integer 22 The port number to use in the generation of SSH URLs within the GitLab UI. Controlled by global.shell.port .

WebServer options

The current version of the chart supports the Puma web server.

Puma unique options (see the sketch after this table):

Name Type Default Description
puma.workerMaxMemory Integer The maximum memory (in megabytes) for the Puma worker killer
puma.threads.min Integer 4 The minimum amount of Puma threads
puma.threads.max Integer 4 The maximum amount of Puma threads
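
As a brief sketch, these options could be tuned as follows; the memory value is
illustrative only:

puma:
  disableWorkerKiller: false # re-enable the Puma worker memory killer
  workerMaxMemory: 1200      # maximum per-worker memory, in megabytes
  threads:
    min: 4
    max: 4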

Configuring the networkpolicy

This section controls the
NetworkPolicy.
This configuration is optional and is used to limit Egress and Ingress of the
Pods to specific endpoints.

Name Type Default Description
enabled Boolean false This setting enables the NetworkPolicy
ingress.enabled Boolean false When set to true , the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified.
ingress.rules Array [] Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
egress.enabled Boolean false When set to true , the Egress network policy will be activated. This will block all egress connections unless rules are specified.
egress.rules Array [] Rules for the Egress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below

Example Network Policy

The webservice service requires Ingress connections for only the Prometheus
exporter if enabled and traffic coming from the NGINX Ingress, and normally
requires Egress connections to various places. This example adds the following
network policy:


  • All Ingress requests from the network on TCP 10.0.0.0/8 port 8080 are allowed for metrics exporting and NGINX Ingress
  • All Egress requests to the network on UDP 10.0.0.0/8 port 53 are allowed for DNS
  • All Egress requests to the network on TCP 10.0.0.0/8 port 5432 are allowed for PostgreSQL
  • All Egress requests to the network on TCP 10.0.0.0/8 port 6379 are allowed for Redis
  • All Egress requests to the network on TCP 10.0.0.0/8 port 8075 are allowed for Gitaly
  • Other Egress requests to the local network on 10.0.0.0/8 are restricted
  • Egress requests outside of the 10.0.0.0/8 are allowed

Note that the policy provided here is only an example and may not be complete.

Note that the Webservice requires outbound connectivity to the public internet
for images on external object storage.

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 8080
  egress:
    enabled: true
    rules:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53
            protocol: UDP
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 5432
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 6379
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 8075
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8

LoadBalancer Service

If the service.type is set to LoadBalancer , you can optionally specify service.loadBalancerIP to create
the LoadBalancer with a user-specified IP (if your cloud provider supports it).

When the service.type is set to LoadBalancer you must also set service.loadBalancerSourceRanges to restrict
the CIDR ranges that can access the LoadBalancer (if your cloud provider supports it).
This is currently required due to an issue where metric ports are exposed.

Additional information about the LoadBalancer service type can be found in
the Kubernetes documentation.

service:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  loadBalancerSourceRanges:
  - 10.0.0.0/8