Configure the GitLab chart with FIPS-compliant images | GitLab





  • Sample values

Configure the GitLab chart with FIPS-compliant images

GitLab offers FIPS-compliant
versions of its images, allowing you to run GitLab on FIPS-enabled clusters.

Sample values

We provide an example for GitLab chart values in
examples/fips/values.yaml
which can help you to build a FIPS-compatible GitLab deployment.

Note the comment under the nginx-ingress.controller key that provides the
relevant configuration to use a FIPS-compatible NGINX Ingress Controller image. This image is
maintained in our NGINX Ingress Controller fork.
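
For reference, a minimal sketch of consuming that example file at deploy time. This assumes a local checkout of the chart sources (for the examples/fips/values.yaml path) and a hypothetical my-overrides.yaml holding your site-specific settings:

helm upgrade --install gitlab gitlab/gitlab \
  --namespace gitlab \
  -f examples/fips/values.yaml \
  -f my-overrides.yaml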

Configure the GitLab chart with GitLab Geo | GitLab





  • Requirements
  • Overview
  • Set up Omnibus database nodes
  • Set up Kubernetes clusters
  • Collect information
  • Configure Primary database
  • Deploy chart as Geo Primary site
  • Set the Geo Primary site
  • Configure Secondary database
  • Copy secrets from the primary site to the secondary site
  • Deploy chart as Geo Secondary site
  • Add Secondary Geo site via Primary
  • Confirm Operational Status

Configure the GitLab chart with GitLab Geo

GitLab Geo provides the ability to have geographically distributed application
deployments.

While external database services can be used, these documents focus on
the use of Omnibus GitLab for PostgreSQL to provide the most
platform-agnostic guide, and make use of the automation included in gitlab-ctl.

In this guide, both clusters have the same external URL. See Set up a Unified URL for Geo sites.


note
See the defined terms used
to describe all aspects of Geo (mainly the distinction between site and node).

Requirements

To use GitLab Geo with the GitLab Helm chart, the following requirements must be met:


  • The use of external PostgreSQL services, as the
    PostgreSQL included with the chart is not exposed to outside networks, and doesn’t
    have the WAL support required for replication.
  • The supplied database must:

    • Support replication.
    • The primary database must be reachable by the primary site,
      and all secondary database nodes (for replication).
    • Secondary databases only need to be reachable by the secondary sites.
    • Support SSL between primary and secondary database nodes.
  • The primary site must be reachable via HTTP(S) by all secondary sites.
    Secondary sites must be accessible to the primary site via HTTP(S).

Overview

This guide uses 2 Omnibus GitLab database nodes,
configuring only the PostgreSQL services needed, and 2 deployments of the
GitLab Helm chart. It is intended to be the minimal required configuration.
This documentation does not include SSL from application to database, support
for other database providers, or
promoting a secondary site to primary.

The outline below should be followed in order:


  1. Set up Omnibus database nodes
  2. Set up Kubernetes clusters
  3. Collect information
  4. Configure Primary database
  5. Deploy chart as Geo Primary site
  6. Set the Geo Primary site
  7. Configure Secondary database
  8. Copy secrets from primary site to secondary site
  9. Deploy chart as Geo Secondary site
  10. Add Secondary Geo site via Primary
  11. Confirm Operational Status

Set up Omnibus database nodes

For this process, two nodes are required. One is the Primary database node, the
other the Secondary database node. You may use any provider of machine
infrastructure, on-premise or from a cloud provider.

Bear in mind that communication is required:


  • Between the two database nodes for replication.
  • Between each database node and their respective Kubernetes deployments:

    • The primary needs to expose TCP port 5432 .
    • The secondary needs to expose TCP ports 5432 & 5431 .

Install an operating system supported by Omnibus GitLab, and then
install the Omnibus GitLab onto it. Do not provide the
EXTERNAL_URL environment variable when installing, as we’ll provide a minimal
configuration file before reconfiguring the package.

After you have installed the operating system, and the GitLab package, configuration
can be created for the services that will be used. Before we do that, information
must be collected.

Set up Kubernetes clusters

For this process, two Kubernetes clusters should be used. These can be from any
provider, on-premise or from a cloud provider.

Bear in mind that communication is required:


  • To the respective database nodes:

    • Primary outbound to TCP 5432 .
    • Secondary outbound to TCP 5432 and 5431 .
  • Between both Kubernetes Ingress via HTTPS.

Each cluster that is provisioned should have:


  • Enough resources to support a base-line installation of these charts.
  • Access to persistent storage:

    • MinIO not required if using external object storage.
    • Gitaly not required if using external Gitaly.
    • Redis not required if using external Redis.

Collect information

To continue with the configuration, the following information needs to be
collected from the various sources. Collect these, and make notes for use through
the rest of this documentation.


  • Primary database:

    • IP address
    • hostname (optional)
  • Secondary database:

    • IP address
    • hostname (optional)
  • Primary cluster:

    • External URL
    • Internal URL
    • IP addresses of nodes
  • Secondary cluster:

    • Internal URL
    • IP addresses of nodes
  • Database Passwords (you must decide these passwords in advance):


    • gitlab (used in postgresql['sql_user_password'] , global.psql.password )

    • gitlab_geo (used in geo_postgresql['sql_user_password'] , global.geo.psql.password )

    • gitlab_replicator (needed for replication)
  • Your GitLab license file

The Internal URL of each cluster must be unique to the cluster, so that all
clusters can make requests to all other clusters. For example:


  • External URL of all clusters: https://gitlab.example.com
  • Primary cluster’s Internal URL: https://london.gitlab.example.com
  • Secondary cluster’s Internal URL: https://shanghai.gitlab.example.com

This guide does not cover setting up DNS.

The gitlab and gitlab_geo database user passwords must exist in two
forms: the bare password, and the PostgreSQL hashed password. To obtain the hashed form,
run the following commands on one of the Omnibus instances. Each command asks
you to enter and confirm the password before outputting an appropriate hash
value for you to make note of, as illustrated after the list.


  1. gitlab-ctl pg-password-md5 gitlab
  2. gitlab-ctl pg-password-md5 gitlab_geo
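
A hedged sketch of what this interaction looks like; the output shown is a placeholder for the format of the hash, not a real value:

# Run on one of the Omnibus GitLab database nodes.
# The command prompts for the password (and a confirmation), then prints the hash.
sudo gitlab-ctl pg-password-md5 gitlab
# Example (placeholder) output -- note it for postgresql['sql_user_password']:
#   md5<32 hexadecimal characters>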

Configure Primary database

This section is performed on the Primary Omnibus GitLab database node.

To configure the Primary database node’s Omnibus GitLab, work from
this example configuration:

### Geo Primary
external_url 'http://gitlab.example.com'
roles ['geo_primary_role']
# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'London Office'
gitlab_rails['auto_migrate'] = false
## turn off everything but the DB
sidekiq['enable']=false
puma['enable']=false
gitlab_workhorse['enable']=false
nginx['enable']=false
geo_logcursor['enable']=false
grafana['enable']=false
gitaly['enable']=false
redis['enable']=false
prometheus_monitoring['enable'] = false
## Configure the DB for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - primary application deployment
# - secondary database node(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']

We must replace several items:



  • external_url must be updated to reflect the host name of our Primary site.

  • gitlab_rails['geo_node_name'] must be replaced with a unique name for your
    site. See the Name field in
    Common settings.

  • gitlab_user_password_hash must be replaced with the hashed form of the
    gitlab password.

  • postgresql['md5_auth_cidr_addresses'] can be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

The md5_auth_cidr_addresses should be in the form of
['127.0.0.1/24', '10.41.0.0/16']. It is important to include 127.0.0.1 in
this list, as the automation in Omnibus GitLab connects using this address. The
addresses in this list should include the IP address (not hostname) of your
Secondary database node, and all nodes of your primary Kubernetes cluster. This can
be left as ['0.0.0.0/0'], however it is not best practice.
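
For example, a hedged sketch of a customized list; the addresses are placeholders, so substitute the Secondary database node's IP and your primary Kubernetes cluster's node address range:

# 127.0.0.1 for the local Omnibus automation, the Secondary database node,
# and the node address range of the primary Kubernetes cluster.
postgresql['md5_auth_cidr_addresses'] = ['127.0.0.1/32', '203.0.113.10/32', '10.41.0.0/16']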

After the configuration above is prepared:


  1. Place the content into /etc/gitlab/gitlab.rb
  2. Run gitlab-ctl reconfigure . If you experience any issues in regards to the
    service not listening on TCP, try directly restarting it with
    gitlab-ctl restart postgresql .
  3. Run gitlab-ctl set-replication-password to set the password for
    the gitlab_replicator user.

  4. Retrieve the Primary database node’s public certificate; this is needed
    for the Secondary database to be able to replicate (save this output):


    cat ~gitlab-psql/data/server.crt

Deploy chart as Geo Primary site

This section is performed on the Primary site’s Kubernetes cluster.

To deploy this chart as a Geo Primary, start from this example configuration:



  1. Create a secret containing the database password for the
    chart to consume. Replace PASSWORD below with the password for the gitlab
    database user:


    kubectl --namespace gitlab create secret generic geo --from-literal=postgresql-password=PASSWORD

  2. Create a primary.yaml file based on the example configuration
    and update the configuration to reflect the correct values:


    ### Geo Primary
    global:
      # See docs.gitlab.com/charts/charts/globals
      # Configure host & domain
      hosts:
        domain: example.com
      # configure DB connection
      psql:
        host: geo-1.db.example.com
        port: 5432
        password:
          secret: geo
          key: postgresql-password
      # configure geo (primary)
      geo:
        nodeName: London Office
        enabled: true
        role: primary
    # External DB, disable
    postgresql:
      install: false

    • global.hosts.domain
    • global.psql.host
    • global.geo.nodeName must match
      the Name field of a Geo site in the Admin Area
    • Also configure any additional settings, such as:

      • Configuring SSL/TLS
      • Using external Redis

      • Using external Object Storage

  3. Deploy the chart using this configuration:


    helm upgrade --install gitlab-geo gitlab/gitlab --namespace gitlab -f primary.yaml

    note
    This assumes you are using the gitlab namespace. If you want to use a different namespace,
    you should also replace it in --namespace gitlab throughout the rest of this document.

  4. Wait for the deployment to complete, and the application to come online. When
    the application is reachable, log in.


  5. Sign in to GitLab, and activate your GitLab subscription.


    note
    This step is required for Geo to function.

Set the Geo Primary site

Now that the chart has been deployed, and a license uploaded, we can configure
this as the Primary site. We will do this via the Toolbox Pod.



  1. Find the Toolbox Pod


    kubectl --namespace gitlab get pods -lapp=toolbox

  2. Run gitlab-rake geo:set_primary_node with kubectl exec :


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rake geo:set_primary_node

  3. Set the primary site’s Internal URL with a Rails runner command. Replace https://primary.gitlab.example.com with the actual Internal URL:


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rails runner "GeoNode.primary_node.update!(internal_url: 'https://primary.gitlab.example.com')"

  4. Check the status of Geo configuration:


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rake gitlab:geo:check

    You should see output similar to below:


    WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
    Checking Geo ...

    GitLab Geo is available ... yes
    GitLab Geo is enabled ... yes
    GitLab Geo secondary database is correctly configured ... not a secondary node
    Database replication enabled? ... not a secondary node
    Database replication working? ... not a secondary node
    GitLab Geo HTTP(S) connectivity ... not a secondary node
    HTTP/HTTPS repository cloning is enabled ... yes
    Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
    Git user has default SSH configuration? ... yes
    OpenSSH configured to use AuthorizedKeysCommand ... no
    Reason:
    Cannot find OpenSSH configuration file at: /assets/sshd_config
    Try fixing it:
    If you are not using our official docker containers,
    make sure you have OpenSSH server installed and configured correctly on this system
    For more information see:
    doc/administration/operations/fast_ssh_key_lookup.md
    GitLab configured to disable writing to authorized_keys file ... yes
    GitLab configured to store new projects in hashed storage? ... yes
    All projects are in hashed storage? ... yes

    Checking Geo ... Finished

    • Don’t worry about Exception: getaddrinfo: Servname not supported for ai_socktype , as Kubernetes containers don’t have access to the host clock. This is OK .

    • OpenSSH configured to use AuthorizedKeysCommand ... no is expected . This
      Rake task is checking for a local SSH server, which is actually present in the
      gitlab-shell chart, deployed elsewhere, and already configured appropriately.

Configure Secondary database

This section is performed on the Secondary Omnibus GitLab database node.

To configure the Secondary database node’s Omnibus GitLab, work from
this example configuration:

### Geo Secondary
# external_url must match the Primary cluster's external_url
external_url 'http://gitlab.example.com'
roles ['geo_secondary_role']
gitlab_rails['enable'] = true
# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'Shanghai Office'
gitlab_rails['auto_migrate'] = false
geo_secondary['auto_migrate'] = false
## turn off everything but the DB
sidekiq['enable']=false
puma['enable']=false
gitlab_workhorse['enable']=false
nginx['enable']=false
geo_logcursor['enable']=false
grafana['enable']=false
gitaly['enable']=false
redis['enable']=false
prometheus_monitoring['enable'] = false
## Configure the DBs for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database node(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']
geo_postgresql['listen_address'] = '0.0.0.0'
geo_postgresql['sql_user_password'] = 'gitlab_geo_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database node(s)
geo_postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']
gitlab_rails['db_password']='gitlab_user_password'

We must replace several items:



  • gitlab_rails['geo_node_name'] must be replaced with a unique name for your site. See the Name field in
    Common settings.

  • gitlab_user_password_hash must be replaced with the hashed form of the
    gitlab password.

  • postgresql['md5_auth_cidr_addresses'] should be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

  • gitlab_geo_user_password_hash must be replaced with the hashed form of the
    gitlab_geo password.

  • geo_postgresql['md5_auth_cidr_addresses'] should be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

  • gitlab_user_password must be updated, and is used here to allow Omnibus GitLab
    to automate the PostgreSQL configuration.

The md5_auth_cidr_addresses should be in the form of
['127.0.0.1/24', '10.41.0.0/16']. It is important to include 127.0.0.1 in
this list, as the automation in Omnibus GitLab connects using this address. The
addresses in this list should include the IP addresses of all nodes of your
Secondary Kubernetes cluster. This can be left as ['0.0.0.0/0'], however
it is not best practice.
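
As above, a hedged sketch with placeholder addresses; substitute the node address range of your Secondary Kubernetes cluster:

# 127.0.0.1 for the local Omnibus automation plus the secondary Kubernetes cluster nodes.
postgresql['md5_auth_cidr_addresses'] = ['127.0.0.1/32', '10.42.0.0/16']
geo_postgresql['md5_auth_cidr_addresses'] = ['127.0.0.1/32', '10.42.0.0/16']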

After the configuration above is prepared:



  1. Check TCP connectivity to the primary site’s PostgreSQL node:


    openssl s_client -connect <primary_node_ip>:5432 </dev/null

    The output should show the following:


    CONNECTED(00000003)
    write:errno=0

    note
    If this step fails, you may be using the wrong IP address, or a firewall may
    be preventing access to the server. Check the IP address, paying close
    attention to the difference between public and private addresses and ensure
    that, if a firewall is present, the secondary PostgreSQL node is
    permitted to connect to the primary PostgreSQL node on TCP port 5432.
  2. Place the content into /etc/gitlab/gitlab.rb
  3. Run gitlab-ctl reconfigure . If you experience any issues in regards to the
    service not listening on TCP, try directly restarting it with
    gitlab-ctl restart postgresql .
  4. Place the Primary PostgreSQL node’s certificate content from above into primary.crt

  5. Set up PostgreSQL TLS verification on the secondary PostgreSQL node:

    Install the primary.crt file:


    install \
    -D \
    -o gitlab-psql \
    -g gitlab-psql \
    -m 0400 \
    -T primary.crt ~gitlab-psql/.postgresql/root.crt

    PostgreSQL will now only recognize that exact certificate when verifying TLS
    connections. The certificate can only be replicated by someone with access
    to the private key, which is only present on the primary PostgreSQL
    node.


  6. Test that the gitlab-psql user can connect to the primary site’s PostgreSQL
    (the default Omnibus database name is gitlabhq_production ):


    sudo \
    -u gitlab-psql /opt/gitlab/embedded/bin/psql \
    --list \
    -U gitlab_replicator \
    -d "dbname=gitlabhq_production sslmode=verify-ca" \
    -W \
    -h <primary_database_node_ip>

    When prompted enter the password collected earlier for the
    gitlab_replicator user. If all worked correctly, you should see
    the list of primary PostgreSQL node’s databases.

    A failure to connect here indicates that the TLS configuration is incorrect.
    Ensure that the contents of ~gitlab-psql/data/server.crt on the
    primary PostgreSQL node
    match the contents of ~gitlab-psql/.postgresql/root.crt on the
    secondary PostgreSQL node.


  7. Replicate the databases. Replace PRIMARY_DATABASE_HOST with the IP or hostname
    of your Primary PostgreSQL node:


    gitlab-ctl replicate-geo-database --slot-name=geo_2 --host=PRIMARY_DATABASE_HOST --sslmode=verify-ca

  8. After replication has finished, we must reconfigure the Omnibus GitLab one last time
    to ensure pg_hba.conf is correct for the secondary PostgreSQL node:


    gitlab-ctl reconfigure

Copy secrets from the primary site to the secondary site

Now copy a few secrets from the Primary site’s Kubernetes deployment to the
Secondary site’s Kubernetes deployment:


  • gitlab-geo-gitlab-shell-host-keys
  • gitlab-geo-rails-secret

  • gitlab-geo-registry-secret , if Registry replication is enabled.

  1. Change your kubectl context to that of your Primary.

  2. Collect these secrets from the Primary deployment:


    kubectl get --namespace gitlab -o yaml secret gitlab-geo-gitlab-shell-host-keys > ssh-host-keys.yaml
    kubectl get --namespace gitlab -o yaml secret gitlab-geo-rails-secret > rails-secrets.yaml
    kubectl get --namespace gitlab -o yaml secret gitlab-geo-registry-secret > registry-secrets.yaml
  3. Change your kubectl context to that of your Secondary.

  4. Apply these secrets:


    kubectl --namespace gitlab apply -f ssh-host-keys.yaml
    kubectl --namespace gitlab apply -f rails-secrets.yaml
    kubectl --namespace gitlab apply -f registry-secrets.yaml

Next create a secret containing the database passwords. Replace the
passwords below with the appropriate values:

kubectl --namespace gitlab create secret generic geo \
--from-literal=postgresql-password=gitlab_user_password \
--from-literal=geo-postgresql-password=gitlab_geo_user_password

Deploy chart as Geo Secondary site

This section is performed on the Secondary site’s Kubernetes cluster.

To deploy this chart as a Geo Secondary site, start from this example configuration.



  1. Create a secondary.yaml file based on the example configuration
    and update the configuration to reflect the correct values:


    ## Geo Secondary
    global:
      # See docs.gitlab.com/charts/charts/globals
      # Configure host & domain
      hosts:
        hostSuffix: secondary
        domain: example.com
      # configure DB connection
      psql:
        host: geo-2.db.example.com
        port: 5432
        password:
          secret: geo
          key: postgresql-password
      # configure geo (secondary)
      geo:
        enabled: true
        role: secondary
        nodeName: Shanghai Office
        psql:
          host: geo-2.db.example.com
          port: 5431
          password:
            secret: geo
            key: geo-postgresql-password
    # External DB, disable
    postgresql:
      install: false

    • global.hosts.domain
    • global.psql.host
    • global.geo.psql.host
    • global.geo.nodeName must match
      the Name field of a Geo site in the Admin Area
    • Also configure any additional settings, such as:

      • Configuring SSL/TLS
      • Using external Redis
      • Using external Object Storage
    • For external databases, global.psql.host is the secondary, read-only replica database, while global.geo.psql.host is the Geo tracking database

  2. Deploy the chart using this configuration:


    helm upgrade --install gitlab-geo gitlab/gitlab --namespace gitlab -f secondary.yaml

  3. Wait for the deployment to complete, and the application to come online.

Add Secondary Geo site via Primary

Now that both databases are configured and applications are deployed, we must tell
the Primary site that the Secondary site exists:


  1. Visit the primary site, and on the top bar, select
    Main menu > Admin .
  2. On the left sidebar, select Geo .
  3. Select Add site .
  4. Add the secondary site. Use the full GitLab URL for the URL.
  5. Enter a Name with the global.geo.nodeName of the Secondary site. These values must always match exactly, character for character.
  6. Enter Internal URL, for example https://shanghai.gitlab.example.com .
  7. Optionally, choose which groups or storage shards should be replicated by the
    secondary site. Leave blank to replicate all.
  8. Select Add node .

After the secondary site is added to the administration panel, it automatically starts
replicating missing data from the primary site. This process is known as “backfill”.
Meanwhile, the primary site starts to notify each secondary site of any changes, so
that the secondary site can replicate those changes promptly.

Confirm Operational Status

The final step is to double check the Geo configuration on the secondary site once fully
configured, via the Toolbox Pod.



  1. Find the Toolbox Pod:


    kubectl --namespace gitlab get pods -lapp=toolbox

  2. Attach to the Pod with kubectl exec :


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- bash -l

  3. Check the status of Geo configuration:


    gitlab-rake gitlab:geo:check

    You should see output similar to below:


    WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
    Checking Geo ...

    GitLab Geo is available ... yes
    GitLab Geo is enabled ... yes
    GitLab Geo secondary database is correctly configured ... yes
    Database replication enabled? ... yes
    Database replication working? ... yes
    GitLab Geo HTTP(S) connectivity ...
    * Can connect to the primary node ... yes
    HTTP/HTTPS repository cloning is enabled ... yes
    Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
    Git user has default SSH configuration? ... yes
    OpenSSH configured to use AuthorizedKeysCommand ... no
    Reason:
    Cannot find OpenSSH configuration file at: /assets/sshd_config
    Try fixing it:
    If you are not using our official docker containers,
    make sure you have OpenSSH server installed and configured correctly on this system
    For more information see:
    doc/administration/operations/fast_ssh_key_lookup.md
    GitLab configured to disable writing to authorized_keys file ... yes
    GitLab configured to store new projects in hashed storage? ... yes
    All projects are in hashed storage? ... yes

    Checking Geo ... Finished

    • Don’t worry about Exception: getaddrinfo: Servname not supported for ai_socktype ,
      as Kubernetes containers do not have access to the host clock. This is OK .

    • OpenSSH configured to use AuthorizedKeysCommand ... no is expected . This
      Rake task is checking for a local SSH server, which is actually present in the
      gitlab-shell chart, deployed elsewhere, and already configured appropriately.
Advanced configuration | GitLab




Advanced configuration


  • Bringing your own custom Docker images
  • Using an external database
  • Using an external Gitaly
  • Using an external GitLab Pages instance
  • Using an external Mattermost
  • Using your own NGINX Ingress Controller
  • Using an external object storage
  • Using an external Redis
  • Using FIPS-compliant images
  • Making use of GitLab Geo functionality
  • Enabling internal TLS between services
  • After install, managing Persistent Volumes
  • Using Red Hat UBI-based images
Use TLS between components of the GitLab chart | GitLab






  • Preparation


    • Generating certificates for internal use

      • Required certificate CN and SANs
  • Configuration
  • Result
  • Troubleshooting

Use TLS between components of the GitLab chart

The GitLab charts can use transport-layer security (TLS) between the various
components. This requires you to provide certificates for the services
you want to enable, and configure those services to make use of those
certificates and the certificate authority (CA) that signed them.

Preparation

Each chart has documentation regarding enabling TLS for that service, and the various
settings required to ensure appropriate configuration.

Generating certificates for internal use


note
GitLab does not purport to provide high-grade PKI infrastructure, or certificate
authorities.

For the purposes of this documentation, we provide a Proof of Concept script
below, which makes use of Cloudflare’s CFSSL
to produce a self-signed Certificate Authority, and a wildcard certificate that can be
used for all services.

This script will:


  • Generate a CA key pair.
  • Sign a certificate meant to service all GitLab component service endpoints.
  • Create two Kubernetes Secret objects:

    • A secret of type kubernetes.io/tls which has the server certificate and key pair.
    • A secret of type Opaque which only contains the public certificate of the CA as ca.crt,
      as needed by NGINX Ingress.

Prerequisites (a quick pre-flight check is sketched after this list):


  • Bash, or compatible shell.

  • cfssl is available to your shell, and within PATH .

  • kubectl is available, and configured to point to your Kubernetes cluster
    where GitLab will later be installed.

    • Be sure to have created the namespace you wish to have these certificates
      installed into before operating the script.
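
A quick pre-flight check for these prerequisites (a minimal sketch; replace <namespace> with the namespace you intend to install GitLab into):

# Confirm the required tools are available on PATH.
command -v cfssl cfssljson kubectl

# Create the target namespace ahead of time if it does not already exist.
kubectl get namespace <namespace> || kubectl create namespace <namespace>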

You may copy the content of this script to your computer, and make the resulting
file executable. We suggest poc-gitlab-internal-tls.sh .

#!/bin/bash
set -e
#############
## make and change into a working directory
pushd $(mktemp -d)

#############
## setup environment
NAMESPACE=${NAMESPACE:-default}
RELEASE=${RELEASE:-gitlab}
## stop if variable is unset beyond this point
set -u
## known expected patterns for SAN
CERT_SANS="*.${NAMESPACE}.svc,${RELEASE}-metrics.${NAMESPACE}.svc,*.${RELEASE}-gitaly.${NAMESPACE}.svc"

#############
## generate default CA config
cfssl print-defaults config > ca-config.json
## generate a CA
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal.ca'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -initca - | \
cfssljson -bare ca -
## generate certificate
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -config=ca-config.json -ca=ca.pem -ca-key=ca-key.pem -profile www -hostname="${CERT_SANS}" - |\
cfssljson -bare ${RELEASE}-services

#############
## load certificates into K8s
kubectl -n ${NAMESPACE} create secret tls ${RELEASE}-internal-tls \
--cert=${RELEASE}-services.pem \
--key=${RELEASE}-services-key.pem
kubectl -n ${NAMESPACE} create secret generic ${RELEASE}-internal-tls-ca \
--from-file=ca.crt=ca.pem

note
This script does not preserve the CA’s private key. It is a Proof-of-Concept
helper, and is not intended for production use .

The script expects two environment variables to be set:



  1. NAMESPACE : The Kubernetes Namespace you will later install GitLab to.
    This defaults to default , as with kubectl .

  2. RELEASE : The Helm Release name you will later use to install GitLab.
    This defaults to gitlab .

To operate this script, you may export the two variables, or prepend the
script name with their values.

export NAMESPACE=testing
export RELEASE=gitlab

./poc-gitlab-internal-tls.sh

After the script has run, you will find the two secrets created, and the
temporary working directory contains all certificates and their keys.

$ pwd
/tmp/tmp.swyMgf9mDs
$ kubectl -n ${NAMESPACE} get secret | grep internal-tls
testing-internal-tls kubernetes.io/tls 2 11s
testing-internal-tls-ca Opaque 1 10s
$ ls -1
ca-config.json
ca.csr
ca-key.pem
ca.pem
testing-services.csr
testing-services-key.pem
testing-services.pem

Required certificate CN and SANs

The various GitLab components speak to each other over their Service’s DNS names.
The Ingress objects generated by the GitLab chart must provide NGINX the
name to verify, when tls.verify: true (which is the default). As a result
of this, each GitLab component should receive a certificate with a SAN including
either their Service’s name, or a wildcard acceptable to the Kubernetes Service
DNS entry.


  • service-name.namespace.svc
  • *.namespace.svc

Failure to ensure these SANs within certificates will result in a non-functional
instance, and logs that can be quite cryptic, referring to “connection failure”
or “SSL verification failed”.
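
If you used the Proof of Concept script above, you can sanity-check the CN and SANs of the generated certificate before deploying (a sketch, assuming the file names that script produces):

# Print the certificate's subject and Subject Alternative Names.
openssl x509 -in ${RELEASE}-services.pem -noout -subject
openssl x509 -in ${RELEASE}-services.pem -noout -text | grep -A 1 'Subject Alternative Name'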

You can make use of helm template to retrieve a full list of all
Service object names, if needed. If your GitLab has been deployed without TLS,
you can query Kubernetes for those names:

kubectl -n ${NAMESPACE} get service -lrelease=${RELEASE}

Configuration

Example configurations can be found in examples/internal-tls.

For the purposes of this documentation, we have provided shared-cert-values.yaml
which configures the GitLab components to consume the certificates generated with
the script above, in generating certificates for internal use.

Key items to configure:


  1. Global Custom Certificate Authorities.
  2. Per-component TLS for the service listeners.
    (See each chart’s documentation, under charts/)

This process is greatly simplified by making use of YAML’s native anchor
functionality. A truncated snippet of shared-cert-values.yaml shows this:

.internal-ca: &internal-ca gitlab-internal-tls-ca
.internal-tls: &internal-tls gitlab-internal-tls

global:
  certificates:
    customCAs:
    - secret: *internal-ca
  workhorse:
    tls:
      enabled: true
gitlab:
  webservice:
    tls:
      secretName: *internal-tls
    workhorse:
      tls:
        verify: true # default
        secretName: *internal-tls
        caSecretName: *internal-ca

Result

When all components have been configured to provide TLS on their service
listeners, all communication between GitLab components will traverse the
network with TLS security, including connections from NGINX Ingress to
each GitLab component.

NGINX Ingress will terminate any inbound TLS, determine the appropriate
services to pass the traffic to, and then form a new TLS connection to
the GitLab component. When configured as shown here, it will also verify
the certificates served by the GitLab components against the CA.

This can be verified by connecting to the Toolbox pod, and querying the
various component Services. One such example, connecting to the Webservice
Pod’s primary service port that NGINX Ingress uses:

$ kubectl -n ${NAMESPACE} get pod -lapp=toolbox,release=${RELEASE}
NAME READY STATUS RESTARTS AGE
gitlab-toolbox-5c447bfdb4-pfmpc 1/1 Running 0 65m
$ kubectl exec -ti gitlab-toolbox-5c447bfdb4-pfmpc -c toolbox -- \
curl -Iv "https://gitlab-webservice-default.testing.svc:8181"

The output should be similar to following example:

*   Trying 10.60.0.237:8181...
* Connected to gitlab-webservice-default.testing.svc (10.60.0.237) port 8181 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=gitlab.testing.internal
* start date: Jul 18 19:15:00 2022 GMT
* expire date: Jul 18 19:15:00 2023 GMT
* subjectAltName: host "gitlab-webservice-default.testing.svc" matched cert's "*.testing.svc"
* issuer: CN=gitlab.testing.internal.ca
* SSL certificate verify ok.
> HEAD / HTTP/1.1
> Host: gitlab-webservice-default.testing.svc:8181

Troubleshooting

If your GitLab instance appears unreachable from the browser, by rendering an
HTTP 503 error, NGINX Ingress is likely having a problem verifying the
certificates of the GitLab components.

You may work around this by temporarily setting
gitlab.webservice.workhorse.tls.verify to false .
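
A minimal sketch of that temporary override with Helm, assuming an existing release named gitlab; re-enable verification once the certificate problem is resolved:

helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set gitlab.webservice.workhorse.tls.verify=false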

You can connect to the NGINX Ingress controller, which will show a message in
nginx.conf regarding problems verifying the certificate(s).

Example content, where the Secret is not reachable:

# Location denied. Reason: "error obtaining certificate: local SSL certificate
testing/gitlab-internal-tls-ca was not found"
return 503;

Common problems that cause this:


  • CA certificate is not in a key named ca.crt within the Secret.
  • The Secret was not properly supplied, or may not exist within the Namespace.
Configure the GitLab chart with persistent volumes | GitLab





  • Locate the GitLab Volumes
  • Before making storage changes

  • Making storage changes


    • Changes to an existing Volume

      • Update the volume to bind to the claim
    • Switching to a different Volume
  • Make changes to the PersistentVolumeClaim
  • Apply the changes to the GitLab chart

Configure the GitLab chart with persistent volumes

Some of the included services require persistent storage, configured through
Persistent Volumes that specify which disks your cluster has access to.
Documentation on the storage configuration necessary to install this chart can be found in our
Storage Guide.

Storage changes after installation need to be manually handled by your cluster
administrators. Automated management of these volumes after installation is not
handled by the GitLab chart.

Examples of changes not automatically managed after initial installation
include:


  • Mounting different volumes to the Pods
  • Changing the effective accessModes or Storage Class
  • Expanding the storage size of your volume*

* In Kubernetes 1.11, expanding the storage size of your volume is supported
if you have allowVolumeExpansion configured to true in your Storage Class.
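
For reference, a hedged sketch of a Storage Class with expansion enabled; the provisioner and parameters shown are GKE-style examples, so substitute the driver appropriate to your platform:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-expandable
# Example provisioner; use the one for your storage platform.
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowVolumeExpansion: true
reclaimPolicy: Retain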

Automating these changes is complicated due to:


  1. Kubernetes does not allow changes to most fields in an existing PersistentVolumeClaim
  2. Unless manually configured, the PVC is the only reference to dynamically provisioned PersistentVolumes

  3. Delete is the default reclaimPolicy for dynamically provisioned PersistentVolumes

This means in order to make changes, we need to delete the PersistentVolumeClaim
and create a new one with our changes. But due to the default reclaimPolicy,
deleting the PersistentVolumeClaim may delete the PersistentVolumes
and underlying disk. And unless configured with appropriate volumeNames and/or
labelSelectors, the chart doesn’t know the volume to attach to.

We will continue to look into making this process easier, but for now a manual
process needs to be followed to make changes to your storage.

Locate the GitLab Volumes

Find the volumes/claims that are being used:

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command prints a list of the volume names, followed by the name of the
service they are for.

For example:

$ kubectl --namespace helm-charts-win get PersistentVolumeClaims -l release=review-update-app-h8qogp -ojsonpath='{range .items[*]}{.spec.volumeName}{"\t"}{.metadata.labels.app}{"\n"}{end}'
pvc-6247502b-8c2d-11e8-8267-42010a9a0113 gitaly
pvc-61bbc05e-8c2d-11e8-8267-42010a9a0113 minio
pvc-61bc6069-8c2d-11e8-8267-42010a9a0113 postgresql
pvc-61bcd6d2-8c2d-11e8-8267-42010a9a0113 prometheus
pvc-61bdf136-8c2d-11e8-8267-42010a9a0113 redis

Before making storage changes

The person making the changes needs to have administrator access to the cluster, and appropriate access to the storage
solutions being used. Often the changes will first need to be applied in the storage solution, then the results need to
be updated in Kubernetes.

Before making changes, you should ensure your PersistentVolumes are using
the Retain reclaimPolicy so they don’t get removed while you are
making changes.

First, find the volumes/claims that are being used.

Next, edit each volume and change the value of persistentVolumeReclaimPolicy
under the spec field, to be Retain rather than Delete.

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  # Changed the following line
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound
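
Alternatively, the same change can be applied non-interactively with kubectl patch (a sketch, using the volume name from the example above; PersistentVolumes are cluster-scoped, so no namespace is needed):

kubectl patch PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'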

Making storage changes

First, make the desired changes to the disk outside of the cluster. (Resize the
disk in GKE, or create a new disk from a snapshot or clone, etc).

How you do this, and whether or not it can be done live, without downtime, is
dependent on the storage solutions you are using, and can’t be covered by this
document.

Next, evaluate whether you need these changes to be reflected in the Kubernetes
objects. For example: with expanding the disk storage size, the storage size
settings in the PersistentVolumeClaim will only be used when a new volume
resource is requested. So you would only need to increase the values in the
PersistentVolumeClaim if you intend to scale up more disks (for use in
additional Gitaly pods).

If you do need to have the changes reflected in Kubernetes, be sure that you’ve
updated your reclaim policy on the volumes as described in the Before making storage changes
section.

The paths we have documented for storage changes are:


  • Changes to an existing Volume
  • Switching to a different Volume

Changes to an existing Volume

First locate the volume name you are changing.

Use kubectl edit to make the desired configuration changes to the volume. (These changes
should only be updates to reflect the real state of the attached disk)

For example:

kubectl --namespace helm-charts-win edit PersistentVolume pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    # Updated the storage size
    storage: 100Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: repo-data-review-update-app-h8qogp-gitaly-0
    namespace: helm-charts-win
    resourceVersion: "48362307"
    uid: 6247502b-8c2d-11e8-8267-42010a9a0113
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Bound

Now that the changes have been reflected in the volume, we need to update
the claim.

Follow the instructions in the Make changes to the PersistentVolumeClaim section.

Update the volume to bind to the claim

In a separate terminal, start watching to see when the claim has its status change to bound,
and then move onto the next step to make the volume available for use in the new claim.

kubectl --namespace <namespace> get --watch PersistentVolumeClaim <claim name>

Edit the volume to make it available to the new claim. Remove the .spec.claimRef section.

kubectl --namespace <namespace> edit PersistentVolume <volume name>

Editing Output:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:43Z
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-b
  name: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  resourceVersion: "48362431"
  selfLink: /api/v1/persistentvolumes/pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  uid: 650bd649-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cloud-native-81a17-pvc-6247502b-8c2d-11e8-8267-42010a9a0113
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
status:
  phase: Released

Shortly after making the change to the Volume, the terminal watching the claim status should show Bound .
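
If you prefer a non-interactive approach, removing the claimRef section can also be done with a JSON patch (a sketch; replace <volume name> with the volume you are rebinding):

kubectl patch PersistentVolume <volume name> \
  --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'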

Finally, apply the changes to the GitLab chart

Switching to a different Volume

If you want to switch to using a new volume, using a disk that has a copy of the
appropriate data from the old volume, then first you need to create the new
Persistent Volume in Kubernetes.

In order to create a Persistent Volume for your disk, you will need to
locate the driver specific documentation
for your storage type.

There are a couple of things to keep in mind when following the driver documentation:


  • You need to use the driver to create a Persistent Volume, not a Pod object with a volume as shown in a lot of the documentation.
  • You do not want to create a PersistentVolumeClaim for the volume, we will be editing the existing claim instead.

The driver documentation often includes examples for using the driver in a Pod, for example:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4

What you actually want, is to create a Persistent Volume, like so:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

You normally create a local yaml file with the PersistentVolume information,
then issue a create command to Kubernetes to create the object using the file.

kubectl --namespace <your namespace> create -f <local-pv-file>.yaml

Once your volume is created, you can move on to Making changes to the PersistentVolumeClaim

Make changes to the PersistentVolumeClaim

Find the PersistentVolumeClaim you want to change.

kubectl --namespace <namespace> get PersistentVolumeClaims -l release=<chart release name> -ojsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.app}{"\n"}{end}'


  • <namespace> should be replaced with the namespace where you installed the GitLab chart.

  • <chart release name> should be replaced with the name you used to install the GitLab chart.

The command will print a list of the PersistentVolumeClaim names, followed by the name of the
service they are for.

Then save a copy of the claim to your local filesystem:

kubectl --namespace <namespace> get PersistentVolumeClaim <claim name> -o yaml > <claim name>.bak.yaml

Example Output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: 2018-07-20T14:58:38Z
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
  resourceVersion: "48362433"
  selfLink: /api/v1/namespaces/helm-charts-win/persistentvolumeclaims/repo-data-review-update-app-h8qogp-gitaly-0
  uid: 6247502b-8c2d-11e8-8267-42010a9a0113
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

Create a new YAML file for a new PVC object. Have it use the same metadata.name, metadata.labels, metadata.namespace, and spec fields (with your updates applied), and drop the other settings:

Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: gitaly
    release: review-update-app-h8qogp
  name: repo-data-review-update-app-h8qogp-gitaly-0
  namespace: helm-charts-win
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # This is our updated field
      storage: 100Gi
  storageClassName: standard
  volumeName: pvc-6247502b-8c2d-11e8-8267-42010a9a0113

Now delete the old claim:

kubectl --namespace <namespace> delete PersistentVolumeClaim <claim name>

Create the new claim:

kubectl --namespace <namespace> create -f <new claim yaml file>

If you are binding to the same PersistentVolume that was previously bound to
the claim, then proceed to update the volume to bind to the claim.

Otherwise, if you have bound the claim to a new volume, move on to apply the changes to the GitLab chart.

Apply the changes to the GitLab chart

After making changes to the PersistentVolumes and PersistentVolumeClaims,
you will also want to issue a Helm update with the changes applied to the chart
settings as well.

See the installation storage guide
for the options.


Note : If you made changes to the Gitaly volume claim, you will need to delete the
Gitaly StatefulSet before you will be able to issue a Helm update. This is
because the StatefulSet’s Volume Template is immutable, and cannot be changed.

You can delete the StatefulSet without deleting the Gitaly Pods:
kubectl --namespace <namespace> delete --cascade=false StatefulSet <release-name>-gitaly
The Helm update command will recreate the StatefulSet, which will adopt and
update the Gitaly pods.

Update the chart, and include the updated configuration:

Example:

helm upgrade --install review-update-app-h8qogp gitlab/gitlab \
--set gitlab.gitaly.persistence.size=100Gi \
<your other config settings>
Configure the GitLab chart with UBI-based images | GitLab





  • Sample values
  • Known Limitations

Configure the GitLab chart with UBI-based images

GitLab offers Red Hat UBI
versions of its images, allowing you to replace standard images with UBI-based
images. These images use the same tag as standard images with -ubi8 extension.

The GitLab chart uses third-party images that are not based on UBI. These images
mostly offer external services to GitLab, such as Redis, PostgreSQL, and so on.
If you wish to deploy a GitLab instance that is purely based on UBI, you must
disable the internal services and use external deployments or services.

The services that must be disabled and provided externally are:


  • PostgreSQL
  • MinIO (Object Store)
  • Redis

The services that must be disabled are (see the sketch after this list):


  • CertManager (Let’s Encrypt integration)
  • Prometheus
  • Grafana
  • GitLab Runner
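
A minimal sketch of the corresponding overrides follows. The keys are assumptions based on the chart's standard values and may differ between chart versions, so treat examples/ubi/values.yaml as the authoritative reference:

# Assumed chart keys; verify against examples/ubi/values.yaml for your chart version.
global:
  minio:
    enabled: false            # use external object storage instead
  ingress:
    configureCertmanager: false
postgresql:
  install: false              # use an external PostgreSQL service
redis:
  install: false              # use an external Redis service
certmanager:
  install: false
prometheus:
  install: false
gitlab-runner:
  install: false
# Other bundled extras (such as Grafana) are disabled similarly; see the example values file.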

Sample values

We provide an example for GitLab chart values in examples/ubi/values.yaml
which can help you to build a pure UBI GitLab deployment.

Known Limitations


  • Currently there is no UBI version of GitLab Runner. Therefore we disable it.
    However, that does not prevent attaching your own runner to your UBI-based
    GitLab deployment.
  • GitLab relies on the official image of Docker Registry which is based on alpine .
    At the moment we do not maintain or release a UBI-based version of Registry. Since
    this functionality is crucial , we do not disable this service.