  • Create VM with Omnibus GitLab
  • Configure Omnibus GitLab

Set up standalone Redis

The instructions here make use of the Omnibus GitLab package for Ubuntu. This package provides versions of the services that are guaranteed to be compatible with the charts’ services.

Create VM with Omnibus GitLab

Create a VM on your provider of choice, or locally. This was tested with VirtualBox, KVM, and Bhyve.
Ensure that the instance is reachable from the cluster.

Install Ubuntu Server onto the VM that you have created. Ensure that openssh-server is installed, and that all packages are up to date.
Configure networking and a hostname. Make note of the hostname/IP, and ensure it is both resolvable and reachable from your Kubernetes cluster.
Be sure firewall policies are in place to allow traffic.

Follow the installation instructions for Omnibus GitLab. When you perform the package installation, do not provide the EXTERNAL_URL= value. We do not want automatic configuration to occur, as we’ll provide a very specific configuration in the next step.

Configure Omnibus GitLab

Create a minimal gitlab.rb file to be placed at /etc/gitlab/gitlab.rb. Be very explicit about what is enabled on this node by using the contents below.


note
This example is not intended to provide Redis for scaling.


  • REDIS_PASSWORD should be replaced with the value in the gitlab-redis secret.
# Listen on all addresses
redis['bind'] = '0.0.0.0'
# Set the default port; this must be set.
redis['port'] = 6379
# Set password, as in the secret `gitlab-redis` populated in Kubernetes
redis['password'] = 'REDIS_PASSWORD'

## Disable everything else
gitlab_rails['enable'] = false
sidekiq['enable'] = false
puma['enable'] = false
registry['enable'] = false
gitaly['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
prometheus_monitoring['enable'] = false
postgresql['enable'] = false
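If the gitlab-redis secret referenced above does not exist yet in your cluster, a sketch of creating it follows. The namespace, secret name, and key mirror the values used later in this document when configuring the chart; the password value is a placeholder you must replace:

```shell
# Placeholder password; use the same value you set in gitlab.rb
kubectl --namespace gitlab create secret generic gitlab-redis \
  --from-literal=redis-password=REDIS_PASSWORD
```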

After creating gitlab.rb, reconfigure the package with gitlab-ctl reconfigure.
After the task completes, check the running processes with gitlab-ctl status.
The output should appear similar to:

# gitlab-ctl status
run: logrotate: (pid 4856) 1859s; run: log: (pid 31262) 77460s
run: redis: (pid 30562) 77637s; run: log: (pid 30561) 77637s
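To confirm the node accepts authenticated connections, you can test it from any host permitted by your firewall rules. A minimal sketch using redis-cli; the hostname redis.example is a placeholder for your node's address, and REDIS_PASSWORD is the password set in gitlab.rb:

```shell
# Replace host and password with your actual values
redis-cli -h redis.example -p 6379 -a 'REDIS_PASSWORD' ping
# A correctly configured node replies: PONG
```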

  • Configure the chart
  • Use multiple Redis instances
  • Specify secure Redis scheme (SSL)

Configure the GitLab chart with an external Redis

This document describes how to configure this Helm chart with an external Redis service.

If you don’t have Redis configured, for on-premises or VM deployments,
consider using our Omnibus GitLab package.

Configure the chart

Disable the redis chart and the Redis service it provides, and point the other services to the external service.

You must set the following parameters:



  • redis.install : Set to false to disable including the Redis chart.

  • global.redis.host : Set to the hostname of the external Redis, can be a domain or an IP address.

  • global.redis.password.enabled : Set to false if the external Redis does not require a password.

  • global.redis.password.secret : The name of the secret which contains the token for authentication.

  • global.redis.password.key : The key in the secret, which contains the token content.

Items below can be further customized if you are not using the defaults:



  • global.redis.port : The port the database is available on, defaults to 6379 .

For example, pass these values via Helm’s --set flag while deploying:

helm install gitlab gitlab/gitlab \
--set redis.install=false \
--set global.redis.host=redis.example \
--set global.redis.password.secret=gitlab-redis \
--set global.redis.password.key=redis-password

If you are connecting to a Redis HA cluster that has Sentinel servers
running, the global.redis.host attribute needs to be set to the name of
the Redis instance group (such as mymaster or resque ), as
specified in the sentinel.conf . Sentinel servers can be referenced
using the global.redis.sentinels[0].host and global.redis.sentinels[0].port
values for the --set flag. The index is zero based.
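As an illustration, the equivalent Sentinel settings can be expressed in a values file instead of --set flags. The Sentinel hostnames below are hypothetical; mymaster is the instance group name from sentinel.conf:

```yaml
global:
  redis:
    host: mymaster            # instance group name from sentinel.conf
    sentinels:
      - host: sentinel1.example.com
        port: 26379
      - host: sentinel2.example.com
        port: 26379
```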

Use multiple Redis instances

GitLab supports splitting several of the resource intensive
Redis operations across multiple Redis instances. This chart supports distributing
those persistence classes to other Redis instances.

More detailed information on configuring the chart for using multiple Redis
instances can be found in the globals
documentation.

Specify secure Redis scheme (SSL)

To connect to Redis using SSL, use the rediss (note the double s) scheme parameter:

--set global.redis.scheme=rediss

  • Sample values

Configure the GitLab chart with FIPS-compliant images

GitLab offers FIPS-compliant
versions of its images, allowing you to run GitLab on FIPS-enabled clusters.

Sample values

We provide an example for GitLab chart values in
examples/fips/values.yaml
which can help you to build a FIPS-compatible GitLab deployment.

Note the comment under the nginx-ingress.controller key that provides the
relevant configuration to use a FIPS-compatible NGINX Ingress Controller image. This image is
maintained in our NGINX Ingress Controller fork.


  • Requirements
  • Overview
  • Set up Omnibus database nodes
  • Set up Kubernetes clusters
  • Collect information
  • Configure Primary database
  • Deploy chart as Geo Primary site
  • Set the Geo Primary site
  • Configure Secondary database
  • Copy secrets from the primary site to the secondary site
  • Deploy chart as Geo Secondary site
  • Add Secondary Geo site via Primary
  • Confirm Operational Status

Configure the GitLab chart with GitLab Geo

GitLab Geo provides the ability to have geographically distributed application
deployments.

While external database services can be used, these documents focus on
the use of the Omnibus GitLab for PostgreSQL to provide the
most platform agnostic guide, and make use of the automation included in gitlab-ctl .

In this guide, both clusters have the same external URL. See Set up a Unified URL for Geo sites.


note
See the defined terms
to describe all aspects of Geo (mainly the distinction between site and node ).

Requirements

To use GitLab Geo with the GitLab Helm chart, the following requirements must be met:


  • The use of external PostgreSQL services, as the
    PostgreSQL included with the chart is not exposed to outside networks, and doesn’t
    have the WAL support required for replication.
  • The supplied database must:

    • Support replication.
    • The primary database must be reachable by the primary site,
      and all secondary database nodes (for replication).
    • Secondary databases only need to be reachable by the secondary sites.
    • Support SSL between primary and secondary database nodes.
  • The primary site must be reachable via HTTP(S) by all secondary sites.
    Secondary sites must be accessible to the primary site via HTTP(S).

Overview

This guide uses 2 Omnibus GitLab database nodes,
configuring only the PostgreSQL services needed, and 2 deployments of the
GitLab Helm chart. It is intended to be the minimal required configuration.
This documentation does not include SSL from application to database, support
for other database providers, or
promoting a secondary site to primary.

The outline below should be followed in order:


  1. Set up Omnibus database nodes
  2. Set up Kubernetes clusters
  3. Collect information
  4. Configure Primary database
  5. Deploy chart as Geo Primary site
  6. Set the Geo primary site
  7. Configure Secondary database
  8. Copy secrets from primary site to secondary site
  9. Deploy chart as Geo Secondary site
  10. Add Secondary Geo site via Primary
  11. Confirm Operational Status

Set up Omnibus database nodes

For this process, two nodes are required. One is the Primary database node, the
other the Secondary database node. You may use any provider of machine
infrastructure, on-premise or from a cloud provider.

Bear in mind that communication is required:


  • Between the two database nodes for replication.
  • Between each database node and their respective Kubernetes deployments:

    • The primary needs to expose TCP port 5432 .
    • The secondary needs to expose TCP ports 5432 and 5431.

Install an operating system supported by Omnibus GitLab, and then
install the Omnibus GitLab onto it. Do not provide the
EXTERNAL_URL environment variable when installing, as we’ll provide a minimal
configuration file before reconfiguring the package.

After you have installed the operating system, and the GitLab package, configuration
can be created for the services that will be used. Before we do that, information
must be collected.

Set up Kubernetes clusters

For this process, two Kubernetes clusters should be used. These can be from any
provider, on-premise or from a cloud provider.

Bear in mind that communication is required:


  • To the respective database nodes:

    • Primary outbound to TCP 5432 .
    • Secondary outbound to TCP 5432 and 5431 .
  • Between both Kubernetes Ingress via HTTPS.

Each cluster that is provisioned should have:


  • Enough resources to support a base-line installation of these charts.
  • Access to persistent storage:

    • MinIO not required if using external object storage.
    • Gitaly not required if using external Gitaly.
    • Redis not required if using external Redis.

Collect information

To continue with the configuration, the following information needs to be
collected from the various sources. Collect these, and make notes for use through
the rest of this documentation.


  • Primary database:

    • IP address
    • hostname (optional)
  • Secondary database:

    • IP address
    • hostname (optional)
  • Primary cluster:

    • External URL
    • Internal URL
    • IP addresses of nodes
  • Secondary cluster:

    • Internal URL
    • IP addresses of nodes
  • Database Passwords (these must be decided in advance):


    • gitlab (used in postgresql['sql_user_password'] , global.psql.password )

    • gitlab_geo (used in geo_postgresql['sql_user_password'] , global.geo.psql.password )

    • gitlab_replicator (needed for replication)
  • Your GitLab license file

The Internal URL of each cluster must be unique to the cluster, so that all
clusters can make requests to all other clusters. For example:


  • External URL of all clusters: https://gitlab.example.com
  • Primary cluster’s Internal URL: https://london.gitlab.example.com
  • Secondary cluster’s Internal URL: https://shanghai.gitlab.example.com

This guide does not cover setting up DNS.

The gitlab and gitlab_geo database user passwords must exist in two
forms: bare password, and PostgreSQL hashed password. To obtain the hashed form,
perform the following commands on one of the Omnibus instances, which asks
you to enter and confirm the password before outputting an appropriate hash
value for you to make note of.


  1. gitlab-ctl pg-password-md5 gitlab
  2. gitlab-ctl pg-password-md5 gitlab_geo
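For reference, the hash produced by pg-password-md5 follows PostgreSQL's legacy md5 scheme: the literal prefix md5 followed by the MD5 digest of the password concatenated with the username. A sketch in Python for illustration only; use gitlab-ctl on the node to generate the real values:

```python
import hashlib

def pg_md5_password(password: str, username: str) -> str:
    # PostgreSQL legacy format: "md5" + hex md5 of (password + username)
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# Example with a throwaway password -- never reuse this value
print(pg_md5_password("examplepassword", "gitlab"))
```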

Configure Primary database

This section is performed on the Primary Omnibus GitLab database node.

To configure the Primary database node’s Omnibus GitLab, work from
this example configuration:

### Geo Primary
external_url 'http://gitlab.example.com'
roles ['geo_primary_role']
# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'London Office'
gitlab_rails['auto_migrate'] = false
## turn off everything but the DB
sidekiq['enable']=false
puma['enable']=false
gitlab_workhorse['enable']=false
nginx['enable']=false
geo_logcursor['enable']=false
grafana['enable']=false
gitaly['enable']=false
redis['enable']=false
prometheus_monitoring['enable'] = false
## Configure the DB for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - primary application deployment
# - secondary database node(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']

We must replace several items:



  • external_url must be updated to reflect the host name of our Primary site.

  • gitlab_rails['geo_node_name'] must be replaced with a unique name for your
    site. See the Name field in
    Common settings.

  • gitlab_user_password_hash must be replaced with the hashed form of the
    gitlab password.

  • postgresql['md5_auth_cidr_addresses'] can be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

The md5_auth_cidr_addresses should be in the form of
[ '127.0.0.1/24', '10.41.0.0/16'] . It is important to include 127.0.0.1 in
this list, as the automation in Omnibus GitLab connects using this. The
addresses in this list should include the IP address (not hostname) of your
Secondary database, and all nodes of your primary Kubernetes cluster. This can
be left as ['0.0.0.0/0'] , however it is not best practice .
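When narrowing md5_auth_cidr_addresses, it can help to sanity-check that every required peer actually falls inside the chosen blocks. A small sketch using Python's ipaddress module, reusing the example blocks from above (the individual IP addresses are hypothetical):

```python
import ipaddress

def allowed(ip: str, cidrs: list[str]) -> bool:
    # True if ip is covered by at least one CIDR block in the list
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c, strict=False) for c in cidrs)

cidrs = ["127.0.0.1/24", "10.41.0.0/16"]
print(allowed("127.0.0.1", cidrs))   # True  -- loopback, used by Omnibus automation
print(allowed("10.41.3.7", cidrs))   # True  -- a node inside the cluster block
print(allowed("192.0.2.10", cidrs))  # False -- this peer would be rejected
```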

After the configuration above is prepared:


  1. Place the content into /etc/gitlab/gitlab.rb
  2. Run gitlab-ctl reconfigure . If you experience any issues in regards to the
    service not listening on TCP, try directly restarting it with
    gitlab-ctl restart postgresql .
  3. Run gitlab-ctl set-replication-password to set the password for
    the gitlab_replicator user.

  4. Retrieve the Primary database node’s public certificate, this is needed
    for the Secondary database to be able to replicate (save this output):


    cat ~gitlab-psql/data/server.crt
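Before copying the certificate to the Secondary, you can optionally confirm that what you retrieved is a valid certificate and note its validity window. This assumes openssl is available on the node:

```shell
openssl x509 -in ~gitlab-psql/data/server.crt -noout -subject -dates
```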

Deploy chart as Geo Primary site

This section is performed on the Primary site’s Kubernetes cluster.

To deploy this chart as a Geo Primary, start from this example configuration:



  1. Create a secret containing the database password for the
    chart to consume. Replace PASSWORD below with the password for the gitlab
    database user:


    kubectl --namespace gitlab create secret generic geo --from-literal=postgresql-password=PASSWORD

  2. Create a primary.yaml file based on the example configuration
    and update the configuration to reflect the correct values:


    ### Geo Primary
    global:
      # See docs.gitlab.com/charts/charts/globals
      # Configure host & domain
      hosts:
        domain: example.com
      # configure DB connection
      psql:
        host: geo-1.db.example.com
        port: 5432
        password:
          secret: geo
          key: postgresql-password
      # configure geo (primary)
      geo:
        nodeName: London Office
        enabled: true
        role: primary
    # External DB, disable
    postgresql:
      install: false

    • global.hosts.domain
    • global.psql.host
    • global.geo.nodeName must match
      the Name field of a Geo site in the Admin Area
    • Also configure any additional settings, such as:

      • Configuring SSL/TLS
      • Using external Redis

      • using external Object Storage

  3. Deploy the chart using this configuration:


    helm upgrade --install gitlab-geo gitlab/gitlab --namespace gitlab -f primary.yaml

    note
    This assumes you are using the gitlab namespace. If you want to use a different namespace,
    you should also replace it in --namespace gitlab throughout the rest of this document.

  4. Wait for the deployment to complete, and the application to come online. When
    the application is reachable, log in.


  5. Sign in to GitLab, and activate your GitLab subscription.


    note
    This step is required for Geo to function.

Set the Geo Primary site

Now that the chart has been deployed, and a license uploaded, we can configure
this as the Primary site. We will do this via the Toolbox Pod.



  1. Find the Toolbox Pod


    kubectl --namespace gitlab get pods -lapp=toolbox

  2. Run gitlab-rake geo:set_primary_node with kubectl exec :


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rake geo:set_primary_node

  3. Set the primary site’s Internal URL with a Rails runner command. Replace https://primary.gitlab.example.com with the actual Internal URL:


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rails runner "GeoNode.primary_node.update!(internal_url: 'https://primary.gitlab.example.com')"

  4. Check the status of Geo configuration:


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- gitlab-rake gitlab:geo:check

    You should see output similar to below:


    WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
    Checking Geo ...

    GitLab Geo is available ... yes
    GitLab Geo is enabled ... yes
    GitLab Geo secondary database is correctly configured ... not a secondary node
    Database replication enabled? ... not a secondary node
    Database replication working? ... not a secondary node
    GitLab Geo HTTP(S) connectivity ... not a secondary node
    HTTP/HTTPS repository cloning is enabled ... yes
    Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
    Git user has default SSH configuration? ... yes
    OpenSSH configured to use AuthorizedKeysCommand ... no
    Reason:
    Cannot find OpenSSH configuration file at: /assets/sshd_config
    Try fixing it:
    If you are not using our official docker containers,
    make sure you have OpenSSH server installed and configured correctly on this system
    For more information see:
    doc/administration/operations/fast_ssh_key_lookup.md
    GitLab configured to disable writing to authorized_keys file ... yes
    GitLab configured to store new projects in hashed storage? ... yes
    All projects are in hashed storage? ... yes

    Checking Geo ... Finished

    • Don’t worry about Exception: getaddrinfo: Servname not supported for ai_socktype , as Kubernetes containers don’t have access to the host clock. This is OK .

    • OpenSSH configured to use AuthorizedKeysCommand ... no is expected . This
      Rake task is checking for a local SSH server, which is actually present in the
      gitlab-shell chart, deployed elsewhere, and already configured appropriately.

Configure Secondary database

This section is performed on the Secondary Omnibus GitLab database node.

To configure the Secondary database node’s Omnibus GitLab, work from
this example configuration:

### Geo Secondary
# external_url must match the Primary cluster's external_url
external_url 'http://gitlab.example.com'
roles ['geo_secondary_role']
gitlab_rails['enable'] = true
# The unique identifier for the Geo node.
gitlab_rails['geo_node_name'] = 'Shanghai Office'
gitlab_rails['auto_migrate'] = false
geo_secondary['auto_migrate'] = false
## turn off everything but the DB
sidekiq['enable']=false
puma['enable']=false
gitlab_workhorse['enable']=false
nginx['enable']=false
geo_logcursor['enable']=false
grafana['enable']=false
gitaly['enable']=false
redis['enable']=false
prometheus_monitoring['enable'] = false
## Configure the DBs for network
postgresql['enable'] = true
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'gitlab_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database node(s)
postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']
geo_postgresql['listen_address'] = '0.0.0.0'
geo_postgresql['sql_user_password'] = 'gitlab_geo_user_password_hash'
# !! CAUTION !!
# This list of CIDR addresses should be customized
# - secondary application deployment
# - secondary database node(s)
geo_postgresql['md5_auth_cidr_addresses'] = ['0.0.0.0/0']
gitlab_rails['db_password']='gitlab_user_password'

We must replace several items:



  • gitlab_rails['geo_node_name'] must be replaced with a unique name for your site. See the Name field in
    Common settings.

  • gitlab_user_password_hash must be replaced with the hashed form of the
    gitlab password.

  • postgresql['md5_auth_cidr_addresses'] should be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

  • gitlab_geo_user_password_hash must be replaced with the hashed form of the
    gitlab_geo password.

  • geo_postgresql['md5_auth_cidr_addresses'] should be updated to be a list of
    explicit IP addresses, or address blocks in CIDR notation.

  • gitlab_user_password must be updated, and is used here to allow Omnibus GitLab
    to automate the PostgreSQL configuration.

The md5_auth_cidr_addresses should be in the form of
[ '127.0.0.1/24', '10.41.0.0/16'] . It is important to include 127.0.0.1 in
this list, as the automation in Omnibus GitLab connects using this. The
addresses in this list should include the IP addresses of all nodes of your
Secondary Kubernetes cluster. This can be left as ['0.0.0.0/0'] , however
it is not best practice .

After the configuration above is prepared:



  1. Check TCP connectivity to the primary site’s PostgreSQL node:


    openssl s_client -connect <primary_node_ip>:5432 </dev/null

    The output should show the following:


    CONNECTED(00000003)
    write:errno=0

    note
    If this step fails, you may be using the wrong IP address, or a firewall may
    be preventing access to the server. Check the IP address, paying close
    attention to the difference between public and private addresses and ensure
    that, if a firewall is present, the secondary PostgreSQL node is
    permitted to connect to the primary PostgreSQL node on TCP port 5432.
  2. Place the content into /etc/gitlab/gitlab.rb
  3. Run gitlab-ctl reconfigure . If you experience any issues in regards to the
    service not listening on TCP, try directly restarting it with
    gitlab-ctl restart postgresql .
  4. Place the Primary PostgreSQL node’s certificate content from above into primary.crt

  5. Set up PostgreSQL TLS verification on the secondary PostgreSQL node:

    Install the primary.crt file:


    install \
    -D \
    -o gitlab-psql \
    -g gitlab-psql \
    -m 0400 \
    -T primary.crt ~gitlab-psql/.postgresql/root.crt

    PostgreSQL will now only recognize that exact certificate when verifying TLS
    connections. The certificate can only be replicated by someone with access
    to the private key, which is only present on the primary PostgreSQL
    node.


  6. Test that the gitlab-psql user can connect to the primary site’s PostgreSQL
    (the default Omnibus database name is gitlabhq_production ):


    sudo \
    -u gitlab-psql /opt/gitlab/embedded/bin/psql \
    --list \
    -U gitlab_replicator \
    -d "dbname=gitlabhq_production sslmode=verify-ca" \
    -W \
    -h <primary_database_node_ip>

    When prompted enter the password collected earlier for the
    gitlab_replicator user. If all worked correctly, you should see
    the list of primary PostgreSQL node’s databases.

    A failure to connect here indicates that the TLS configuration is incorrect.
    Ensure that the contents of ~gitlab-psql/data/server.crt on the
    primary PostgreSQL node
    match the contents of ~gitlab-psql/.postgresql/root.crt on the
    secondary PostgreSQL node.


  7. Replicate the databases. Replace PRIMARY_DATABASE_HOST with the IP or hostname
    of your Primary PostgreSQL node:


    gitlab-ctl replicate-geo-database --slot-name=geo_2 --host=PRIMARY_DATABASE_HOST --sslmode=verify-ca

  8. After replication has finished, we must reconfigure the Omnibus GitLab one last time
    to ensure pg_hba.conf is correct for the secondary PostgreSQL node:


    gitlab-ctl reconfigure

Copy secrets from the primary site to the secondary site

Now copy a few secrets from the Primary site’s Kubernetes deployment to the
Secondary site’s Kubernetes deployment:


  • gitlab-geo-gitlab-shell-host-keys
  • gitlab-geo-rails-secret

  • gitlab-geo-registry-secret , if Registry replication is enabled.

  1. Change your kubectl context to that of your Primary.

  2. Collect these secrets from the Primary deployment:


    kubectl get --namespace gitlab -o yaml secret gitlab-geo-gitlab-shell-host-keys > ssh-host-keys.yaml
    kubectl get --namespace gitlab -o yaml secret gitlab-geo-rails-secret > rails-secrets.yaml
    kubectl get --namespace gitlab -o yaml secret gitlab-geo-registry-secret > registry-secrets.yaml
  3. Change your kubectl context to that of your Secondary.

  4. Apply these secrets:


    kubectl --namespace gitlab apply -f ssh-host-keys.yaml
    kubectl --namespace gitlab apply -f rails-secrets.yaml
    kubectl --namespace gitlab apply -f registry-secrets.yaml

Next create a secret containing the database passwords. Replace the
passwords below with the appropriate values:

kubectl --namespace gitlab create secret generic geo \
--from-literal=postgresql-password=gitlab_user_password \
--from-literal=geo-postgresql-password=gitlab_geo_user_password
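You can confirm the secret landed with the expected keys before deploying. Kubernetes stores secret values base64-encoded, so decode them to compare against the passwords you chose:

```shell
kubectl --namespace gitlab get secret geo \
  -o jsonpath='{.data.postgresql-password}' | base64 --decode; echo
kubectl --namespace gitlab get secret geo \
  -o jsonpath='{.data.geo-postgresql-password}' | base64 --decode; echo
```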

Deploy chart as Geo Secondary site

This section is performed on the Secondary site’s Kubernetes cluster.

To deploy this chart as a Geo Secondary site, start from this example configuration.



  1. Create a secondary.yaml file based on the example configuration
    and update the configuration to reflect the correct values:


    ## Geo Secondary
    global:
      # See docs.gitlab.com/charts/charts/globals
      # Configure host & domain
      hosts:
        hostSuffix: secondary
        domain: example.com
      # configure DB connection
      psql:
        host: geo-2.db.example.com
        port: 5432
        password:
          secret: geo
          key: postgresql-password
      # configure geo (secondary)
      geo:
        enabled: true
        role: secondary
        nodeName: Shanghai Office
        psql:
          host: geo-2.db.example.com
          port: 5431
          password:
            secret: geo
            key: geo-postgresql-password
    # External DB, disable
    postgresql:
      install: false

    • global.hosts.domain
    • global.psql.host
    • global.geo.psql.host
    • global.geo.nodeName must match
      the Name field of a Geo site in the Admin Area
    • Also configure any additional settings, such as:

      • Configuring SSL/TLS
      • Using external Redis
      • using external Object Storage
    • For external databases, global.psql.host is the secondary, read-only replica database, while global.geo.psql.host is the Geo tracking database

  2. Deploy the chart using this configuration:


    helm upgrade --install gitlab-geo gitlab/gitlab --namespace gitlab -f secondary.yaml

  3. Wait for the deployment to complete, and the application to come online.

Add Secondary Geo site via Primary

Now that both databases are configured and applications are deployed, we must tell
the Primary site that the Secondary site exists:


  1. Visit the primary site, and on the top bar, select
    Main menu > Admin .
  2. On the left sidebar, select Geo .
  3. Select Add site .
  4. Add the secondary site. Use the full GitLab URL for the URL.
  5. Enter a Name matching the global.geo.nodeName of the Secondary site. These values must always match exactly, character for character.
  6. Enter Internal URL, for example https://shanghai.gitlab.example.com .
  7. Optionally, choose which groups or storage shards should be replicated by the
    secondary site. Leave blank to replicate all.
  8. Select Add node .

After the secondary site is added to the administration panel, it automatically starts
replicating missing data from the primary site. This process is known as “backfill”.
Meanwhile, the primary site starts to notify each secondary site of any changes, so
that the secondary site can replicate those changes promptly.

Confirm Operational Status

The final step is to double check the Geo configuration on the secondary site once fully
configured, via the Toolbox Pod.



  1. Find the Toolbox Pod:


    kubectl --namespace gitlab get pods -lapp=toolbox

  2. Attach to the Pod with kubectl exec :


    kubectl --namespace gitlab exec -ti gitlab-geo-toolbox-XXX -- bash -l

  3. Check the status of Geo configuration:


    gitlab-rake gitlab:geo:check

    You should see output similar to below:


    WARNING: This version of GitLab depends on gitlab-shell 10.2.0, but you're running Unknown. Please update gitlab-shell.
    Checking Geo ...

    GitLab Geo is available ... yes
    GitLab Geo is enabled ... yes
    GitLab Geo secondary database is correctly configured ... yes
    Database replication enabled? ... yes
    Database replication working? ... yes
    GitLab Geo HTTP(S) connectivity ...
    * Can connect to the primary node ... yes
    HTTP/HTTPS repository cloning is enabled ... yes
    Machine clock is synchronized ... Exception: getaddrinfo: Servname not supported for ai_socktype
    Git user has default SSH configuration? ... yes
    OpenSSH configured to use AuthorizedKeysCommand ... no
    Reason:
    Cannot find OpenSSH configuration file at: /assets/sshd_config
    Try fixing it:
    If you are not using our official docker containers,
    make sure you have OpenSSH server installed and configured correctly on this system
    For more information see:
    doc/administration/operations/fast_ssh_key_lookup.md
    GitLab configured to disable writing to authorized_keys file ... yes
    GitLab configured to store new projects in hashed storage? ... yes
    All projects are in hashed storage? ... yes

    Checking Geo ... Finished

    • Don’t worry about Exception: getaddrinfo: Servname not supported for ai_socktype ,
      as Kubernetes containers do not have access to the host clock. This is OK .

    • OpenSSH configured to use AuthorizedKeysCommand ... no is expected . This
      Rake task is checking for a local SSH server, which is actually present in the
      gitlab-shell chart, deployed elsewhere, and already configured appropriately.

Advanced configuration


  • Bringing your own custom Docker images
  • Using an external database
  • Using an external Gitaly
  • Using an external GitLab Pages instance
  • Using an external Mattermost
  • Using your own NGINX Ingress Controller
  • Using an external object storage
  • Using an external Redis
  • Using FIPS-compliant images
  • Making use of GitLab Geo functionality
  • Enabling internal TLS between services
  • After install, managing Persistent Volumes
  • Using Red Hat UBI-based images


  • Preparation


    • Generating certificates for internal use

      • Required certificate CN and SANs
  • Configuration
  • Result
  • Troubleshooting

Use TLS between components of the GitLab chart

The GitLab charts can use transport-layer security (TLS) between the various
components. This requires you to provide certificates for the services
you want to enable, and configure those services to make use of those
certificates and the certificate authority (CA) that signed them.

Preparation

Each chart has documentation regarding enabling TLS for that service, and the various
settings required to ensure appropriate configuration.

Generating certificates for internal use


note
GitLab does not purport to provide high-grade PKI infrastructure, or certificate
authorities.

For the purposes of this documentation, we provide a Proof of Concept script
below, which makes use of Cloudflare’s CFSSL
to produce a self-signed Certificate Authority, and a wildcard certificate that can be
used for all services.

This script will:


  • Generate a CA key pair.
  • Sign a certificate meant to service all GitLab component service endpoints.
  • Create two Kubernetes Secret objects:

    • A secret of type kubernetes.io/tls which has the server certificate and key pair.
    • A secret of type Opaque which only contains the public certificate of the CA as ca.crt,
      as needed by NGINX Ingress.

Prerequisites:


  • Bash, or compatible shell.

  • cfssl is available to your shell, and within PATH .

  • kubectl is available, and configured to point to your Kubernetes cluster
    where GitLab will later be installed.

    • Create the namespace you wish to install these certificates
      into before running the script.

Copy the content of this script to your computer and make the resulting
file executable. We suggest naming it poc-gitlab-internal-tls.sh .

#!/bin/bash
set -e
#############
## make and change into a working directory
pushd $(mktemp -d)

#############
## setup environment
NAMESPACE=${NAMESPACE:-default}
RELEASE=${RELEASE:-gitlab}
## stop if variable is unset beyond this point
set -u
## known expected patterns for SAN
CERT_SANS="*.${NAMESPACE}.svc,${RELEASE}-metrics.${NAMESPACE}.svc,*.${RELEASE}-gitaly.${NAMESPACE}.svc"

#############
## generate default CA config
cfssl print-defaults config > ca-config.json
## generate a CA
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal.ca'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -initca - | \
cfssljson -bare ca -
## generate certificate
echo '{"CN":"'${RELEASE}.${NAMESPACE}.internal'","key":{"algo":"ecdsa","size":256}}' | \
cfssl gencert -config=ca-config.json -ca=ca.pem -ca-key=ca-key.pem -profile www -hostname="${CERT_SANS}" - |\
cfssljson -bare ${RELEASE}-services

#############
## load certificates into K8s
kubectl -n ${NAMESPACE} create secret tls ${RELEASE}-internal-tls \
--cert=${RELEASE}-services.pem \
--key=${RELEASE}-services-key.pem
kubectl -n ${NAMESPACE} create secret generic ${RELEASE}-internal-tls-ca \
--from-file=ca.crt=ca.pem

note
This script does not preserve the CA’s private key. It is a Proof-of-Concept
helper, and is not intended for production use.

The script expects two environment variables to be set:



  1. NAMESPACE : The Kubernetes Namespace you will later install GitLab to.
    This defaults to default , as with kubectl .

  2. RELEASE : The Helm Release name you will later use to install GitLab.
    This defaults to gitlab .

To run this script, either export the two variables, or set them on the
command line before the script name.

export NAMESPACE=testing
export RELEASE=gitlab

./poc-gitlab-internal-tls.sh

After the script has run, you will find the two secrets created, and the
temporary working directory contains all certificates and their keys.

$ pwd
/tmp/tmp.swyMgf9mDs
$ kubectl -n ${NAMESPACE} get secret | grep internal-tls
gitlab-internal-tls      kubernetes.io/tls   2   11s
gitlab-internal-tls-ca   Opaque              1   10s
$ ls -1
ca-config.json
ca.csr
ca-key.pem
ca.pem
gitlab-services.csr
gitlab-services-key.pem
gitlab-services.pem

Required certificate CN and SANs

The various GitLab components speak to each other over their Services’ DNS names.
The Ingress objects generated by the GitLab chart must provide NGINX with the
name to verify when tls.verify: true (which is the default). As a result,
each GitLab component should receive a certificate whose SANs include
either its Service’s name, or a wildcard that matches the Kubernetes Service
DNS entry.


  • service-name.namespace.svc
  • *.namespace.svc

Failure to ensure these SANs within certificates will result in a non-functional
instance, and logs that can be quite cryptic, referring to “connection failure”
or “SSL verification failed”.
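Whether a certificate’s SANs cover a given Service DNS name can be checked locally with openssl. The sketch below generates a throwaway certificate carrying the wildcard SAN from the list above and asks openssl whether a component’s Service name matches it; all file paths and names here are illustrative, not part of the chart.

```shell
# Sketch: create a throwaway cert with the wildcard SAN "*.testing.svc"
# (requires OpenSSL 1.1.1+ for -addext); names are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=gitlab.testing.internal" \
  -addext "subjectAltName=DNS:*.testing.svc"

# Reports whether the hostname is covered by the certificate's SANs
openssl x509 -in /tmp/tls.crt -noout \
  -checkhost gitlab-webservice-default.testing.svc
```

The same `-checkhost` query can be run against the real certificate produced by the PoC script (for example, gitlab-services.pem) to rule out SAN problems before deploying.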

You can make use of helm template to retrieve a full list of all
Service object names, if needed. If your GitLab has been deployed without TLS,
you can query Kubernetes for those names:

kubectl -n ${NAMESPACE} get service -lrelease=${RELEASE}

Configuration

Example configurations can be found in examples/internal-tls.

For the purposes of this documentation, we have provided shared-cert-values.yaml,
which configures the GitLab components to consume the certificates generated with
the script in generating certificates for internal use, above.

Key items to configure:


  1. Global Custom Certificate Authorities.
  2. Per-component TLS for the service listeners.
    (See each chart’s documentation, under charts/)

This process is greatly simplified by making use of YAML’s native anchor
functionality. A truncated snippet of shared-cert-values.yaml shows this:

.internal-ca: &internal-ca gitlab-internal-tls-ca
.internal-tls: &internal-tls gitlab-internal-tls

global:
  certificates:
    customCAs:
      - secret: *internal-ca
  workhorse:
    tls:
      enabled: true
gitlab:
  webservice:
    tls:
      secretName: *internal-tls
    workhorse:
      tls:
        verify: true # default
        secretName: *internal-tls
        caSecretName: *internal-ca

Result

When all components have been configured to provide TLS on their service
listeners, all communication between GitLab components will traverse the
network with TLS security, including connections from NGINX Ingress to
each GitLab component.

NGINX Ingress will terminate any inbound TLS, determine the appropriate
services to pass the traffic to, and then form a new TLS connection to
the GitLab component. When configured as shown here, it will also verify
the certificates served by the GitLab components against the CA.

This can be verified by connecting to the Toolbox pod, and querying the
various component Services. One such example, connecting to the Webservice
Pod’s primary service port that NGINX Ingress uses:

$ kubectl -n ${NAMESPACE} get pod -lapp=toolbox,release=${RELEASE}
NAME                              READY   STATUS    RESTARTS   AGE
gitlab-toolbox-5c447bfdb4-pfmpc   1/1     Running   0          65m
$ kubectl exec -ti gitlab-toolbox-5c447bfdb4-pfmpc -c toolbox -- \
curl -Iv "https://gitlab-webservice-default.testing.svc:8181"

The output should be similar to the following example:

*   Trying 10.60.0.237:8181...
* Connected to gitlab-webservice-default.testing.svc (10.60.0.237) port 8181 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=gitlab.testing.internal
* start date: Jul 18 19:15:00 2022 GMT
* expire date: Jul 18 19:15:00 2023 GMT
* subjectAltName: host "gitlab-webservice-default.testing.svc" matched cert's "*.testing.svc"
* issuer: CN=gitlab.testing.internal.ca
* SSL certificate verify ok.
> HEAD / HTTP/1.1
> Host: gitlab-webservice-default.testing.svc:8181

Troubleshooting

If your GitLab instance appears unreachable from the browser, by rendering an
HTTP 503 error, NGINX Ingress is likely having a problem verifying the
certificates of the GitLab components.

You may work around this by temporarily setting
gitlab.webservice.workhorse.tls.verify to false.
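In values form, that temporary workaround looks like the following sketch, which mirrors the layout shown in shared-cert-values.yaml; revert it once the certificate problem is resolved:

```yaml
# Temporary workaround only: stop NGINX Ingress from verifying
# the certificate served by Workhorse.
gitlab:
  webservice:
    workhorse:
      tls:
        verify: false
```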

You can connect to the NGINX Ingress controller, whose nginx.conf will
contain a message describing the problem verifying the certificate(s).

Example content, where the Secret is not reachable:

# Location denied. Reason: "error obtaining certificate: local SSL certificate
testing/gitlab-internal-tls-ca was not found"
return 503;

Common problems that cause this:


  • CA certificate is not in a key named ca.crt within the Secret.
  • The Secret was not properly supplied, or may not exist within the Namespace.
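To see the Secret shape NGINX Ingress expects, you can build the CA Secret manifest by hand, with no cluster involved. This is a sketch: the file paths are illustrative, and the names follow the PoC script with RELEASE=gitlab and NAMESPACE=testing.

```shell
# Sketch: generate a throwaway CA and write a minimal Secret manifest.
# The certificate must sit under the data key "ca.crt"; any other key
# name causes the "local SSL certificate ... was not found" denial.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca-key.pem -out /tmp/ca.pem \
  -subj "/CN=gitlab.testing.internal.ca"

cat > /tmp/ca-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-internal-tls-ca
  namespace: testing
type: Opaque
data:
  ca.crt: $(base64 < /tmp/ca.pem | tr -d '\n')
EOF

# Prints the key name if it is present in the manifest
grep -o 'ca.crt:' /tmp/ca-secret.yaml
```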