
docker-registry.helm's Introduction

Docker Registry Helm Chart

This directory contains a Kubernetes chart to deploy a private Docker Registry.

Prerequisites Details

  • PV support on underlying infrastructure (if persistence is required)

Chart Details

This chart will do the following:

  • Implement a Docker registry deployment

Installing the Chart

First, add the repo:

helm repo add twuni https://helm.twun.io

To install the chart, use the following:

helm install docker-registry twuni/docker-registry
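
For example, to install into its own namespace with a custom values file (a hedged sketch; the release name, namespace, and my-values.yaml are placeholders):

helm install docker-registry twuni/docker-registry \
  --namespace docker-registry --create-namespace \
  -f my-values.yaml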

Configuration

The following table lists the configurable parameters of the docker-registry chart and their default values.

Parameter | Description | Default
image.pullPolicy | Container pull policy | IfNotPresent
image.repository | Container image to use | registry
image.tag | Container image tag to deploy | 2.8.1
imagePullSecrets | Specify image pull secrets | nil (does not add image pull secrets to deployed pods)
persistence.accessMode | Access mode to use for PVC | ReadWriteOnce
persistence.enabled | Whether to use a PVC for the Docker storage | false
persistence.deleteEnabled | Enable the deletion of image blobs and manifests by digest | nil
persistence.size | Amount of space to claim for PVC | 10Gi
persistence.storageClass | Storage Class to use for PVC | -
persistence.existingClaim | Name of an existing PVC to use for config | nil
serviceAccount.create | Create ServiceAccount | false
serviceAccount.name | ServiceAccount name | nil
serviceAccount.annotations | Annotations to add to the ServiceAccount | {}
deployment.annotations | Annotations to add to the Deployment | {}
service.port | TCP port on which the service is exposed | 5000
service.type | service type | ClusterIP
service.clusterIP | if service.type is ClusterIP and this is non-empty, sets the cluster IP of the service | nil
service.nodePort | if service.type is NodePort and this is non-empty, sets the node port of the service | nil
service.loadBalancerIP | if service.type is LoadBalancer and this is non-empty, sets the loadBalancerIP of the service | nil
service.loadBalancerSourceRanges | if service.type is LoadBalancer and this is non-empty, sets the loadBalancerSourceRanges of the service | nil
service.sessionAffinity | service session affinity | nil
service.sessionAffinityConfig | service session affinity config | nil
replicaCount | k8s replicas | 1
updateStrategy | update strategy for deployment | {}
podAnnotations | Annotations for pod | {}
podLabels | Labels for pod | {}
podDisruptionBudget | Pod disruption budget | {}
resources.limits.cpu | Container requested CPU | nil
resources.limits.memory | Container requested memory | nil
autoscaling.enabled | Enable autoscaling using HorizontalPodAutoscaler | false
autoscaling.minReplicas | Minimal number of replicas | 1
autoscaling.maxReplicas | Maximal number of replicas | 2
autoscaling.targetCPUUtilizationPercentage | Target average utilization of CPU on Pods | 60
autoscaling.targetMemoryUtilizationPercentage | (Kubernetes ≥1.23) Target average utilization of Memory on Pods | 60
autoscaling.behavior | (Kubernetes ≥1.23) Configurable scaling behavior | {}
priorityClassName | priorityClassName | ""
storage | Storage system to use | filesystem
tlsSecretName | Name of secret for TLS certs | nil
secrets.htpasswd | Htpasswd authentication | nil
secrets.s3.accessKey | Access Key for S3 configuration | nil
secrets.s3.secretKey | Secret Key for S3 configuration | nil
secrets.s3.secretRef | The ref for an external secret containing the s3AccessKey and s3SecretKey keys | ""
secrets.swift.username | Username for Swift configuration | nil
secrets.swift.password | Password for Swift configuration | nil
secrets.haSharedSecret | Shared secret for Registry | nil
configData | Configuration hash for docker | nil
s3.region | S3 region | nil
s3.regionEndpoint | S3 region endpoint | nil
s3.bucket | S3 bucket name | nil
s3.rootdirectory | S3 prefix that is applied to allow you to segment data | nil
s3.encrypt | Store images in encrypted format | nil
s3.secure | Use HTTPS | nil
swift.authurl | Swift authurl | nil
swift.container | Swift container | nil
proxy.enabled | If true, registry will function as a proxy/mirror | false
proxy.remoteurl | Remote registry URL to proxy requests to | https://registry-1.docker.io
proxy.username | Remote registry login username | nil
proxy.password | Remote registry login password | nil
proxy.secretRef | The ref for an external secret containing the proxyUsername and proxyPassword keys | ""
namespace | specify a namespace to install the chart to - defaults to .Release.Namespace | {{ .Release.Namespace }}
nodeSelector | node labels for pod assignment | {}
affinity | affinity settings | {}
tolerations | pod tolerations | []
ingress.enabled | If true, Ingress will be created | false
ingress.annotations | Ingress annotations | {}
ingress.labels | Ingress labels | {}
ingress.path | Ingress service path | /
ingress.hosts | Ingress hostnames | []
ingress.tls | Ingress TLS configuration (YAML) | []
ingress.className | Ingress controller class name | nginx
metrics.enabled | Enable metrics on Service | false
metrics.port | TCP port on which the service metrics is exposed | 5001
metrics.serviceMonitor.annotations | Prometheus Operator ServiceMonitor annotations | {}
metrics.serviceMonitor.enable | If true, Prometheus Operator ServiceMonitor will be created | false
metrics.serviceMonitor.labels | Prometheus Operator ServiceMonitor labels | {}
metrics.prometheusRule.annotations | Prometheus Operator PrometheusRule annotations | {}
metrics.prometheusRule.enable | If true, Prometheus Operator prometheusRule will be created | false
metrics.prometheusRule.labels | Prometheus Operator prometheusRule labels | {}
metrics.prometheusRule.rules | PrometheusRule defining alerting rules for a Prometheus instance | {}
extraVolumeMounts | Additional volumeMounts to the registry container | []
extraVolumes | Additional volumes to the pod | []
extraEnvVars | Additional environment variables to the pod | []
initContainers | Init containers to be created in the pod | []
garbageCollect.enabled | If true, will deploy garbage-collector cronjob | false
garbageCollect.deleteUntagged | If true, garbage-collector will delete manifests that are not currently referenced via tag | true
garbageCollect.schedule | CronTab schedule, please use standard crontab format | 0 1 * * *
garbageCollect.resources | garbage-collector requested resources | {}

Specify each parameter using the --set key=value[,key=value] argument to helm install.
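
For instance, a minimal sketch enabling persistence and ingress from the command line (values taken from the table above; the release name is a placeholder):

helm install docker-registry twuni/docker-registry \
  --set persistence.enabled=true,persistence.size=20Gi \
  --set ingress.enabled=true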

To generate an htpasswd file, run this Docker command: docker run --entrypoint htpasswd httpd:2 -Bbn user password > ./htpasswd.
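
The generated file can then be passed to the chart's secrets.htpasswd value, for example with --set-file (a sketch; the release name is a placeholder):

helm install docker-registry twuni/docker-registry \
  --set-file secrets.htpasswd=./htpasswd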

docker-registry.helm's People

Contributors

baznikin, canterberry, chevrontango, ddelange, edwargix, erikfuego, eriwyr, g-linville, ilmax, joaosa, joneteus, joshsizer, jrhorner1, jsievenpiper, karezza, kuzaxak, laverya, lenzenmi, mrsimonemms, nightscape, nisimond, pavankumar-go, pieveee, rkevin-arch, simonrupar, skaronator, vvanouytsel, vyas-n, wkbrd


docker-registry.helm's Issues

trying to use this helm chart on linode, getting error every time

time="2023-06-25T18:38:49.604570139Z" level=info msg="debug server listening :5001"
time="2023-06-25T18:38:49.607586158Z" level=info msg="Starting upload purge in 57m0s" go.version=go1.16.15 instance.id=5eef58c2-c40b-4ac2-a289-b0789fdeae75 service=registry version="v2.8.1+unknown"
time="2023-06-25T18:38:49.61622471Z" level=info msg="using redis blob descriptor cache" go.version=go1.16.15 instance.id=5eef58c2-c40b-4ac2-a289-b0789fdeae75 service=registry version="v2.8.1+unknown"
time="2023-06-25T18:38:49.616443132Z" level=info msg="listening on [::]:5000" go.version=go1.16.15 instance.id=5eef58c2-c40b-4ac2-a289-b0789fdeae75 service=registry version="v2.8.1+unknown"

It seems it can't find Redis, unless I'm reading this wrong.

Handle autoscaling/v2beta versions

The chart tries to resolve whether the cluster supports autoscaling/v2 and selects the HPA version to deploy based on that:

{{- if $apiVersions.Has "autoscaling/v2" }}

If the cluster supports autoscaling/v2beta1 or autoscaling/v2beta2 but not autoscaling/v2, the chart tries to deploy autoscaling/v2 anyway, and that fails.

Suggestion: ignore beta versions and deploy v1 in that case
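
A hypothetical sketch of what that fallback could look like in the template (not the chart's actual code):

{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
apiVersion: autoscaling/v2
{{- else }}
apiVersion: autoscaling/v1
{{- end }}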

Garbage Collection

Hi,
I'm getting this error with garbage collection:
failed to garbage collect: failed to mark: filesystem: Path not found: /docker/registry/v2/repositories
It was working fine for ages, maybe an update or something...
I've just enabled it via the helm chart.

Support existingSecrets

It would be nice to allow chart users to specify an existingSecret for htpasswd, as well as for a handful of other Secrets that are currently either auto-generated or require a value to be supplied directly. (Admittedly, htpasswd is hashed, but it is still not ideal to keep the hashed value in Git.)

See some great examples in the various bitnami helm charts.

Next Release?

Hey there,
is there some information when the next release is planned?
I need this fix for my deployment: #88

Thanks

AWS S3 IAM role support

Hi,

We can't figure out how to make your chart work with S3 without setting the access key and secret key. We just want to use the IAM role that is attached to our service account.

According to the official docker documentation, this should be possible by just omitting the access and secret keys. We also set the IAM permission described in the docs:

https://docs.docker.com/registry/storage-drivers/s3/

Are we missing something here or is your chart not built for this functionality?

Kind regards!
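
For reference, the keyless setup would presumably look something like the following values sketch, under the assumption that the chart tolerates an empty secrets.s3 (which is exactly what this issue is about); the bucket, region, and IAM role ARN are placeholders, and the annotation shown is the standard EKS IRSA annotation:

storage: s3
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/registry-s3-access
s3:
  region: eu-central-1
  bucket: my-registry-bucket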

nosniff in config.yml

Hi,

While investigating why delete is reported as "unsupported" although we set:

delete:
  enabled: true

we noticed in:
/etc/docker/registry/config.yml

health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
http:
  addr: :5000
  debug:
    addr: :5001
    prometheus:
      enabled: true
      path: /metrics
  headers:
    X-Content-Type-Options:
    - nosniff
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  maintenance:
    uploadpurging:
      age: 168h
      dryrun: false
      enabled: true
      interval: 24h
version: 0.1

That nosniff entry looks wrong, doesn't it?
Shouldn't the http headers look like this:
headers: X-Content-Type-Options: [nosniff]
We expect the "toYaml" in

{{ toYaml .Values.configData | indent 4 }}

to emit that, but we don't know how to fix it. Or does it work as expected?

Best regards
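
For what it's worth, the two notations parse to the same structure; a minimal illustration of the equivalent forms:

headers:
  X-Content-Type-Options:
  - nosniff
---
# the flow form produces the same mapping:
headers:
  X-Content-Type-Options: [nosniff]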

Chart 2.2.1 Update Failed

Hi,
I tried to update to the latest version of the chart today, however I get the error:
Error: UPGRADE FAILED: YAML parse error on docker-registry/templates/deployment.yaml: error converting YAML to JSON: yaml: line 80: mapping values are not allowed in this context
Is this likely an issue with my custom values.yaml?
No issues with 2.2.0.
Thank you.

Feature Request: Make /auth a volume

At the moment we have to generate a fixed user and copy the htpasswd string into the values.yml.
This means that once the registry is running, we have to shut it down and re-install it to add new users.

It would be far nicer just to map the /auth out as a volume so we can edit users on the fly.

That way, changes to htpasswd (i.e. new users) on the host are immediately visible to the registry.

I have provided a configuration for the current chart as a workaround:

extraVolumeMounts:
  - mountPath: /auth
    name: auth

extraVolumes:
  - name: auth
    hostPath:
      # Put your htpasswd file in here:
      path: /etc/secrets/registry/

extraEnvVars:
  - name: REGISTRY_AUTH
    value: "htpasswd"
  - name: REGISTRY_AUTH_HTPASSWD_REALM
    value: "Registry Realm"
  - name: REGISTRY_AUTH_HTPASSWD_PATH
    value: "/auth/htpasswd"

Also a question: I assumed my registry container runs as root (K3s runs as root by default), but it was unable to see /etc/secrets/registry/, which is readable by root. Only when I moved htpasswd to /tmp with o+r (world-readable) did it work. Does the registry run with reduced privileges?

Which user is the registry running as?

Allowing public/anonymous docker image pulls?

Hello, thank you for this helm chart. I've already gotten a lot of use out of it on my home k8s lab. I'm trying to hack on a Helm chart that doesn't expose imagePullSecrets as a value, so I'm unable to inject my docker credentials from the htpasswd file. Is there any way to allow anonymous image pulls with this registry? This is only for staging/internal use, so I'm not too worried about unauthorized access.

`s3.regionEndpoint` requires scheme

If configuring an S3-compatible object store with an endpoint that is not AWS, the transport scheme (http/https) must be provided, or the server will hang on start while logging no errors; the liveness checks will eventually kill it, forcing the pod into a crash/restart loop.

The docs provided in the values.yaml do not include a scheme, which leads to confusion.

Steps to reproduce

In the config.yaml:

docker-registry:
  storage: s3
  s3:
    region: my-region
    regionEndpoint: s3.compatible.storagedevice.local
    bucket: some-bucket

Deploy the chart

Expected result

The pod comes up and provides a registry

Actual behavior

The pod cannot actually talk to the S3 service, so it silently hangs until killed by the livenessProbe checks. Hilarity ensues.

Workaround

Provide the transport scheme when defining the endpoint:

    regionEndpoint: http://s3.compatible.storagedevice.local/

The provided values.yaml contraindicates this.

Ingress missing necessary Nginx annotation

Following the latest commit to ingress.yaml, this chart no longer plays nicely with the Kubernetes flavor of the Nginx ingress controller.

The ingress controller will produce logs like:

"Ignoring ingress because of error while validating ingress class" ingress="default/docker-registry" error="no object matching key "nginx"

This is caused by the missing kubernetes.io/ingress.class: "nginx" annotation on the ingress created from the ingress.yaml template.

We can no longer add that annotation, because you'll get the following error during helm install docker-registry twuni/docker-registry -f docker-registry-config.yaml (note: see docker-registry-config.yaml below):

Error: INSTALLATION FAILED: Ingress.extensions "docker-registry" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: "nginx": can not be set when the class field is also set

This is due to the recent addition of ingressClassName: {{ .Values.ingress.className }} in ingress.yaml.

Steps to reproduce:

Set up the Nginx controller

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

Setting up cert manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.crds.yaml
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.1
kubectl apply -f cert-issuer.yaml
kubectl apply -f certificate.yaml

where cert-issuer.yaml is

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-clusterissuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-secret-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

and certificate.yaml is

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: docker-registry-certificate
spec:
  secretName: letsencrypt-secret-prod
  duration: 2160h
  renewBefore: 360h
  issuerRef:
    name: letsencrypt-prod-clusterissuer
    kind: ClusterIssuer
  dnsNames:
  - some.hostname.com
Set up the docker registry

helm repo add twuni https://helm.twun.io
helm repo update
helm install docker-registry twuni/docker-registry -f docker-registry-config.yaml

where docker-registry-config.yaml is

ingress:
  enabled: true
  hosts:
    - some.hostname.com
  annotations:
    # THIS ANNOTATION IS NECESSARY, BUT NOT ALLOWED: kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod-clusterissuer
  tls:
    - secretName: letsencrypt-secret-prod
      hosts:
      - some.hostname.com
storage: s3
secrets:
  htpasswd: |-
    <USERNAME>:<ENCRYPTED PASSWORD>
  s3:
    accessKey: "<ACCESS KEY>"
    secretKey: "<SECRET>"
s3:
  region: eu-central-1
  regionEndpoint: eu-central-1.<SOME OBJECT STORAGE HOST>.com
  secure: true
  bucket: <SOME BUCKET>

This is easy enough to work around (see below). However, this used to work fine out of the box. Here is an example video of what the setup used to be like using older versions of everything.


Workaround

You can work around this by setting ingress.enabled: false in docker-registry-config.yaml, then creating a separate Ingress yourself like so:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry
  namespace: default
  labels:
    app: docker-registry
    chart: docker-registry-2.2.2
    release: docker-registry
    heritage: Helm
  annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod-clusterissuer"
      kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: host.name.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000
  tls:
    - hosts:
      - host.name.com
      secretName: letsencrypt-secret-prod
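
Alternatively, on chart versions that expose ingress.className (default nginx, per the configuration table above), the chart's own Ingress can be kept by relying on that value instead of the deprecated annotation; a hedged values sketch, with the hostname as a placeholder:

ingress:
  enabled: true
  className: nginx
  hosts:
    - some.hostname.com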


Code style consistency enforcement

💡 Low priority, low impact. Nice to have.

With the goal of having some automated enforcement of code style consistency within this repo...

Add a step to the CI pipeline to lint YAML files in this repo. It may not be possible/feasible since they're technically Go templates that compile into YAML files, but maybe there's something helpful that can be done.


Yamllint has a rule that requires two spaces between content and comments by default.

Originally posted by @AbrohamLincoln in #57 (comment)

Add support for storage.redirect.disable=true

Hello,

Thank you for providing this Helm Chart to the community. We'd like to see support for storage.redirect.disable=true as it would help in situations where S3 storage is not directly accessible to clients. Sorry for not providing the PR as it's a little beyond my skills.
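
In the meantime, a similar effect might be achievable through the chart's configData value (see the configuration table), since the upstream registry configuration has a storage.redirect section; this is a sketch under the assumption that the chart merges your configData over its defaults:

configData:
  storage:
    redirect:
      disable: true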

How to add private CA

Hi, is it possible to add a private CA chain cert for trust?
I added mine via a ConfigMap to /etc/ssl/certs/; however, I'm still getting:
http: TLS handshake error from 10.42.8.197:45314: remote error: tls: bad certificate
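
One possible approach (an untested sketch using the chart's extraVolumes/extraVolumeMounts values from the table above; the ConfigMap name, key, and file name are placeholders) is to mount the CA bundle where the base image looks for trusted certificates:

extraVolumes:
  - name: private-ca
    configMap:
      name: private-ca-bundle
extraVolumeMounts:
  - name: private-ca
    mountPath: /etc/ssl/certs/private-ca.pem
    subPath: ca.pem
    readOnly: true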

coalesce.go:160: warning: skipped value for updateStrategy: Not a table.

I tried to upgrade from stable repo to this chart, but I get a warning:

coalesce.go:160: warning: skipped value for updateStrategy: Not a table.

The chart gets upgraded, but I cannot verify whether the updateStrategy was honored. Did anything change in this chart compared to stable?

This is the updateStrategy from my values.yaml:

updateStrategy:                                                                 
  type: Recreate       

Configuring tls ingress

Currently the ingress.yaml template has a tls section as follows:

{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}

This expects ingress.tls to be a single value, but it should be an array, and the template rewritten like:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
  - hosts:
{{- range .hosts }}
      - {{ . | quote }}
{{- end }}
    secretName: {{ .secretName }}
{{- end }}
{{- end }}

Can you update the helm chart?
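
For context, the corresponding ingress.tls values would then be a list, e.g. (the hostname and secret name are placeholders):

ingress:
  tls:
    - hosts:
        - registry.example.com
      secretName: registry-tls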

Persistence volume claim with longhorn not working

I have tried to deploy the chart with a PVC on k3s with Rancher and Longhorn, but keep getting the error:

500 Internal Server Error
POST https://docker.host/v2/app/blobs/uploads/
{"errors":[{"code":"UNKNOWN","message":"unknown error","detail":{"DriverName":"filesystem","Enclosed":{"Op":"mkdir","Path":"/var/lib/registry/docker/registry/v2/repositories/app/_uploads/29ef60bf-e509-4832-a2eb-d60ec8d3e58e","Err":28}}}]}

login does not work somehow

  • docker info
...
 Insecure Registries:
  127.0.0.1:5000
  127.0.0.0/8
...
  • values.yaml
secrets:
  htpasswd: flux:$2y$05$BwBTGNGhCxcgimdKnN1TbuB8tpw/Zj7Yyzwve4.vYYX9FhcIX8VyS
persistence:
    enabled: true
    storageClass: csi-driver-nfs
    size: 1G
  • verify
% echo 'flux:$2y$05$BwBTGNGhCxcgimdKnN1TbuB8tpw/Zj7Yyzwve4.vYYX9FhcIX8VyS' > test.htpasswd
% htpasswd -vb ./test.htpasswd flux testtest123
Password for user flux correct.
  • login via port-forward
kubectl -n docker-registry port-forward svc/cluster0-docker-registry  5000:5000

% docker login -u flux 127.0.0.1:5000
Password: 
Error response from daemon: Get "http://127.0.0.1:5000/v2/": dial tcp 127.0.0.1:5000: connect: connection refused
  • exec -- ps
PID   USER     TIME  COMMAND
    1 1000      0:10 /bin/registry serve /etc/docker/registry/config.yml
   69 1000      0:00 ash
   75 1000      0:00 ps
  • exec -- echo $REGISTRY_AUTH_HTPASSWD_PATH
/auth/htpasswd
  • exec -- cat /auth/htpasswd
flux:$2y$05$BwBTGNGhCxcgimdKnN1TbuB8tpw/Zj7Yyzwve4.vYYX9FhcIX8VyS
  • exec -- nc -w2 -z 0.0.0.0 5000 && echo true
true
  • % nc -w2 -z 127.0.0.1 5000 && echo true
true
  • % curl -vvv 127.0.0.1:5000/v2/
*   Trying 127.0.0.1:5000...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET /v2/ HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/8.1.2
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Basic realm="Registry Realm"
< X-Content-Type-Options: nosniff
< Date: Wed, 10 Jan 2024 13:32:43 GMT
< Content-Length: 87
< 
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
* Connection #0 to host 127.0.0.1 left intact

why tho?

Add namespaces to metadata

None of the template files have the namespace specified. This means that non-default namespaces are not respected when using this chart.

Proposed fix:

Add this to all template YAML files:

metadata:
    namespace: {{ .Release.Namespace }}

Happy to raise a PR for this if it would be accepted

Chart ingress not working in latest k8s

According to the k8s deprecation guide: https://kubernetes.io/docs/reference/using-api/deprecation-guide/

The extensions/v1beta1 and networking.k8s.io/v1beta1 API versions of Ingress are no longer served as of v1.22.
Migrate manifests and API clients to use the networking.k8s.io/v1 API version, available since v1.19.

Fix should be applied here:

apiVersion: {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }} networking.k8s.io/v1beta1 {{- else }} extensions/v1beta1 {{- end }}

And probably here:
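
Whatever the affected templates are, a hypothetical sketch of an apiVersion selection that prefers the GA API when the cluster serves it (not the chart's actual code):

apiVersion: {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} networking.k8s.io/v1 {{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }} networking.k8s.io/v1beta1 {{- else }} extensions/v1beta1 {{- end }}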

htpasswd generation example does not work

The example of htpasswd file generation does not work:

$ sudo docker run --entrypoint htpasswd registry:2 -Bbn user password > ./htpasswd
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "htpasswd": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled

I tested that with a recent image:

REPOSITORY                                         TAG                            IMAGE ID       CREATED         SIZE
registry                                           2                              1fd8e1b0bb7e   5 months ago    26.2MB
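
The README's current example works around this by using the httpd:2 image, which does ship htpasswd:

docker run --entrypoint htpasswd httpd:2 -Bbn user password > ./htpasswd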

Move chart URL to github.io

Many thanks to all for keeping the Helm charts up to date.

I have one request: it would be more sensible to use github.io (e.g. twuni.github.io) instead of the private domain helm.twun.io.

It can always happen that a private domain is not renewed (it happens to the best of us). This would increase trust in this chart. Also, I can imagine scenarios where CI pipelines run on systems that can only access specific domains.

Helm chart package recursively contains tarball of itself

Helm chart package recursively contains tarball of itself.

How to reproduce:

helm fetch twuni/docker-registry --untar
cd docker-registry
ls -1

Output:

Chart.yaml
LICENSE
README.md
docker-registry-1.10.1.tgz
templates
values.yaml

but:

% tar -tzvf docker-registry-1.10.1.tgz
-rw-r--r--  0 0      0         385 15 Feb 04:01 docker-registry/Chart.yaml
-rw-r--r--  0 0      0        3199 15 Feb 04:01 docker-registry/values.yaml
-rw-r--r--  0 0      0        1477 15 Feb 04:01 docker-registry/templates/NOTES.txt
-rw-r--r--  0 0      0         785 15 Feb 04:01 docker-registry/templates/_helpers.tpl
-rw-r--r--  0 0      0         345 15 Feb 04:01 docker-registry/templates/configmap.yaml
-rw-r--r--  0 0      0        7981 15 Feb 04:01 docker-registry/templates/deployment.yaml
-rw-r--r--  0 0      0        1203 15 Feb 04:01 docker-registry/templates/ingress.yaml
-rw-r--r--  0 0      0         536 15 Feb 04:01 docker-registry/templates/poddisruptionbudget.yaml
-rw-r--r--  0 0      0         770 15 Feb 04:01 docker-registry/templates/pvc.yaml
-rw-r--r--  0 0      0        1555 15 Feb 04:01 docker-registry/templates/secret.yaml
-rw-r--r--  0 0      0        1660 15 Feb 04:01 docker-registry/templates/service.yaml
-rw-r--r--  0 0      0         217 15 Feb 04:01 docker-registry/.circleci/config.yml
-rw-r--r--  0 0      0         333 15 Feb 04:01 docker-registry/.helmignore
-rw-r--r--  0 0      0       11343 15 Feb 04:01 docker-registry/LICENSE
-rw-r--r--  0 0      0        8793 15 Feb 04:01 docker-registry/README.md
-rw-r--r--  0 0      0       23567 15 Feb 04:01 docker-registry/docker-registry-1.10.1.tgz
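
A likely cause is that a previously packaged .tgz was sitting in the chart directory when helm package ran; one way to guard against that (an assumption, not a confirmed fix) is to exclude tarballs via .helmignore:

# .helmignore
*.tgz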

secrets.htpasswd - htpasswd: invalid entry at line 1: "./htpasswd"

Issue

When I create an htpasswd file and then pass it to the helm chart:

htpasswd -Bbc htpasswd cmoulliard dabou
cat <<EOF > registry-values.yaml 
service:
  type: NodePort
  nodePort: 31000
secrets:
  htpasswd: ./htpasswd
persistence:
  size: 10Gi
EOF

kc create ns default
helm install registry twuni/docker-registry -n default --values registry-values.yaml

the pod log reports this error when docker login 95.217.159.244:31000 -u cmoulliard -p dabou is called:

time="2021-03-05T17:58:57.643883309Z" level=warning msg="error authorizing context: basic authentication challenge for realm "Registry Realm": invalid authorization credential" go.version=go1.11.2 http.request.host="95.217.159.244:31000" http.request.id=b36c3cb6-0b4d-4426-8464-ed3490c90a60 http.request.method=GET http.request.remoteaddr="10.244.0.1:50599" http.request.uri="/v2/" http.request.useragent="docker/20.10.5 go/go1.13.15 git-commit/363e9a8 kernel/3.10.0-1160.15.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.5 \(linux\))" 
10.244.0.1 - - [05/Mar/2021:17:58:57 +0000] "GET /v2/ HTTP/1.1" 400 0 "" "docker/20.10.5 go/go1.13.15 git-commit/363e9a8 kernel/3.10.0-1160.15.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.5 \\(linux\\))"

time="2021-03-05T17:58:57.648234035Z" level=error msg="error checking authorization: htpasswd: invalid entry at line 1: "./htpasswd"" go.version=go1.11.2 http.request.host="95.217.159.244:31000" http.request.id=2446f99d-f6b1-4953-8213-363f96465735 http.request.method=GET http.request.remoteaddr="10.244.0.1:51685" http.request.uri="/v2/" http.request.useragent="docker/20.10.5 go/go1.13.15 git-commit/363e9a8 kernel/3.10.0-1160.15.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.5 \(linux\))" 

Add support for custom envFrom secretRef's and configMapRef's

Hi,

Would it be possible to add custom envFrom support in the future?

Background: I am running rook-ceph (an operator for running the distributed storage system Ceph).
It has an ObjectBucketClaim (OBC) that can create S3 buckets. The OBC in turn creates a ConfigMap and a Secret containing the following keys: BUCKET_REGION, BUCKET_HOST, BUCKET_PORT, BUCKET_NAME, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. To use these with this helm chart I currently have to apply the OBC first, open the ConfigMap and Secret, copy the values, insert them into this chart's values, and only then install the chart. That makes it impossible for me to automate.

If you could make it possible to add additional envFrom[].secretRef and envFrom[].configMapRef entries to the values that are applied to the pod's container spec, then I could easily use the above-mentioned auto-generated keys as values, like so:

secrets.s3.accessKey=$(AWS_ACCESS_KEY_ID)
secrets.s3.secretKey=$(AWS_SECRET_ACCESS_KEY)
s3.region=$(BUCKET_REGION)
s3.regionEndpoint=$(BUCKET_HOST)
s3.bucket=$(BUCKET_NAME)

Thanks!

Adding annotations for deployment to values

It would be great to have annotations for the Deployment itself. As there is currently no way to do that, we would have to clone and integrate the entire chart into our parent chart, which is a mess just to change a few lines.

For example, the wave project for Kubernetes requires a certain annotation on Deployments/StatefulSets in order to be active.
As annotations are available for almost all other resources, this would be a small addition that would help a lot!
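
For what it's worth, the chart now exposes deployment.annotations (see the configuration table above); a sketch for the wave use case, with the annotation key included as an assumption to be checked against wave's documentation:

deployment:
  annotations:
    wave.pusher.com/update-on-config-change: "true"  # assumed annotation key; check wave's docs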

hpa not created in same namespace

The HPA is created in the default namespace (when setting the namespace through the values file) instead of the given namespace, hence this error:

Type Reason Age From Message
Warning FailedGetScale 11s (x161 over 40m) horizontal-pod-autoscaler deployments/scale.apps "my-release-name" not found

Beginner Configuration Help

๐Ÿ‘‹ Hello,

I'm totally new to helm in general. Hopefully there is some documentation you can point me toward.

I'm trying to follow a Linode guide to set up a Docker registry. It mentions this chart as an alternative, but I'm not sure how to make it work.

The original example is a configuration with this command:

$ helm install docker-registry stable/docker-registry -f registry/docker-configs.yml
Error: INSTALLATION FAILED: repo stable not found

I am not sure how to correct the error. However, the article mentions helm repo add twuni https://helm.twun.io as an alternative. I'm not sure how to continue from here. I have attempted the following.

$ helm install docker-registry stable/docker-registry -f registry/docker-configs.yml
Error: INSTALLATION FAILED: repo stable not found

$ helm install twuni/docker-registry stable/docker-registry -f registry/docker-configs.yml
Error: INSTALLATION FAILED: repo stable not found

$ helm install twuni/docker-registry docker-registry -f registry/docker-configs.yml     
Error: INSTALLATION FAILED: non-absolute URLs should be in form of repo_name/path_to_chart, got: docker-registry

$ helm install twuni/docker-registry                                        
Error: INSTALLATION FAILED: must either provide a name or specify --generate-name

$ helm install twuni/docker-registry -f registry/docker-configs.yml 
Error: INSTALLATION FAILED: must either provide a name or specify --generate-name
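
For reference, a sequence that should work (a sketch: add the twuni repo, then install with an explicit release name and your values file):

helm repo add twuni https://helm.twun.io
helm repo update
helm install docker-registry twuni/docker-registry -f registry/docker-configs.yml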

Chart always expects accessKey and secretKey to be defined when using s3 storage

The current chart always expects secrets.s3.accessKey and secrets.s3.secretKey to be defined when using s3 storage, which can break if you rely on EC2 instance profiles.

The diff below checks to make sure .Values.secrets.s3 is defined before using it, which appears to resolve my issue.

diff -uNrp twuni-docker-registry.helm-cb69658/templates/deployment.yaml twuni-docker-registry.helm-cb69658-working/templates/deployment.yaml
--- twuni-docker-registry.helm-cb69658/templates/deployment.yaml        2022-08-09 17:13:42.000000000 +0000
+++ twuni-docker-registry.helm-cb69658-working/templates/deployment.yaml        2022-08-15 19:01:12.357116958 +0000
@@ -124,6 +124,7 @@ spec:
                   name: {{ template "docker-registry.fullname" . }}-secret
                   key: azureContainer
 {{- else if eq .Values.storage "s3" }}
+            {{- if .Values.secrets.s3 }}
             {{- if or (and .Values.secrets.s3.secretKey .Values.secrets.s3.accessKey) .Values.secrets.s3.secretRef }}
             - name: REGISTRY_STORAGE_S3_ACCESSKEY
               valueFrom:
@@ -136,6 +137,7 @@ spec:
                   name: {{ if .Values.secrets.s3.secretRef }}{{ .Values.secrets.s3.secretRef }}{{ else }}{{ template "docker-registry.fullname" . }}-secret{{ end }}
                   key: s3SecretKey
             {{- end }}
+            {{- end }}
             - name: REGISTRY_STORAGE_S3_REGION
               value: {{ required ".Values.s3.region is required" .Values.s3.region }}
           {{- if .Values.s3.regionEndpoint }}
diff -uNrp twuni-docker-registry.helm-cb69658/templates/secret.yaml twuni-docker-registry.helm-cb69658-working/templates/secret.yaml
--- twuni-docker-registry.helm-cb69658/templates/secret.yaml    2022-08-09 17:13:42.000000000 +0000
+++ twuni-docker-registry.helm-cb69658-working/templates/secret.yaml    2022-08-15 18:58:39.077118130 +0000
@@ -25,7 +25,7 @@ data:
   azureAccountKey: {{ .Values.secrets.azure.accountKey | b64enc | quote }}
   azureContainer: {{ .Values.secrets.azure.container | b64enc | quote }}
     {{- end }}
-  {{- else if eq .Values.storage "s3" }}
+  {{- else if and (eq .Values.storage "s3") .Values.secrets.s3 }}
     {{- if and .Values.secrets.s3.secretKey .Values.secrets.s3.accessKey }}
   s3AccessKey: {{ .Values.secrets.s3.accessKey | b64enc | quote }}
   s3SecretKey: {{ .Values.secrets.s3.secretKey | b64enc | quote }}

Please let me know if I'm doing it wrong or missed some documentation. Thanks!

Feature Request: Add support for registry certificates

I'd like to ask for the option to provide my own certificates to the registry. Almost everything is there already. I'm using it like this in the values file:

extraEnvVars:
  - name: REGISTRY_HTTP_TLS_CERTIFICATE
    value: "/certs/tls.crt"
  - name: REGISTRY_HTTP_TLS_KEY
    value: "/certs/tls.key"
extraVolumes:
  - name: registry-tls
    secret:
      secretName: registry-tls
extraVolumeMounts:
  - mountPath: /certs
    name: registry-tls
    readOnly: true

The only missing thing is that extra registry-tls secret, with something like this in the values.yaml:

certs: {}
  # tls.crt: |
  #   your base64 encoded crt file
  # tls.key: |
  #   your base64 encoded key file

and a new secret in the templates. Something like this:

...
{{- with .Values.certs }}
data:
  {{- toYaml . | nindent 2 }}
{{- end }}

Maybe even the extra definitions could be autogenerated as well if .Values.certs is not empty. But I'm just guessing here; I have never written a chart before.

Enabling proxy configuration causes config issue

Applying the chart with:

proxy:
  enabled: false
  password: ''
  remoteurl: 'https://registry-1.docker.io'
  secretRef: ''
  username: ''

Succeeds without issue. However when trying to apply proxy settings:

proxy:
  enabled: true
  password: 'mypassword'
  remoteurl: 'https://registry-1.docker.io'
  secretRef: ''
  username: 'myusername'

Results in the pod entering CrashLoopBackOff state, logging the following:

kubectl logs docker-registry-1-1629637896-74cbf6d4b-2vkv8 -n dockercache-system                                          
configuration error: error parsing /etc/docker/registry/config.yml: yaml: found unexpected non-alphabetical character

Usage: 
  registry serve <config> [flags]
Flags:
  -h, --help=false: help for serve


Additional help topics:

Configmap generated:

apiVersion: v1
data:
  config.yml: |-
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
    http:
      addr: :5000
      headers:
        X-Content-Type-Options:
        - nosniff
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
    version: 0.1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: docker-registry-1-1629638370
    meta.helm.sh/release-namespace: dockercache-system
  creationTimestamp: "2021-08-22T13:19:30Z"
  labels:
    app: docker-registry
    app.kubernetes.io/managed-by: Helm
    chart: docker-registry-1.12.0
    heritage: Helm
    release: docker-registry-1-1629638370
  name: docker-registry-1-1629638370-config
  namespace: dockercache-system
  resourceVersion: "57528303"
  uid: ae665539-d33a-4747-ab97-3c2610077128

Somewhat new to Kubernetes: I successfully set up a dev cluster, built a container with my web API, installed this via Helm, and pushed to this registry. How do I access it via Kubernetes?

I tried this:

    spec:
      containers:
      - image: registry-docker-registry/my_image:latest
        name: my_image

I see this in the documentation at the end:

To generate htpasswd file, run this docker command: docker run --entrypoint htpasswd registry:2 -Bbn user password > ./htpasswd.

I didn't expect an extra step to configure credentials because I was able to upload to the registry immediately after installing.

Do I need to generate an htpasswd file?

No documentation for pulling an image from the registry

Hello everyone,
I am using this helm chart to set up a local docker registry inside my cluster. After I installed the chart, it asks to enter the following two commands:

export POD_NAME=$(kubectl get pods --namespace default -l "app=docker-registry,release=docker-registry" -o jsonpath="{.items[0].metadata.name}")

kubectl -n default port-forward $POD_NAME 8080:5000

Then I was able to push my locally created image to the registry with the following command:
docker push 127.0.0.1:8080/random-scheduler:v1
Now I can pull or push the image using docker, but when I reference this image inside a deployment I get an error. The following is the deployment file that uses random-scheduler:v1 image:

apiVersion: v1
kind: ReplicationController
metadata:
  name: random-scheduler
spec:
  replicas: 1
  selector:
    app: random-scheduler
  template:
    metadata:
      name: random-scheduler
      labels:
        app: random-scheduler
    spec:
      containers:
      - name: random-scheduler-container
        image: 127.0.0.1:8080/random-scheduler:v1
        ports:
        - containerPort: 9999

When I apply this file, the following is the output of kubectl describe of the pod:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  14s                default-scheduler  Successfully assigned default/random-scheduler-cfqvr to k8s-n-2
  Normal   BackOff    11s (x2 over 13s)  kubelet            Back-off pulling image "127.0.0.1:8080/random-scheduler:v1"
  Warning  Failed     11s (x2 over 13s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    0s (x2 over 13s)   kubelet            Pulling image "127.0.0.1:8080/random-scheduler:v1"
  Warning  Failed     0s (x2 over 13s)   kubelet            Failed to pull image "127.0.0.1:8080/random-scheduler:v1": rpc error: code = Unknown desc = failed to pull and unpack image "127.0.0.1:8080/random-scheduler:v1": failed to resolve reference "127.0.0.1:8080/random-scheduler:v1": failed to do request: Head http://127.0.0.1:8080/v2/random-scheduler/manifests/v1: dial tcp 127.0.0.1:8080:
connect: connection refused
  Warning  Failed     0s (x2 over 13s)   kubelet            Error: ErrImagePull

What is the correct way to reference this image inside my deployment?

Deploying a pod from the Docker registry in the same k8s cluster, but the service name cannot be reached

I have an issue reaching the registry host name.
I am using this docker-registry in a KinD cluster with 3 nodes (1 control plane and two workers).

I push images from my local machine (I map the registry's port 5000 to registry.k8s.com via ingress):

docker push registry.k8s.com/grpc-server:V1.2
The push refers to repository [registry.k8s.com/grpc-server]
55344028772e: Pushed
d319ed48691a: Pushed
6470ba8155ed: Pushed
c7bd51621c7a: Pushed
02948dacdd5e: Pushed
c2513ca213d4: Pushed
f618de1e6ce3: Pushed
53cb729241a4: Pushed
c3c01c74818a: Pushed
f83139632251: Pushed
b6f786c730a9: Pushed
63a6bdb95b08: Pushed
8d3ac3489996: Pushed
V1.2: digest: sha256:63586fceda317419374bdabbfd915d8612a6a623fb0bdd4eddd5167fe3d52817 size: 3035

The push succeeds.
Then I check the repository:

curl registry.k8s.com/v2/grpc-server/tags/list
{"name":"grpc-server","tags":["V1.2"]}

So the image exists.

And my registry service is:

kubectl get svc/my-docker-registry
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
my-docker-registry   ClusterIP   10.96.47.9   <none>        5000/TCP   47m

I want to run this image as a pod:
kubectl run test-pod --image=my-docker-registry:5000/grpc-server:V1.2 --restart=Never

Then the deployment fails:

kubectl get pods/test-pod
NAME       READY   STATUS             RESTARTS   AGE
test-pod   0/1     ImagePullBackOff   0          18s

Then I describe the pod:

kubectl describe  pod test-pod
Name:         test-pod
Namespace:    default
Priority:     0
Node:         kind-worker2/172.19.0.3
Start Time:   Tue, 02 Aug 2022 16:17:44 +0800
Labels:       run=test-pod
Annotations:  <none>
Status:       Pending
IP:           10.244.2.12
IPs:
  IP:  10.244.2.12
Containers:
  test-pod:
    Container ID:
    Image:          my-docker-registry:5000/grpc-server:V1.2
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mvf72 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-mvf72:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  53s                default-scheduler  Successfully assigned default/test-pod to kind-worker2
  Normal   BackOff    24s (x2 over 52s)  kubelet            Back-off pulling image "my-docker-registry:5000/grpc-server:V1.2"
  Warning  Failed     24s (x2 over 52s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    12s (x3 over 53s)  kubelet            Pulling image "my-docker-registry:5000/grpc-server:V1.2"
  Warning  Failed     12s (x3 over 52s)  kubelet            Failed to pull image "my-docker-registry:5000/grpc-server:V1.2": rpc error: code = Unknown desc = failed to pull and unpack image "my-docker-registry:5000/grpc-server:V1.2": failed to resolve reference "my-docker-registry:5000/grpc-server:V1.2": failed to do request: Head "https://my-docker-registry:5000/v2/grpc-server/manifests/V1.2": dial tcp: lookup my-docker-registry on 192.168.65.2:53: no such host
  Warning  Failed     12s (x3 over 52s)  kubelet            Error: ErrImagePull

It looks like the service name cannot be reached while pulling the image.
Did I do something wrong, or do I need to configure something more?

Any information will be appreciated.

Call for code reviewers!

As of right now, we've got about 8 open PRs. Let's get those reviewed and approved+merged or closed! If you're one of those contributors, thank you for your patience and for your work.

I'm not an active user of this chart myself, but if you're reading this, then you probably are. I need more folks like you to help review PRs. There are changesets more ambitious than I'm personally comfortable reviewing on my own, but if they're to benefit the community, they need 👀.

If you're interested, please go right ahead and review some PRs!

There aren't currently any formal contribution guidelines [yet] for this repo, so I'll just outline the one I think is of utmost importance:

Whether you're a PR author or a reviewer, show respect and gratitude for one another. Move forward in good faith. All opinions and voices are welcome here. That said, please be willing to give and receive criticism from time to time, and do so in a kind, respectful, and gracious manner.

Garbage collection cronjob leaves registry in an inconsistent state

Prerequisites

  • kubectl
  • skopeo
  • A deployed registry configured with persistence.deleteEnabled = true and garbageCollect.enabled = true

Steps to reproduce

Setup

export REGISTRY=$NAME_OF_REGISTRY_INGRESS
export REGISTRY_POD=$NAME_OF_REGISTRY_POD_FROM_KUBECTL

docker login $REGISTRY
docker pull hello-world:latest
docker tag hello-world:latest ${REGISTRY}/hello-world:latest
docker push ${REGISTRY}/hello-world:latest
skopeo delete docker://${REGISTRY}/hello-world:latest
kubectl exec $REGISTRY_POD -- /bin/registry garbage-collect --delete-untagged=true /etc/docker/registry/config.yml

Test

docker push ${REGISTRY}/hello-world:latest
docker pull ${REGISTRY}/hello-world:latest

Expected result

Success

Actual behavior

It fails, claiming layers already exist, etc.

Workaround

Restarting the registry after garbage-collection makes it work as expected:

kubectl delete pod $REGISTRY_POD ; sleep 5  # XXX: restart registry
docker push ${REGISTRY}/hello-world:latest
docker pull ${REGISTRY}/hello-world:latest  # XXX: success
