
Web application for Kubernetes CLI configuration with OIDC

License: Apache License 2.0

Go 60.32% Makefile 4.33% Shell 16.58% Dockerfile 0.58% HTML 12.34% CSS 0.67% JavaScript 0.41% Mustache 4.76%
dex loginapp docker kubernetes oidc

loginapp's Introduction

Loginapp

Web application for Kubernetes CLI configuration with OIDC


Loginapp Demo

Usage

Perform configuration checks and run Loginapp.

Loginapp supports three configuration formats:
* Configuration file: '--config' flag
* Flags: '--oidc-xxx' flags for example
* Environment vars: each flag provides an environment var with
  'LOGINAPP_' prefix.
  Ex: '--oidc-client-secret' --> 'LOGINAPP_OIDC_CLIENT_SECRET'

Configuration precedence: flags > environment vars > configuration file

Usage:
  loginapp serve [flags]

Flags:
  -c, --config string                            Configuration file
  -h, --help                                     help for serve
  -l, --listen string                            Listen interface and port (default "0.0.0.0:8080")
      --metrics-port int                         Port to export metrics (default 9090)
  -n, --name string                              Application name. Used for web title. (default "Loginapp")
      --oidc-client-id string                    Client ID (default "loginapp")
      --oidc-client-redirecturl string           Redirect URL for callback. This must be the same as the one provided to the IDP. Must end with '/callback'
      --oidc-client-secret string                Client secret
      --oidc-crossclients strings                Issue token on behalf of this list of client IDs
      --oidc-extra-authcodeopts stringToString   K/V list of extra authorization code options to include in the token request (default [])
      --oidc-extra-scopes strings                [DEPRECATED] List of extra scopes to request. Use the oidc.scopes option instead. This option will be removed in the next release.
      --oidc-issuer-insecureskipverify           Skip issuer certificate validation (useful for testing). It is not advised to use this option in production
      --oidc-issuer-rootca string                Certificate authority of the issuer
      --oidc-issuer-url string                   Full URL of issuer before '/.well-known/openid-configuration' path
      --oidc-offlineasscope                      Issue a refresh token for offline access
      --oidc-scopes strings                      List of scopes to request. Updating this parameter will override existing scopes. (default [openid,profile,email,groups])
  -s, --secret string                            Application secret. Must be identical across all loginapp server replicas (this is not the OIDC Client secret)
      --tls-cert string                          TLS certificate path
      --tls-enabled                              Enable TLS
      --tls-key string                           TLS private key path
      --web-assetsdir string                     Directory to look for assets, which override the embedded ones (default "/web/assets")
      --web-kubeconfig-defaultcluster string     Default cluster name to use for full kubeconfig output
      --web-kubeconfig-defaultnamespace string   Default namespace to use for full kubeconfig output (default "default")
      --web-mainclientid string                  Application client ID
      --web-mainusernameclaim string             Claim to use for username (depends on IDP available claims) (default "email")
      --web-templatesdir string                  Directory to look for templates, which override the embedded ones (default "/web/templates")

Global Flags:
  -v, --verbose   Verbose output
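
For example, a quick sketch of the precedence rules (all values here are placeholders):

    # config.yaml contains: oidc.client.id: "id-from-file"
    export LOGINAPP_OIDC_CLIENT_ID="id-from-env"
    loginapp serve -c config.yaml --oidc-client-id "id-from-flag"
    # effective client ID: "id-from-flag" (flag > env var > config file)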

Configuration

# Application name
# default: mandatory
name: "Kubernetes Auth"

# Bind IP and port (format: "IP:PORT")
# default: mandatory
listen: "0.0.0.0:5555"

# Application secret. Must be identical across
# all loginapp server replicas ( /!\ this is not the OIDC Client secret)
secret: REDACTED

# OIDC configuration
oidc:

  # Client configuration
  client:
    # Application ID
    # default: mandatory
    id: "loginapp"
    # Application Secret
    # default: mandatory
    secret: REDACTED
    # Application Redirect URL
    # must end with "/callback"
    # default: mandatory
    redirectURL: "https://127.0.0.1:5555/callback"

  # Issuer configuration
  issuer:
    # Location of issuer root CA certificate
    # default: mandatory if insecureSkipVerify is false
    rootCA: "example/ssl/ca.pem"
    # Issuer URL
    # default: mandatory
    url: "https://dex.example.com:5556"
    # Skip certificate validation
    # Default: false
    insecureSkipVerify: false

  # List of scopes to request.
  # Updating this parameter will override existing scopes.
  # Default:[openid,profile,email,groups]
  scopes: []

  # OIDC extra configuration
  extra:
    # [DEPRECATED] OIDC Scopes in addition to
    # "openid", "profile", "email", "groups"
    #
    # Use oidc.scopes instead
    #
    # default: []
    scopes: []

    # Extra auth code options
    # Some extra auth code options are required for:
    # * ADFS compatibility (ex: resource, https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/overview/ad-fs-openid-connect-oauth-flows-scenarios)
    # * Google OIDC compatibility (ex: https://developers.google.com/identity/protocols/oauth2/openid-connect#refresh-tokens)
    # See: 
    # default: {}
    authCodeOpts:
      resource: XXXXX

  # Enable offline scope
  # default: false
  offlineAsScope: true
  # Request token on behalf of other clients
  # default: []
  crossClients: []

# TLS support
tls:
  # Enable TLS termination
  # default: false
  enabled: true
  # Certificate location
  # default: mandatory if tls.enabled is true
  cert: example/ssl/cert.pem
  # Key location
  # default: mandatory if tls.enabled is true
  key: example/ssl/key.pem

# Configure the web behavior
web:
  # ClientID to output (useful for cross_client)
  # default: value of 'oidc.client.id'
  mainClientID: loginapp
  # Claims to use for kubeconfig username.
  # default: email
  mainUsernameClaim: email
  # Kubeconfig output format
  kubeconfig:
    # Change default cluster for kubeconfig context
    # Default: first cluster name in `clusters`
    defaultCluster: mycluster
    # Change default namespace for kubeconfig contexts
    # Default: default
    defaultNamespace: default
    # Change default context for kubeconfig
    # If not set, use a format like 'defaultClusterName'/'usernameClaim'
    # Default: ""
    defaultContext: altcontextname
    # Extra key/value pairs to add to kubeconfig output.
    # Key/value pairs are added under `user.auth-provider.config`
    # dictionary in the kubeconfig.
    # Ex:
    # extraOpts:
    #   mykey1: value1
    #
    # Kubeconfig Output:
    # - name: [email protected]
    #     auth-provider:
    #       config:
    #         mykey1: value1
    #         client-id: loginapp
    #         [...]
    extraOpts: {}

# Metrics configuration
metrics:
  # Port to use. Metrics are available at
  # http://IP:PORT/metrics
  # default: 9090
  port: 9090

# Clusters list for CLI configuration
clusters:
  - name: mycluster
    server: https://mycluster.org
    certificate-authority: |
      -----BEGIN CERTIFICATE-----
      MIIC/zCCAeegAwIBAgIULkYvGJPRl50tMoVE4BNM0laRQncwDQYJKoZIhvcNAQEL
      BQAwDzENMAsGA1UEAwwEbXljYTAeFw0xOTAyMTgyMjA5NTJaFw0xOTAyMjgyMjA5
      NTJaMA8xDTALBgNVBAMMBG15Y2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
      -----END CERTIFICATE-----
    insecure-skip-tls-verify: false
    # Alternative context name for this cluster
    contextName: altcontextname
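
For reference, here is a sketch of the kubeconfig loginapp renders for a cluster entry like the one above. This is illustrative only: the bracketed values are placeholders, and the exact output depends on the version and claims.

    apiVersion: v1
    kind: Config
    current-context: mycluster/<usernameClaim>
    contexts:
    - name: mycluster/<usernameClaim>
      context:
        cluster: mycluster
        user: <usernameClaim>
        namespace: default
    clusters:
    - name: mycluster
      cluster:
        server: https://mycluster.org
        certificate-authority-data: <base64 CA>
    users:
    - name: <usernameClaim>
      user:
        auth-provider:
          name: oidc
          config:
            client-id: loginapp
            id-token: <token>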

Deployment

Dev

Manage dependencies

Loginapp uses Go modules to manage dependencies.

  # Retrieve dependencies (vendor)
  go mod vendor
Compile, configure and run

Configuration files are located in the example directory.

  $ make

Also run gofmt before any new commit:

  make gofmt
Dev env

Loginapp uses kind and skaffold for development environment.

Setup steps:

  1. Launch a kind cluster:

    $ test/kubernetes/kindup.sh
    $ kubectl get node
    NAME                     STATUS   ROLES    AGE   VERSION
    loginapp-control-plane   Ready    master   25m   v1.17.0
  2. Generate Dex & Loginapp certificates and configuration for the dev env:

    $ test/genconf.sh
    [...]
    Creating TLS secret for loginapp
    Generating dex and loginapp configurations
    [...]
  3. Launch skaffold:

  • For local dev, launch just dex:

    # Deploy dex
    $ skaffold run -p dex
  • To test kubernetes deployment, launch dex and loginapp:

    # Deploy dex and loginapp
    $ skaffold run -p dex,loginapp
  • Test helm deployment:

    # Deploy dex and loginapp
    $ skaffold run -p helm
  4. [local] Compile and run loginapp:

    $ make
    # A default configuration is generated at test/generated/loginapp-config-manual.yaml
    $ ./build/loginapp -v serve [-c test/generated/loginapp-config-manual.yaml]
    [...]
    {"level":"info","msg":"export metric on http://0.0.0.0:9090","time":"2020-04-28T18:19:19+02:00"}
    {"level":"info","msg":"listening on https://0.0.0.0:8443","time":"2020-04-28T18:19:19+02:00"}
    [...]
  5. Access loginapp UI:

  6. Default user/password configured by Dex is:

Alternatives

Other projects performing OIDC authentication:

MISC

The code base of this repository uses some source code from the original dexidp/dex repository.

loginapp's People

Contributors

aveyrenc, bsnape, fydrah, imunhatep, rinrailin, robbiemcmichael


loginapp's Issues

Use base64 for cluster certificate authority

We are automating the deployment of loginapp + dex and have difficulty replacing the cluster certificate-authority strings.

          -----BEGIN CERTIFICATE-----
          MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
          ******
          -----END CERTIFICATE-----

The PEM format has many lines and newlines, which makes it hard for our automation to replace the certificate-authority strings for different clusters.
Do you support a base64-encoded string for certificate-authority? Like:

    # Clusters list for CLI configuration
    clusters:
      - name: test1
        server: https://****:6443
        certificate-authority: |
          <base64 encoded string>
        insecure-skip-tls-verify: false
        # Alternative context name for this cluster
        contextName: test

or any workaround would be much appreciated.

Thank you
Henry
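
One possible workaround, assuming yq v4 is available, is to keep the PEM in a separate file and inject it at deploy time instead of templating the multi-line string (paths here are hypothetical):

    # write the per-cluster PEM file verbatim into the config
    CA=$(cat ca.pem) yq eval \
      '.clusters[0].certificate-authority = strenv(CA)' -i config.yaml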

Add config parameter for context name in generated kubeconfig

Is your feature request related to a problem? Please describe.
Currently context name in generated kubeconfig consists of <cluster_name>/<username>, eg.:

apiVersion: v1
kind: Config
contexts:
- name: mycluster1-us-east4-a/[email protected]
  context:
    user: [email protected]
    cluster: mycluster1-us-east4-a
    namespace: default

Usually the username part is redundant and makes context names quite long. It would be useful to have an option controlling how the context name is built. In my case I would use just the cluster name as the context name, e.g.:

apiVersion: v1
kind: Config
contexts:
- name: mycluster1-us-east4-a
  context:
    user: [email protected]
    cluster: mycluster1-us-east4-a
    namespace: default

The parameter could live under config as e.g. web.kubeconfig.contextName.

Related code: https://github.com/fydrah/loginapp/blob/master/web/templates/token.html#L109

Is it possible to consider such change?

dns lookup broken

I'm getting "dial tcp: lookup dex.example.com on 169.254.25.10:53: no such host" where example.com is actually a real domain, but I have no idea where 169.254.25.10 is coming from. There's no dns on that IP that would be able to answer and as a result loginapp does not start up.

Googling around tells me this is somehow related to the Docker engine config, but I find this unlikely since this particular container is the only one having this issue.

I tried rebuilding it with an added /etc/resolv.conf, but it is completely ignored (or somehow overwritten).

Next I'll dig into the go source ...

Any hints appreciated.
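
For what it's worth, 169.254.25.10 is the link-local address typically used by node-local DNS caches (e.g. kubespray's nodelocaldns), so the pod is most likely inheriting the cluster's DNS settings rather than Docker's. A hedged sketch of forcing custom resolvers in the pod spec (addresses are placeholders):

    spec:
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8        # a resolver that can answer for dex.example.com
        searches:
        - example.com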

{"level":"fatal","msg":"'Web.Kubeconfig.ExtraOpts' expected a map, got 'string'"}

I'm trying to migrate from the old loginapp "objectiflibre/login-app:latest", and cannot get this one here to work. What I am currently getting is

{"level":"fatal","msg":"1 error(s) decoding:\n\n* 'Web.Kubeconfig.ExtraOpts' expected a map, got 'string'","time":"2021-01-02T23:33:48Z"}

in the logs of the loginapp Kubernetes pod, the same as in the previous issue. I tried adding the suggested snippet to the config, but I still get the same error.

Any help appreciated, in particular a suggestion for a minimal working example for a config file which can be mounted to /app/config.yaml, using a deployment applied directly with kubectl.

Loginapp & Dex version
loginapp: latest (v3.2.0)
dex: latest (not relevant yet I think)

Configuration

Current loginapp (and eventually dex) configuration (without secrets).
I am mounting this at /app/config.yaml inside the container (starting the deployment manually, without helm).

log:
  level: debug
name: "Kubernetes Authentication"
listen: "https://0.0.0.0:32443"
oidc:
  client:
    id: "loginapp"
    secret: REDACTED
    redirect_url: "https://<server>/callback"
  issuer:
    url: "https://dex.auth/dex"
    rootCA: "/etc/ssl/ca.pem"
  scopes:
    - email
    - groups
  web:
    kubeconfig:
      extraOpts: {}
tls:
  enabled: false
clusters:
  - name: mycluster
    server: https://mycluster:7443
    certificate-authority: |
        -----BEGIN CERTIFICATE-----
       ...
        -----END CERTIFICATE-----
    insecure-skip-tls-verify: false
web:
  mainClientID: "loginapp"
  mainUsernameClaim: "email"
  kubeconfig:
    extraOpts: {}

deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dex-loginapp
  namespace: auth
  labels:
    app: dex-loginapp
spec:
  replicas: 1
  ...
    spec:
      containers:
      - name: dex-loginapp
        #image: objectiflibre/login-app:latest
        image: quay.io/fydrah/loginapp:latest
        command: ["/loginapp"]
        args: ["serve", "/app/config.yaml"]
        volumeMounts:
        - mountPath: /etc/ssl/ca.pem
          name: dex-ca
          readOnly: true
        - mountPath: /app/config.yaml
          name: loginapp-config
          readOnly: true
      volumes:
      - hostPath:
          path: /cephfs/certs/dex/ssl/ca.pem
          type: File
        name: dex-ca
      - hostPath:
          path: /cephfs/volumes/dex-loginapp/config.yaml
          type: File
        name: loginapp-config
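
Note that, per the serve flags documented above, the configuration file is passed with -c/--config; a sketch of corrected container args under that assumption:

    command: ["/loginapp"]
    args: ["serve", "-c", "/app/config.yaml"]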

add include context and clusters for easy copy paste yaml file in `kubeconfig`

Is your feature request related to a problem? Please describe.
Would be great to update the kubeconfig to include clusters and contexts.
At the moment we end up passing around our kubeconfig instead of using loginapp for both clusters and contexts.

Contexts are often used to shorten cluster names, so it would be great to also include these in the shared kubeconfig:

clusters:
- cluster:
    server: https://k8s-dev.host.tld:6443
  name: some-long-name
- cluster:
    server: https://prod:6443
  name: prod
- cluster:
    server: https://stage:6443
  name: stage
contexts:
- context:
    cluster: some-long-name
    user: user
  name: dev
- context:
    cluster: prod
    user: user
  name: prod
- context:
    cluster: stage
    user: user
  name: stage
users:
- name: user
  user:
    auth-provider:
      [...]

Describe the solution you'd like
Improve the generated kubeconfig so loginapp becomes the default place to grab your kubeconfig.

Describe alternatives you've considered
Currently we pass the files around in private messages and many other forms.

Issues with LetsEncrypt certificates

Hi,
I have been using your loginapp as part of a k8s-ldap setup; however, I have encountered a problem with handling certificates that were created by cert-manager:

x509: certificate signed by unknown authority

I suspect the problem is that the loginapp image is based on scratch and does not include the Let's Encrypt CA, or any other CA bundle.

Best regards
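
A common fix for scratch-based images, sketched here under the assumption that rebuilding the image is acceptable, is to copy a CA bundle in from a distro image (Go reads /etc/ssl/certs/ca-certificates.crt on Linux):

    FROM alpine:3 AS certs
    RUN apk add --no-cache ca-certificates

    FROM quay.io/fydrah/loginapp:latest
    COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt

Alternatively, the oidc.issuer.rootCA option documented above can point at a CA bundle mounted into the container.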

Web interface looks different than in demo screen shot?

Describe the bug

I have set up the full login framework with dex now and it seems to work. However, after logging in to our cluster, the screen I see looks different from the one in your demo screenshot: there is no menu on the left-hand side, and there is no full kubeconfig for users (just a box with the user credentials and another with a kubectl command line).

When was the interface changed? v3.2.0 cannot parse my config, so I am stuck with v2.5.0; I guess it's due to the old version?

Thanks.

Loginapp & Dex version
loginapp: v2.5.0
dex: v2.27.0

loginapp cannot start

log:
  level: debug
replicas: 1
image: quay.io/fydrah/loginapp:v3.2.3
imagePullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 5555

ingress:
  enabled: false

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

env: {}

args: []

config:
  # Application name, defaults to Release name
  name:
  # Application secret
  # Use an existing secret for the loginapp secret and OIDC secret
  existingSecret:
  # if empty, generate a random string.
  # Please set up a real secret, otherwise helm will generate
  # a new secret at each deployment.
  secret: "test123"
  # OIDC Client ID
  clientID: "loginapp"
  # OIDC Client secret
  clientSecret: "clientSecretHere"
  # OIDC Client redirect URL
  # This must end with /callback
  # if empty, defaults to:
  #
  # 1. '{{ .Values.ingress.hosts[0].host }}/callback' if 'ingress.enabled: true' and 'ingress.hosts[0]' exists
  # 2. '{{ .Release.Name }}.{{ .Release.Namespace }}.svc:5555/callback'
  clientRedirectURL: https://loginappURL/callback
  # Issuer root CA configMap
  # ConfigMap containing the root CA
  # and key to use inside the configMap.
  # This configMap must exist
  issuerRootCA: # +doc-gen:break
    configMap: godaddy-ca
    key: ca.crt
  # Skip issuer certificate validation
  # This is useful for testing purposes, but
  # not recommended in production
  issuerInsecureSkipVerify: false
  # Issuer url
  issuerURL: "https://dexUrl:5556"
  # Include refresh token in request
  refreshToken: false
  tls:
    # Enable TLS for deployment
    enabled: false
  clusters:
   - name: appcluster
     server: https://apiserver.app-cluster.capz.io:6443
     certificate-authority: |
       -----BEGIN CERTIFICATE-----
       MIIC6jCCAdKgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
       -----END CERTIFICATE-----
     insecure-skip-tls-verify: false
     # Alternative context name for this cluster
     contextName: app-cluster

# Configuration overrides, this is a free configuration merged
# with the previous generated configuration 'config'. Use this
# to add or overwrites values.
# Example:
#
#  oidc:
#    scopes: [openid,profile,email]
configOverwrites: {}


# Configuration overrides html templates content used by loginapp to display error and token pages
# Example:
#
#  templates:
#    token.html: |-
#      {{`<html>
#      <body>
#      <title>Hello token</title>
#      </body>
#      </html>`}}
templates: {}

# Enable dex deployment
# See https://github.com/dexidp/helm-charts/tree/master/charts/dex
# more information about available values
dex:
  enabled: false

Then I run helm install loginapp loginapp/loginapp -n default --values values.yaml. The pod is in a restart loop. I tried to enable debug logging, but there is no debug output when the pod crashes. The pod health checks are failing, as expected, but I cannot figure out why the loginapp pod cannot start.


It's a bit confusing because I had everything working, then deleted it and started over just as practice and to confirm everything, and ran into this ;)

I tried to deploy with only the required settings in values.yaml, but it still crash loops. What else could cause loginapp to fail to start?

Support oidc client secret from environment variable or file

Is your feature request related to a problem? Please describe.

According to the documentation and examples, the client secret needs to be provided as plain text in the config file. This means the ConfigMap can't be stored in a revision control system, which is an inconvenience.

Describe the solution you'd like

Most k8s workloads allow for sensitive data to come from environment variables or files on disk (via kubernetes Secrets)

I think the simplest solution is to make the oidc client secret field optional in the yaml file and, if it's not set, read the secret from an "OIDC_CLIENT_SECRET" environment variable.

On top of that, maybe it's worth having a structure like this:

oidc:
    client:
        secret-file: /path/to/secretfile

So that users can choose to provide the secret as file on disk.

Describe alternatives you've considered

There are no convenient alternatives I can think of

Additional context
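
Worth noting: the usage section above states that every flag has a LOGINAPP_-prefixed environment variable, so --oidc-client-secret should already be settable as LOGINAPP_OIDC_CLIENT_SECRET. A sketch of wiring it from a Kubernetes Secret (resource names are placeholders):

    env:
    - name: LOGINAPP_OIDC_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: loginapp-oidc
          key: client-secret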

failed to query provider and no secret error

Describe the bug
After deploying the new release of dex 2.30.0 via
https://github.com/dexidp/dex/blob/master/examples/k8s/dex.yaml

There are quite a few unexpected errors in the loginapp logs.

Loginapp & Dex version
loginapp: v3.2.3
dex: v2.30.0

Configuration

Current loginapp (and eventually dex) configuration (without secrets)

oidc:
      client:
        id: "testloginapp"
        # This isn't a real secret as it's given to all users for the refresh token
        secret: ****
        redirectURL: "https://www.dexexample.com:32000/callback"
      issuer:
        rootCA: "/etc/loginapp/cfg/CA.pem"
        url: "https://www.dexexample.com:32000/dexidp"
        insecureSkipVerify: false
      # List of scopes to request.
      # Updating this parameter will override existing scopes.
      # Default:[openid,profile,email,groups]
      scopes: [openid,profile,email,groups]
      offlineAsScope: false
      crossClients: []
    tls:
      enabled: false
    log:
      level: debug
      format: json
    web:
      # ClientID to output (useful for cross_client)
      # default: value of 'oidc.client.id'
      mainClientID: testloginapp
      # Claims to use for kubeconfig username.
      # default: email
      mainUsernameClaim: email
      # Kubeconfig output format
      kubeconfig:
        # Change default cluster for kubeconfig context
        # Default: first cluster name in `clusters`
        defaultCluster: test1
        # Change default namespace for kubeconfig contexts
        # Default: default
        defaultNamespace: default
        # Change default context for kubeconfig
        # If not set, use a format like 'defaultClusterName'/'usernameClaim'
        # Default: ""
        defaultContext: test
        # Extra key/value pairs to add to kubeconfig output.
        # Key/value pairs are added under `user.auth-provider.config`
        # dictionnary into the kubeconfig.
        # Ex:
        # extraOpts:
        #   mykey1: value1
        #
        # Kubeconfig Output:
        # - name: [email protected]
        #     auth-provider:
        #       config:
        #         mykey1: value1
        #         client-id: loginapp
        #         [...]
        extraOpts: {}
    # Metrics configuration
    metrics:
      # Port to use. Metrics are available at
      # http://IP:PORT/metrics
      # default: 9090
      port: 9090

    # Clusters list for CLI configuration
    clusters:
      - name: test1
        server: https://****:6443
        certificate-authority: |
          -----BEGIN CERTIFICATE-----
          MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
          ******
          -----END CERTIFICATE-----
        insecure-skip-tls-verify: false
        # Alternative context name for this cluster
        contextName: test

To Reproduce
Steps to reproduce the behavior:

  1. Go to loginapp
  2. Log in with GitHub
  3. Get the kubeconfig successfully
  4. Tail the loginapp logs
$ k logs -f dex-6475b86b87-tz2t7 -c loginapp
{"level":"info","msg":"no secret defined, using a random secret but it is strongly advised to add a secret since without it requests cannot be load balanced between multiple server","time":"2021-10-10T22:53:24Z"}
{"level":"error","msg":"failed to query provider \"https://www.dexexample.com:32000/dexidp\": 502 Bad Gateway: \u003chtml\u003e\r\n\u003chead\u003e\u003ctitle\u003e502 Bad Gateway\u003c/title\u003e\u003c/head\u003e\r\n\u003cbody\u003e\r\n\u003ccenter\u003e\u003ch1\u003e502 Bad Gateway\u003c/h1\u003e\u003c/center\u003e\r\n\u003chr\u003e\u003ccenter\u003enginx/1.17.8\u003c/center\u003e\r\n\u003c/body\u003e\r\n\u003c/html\u003e\r\n","time":"2021-10-10T22:53:24Z"}
{"level":"info","msg":"export metric on http://0.0.0.0:9090","time":"2021-10-10T22:53:24Z"}
{"level":"info","msg":"listening on http://127.0.0.1:5555","time":"2021-10-10T22:53:24Z"}

Expected behavior

  1. I have the secret set in the loginapp config; I am not sure why the log still says "no secret defined....."
  2. I expect no "failed to query provider" errors in the loginapp logs

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context

  1. We run a kind cluster for this. We spoof 127.0.0.1 for www.dexexample.com on the machine. We use nginx as a reverse proxy in the pod; the pod has 3 containers (dex, loginapp, nginx) and uses NodePort 32000 to expose the service. Getting the kubeconfig file works fine, but there are quite a few errors in the loginapp logs.
   upstream dex {
      server 127.0.0.1:5556;
    }
    upstream loginapp {
      server 127.0.0.1:5555;
    }
    server {
      listen              32000 ssl default_server;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      location /dexidp {
        proxy_pass http://dex;
      }
      location / {
        proxy_pass http://loginapp;
      }
    }
  2. The dex config file is:
issuer: https://www.dexexample.com:32000/dexidp
    storage:
      type: kubernetes
      config:
        inCluster: true
    frontend:
      theme: tectonic
    web:
      http: 127.0.0.1:5556
    connectors:
    - type: github
      id: github
      name: GitHub
      config:
        clientID: $GITHUB_CLIENT_ID
        clientSecret: $GITHUB_CLIENT_SECRET
        redirectURI: https://www.dexexample.com:32000/dexidp/callback
        hostName: www.github.com
    oauth2:
      skipApprovalScreen: false

    staticClients:
    - id: testloginapp
      redirectURIs:
      - 'http://127.0.0.1:5555/callback'
      - 'https://www.dexexample.com:32000/callback'
      name: 'testoginapp'
      secret: ****
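
On point 1 of the expected behavior: in the configuration format documented above, the application secret is a top-level secret key, distinct from oidc.client.secret, and the configuration posted here does not set it. A sketch of the distinction:

    secret: REDACTED        # application secret (top level)
    oidc:
      client:
        secret: REDACTED    # OIDC client secret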

Better compatibility with Google OpenID Connect: refresh-token

Is your feature request related to a problem? Please describe.

Currently it is possible to use Google OpenID Connect as the loginapp OIDC issuer to generate a kubeconfig, but unfortunately the generated user config lacks the refresh-token field, so after the token expires (a couple of minutes) we need to repeat the auth procedure. Using the option offlineAsScope: true does not work because Google responds with:

Error 400: invalid_scope
Some requested scopes were invalid. {valid=[openid, https://www.googleapis.com/auth/userinfo.profile, https://www.googleapis.com/auth/userinfo.email], invalid=[offline_access]}

So I guess Google is handling refresh-token differently.

Example of kubeconfig users section generated by loginapp:

users:
- name: [email protected]
  user:
    auth-provider:
      config:
        idp-issuer-url: https://accounts.google.com
        client-id: REDACTED.apps.googleusercontent.com
        id-token: REDACTED
      name: oidc

Example of a kubeconfig users section with a refresh-token:

users:
- name: [email protected]
  user:
    auth-provider:
      config:
        client-id: REDACTED.apps.googleusercontent.com
        client-secret: REDACTED
        id-token: REDACTED
        idp-issuer-url: https://accounts.google.com
        refresh-token: REDACTED
      name: oidc

Would that be possible to make loginapp more compatible with Google OpenID Connect in terms of refresh-token?
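
The configuration reference above calls out oidc.extra.authCodeOpts for Google OIDC refresh tokens: per Google's documentation, offline access is requested with the access_type=offline authorization code option rather than the offline_access scope. A hedged sketch:

    oidc:
      offlineAsScope: false
      extra:
        authCodeOpts:
          access_type: offline
          prompt: consent   # ask Google to re-issue a refresh token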

No required SSL certificate was sent

Hi,

I'm trying to deploy dex and loginapp in a kubernetes cluster.
Both dex and loginapp are exposed through a nginx ingress controller and both require authentication with client certificates.
With this setup, loginapp is not able to connect to dex, since dex requires a client certificate and its private key in order to accept the connection.
I suppose that the loginapp configuration for the issuer (dex in my case) should be extended with 2 optional keys, client_cert and client_key, as below:

issuer:
  root_ca: "/etc/ssl/ca.pem"
  client_cert: "/etc/ssl/client-cert.pem"
  client_key: "/etc/ssl/client-key.pem"
  url: "https://dex.mydomain.com"

This is my error :

{"level":"error","msg":"Failed to query provider \"https://dex.mydomain.com\": 400 Bad Request: \u003chtml\u003e\r\n\u003chead\u003e\u003ctitle\u003e400 No required SSL certificate was sent\u003c/title\u003e\u003c/head\u003e\r\n\u003cbody bgcolor=\"white\"\u003e\r\n\u003ccenter\u003e\u003ch1\u003e400 Bad Request\u003c/h1\u003e\u003c/center\u003e\r\n\u003ccenter\u003eNo required SSL certificate was sent\u003c/center\u003e\r\n\u003chr\u003e\u003ccenter\u003enginx\u003c/center\u003e\r\n\u003c/body\u003e\r\n\u003c/html\u003e\r\n","time":"2018-10-25T09:55:23Z"}

Thanks

update helm template ingress apiVersion to compatible with k8s 1.19+

Describe the bug
An error pops up when installing loginapp via helm install with ingress enabled (in k3s version v1.22.5+k3s1):

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1b

Loginapp & Dex version
loginapp: vX.X.X
dex: vX.X.X

Configuration
see https://github.com/fsdrw08/SoloLab/blob/main/HelmWorkShop/loginapp/values.yaml

Current loginapp (and eventually dex) configuration (without secrets)

...
ingress:
  enabled: true
...

To Reproduce
Steps to reproduce the behavior:

helm install loginapp fydrah-stable/loginapp \
    --namespace dex \
    --values $(wget -q -O /dev/stdout https://raw.githubusercontent.com/fsdrw08/SoloLab/main/HelmWorkShop/loginapp/values.yaml)

Expected behavior
loginapp should install with ingress as expected

Additional context
Refer to https://github.com/dexidp/helm-charts/blob/master/charts/dex/templates/ingress.yaml#L9 ;
the helm chart ingress.yaml should be updated (insert the code below at line 4 of https://github.com/fydrah/loginapp/blob/master/helm/loginapp/templates/ingress.yaml):

{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1

Helm charts- provide custom templates for error and token pages

Is your feature request related to a problem? Please describe.
Unable to provide/override custom html templates for loginapp

Describe the solution you'd like
Add configmap with error.html and token.html keys that will be mounted as a volume at /web/templates/ overriding default templates.

Describe alternatives you've considered
There are not many alternatives, as this requires changes to the deployment manifest.

Additional context
It would be nice to be able to provide custom values that would be passed as-is to the HTML templates (e.g. provide values in the loginapp configuration).

CrashLoopBackOff error

NAMESPACE NAME READY STATUS RESTARTS AGE
auth dex-d7f5666ff-5r7z4 1/1 Running 0 14m
auth loginapp-5fdc77b6b5-mwx2s 0/1 CrashLoopBackOff 4 2m

dex-cm.yml

    ...
    oauth2:
      skipApprovalScreen: true

    staticClients:
    - id: loginapp
      redirectURIs:
    - id: web
      redirectURIs:

loginapp config.yaml:

    ...
    - groups
    offline_as_scope: true
    cross_clients:
    - web
    tls:
      enabled: true
      cert: "/etc/loginapp/tls/tls.crt"
      key: "/etc/loginapp/tls/tls.key"
    log:
      level: debug
      format: json
    web_output:
      main_client_id: loginapp
      skip_main_page: true

loginapp-deploy.yml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: loginapp
      namespace: auth
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: loginapp
        spec:
          containers:
          - image: quay.io/fydrah/loginapp:latest
            name: loginapp
            ports:
            - name: http
              containerPort: 5555
            volumeMounts:
            - name: ca
              mountPath: /etc/kubernetes/ssl/
            - name: config
              mountPath: /app/
            - name: tls
              mountPath: /etc/loginapp/tls
          volumes:
          - name: ca
            configMap:
              name: ca
              items:
              - key: ca_jvl.pem
                path: ca_jvl.pem
          - name: config
            configMap:
              name: loginapp
              items:
              - key: config.yaml
                path: config.yaml
          - name: tls
            secret:
              secretName: loginapp.local.tls

auth loginapp-5fdc77b6b5-wsptn 0/1 CrashLoopBackOff 2 1m

$ kubectl logs loginapp-5fdc77b6b5-wsptn -n auth -f
NAME:
   loginapp - A new cli application

USAGE:
   Web application for Kubernetes CLI configuration with OIDC

COMMANDS:
   serve    Run loginapp application
   help, h  Shows a list of commands or help for one command

OPTIONS:
   --help, -h     show help
   --version, -v  print the version

Thanks for the help in advance!

feature: cmd-line login

It would really help us if loginapp could be used as a service where the token etc. can be obtained from the command line, not via the browser. We are in a situation where this usage pops up several times.
As far as I read the spec, a web browser is mandatory for OpenID Connect, so I was wondering if it would be possible to use e.g. headless Chrome in a sidecar to do the actual login, then grab the response and feed it back to the caller.

The reason I suggest this as a loginapp improvement/feature request is that I think it would be a nice improvement to loginapp.
As I am not a Go coder (JVM, Python and C/C++ etc.) I can't really improve loginapp myself, but I think this is where the feature belongs. I could do a Python or Java PoC (using e.g. Selenium), but that really wouldn't help improve loginapp.

ADFS compatibility

I have tested Kubernetes with ADFS OAuth/OIDC successfully, but to generate .kube/config via the loginapp URL I additionally need to provide a resource URL.
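
This maps onto the oidc.extra.authCodeOpts option documented above; a sketch with a hypothetical ADFS resource URL:

    oidc:
      extra:
        authCodeOpts:
          resource: https://kubernetes.example.com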

Support for int128/kubelogin

Is your feature request related to a problem? Please describe.
Each time the refresh_token expires, the config needs to be regenerated.

Describe the solution you'd like
I would like to know if you could add a kubeconfig output for the kubectl plugin https://github.com/int128/kubelogin, including a link to the installation manual.

Describe alternatives you've considered
Manual documentation.

Additional context
N/A
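
For context, kubelogin replaces the deprecated auth-provider stanza with an exec credential plugin; a sketch of the users section such a kubeconfig template would need to emit (issuer and client values are placeholders):

    users:
    - name: oidc
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: kubectl
          args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://dex.example.com:5556
          - --oidc-client-id=loginapp
          - --oidc-client-secret=REDACTED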

Web.Kubeconfig.ExtraOpts expected a map, got 'string'

Describe the bug
Config without explicitly defined Web.Kubeconfig.ExtraOpts:

oidc:
  web:
    kubeconfig:
      extraOpts: {}

Causing fatal error:

{"level":"fatal","msg":"1 error(s) decoding:\n\n* 'Web.Kubeconfig.ExtraOpts' expected a map, got 'string'","time":"2020-11-30T19:02:32Z"}

Loginapp & Dex version
loginapp: v3.2.0

Expected behavior
A config without an explicitly defined Web.Kubeconfig.ExtraOpts should use a proper default that allows loginapp to run.

404 error after we have nginx proxy in front of loginapp

Describe the bug
After we put loginapp behind our nginx proxy,
it was working well when we served the domain without /auth, i.e. https://www.dexexample.com/.
Then we configured the nginx proxy to route traffic by path: domain/auth --> loginapp container, domain/dex --> dex container,
i.e. https://www.dexexample.com/auth.
We got the errors below from the loginapp logs:

{"code":404,"level":"info","method":"GET","msg":"","path":"/auth","protocol":"HTTP/1.0","remote_address":"10.40.162.170:44986","request_duration":"28.515µs","time":"2021-12-06T00:38:03Z"}
{"code":404,"level":"info","method":"GET","msg":"","path":"/auth","protocol":"HTTP/1.0","remote_address":"10.40.162.170:45086","request_duration":"28.755µs","time":"2021-12-06T00:38:13Z"}
{"code":404,"level":"info","method":"GET","msg":"","path":"/auth/callback","protocol":"HTTP/1.0","remote_address":"10.40.162.170:37252","request_duration":"113.203µs","time":"2021-12-06T01:10:41Z"}

Loginapp & Dex version
loginapp: v3.2.3
dex: v2.30.0

Configuration

Current loginapp configuration (without secrets)

oidc:
      client:
        id: "testloginapp"
        # This isn't a real secret as it's given to all users for the refresh token
        secret: ****
        redirectURL: "https://www.dexexample.com/auth/callback"
      issuer:
        rootCA: "/etc/loginapp/cfg/CA.pem"
        url: "https://www.dexexample.com/dex"
        insecureSkipVerify: false
      # List of scopes to request.
      # Updating this parameter will override existing scopes.
      # Default:[openid,profile,email,groups]
      scopes: [openid,profile,email,groups]
      offlineAsScope: false
      crossClients: []
    tls:
      enabled: false
    log:
      level: debug
      format: json
    web:
      # ClientID to output (useful for cross_client)
      # default: value of 'oidc.client.id'
      mainClientID: testloginapp
      # Claims to use for kubeconfig username.
      # default: email
      mainUsernameClaim: email
      # Kubeconfig output format
      kubeconfig:
        # Change default cluster for kubeconfig context
        # Default: first cluster name in `clusters`
        defaultCluster: test1
        # Change default namespace for kubeconfig contexts
        # Default: default
        defaultNamespace: default
        # Change default context for kubeconfig
        # If not set, use a format like 'defaultClusterName'/'usernameClaim'
        # Default: ""
        defaultContext: test
        # Extra key/value pairs to add to kubeconfig output.
        # Key/value pairs are added under `user.auth-provider.config`
        # dictionnary into the kubeconfig.
        # Ex:
        # extraOpts:
        #   mykey1: value1
        #
        # Kubeconfig Output:
        # - name: [email protected]
        #     auth-provider:
        #       config:
        #         mykey1: value1
        #         client-id: loginapp
        #         [...]
        extraOpts: {}
    # Metrics configuration
    metrics:
      # Port to use. Metrics are available at
      # http://IP:PORT/metrics
      # default: 9090
      port: 9090

    # Clusters list for CLI configuration
    clusters:
      - name: test1
        server: https://****:6443
        certificate-authority: |
          -----BEGIN CERTIFICATE-----
          MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
          ******
          -----END CERTIFICATE-----
        insecure-skip-tls-verify: false
        # Alternative context name for this cluster
        contextName: test

To Reproduce
Steps to reproduce the behavior:

  1. Go to /auth
  2. It reports a 404 error

Expected behavior

I expect loginapp to redirect to the GitHub login page.

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context

  1. We use nginx as a reverse proxy in a separate deployment:
   nginx-proxy.conf: |
    upstream dex {
      server auth-dex-login.oidc-auth.svc.dexexample.com:5556;
    }
    upstream loginapp {
      server auth-dex-login.oidc-auth.svc.dexexample.com:5555;
    }
    server {
      listen              443 ssl;
      server_name         www.dexexample.com;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      location /dex {
        proxy_pass http://dex;
      }
      location / {
        root /usr/local/nginx/html;
        index index.htm index.html;
      
      }
      location /auth {
        proxy_pass http://loginapp;
      }
      
    }
  2. The dex config file is:
issuer: https://www.dexexample.com/dex
    storage:
      type: kubernetes
      config:
        inCluster: true
    frontend:
      theme: tectonic
    web:
      http: 0.0.0.0:5556
    connectors:
    - type: github
      id: github
      name: GitHub
      config:
        clientID: $GITHUB_CLIENT_ID
        clientSecret: $GITHUB_CLIENT_SECRET
        redirectURI: https://www.dexexample.com/dex/callback
        hostName: www.github.com
    oauth2:
      skipApprovalScreen: false

    staticClients:
    - id: testloginapp
      redirectURIs:
      - 'http://127.0.0.1:5555/callback'
      - 'https://www.dexexample.com/auth/callback'
      name: 'testoginapp'
      secret: ****
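
Since loginapp serves its routes at the root (hence the 404s for /auth and /auth/callback), one possible workaround, assuming path rewriting is acceptable, is to strip the prefix in nginx; note the OIDC redirect URL must still match what the browser actually requests:

    location /auth {
      # strip the /auth prefix before proxying to loginapp
      rewrite ^/auth/?(.*)$ /$1 break;
      proxy_pass http://loginapp;
    }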

Missing refresh token

Describe the bug
I have set up loginapp with helm with refreshToken: "true",
but I do not get a refresh token.

Loginapp & Dex version
loginapp: latest
dex: v2.26.0

Configuration
I have not done a lot of configuration changes from the values.yaml file.

config:
  name: auth-login
  secret: "..."
  clientID: "login-cnp"
  clientSecret: "..."
  clientRedirectURL:
  issuerRootCA: # +doc-gen:break
    configMap:
    key: ca.pem
  issuerInsecureSkipVerify: true
  issuerURL: "https://dex/dex"
  refreshToken: "true"
  tls:
    enabled: false
    secretName:
    altnames: []
    altIPs: []
  # List of kubernetes clusters to add on web frontend
  clusters:
    - name: dc9-1
      server: https://k8s-api:6443
      certificate-authority: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
        #insecure-skip-tls-verify: false

Install Dex and connect it to AD with the ldap integration.

Expected behavior
I would expect to get a refresh token so that I do not need to login everyday.

If I run loginapp in debug I can tell that the scopes reflect that I would want offline access.
{"level":"debug","msg":"request token with the following scopes: [openid profile email groups offline_access]

And

Extra:{Scopes:[] AuthCodeOpts:map[]} OfflineAsScope:true CrossClients:[] Scopes:[openid profile email groups]}

I am not sure if I am missing something.
I have read #11 but it seems that the config has changed somewhat since 2018.

kubectl 1.22 reports context was not found for specified context

Describe the bug
kubectl is not going to work with the full kubeconfig provided by loginapp

% kubectl create ns playground
Error in configuration: context was not found for specified context: dev/[email protected]

Loginapp & Dex version
loginapp: v3.2.3

To Reproduce
Steps to reproduce the behavior:

  1. login into loginapp
  2. Click on 'full kubeconfig'
  3. copy then to local filesystem to $HOME/.kube/config
  4. run any command with kubectl (v1.22+)

Expected behavior
kubectl just works

Additional context

this is the head of the kubeconfig file:

apiVersion: v1
kind: Config
preferences: {}
current-context: dev/[email protected]
contexts:
- name: dev
  context:
    user: [email protected]
    cluster: dev
    namespace: default
clusters:
- name: dev

If I modify current-context to just dev, like current-context: dev, then everything works as expected.
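
For what it's worth, the configuration reference above documents web.kubeconfig.defaultContext, which should make the emitted current-context match the context name; a sketch:

    web:
      kubeconfig:
        defaultContext: dev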

loginapp_request_duration constantly increases

Describe the bug
loginapp_request_duration metrics constantly increase. I have loginapp/dex deployed on multiple clusters and want to set up an alert based on the loginapp_request_duration metric. Based on this: https://github.com/fydrah/loginapp/blob/master/pkg/server/prometheus.go#L42 I would expect the metric to fluctuate more, but in my case (6 different deployments) it just increases.

Do I misunderstand anything there?

Loginapp & Dex version
loginapp: v3.0.1 helm chart: loginapp-v1.0.1
dex: v2.24.0

Configuration
I think it's not relevant, but I'm happy to provide it; I'm just not sure what is needed.

To Reproduce
Steps to reproduce the behavior:

  1. Go to Prometheus
  2. Search for loginapp_request_duration{code="200"}[1w]
  3. See the value in the output steadily rising in seconds

Expected behavior
I would expect loginapp_request_duration to fluctuate more.

Screenshots

(two Prometheus screenshots from 2021-03-12 showing the metric rising; images omitted)
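
For reference: if loginapp_request_duration is a cumulative counter of request seconds (as the linked prometheus.go suggests), monotonic growth is expected, and alerting would normally be built on its rate instead; a sketch, assuming the metric name and label shown above:

    # per-second increase of accumulated request seconds over 5m windows
    rate(loginapp_request_duration{code="200"}[5m])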

Could not set same user to several "dex"(k8s) clusters

Describe the bug
When using several k8s clusters, each with dex configured, we could not set unique tokens for each cluster user.

Loginapp & Dex version
loginapp: v3.2.3
dex: v2.25.0

Configuration
scopes:
- openid
- profile
- email
- offline_access
- groups
- audience:server:client_id:"oidc-client"

web:
  mainUsernameClaim: email

To Reproduce
Log in with dex and apply the loginapp-generated k8s config for the first k8s cluster.
We'll get:

  • cluster
  • context
  • user

On the same host, log in to the second cluster using dex+loginapp and get its kubeconfig.
Applying it will:

  • cluster (add)
  • context (add)
  • user (REWRITE previous)

Now we are not allowed to do anything in the first cluster, because the user token has been overwritten by the second config.

Expected behavior
The username should optionally contain the cluster name in the loginapp HTML, like:

users:
- name: SomeUser@ClusterName

Additional context
It is very important for people who use a lot of k8s contexts on one host. As an example, look at the way gangway handles it:
kubeCfgUser := strings.Join([]string{username, cfg.ClusterName}, "@")
...
kubectl config set-credentials "{{ .KubeCfgUser }}"

Pod restart with warning

Describe the bug
I use helm chart loginapp-v1.3.1 and I have some issues with it:

-> Debug mode isn't working: with configOverwrites set to log.level: Debug and log.format: json, no debug messages appear.

-> Pods restart for no reason.
In the pod log I have this:

[root@kubernetes loginapp]# kubectl logs loginapp-84fbfc4fdc-krs2x
{"level":"info","msg":"No cluster defined, setting default cluster output to none","time":"2023-05-05T06:55:51Z"}
{"level":"**error**","msg":"Certificate validation is currently disabled, this is not a recommended behavior for production","time":"2023-05-05T06:55:51Z"}

Loginapp & Dex version
loginapp: v3.2.3
dex: v2.36.0

Configuration

Current loginapp (and eventually dex) configuration (without secrets)

config:
  web:
    mainClientID: kubernetes
  name: my-login-app
  existingSecret:
  secret:
  clientID: "loginapp"
  clientSecret: "test"
  clientRedirectURL:
  issuerRootCA: # +doc-gen:break
    configMap:
    key: ca.crt
  issuerInsecureSkipVerify: true
  issuerURL: "http://dextest.default.svc.cluster.local:5556/dex"
  refreshToken: false
  tls:
    enabled: false
    secretName:
    altnames: []
    altIPs: []
  # List of kubernetes clusters to add on web frontend
  clusters: []

configOverwrites: 
  log:
    level: Debug
    format: json
  oidc:
    issuer:
      insecureSkipVerify: true
    crossClients:
      - kubernetes
  web:
    mainClientID: kubernetes

To Reproduce
Steps to reproduce the behavior:

  1. helm install loginapp
  2. check logs login app

Expected behavior
Pod have to start and running state

Additional context
Add any other context about the problem here.

helm repository is unreachable

Describe the bug
I'm trying to install loginapp via the helm chart, but I'm getting a 403 error from registry.fhardy.fr

Loginapp & Dex version
loginapp: v1.1.0
dex: v0.3.3

To Reproduce
Steps to reproduce the behavior:

  1. execute helm repo add fhardy-stable https://registry.fhardy.fr/chartrepo/stable

Expected behavior
Repository is added to helm

Google OIDC: Error 400: invalid_scope invalid=[groups]

Describe the bug

I use Google as the OIDC issuer. The loginapp default OIDC scopes are: "openid", "profile", "email", "groups". It seems I can only add more scopes via the oidc.extra.scopes configuration, but I cannot change the default. This is problematic in the case of Google, because Google does not accept the "groups" scope and this error message is generated:

Authorization Error
Error 400: invalid_scope
Some requested scopes were invalid. {valid=[openid, https://www.googleapis.com/auth/userinfo.profile, https://www.googleapis.com/auth/userinfo.email], invalid=[groups]}

Loginapp & Dex version
loginapp: v3.1.0

Configuration

Current loginapp (and eventually dex) configuration (without secrets)

name: "Kubernetes Auth"
listen: "0.0.0.0:5555"
secret: REDACTED
oidc:
  client:
    id: REDACTED.apps.googleusercontent.com
    secret: REDACTED
    redirectURL: http://example.com/callback
  issuer:
    url: https://accounts.google.com
    insecureSkipVerify: true

Expected behavior
I want to be able to change the default OIDC scopes so I can remove "groups" and satisfy the Google OIDC provider.
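
In loginapp versions where the oidc.scopes option documented above is available, the defaults can be replaced outright; a sketch dropping "groups":

    oidc:
      scopes: [openid, profile, email]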

Issues with dex new release 2.30.0

Describe the bug
After deploying the new release of dex 2.30.0 via
https://github.com/dexidp/dex/blob/master/examples/k8s/dex.yaml

The loginapp is not working as expected

Loginapp & Dex version
loginapp: v3.2.3
dex: v2.30.0

Configuration

Current loginapp (and eventually dex) configuration (without secrets)

oidc:
      client:
        id: "testloginapp"
        # This isn't a real secret as it's given to all users for the refresh token
        secret: ****
        redirectURL: "https://www.dexexample.com:32000/callback"
      issuer:
        rootCA: "/etc/loginapp/cfg/CA.pem"
        url: "https://www.dexexample.com:32000/dexidp"
        insecureSkipVerify: false
      # List of scopes to request.
      # Updating this parameter will override existing scopes.
      # Default:[openid,profile,email,groups]
      scopes: []
      offlineAsScope: false
      crossClients: []
    tls:
      enabled: false
    log:
      level: debug
      format: json
    web:
      # ClientID to output (useful for cross_client)
      # default: value of 'oidc.client.id'
      mainClientID: testloginapp
      # Claims to use for kubeconfig username.
      # default: email
      mainUsernameClaim: email
      # Kubeconfig output format
      kubeconfig:
        # Change default cluster for kubeconfig context
        # Default: first cluster name in `clusters`
        defaultCluster: test1
        # Change default namespace for kubeconfig contexts
        # Default: default
        defaultNamespace: default
        # Change default context for kubeconfig
        # If not set, use a format like 'defaultClusterName'/'usernameClaim'
        # Default: ""
        defaultContext: test
        # Extra key/value pairs to add to kubeconfig output.
        # Key/value pairs are added under `user.auth-provider.config`
        # dictionnary into the kubeconfig.
        # Ex:
        # extraOpts:
        #   mykey1: value1
        #
        # Kubeconfig Output:
        # - name: [email protected]
        #     auth-provider:
        #       config:
        #         mykey1: value1
        #         client-id: loginapp
        #         [...]
        extraOpts: {}
    # Metrics configuration
    metrics:
      # Port to use. Metrics are available at
      # http://IP:PORT/metrics
      # default: 9090
      port: 9090

    # Clusters list for CLI configuration
    clusters:
      - name: test1
        server: https://****:6443
        certificate-authority: |
          -----BEGIN CERTIFICATE-----
          MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
          ******
          -----END CERTIFICATE-----
        insecure-skip-tls-verify: false
        # Alternative context name for this cluster
        contextName: test

To Reproduce
Steps to reproduce the behavior:

  1. Go to loginapp
  2. See error
invalid_scope: Missing required scope(s) ["openid"].
internal server error

url is https://www.dexexample.com:32000/callback?error=invalid_scope&error_description=Missing+required+scope%28s%29+%5B%22openid%22%5D.&state=5P1QJkKHQi9rk42uHLJUUPQBy%2B7jxnAbJuk12znxqPY%3D

Expected behavior
expect github login page

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context

  1. We use nginx as a reverse proxy in the pod; the pod has 3 containers: dex, loginapp, nginx.
   upstream dex {
      server 127.0.0.1:5556;
    }
    upstream loginapp {
      server 127.0.0.1:5555;
    }
    server {
      listen              32000 ssl default_server;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      location /dexidp {
        proxy_pass http://dex;
      }
      location / {
        proxy_pass http://loginapp;
      }
    }
  2. The dex config file is:
issuer: https://www.dexexample.com:32000/dexidp
    storage:
      type: kubernetes
      config:
        inCluster: true
    frontend:
      theme: tectonic
    web:
      http: 127.0.0.1:5556
    connectors:
    - type: github
      id: github
      name: GitHub
      config:
        clientID: $GITHUB_CLIENT_ID
        clientSecret: $GITHUB_CLIENT_SECRET
        redirectURI: https://www.dexexample.com:32000/dexidp/callback
        hostName: www.github.com
    oauth2:
      skipApprovalScreen: false

    staticClients:
    - id: testloginapp
      redirectURIs:
      - 'http://127.0.0.1:5555/callback'
      - 'https://www.dexexample.com:32000/callback'
      name: 'testoginapp'
      secret: ****
  3. I tested dex 2.30.0 via the example-app in the doc below, and it works fine:
    https://dexidp.io/docs/connectors/github/#run-example-client-app-with-github-config
    I also notice dex 2.30.0 was released later than loginapp v3.2.3

Allow empty clientSecret

Is your feature request related to a problem? Please describe.
There are valid use cases where the OIDC provider does not require a client secret for a client. It would make sense to allow loginapp to accept an empty client secret.

Describe the solution you'd like
Make clientSecret field optional.

Describe alternatives you've considered

Additional context

Customize "Copy this in your ~/.kube/config file" page

Hi and thanks for the great login app.

Are there any plans to allow customization of the "Copy this in your ~/.kube/config file" page? I'd like to make it a bit clearer where exactly to place it, and maybe provide per-cluster instructions.

Just curious at this time.

Return support for mounted template and assets

Is your feature request related to a problem? Please describe.
It was very convenient to be able to override the default templates and assets with volume mounts. This possibility was removed in the 3.0.0 release.

Describe the solution you'd like
It would be a good idea to check whether the template file and/or assets directory exists. If they exist, use the templates/assets from the file system; if not, use the default templates/assets bundled with packr/v2.

Describe alternatives you've considered
The only alternative for now is building one's own loginapp image, which is not a good solution.

Additional context
none

Error from server (Forbidden) User "system:serviceaccount:default:default" cannot list resource "pods" in API group

Hi, when I try to use the credentials from loginapp I get the errors below, which makes me feel like I am not doing something right. For example, I configured GitHub as my dex connector, so I was at least expecting my GitHub email of "[email protected]" in the error.

Tried Full kubeconfig and got the below
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"

Tried Clusters method and got the below in kube-apiserver pod log

E1216 03:42:50.491195       1 authentication.go:63] "Unable to authenticate the request" err="invalid bearer token"
E1216 03:42:52.804920       1 authentication.go:63] "Unable to authenticate the request" err="invalid bearer token"

I am using:

  • quay.io/fydrah/loginapp:v3.2.3
  • ghcr.io/dexidp/dex:v2.30.0

Support using a specific LoadBalancer IP

Is your feature request related to a problem? Please describe.
It is sometimes necessary to require a specific IP for Kubernetes services (e.g. for DNS).

Describe the solution you'd like
Fixed in #28

Additional context
Tested internally and working across multiple clusters.
