A reverse proxy that provides authentication with OpenShift via OAuth and Kubernetes service accounts

License: MIT License


OpenShift oauth-proxy

A reverse proxy and static file server that provides authentication and authorization to an OpenShift OAuth server or Kubernetes master supporting the 1.6+ remote authorization endpoints to validate access to content. It is intended for use within OpenShift clusters to make it easy to run both end-user and infrastructure services that don't provide their own authentication.

Features:

  • Performs zero-configuration OAuth when run as a pod in OpenShift
  • Able to perform simple authorization checks against the OpenShift and Kubernetes RBAC policy engine to grant access
  • May also be configured to check bearer tokens or Kubernetes client certificates and verify access
  • On OpenShift 3.6+ clusters, supports zero-configuration end-to-end TLS via the out of the box router

This is a fork of the https://github.com/bitly/oauth2_proxy project with other providers removed (for now). It is focused on providing the simplest possible secure proxy on OpenShift.

Sign In Page

Using this proxy with OpenShift

This proxy is best used as a sidecar container in a Kubernetes pod, protecting another server that listens only on localhost. On an OpenShift cluster, it can use the service account token as an OAuth client secret to identify the current user and perform access control checks. For example:

$ ./oauth-proxy --upstream=http://localhost:8080 --cookie-secret=SECRET \
      --openshift-service-account=default --https-address=

will start the proxy against localhost:8080, encrypt the login cookie with SECRET, use the default service account in the current namespace, and listen only on HTTP.

A full sidecar example is in contrib/sidecar.yaml which also demonstrates using OpenShift TLS service serving certificates (giving you an automatic in-cluster cert) with an external route. Run against a 3.6+ cluster with:

$ oc create -f https://raw.githubusercontent.com/openshift/oauth-proxy/master/contrib/sidecar.yaml

The OpenShift provider defaults to allowing any user that can log into the OpenShift cluster - the following sections cover more on restricting access.

Limiting access to users

While you can use the --email-domain and --authenticated-emails-file flags to match users directly, the proxy works best when you delegate authorization to the OpenShift master by specifying what permissions you expect the user to have. This allows you to leverage OpenShift RBAC and groups to map users to permissions centrally.

Require specific permissions to login via OAuth with --openshift-sar=JSON

SAR stands for "Subject Access Review", a request sent to the OpenShift or Kubernetes server to check the access of a particular user. The flag expects a single subject access review JSON object or a JSON array, all of which must be satisfied for a user to be able to access the backend server.

Pros:

  • Easiest way to protect an entire website or API with an OAuth flow
  • Requires no additional permissions to be granted for the proxy service account

Cons:

  • Not well suited for service-to-service access
  • All-or-nothing protection for the upstream server

Example:

# Allows access if the user can view the service 'proxy' in namespace 'app-dev'
--openshift-sar='{"namespace":"app-dev","resource":"services","resourceName":"proxy","verb":"get"}'

A user who visits the proxy will be redirected to an OAuth login with OpenShift, and must grant the proxy the right to view their user info and request permissions on their behalf. Once they have granted that right to the proxy, it will check whether the user has the required permissions. If they do not, they'll be given a permission-denied error. If they do, they'll be logged in via a cookie.

Run oc explain subjectaccessreview to see the schema for a review, including other fields. Specifying multiple rules via a JSON array ([{...}, {...}]) will require all permissions to be granted.
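
For example, a minimal sketch requiring two permissions before a user may reach the upstream (the namespace and resource names are illustrative):

# Require both 'get' on the service 'proxy' and 'list' on pods in 'app-dev'
--openshift-sar='[{"namespace":"app-dev","resource":"services","resourceName":"proxy","verb":"get"},{"namespace":"app-dev","resource":"pods","verb":"list"}]'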

Require specific permissions per host with --openshift-sar-by-host=JSON

This is similar to the --openshift-sar option, but instead of the rules applying to all hosts, you can set up rules that are checked only for a particular upstream host. The flag takes a JSON object whose keys are hostnames and whose values are JSON arrays of SAR rules.

Both --openshift-sar and --openshift-sar-by-host can be used together, in which case all of the rules from the former, plus any rules that match the host, must be satisfied for a user to be able to access the backend server.

Example:

# Allows access to foo.example.com if the user can view the service 'proxy' in namespace 'app-dev'
--openshift-sar-by-host='{"foo.example.com":{"namespace":"app-dev","resource":"services","resourceName":"proxy","verb":"get"}}'
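
The two flags can also be combined; as a sketch (the hostname and resources are illustrative), a user must satisfy the global rule and, for requests to admin.example.com, the host-specific rule as well:

# Global rule that applies to every host
--openshift-sar='{"namespace":"app-dev","resource":"services","resourceName":"proxy","verb":"get"}'
# Additional rule that only applies to requests for admin.example.com
--openshift-sar-by-host='{"admin.example.com":{"namespace":"app-dev","resource":"pods","verb":"list"}}'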

Delegate authentication and authorization to OpenShift for infrastructure

OpenShift leverages bearer tokens for end users and for service accounts. When running infrastructure services, it may be easier to delegate all authentication and authorization to the master. The --openshift-delegate-urls=JSON flag enables delegation: any incoming request with an Authorization: Bearer header or a client certificate is forwarded to the master for verification. If the user authenticates, they are then checked against one of the entries in the provided map.

The value of the flag is a JSON map of path prefixes to v1beta1.ResourceAttributes, and the longest path prefix is checked. If no path matches the request, authentication and authorization are skipped.

Pros:

  • Allow other OpenShift service accounts or infrastructure components to authorize to specific APIs

Cons:

  • Not suited for web browser use
  • Should not be used by untrusted components (can steal tokens)

Example:

# Allows access if the provided bearer token has view permission on a custom resource
--openshift-delegate-urls='{"/":{"group":"custom.group","resource":"myproxy","verb":"get"}}'

# Grant access only to paths under /api
--openshift-delegate-urls='{"/api":{"group":"custom.group","resource":"myproxy","verb":"get"}}'

WARNING: Because users are sending their own credentials to the proxy, it's important to use this setting only when the proxy is under control of the cluster administrators. Otherwise, end users may unwittingly provide their credentials to untrusted components that can then act as them.

When configured for delegation, oauth-proxy will not set the X-Forwarded-Access-Token header on the upstream request. If you wish to forward the bearer token received from the client, you will have to use the --pass-user-bearer-token option in addition to --openshift-delegate-urls.

WARNING: With --pass-user-bearer-token the client's bearer token will be passed upstream. This could pose a security risk if the token is misused or leaked from the upstream service. Bear in mind that tokens received from the client could be long-lived and hard to revoke.

Other configuration flags

--openshift-service-account=NAME

Will attempt to read the --client-id and --client-secret from the service account information injected by OpenShift. Uses the value of /var/run/secrets/kubernetes.io/serviceaccount/namespace to build the correct --client-id, and the contents of /var/run/secrets/kubernetes.io/serviceaccount/token as the --client-secret.
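
For example, with --openshift-service-account=proxy running in a pod in the app-dev namespace (names illustrative), the derived client ID takes the service account form:

# client-id built from the pod's namespace and the service account name
system:serviceaccount:app-dev:proxy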

--openshift-ca

One or more paths to CA certificates that should be used when connecting to the OpenShift master. If none are provided, the proxy will default to using /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
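
For example, assuming the flag can be repeated once per path, the service account CA can be combined with an additional bundle (the second path is illustrative):

--openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
--openshift-ca=/etc/pki/additional/ca-bundle.crt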

Discovering the OAuth configuration of an OpenShift cluster

OpenShift supports the /.well-known/oauth-authorization-server endpoint, which returns a JSON document describing the authorize and token URLs, as well as the default scopes. If you are running outside of OpenShift you can specify these flags directly using the existing flags for these URLs.
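
For example, the metadata can be inspected directly (the cluster URL is a placeholder); the authorization and token endpoints it reports correspond to what --login-url and --redeem-url would otherwise configure by hand:

# -k only if the API server certificate is not trusted locally
$ curl -k https://openshift.example.com:8443/.well-known/oauth-authorization-server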

Configuring the proxy's service account in OpenShift

In order for service accounts to be used as OAuth clients, they must have the proper OAuth annotations set to point to a valid external URL. In most cases, this can be a route exposing the service fronting your proxy. We recommend using a Reencrypt-type route and service serving certs to maximize end-to-end security. See contrib/sidecar.yaml for an example of these used in concert.

The redirect URI of a service account set up as an OAuth client must point to an HTTPS endpoint; pointing it at a plain-HTTP endpoint is a common configuration error.
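
As a sketch (the service account and route names are illustrative), the redirect annotation can be set so that the redirect URI is derived from a route:

$ oc annotate serviceaccount proxy \
      serviceaccounts.openshift.io/oauth-redirectreference.primary='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"proxy"}}'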

Developing

To build, ensure you are running Go 1.7+ and clone the repo:

$ go get -u github.com/openshift/oauth-proxy
$ cd $GOPATH/src/github.com/openshift/oauth-proxy

To test, run:

$ go test .
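
To produce a local binary, a plain Go build should work; a sketch (the release images mentioned below are built by a separate process):

$ go build -o oauth-proxy .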

The docker images for this repository are built by the OpenShift release process and are available at:

$ docker pull registry.svc.ci.openshift.org/ci/oauth-proxy:v1

End-to-end testing

To run the end-to-end test suite against a build of the current commit on an OpenShift cluster, use test/e2e.sh. You may need to change the DOCKER_REPO, KUBECONFIG, and TEST_NAMESPACE variables to accommodate your cluster. Each test sets up an oauth-proxy deployment and steps through the OAuth process, ensuring that the backend site can be reached (or not, depending on the test). The deployment is deleted before running the next test. DEBUG_TEST=testname can be used to skip the cleanup step for a specific test and halt the suite to allow for further debugging on the cluster.

$ test/e2e.sh
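
For example, assuming the variables above are read from the environment (the repository and namespace values are placeholders):

$ DOCKER_REPO=quay.io/example/oauth-proxy TEST_NAMESPACE=oauth-proxy-e2e test/e2e.sh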

Architecture

OAuth2 Proxy Architecture

Configuration

oauth-proxy can be configured via config file, command line options or environment variables.

To generate a strong cookie secret use python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
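
An equivalent using openssl:

$ openssl rand -base64 16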

Email Authentication

To authorize by email domain use --email-domain=yourcompany.com. To authorize individual email addresses use --authenticated-emails-file=/path/to/file with one email per line. To authorize all email addresses use --email-domain=*.

Config File

An example oauth-proxy.cfg config file is in the contrib directory. It can be used by specifying -config=/etc/oauth-proxy.cfg.

Command Line Options

Usage of oauth-proxy:
  -approval-prompt string: OAuth approval_prompt (default "force")
  -authenticated-emails-file string: authenticate against emails via file (one per line)
  -basic-auth-password string: the password to set when passing the HTTP Basic Auth header
  -bypass-auth-except-for value: provide authentication ONLY for request paths under proxy-prefix and those that match the given regex (may be given multiple times). Cannot be set with -skip-auth-regex
  -bypass-auth-for value: alias for -skip-auth-regex
  -client-id string: the OAuth Client ID: ie: "123456.apps.googleusercontent.com"
  -client-secret string: the OAuth Client Secret
  -config string: path to config file
  -cookie-domain string: an optional cookie domain to force cookies to (ie: .yourcompany.com)*
  -cookie-expire duration: expire timeframe for cookie (default 168h0m0s)
  -cookie-httponly: set HttpOnly cookie flag (default true)
  -cookie-name string: the name of the cookie that the oauth_proxy creates (default "_oauth2_proxy")
  -cookie-refresh duration: refresh the cookie after this duration; 0 to disable
  -cookie-samesite string: set SameSite cookie attribute (ie: "lax", "strict", "none", or "")
  -cookie-secret string: the seed string for secure cookies (optionally base64 encoded)
  -cookie-secret-file string: same as "-cookie-secret" but read it from a file
  -cookie-secure: set secure (HTTPS) cookie flag (default true)
  -custom-templates-dir string: path to custom html templates
  -display-htpasswd-form: display username / password login form if an htpasswd file is provided (default true)
  -email-domain value: authenticate emails with the specified domain (may be given multiple times). Use * to authenticate any email
  -footer string: custom footer string. Use "-" to disable default footer.
  -htpasswd-file string: additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -s" for SHA encryption
  -http-address string: [http://]<addr>:<port> or unix://<path> to listen on for HTTP clients (default "127.0.0.1:4180")
  -https-address string: <addr>:<port> to listen on for HTTPS clients (default ":443")
  -login-url string: Authentication endpoint
  -pass-access-token: pass OAuth access_token to upstream via X-Forwarded-Access-Token header
  -pass-user-bearer-token: pass OAuth access token received from the client to upstream via X-Forwarded-Access-Token header
  -pass-basic-auth: pass HTTP Basic Auth, X-Forwarded-User and X-Forwarded-Email information to upstream (default true)
  -pass-host-header: pass the request Host Header to upstream (default true)
  -pass-user-headers: pass X-Forwarded-User and X-Forwarded-Email information to upstream (default true)
  -profile-url string: Profile access endpoint
  -provider string: OAuth provider (default "google")
  -proxy-prefix string: the url root path that this proxy should be nested under (e.g. /<oauth2>/sign_in) (default "/oauth")
  -proxy-websockets: enables WebSocket proxying (default true)
  -redeem-url string: Token redemption endpoint
  -redirect-url string: the OAuth Redirect URL. ie: "https://internalapp.yourcompany.com/oauth2/callback"
  -request-logging: Log requests to stdout (default false)
  -scope string: OAuth scope specification
  -set-xauthrequest: set X-Auth-Request-User and X-Auth-Request-Email response headers (useful in Nginx auth_request mode)
  -signature-key string: GAP-Signature request signature key (algorithm:secretkey)
  -skip-auth-preflight: will skip authentication for OPTIONS requests
  -skip-auth-regex value: bypass authentication for request paths that match (may be given multiple times). Cannot be set with -bypass-auth-except-for
  -skip-provider-button: will skip sign-in-page to directly reach the next step: oauth/start
  -ssl-insecure-skip-verify: skip validation of certificates presented when using HTTPS
  -tls-cert string: path to certificate file
  -tls-key string: path to private key file
  -upstream value: the http url(s) of the upstream endpoint or file:// paths for static files. Routing is based on the path
  -upstream-timeout duration: maximum amount of time the server will wait for a response from the upstream (default 30s)
  -validate-url string: Access token validation endpoint
  -version: print version string

See below for provider specific options

Upstream Configuration

oauth-proxy supports multiple upstreams, and can either pass requests on to HTTP(S) servers or serve static files from the file system. HTTP and HTTPS upstreams are configured by providing a URL such as http://127.0.0.1:8080/ for the upstream parameter; all authenticated requests will then be forwarded to that upstream server. If you instead provide http://127.0.0.1:8080/some/path/, only requests whose path starts with /some/path/ are forwarded to the upstream.

Static file paths are configured as a file:// URL. file:///var/www/static/ will serve the files from that directory at http://[oauth-proxy url]/var/www/static/, which may not be what you want. You can choose the path at which the files are made available by adding a fragment to the configured URL; the value of the fragment is used as the serving path. For example, file:///var/www/static/#/static/ makes /var/www/static/ available at http://[oauth-proxy url]/static/.

Multiple upstreams can be configured by supplying a comma-separated list to the -upstream parameter, by supplying the parameter multiple times, or by providing a list in the config file. When multiple upstreams are used, routing to them is based on the path each is set up with.
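
As a sketch combining the two (ports and paths are illustrative), this proxies API traffic while serving static assets from disk:

$ ./oauth-proxy --upstream=http://127.0.0.1:8080/api/ --upstream=file:///var/www/static/#/static/ ...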

Environment variables

The following environment variables can be used in place of the corresponding command-line arguments:

  • OAUTH2_PROXY_CLIENT_ID
  • OAUTH2_PROXY_CLIENT_SECRET
  • OAUTH2_PROXY_COOKIE_NAME
  • OAUTH2_PROXY_COOKIE_SAMESITE
  • OAUTH2_PROXY_COOKIE_SECRET
  • OAUTH2_PROXY_COOKIE_DOMAIN
  • OAUTH2_PROXY_COOKIE_EXPIRE
  • OAUTH2_PROXY_COOKIE_REFRESH
  • OAUTH2_PROXY_SIGNATURE_KEY

SSL Configuration

There are two recommended configurations.

  1. Configure SSL Termination with OAuth2 Proxy by providing a --tls-cert=/path/to/cert.pem and --tls-key=/path/to/cert.key.

The command line to run oauth-proxy in this configuration would look like this:

./oauth-proxy \
   --email-domain="yourcompany.com"  \
   --upstream=http://127.0.0.1:8080/ \
   --tls-cert=/path/to/cert.pem \
   --tls-key=/path/to/cert.key \
   --cookie-secret=... \
   --cookie-secure=true \
   --provider=... \
   --client-id=... \
   --client-secret=...
  2. Configure SSL Termination with Nginx (example config below), Amazon ELB, Google Cloud Platform Load Balancing, or ....

Because oauth-proxy listens on 127.0.0.1:4180 by default, to listen on all interfaces (needed when using an external load balancer like Amazon ELB or Google Platform Load Balancing) use --http-address="0.0.0.0:4180" or --http-address="http://:4180".

Nginx will listen on port 443 and handle SSL connections while proxying to oauth-proxy on port 4180. oauth-proxy will then authenticate requests for an upstream application. The external endpoint for this example would be https://internal.yourcompany.com/.

An example Nginx config follows. Note the use of Strict-Transport-Security header to pin requests to SSL via HSTS:

server {
    listen 443 default ssl;
    server_name internal.yourcompany.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/cert.key;
    add_header Strict-Transport-Security max-age=2592000;

    location / {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 1;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}

The command line to run oauth-proxy in this configuration would look like this:

./oauth-proxy \
   --email-domain="yourcompany.com"  \
   --upstream=http://127.0.0.1:8080/ \
   --cookie-secret=... \
   --cookie-secure=true \
   --provider=... \
   --client-id=... \
   --client-secret=...

Endpoint Documentation

oauth-proxy responds directly to the following endpoints. All other endpoints will be proxied upstream when authenticated. The /oauth prefix can be changed with the --proxy-prefix config variable.

  • /robots.txt - returns a 200 OK response that disallows all User-agents from all paths; see robotstxt.org for more info
  • /oauth/healthz - returns a 200 OK response
  • /oauth/sign_in - the login page, which also doubles as a sign out page (it clears cookies)
  • /oauth/start - a URL that will redirect to start the OAuth cycle
  • /oauth/callback - the URL used at the end of the OAuth cycle. The oauth app will be configured with this as the callback url.
  • /oauth/auth - only returns a 202 Accepted response or a 401 Unauthorized response; for use with the Nginx auth_request directive
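
For example, the health endpoint above can be used for a simple liveness check from a shell (the hostname is a placeholder):

$ curl -s -o /dev/null -w '%{http_code}\n' https://myapp.example.com/oauth/healthz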

Request signatures

If signature-key is defined, proxied requests will be signed with the GAP-Signature header, which is a Hash-based Message Authentication Code (HMAC) of selected request information and the request body; see SIGNATURE_HEADERS in oauthproxy.go.

signature_key must be of the form algorithm:secretkey (ie: signature_key = "sha1:secret0")

For more information about HMAC request signature validation, see the documentation in the upstream bitly/oauth2_proxy project.

Logging Format

oauth-proxy logs requests to stdout in a format similar to Apache Combined Log.

<REMOTE_ADDRESS> - <[email protected]> [19/Mar/2015:17:20:19 -0400] <HOST_HEADER> GET <UPSTREAM_HOST> "/path/" HTTP/1.1 "<USER_AGENT>" <RESPONSE_CODE> <RESPONSE_BYTES> <REQUEST_DURATION>

Configuring for use with the Nginx auth_request directive

The Nginx auth_request directive allows Nginx to authenticate requests via the oauth-proxy's /auth endpoint, which only returns a 202 Accepted response or a 401 Unauthorized response without proxying the request through. For example:

server {
  listen 443 ssl spdy;
  server_name ...;
  include ssl/ssl.conf;

  location = /oauth2/auth {
    internal;
    proxy_pass http://127.0.0.1:4180;
  }

  location /oauth2/ {
    proxy_pass       http://127.0.0.1:4180;
    proxy_set_header Host                    $host;
    proxy_set_header X-Real-IP               $remote_addr;
    proxy_set_header X-Scheme                $scheme;
    proxy_set_header X-Auth-Request-Redirect $request_uri;
  }

  location /upstream/ {
    auth_request /oauth2/auth;
    error_page 401 = /oauth2/sign_in;

    # pass information via X-User and X-Email headers to backend,
    # requires running with --set-xauthrequest flag
    auth_request_set $user   $upstream_http_x_auth_request_user;
    auth_request_set $email  $upstream_http_x_auth_request_email;
    proxy_set_header X-User  $user;
    proxy_set_header X-Email $email;

    proxy_pass http://backend/;
  }

  location / {
    auth_request /oauth2/auth;
    error_page 401 = https://example.com/oauth2/sign_in;

    root /path/to/the/site;
  }
}

oauth-proxy's People

Contributors

chirino, dbrgn, drewolson, enj, funkymrrogers, jburnham, jehiah, johnboxall, k-wall, kincl, kwilczynski, mbland, mrwacky42, openshift-bot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, ploxiln, rhamilto, s-urbaniak, semenko, sgnn7, simo5, simonpasquier, sjoerdmulder, smarterclayton, sricola, stlaz, tanuck, tomtaylor


oauth-proxy's Issues

Routing to an external route

Is it possible to route to an external pod instead of a sidecar using oauth-proxy? I would like to use it as a standalone container and route to other pods/services in a different namespace in the cluster.

Fix description

The description says "A reverse proxy that provides authentication with OpenShift and other OAuth providers", but it's only supporting openshift it seems. Bummer..

Openshift Origin

How do you get the tls-key for Openshift Origin? Is there a CLI command I can run as system:admin on the router to get this?

Setting openshift-sar gives error when running inside Openshift

When attempting to run oauth-proxy, I am able to run it without issues until I try to add --openshift-sar to the startup inside Openshift.

The error reported is:
2017/09/12 21:11:52 main.go:125: unable to decode review: invalid character ''' looking for beginning of value

From the openshift console, here is the startup command I'm using:

<image-entrypoint> --https-address=:8443 --provider=openshift --openshift-service-account=oshinko --upstream=http://localhost:8080 --tls-cert=/etc/tls/private/tls.crt --tls-key=/etc/tls/private/tls.key --cookie-secret=SECRET --openshift-sar='{"namespace":"myproject","resource":"services","name":"oshinko-web","verb":"get"}'

Openshift version information:
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.1.108:8443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7

Service account access to an API protected by an oauth_proxy openshift

Hello,

I have an API application. This API is sitting behind the oauth_proxy using openshift auth as sidecar. I want a way to have other applications from other projects in the same openshift cluster to access this API.

(Assumption) The "application" that wants to use the API needs to be able to programmatically request a token (using a service account / token auth) to then be able to use the API.

I was trying to use the -pass-user-bearer-token=true option, but it didn't work and I don't really know where to go, as I don't know the correct flow:

  • does the service account + token need to be passed to the oauth proxy to get another form of credential, which is then used in a cookie / header to access the API sitting behind the oauth proxy?
  • or is the service account + token enough to access the API sitting behind the oauth_proxy directly?

Anyway, how to do that? Any help will be greatly appreciated.

PS: I didn't know if there was a Google group more appropriate to ask this question. If that's the case, let me know.

dep references to fsnotify are broken, need to be changed

The references in Gopkg.toml, Gopkg.lock, and watcher.go use the broken gopkg.in/fsnotify.v1 URIs, which cannot be resolved in newer versions of Go/Dep. They need to be changed to github.com/fsnotify/fsnotify to support ongoing development.

fix disparity with client-id and openshift-service-account options

If you have both openshift-service-account="foo" and client-id="foo" set, the authentication request results in a failing lookup of an oauth client that is not the service account (for "foo", not "system:serviceaccount:namespace:foo"). This config quirk was really obnoxious to debug. The workaround is to remove client-id, letting openshift-service-account set the correct client-id by default.

The `--ssl-insecure-skip-verify` option does not seem to work

I have configured:

args:
          - '--skip-provider-button'
          - '--ssl-insecure-skip-verify'
          - '--request-logging=true'
          - "--https-address="
          - "--http-address=:${PROXY_PORT}"
          - "--provider=openshift"
          - "--openshift-service-account=proxy"
          - "--upstream=https://openshift.default.svc:443/oapi/"
          - "--upstream=https://openshift.default.svc:443/api/"
          - "--upstream=http://localhost:${APP_PORT}/"
          - '--cookie-name=OCP_TOKEN'
          - '--cookie-expire=1h0m0s'
          - '--cookie-refresh=0h10m0s'
          - "--cookie-domain=.rht-labs.com"
          - "--cookie-secret=${COOKIE_SECRET}"
          - "--pass-user-bearer-token=true"

When I make a request to either of the SSL upstreams I get the following error:

2019/03/27 12:35:43 reverseproxy.go:321: http: proxy error: x509: certificate signed by unknown authority

I would have expected the --ssl-insecure-skip-verify to allow the application to ignore the self-signed certificates.

Passing OAuth access_token to upstream via Authorisation header (rather than X-Forwarded-Access-Token)

In our use-case, we want to use oauth-proxy to protect a user-facing application (HTML/JS single page app).

oauth-proxy will be used to allow the user to authenticate against the OpenShift OAuth server, serve the application's static content, and provide a proxy to an API server. The user's interactions with the application will cause API calls to be made. The application won't be in possession of the access_token, so I wanted to rely on oauth-proxy to augment the authenticated request with the authenticated user's access token as it traverses the oauth-proxy.

I have found --pass-access-token=true which does almost what I want, however the API server in question does not understand X-Forwarded-Access-Token. I desire a version of the --pass-access-token option that would pass the OAuth access_token of the authenticated user to the upstream via the Authorization header in Bearer format.

I can't change the API server to understand X-Forwarded-Access-Token (the API server in question is the Kubernetes API server itself).

Is there another way to use oauth-proxy that would achieve the same ends? Alternatively, I would be happy to submit a PR.

Thanks in advance.

Getting error "Cookie "_oauth_proxy" not present"

2018/01/24 01:55:04 provider.go:265: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.

  | 2018/01/24 01:55:04 oauthproxy.go:161: mapping path "/" => upstream "http://localhost:9093"
  | 2018/01/24 01:55:04 oauthproxy.go:184: compiled skip-auth-regex => "^/metrics"
  | 2018/01/24 01:55:04 oauthproxy.go:190: OAuthProxy configured for Client ID: system:serviceaccount:prometheus:prometheus
  | 2018/01/24 01:55:04 oauthproxy.go:200: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain: refresh:disabled
  | 2018/01/24 01:55:04 http.go:96: HTTPS: listening on [::]:10443
  | 2018/01/24 01:55:12 oauthproxy.go:657: 10.129.0.1:51132 Cookie "_oauth_proxy" not present
  | 2018/01/24 01:55:12 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "prometheus"
  | 2018/01/24 01:55:12 oauthproxy.go:657: 10.129.0.1:51132 Cookie "_oauth_proxy" not present
  | 2018/01/24 01:55:12 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "prometheus"

Unable to login to application behind oauth-proxy

Currently I am getting a 504 when trying to login to applications that live behind the oauth proxy.

My current logs from the proxy

2018/02/15 20:41:43 provider.go:476: Performing OAuth discovery against https://192.168.0.1/.well-known/oauth-authorization-server
--
  | 2018/02/15 20:41:43 provider.go:522: 200 GET https://192.168.0.1/.well-known/oauth-authorization-server  {
  | "issuer": "https://west.aws.openshift.bestbuy.com ",
  | "authorization_endpoint": "https://west.openshift.com/oauth/authorize ",
  | "token_endpoint": "https://west.openshift.com/oauth/token ",
  | "scopes_supported": [
  | "user:check-access",
  | "user:full",
  | "user:info",
  | "user:list-projects",
  | "user:list-scoped-projects"
  | ],
  | "response_types_supported": [
  | "code",
  | "token"
  | ],
  | "grant_types_supported": [
  | "authorization_code",
  | "implicit"
  | ],
  | "code_challenge_methods_supported": [
  | "plain",
  | "S256"
  | ]
  | }
  | 2018/02/15 20:41:43 provider.go:265: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
  | 2018/02/15 20:41:43 oauthproxy.go:161: mapping path "/" => upstream "http://localhost:9090 "
  | 2018/02/15 20:41:43 oauthproxy.go:184: compiled skip-auth-regex => "^/metrics"
  | 2018/02/15 20:41:43 oauthproxy.go:190: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-metrics:prometheus
  | 2018/02/15 20:41:43 oauthproxy.go:200: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
  | 2018/02/15 20:41:43 http.go:96: HTTPS: listening on [::]:8443
  | 2018/02/15 21:00:06 oauthproxy.go:657: 10.129.0.1:56400 Cookie "_oauth_proxy" not present
  | 2018/02/15 21:00:06 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
  | 2018/02/15 21:00:07 oauthproxy.go:657: 10.128.0.1:52578 Cookie "_oauth_proxy" not present
  | 2018/02/15 21:00:07 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
  | 2018/02/15 21:00:39 oauthproxy.go:657: 10.128.0.1:53120 Cookie "_oauth_proxy" not present
  | 2018/02/15 21:00:39 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"

What I am seeing is that the discovery is using masterPublicURL instead of masterURL. Our public URL is not accessible from within the cluster, as the nodes sit on a private subnet and use masterURL.

So then I looked at trying to override the values given, but it seems that setting a login url only sets an endpoint, not the actual URL. Similar for redeem etc.

To confirm that it is because it uses the masterPublicURL instead of the masterURL, I whitelisted the NAT on our masters and everything worked as expected. So my question: is there a plan to make those fields overridable?

Misconfiguration of /ping endpoint ?

Hi, I'm trying to use the /ping endpoint in a readiness probe in k8s, but I always get a 400 error notifying me of missing params related to authentication.
I actually get the same if I spawn a bash instance in the container and check with curl.

Surprisingly, /robots.txt correctly returns a 200.

Is there any chance the default config for /ping is not correct? Or does it make sense that /ping and /robots.txt are configured differently?

Getting SIGSEGV when deploying on OpenShift

I have followed the instructions on the README.md, but the proxy is crashing on me.

2019/03/25 13:56:51 provider.go:106: Defaulting client-id to system:serviceaccount:deven-test:proxy
2019/03/25 13:56:51 provider.go:111: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x11bdedb]

goroutine 1 [running]:
main.(*Options).Validate(0xc420311180, 0x1caeae0, 0xc420362000, 0x9, 0xc420347c01)
	/go/src/github.com/openshift/oauth-proxy/options.go:208 +0x46b
main.main()
	/go/src/github.com/openshift/oauth-proxy/main.go:136 +0x1915

I double-checked the service account and annotation match the route for my application, I tried giving the service account various role-bindings, etc... Any help would be GREATLY appreciated!!

README: Missing information about "upstream-ca" parameter

The README of the project does not contain information about the "upstream-ca" parameter. Actually I did not know about this parameter until reading the code source.
This parameter is what I needed to make the oauth-proxy working with my HTTPS upstream using a self-signed certificate.

Merge back with upstream?

Hi,

the upstream version doesn't support certain features and doesn't look well maintained in general with lots of stale pull requests. This project on the other hand doesn't support the most common oauth providers. It would be great to merge back. Specifically this could mean adding back the providers from the upstream project. I'm happy to help with this if you want to go down that route.

Support http_proxy env variable

Authentication fails if the openshift cluster's public endpoint is reachable only through an http proxy.

Error message on my environment:

018/05/28 07:44:32 oauthproxy.go:582: error redeeming code (client:10.128.4.1:33746 ("10.0.1.152")): Post https://openshift_public_url/oauth/token: dial tcp xx.xx.xx.xx:443: getsockopt: connection timed out
2018/05/28 07:44:32 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error

I tried configuring uppercase and lowercase environment variables http_proxy & https_proxy but it doesn't work. I looked at the authproxy.go source code and proxy handling doesn't seem implemented.

This is a real show stopper for us because we would like to expose Prometheus dashboards to users coming from Internet and our openshift cluster can't be directly exposed on Internet.

Authentication with default prometheus in 3.11

I have a fresh install of Openshift Enterprise 3.11 which secures access to a Prometheus instance with oauth2_proxy, in prometheus-k8s-X/prometheus-proxy. I am trying to log in with credentials for admin (with cluster-admin role), but upon sending those I just get HTTP response 200 and the same login screen. The log shows me that I wasn't allowed:

2018/10/26 15:00:47 provider.go:382: authorizer reason: no RBAC policy matched

But when I have enabled audit log on the master API it seems that I should be:

{"kind":"Event",
 "apiVersion":"audit.k8s.io/v1beta1",
 "metadata":{creationTimestamp":"2018-10-26T15:00:47Z"},
 "level":"Metadata",
 "timestamp":"2018-10-26T15:00:47Z",
 "auditID":"eab256ee-7acc-4e67-af49-98f9db052f55",
 "stage":"RequestReceived",
 "requestURI":"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews",
 "verb":"create",
 "user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"50806ab7-d62
f-11e8-b3d3-ecb1d78a7a18","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},
 "sourceIPs":["10.128.0.82"],
 "objectRef":{"resource":"subjectaccessreviews","apiGroup":"authorization.k8s.io","apiVersion":"v1beta1"},
 "requestReceivedTimestamp":"2018-10-26T15:00:47.897449Z",
 "stageTimestamp":"2018-10-26T15:00:47.897449Z"
}
{"kind":"Event",
 "apiVersion":"audit.k8s.io/v1beta1",
 "metadata":{"creationTimestamp":"2018-10-26T15:00:47Z"},
 "level":"Metadata",
 "timestamp":"2018-10-26T15:00:47Z",
 "auditID":"eab256ee-7acc-4e67-af49-98f9db052f55",
 "stage":"ResponseComplete",
 "requestURI":"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews",
 "verb":"create",
 "user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"50806ab7-d62f-11e8-b3d3-ecb1d78a7a18","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},
 "sourceIPs":["10.128.0.82"],
 "objectRef":{"resource":"subjectaccessreviews","apiGroup":"authorization.k8s.io","apiVersion":"v1beta1"},
 "responseStatus":{"metadata":{},"code":201},
 "requestReceivedTimestamp":"2018-10-26T15:00:47.897449Z",
 "stageTimestamp":"2018-10-26T15:00:47.897991Z",
 "annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift
-monitoring\""}
}

Where is the problem?

Grafana: The request is missing a required parameter

Hello,

I am using a brand new installation of openshift:

oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master.mydomain.fr:8443
openshift v3.11.0+2bcedfc-77
kubernetes v1.11.0+d4cacc0

I have 3 nodes:

[root@master centos]# oc get node
NAME                 STATUS    ROLES     AGE       VERSION
master.mydomain.fr   Ready     master    1d        v1.11.0+d4cacc0
node1.mydomain.fr    Ready     infra     1d        v1.11.0+d4cacc0
node2.mydomain.fr    Ready     compute   1d        v1.11.0+d4cacc0
node3.mydomain.fr    Ready     compute   1d        v1.11.0+d4cacc0
node4.mydomain.fr    Ready     compute   1d        v1.11.0+d4cacc0

I successfully configured everything but now I am trying to access to my grafana dashboards. I created a route:

grafana.mydomain.fr

I am redirected to the Login Button of the OauthProxy. When I click on Login I am redirected to a blank page with Json:

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"7f3a18c95ae8237d39d35bf8ff317ae4:/"}

Here is the URI I am getting:

https://master.mydomain.fr:8443/oauth/authorize?approval_prompt=force&client_id=system%3Aserviceaccount%3Aopenshift-monitoring%3Agrafana&redirect_uri=https%3A%2F%2Fgrafana.mydomain.fr%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access&state=7f3a18c95ae8237d39d35bf8ff317ae4%3A%2F

After looking at the log I am getting:

 no RBAC policy matched

I am pretty sure it is linked to my configuration (maybe the redirect URI is not correct) but to be honest I don't know how to configure it. Do you have an idea?

Thanks in advance

Incorrect remote authenticator semantics

Currently we:

  1. Try to authenticate even when there is no bearer token (wastes network IO)
  2. Try to authorize even if the user is anonymous (which is impossible for bearer token and generates useless log messages)
  3. Should normalize (and do proper length checks in) the code that works with req.Header.Get("Authorization") (for both bearer and basic)

Other issues:

  1. Pointless CheckRequestAuth func (just call ValidateRequest directly)
  2. session.AccessToken is overloaded to mean both AccessToken and BearerToken. This needs to be split out and the logic around tokenProvidedByClient needs to be replaced with a simple if / else if statement

-tls-client-ca seems completely dead

I can't get TLS client authentication to work. Setting -tls-client-ca causes oauth-proxy to request a client cert, but it then does nothing with it. Code inspection suggests to me that it might have worked after PR #1, but that it hasn't since PR #2. I can't see any code in master which checks the peer cert in any way.

This is a pity, because it would be really useful if we supported TLS client authentication.

@mrogers950 @simo5

Unable to login to application behind oauth-proxy when using IT-signed SSL certificate

This is similar to #28

I've re-built the oauth-proxy image using the latest master and release-1.0 branch - same error.
If I deploy the same in a test environment where everything is self-signed, I can log in successfully.

$ oc logs -c oauth-proxy -f logging-kibana-6-fhg27
2017/10/27 19:45:00 provider.go:91: Defaulting client-id to system:serviceaccount:test-elasticsearch:aggregated-logging-kibana
2017/10/27 19:45:00 provider.go:96: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2017/10/27 19:45:00 provider.go:478: Performing OAuth discovery against https://172.22.0.1/.well-known/oauth-authorization-server
2017/10/27 19:45:00 provider.go:524: 200 GET https://172.22.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://osemaster.sbu.lab.eng.bos.redhat.com:8443",
  "authorization_endpoint": "https://osemaster.sbu.lab.eng.bos.redhat.com:8443/oauth/authorize",
  "token_endpoint": "https://osemaster.sbu.lab.eng.bos.redhat.com:8443/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2017/10/27 19:45:00 oauthproxy.go:161: mapping path "/" => upstream "http://localhost:5601"
2017/10/27 19:45:00 oauthproxy.go:190: OAuthProxy configured for  Client ID: system:serviceaccount:test-elasticsearch:aggregated-logging-kibana
2017/10/27 19:45:00 oauthproxy.go:200: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2017/10/27 19:45:00 http.go:56: HTTP: listening on 127.0.0.1:4180
2017/10/27 19:45:00 http.go:96: HTTPS: listening on [::]:8443
2017/10/27 19:45:12 oauthproxy.go:657: 172.20.10.1:49918 Cookie "_oauth_proxy" not present
2017/10/27 19:45:12 oauthproxy.go:657: 172.20.10.1:49918 Cookie "_oauth_proxy" not present
2017/10/27 19:45:25 oauthproxy.go:582: error redeeming code (client:172.20.10.1:49918): Post https://osemaster.sbu.lab.eng.bos.redhat.com:8443/oauth/token: x509: certificate signed by unknown authority
2017/10/27 19:45:25 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error
2017/10/27 19:45:25 oauthproxy.go:657: 172.20.10.1:49918 Cookie "_oauth_proxy" not present

snippet of the oauth-proxy container spec from the pod:

  - args:
    - --https-address=:8443
    - --provider=openshift
    - --openshift-service-account=aggregated-logging-kibana
    - --upstream=http://localhost:5601
    - --tls-cert=/etc/tls/private/tls.crt
    - --tls-key=/etc/tls/private/tls.key
    - --cookie-secret=SECRET
    - --openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    - --ssl-insecure-skip-verify=true
    image: docker-registry.default.svc:5000/test-elasticsearch/oauth-proxy:latest
    imagePullPolicy: IfNotPresent
    name: oauth-proxy
    ports:
    - containerPort: 8443
      name: oaproxy
      protocol: TCP
    resources:
      limits:
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 256Mi
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: false
      runAsUser: 1000520000
      seLinuxOptions:
        level: s0:c23,c7
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/tls/private
      name: proxy-tls
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: aggregated-logging-kibana-token-2wkbq
      readOnly: true

osemaster.sbu.lab.eng.bos.redhat.com:8443 cert is signed by Red Hat IT. It's running Openshift v3.6.173.0.21

error redeeming code: Post https://console.example.com/oauth/token: EOF

Hey,

I am trying to open Grafana from the openshift-monitoring project (shipped with openshift-ansible). It's a fresh 3.11 OKD cluster built with these playbooks: https://github.com/openshift/openshift-ansible.

Now in the openshift-monitoring project I see that a route was configured: https://grafana-openshift-monitoring.example.com . When I open that link, I get to the Log-In-with-OpenShift screen. I hit that button and get redirected to https://console.example.com/oauth/authorize?foo=bar . I authorize with my SSO and the page loads for a while until a 500 Internal Error is shown.

In the logs of the grafana-proxy I see this error:

2019/03/07 08:37:37 oauthproxy.go:635: error redeeming code (client:10.13.10.1:48968): Post https://console.example.com/oauth/token: EOF
2019/03/07 08:37:37 oauthproxy.go:434: ErrorPage 500 Internal Error Internal Error
2019/03/07 08:37:37 provider.go:382: authorizer reason: no RBAC policy matched

Can you help me?

PS: The authorization in general works. I am able to authorize and access the dashboard at https://console.example.com. My user holds the role cluster-monitoring-view.

x509: certificate signed by unknown authority

Hi,

Lately, we started to test the oauth-proxy with the sidecar demo. We tried the openshift/oauth-proxy:v1.0.0 ~ latest, but we always got the issue below.

2018/06/10 00:41:00 oauthproxy.go:238: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain: refresh:disabled
2018/06/10 00:41:00 http.go:56: HTTP: listening on 127.0.0.1:4180
2018/06/10 00:41:00 http.go:96: HTTPS: listening on [::]:8443
2018/06/10 00:41:49 oauthproxy.go:635: error redeeming code (client:10.131.0.1:55816): Post https://openshift.k8s.OURDOMAIN.com:8443/oauth/token: x509: certificate signed by unknown authority
2018/06/10 00:41:49 oauthproxy.go:434: ErrorPage 500 Internal Error Internal Error

As far as we know, the oauth-proxy would take the /var/run/secrets/kubernetes.io/serviceaccount/ca.crt as the openshift-ca by default. We copied that file out of the container and used OpenSSL to run against the server as below,

openssl s_client -connect openshift.k8s.OURDOMAIN.com:8443 -CAfile ca.crt

The verify return code is 0 and it looked good (if we don't pass the CAfile, the verify return code would be 19 (self signed certificate in certificate chain))

SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 2DEC4D9600F1A1BBF2467391B1D79C7D8C21A1011D0FB6758BFD5D45393DA59E
Session-ID-ctx:
Master-Key: E3A3247A40AE91ED6339D0636A2B918A71F46ED09E8C5AB368338FEDA3B0FD4B528A49140C0463CA18C2A39ADB183244
TLS session ticket:
0000 - 0a 16 e9 90 30 b6 82 50-e8 4a ba 45 a0 1a c8 50 ....0..P.J.E...P
0010 - d3 0d 6a 3e ce a8 7a 49-8f 26 a5 23 6c 9d 9d b6 ..j>..zI.&.#l...
0020 - 7a 30 61 b2 1b 83 f1 fa-1c dc 6e d4 81 61 51 ef z0a.......n..aQ.
0030 - e1 e7 bb e2 03 58 8b 38-0c 12 2d ce 5c f7 92 f3 .....X.8..-....
0040 - 22 01 55 82 c0 79 ad 4e-d5 c8 36 99 9b 15 2c bb ".U..y.N..6...,.
0050 - 74 55 92 24 2c 22 11 4f-bf ae 36 3f b1 00 9c 41 tU.$,".O..6?...A
0060 - 57 e8 bf 58 46 ba cf ac-c3 3a 2d 93 bf ad 87 f6 W..XF....:-.....
0070 - 49 bb f5 10 26 c4 e3 fe- I...&...

Start Time: 1528599113
Timeout: 300 (sec)
Verify return code: 0 (ok)

We kind of hit the roadblock now. Any help would be appreciated!

BTW, here is our OpenShift version information:

openshift v3.7.1+c2ce2c0-1
kubernetes v1.7.6+a08f5eeb62

Steve

Error on oauth-proxy <-> grafana pod with openshift authentication

Hi there,

I'm getting an error trying to put grafana and oauth-proxy as a sidecar container into openshift. I already have prometheus + oauth-proxy as a sidecar and it is working fine, but following the same principle for grafana it fails. Just to clarify: Prometheus works, but not Grafana.

Same config for both oauth-proxies, but the upstreams servers are not the same.

Here more detail:

Error in Grafana's Oauth-Proxy log:

2018/03/01 11:29:47 oauthproxy.go:657: 172.17.0.1:57264 Cookie "_oauth_proxy" not present
2018/03/01 11:29:53 oauthproxy.go:582: error redeeming code (client:172.17.0.1:57264): got 400 from "https://172.17.0.1.nip.io:8443/oauth/token" {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2018/03/01 11:29:53 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error

As you can see, the oauth cookie is not present in the server response; I already checked it in the web browser and it is not present in the response, although it exists in the request.

Services:

  • Grafana
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/scheme: https
      prometheus.io/scrape: "true"
      service.alpha.openshift.io/serving-cert-secret-name: prometheus-grafana-proxy-tls
      service.alpha.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1493021254
    creationTimestamp: 2018-03-01T09:07:45Z
    labels:
      app: prometheus
      service: prometheus-grafana-proxy
    name: prometheus-grafana-proxy
    namespace: prom-01
    resourceVersion: "3592"
    selfLink: /api/v1/namespaces/prom-01/services/prometheus-grafana-proxy
    uid: 019156f7-1d30-11e8-812a-c85b7694ac5a
  spec:
    clusterIP: 172.30.78.23
    ports:
    - name: prometheus-grafana-proxy-443
      port: 443
      protocol: TCP
      targetPort: 8443
    selector:
      app: prometheus
      service: prometheus-grafana-proxy
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
  • Prometheus
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: prometheus-proxy-tls
      service.alpha.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1493021254
    creationTimestamp: 2018-03-01T09:05:09Z
    labels:
      app: prometheus
      service: prometheus-proxy
    name: prometheus-proxy
    namespace: prom-01
    resourceVersion: "3281"
    selfLink: /api/v1/namespaces/prom-01/services/prometheus-proxy
    uid: a49feb4f-1d2f-11e8-812a-c85b7694ac5a
  spec:
    clusterIP: 172.30.234.107
    ports:
    - name: prometheus-proxy-443
      port: 443
      protocol: TCP
      targetPort: 8443
    selector:
      app: prometheus
      service: prometheus-proxy
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Routes:

  • Grafana
apiVersion: v1
items:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    creationTimestamp: 2018-03-01T09:07:46Z
    labels:
      app: prometheus
      service: prometheus-grafana-proxy
    name: prometheus-grafana-proxy
    namespace: prom-01
    resourceVersion: "3594"
    selfLink: /oapi/v1/namespaces/prom-01/routes/prometheus-grafana-proxy
    uid: 01fb8295-1d30-11e8-812a-c85b7694ac5a
  spec:
    host: prometheus-grafana-proxy-prom-01.172.17.0.1.nip.io
    port:
      targetPort: prometheus-grafana-proxy-443
    tls:
      destinationCACertificate: "-----BEGIN COMMENT-----\nThis is an empty PEM file
        created to provide backwards compatibility\nfor reencrypt routes that have
        no destinationCACertificate. This \ncontent will only appear for routes accessed
        via /oapi/v1/routes.\n-----END COMMENT-----\n"
      termination: reencrypt
    to:
      kind: Service
      name: prometheus-grafana-proxy
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2018-03-01T09:07:46Z
        status: "True"
        type: Admitted
      host: prometheus-grafana-proxy-prom-01.172.17.0.1.nip.io
      routerName: router
      wildcardPolicy: None
  • Prometheus
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    creationTimestamp: 2018-03-01T09:05:10Z
    labels:
      app: prometheus
      service: prometheus-proxy
    name: prometheus-proxy
    namespace: prom-01
    resourceVersion: "3283"
    selfLink: /oapi/v1/namespaces/prom-01/routes/prometheus-proxy
    uid: a514653b-1d2f-11e8-812a-c85b7694ac5a
  spec:
    host: prometheus-proxy-prom-01.172.17.0.1.nip.io
    port:
      targetPort: prometheus-proxy-443
    tls:
      destinationCACertificate: "-----BEGIN COMMENT-----\nThis is an empty PEM file
        created to provide backwards compatibility\nfor reencrypt routes that have
        no destinationCACertificate. This \ncontent will only appear for routes accessed
        via /oapi/v1/routes.\n-----END COMMENT-----\n"
      termination: reencrypt
    to:
      kind: Service
      name: prometheus-proxy
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2018-03-01T09:05:10Z
        status: "True"
        type: Admitted
      host: prometheus-proxy-prom-01.172.17.0.1.nip.io
      routerName: router
      wildcardPolicy: None
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

ServiceAccount:

  • Grafana
apiVersion: v1
imagePullSecrets:
- name: prometheus-grafana-proxy-dockercfg-qbzb5
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"prometheus-grafana-proxy"}}'
  creationTimestamp: 2018-03-01T09:05:33Z
  name: prometheus-grafana-proxy
  namespace: prom-01
  resourceVersion: "3375"
  selfLink: /api/v1/namespaces/prom-01/serviceaccounts/prometheus-grafana-proxy
  uid: b284cce5-1d2f-11e8-812a-c85b7694ac5a
secrets:
- name: prometheus-grafana-proxy-token-vrkwb
- name: prometheus-grafana-proxy-dockercfg-qbzb5
  • Prometheus:
apiVersion: v1
imagePullSecrets:
- name: prometheus-proxy-dockercfg-6rdv6
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"prometheus-proxy"}}'
  creationTimestamp: 2018-03-01T09:02:21Z
  name: prometheus-proxy
  namespace: prom-01
  resourceVersion: "2999"
  selfLink: /api/v1/namespaces/prom-01/serviceaccounts/prometheus-proxy
  uid: 4087de7a-1d2f-11e8-812a-c85b7694ac5a
secrets:
- name: prometheus-proxy-dockercfg-6rdv6
- name: prometheus-proxy-token-bh6xf

Rolebinding:

  • Grafana
apiVersion: v1
groupNames: null
kind: RoleBinding
metadata:
  creationTimestamp: 2018-03-01T09:07:04Z
  name: prometheus-grafana-proxy-view
  namespace: prom-01
  resourceVersion: "3523"
  selfLink: /oapi/v1/namespaces/prom-01/rolebindings/prometheus-grafana-proxy-view
  uid: e906301b-1d2f-11e8-812a-c85b7694ac5a
roleRef:
  name: view
subjects:
- kind: ServiceAccount
  name: prometheus-grafana-proxy
  namespace: prom-01
userNames:
- system:serviceaccount:prom-01:prometheus-grafana-proxy
  • Prometheus:
apiVersion: v1
groupNames: null
kind: RoleBinding
metadata:
  creationTimestamp: 2018-03-01T09:03:47Z
  name: prometheus-proxy-view
  namespace: prom-01
  resourceVersion: "3144"
  selfLink: /oapi/v1/namespaces/prom-01/rolebindings/prometheus-proxy-view
  uid: 737631e1-1d2f-11e8-812a-c85b7694ac5a
roleRef:
  name: view
subjects:
- kind: ServiceAccount
  name: prometheus-proxy
  namespace: prom-01
userNames:
- system:serviceaccount:prom-01:prometheus-proxy

DeploymentConfig:

  • Grafana
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: 2018-03-01T09:07:46Z
  generation: 7
  labels:
    app: prometheus
    service: prometheus-grafana-proxy
  name: prometheus-grafana-proxy
  namespace: prom-01
  resourceVersion: "17942"
  selfLink: /oapi/v1/namespaces/prom-01/deploymentconfigs/prometheus-grafana-proxy
  uid: 024e69f0-1d30-11e8-812a-c85b7694ac5a
spec:
  replicas: 1
  selector:
    app: prometheus
    service: prometheus-grafana-proxy
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        service: prometheus-grafana-proxy
    spec:
      containers:
      - args:
        - --provider=openshift
        - --openshift-service-account=prometheus-grafana-proxy
        - --config=/prometheus-grafana-proxy/prometheus-grafana-oauth-proxy.yml
        - --openshift-sar={"namespace":"prom-01","resource":"deploymentconfigs","name":"prometheus-grafana-proxy","verb":"update"}
        - --client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
        - --skip-auth-regex=^/metrics,/api/datasources,/api/dashboards
        image: registry.access.redhat.com/openshift3/oauth-proxy:latest
        imagePullPolicy: Always
        name: prometheus-grafana-proxy
        ports:
        - containerPort: 8443
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: prometheus-grafana-proxy-tls
        - mountPath: /prometheus-grafana-proxy
          name: prometheus-grafana-proxy-oauth-configmap
      - command:
        - ./bin/grafana-server
        image: docker.io/mrsiano/grafana-ocp:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /login
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: prometheus-grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /root/go/src/github.com/grafana/grafana/conf
          name: prometheus-grafana-config-map
        - mountPath: /root/go/src/github.com/grafana/grafana/data
          name: prometheus-grafana-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus-proxy
      serviceAccountName: prometheus-proxy
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: prometheus-grafana-config-map
        name: prometheus-grafana-config-map
      - emptyDir: {}
        name: prometheus-grafana-data
      - configMap:
          defaultMode: 420
          name: prometheus-grafana-proxy-oauth-configmap
        name: prometheus-grafana-proxy-oauth-configmap
      - name: prometheus-grafana-proxy-tls
        secret:
          defaultMode: 420
          secretName: prometheus-grafana-proxy-tls
  test: false
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-03-01T11:29:19Z
    lastUpdateTime: 2018-03-01T11:29:21Z
    message: replication controller "prometheus-grafana-proxy-7" successfully rolled
      out
    reason: NewReplicationControllerAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: 2018-03-01T11:36:19Z
    lastUpdateTime: 2018-03-01T11:36:19Z
    message: Deployment config has minimum availability.
    status: "True"
    type: Available
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 7
  observedGeneration: 7
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
  • Prometheus
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: 2018-03-01T09:05:11Z
  generation: 1
  labels:
    app: prometheus
    service: prometheus-proxy
  name: prometheus-proxy
  namespace: prom-01
  resourceVersion: "3364"
  selfLink: /oapi/v1/namespaces/prom-01/deploymentconfigs/prometheus-proxy
  uid: a58068b9-1d2f-11e8-812a-c85b7694ac5a
spec:
  replicas: 1
  selector:
    app: prometheus
    service: prometheus-proxy
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        service: prometheus-proxy
    spec:
      containers:
      - args:
        - --provider=openshift
        - --openshift-service-account=prometheus-proxy
        - --config=/prometheus-proxy/prometheus-oauth-proxy.yml
        - --openshift-sar={"namespace":"prom-01","resource":"deploymentconfigs","name":"prometheus","verb":"update"}
        image: registry.access.redhat.com/openshift3/oauth-proxy:latest
        imagePullPolicy: Always
        name: prometheus-proxy
        ports:
        - containerPort: 8443
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: prometheus-proxy-tls
        - mountPath: /prometheus-proxy
          name: prometheus-proxy-oauth-configmap
      - image: registry.access.redhat.com/openshift3/prometheus:latest
        imagePullPolicy: IfNotPresent
        name: prometheus
        ports:
        - containerPort: 9090
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /prometheus-rules
          name: prometheus-rules
        - mountPath: /etc/prometheus
          name: prometheus-configmap-volume
        - mountPath: /prometheus
          name: prometheus-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus-proxy
      serviceAccountName: prometheus-proxy
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: prometheus-rules
        name: prometheus-rules
      - configMap:
          defaultMode: 420
          name: prometheus-configmap
        name: prometheus-configmap-volume
      - configMap:
          defaultMode: 420
          name: prometheus-proxy-oauth-configmap
        name: prometheus-proxy-oauth-configmap
      - name: prometheus-proxy-tls
        secret:
          defaultMode: 420
          secretName: prometheus-proxy-tls
      - emptyDir: {}
        name: prometheus-data
  test: false
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-03-01T09:05:30Z
    lastUpdateTime: 2018-03-01T09:05:30Z
    message: Deployment config has minimum availability.
    status: "True"
    type: Available
  - lastTransitionTime: 2018-03-01T09:05:31Z
    lastUpdateTime: 2018-03-01T09:05:31Z
    message: replication controller "prometheus-proxy-1" successfully rolled out
    reason: NewReplicationControllerAvailable
    status: "True"
    type: Progressing
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1

Getting "The client is not authorized to request a token using this method"

I'm trying to integrate the OpenShift OAuth Proxy with the Jaeger Operator, but I'm currently unable to log in as developer:developer. Upon login, this is what I see in the browser:

(screenshot of the error page shown in the browser)

In the container logs, this can be seen:

$ oc logs simplest-6fb4f96f96-kwwng -c oauth-proxy
2018/11/08 15:48:34 provider.go:102: Defaulting client-id to system:serviceaccount:default:simplest-ui-proxy
2018/11/08 15:48:34 provider.go:107: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2018/11/08 15:48:34 provider.go:530: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2018/11/08 15:48:34 provider.go:576: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://192.168.42.155:8443",
  "authorization_endpoint": "https://192.168.42.155:8443/oauth/authorize",
  "token_endpoint": "https://192.168.42.155:8443/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2018/11/08 15:48:34 oauthproxy.go:201: mapping path "/" => upstream "http://localhost:16686/"
2018/11/08 15:48:34 oauthproxy.go:228: OAuthProxy configured for  Client ID: system:serviceaccount:default:simplest-ui-proxy
2018/11/08 15:48:34 oauthproxy.go:238: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2018/11/08 15:48:34 http.go:56: HTTP: listening on 127.0.0.1:4180
2018/11/08 15:48:34 http.go:96: HTTPS: listening on [::]:8443
2018/11/08 15:49:05 oauthproxy.go:635: error redeeming code (client:172.17.0.1:43104): got 400 from "https://192.168.42.155:8443/oauth/token" {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2018/11/08 15:49:05 oauthproxy.go:434: ErrorPage 500 Internal Error Internal Error

This is what the service account looks like in Go (I can provide the YAML version as well, if necessary):

https://github.com/jpkrohling/jaeger-operator/blob/89-Protect-ui-with-OAuth-Proxy/pkg/account/oauth-proxy.go

I'm not sure it's relevant, but this is what the sidecar container looks like:

https://github.com/jpkrohling/jaeger-operator/blob/89-Protect-ui-with-OAuth-Proxy/pkg/inject/oauth-proxy.go#L32

Any ideas on what I might be doing wrong?
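
For reference, the service-account-as-OAuth-client pattern relies on a redirect annotation on the service account, and the Route it names must exist in the same namespace. A minimal sketch of the expected shape, with a placeholder route name:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: simplest-ui-proxy
  annotations:
    # placeholder route name; it must match a Route that actually exposes the UI
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"my-ui-route"}}'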

cluster admin permissions required for header-based authentication

Hi,

We want to authenticate HTTP requests to a service behind the oauth-proxy by sending a bearer token in the Authorization header. That works, following the docs at https://github.com/openshift/oauth-proxy#delegate-authentication-and-authorization-to-openshift-for-infrastructure

The problem is that using --openshift-delegate-urls requires the service account to have a binding to the cluster role system:auth-delegator, and creating that binding requires cluster admin permissions.
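
For reference, the binding in question is a plain ClusterRoleBinding; a minimal sketch, with placeholder names for the proxy's service account and namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-proxy-auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: my-proxy        # placeholder: the service account running oauth-proxy
  namespace: my-project # placeholder: the namespace the proxy runs in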

Is that intentional? Any way around that? Using the oauth-proxy otherwise (for requests coming from a web browser) works fine and does not require cluster-level permissions.

(The use case I'm thinking of is provisioning services with the Ansible Service Broker: the service account running the playbooks will not have that kind of permission.)

ping @maleck13

Can my app reuse a token for requests to itself?

I am running an application behind an oauth proxy that needs to make requests back to itself (frontend needs to talk to backend, both exposed as public endpoints). How can I authenticate the request from the frontend to the backend?

Sign-in message problems

Hitting the prometheus route in OCP 3.7.0-0.143.0 results in a prompt:

"Sign in with a Account"

  1. grammatically, it should be "an account"
  2. Looking at templates.go, it appears there should be a ProviderName insert in the message. Maybe "Sign in with an OpenShift account"?

openshift-delegate-urls does not work as expected

Running oauth-proxy from https://access.redhat.com/containers/#/registry.access.redhat.com/openshift3/oauth-proxy/images/latest seems to misbehave when using openshift-delegate-urls, as access is denied.

The configuration looks as follows:

[foo@master-1 ~]$ oc project test
Already on project "test" on server "https://openshift.example.com:443".
[foo@master-1 ~]$ oc get svc nodejs
NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
nodejs    172.30.126.6   <none>        443/TCP,8778/TCP   4d
[foo@master-1 ~]$ oc get clusterrolebinding cluster-readers
NAME              ROLE              USERS     GROUPS                   SERVICE ACCOUNTS                                                                        SUBJECTS
cluster-readers   /cluster-reader   simon     system:cluster-readers   management-infra/management-admin, default/router, logging/aggregated-logging-fluentd

[foo@master-1 ~]$ oc whoami
simon

[foo@master-0 ~]$ oc export dc/nodejs
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  generation: 1
  labels:
    app: nodejs
  name: nodejs
spec:
  replicas: 1
  selector:
    app: nodejs
    deploymentconfig: nodejs
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nodejs
        deploymentconfig: nodejs
    spec:
      containers:
      - image: 172.30.138.227:5000/test/nodejs@sha256:281d3b11486581d439294f194e22112d063002b37c199ddcac061140964b30bc
        imagePullPolicy: IfNotPresent
        name: nodejs
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --https-address=:8443
        - --provider=openshift
        - --openshift-service-account=nodejs
        - --upstream=http://localhost:8080
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - --validate-url=https://openshift.default.svc.cluster.local/oapi/v1/users/~
        - --redeem-url=https://openshift.default.svc.cluster.local/oauth/token
        - --openshift-review-url=https://openshift.default.svc.cluster.local/oapi/v1/subjectaccessreviews
        - --openshift-delegate-urls={"/":{"resource":"service","verb":"get","namespace":"test","name":"nodejs"}}
        - --cookie-secret-file=/etc/proxy/secrets/session_secret
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: registry.access.redhat.com/openshift3/oauth-proxy@sha256:f3cf64678f5a593d312e0efe4073d9e19313007ed35d1688a3320b6245baaedc
        imagePullPolicy: IfNotPresent
        name: node-oauth-proxy
        ports:
        - containerPort: 8443
          name: public
          protocol: TCP
        resources:
          limits:
            cpu: 5m
            memory: 35Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: proxy-tls
        - mountPath: /etc/proxy/secrets
          name: proxy-secret
        - mountPath: /etc/proxy/htpasswd
          name: prometheus-proxy-htpasswd
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nodejs
      serviceAccountName: nodejs
      terminationGracePeriodSeconds: 30
      volumes:
      - name: proxy-tls
        secret:
          defaultMode: 420
          secretName: application-tls
      - name: proxy-secret
        secret:
          defaultMode: 420
          secretName: application-proxy
      - configMap:
          defaultMode: 420
          name: oauth-proxy-htpasswd
        name: prometheus-proxy-htpasswd
  test: false
  triggers:
  - imageChangeParams:
      automatic: true
      containerNames:
      - nodejs
      from:
        kind: ImageStreamTag
        name: nodejs:latest
        namespace: test
    type: ImageChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - node-oauth-proxy
      from:
        kind: ImageStreamTag
        name: oauth-proxy:latest
        namespace: test
    type: ImageChange
  - type: ConfigChange
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0

When running the curl command below, it actually fails.

[quicklab@master-1 ~]$ curl -v -X GET -H "Authorization: Bearer <token>" -k https://nodejs-test.apps.example.com | more
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* About to connect() to nodejs-test.apps.example.com port 443 (#0)
*   Trying 10.1.1.106...
* Connected to nodejs-test.apps.example.com (10.1.1.106) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* 	subject: CN=router.default.svc
* 	start date: Aug 24 14:46:04 2018 GMT
* 	expire date: Aug 23 14:46:05 2020 GMT
* 	common name: router.default.svc
* 	issuer: CN=openshift-service-serving-signer@1535118586
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: nodejs-test.apps.example.com
> Accept: */*
> Authorization: Bearer <token>
> HTTP/1.1 403 Forbidden
< Set-Cookie: _oauth_proxy=; Path=/; Domain=nodejs-test.apps.example.com; Expires=Fri, 31 Aug 2018 14:40:43 GMT; HttpOnly; Secure
< Date: Fri, 31 Aug 2018 15:40:43 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked

oauth-proxy would report the following error.

2018/08/31 15:40:43 provider.go:382: authorizer reason: User "simon" cannot get service in project "test"

If the user simon has cluster-admin, it works.

Red Hat OpenShift Container Platform version

[foo@master-0 ~]$ oc version
oc v3.6.173.0.128
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://openshift.example.com:443
openshift v3.6.173.0.128
kubernetes v1.6.1+5115d708d7

Checking the logs on the Red Hat OpenShift Container Platform master does not reveal any detail (with the default log level).

'authenticated-emails-file' option not respected by default

Hi,
I just tested the oauth-proxy in a simple application, using the --authenticated-emails-file option. After testing access I noticed that all email addresses are allowed, so you just need a valid OpenShift account. Although this is documented, I still think that just passing 'authenticated-emails-file' should give the expected result of allowing only the given addresses.

I had a look at the code and found that by default all domains ('*') are allowed, not respecting any given 'authenticated-emails-file' option (options.go line 165). Maybe we could check there whether an authenticated-emails-file option was given and, if so, just do a

o.EmailDomains = []string{}

?

oauth-proxy example does not work with 3.10 `oc cluster up`

This might be specifically an OpenShift issue but noting it here:
Bringing up a 3.10 cluster with oc cluster up and running the example deployment contrib/sidecar.yaml, oauth-proxy hits a certificate verification failure connecting to the master when it tries to redeem the token after authentication.

[mrogers@mothra oauth-proxy]$ oc cluster up
....
Login to server ...
Creating initial project "myproject" ...                                                                                                                     
Server Information ...                                                                                                                                       
OpenShift server started.                                                                                                                                    

The server is accessible via web console at:
    https://127.0.0.1:8443                                                                                                                                   

You are logged in as:                                                                                                                                        
    User:     developer                                                                                                                                      
    Password: <any value>                                                                                                                                    

To login as administrator:
    oc login -u system:admin

[mrogers@mothra oauth-proxy]$ cp openshift.local.clusterup/openshift-apiserver/admin.kubeconfig ~/.kube/config
[mrogers@mothra oauth-proxy]$ oc login -u dev -p pass
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

[mrogers@mothra oauth-proxy]$ oc new-project foo
Now using project "foo" on server "https://127.0.0.1:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[mrogers@mothra oauth-proxy]$ oc create -f contrib/sidecar.yaml 
serviceaccount/proxy created
route.route.openshift.io/proxy created
service/proxy created
deployment.apps/proxy created
[mrogers@mothra oauth-proxy]$ oc get all
NAME                         READY     STATUS    RESTARTS   AGE
pod/proxy-5bdfb49876-77w2w   2/2       Running   0          8s

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/proxy   ClusterIP   172.30.132.141   <none>        443/TCP   8s

NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/proxy   1         1         1            1           8s

NAME                               DESIRED   CURRENT   READY     AGE
replicaset.apps/proxy-5bdfb49876   1         1         1         8s

NAME                             HOST/PORT                    PATH      SERVICES   PORT      TERMINATION   WILDCARD
route.route.openshift.io/proxy   proxy-foo.127.0.0.1.nip.io             proxy      <all>     reencrypt     None
[mrogers@mothra oauth-proxy]$ oc status
In project foo on server https://127.0.0.1:8443

https://proxy-foo.127.0.0.1.nip.io (reencrypt) (svc/proxy)
  deployment/proxy deploys openshift/oauth-proxy:latest,openshift/hello-openshift:latest
    deployment #1 running for 36 seconds - 1 pod


1 info identified, use 'oc status -v' to see details.

< access proxy-foo.127.0.0.1.nip.io and authenticate to openshift... >

[mrogers@mothra oauth-proxy]$ oc logs pod/proxy-5bdfb49876-77w2w -c oauth-proxy
2018/07/06 17:57:20 provider.go:98: Defaulting client-id to system:serviceaccount:foo:proxy
2018/07/06 17:57:20 provider.go:103: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2018/07/06 17:57:20 provider.go:526: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2018/07/06 17:57:20 provider.go:572: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://127.0.0.1:8443",
  "authorization_endpoint": "https://127.0.0.1:8443/oauth/authorize",
  "token_endpoint": "https://127.0.0.1:8443/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2018/07/06 17:57:20 oauthproxy.go:201: mapping path "/" => upstream "http://localhost:8080/"
2018/07/06 17:57:20 oauthproxy.go:228: OAuthProxy configured for  Client ID: system:serviceaccount:foo:proxy
2018/07/06 17:57:20 oauthproxy.go:238: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2018/07/06 17:57:20 http.go:56: HTTP: listening on 127.0.0.1:4180
2018/07/06 17:57:20 http.go:96: HTTPS: listening on [::]:8443
2018/07/06 17:58:17 oauthproxy.go:635: error redeeming code (client:172.17.0.1:59502): Post https://127.0.0.1:8443/oauth/token: x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
2018/07/06 17:58:17 server.go:2753: http: TLS handshake error from 127.0.0.1:60896: remote error: tls: bad certificate
2018/07/06 17:58:17 oauthproxy.go:434: ErrorPage 500 Internal Error Internal Error

The certificate being served by the master does contain an IP SAN for 127.0.0.1:

[mrogers@mothra oauth-proxy]$ nmap -p 8443 --script ssl-cert 127.0.0.1

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-06 14:17 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00023s latency).
PORT     STATE SERVICE
8443/tcp open  https-alt
| ssl-cert: Subject: commonName=10.13.129.159
| Subject Alternative Name: DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:openshift, DNS:openshift.default, DNS:openshift.default.svc, DNS:openshift.default.svc.cluster.local, DNS:10.13.129.159, DNS:127.0.0.1, DNS:172.17.0.1, DNS:172.30.0.1, DNS:192.168.124.1, IP Address:10.13.129.159, IP Address:127.0.0.1, IP Address:172.17.0.1, IP Address:172.30.0.1, IP Address:192.168.124.1
| Issuer: commonName=openshift-signer@1530899739
| Public Key type: rsa
| Public Key bits: 2048
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2018-07-06T17:55:38
| Not valid after:  2020-07-05T17:55:39
| MD5:   3616 248d 3dd5 527c 775f 73d8 bc77 22de
|_SHA-1: c1f9 42fc 3aa0 514b 0d5e 0f02 0a99 7fb5 29a2 a7e2

Nmap done: 1 IP address (1 host up) scanned in 0.20 seconds

Using skip-provider-button along with proxy-prefix results in non-working setup

I have an app running on /. When I set -proxy-prefix to , the authentication process works correctly. However, when I also introduce -skip-provider-button, after I perform the login (which seems to work just fine), I am redirected to a URL that omits the / portion, which fails.

Expected: after login, the redirect goes to /.

This seems to be the same issue reported in: bitly#327

Oauth-proxy as Sidecar

Has anyone used nginx as a frontend with oauth-proxy as a sidecar and could share the deployment configuration? I am trying to get the example at the bottom of the readme to work, but I am pretty sure I am having issues because nginx and oauth-proxy are not in the same pod.

Really, the only reason I am using nginx is that I am splitting traffic based on user email; I don't think oauth-proxy can do this via configuration. I wouldn't mind this path either, as it was pretty easy just using oauth-proxy with nginx. I am sure I could make a custom image if someone could point me in the right direction.
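
For what it's worth, the layout the readme example assumes is simply two containers in one pod spec, with oauth-proxy as the externally exposed container forwarding to nginx on localhost. A rough sketch (image names and ports are placeholders, the secret/TLS volumes are omitted, and nginx would have to be configured to listen on localhost:8080):

spec:
  serviceAccountName: proxy
  containers:
  - name: oauth-proxy
    image: openshift/oauth-proxy:latest
    args:
    - --provider=openshift
    - --openshift-service-account=proxy
    - --https-address=:8443
    - --upstream=http://localhost:8080
    - --cookie-secret=SECRET
    ports:
    - containerPort: 8443
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 8080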

Prometheus oauth-proxy throws 'ErrorPage 500 Internal Error' using a certificate signed by a CA.

Summary:
Prometheus oauth-proxy throws 'ErrorPage 500 Internal Error' using a certificate signed by a CA.
Possibly related to BZ1535585

Description:
When a user logs in via the web console and tries to access the Prometheus route, they are redirected to the oauth-proxy login page, which then throws a 500 internal error at login time.

Is there a recommended way of getting custom signed certs (i.e. from a CA) trusted at the cluster level, instead of having to configure it per component/client?
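
As a point of comparison, the per-component approach is to hand extra CAs to the proxy with --openshift-ca (the flag can be passed more than once, as in the spec below). A rough fragment, with placeholder names, showing a custom CA mounted from a ConfigMap:

containers:
- name: prom-proxy
  args:
  - --provider=openshift
  - --openshift-ca=/etc/pki/tls/cert.pem
  - --openshift-ca=/etc/proxy/ca/ca-bundle.crt   # placeholder path for the custom signing CA
  volumeMounts:
  - mountPath: /etc/proxy/ca
    name: custom-ca
volumes:
- name: custom-ca
  configMap:
    name: custom-ca-bundle   # placeholder ConfigMap holding the CA certificate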

oc v3.7.23
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://a.example.domain:8443
openshift v3.7.23
kubernetes v1.7.6+a08f5eeb62

oc describe sts prometheus
// prom-proxy
template:
  metadata:
    creationTimestamp: null
    labels:
      app: prometheus
    name: prometheus
  spec:
    containers:
    - args:
      - -provider=openshift
      - -https-address=:8443
      - -http-address=
      - -email-domain=*
      - -upstream=http://localhost:9090
      - -ssl-insecure-skip-verify
      - -client-id=system:serviceaccount:openshift-metrics:prometheus
      - '-openshift-sar={"resource": "namespaces", "verb": "get", "resourceName":
        "openshift-metrics", "namespace": "openshift-metrics"}'
      - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get",
        "resourceName": "openshift-metrics", "namespace": "openshift-metrics"}}'
      - -tls-cert=/etc/tls/private/tls.crt
      - -tls-key=/etc/tls/private/tls.key
      - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      - -cookie-secret-file=/etc/proxy/secrets/session_secret
      - -openshift-ca=/etc/pki/tls/cert.pem
      - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      - -skip-auth-regex=^/metrics
      image: registry.access.redhat.com/openshift3/oauth-proxy:v3.7
      imagePullPolicy: IfNotPresent
      name: prom-proxy
      ports:
      - containerPort: 8443
        name: web

oc logs prometheus-0 -c prom-proxy -f

2018/02/21 15:40:23 provider.go:476: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2018/02/21 15:40:23 provider.go:522: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://a.example.domain:8443",
  "authorization_endpoint": "https://a.example.domain:8443/oauth/authorize",
  "token_endpoint": "https://a.example.domain:8443/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}

2018/02/21 15:40:23 provider.go:265: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2018/02/21 15:40:23 oauthproxy.go:161: mapping path "/" => upstream "http://localhost:9090"
2018/02/21 15:40:23 oauthproxy.go:184: compiled skip-auth-regex => "^/metrics"
2018/02/21 15:40:23 oauthproxy.go:190: OAuthProxy configured for Client ID: system:serviceaccount:openshift-metrics:prometheus
2018/02/21 15:40:23 oauthproxy.go:200: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain: refresh:disabled
2018/02/21 15:40:23 http.go:96: HTTPS: listening on [::]:8443
2018/02/21 15:42:29 oauthproxy.go:657: 10.1.2.1:52904 Cookie "_oauth_proxy" not present
2018/02/21 15:42:29 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
2018/02/21 15:42:29 oauthproxy.go:657: 10.1.2.1:52904 Cookie "_oauth_proxy" not present
2018/02/21 15:42:29 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
2018/02/21 15:43:46 oauthproxy.go:582: error redeeming code (client:10.1.2.1:53068): Post https://a.example.domain:8443/oauth/token: x509: certificate signed by unknown authority
2018/02/21 15:43:46 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error
2018/02/21 15:43:46 oauthproxy.go:657: 10.1.2.1:53068 Cookie "_oauth_proxy" not present
2018/02/21 15:43:46 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
2018/02/21 15:43:54 oauthproxy.go:657: 10.1.2.1:53068 Cookie "_oauth_proxy" not present
2018/02/21 15:43:54 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"
2018/02/21 15:43:58 oauthproxy.go:582: error redeeming code (client:10.1.2.1:53068): Post https://a.example.domain:8443/oauth/token: x509: certificate signed by unknown authority
2018/02/21 15:43:58 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error
2018/02/21 15:43:59 oauthproxy.go:657: 10.1.2.1:53068 Cookie "_oauth_proxy" not present
2018/02/21 15:43:59 provider.go:345: authorizer reason: User "system:anonymous" cannot get namespaces in project "openshift-metrics"

/etc/origin/master/master-config.yaml
// using OpenIDIdentityProvider for Keycloak and named certificates.

oauthConfig:
  assetPublicURL: https://a.example.domain:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - name: keycloak
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: OpenIDIdentityProvider
      ca: /opt/certs/BAGLRootCA04.crt
      clientID: openshift
      clientSecret: 2feccb92-fa45-4a79-93ff-3af21a408023
      claims:
        id:
        - sub
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      urls:
        authorize: https://keycloak-a.apps.example.domain/auth/realms/openshift/protocol/openid-connect/auth
        token: https://keycloak-a.apps.example.domain/auth/realms/openshift/protocol/openid-connect/token
        userInfo: https://keycloak-a.apps.example.domain/auth/realms/openshift/protocol/openid-connect/userinfo
  masterCA: ca-bundle.crt
  masterPublicURL: https://a.example.domain:8443
  masterURL: https://cluster1.example.domain:8443
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: apps.example.domain
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/a.example.domain.cer
    keyFile: /etc/origin/master/named_certificates/nonprod.key
    names:
    - a.example.domain
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true

@simo5

ssl-insecure-skip-verify is not taken into account

Hi,

ssl-insecure-skip-verify does not seem to be taken into account:

2017/12/01 11:36:19 oauthproxy.go:582: error redeeming code (client:10.129.0.1:45870): Post https://api.ocp.rhlab.me.com/oauth/token: x509: certificate signed by unknown authority

I added the ssl-insecure-skip-verify flag and I still have the issue; oauthproxy.go#L582 does not have the check.

Unable to log in to application behind oauth-proxy when using a public SSL certificate

I am unable to log in to an application (Prometheus) which is protected by oauth-proxy. The public URL of the application (and of the OpenShift console) is served by an SSL certificate signed by a public authority (which is not the Kubernetes CA).

When deploying the same application in a toy environment (using the default router certificate, which is generated by the Kubernetes CA), I have no problem logging in.

However, on the production system, I get a 500 error after submitting my credentials on the OpenShift login page.

These are the logs of my Prometheus pod (oauth-proxy container):

[root@customerdvrs01 ~]# oc logs -f prometheus-core-2-vql0t -c oauth-proxy 
2017/10/22 13:27:32 provider.go:476: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2017/10/22 13:27:32 provider.go:522: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://openshift.customer.domain:8443",
  "authorization_endpoint": "https://openshift.customer.domain:8443/oauth/authorize",
  "token_endpoint": "https://openshift.customer.domain:8443/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2017/10/22 13:27:32 provider.go:265: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2017/10/22 13:27:32 oauthproxy.go:161: mapping path "/" => upstream "http://localhost:9090"
2017/10/22 13:27:32 oauthproxy.go:184: compiled skip-auth-regex => "^/metrics"
2017/10/22 13:27:32 oauthproxy.go:190: OAuthProxy configured for  Client ID: system:serviceaccount:monitoring:prometheus
2017/10/22 13:27:32 oauthproxy.go:200: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2017/10/22 13:27:32 http.go:56: HTTP: listening on :8080
2017/10/22 13:27:50 oauthproxy.go:582: error redeeming code (client:10.128.0.1:44912): Post https://openshift.customer.domain:8443/oauth/token: x509: certificate signed by unknown authority
2017/10/22 13:27:50 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error

From the currently open PRs, I can see that there is an issue with not being able to use the system CA roots at the same time as the kubernetes CA; is my problem related and would merging PR #25 or #27 help?

504 timeout after logging in

I was attempting to deploy Syndesis to OpenShift. This project uses oauth-proxy. When I access the service, I run into a 504 timeout after logging in. I am using RH-SSO for my OpenShift authentication, and in the logs for the oauth-proxy I am seeing the following error:

2018/04/04 17:58:00 oauthproxy.go:635: error redeeming code (client:10.128.0.1:34586): Post https://rhsledcloud.net:8443/oauth/token: dial tcp 13.59.116.22:8443: getsockopt: connection timed out

Any idea what I might be missing in my setup for this to work?

Prevent proxy from starting if the SA does not match

We get the SA name as user input, and process it here:

// all OpenShift service accounts are OAuth clients, use this if we have it
if len(serviceAccount) > 0 {
    if data, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace"); err == nil && len(data) > 0 {
        defaults.ClientID = fmt.Sprintf("system:serviceaccount:%s:%s", strings.TrimSpace(string(data)), serviceAccount)
        log.Printf("Defaulting client-id to %s", defaults.ClientID)
    }
    tokenPath := "/var/run/secrets/kubernetes.io/serviceaccount/token"
    if data, err := ioutil.ReadFile(tokenPath); err == nil && len(data) > 0 {
        defaults.ClientSecret = strings.TrimSpace(string(data))
        log.Printf("Defaulting client-secret to service account token %s", tokenPath)
    }
}

This code assumes the SA provided by user input is the same one that is running the pod (because we use its token as the secret for the SA-based OAuth client). It pulls the SA namespace from the pod's mounted service account files as well.

We need to validate this instead of assuming the values are in agreement. We could do oc get user ~ with the SA token to make sure it matches (but I feel like there must be a better way to tell what SA is running a pod from inside the pod). The more correct thing would be a flag like --use-openshift-service-account=bool that just pulls the correct information from the pod / API.

xref: #60
