runsd's Introduction

runsd

runsd is a drop-in binary for container images running on Google Cloud Run (fully managed). It allows your services to discover each other and authenticate to each other automatically, without you needing to change your code.

It helps you bring existing microservices, for example from Kubernetes, to Cloud Run. It’s not language-specific and works with external tools and binaries.

Goal: This tool exists because Cloud Run does not have built-in service discovery and automatic service-to-service authentication features. The goal is to provide that functionality until the official features ship. The experience is expected to be quite similar, so migrating should be easy once the official features become available.

Note: This project is not an officially supported component of Cloud Run. It is developed as a community effort and provided as-is, without any guarantees.

Features

runsd does its job inside your container, entirely in userspace, and does not need any additional privileges or permissions to work.

runsd feature list

DNS Service Discovery

With runsd, other Cloud Run services in the same GCP project can be resolved using the hostname http://SERVICE_NAME[.REGION[.run.internal]].

The goal of this project is to provide a solution until Cloud Run has an officially supported feature. Therefore, do not use the fully qualified domain name format shown above in your code. Use only these formats:

  • http://<SERVICE_NAME> and
  • http://<SERVICE_NAME>.<REGION>.

runsd service discovery

Automatic Service Authentication

Normally, for Cloud Run services to make requests to each other (for example, between microservices), your program needs to fetch an identity token from the metadata service and set it as a header on the outbound request.

With runsd, this is no longer needed, since authentication is handled out of the box. This means you don't need to change your code when you bring your apps to Cloud Run from other platforms that have name-based DNS resolution (such as Kubernetes or Compute Engine):

Cloud Run authentication before & after
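For reference, here is a minimal sketch in Go of the manual flow described above, i.e. roughly what your code has to do without runsd. The target URL my-backend-xxxxxxxx-uc.a.run.app is a made-up placeholder; the metadata endpoint and Metadata-Flavor header shown are the standard mechanism for fetching an identity token on Cloud Run.

// Sketch of the manual flow without runsd: fetch an identity token from the
// metadata server and attach it to the outbound request.
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Hypothetical target service URL; replace with your service's .run.app URL.
    audience := "https://my-backend-xxxxxxxx-uc.a.run.app"

    // Ask the metadata server to mint an identity token for that audience.
    req, _ := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="+audience, nil)
    req.Header.Set("Metadata-Flavor", "Google")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    token, _ := io.ReadAll(resp.Body)

    // Set the token as a header on the outbound request to the other service.
    out, _ := http.NewRequest("GET", audience+"/", nil)
    out.Header.Set("Authorization", "Bearer "+string(token))
    outResp, err := http.DefaultClient.Do(out)
    if err != nil {
        panic(err)
    }
    defer outResp.Body.Close()
    fmt.Println(outResp.Status)
}

With runsd, all of this bookkeeping is replaced by a plain name-based HTTP call, as shown in the Usage section below.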

Installation

For my tracking purposes, please fill out the form at https://forms.gle/kCgEEiRqrmHhM65g6 if you are using runsd. Your feedback will be important in shaping this feature.

To install runsd in your container, you need to download its binary and prefix your original entrypoint with it.

For example:

ADD https://github.com/ahmetb/runsd/releases/download/<VERSION>/runsd /bin/runsd
RUN chmod +x /bin/runsd
ENTRYPOINT ["runsd", "--", "/app"]

In the example above, change <VERSION> to a version number from the Releases page. Pick a specific version and keep using it until you hit a bug.

After installing runsd, it has no effect while running locally. On Cloud Run, however, you can now query other services by name over http://.

Note that your traffic is still secure, as the request is upgraded to HTTPS before it leaves your container.

Usage

After installing runsd as your new entrypoint, your container can now make requests to other Cloud Run services in the same project directly by name, e.g. http://hello.

Note that:

  • You can use http://hello to connect to a service within the same region.

  • You can use http://hello.us-central1 notation if the service is deployed in another region (but the same project).

  • Do not use https:// or port 443. You need to make requests over plain HTTP on port 80 for runsd to work. (HTTPS is added before your request leaves the container.)
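As a minimal counterpart to the manual-token sketch in the authentication section above (assuming a service named hello exists in the same project; names and paths here are made up), a call with runsd becomes an ordinary HTTP request with no token handling:

// With runsd installed, other Cloud Run services are called by name over plain HTTP.
package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Same region: just the service name.
    resp, err := http.Get("http://hello/")
    if err != nil {
        panic(err)
    }
    resp.Body.Close()
    fmt.Println("hello:", resp.Status)

    // Another region in the same project: SERVICE_NAME.REGION notation.
    resp, err = http.Get("http://hello.us-central1/")
    if err != nil {
        panic(err)
    }
    resp.Body.Close()
    fmt.Println("hello.us-central1:", resp.Status)
}

runsd injects the identity token and upgrades the connection to HTTPS before the request leaves the container.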

Quickstart

You can deploy this sample application to Cloud Run to try querying other private Cloud Run services directly with curl, without tokens and without full .run.app domains.

This sample app has runsd as its entrypoint and it will show you a form that you can use to query other private Cloud Run services easily with curl.

Below, replace <HASH> with the random string part of your Cloud Run URLs (e.g. 'dpyb4duzqq' if the URLs for your project are 'foo-dpyb4duzqq-uc.run.app').

gcloud alpha run deploy curl-app --platform=managed \
   --region=us-central1 --allow-unauthenticated --source=example \
   --set-env-vars=CLOUD_RUN_PROJECT_HASH=<HASH>

Note: Do not forget to delete this service after you try it out, since it gives unauthenticated access to your private services.

Architecture

runsd Architecture Diagram

runsd has a rather hacky architecture; most notably, it does four things:

  1. runsd is the new entrypoint of your container, and it runs your original entrypoint as its subprocess.

  2. runsd updates /etc/resolv.conf of your container with new DNS search domains and sends all DNS queries to localhost:53.

  3. runsd runs a DNS server locally inside your container at localhost:53. It resolves internal hostnames to a local proxy server inside the container (localhost:80) and forwards all other domains to the original DNS resolver.

  4. runsd runs an HTTP proxy server on port 80 inside the container. This server retrieves identity tokens, adds them to the outgoing requests and upgrades the connection to HTTPS.
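To make step 4 more concrete, below is a rough, simplified sketch in Go of what such a local proxy could look like. This is illustrative only, not runsd's actual implementation: it accepts plain-HTTP requests on localhost, maps the short host name to a Cloud Run URL, fetches an identity token for that audience from the metadata server, and re-issues the request over HTTPS with an Authorization header. The resolveRunURL helper and the project-hash placeholder are assumptions made for the sake of the example.

// Illustrative sketch only; runsd's real proxy is more involved.
package main

import (
    "io"
    "log"
    "net/http"
)

// resolveRunURL maps a short service name (e.g. "hello") to a Cloud Run URL.
// The project-hash placeholder below is made up for illustration.
func resolveRunURL(host string) string {
    return "https://" + host + "-xxxxxxxx-uc.a.run.app"
}

// identityToken fetches an ID token for the given audience from the metadata server.
func identityToken(audience string) (string, error) {
    req, _ := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="+audience, nil)
    req.Header.Set("Metadata-Flavor", "Google")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    b, err := io.ReadAll(resp.Body)
    return string(b), err
}

func main() {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        target := resolveRunURL(r.Host)
        token, err := identityToken(target)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        // Re-issue the request over HTTPS with the identity token attached.
        out, err := http.NewRequest(r.Method, target+r.URL.RequestURI(), r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        out.Header = r.Header.Clone()
        out.Header.Set("Authorization", "Bearer "+token)
        resp, err := http.DefaultClient.Do(out)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        for k, v := range resp.Header {
            w.Header()[k] = v
        }
        w.WriteHeader(resp.StatusCode)
        io.Copy(w, resp.Body)
    })
    // The proxy listens inside the container on localhost:80 (step 4 above).
    log.Fatal(http.ListenAndServe("127.0.0.1:80", handler))
}

In this sketch, r.Host for a request to http://hello/ is simply "hello"; handling of ports, regional names, and error cases is omitted for brevity.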

Troubleshooting

By default, runsd does not log anything, in order not to confuse you or interfere with your log collection setup.

If you need more verbose logs, change the entrypoint in your Dockerfile from ENTRYPOINT ["runsd", "--", ...] to:

ENTRYPOINT ["runsd", "-v=5", "--", ...]

You can adjust the number based on how detailed you want the logs to be.

If the logs don't help you troubleshoot the issue, feel free to open an issue on this repository; however, please don't have any expectations about when it will be resolved. Patches and more tests are always welcome.

Limitations and Known Issues

  1. All names like http://NAME will resolve to a Cloud Run URL even if they don't exist. For example, if http://hello doesn't exist, it will still be routed to a URL as if it existed, and you will get an HTTP 404.
  2. Similarly, http://metadata will be assumed to be a Cloud Run service rather than the instance metadata server. To prevent this, use its FQDN metadata.google.internal. with a trailing dot (see the sketch after this list).
  3. No structured logging support; this should not impact you, since the runsd binary is not supposed to log anything except errors by default.
  4. WebSockets, gRPC (incl. streaming) and SSE work. Please file issues if they do not.
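To illustrate limitation 2, here is a tiny sketch of the two hostname forms; only the URL strings matter, and the point (per the limitation above) is that the trailing dot makes the name fully qualified so it is not expanded with runsd's search domains:

package main

import "fmt"

func main() {
    // Without a trailing dot, the single-label host "metadata" is expanded with
    // runsd's search domains and treated as a Cloud Run service name.
    ambiguous := "http://metadata/computeMetadata/v1/project/project-id"

    // With the trailing dot, the name is fully qualified and bypasses the
    // search-domain expansion, reaching the real instance metadata server.
    explicit := "http://metadata.google.internal./computeMetadata/v1/project/project-id"

    fmt.Println(ambiguous)
    fmt.Println(explicit)
}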

This is not an official Google project.

runsd's People

Contributors

ahmetb, asim-tahir, mistic92

runsd's Issues

Structured logging

  1. No logs by default
  2. Leveled logging
  3. Structured logging format (logfmt, json)

Usage without entrypoint subprocess #44

Hey Ahmetb,
Yeah, if your entrypoint is a sophisticated script, it won't prevent the port overlap. You have to find a way to run the nginx process on another port. runsd is a solution that requires an entrypoint change, unfortunately. The repo README explains how the hack works in detail.

The docker-entrypoint.sh file is the same one that ships with nginx:stable-alpine. All I did was change the exposed port 80 to the ports the nginx container will fastcgi_pass to.

So, somehow the container now complains that...
nginx: [emerg] invalid host in upstream "http://environ-docgen:9000" in /etc/nginx/conf.d/nginx.conf:34
...it's the fastcgi_pass directive that tells nginx where to send requests. The destination container is also runsdified.

Please see images showing dev and prod environments.
Personally, I believe this should have a simple solution, since its complaint is straightforward.

Screenshot (4)
Screenshot (5)

The Cloud Run Invoker role is assigned to the gcloud service account (and also the Cloud Run Service Agent role) in use by the container, so, in theory, IAM policy constraints should not be the issue here at all.

Interestingly, omitting the scheme (http://) from the fastcgi_pass directive resolves the issue in the dev (local VS Code) environment, but in the production (Cloud Run) environment there are no complaints, and no service intercommunication either.

Please see the images below for more reference.

Screenshot (6)

Screenshot (7)

Screenshot (8)

...and yes, the cloud run container port was changed to reflect the new (runsdified) container port exposed (8080). The result of that attempt is this...

Screenshot (9)

Dev (local vscode) environment continues to work as expected, shutting down runsdified containers gracefully.

Screenshot (10)

My next question, given the above reconnaissance, is: why does the schemeless fastcgi_pass directive even work? And what is the proper way to do service discovery and communication with runsd on GCP?
I feel like runsd's internal scheme [http->https] redirection is the culprit for the bad request, although I might be wrong.

Unable to call service when going through a VPC connector

I was trying this out with a receiving and invoking service in the same region with the following setup

  • [Receiving + invoking service] All ingress traffic allowed
  • [Receiving service] Authenticated users required
  • [Receiving service] Set IAM policy on receiving service to give the invoking service the invoker role

This works as expected.

Next, I created a VPC connector in the same region and modified the setup

  • Receiving service: Set allowed ingress to be internal only
  • Receiving service: Route all traffic through a VPC connector (it has to call some internal IP address)
  • Invoking service: Route all traffic through the VPC connector as well

That was when it broke.

Cloud Logging showed that the URL was resolved successfully: [proxy] start: GET url=https://svc-hash-region.a.run.app/

However, each call was hit with a proxy timeout thereafter: http: proxy error: dial tcp x.x.x.x:443: i/o timeout

Did I misconfigure something, or could it be the way the request passes through the VPC connector VMs?

failed to fetch metadata token: metadata server responeded with code=500

Hi,

I have deployed two services in the same project to europe-west4 region. One of them is the 'curl' service coming from this repository and it has public access.

The second service is a very basic Node.js application that just returns static HTML. The service name is 'internaldocs'. If I make it publicly accessible, I'm able to get a 200 response.

However, when I make the 'internaldocs' service allow internal traffic and traffic from Cloud Load Balancing only, and try to access it from the 'curl' service by providing the 'http://internaldocs' URL, I get the following error:

$ curl -sSLNv --http2 http://internaldocs

*   Trying ::1:80...
* Connected to internaldocs (::1) port 80 (#0)
 GET / HTTP/1.1
 Host: internaldocs
 User-Agent: curl/7.76.1
 Accept: */*
 Connection: Upgrade, HTTP2-Settings
 Upgrade: h2c
 HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA
 
* Mark bundle as not supporting multiuse
< HTTP/1.1 101 Switching Protocols
< Connection: Upgrade
< Upgrade: h2c
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 500 
< date: Sun, 25 Apr 2021 18:22:07 GMT
< 
{ [98 bytes data]
failed to fetch metadata token: metadata server responeded with code=500 500 Internal Server Error* Connection #0 to host internaldocs left intact

I believe I gave the right permissions to the services by following this document https://cloud.google.com/run/docs/authenticating/service-to-service#gcloud but obviously I'm missing something.

You think it might be a bug or misconfiguration?

Have a nice one!

Usage tracking

Need to track usage at per-project level.
See if we can track this internally (by modifying User-Agent or container entrypoint).

If not, add a warning at startup. Make the user fill out a form, then show them how to silence the warning (likely via an env variable).

FQDNs don't work with a trailing period on Cloud Run

local dig:

$ dig +search A hello.us-central1.run.internal


; <<>> DiG 9.14.12 <<>> +search A hello.us-central1.run.internal
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9556
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;hello.us-central1.run.internal.	IN	A

;; ANSWER SECTION:
hello.us-central1.run.internal.	10 IN	A	127.0.0.1

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Oct 07 02:34:08 UTC 2020
;; MSG SIZE  rcvd: 94

remote dig:

$ dig +search A hello.us-central1.run.internal


; <<>> DiG 9.16.6 <<>> +search A hello.us-central1.run.internal
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27459
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;hello.us-central1.run.internal.google.internal.	IN A

;; AUTHORITY SECTION:
internal.		30	IN	SOA	ns.global.gcedns-prod.internal. cloud-dns-hostmaster.google.com. 2015030600 7200 3600 24796800 5

;; Query time: 3 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Oct 07 02:34:03 UTC 2020
;; MSG SIZE  rcvd: 169

both have NDOTS configured to 4

local log:

I1007 02:34:08.533319       1 dns.go:50] [dns] type=A name=hello.us-central1.run.internal.us-central1.run.internal. dots=7
I1007 02:34:08.533364       1 dns.go:58] [dns] type=A name=hello.us-central1.run.internal.us-central1.run.internal. is too short or long (need ndots=4; got=7), nxdomain
I1007 02:34:08.533916       1 dns.go:50] [dns] type=A name=hello.us-central1.run.internal.run.internal. dots=6
I1007 02:34:08.533961       1 dns.go:58] [dns] type=A name=hello.us-central1.run.internal.run.internal. is too short or long (need ndots=4; got=6), nxdomain
I1007 02:34:08.534366       1 dns.go:50] [dns] type=A name=hello.us-central1.run.internal. dots=4
I1007 02:34:08.534407       1 dns.go:77] [dns] MATCH type=A name=hello.us-central1.run.internal.

remote log:

I1007 02:34:03.692093 1 dns.go:50] [dns] type=A name=hello.us-central1.run.internal.us-central1.run.internal. dots=7
I1007 02:34:03.692121 1 dns.go:58] [dns] type=A name=hello.us-central1.run.internal.us-central1.run.internal. is too short or long (need ndots=4; got=7), nxdomain
I1007 02:34:03.692628 1 dns.go:50] [dns] type=A name=hello.us-central1.run.internal.run.internal. dots=6
I1007 02:34:03.692650 1 dns.go:58] [dns] type=A name=hello.us-central1.run.internal.run.internal. is too short or long (need ndots=4; got=6), nxdomain
I1007 02:34:03.692950 1 dns.go:108] [dns] recursing type=A name=hello.us-central1.run.internal.google.internal.
I1007 02:34:03.696508 1 dns.go:118] [dns] recursed type=A name=hello.us-central1.run.internal.google.internal. resp_code=0

For some reason, the remote one skips trying the 4-dot version.

local resolv conf:

nameserver 127.0.0.1
nameserver ::1
search us-central1.run.internal run.internal
options ndots:4

remote resolv.conf:

nameserver 127.0.0.1
nameserver ::1
search us-central1.run.internal run.internal google.internal.
options ndots:4

(the only difference is google.internal. domain being added).

This was the case also before removing trailing periods from search domains.

Project state

Hello there 👋

I'm wondering about the state of this project. It looks like development stopped roughly 2 years ago and there was never a stable release, but there is no information about it.

Is it not needed anymore because GCP built this in, or is there some successor?

I'd love to hear something about it.

Failing to integrate runsd into a dockerized next.js app which is deployed on cloud run due to error finding executable "runsd"

I have a dockerized Next.js application using the initial Dockerfile (shared below), which works fine when I build an image and deploy it to GCP's Cloud Run.

Recently I needed to connect to another Node.js API hosted on another private Cloud Run service, so after configuring proper access I started to integrate runsd as usual, as you can see in the Dockerfile with runsd (shared below).

Both Dockerfiles (with and without runsd) build correctly with Docker on my local device and in the cloud, but the one with runsd, while starting fine on my local device, fails to start in the cloud with the following error:

terminated: Application failed to start: kernel init: cannot resolve init executable: error finding executable "runsd" in PATH [/usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin]: no such file or directory

Note: I pulled the Docker image that was built and fails to start in the cloud down to my local machine and managed to run it fine with no errors, which is driving me crazy.

Can you advise me if you have other ideas that I can try to fix this issue or at least figure out why it fails?

Initial Dockerfile (works fine)

FROM node:18-alpine AS base

# Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi


# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# If using npm comment out above and use below instead
# RUN npm run build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

CMD ["node", "server.js"]

Dockerfile with runsd (Fail to start)

FROM node:18-alpine AS base
ADD https://github.com/ahmetb/runsd/releases/download/v0.0.0-rc.15/runsd /bin/runsd
RUN chmod +x /bin/runsd
# Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi


# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# If using npm comment out above and use below instead
# RUN npm run build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

ENTRYPOINT ["runsd", "--", "node", "server.js"]

Requests to non-Cloud Run URLs fail with EAI_AGAIN

Hi, I'm trying to call a non-Cloud Run URL. I tried both http and https, but every time I'm getting

getaddrinfo EAI_AGAIN platform.dev.xxxxxxx.com"

Is it possible to make requests to an external URL? In this case it's hosted on GKE but with an external domain connected.

Running the subprocess as a different user

@ahmetb As mentioned before, runsd is really nice and native support in Cloud Run would be a great addition, IMHO. We encountered one aspect though which we would like to challenge here.

In general, we're using a non-privileged user within our container images, and the actual service is started under this user. Because runsd needs root permissions in order to change resolv.conf, we can't follow this approach when using runsd.

Do you think it would be "in scope" to pass the uid and gid to runsd so that it spawns the application respecting those two values?

Curious to hear your thoughts on this and – as mentioned above – 🤞🏽 for having runsd natively in Cloud Run 🙂

DNS issue resolving external domains

I just tried to use runsd with GCP tracing and got this error:

@google-cloud/trace-agent ERROR TraceWriter#publish: Received error  while publishing traces to cloudtrace.googleapis.com: FetchError: request to https://cloudtrace.googleapis.com/v1/projects/yosh-dev/traces failed, reason: getaddrinfo EAI_AGAIN cloudtrace.googleapis.com

I assume the reason is that it's using https while for runsd it should be http. Is there a way to make it work without changing URLs in the client object?

Can't get proxy to work with gRPC server

I have both a gRPC server and client running on Cloud Run, and the communication between the two of them works fine when calling my-service-name-hash-ez.run.app:443 and providing grpc.credentials.createSsl() to the client; that's because Cloud Run exposes the public service over HTTPS.

When I make them private and try to add runsd to handle the authentication, calling my-service-name:80 this time, I get this error:

2022-10-06 20:01:52.298 CESTD 2022-10-06T18:01:51.898Z | subchannel | (21) ::1:80 connection closed with error 68968495794112:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:

If I use grpc.credentials.createInsecure() instead (now that the request is made to :80 instead of :443, it gets proxied), I get this error:

http: proxy error: x509: certificate signed by unknown authority

I've already verified that the receiving Cloud Run service has the principal of the caller for the role Cloud Run Invoker.

Problem with Listening gRPC Requests Over HTTP on Cloud Run

I am not quite sure whether this case was handled by runsd before, but now there is an issue with or without runsd.

I was running my gRPC services on Cloud Run without any problem. But today I realized they are no longer working over HTTP, including services that haven't changed in a long time.

The exception is: "The request :scheme header 'https' does not match the transport scheme 'http'."

The services receive requests over HTTP and use the binary from https://github.com/ahmetb/runsd as a parent process.

So, is there any change on the Cloud Run side, or is there anything that I am missing?

If I change the code to receive requests over HTTPS, they will probably work (not tested yet). But that is not the point; they were running without any issue before.

I implemented Program.cs and the Dockerfile as explained at https://cloud.google.com/run/docs/quickstarts/build-and-deploy/c-sharp and that is not working either.


Automatic authentication and how that works

Hi! So I have two Cloud Run services going, both in authenticated mode, and one of them uses the other service.
This tool talks about automatic authentication.

Does this work by having the tool add the auth header necessary to reach the service?
Does it do that through a Bearer token, or how?

Because right now, it does not work out of the box.
I have to grant the service account invoker rights on the service that is consumed by the other service to get this to work; otherwise I get 403 Forbidden.

Is there anything I need to configure to get authentication to work properly?

I am using v0.0.0-rc.15 and followed the getting started instructions.

Rename project

runsd is a better name:

  • shorter
  • more memorable
  • it's clear it's unofficial

Permission denied

Hi,

I am trying to get this running but I get permission denied right away when the container tries to start.

F1204 02:17:16.218653       1 main.go:204] open /etc/resolv.conf: permission denied"

Did Cloud Run change something so this tool doesn't work anymore?

Incremental Rollout

We're trying out this runtime modification on a select number of services first. Do we need to apply the change to all containers, or only to the source container that calls other services using the old method?

Correct or support Triggers for target Cloud Run service?

Hello ahmetb, thanks for providing support for this project.

I would like to know which Cloud Run trigger settings I should be configuring on my target services.

For example, originally we used Ingress configured to "Allow all traffic" and Authentication configured to "Allow unauthenticated invocations", which I know, for obvious reasons, is very bad.

With this project/module/runsd, we are now able to configure "Require authentication" on the target-service (which is a big improvement)

However, when we tried to configure the Ingress to "Internal traffic only", we get a "403 Access forbidden - that's all we know" error.

The target service is requested via an nginx reverse-proxy service (our frontend app), so we should be correct in assuming that traffic between the source and target services counts as "internal traffic".

Or are we missing something / some core-concept?

Edit: additionally, from what I can tell, the target service didn't even receive the request; the source service (our frontend nginx service) just logged a 403, and nothing else.

Usage without entrypoint subprocess

This is more of a question than an issue.
So, it appears that...
1 main.go:104] specify subprocess as positional args, e.g: '/runsd -- python3 server.py'
...meaning that images without an explicit entrypoint (i.e. those using the default docker-entrypoint.sh) will always complain (and fail).

Also,
1 main.go:284] ipv6 stack not available: listen tcp6 [::1]:0: bind: cannot assign requested address
...meaning the runsd executable may not be usable on lightweight images (for instance, Alpine images).

Question:
How can runsd be used without explicitly setting an entrypoint subprocess command?
I believe this is particularly useful when using official images that rely on the default Docker entrypoint.

An example showing how to run runsd on (for instance) nginx:stable-alpine would definitely clarify my question.

nginx:stable-alpine exposes port 80 in its Dockerfile, causing this...
1 main.go:90] your Cloud Run application is set to run on PORT=80, this conflicts with runsd
...and this happens even after explicitly setting the entrypoint subprocess to the default docker-entrypoint.sh in the Dockerfile ENTRYPOINT.

Shared VPC

Does runsd support Shared VPC using a VPC Connector?

What am I doing wrong when I'm getting 403 Forbidden?

Screenshot 2021-04-22 at 11 00 57

Some context

I'm running a GraphQL API as an unauthenticated service.
That GraphQL service needs to call an authenticated service in the same Cloud Run project.
I've added my project hash as an env var to the service.

I'm adding runsd to the container, and it runs fine; I reach the container and get a response saying the URL is forbidden.
Screenshot 2021-04-22 at 11 02 50

Could the problem be sub-URLs? Or how can I go about debugging where the issue might be?

20+ seconds to resolve/connect to MongoDB atlas cluster

We've got an issue on cold starts with Cloud Run where it's taking 20 seconds to connect to our Mongo cluster. Removing runsd results in ~25ms connection times.

The issue, I believe, has more to do with the way MongoDB Atlas handles networking. I am as far from a networking expert as you can get, so I can't comment on the specifics.

The options before us are the following

  • Remove runsd and implement our own API-URL discovery/authentication solution
  • Disable the DNS server and keep the proxy, so we only need to handle discovery ourselves

Ideally, and without asking you to spend too much time investigating a very niche issue, is there any way to configure wildcard domains that bypass the normally fantastic DNS resolution you've built in?

I'm not sure what information is useful to you here, so please let me know if there's anything I can do to help.

Feedback on rc13 run as user

Having tested it, here are some findings:

Here we are trying to run with custom user = "node"

  • Running with no other modification other than adding the "-user=node" flag results in a permission denied on runsd itself
  • Adding a chown node:node /bin/runsd lets it run, but then runsd quits as it has no permission to modify resolv.conf: F0505 07:19:47.187954 1 main.go:204] open /etc/resolv.conf: permission denied

For reference, stage 2 of our Dockerfile looks like:

# Stage 2
FROM node:14-alpine3.12

# Add https://github.com/ahmetb/runsd for auth and service discovery in cloud run services
ADD https://github.com/ahmetb/runsd/releases/download/v0.0.0-rc.13/runsd /bin/runsd
RUN chmod +x /bin/runsd && chown node:node /bin/runsd

RUN apk update --no-cache && apk upgrade --no-cache
RUN apk add "musl>=1.1.24-r10" "musl-utils>=1.1.24-r10"

ENV HOST 0.0.0.0
WORKDIR /home/node

COPY --chown=node:node --from=builder /home/node .

USER node

CMD ["runsd", "-user=node", "--", "yarn", "start"]

runsd plugin for Dapr?

There is an interesting project, Dapr, that provides some common interfaces for typical microservices work. It's not crazy unique, but it has potential.

Like runsd, dapr uses a sidecar architecture. They have "Pluggable service discovery" but I don't see something that works super well with Cloud Run fully managed right out of the box. I suppose I could make it work with GCP Service Directory.

But then, I thought it should be pretty easy for you to drop runsd in as a plugin. I mean, the basic DNS package they have looks like:

func (k *resolver) ResolveID(req nameresolution.ResolveRequest) (string, error) {
	return fmt.Sprintf("%s-dapr.%s.svc:%d", req.ID, req.Namespace, req.Port), nil
}

I'm throwing this out there because I do not currently have time to dive into Go and figure out how to add this as a plugin there. And while I'm not a huge fan of ORMs, I can appreciate a lot of the abstraction Dapr is offering.

Ability to run on Compute Engine

Will it be possible to run runsd as a process on a Compute Engine VM in Google Cloud?

I have tried to start the binary on a VM with the following parameters, and it looks like it is starting, but I can't connect to any services. I get a 502: Bad Gateway error when I try to connect to the services.

./runsd -v=5 -gcp_region=europe-west4 -gcp_project_hash=[my-project-hash] -- wget http://[my-cloud-run-service]/hello

Subprocess exits

Hi,

I'm trying to run Kong Gateway (https://hub.docker.com/layers/kong/library/kong/2.1.2/images/sha256-4b32def28f08747719387529dc59a880c8fd09adf35ac368444ce79263903620?context=explore) under runsd.

The container works if we start it without runsd. If we use runsd, we get the following:

I0430 10:21:39.543587       1 main.go:81] starting runsd version=0.0.0-rc.12 commit=2b7437bb12f6f0e15495062d189bfd06fb80091d pid=1
I0430 10:21:39.543964       1 main.go:115] on cloudrun: false
I0430 10:21:39.543978       1 main.go:144] skipping dns servers initialization
I0430 10:21:39.543987       1 main.go:200] skipping http proxy server initialization
I0430 10:21:39.543995       1 main.go:229] starting subprocess. cmd="./cloudrun.sh" argv=[]string(nil)
2021/04/30 10:21:39 [verbose] Kong: 2.4.0
2021/04/30 10:21:39 [debug] ngx_lua: 10019
2021/04/30 10:21:39 [debug] nginx: 1019003
2021/04/30 10:21:39 [debug] Lua: LuaJIT 2.1.0-beta3
2021/04/30 10:21:39 [verbose] no config file found at /etc/kong/kong.conf
2021/04/30 10:21:39 [verbose] no config file found at /etc/kong.conf
2021/04/30 10:21:39 [verbose] no config file, skip loading
2021/04/30 10:21:39 [debug] reading environment variables
2021/04/30 10:21:39 [debug] KONG_PLUGINS ENV found with "bundled"
2021/04/30 10:21:39 [debug] KONG_DECLARATIVE_CONFIG ENV found with "/data/main/kong.yaml"
2021/04/30 10:21:39 [debug] KONG_DATABASE ENV found with "off"
2021/04/30 10:21:39 [debug] admin_access_log = "logs/admin_access.log"
2021/04/30 10:21:39 [debug] admin_error_log = "logs/error.log"
2021/04/30 10:21:39 [debug] admin_listen = {"127.0.0.1:8001 reuseport backlog=16384","127.0.0.1:8444 http2 ssl reuseport backlog=16384"}
2021/04/30 10:21:39 [debug] admin_ssl_cert = {}
2021/04/30 10:21:39 [debug] admin_ssl_cert_key = {}
2021/04/30 10:21:39 [debug] anonymous_reports = true
2021/04/30 10:21:39 [debug] cassandra_contact_points = {"127.0.0.1"}
2021/04/30 10:21:39 [debug] cassandra_data_centers = {"dc1:2","dc2:3"}
2021/04/30 10:21:39 [debug] cassandra_keyspace = "kong"
2021/04/30 10:21:39 [debug] cassandra_lb_policy = "RequestRoundRobin"
2021/04/30 10:21:39 [debug] cassandra_port = 9042
2021/04/30 10:21:39 [debug] cassandra_read_consistency = "ONE"
2021/04/30 10:21:39 [debug] cassandra_refresh_frequency = 60
2021/04/30 10:21:39 [debug] cassandra_repl_factor = 1
2021/04/30 10:21:39 [debug] cassandra_repl_strategy = "SimpleStrategy"
2021/04/30 10:21:39 [debug] cassandra_schema_consensus_timeout = 10000
2021/04/30 10:21:39 [debug] cassandra_ssl = false
2021/04/30 10:21:39 [debug] cassandra_ssl_verify = false
2021/04/30 10:21:39 [debug] cassandra_timeout = 5000
2021/04/30 10:21:39 [debug] cassandra_username = "kong"
2021/04/30 10:21:39 [debug] cassandra_write_consistency = "ONE"
2021/04/30 10:21:39 [debug] client_body_buffer_size = "8k"
2021/04/30 10:21:39 [debug] client_max_body_size = "0"
2021/04/30 10:21:39 [debug] client_ssl = false
2021/04/30 10:21:39 [debug] cluster_control_plane = "127.0.0.1:8005"
2021/04/30 10:21:39 [debug] cluster_data_plane_purge_delay = 1209600
2021/04/30 10:21:39 [debug] cluster_listen = {"0.0.0.0:8005"}
2021/04/30 10:21:39 [debug] cluster_mtls = "shared"
2021/04/30 10:21:39 [debug] cluster_ocsp = "off"
2021/04/30 10:21:39 [debug] database = "off"
2021/04/30 10:21:39 [debug] db_cache_ttl = 0
2021/04/30 10:21:39 [debug] db_cache_warmup_entities = {"services"}
2021/04/30 10:21:39 [debug] db_resurrect_ttl = 30
2021/04/30 10:21:39 [debug] db_update_frequency = 5
2021/04/30 10:21:39 [debug] db_update_propagation = 0
2021/04/30 10:21:39 [debug] declarative_config = "/data/main/kong.yaml"
2021/04/30 10:21:39 [debug] dns_error_ttl = 1
2021/04/30 10:21:39 [debug] dns_hostsfile = "/etc/hosts"
2021/04/30 10:21:39 [debug] dns_no_sync = false
2021/04/30 10:21:39 [debug] dns_not_found_ttl = 30
2021/04/30 10:21:39 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2021/04/30 10:21:39 [debug] dns_resolver = {}
2021/04/30 10:21:39 [debug] dns_stale_ttl = 4
2021/04/30 10:21:39 [debug] error_default_type = "text/plain"
2021/04/30 10:21:39 [debug] go_plugins_dir = "off"
2021/04/30 10:21:39 [debug] go_pluginserver_exe = "/usr/local/bin/go-pluginserver"
2021/04/30 10:21:39 [debug] headers = {"server_tokens","latency_tokens"}
2021/04/30 10:21:39 [debug] host_ports = {}
2021/04/30 10:21:39 [debug] kic = false
2021/04/30 10:21:39 [debug] log_level = "notice"
2021/04/30 10:21:39 [debug] lua_package_cpath = ""
2021/04/30 10:21:39 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2021/04/30 10:21:39 [debug] lua_socket_pool_size = 30
2021/04/30 10:21:39 [debug] lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] lua_ssl_trusted_certificate = {}
2021/04/30 10:21:39 [debug] lua_ssl_verify_depth = 1
2021/04/30 10:21:39 [debug] mem_cache_size = "128m"
2021/04/30 10:21:39 [debug] nginx_admin_client_body_buffer_size = "10m"
2021/04/30 10:21:39 [debug] nginx_admin_client_max_body_size = "10m"
2021/04/30 10:21:39 [debug] nginx_admin_directives = {{name="client_max_body_size",value="10m"},{name="client_body_buffer_size",value="10m"}}
2021/04/30 10:21:39 [debug] nginx_daemon = "on"
2021/04/30 10:21:39 [debug] nginx_events_directives = {{name="worker_connections",value="auto"},{name="multi_accept",value="on"}}
2021/04/30 10:21:39 [debug] nginx_events_multi_accept = "on"
2021/04/30 10:21:39 [debug] nginx_events_worker_connections = "auto"
2021/04/30 10:21:39 [debug] nginx_http_client_body_buffer_size = "8k"
2021/04/30 10:21:39 [debug] nginx_http_client_max_body_size = "0"
2021/04/30 10:21:39 [debug] nginx_http_directives = {{name="ssl_session_tickets",value="on"},{name="ssl_session_timeout",value="1d"},{name="lua_ssl_protocols",value="TLSv1.1 TLSv1.2 TLSv1.3"},{name="client_max_body_size",value="0"},{name="client_body_buffer_size",value="8k"},{name="ssl_dhparam",value="ffdhe2048"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_prefer_server_ciphers",value="off"}}
2021/04/30 10:21:39 [debug] nginx_http_lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] nginx_http_ssl_dhparam = "ffdhe2048"
2021/04/30 10:21:39 [debug] nginx_http_ssl_prefer_server_ciphers = "off"
2021/04/30 10:21:39 [debug] nginx_http_ssl_protocols = "TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] nginx_http_ssl_session_tickets = "on"
2021/04/30 10:21:39 [debug] nginx_http_ssl_session_timeout = "1d"
2021/04/30 10:21:39 [debug] nginx_http_status_directives = {}
2021/04/30 10:21:39 [debug] nginx_http_upstream_directives = {}
2021/04/30 10:21:39 [debug] nginx_main_daemon = "on"
2021/04/30 10:21:39 [debug] nginx_main_directives = {{name="worker_processes",value="auto"},{name="daemon",value="on"},{name="worker_rlimit_nofile",value="auto"}}
2021/04/30 10:21:39 [debug] nginx_main_worker_processes = "auto"
2021/04/30 10:21:39 [debug] nginx_main_worker_rlimit_nofile = "auto"
2021/04/30 10:21:39 [debug] nginx_optimizations = true
2021/04/30 10:21:39 [debug] nginx_proxy_directives = {{name="real_ip_header",value="X-Real-IP"},{name="real_ip_recursive",value="off"}}
2021/04/30 10:21:39 [debug] nginx_proxy_real_ip_header = "X-Real-IP"
2021/04/30 10:21:39 [debug] nginx_proxy_real_ip_recursive = "off"
2021/04/30 10:21:39 [debug] nginx_sproxy_directives = {}
2021/04/30 10:21:39 [debug] nginx_status_directives = {}
2021/04/30 10:21:39 [debug] nginx_stream_directives = {{name="ssl_session_tickets",value="on"},{name="ssl_session_timeout",value="1d"},{name="ssl_dhparam",value="ffdhe2048"},{name="lua_ssl_protocols",value="TLSv1.1 TLSv1.2 TLSv1.3"},{name="ssl_protocols",value="TLSv1.2 TLSv1.3"},{name="ssl_prefer_server_ciphers",value="off"}}
2021/04/30 10:21:39 [debug] nginx_stream_lua_ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] nginx_stream_ssl_dhparam = "ffdhe2048"
2021/04/30 10:21:39 [debug] nginx_stream_ssl_prefer_server_ciphers = "off"
2021/04/30 10:21:39 [debug] nginx_stream_ssl_protocols = "TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] nginx_stream_ssl_session_tickets = "on"
2021/04/30 10:21:39 [debug] nginx_stream_ssl_session_timeout = "1d"
2021/04/30 10:21:39 [debug] nginx_supstream_directives = {}
2021/04/30 10:21:39 [debug] nginx_upstream_directives = {}
2021/04/30 10:21:39 [debug] nginx_worker_processes = "auto"
2021/04/30 10:21:39 [debug] pg_database = "kong"
2021/04/30 10:21:39 [debug] pg_host = "127.0.0.1"
2021/04/30 10:21:39 [debug] pg_max_concurrent_queries = 0
2021/04/30 10:21:39 [debug] pg_port = 5432
2021/04/30 10:21:39 [debug] pg_ro_ssl = false
2021/04/30 10:21:39 [debug] pg_ro_ssl_verify = false
2021/04/30 10:21:39 [debug] pg_semaphore_timeout = 60000
2021/04/30 10:21:39 [debug] pg_ssl = false
2021/04/30 10:21:39 [debug] pg_ssl_verify = false
2021/04/30 10:21:39 [debug] pg_timeout = 5000
2021/04/30 10:21:39 [debug] pg_user = "kong"
2021/04/30 10:21:39 [debug] plugins = {"bundled","unl-authenticator"}
2021/04/30 10:21:39 [debug] pluginserver_names = {}
2021/04/30 10:21:39 [debug] port_maps = {}
2021/04/30 10:21:39 [debug] prefix = "/usr/local/kong/"
2021/04/30 10:21:39 [debug] proxy_access_log = "logs/access.log"
2021/04/30 10:21:39 [debug] proxy_error_log = "logs/error.log"
2021/04/30 10:21:39 [debug] proxy_listen = {"0.0.0.0:8000 reuseport backlog=16384","0.0.0.0:8443 http2 ssl reuseport backlog=16384"}
2021/04/30 10:21:39 [debug] real_ip_header = "X-Real-IP"
2021/04/30 10:21:39 [debug] real_ip_recursive = "off"
2021/04/30 10:21:39 [debug] role = "traditional"
2021/04/30 10:21:39 [debug] ssl_cert = {}
2021/04/30 10:21:39 [debug] ssl_cert_key = {}
2021/04/30 10:21:39 [debug] ssl_cipher_suite = "intermediate"
2021/04/30 10:21:39 [debug] ssl_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
2021/04/30 10:21:39 [debug] ssl_dhparam = "ffdhe2048"
2021/04/30 10:21:39 [debug] ssl_prefer_server_ciphers = "on"
2021/04/30 10:21:39 [debug] ssl_protocols = "TLSv1.1 TLSv1.2 TLSv1.3"
2021/04/30 10:21:39 [debug] ssl_session_tickets = "on"
2021/04/30 10:21:39 [debug] ssl_session_timeout = "1d"
2021/04/30 10:21:39 [debug] status_access_log = "off"
2021/04/30 10:21:39 [debug] status_error_log = "logs/status_error.log"
2021/04/30 10:21:39 [debug] status_listen = {"off"}
2021/04/30 10:21:39 [debug] status_ssl_cert = {}
2021/04/30 10:21:39 [debug] status_ssl_cert_key = {}
2021/04/30 10:21:39 [debug] stream_listen = {"off"}
2021/04/30 10:21:39 [debug] trusted_ips = {}
2021/04/30 10:21:39 [debug] untrusted_lua = "sandbox"
2021/04/30 10:21:39 [debug] untrusted_lua_sandbox_environment = {}
2021/04/30 10:21:39 [debug] untrusted_lua_sandbox_requires = {}
2021/04/30 10:21:39 [debug] upstream_keepalive_idle_timeout = 60
2021/04/30 10:21:39 [debug] upstream_keepalive_max_requests = 100
2021/04/30 10:21:39 [debug] upstream_keepalive_pool_size = 60
2021/04/30 10:21:39 [debug] worker_consistency = "strict"
2021/04/30 10:21:39 [debug] worker_state_update_frequency = 5
2021/04/30 10:21:39 [verbose] prefix in use: /usr/local/kong
2021/04/30 10:21:39 [verbose] preparing nginx prefix directory at /usr/local/kong
2021/04/30 10:21:39 [verbose] SSL enabled on 
2021/04/30 10:21:39 [verbose] generating /usr/local/kong/ssl/kong-default.crt SSL certificate (
2021/04/30 10:21:39 [verbose] generating /usr/local/kong/ssl/kong-default-ecdsa.crt SSL certificate (
2021/04/30 10:21:39 [verbose] SSL enabled on 
2021/04/30 10:21:39 [verbose] generating /usr/local/kong/ssl/admin-kong-default.crt SSL certificate (
2021/04/30 10:21:39 [verbose] generating /usr/local/kong/ssl/admin-kong-default-ecdsa.crt SSL certificate (
2021/04/30 10:21:39 [verbose] generating ffdhe2048 DH parameters
2021/04/30 10:21:39 [debug] searching for OpenResty 'nginx' executable
2021/04/30 10:21:39 [debug] /usr/local/openresty/nginx/sbin/nginx -v: 'nginx version: openresty/1.19.3.1'
2021/04/30 10:21:39 [debug] found OpenResty 'nginx' executable at /usr/local/openresty/nginx/sbin/nginx
2021/04/30 10:21:39 [debug] testing nginx configuration: KONG_NGINX_CONF_CHECK=true /usr/local/openresty/nginx/sbin/nginx -t -p /usr/local/kong -c nginx.conf
2021/04/30 10:21:39 [debug] sending signal to pid at: /usr/local/kong/pids/nginx.pid
2021/04/30 10:21:39 [debug] kill -0 `cat /usr/local/kong/pids/nginx.pid` >/dev/null 2>&1
2021/04/30 10:21:39 [debug] loading subsystems migrations...
2021/04/30 10:21:39 [verbose] retrieving cache schema state...
2021/04/30 10:21:39 [verbose] schema state retrieved
2021/04/30 10:21:39 [debug] searching for OpenResty 'nginx' executable
2021/04/30 10:21:39 [debug] /usr/local/openresty/nginx/sbin/nginx -v: 'nginx version: openresty/1.19.3.1'
2021/04/30 10:21:39 [debug] found OpenResty 'nginx' executable at /usr/local/openresty/nginx/sbin/nginx
2021/04/30 10:21:39 [debug] sending signal to pid at: /usr/local/kong/pids/nginx.pid
2021/04/30 10:21:39 [debug] kill -0 `cat /usr/local/kong/pids/nginx.pid` >/dev/null 2>&1
2021/04/30 10:21:39 [debug] starting nginx: /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf
2021/04/30 10:21:40 [debug] nginx started
2021/04/30 10:21:40 [info] Kong started
I0430 10:21:40.653595       1 main.go:258] subprocess exited successfully

To be honest, I have no idea why the subprocess exits after Kong starts successfully.

Any help would be appreciated!

External domain getaddrinfo ENOTFOUND

Hi,

We're encountering the following DNS error when executing a request from our Cloud Run container to api.powerbi.com:

getaddrinfo ENOTFOUND api.powerbi.com.

We tried both POWERBI_URL="https://api.powerbi.com/v1.0/myorg" and POWERBI_URL="https://api.powerbi.com./v1.0/myorg" with a dot because of #42 (comment).

DNS resolving of this domain works without runsd inside Cloud Run.

Our DB connection is, however, working with runsd, but that might be related to the fact that it is not an HTTP connection, so runsd might handle it differently.
Another difference between api.powerbi.com and the MongoDB domain could be the length, as we see a lot of logs like this:
[screenshot of runsd DNS logs]

Which brings me to a side question, again related to #42: could there be a negative impact on DB connections depending on how many times runsd resolves DNS names?
