envoyproxy / envoy-website

Repository size: 194.92 MB

Envoy Proxy website

Home Page: https://envoyproxy.io

License: Apache License 2.0

Starlark 17.15% Shell 11.62% HTML 26.97% Dockerfile 4.19% Python 13.30% SCSS 26.77%
Topics: cncf, envoy, website

envoy-website's People

Contributors

alyssawilk, benhall, caniszczyk, chalin, chaspy, cpakulski, dansipple, dependabot[bot], devadvocado, euroelessar, ilevine, jpeach, lucperkins, mattklein123, mmorel-35, moderation, ned1313, peterj, phlax, richarddli, ryantheoptimist, shikugawa, shubhambhattar, slovdahl, stefprez, stevesloka, taiki45, trjordan, update-envoy[bot], zacharysarah


envoy-website's Issues

Reduce the size of this repository

The size of this repository is prohibitively large and makes it very difficult to work with - checking out the repo can even fail outright (see #187).

The problem stems from the main Envoy repo pushing a full copy of the generated docs on every postsubmit CI run.

That pipeline also has issues and can create a race condition, where this repo changes between the start and finish of a docs build job (envoyproxy/envoy#16205).

This can also result in stable version docs being absent entirely - e.g. https://www.envoyproxy.io/docs/envoy/v1.13.8/

Fixing or updating the docs in any way is very difficult, again because of the sheer size of this repo.

I suggest that we:

  • only publish rst files for "latest" builds - it's the sheer number of changes in every commit that is causing the problem; the rst rarely changes, but the html changes extensively on every Envoy push
  • use Netlify to build "latest" from the rst
  • publish full html for release versions, so that we preserve the historical version docs and don't need to keep rebuilding them

Make doc links more accessible

https://www.envoyproxy.io/docs lists stable releases, then Latest, followed by archived releases. This is sometimes inconvenient for folks who want the latest docs, since they have to scroll down every time.

Can we organize the page so that Latest shows on top, with two columns below it - one for stable releases and one for archived releases - so the docs are easy to reach without much scrolling?

Disable github pages

After setting up event notifications for this repo, I noticed that it is building GitHub Pages on pushes to main - we probably want to disable that.

Unable to run website locally

Hi all. I was trying to run the website locally without Docker and it failed. I am using Ruby 3.0.3, and it doesn't work; the issue seems to come from Jekyll itself, based on jekyll/jekyll#8523.

To solve this, I installed webrick manually via bundle add webrick, and now the website runs locally.

Should I add this information to the README so anyone who faces this issue can solve it? Or are we planning to explicitly add webrick to the Gemfile?

Given openssl command requires non-empty pass phrase

$ openssl req -x509 -newkey rsa:4096 -keyout example-com.key -out example-com.crt -days 365
Generating a 4096 bit RSA private key
...............................................................................................................................................................................................................................................................................
.............................................++
...++
writing new private key to 'example-com.key'
Enter PEM pass phrase:
4415583852:error:28FFF065:lib(40):CRYPTO_internal:result too small:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.200.4/libressl-2.6/crypto/ui/ui_lib.c:830:You must type in 4 to 1023 characters
4415583852:error:09FFF06D:PEM routines:CRYPTO_internal:problems getting password:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.200.4/libressl-2.6/crypto/pem/pem_lib.c:115:
4415583852:error:09FFF06F:PEM routines:CRYPTO_internal:read key:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.200.4/libressl-2.6/crypto/pem/pem_pk8.c:129:
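A possible fix, sketched under the assumption that the docs intend an unencrypted key: the -nodes flag skips key encryption entirely, so no pass phrase is prompted for. The -subj value here is an assumption, added only to make the command non-interactive.

```shell
# Hedged sketch: -nodes ("no DES") writes the private key unencrypted,
# avoiding the pass-phrase prompt that fails on LibreSSL with empty input.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout example-com.key -out example-com.crt -days 365 \
  -subj "/CN=example.com"
```

Alternatively, if an encrypted key is desired, supplying -passout pass:<phrase> with a phrase of 4 to 1023 characters also avoids the interactive prompt.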

Dynamic metadata uses incorrect Lua syntax for table initialization

Should be

function envoy_on_request(request_handle)
  local headers = request_handle:headers()
  request_handle:streamInfo():dynamicMetadata():set("envoy.filters.http.lua", "request.info", {
    ["auth"] = headers:get("authorization"),
    ["token"] = headers:get("x-request-token"),
  })
end

Instead of

function envoy_on_request(request_handle)
  local headers = request_handle:headers()
  request_handle:streamInfo():dynamicMetadata():set("envoy.filters.http.lua", "request.info", {
    auth: headers:get("authorization"),
    token: headers:get("x-request-token"),
  })
end

SSL: certificate subject name does not match target host name

In ssl.md, the curl command gives the following response:

$ curl --cacert example-com.crt --connect-to localhost -H 'Host: example.com' https://localhost/service/1
curl: (51) SSL: certificate subject name 'example.com' does not match target host name 'localhost'

This appears to be due to a CN in the certificate request:

Common Name (eg, fully qualified host name) []:example.com

that differs from the hostname in the curl URL:

https://localhost/service/1
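One way to make the ssl.md example self-consistent, sketched under the assumption that OpenSSL 1.1.1+ is available (-addext is not supported by the older LibreSSL shown above): issue the certificate with a subjectAltName covering both example.com and localhost, so verification against https://localhost also succeeds.

```shell
# Hedged sketch: the SAN extension lists both names, so curl's hostname
# check passes whether the request targets example.com or localhost.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout example-com.key -out example-com.crt -days 365 \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:localhost"
```

Another option is to keep the certificate as-is and use curl's --connect-to HOST1:PORT1:HOST2:PORT2 form to request example.com while connecting to localhost.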

Maintainer access to Google analytics

Currently it seems that none of the website maintainers have access to the Google Analytics console.

It's not clear whether a new account has been set up, or whether we need to be granted access to it - @chalin, perhaps you can now give us access?

cc @mattklein123

Migrate to Google Analytics 4 (GA4)

This issue is part of a CNCF-wide effort to upgrade project websites to GA4 since Google has deprecated Universal Analytics (UA). For more details, see:

Tasks: stages 1, 2 & 3

  • Create a GA4 site tag under the CNCF projects account.
    The new GA4 stream measurement ID is: G-DXJEH1ZRXX
  • Connect to the UA site tag from the GA4 tag.
  • Use Netlify snippet injection to enable GA4 - "Google Analytics 4 - only in production", just before </head>:
    {% if CONTEXT == 'production' %}
    <script async src="https://www.googletagmanager.com/gtag/js?id=G-DXJEH1ZRXX"></script>
    <script>
      window.dataLayer = window.dataLayer || [];
      function gtag(){dataLayer.push(arguments);}
      gtag('js', new Date());
    
      gtag('config', 'G-DXJEH1ZRXX');
    </script>
    {% endif %}
  • Ensure that the GA4 site tag is receiving events.
  • Drop UA Jekyll config - #276

/cc @caniszczyk @nate-double-u


Completed tasks

Tasks: prep

I'm noticing that UA GA was added via:

and that, despite all the source files still being present, I'm not seeing any evidence of the UA-148359966-1 site tag being used on any of the ep.io pages. Might analytics have been inadvertently removed?

I've raised a separate issue for this:

cannot clone this repository

When attempting to clone this repository, I get the following error:

$ git clone https://github.com/envoyproxy/envoyproxy.github.io
Cloning into 'envoyproxy.github.io'...
remote: Enumerating objects: 9652779, done.
remote: Counting objects: 100% (212143/212143), done.
remote: Compressing objects: 100% (39902/39902), done.
Receiving objects: 100% (9652779/9652779), 2.90 GiB | 14.52 MiB/s, done.
remote: Total 9652779 (delta 110329), reused 211797 (delta 109990), pack-reused 9440636
fatal: fetch-pack: invalid index-pack output

Note that it took nearly 10 minutes to download the entire repository before it failed with this error.
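As a workaround until the repository is slimmed down, a shallow clone skips the multi-gigabyte history, which is what appears to overwhelm index-pack here. Note that history-dependent workflows won't work in the resulting checkout.

```shell
# Hedged workaround: --depth 1 fetches only the tip commit, avoiding
# the ~2.9 GiB pack download that fails above.
git clone --depth 1 https://github.com/envoyproxy/envoyproxy.github.io
```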

[Urgent] Opt-out of "Automatically set up a basic Google Analytics 4 property"

This month (March 2023), Google will start creating new GA4 site tags wherever there is no existing GA4 ID connected to an existing UA ID. Envoyproxy.io already has a GA4 site tag; it was set up via:

I don't have access to the website's UA console (for property UA-148359966-1). Could you grant me access, or have someone with editor permissions opt out of "Automatically set up a basic Google Analytics 4 property" (click the checkbox shown by the arrow below - this is a sample console image taken from another site):

Screen Shot 2023-02-23 at 16 38 43

For context regarding this problem, see:

/cc @nate-double-u

Missing explanation or incorrect logic in stale-requests.svg?

Looking at the sequence diagram https://github.com/envoyproxy/envoyproxy.github.io/blob/master/docs/envoy/latest/_images/stale-requests.svg included in the page https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol#resource-updates, I see an inconsistency.

The first request (top green arrow) is considered stale for (V=X, N=A, R={foo}), for whatever reason. However, the second request (second green arrow) is not considered stale with (V=X, N=A), just because R={foo,bar}, i.e. a resource name bar was added. Yet the third request (third green arrow) is considered stale with (V=X, N=A) even though a new resource name baz was added. Why is that? Why does Envoy need to move to (V=Y, N=B) for the request to be considered not stale, even though a new resource baz was added?

Katacoda scenario "implementing metrics tracing" not working

Tried out the scenario Implementing Metrics and Tracing Capabilities, but the proxy container does not start properly.

The given envoy.yaml configuration is:

$ cat envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
                - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]
    health_checks:
      - timeout: 1s
        interval: 10s
        interval_jitter: 1s
        unhealthy_threshold: 6
        healthy_threshold: 1
        http_health_check:
          path: "/health"
  tracing:
    http:
      name: envoy.zipkin
      config:
        collector_cluster: 172.18.0.6
        collector_endpoint: "/api/v1/spans"
        shared_span_context: false

The proxy container logs:

$ docker logs proxy
[2019-04-03 08:52:44.210][000007][info][main] [source/server/server.cc:202] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2019-04-03 08:52:44.210][000007][info][main] [source/server/server.cc:204] statically linked extensions:
[2019-04-03 08:52:44.210][000007][info][main] [source/server/server.cc:206]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-04-03 08:52:44.210][000007][info][main] [source/server/server.cc:209]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:212]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:215]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:217]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:219]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:222]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-03 08:52:44.211][000007][info][main] [source/server/server.cc:225]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-03 08:52:44.217][000007][critical][main] [source/server/server.cc:80] error initializing configuration '/etc/envoy/envoy.yaml': Unable to parse JSON as proto (INVALID_ARGUMENT:(static_resources) tracing: Cannot find field.): {"static_resources":{"tracing":{"http":{"config":{"shared_span_context":false,"collector_endpoint":"/api/v1/spans","collector_cluster":"172.18.0.6"},"name":"envoy.zipkin"}},"clusters":[{"health_checks":[{"healthy_threshold":1,"timeout":"1s","http_health_check":{"path":"/health"},"interval_jitter":"1s","interval":"10s","unhealthy_threshold":6}],"lb_policy":"ROUND_ROBIN","hosts":[{"socket_address":{"port_value":80,"address":"172.18.0.3"}},{"socket_address":{"port_value":80,"address":"172.18.0.4"}}],"name":"targetCluster","connect_timeout":"0.25s","type":"STRICT_DNS","dns_lookup_family":"V4_ONLY"}],"listeners":[{"address":{"socket_address":{"port_value":8080,"address":"0.0.0.0"}},"filter_chains":[{"filters":[{"config":{"http_filters":[{"name":"envoy.router"}],"route_config":{"virtual_hosts":[{"routes":[{"route":{"cluster":"targetCluster"},"match":{"prefix":"/"}}],"domains":["*"],"name":"backend"}],"name":"local_route"},"stat_prefix":"ingress_http","codec_type":"auto"},"name":"envoy.http_connection_manager"}]}],"name":"listener_0"}]}}
[2019-04-03 08:52:44.217][000007][info][main] [source/server/server.cc:500] exiting

The configuration mentioned here is:

{  
  "static_resources":{  
    "tracing":{  
      "http":{  
        "config":{  
          "shared_span_context":false,
          "collector_endpoint":"/api/v1/spans",
          "collector_cluster":"172.18.0.6"
        },
        "name":"envoy.zipkin"
      }
    },
    "clusters":[  
      {  
        "health_checks":[  
          {  
            "healthy_threshold":1,
            "timeout":"1s",
            "http_health_check":{  
              "path":"/health"
            },
            "interval_jitter":"1s",
            "interval":"10s",
            "unhealthy_threshold":6
          }
        ],
        "lb_policy":"ROUND_ROBIN",
        "hosts":[  
          {  
            "socket_address":{  
              "port_value":80,
              "address":"172.18.0.3"
            }
          },
          {  
            "socket_address":{  
              "port_value":80,
              "address":"172.18.0.4"
            }
          }
        ],
        "name":"targetCluster",
        "connect_timeout":"0.25s",
        "type":"STRICT_DNS",
        "dns_lookup_family":"V4_ONLY"
      }
    ],
    "listeners":[  
      {  
        "address":{  
          "socket_address":{  
            "port_value":8080,
            "address":"0.0.0.0"
          }
        },
        "filter_chains":[  
          {  
            "filters":[  
              {  
                "config":{  
                  "http_filters":[  
                    {  
                      "name":"envoy.router"
                    }
                  ],
                  "route_config":{  
                    "virtual_hosts":[  
                      {  
                        "routes":[  
                          {  
                            "route":{  
                              "cluster":"targetCluster"
                            },
                            "match":{  
                              "prefix":"/"
                            }
                          }
                        ],
                        "domains":[  
                          "*"
                        ],
                        "name":"backend"
                      }
                    ],
                    "name":"local_route"
                  },
                  "stat_prefix":"ingress_http",
                  "codec_type":"auto"
                },
                "name":"envoy.http_connection_manager"
              }
            ]
          }
        ],
        "name":"listener_0"
      }
    ]
  }
}

Also tried this configuration (tracing moved to the root of the tree):

$ cat envoy.yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
                - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]
    health_checks:
      - timeout: 1s
        interval: 10s
        interval_jitter: 1s
        unhealthy_threshold: 6
        healthy_threshold: 1
        http_health_check:
          path: "/health"
tracing:
  http:
    name: envoy.zipkin
    config:
      collector_cluster: 172.18.0.6
      collector_endpoint: "/api/v1/spans"
      shared_span_context: false

But then I get:

$ docker logs proxy
[2019-04-03 09:18:50.928][000011][info][main] [source/server/server.cc:202] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:204] statically linked extensions:
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:206]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:209]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:212]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:215]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:217]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:219]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2019-04-03 09:18:50.936][000011][info][main] [source/server/server.cc:222]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-03 09:18:50.937][000011][info][main] [source/server/server.cc:225]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-03 09:18:50.942][000011][warning][main] [source/server/server.cc:272] No admin address given, so no admin HTTP server started.
[2019-04-03 09:18:50.943][000011][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2019-04-03 09:18:50.943][000011][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2019-04-03 09:18:50.955][000011][info][config] [source/server/configuration_impl.cc:61] loading 1 listener(s)
[2019-04-03 09:18:50.959][000011][info][config] [source/server/configuration_impl.cc:95] loading tracing configuration
[2019-04-03 09:18:50.959][000011][info][config] [source/server/configuration_impl.cc:104]   loading tracing driver: envoy.zipkin
[2019-04-03 09:18:50.961][000011][critical][main] [source/server/server.cc:80] error initializing configuration '/etc/envoy/envoy.yaml': 172.18.0.6 collector cluster is not defined on cluster manager level
[2019-04-03 09:18:50.962][000011][info][main] [source/server/server.cc:500] exiting
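For reference, a plausible fix sketched under the assumption that the Zipkin collector listens at 172.18.0.6:9411 (Zipkin's default port): collector_cluster must reference the name of a cluster defined under static_resources.clusters, not an IP address, so the collector needs its own cluster entry.

```yaml
# Hedged sketch - the cluster name "zipkin" and port 9411 are assumptions.
static_resources:
  clusters:
  # ... targetCluster as above ...
  - name: zipkin
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address: { address: 172.18.0.6, port_value: 9411 }
tracing:
  http:
    name: envoy.zipkin
    config:
      collector_cluster: zipkin  # must match the cluster name above
      collector_endpoint: "/api/v1/spans"
      shared_span_context: false
```

This would address the second error ("172.18.0.6 collector cluster is not defined on cluster manager level"); the first error was correct to reject tracing nested under static_resources.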

Improving accessibility of the website

What is the problem you're trying to solve?
Improving accessibility of the website

Describe the solution you'd like
Adding alt tags for images

Additional context

For many people, technology makes things easier. For people with disabilities, technology makes things possible. Accessibility means developing content to be as accessible as possible no matter an individual's physical and cognitive abilities and no matter how they access the web.
"The Web is fundamentally designed to work for all people, whatever their hardware, software, language, culture, location, or physical or mental ability. When the Web meets this goal, it is accessible to people with a diverse range of hearing, movement, sight, and cognitive ability."

- From MDN
