googlecloudplatform / jetty-runtime

Google Cloud Platform Jetty Docker image

License: Apache License 2.0


jetty-runtime's Introduction


Google Cloud Platform Jetty Docker Image

This repository contains the source for the Google-maintained Jetty docker image. This image can be used as the base image for running Java web applications on Google App Engine Flexible Environment and Google Container Engine. It provides the Jetty Servlet container on top of the OpenJDK image.

This image is mirrored at both launcher.gcr.io/google/jetty and gcr.io/google-appengine/jetty.

The layout of this image is intended to mostly mimic the official docker-jetty image and unless otherwise noted, the official docker-jetty documentation should apply.

Configuring the Jetty image

Arguments passed to the docker run command are passed to Jetty, so the configuration of the jetty server can be seen with a command like:

docker run gcr.io/google-appengine/jetty --list-config

Alternate commands can also be passed to the docker run command, so the image can be explored with

docker run -it --rm launcher.gcr.io/google/jetty bash

Various environment variables (see below) can also be used to set jetty properties and to enable or disable modules. These variables may be set either in an app.yaml or passed to a docker run command, e.g.:

docker run -it --rm -e JETTY_PROPERTIES=jetty.http.idleTimeout=10000 launcher.gcr.io/google/jetty 

To update the server configuration in a derived Docker image, the Dockerfile may enable additional modules with RUN commands like:

WORKDIR $JETTY_BASE
RUN java -jar "$JETTY_HOME/start.jar" --add-to-startd=jmx,stats

Modules may be configured in a Dockerfile by editing the properties in the corresponding mod files in /var/lib/jetty/start.d/ or the module can be deactivated by removing that file.
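For example, a derived Dockerfile could use a RUN step to edit one of these mod files. The sketch below is illustrative only: the file name (http.ini) and property are examples, so inspect the actual files under /var/lib/jetty/start.d/ in your image.

```shell
# Illustrative sketch: set a commented-out property in a start.d ini file.
# The file name (http.ini) and property are examples.
JETTY_BASE=${JETTY_BASE:-/var/lib/jetty}
ini="$JETTY_BASE/start.d/http.ini"
if [ -f "$ini" ]; then
  # Turn "#jetty.http.idleTimeout=30000" into "jetty.http.idleTimeout=10000".
  sed -i 's|^#\{0,1\}jetty.http.idleTimeout=.*|jetty.http.idleTimeout=10000|' "$ini"
fi
```

In a Dockerfile this would be a RUN step executed after the corresponding module has been enabled.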

Enabling gzip compression

The gzip handler is bundled with Jetty but not activated by default. To activate this module, set the environment variable JETTY_MODULES_ENABLE=gzip.

For example with docker:

docker run -p 8080 -e JETTY_MODULES_ENABLE=gzip gcr.io/yourproject/yourimage

Or with GAE (app.yaml):

env_variables:
  JETTY_MODULES_ENABLE: gzip

Using Quickstart

Jetty provides mechanisms to speed up the start time of your application by pre-scanning its content and generating configuration files. If you are using an extended image, you can activate quickstart by executing /scripts/jetty/quickstart.sh in your Dockerfile, after the application WAR is added.

FROM launcher.gcr.io/google/jetty
ADD your-application.war $JETTY_BASE/webapps/root.war

# generate quickstart-web.xml
RUN /scripts/jetty/quickstart.sh

App Engine Flexible Environment

When using App Engine Flexible, you can use the runtime without worrying about Docker by specifying runtime: java in your app.yaml:

runtime: java
env: flex

The runtime image launcher.gcr.io/google/jetty will be automatically selected if you are attempting to deploy a WAR (*.war file).

If you want to use the image as a base for a custom runtime, you can specify runtime: custom in your app.yaml and then write the Dockerfile like this:

FROM launcher.gcr.io/google/jetty
ADD your-application.war $APP_DESTINATION_WAR

That will add the WAR in the correct location for the Docker container.

You can also use exploded-war artifacts:

ADD your-application $APP_DESTINATION_EXPLODED_WAR

Once you have this configuration, you can use the Google Cloud SDK to deploy the directory containing the two configuration files (app.yaml and the Dockerfile) and the WAR using:

gcloud app deploy app.yaml

Container Engine & other Docker hosts

For other Docker hosts, you'll need to create a Dockerfile based on this image that copies your application code and installs dependencies. For example:

FROM launcher.gcr.io/google/jetty
COPY your-application.war $APP_DESTINATION_WAR

If your artifact is an exploded-war, then use the APP_DESTINATION_EXPLODED_WAR environment variable instead. You can then build the docker container using docker build or Google Cloud Container Builder. By default, the CMD is set to start the Jetty server. You can change this by specifying your own CMD or ENTRYPOINT.

Entry Point Features

The /docker-entrypoint.bash for the image is inherited from the openjdk-runtime and its capabilities are described in the associated README.

This image updates the docker CMD and adds the /setup-env.d/50-jetty.bash script to include options and arguments to run the Jetty container, unless an executable argument is passed to the docker image. Additional environment variables are used/set including:

| Env Var | Maven Prop | Value/Comment |
| --- | --- | --- |
| JETTY_VERSION | jetty9.version | |
| GAE_IMAGE_NAME | | jetty |
| GAE_IMAGE_LABEL | docker.tag.long | |
| JETTY_HOME | jetty.home | /opt/jetty-home |
| JETTY_BASE | jetty.base | /var/lib/jetty |
| TMPDIR | | /tmp/jetty |
| JETTY_PROPERTIES | | Comma-separated list of name=value pairs appended to $JETTY_ARGS |
| JETTY_MODULES_ENABLE | | Comma-separated list of modules to enable by appending to $JETTY_ARGS |
| JETTY_MODULES_DISABLE | | Comma-separated list of modules to disable by removing from $JETTY_BASE/start.d |
| JETTY_ARGS | | Arguments passed to jetty's start.jar. Any arguments used for custom jetty configuration should be passed here. |
| ROOT_WAR | | $JETTY_BASE/webapps/root.war |
| ROOT_DIR | | $JETTY_BASE/webapps/root |
| JAVA_OPTS | | JVM runtime arguments |
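The property and module variables compose into start.jar arguments. A minimal sketch of that expansion (illustrative only; the actual logic lives in /setup-env.d/50-jetty.bash, and the property values shown are examples):

```shell
# Illustrative sketch of how JETTY_PROPERTIES and JETTY_MODULES_ENABLE
# could expand into start.jar arguments (not the actual entrypoint code).
JETTY_PROPERTIES="jetty.http.idleTimeout=10000,jetty.httpConfig.sendServerVersion=false"
JETTY_MODULES_ENABLE="jmx,stats"
args=""
IFS=','
# Each name=value pair becomes its own argument.
for prop in $JETTY_PROPERTIES; do args="$args $prop"; done
# Each module becomes a --module= argument.
for mod in $JETTY_MODULES_ENABLE; do args="$args --module=$mod"; done
unset IFS
echo "java -jar \$JETTY_HOME/start.jar$args"
```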

If a WAR file is found at $ROOT_WAR, it is unpacked to $ROOT_DIR if it is newer than the directory or the directory does not exist. If there is no $ROOT_WAR and no $ROOT_DIR, then /app is symbolically linked to $ROOT_DIR. If a $ROOT_DIR is discovered or made by this script, then it is set as the working directory. See Extending the image below for examples of adding an application as a WAR file or directory.
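The decision logic described above can be sketched roughly as follows (a simplified illustration, not the actual docker-entrypoint.bash; the function name is hypothetical):

```shell
# Simplified sketch of the documented root application setup.
setup_root_app() {
  local base="$1"
  local war="$base/webapps/root.war"
  local dir="$base/webapps/root"
  if [ -f "$war" ]; then
    # Unpack only when the WAR is newer than the directory, or no directory exists.
    if [ ! -d "$dir" ] || [ "$war" -nt "$dir" ]; then
      mkdir -p "$dir"
      (cd "$dir" && unzip -qo "$war")
    fi
  elif [ ! -d "$dir" ]; then
    # No WAR and no directory: link /app as the root application.
    ln -s /app "$dir"
  fi
}
```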

The command line executed is effectively (where $@ are the args passed into the docker entry point):

java $JAVA_OPTS \
     -Djetty.base=$JETTY_BASE \
     -jar $JETTY_HOME/start.jar \
     "$@"

Logging

This image is configured to use Java Util Logging (JUL) to capture all logging from the container and its dependencies. Applications that also use the JUL API will inherit the same logging configuration.

By default JUL is configured with a ConsoleHandler that sends logs to the stderr of the container process. When run as a GCP deployment, all output to stderr is captured and made available in the Stackdriver logging console; however, more detailed and integrated logs are available if the Stackdriver logging mechanism is used directly (see below).

To alter the logging configuration, a new logging.properties file must be provided to the image. Among other things, it can: alter the log levels generated by loggers; alter the log levels accepted by handlers; and add, remove, or configure log handlers.

Providing logging.properties via the web application

A new logging configuration file can be provided as part of the application (typically at WEB-INF/logging.properties) and the Java System Property java.util.logging.config.file updated to reference it.

When running in a GCP environment, the system property can be set in app.yaml:

env_variables:
  JETTY_ARGS: -Djava.util.logging.config.file=WEB-INF/logging.properties

If the image is run directly, then a -e argument to the docker run command can be used to set the system property:

docker run \
  -e JETTY_ARGS=-Djava.util.logging.config.file=WEB-INF/logging.properties \
  ...

Providing logging.properties via a custom image

If this image is being used as the base of a custom image, then the following Dockerfile commands can be used either to replace the existing logging configuration file or to add a new logging.properties file.

The default logging configuration file is located at /var/lib/jetty/etc/java-util-logging.properties, which can be replaced when a custom image is built. The default configuration can be replaced with a Dockerfile like:

FROM gcr.io/google-appengine/jetty
ADD logging.properties /var/lib/jetty/etc/java-util-logging.properties
...

Alternately an entirely new location for the file can be provided and the environment amended in a Dockerfile like:

FROM gcr.io/google-appengine/jetty
ADD logging.properties /etc/logging.properties
ENV JETTY_ARGS -Djava.util.logging.config.file=/etc/logging.properties
...

Providing logging.properties via docker run

A logging.properties file may be added to an existing image using the docker run command if the deployment environment allows the run arguments to be modified. The -v option can be used to bind a new logging.properties file to the running instance, and the -e option can be used to set the system property to point to it:

docker run -it --rm \
-v /mylocaldir/logging.properties:/etc/logging.properties \
-e JETTY_ARGS="-Djava.util.logging.config.file=/etc/logging.properties" \
...

Enhanced Stackdriver Logging (BETA)

When running on the Google Cloud Platform Flex environment, Java Util Logging can be configured to send logs to Google Stackdriver Logging by providing a logging.properties file that configures a LoggingHandler as follows:

handlers=com.google.cloud.logging.LoggingHandler

# Optional configuration
.level=INFO
com.google.cloud.logging.LoggingHandler.level=FINE
com.google.cloud.logging.LoggingHandler.log=gae_app.log
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%3$s: %5$s%6$s

When deployed on the GCP Flex environment, such an image will automatically be configured with:

  • a LabelLoggingEnhancer instance, which adds labels from the monitored resource to each log entry.
  • a TraceLoggingEnhancer instance, which adds any trace-id set to each log entry.
  • the gcp module, enabled so that jetty calls the setCurrentTraceId method for any thread handling a request.

When deployed in other environments, logging enhancers can be manually configured by setting a comma separated list of class names as the com.google.cloud.logging.LoggingHandler.enhancers property.
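For example, a logging.properties entry like the following could register a custom enhancer (com.example.MyEnhancer is a hypothetical class name; substitute your own LoggingEnhancer implementation):

```properties
# com.example.MyEnhancer is a placeholder for your own enhancer class.
com.google.cloud.logging.LoggingHandler.enhancers=com.example.MyEnhancer
```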

When using Stackdriver logging, it is recommended that the io.grpc and sun.net logging levels are kept at INFO, as both packages are used by Stackdriver internals; lower levels can result in verbose output and/or initialization problems.

Distributed Session Storage

The Jetty session mechanism is highly customizable and the options presented below are only a subset of meaningful configurations. Consult the Jetty Sessions documentation for more details.

Google Cloud Session Store

This image can be configured to use Google Cloud Datastore for clustered session storage by enabling the gcp-datastore-sessions jetty module. You can do this in your app.yaml:

env_variables:
  JETTY_MODULES_ENABLE: gcp-datastore-sessions

Jetty will use the default namespace in Datastore as the store for all session data, or the jetty.session.gcloud.namespace property can be used to set an alternative namespace. By default gcloud has no request affinity, so all session data will be stored to and retrieved from the datastore on every request, and no session data will be shared in memory.
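For example, to select a dedicated namespace in app.yaml (the namespace value my-app-sessions is only an example):

```yaml
env_variables:
  JETTY_MODULES_ENABLE: gcp-datastore-sessions
  JETTY_PROPERTIES: jetty.session.gcloud.namespace=my-app-sessions
```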

Note that the gcp-datastore-sessions module is an aggregate module, and the same configuration can be achieved by activating its dependent modules individually:

env_variables:
  JETTY_MODULES_ENABLE: session-cache-null,gcp-datastore,session-store-gcloud

Cached Google Cloud Session Store

The Google Load Balancer can support instance affinity for more efficient session usage. This can be configured in app.yaml with:

network:
  session_affinity: true

env_variables:
  JETTY_MODULES_ENABLE: session-cache-hash,gcp-datastore,session-store-gcloud

Sessions will be retrieved from the in-memory session cache and multiple requests can share a session instance. Google Cloud Datastore is only accessed for unknown sessions (if affinity changes) or if a session is modified. Session cache behaviour can be further configured by following the Jetty Session Cache documentation. Note that affinity is achieved by the Google Load Balancer setting a GCLB cookie rather than tracking the JSESSIONID cookie.

Memcached Google Cloud Session Store (Alpha)

Sessions can be cached in memcache (without need for affinity) and backed by Google Cloud Datastore. This can be configured in app.yaml with:

env_variables:
  JETTY_MODULES_ENABLE: gcp-memcache-datastore-sessions

Note that the gcp-memcache-datastore-sessions module is an aggregate module, and the same configuration can be achieved by activating its dependent modules individually:

env_variables:
  JETTY_MODULES_ENABLE: session-cache-null,gcp-datastore,session-store-gcloud,gcp-xmemcached,session-store-cache

The session-cache-null module may be replaced with the session-cache-hash module to achieve 2 levels of caching (in memory and memcache) prior to accessing the Google Cloud Datastore, and network affinity may also be activated as above.

Extending the image

The image produced by this project may be automatically used/extended by the Cloud SDK and/or App Engine maven plugin. Alternately it may be explicitly extended with a custom Dockerfile.

The latest released version of this image is available at launcher.gcr.io/google/jetty; alternatively, you may build and push your own version with the shell commands:

mvn clean install
docker tag jetty:latest gcr.io/your-project-name/jetty:your-label
gcloud docker -- push gcr.io/your-project-name/jetty:your-label

Adding the root WAR application to an image

A standard war file may be deployed as the root context in an extended image by placing the war file in the docker build directory and using a Dockerfile like:

FROM launcher.gcr.io/google/jetty
COPY your-application.war $APP_DESTINATION_WAR

An exploded-war can also be used:

COPY your-application $APP_DESTINATION_EXPLODED_WAR

Adding the root application to an image

If the application exists as a directory (i.e. an exploded war file), then the directory must be placed in the docker build directory and added with a Dockerfile like:

FROM launcher.gcr.io/google/jetty
COPY your-application-dir $JETTY_BASE/webapps/root

Mounting the root application at local runtime

If no root WAR or root directory is found, the docker-entrypoint.bash script will link the /app directory as the root application. Thus the root application can be added to the image via a runtime mount:

docker run -v /some-path/your-application:/app launcher.gcr.io/google/jetty  

Enabling dry-run

The image's default start command will first run the jetty start.jar as a --dry-run to generate the JVM start command before starting the jetty web server. If you wish to generate the start command in your Dockerfile rather than at container start-time, you can run the /scripts/jetty/generate-jetty-start.sh script to generate it for you, i.e.

RUN /scripts/jetty/generate-jetty-start.sh

NOTE: Make sure that the web application and any additional custom jetty modules have been added to the container BEFORE running this script.

Google Authentication using Jetty OpenID Module

The Jetty openid module adds support for the OpenID Connect authentication protocol over OAuth 2.0. It can be set up so that Jetty authenticates users with Google's Identity Platform, allowing users to sign in with their Google accounts.

Set Up Application on Google API Console

Before your application can use Google's OAuth 2.0 authentication system for user login, you must set up a project in the Google API Console to obtain OAuth 2.0 credentials (clientID and clientSecret), set a redirect URI, and (optionally) customize the branding information that your users see on the user-consent screen.

Guide to setting up Application in Google API Console: https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup

Jetty Configuration

The Jetty OpenID configuration is usually set in the openid.ini file. In this file we must set the values for the OpenID provider, the clientID and clientSecret. The OpenID provider should be set to https://accounts.google.com. The clientID and clientSecret should be obtained from the project which was set up in the Google API Console.
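A minimal openid.ini might therefore look like the following sketch (the property names follow the Jetty openid module and may vary between Jetty versions; the client ID and secret are placeholders for the credentials obtained from the Google API Console):

```ini
# openid.ini -- placeholder credentials; obtain real values from the
# Google API Console project created above.
--module=openid
jetty.openid.provider=https://accounts.google.com
jetty.openid.clientId=YOUR_CLIENT_ID.apps.googleusercontent.com
jetty.openid.clientSecret=YOUR_CLIENT_SECRET
```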

See the Jetty documentation for OpenID Support to get a general overview of how to enable OpenID authentication in your webapp and how to access the authenticated user's information.

Docker Configuration

Special configuration should be added to the Dockerfile to enable the openid module and then to copy the openid.ini file to $JETTY_BASE/start.d/.

Example Dockerfile:

FROM gcr.io/google-appengine/jetty
RUN java -jar "$JETTY_HOME/start.jar" --create-startd
RUN java -jar "$JETTY_HOME/start.jar" --add-to-start=webapp,deploy,http,openid
COPY openid.ini $JETTY_BASE/start.d/
COPY openid-webapp.war $JETTY_BASE/webapps/

Development Guide

Contributing changes

Licensing

jetty-runtime's People

Contributors

ahmetb, ajkannan, aozarov, balopat, bendory, cassand, chanseokoh, chengyuanzhao, dlorenc, donmccasland, gregw, hegemonic, imjasonh, janbartel, jboynes, jmcc0nn3ll, joakime, joaoandremartins, lachlan-roberts, ludoch, meltsufin, nkubala, rauls5382, sharifelgamal, shreejad, tstromberg, walkerwatch, yubin154


jetty-runtime's Issues

Session Management backed by Datastore

Session clustering backed by Datastore should be optionally enabled using an environment variable. If not enabled, local in-memory session management would be used.

See #5 for previous discussion on this topic.

Create test webapp with included SCM contents

There are situations where the webapp is accidentally built with SCM contents included.

We can build a test webapp to ensure that those SCM contents are not being accessed.

Not sure how important this style of validation is for gcloud though.

Add test docker images for server logging scenarios

The following scenarios are identified:

  • log4j on server (application logging separate)
  • logback on server (application logging separate)
  • log4j on server (application forced to use server log4j)
  • consolidated server logging (application forced to use server logging implementations and configurations for: commons-logging, java.util.logging, slf4j, log4j, logkit, and juli)

remove the docker label prefix

The docker label prefix mechanism is no longer needed. This was used to add a branch prefix to docker labels, but there is no longer an async branch.

quickstart support?

Should quickstart be supported by default, or documented as a desirable feature?
Testing done in #8 using quickstart and/or annotations

Create a test webapp for Session configurations

Identified sample test webapps:

  • Sessions using HashSessionManager
  • Sessions using jetty-gcloud-session-manager
    • include positive results with a sufficient cluster
    • include negative results when using insufficient cluster size

Create test webapps for bytecode scanning concerns

Identified behaviors to look for

  • Add individual WEB-INF/lib/*.jar for each bytecode support level 1.1 to 1.8
  • Add known common bad bytecode - (eg: icu4j-2.6.1.jar and the com/ibm/icu/impl/data/LocaleElements_zh__PINYIN.class entry)
  • Add known slow to scan jars - (eg: org.webjars:extjs:5.1.0.jar)
  • Include java 1.8 new bytecode (lambdas, etc)
  • Add webapp usage of an old asm.jar (to ensure that the server side will not fail with this setup)

CI performance testing

Regular load testing should be conducted against the deployed web application. Several fixed levels of load should be offered and the latency/QoS measured and logged.

This will allow tracking of performance changes over releases.

Create test webapp for native / JNI executions

If gcloud deployments allow for native / JNI executions, then we should create a test webapp to ensure that they work.

This could include a combination of using pre-existing binaries on the docker images, and also using libs included in the webapp.

Add test webapps for application specific logging scenarios

The following combinations are being tested:

  • commons_logging_1.0.3 (using Log4JLogger)
  • commons_logging_1.1 (using Jdk14Logger)
  • java_util_logging (direct usage, reset of root, configured output to console)
  • log4j_1.1.3 (direct usage and configured output)
  • log4j_1.2.15 (direct usage and configured output)
  • log4j2 (direct usage and configured output. more in future)
  • slf4j_1.2 (direct usage, log4j output)
  • slf4j_1.5.6 (configured to capture log4j + commons-logging, output to java.util.logging)
  • slf4j_1.6.1 (configured to capture java.util.logging, output to log4j)
  • slf4j_1.6.6 (w/ org.slf4j.spi.LocationAwareLogger usage to log4j)
  • slf4j_1.7.2 (direct usage, log4j output)
  • apache juli (direct usage and from apache jsp serviceloader)
  • logback (direct usage, slf4j usage, capture java.util.logging, capture log4j, capture commons-logging, w/logback access, output by logback)
  • multiple logging libs (log4j direct + configured output, java.util.logging direct + configured output, and commons-logging discovered, slf4j-api -> logback + configured output)

Test the Integration with Cloud Debugger

The test could be something simple like

  • deploy a sample app
  • set a snapshot (aka a breakpoint) via the API or gcloud
  • exercise the breakpoint
  • verify that the breakpoint has been hit

JMX Support

Jetty is fully instrumented with JMX mbeans that allow monitoring, control and debugging of the server. The JVM also provides useful information via JMX.

Should we provide modules and/or documentation to demonstrate how to enable JMX in the jetty-runtime and how to access it remotely?

Jetty module activation

The various features of jetty are activated and configured by our module mechanism, which has so far been driven from the Dockerfile. There is a discussion about using environment variables to control some of these features, so this issue is to discuss exactly how that can be done.

First, some background on the jetty module mechanism. For example, to activate gcloud sessions we need to run a command (currently from a Dockerfile RUN command) like:

java -jar $JETTY_HOME/start.jar --add-to-start=session-store-gcloud,jcl

The --add-to-start command enables both the session-store-gcloud module and its dependencies, primarily the sessions module. Note that we also add the jcl module to pick one of our available implementations of java commons logging, which is a dependency of the gcloud library used (if there were only one, we'd pick it automatically).

To add memcache, you would need to run the command (or add it to the previous command):

java -jar $JETTY_HOME/start.jar --add-to-start=session-store-cache

This is easy enough to do in custom Dockerfiles, but the suggestion is that such features need to be selectable at run time via env variables. So we need to consider how best to do that; let's see what our --add-to-start command does.

When adding a third-party module like gcloud sessions, the --add-to-start command downloads any dependencies that are not part of the normal jetty distribution; for gcloud this is quite a few jars, so I don't think we'd want to be doing this for every instance we start up, as it increases the chance of a failure. The solution is that in our image Dockerfile for jetty-runtime we can enable all the modules that might be enabled by env variables, so the library downloads are done, and then disable them by removing the related start.d/*.ini files.

Next the default command line is modified to include activation of the module, which is simply adding something like: --module=session-store-gcloud to the command line. This is what could be done by a start script that checks the env variables.

Finally, the --add-to-start command creates a start.d/session-store-gcloud.ini file with parameters that can be configured for the module. e.g.:

[] cat start.d/session-store-gcloud.ini 
# --------------------------------------- 
# Module: session-store-gcloud
# Enables GCloudDatastore session management.
# --------------------------------------- 
--module=session-store-gcloud


## GCloudDatastore Session config
#jetty.session.gcloud.maxRetries=5
#jetty.session.gcloud.backoffMs=1000



[] cat start.d/sessions.ini 
# --------------------------------------- 
# Module: sessions
# The session management. By enabling this module, it allows
# session management to be configured via the ini templates
# created or by enabling other session-cache or session-store
# modules.  Without this module enabled, the server may still
# use sessions, but their management cannot be configured.
# --------------------------------------- 
--module=sessions

## The name to uniquely identify this server instance
#jetty.sessionIdManager.workerName=node1

## Period between runs of the session scavenger (in seconds)
#jetty.sessionScavengeInterval.seconds=60

[] cat start.d/session-store-cache.ini 
# --------------------------------------- 
# Module: session-store-cache
# Enables caching of SessionData in front of a SessionDataStore.
# --------------------------------------- 
--module=session-store-cache


## Session Data Cache type: xmemcached
session-data-cache=xmemcached
#jetty.session.memcached.host=localhost
#jetty.session.memcached.port=11211
#jetty.session.memcached.expirySec=

As you can see, almost all the parameters are commented out, so the default settings are normally good. This means that we can access the default module configuration simply by a start script that adds the appropriate --module= args to the command line based on the environment variables.

However, as soon as a user wants to configure any of these features, they are going to have to re-create the ini files and edit the parameters in them in their own custom Dockerfile. This then creates a duality where a module might be enabled in either a start.d/*.ini file and/or enabled by the start script adding a command line argument based on an environment variable.

I don't think this duality is good and it will confuse many that the mechanism changes the moment they go from a default image to a custom image.

So another idea is that our Dockerfile can enable all the possible modules and then move their start.d/*.ini files to a gae.d/ directory. The start script would then check the env variables and if a feature is turned on, it would move the ini file(s) from gae.d/ back to start.d/. To configure these features they could then use a custom Dockerfile simply to replace/edit the gae.d/*.ini files that the start script would use. Still not perfect but better.
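A sketch of that start-script idea (illustrative only; the directory name gae.d/ comes from the proposal above, and the function name is hypothetical):

```shell
# Hypothetical start-script fragment: move a pre-downloaded module's ini
# file from gae.d/ back into start.d/ when its feature is enabled.
enable_module() {
  local base="$1" mod="$2"
  if [ -f "$base/gae.d/$mod.ini" ] && [ ! -f "$base/start.d/$mod.ini" ]; then
    mv "$base/gae.d/$mod.ini" "$base/start.d/$mod.ini"
  fi
}
# e.g. driven by an env variable (name is an assumption):
# [ "$ENABLE_GCLOUD_SESSIONS" = "true" ] && enable_module "$JETTY_BASE" session-store-gcloud
```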

Perhaps a better way would be for the start script to first check whether a feature is already enabled, and if so ignore the env variables. Note that jetty will throw an exception if you try to configure two contradictory modules (e.g. two incompatible session data stores), so if a user enables one session store and the env variable indicates another, we need to either error or ignore the env variable.

To make more progress on deciding how to do this, we need to know more about what the environment variables will look like. In #41 @janbartel will soon enumerate the various session configurations that we could deploy. @meltsufin, can you look at that issue and, when you indicate which of those you want available on jetty-runtime, also indicate what you imagine the environment variables will look like? Will there be several booleans? An enumeration? Will there be configuration env variables such as ports, etc.?

Add /third_party/ for patched Spring Petclinic and Dandelion

Proposal to add the following to the repository:

These 2 projects will be wired up into the maven build.

What should the groupId and artifactId be for those two new maven modules?

Add test webapp module for quickstart

Create a /tests/test-war-quickstart that tests the quickstart facility of jetty.

The sample project should be sufficiently complex. Perhaps either the spring petclinic, or the cdi/weld petclinic.

Create test webapps for JSP support levels

Identified webapps

  • JSP 2.0 very old usage
  • JSP 2.1 older usages
  • JSP 2.2 (seen in Jetty 8)
  • JSP 2.3 (seen in Jetty 9)
  • Custom Taglibs
  • JSTL / EL usages
  • WEB-INF/lib/*.jar inclusions that can cause problems
    • extra javax.el (duplicates or more than 1 versions) api libs
    • extra javax.el (duplicates or more than 1 versions) impl libs
    • extra jstl (duplicates or more than 1 versions) api libs
    • extra jstl (duplicates or more than 1 versions) impl libs

Set up CI

Continuous integration testing needs to be set up with:

  • a snapshot build for every commit to this repository against a fixed version of openjdk-runtime
  • a cascade build for every commit to openjdk-runtime against the new image

Avoid `chown -R` in Dockerfile

chown -R is currently used in Dockerfiles to ensure that files are correctly owned. However, as pointed out in moby/moby#6119, this can result in significant bloat in the size of the resulting docker images.

We should consider new docker features to avoid this issue (moby/moby#10775), or at least do a manual unpack of the tgz files so that user and group can be set correctly initially.

Create test webapp for WebSocket support

Identified behaviors

  • Native Jetty WebSocket
    • Server
    • Client
    • Servlet Based
    • Filter Based
    • Annotation Based
    • Listener Based
      • WebSocketConnectionListener
      • WebSocketFrameListener
      • WebSocketListener
      • WebSocketPartialListener
      • WebSocketPingPongListener
  • JSR356 (javax.websocket)
    • Server
      • @PathParam's
      • ServerApplicationConfig
      • ServerEndpointConfig (w/ HttpSession capture)
    • Client
      • ClientEndpointConfig
    • Custom Encoders / Decoders
    • Annotation Based
    • Endpoint Based

Test appengine HTTP headers

Requests to the runtime will have the following headers:

  • x-forwarded-for is a list of upstream proxy IP addresses
  • x-cloud-trace-context is a unique identifier for the request used for traces and logging
  • x-forwarded-proto shows HTTP or HTTPS based on the origin request protocol

Test that user servlets see the correct headers to comply with the servlet spec.

Update to jetty-9.4.x

This will provide access to:

  • greatly improved session manager and gcloud/memcache integration
  • more flexible request customizer
  • improved performance
