- Running the Webserver
- Testing the Project
- Benchmarking the Project
- Running ul-api on Docker
- Contributing
- Getting Help
- External Resources
- License
## Running the Webserver

To start up the axum webserver, just run:

```sh
cargo run
```

This will start up the service, running on 2 ports:

- `3000`: main `ul-api` application, including `/healthcheck`, etc.
- `4000`: `/metrics`
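As a rough sketch of what this dual-port setup can look like with an axum 0.6-style `axum::Server` (the handlers and wiring here are illustrative, not the project's actual code):

```rust
use axum::{routing::get, Router};
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Main application router, served on port 3000.
    let app = Router::new().route("/healthcheck", get(|| async { "ok" }));
    // Metrics router, served separately on port 4000.
    let metrics = Router::new().route("/metrics", get(|| async { "" }));

    let (a, b) = tokio::join!(
        axum::Server::bind(&SocketAddr::from(([0, 0, 0, 0], 3000)))
            .serve(app.into_make_service()),
        axum::Server::bind(&SocketAddr::from(([0, 0, 0, 0], 4000)))
            .serve(metrics.into_make_service()),
    );
    a.unwrap();
    b.unwrap();
}
```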
Upon running the application locally, OpenAPI documentation is available as a `swagger-ui` at http://localhost:3000/swagger-ui/. Read more in Docs and OpenAPI below.
For local development with logs displayed using ANSI terminal colors, we recommend running:

```sh
cargo run --features ansi-logs
```
To better help diagnose and debug your server application, you can run:

```sh
RUSTFLAGS="--cfg tokio_unstable" cargo run --features console,ansi-logs
```

This command uses a compile-time feature flag, `console`, to give us local access to tokio-console, a diagnostics and debugging tool for asynchronous Rust programs, akin to `pprof`, `htop`/`top`, etc. You can install tokio-console using cargo:

```sh
cargo install --locked tokio-console
```

Once executed, just run `tokio-console --retain-for <N>min` to use it and explore.
`ul-api` contains a file of configuration settings, loaded by the application when it starts. Configuration can be overridden using environment variables that begin with an `APP` prefix. To allow for underscores in variable names, use two underscores as the separator between `APP` and the name of the setting, for example:

```sh
export APP__SERVER__ENVIRONMENT="dev"
```

This export would override this setting in the default config:

```toml
[server]
environment = "local"
```
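As a sketch of how this kind of layered loading typically works (assuming the `config` crate; the file name and settings structs here are illustrative):

```rust
use config::{Config, Environment, File};
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Server {
    environment: String,
}

#[derive(Debug, Deserialize)]
struct Settings {
    server: Server,
}

fn load_settings() -> Result<Settings, config::ConfigError> {
    Config::builder()
        // Base values come from the default config file.
        .add_source(File::with_name("config/default"))
        // Env vars like APP__SERVER__ENVIRONMENT override `[server] environment`.
        .add_source(Environment::with_prefix("APP").separator("__"))
        .build()?
        .try_deserialize()
}
```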
### Making HTTP Client Requests with Reqwest
This web framework includes the `reqwest` HTTP client library for making requests to external APIs and services, separate from the `axum` webserver itself. We use the `reqwest-middleware` crate to wrap `reqwest` requests in a middleware chain, giving us metrics, retries, and tracing out of the box. We have an integration test which demonstrates how to build a client with middleware and configuration:
```rust
// reqwest::Client by default has a timeout of 30s
let reqwest_client = Client::builder()
    .pool_idle_timeout(settings.http_client.pool_idle_timeout())
    .timeout(Duration::from_millis(settings.http_client.timeout_ms))
    .build();

Ok(Self {
    client: ClientBuilder::new(reqwest_client?)
        .with(TracingMiddleware::<ExtendedTrace>::new())
        .with(Logger)
        .with(RetryTransientMiddleware::new_with_policy(
            retry_policy,
            "AClient".to_string(),
        ))
        .with(Metrics {
            name: "AClient".to_string(),
        })
        .build(),
    url: settings.url.to_string(),
})
```
Note: our logging middleware implements traits for both `axum` and `reqwest` `Request` types. Additionally, we implement an HTTP client-specific middleware that derives metrics for each external `reqwest` request in `middleware/client.metrics.rs`. For the `axum` webserver itself, metrics are derived via `middleware/metrics.rs`.
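For a sense of what such a client middleware looks like, here is a rough sketch in the spirit of the `Metrics` middleware above, assuming `reqwest-middleware` 0.2-style traits (the struct, field, and metric handling are illustrative):

```rust
use reqwest::{Request, Response};
use reqwest_middleware::{Middleware, Next, Result};
use task_local_extensions::Extensions;

struct Metrics {
    name: String,
}

#[async_trait::async_trait]
impl Middleware for Metrics {
    async fn handle(
        &self,
        req: Request,
        extensions: &mut Extensions,
        next: Next<'_>,
    ) -> Result<Response> {
        let start = std::time::Instant::now();
        // Run the rest of the middleware chain, then the request itself.
        let result = next.run(req, extensions).await;
        // Here the real middleware would record a counter and a duration
        // histogram, labeled with `self.name` and the response outcome.
        let _elapsed = start.elapsed().as_secs_f64();
        let _name = &self.name;
        result
    }
}
```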
## Testing the Project

- Run tests:

  ```sh
  cargo test
  ```
## Benchmarking the Project

For benchmarking and measuring performance, this project leverages criterion and a `test_utils` feature flag for integrating proptest within the suite, for working with strategies and sampling from randomly generated values.

- Run benchmarks:

  ```sh
  cargo bench --features test_utils
  ```
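As an illustrative sketch of what a criterion benchmark looks like (the function under test here is hypothetical, not from this project):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical function to benchmark.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    // `black_box` prevents the compiler from optimizing the input away.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```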
## Running ul-api on Docker

We recommend setting your Docker Engine configuration with `experimental` and `buildkit` set to `true`, for example:

```json
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": true,
  "features": {
    "buildkit": true
  }
}
```
- Build a multi-platform Docker image via buildx:

  ```sh
  docker buildx build --platform=linux/amd64,linux/arm64 -t ul-api --progress=plain .
  ```

- Run a Docker image (depending on your platform):

  ```sh
  docker run --platform=linux/amd64 -t ul-api
  ```
## Contributing

🎈 We're thankful for any feedback and help in improving our project! We have a contributing guide to help you get involved. We also adhere to our Code of Conduct.

This repository contains a Nix flake that initiates both the Rust toolchain set in `rust-toolchain.toml` and a pre-commit hook. It also installs helpful cargo binaries for development. Please install nix and direnv to get started.

Run `nix develop` or `direnv allow` to load the `devShell` flake output, according to your preference.
For formatting Rust in particular, we automatically format on `nightly`, as it uses specific nightly features we recommend by default.
This project recommends using pre-commit for running pre-commit hooks. Please run it before every commit and/or push.

- If you are making interim commits locally and, for some reason, don't want pre-commit hooks to fire, you can run `git commit -a -m "Your message here" --no-verify`.
## Docs and OpenAPI

If you make any changes to axum routes/handlers, make sure to add/update the OpenAPI specifications. You can run `cargo run --bin openapi` to generate an updated specification `.json` file, located here.

An example of adding an OpenAPI specification is the following:

```rust
#[utoipa::path(
    get,
    path = "/ping",
    responses(
        (status = 200, description = "Ping successful"),
        (status = 500, description = "Ping not successful", body = AppError)
    )
)]
pub async fn get() -> AppResult<StatusCode> {
    Ok(StatusCode::OK)
}
```
Of note, once you add the `utoipa` attribute macro to a route, you should also update the `ApiDoc` struct in `src/docs.rs`:

```rust
/// API documentation generator.
#[derive(OpenApi)]
#[openapi(
    paths(health::healthcheck, ping::get),
    components(schemas(AppError)),
    tags(
        (name = "", description = "")
    )
)]
/// Tied to OpenAPI documentation.
#[derive(Debug)]
pub struct ApiDoc;
```
For logs, traces, and metrics, `ul-api` utilizes several log levels and middleware trace layers to control how events are recorded. The trace layers include: (1) a storage layer; (2) an otel layer; (3) a format layer; and (4) a metrics layer. The log levels include: (1) trace; (2) debug; (3) info; (4) warn; and (5) error. All of this leverages the tracing library and its related extensions. This approach is heavily inspired by Composing an observable Rust application.
At its core, the storage layer exists to capture everything flowing through `ul-api` before events are diffracted to their respective log levels; this way it is possible for `ul-api` to maintain contextual trace information throughout the lifetime of any event, no matter the log level.

The final layer is the metrics layer, which, of note, removes the stored span information upon span closure. The logging middleware automatically drives request/response logging, taking into account status codes and helpful contextual information.
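A minimal sketch of how such a layer stack is typically composed with `tracing-subscriber` (the filter and layers shown are illustrative; the project's storage, otel, and metrics layers would be registered the same way):

```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter};

fn init_tracing() {
    tracing_subscriber::registry()
        // Filter events by log level (trace/debug/info/warn/error).
        .with(EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")))
        // Format layer: renders events for output.
        .with(tracing_subscriber::fmt::layer())
        // Additional `.with(...)` calls would register the storage,
        // otel, and metrics layers described above.
        .init();
}
```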
For logging, we use the tracing library and structure logs in logfmt style. The implementation of the log generation is inspired by influxdata's (InfluxDB's) version.
When defining log functions for output, please define them like so:

```rust
self.healthcheck()
    .await
    .map(|_| {
        info!(
            subject = "postgres",
            category = "db",
            "connection to PostgresDB successful"
        )
    })
    .map_err(|e| {
        error!(
            subject = "postgres",
            category = "db",
            error = ?e,
            "failed to connect to PostgresDB",
        );
        e
    })
```
`ul-api` implements hooks around the creation and closing of spans across the lifetime of events and requests in order to track the entire trace of that event or request. Each created span has a unique span id that will match its close. Below is an example which demonstrates the opening of a span with an id of `2251799813685249`, then a logging event which occurs within that span, and then the closing of that span once it's complete.

```text
level=INFO span_name="HTTP request" span=2251799813685249 span_event=new_span timestamp=2023-01-29T15:06:42.188395Z http.method=GET http.client_ip=127.0.0.1:59965 http.host=localhost:3000 trace_id=fa9754fa3142db2c100a8c47f6dd391d http.route=/ping
level=INFO subject=request category=http.request msg="started processing request" request_path=/ping authorization=null target="project::middleware::logging" location="project/src/middleware/logging.rs:123" timestamp=2023-01-29T15:06:42.188933Z span=2251799813685249 otel.name="GET /ping" http.method=GET http.scheme=HTTP http.client_ip=127.0.0.1:59965 http.flavor=1.1 otel.kind=server http.user_agent=curl/7.85.0 http.host=localhost:3000 trace_id=fa9754fa3142db2c100a8c47f6dd391d http.target=/ping http.route=/ping
level=INFO span_name="HTTP request" span=2251799813685249 span_event=close_span timestamp=2023-01-29T15:06:42.192221Z http.method=GET latency_ms=3 http.client_ip=127.0.0.1:59965 http.host=localhost:3000 trace_id=fa9754fa3142db2c100a8c47f6dd391d http.route=/ping
```
When leveraging tracing's instrument functionality, we can instrument a function to create and enter a tracing span every time that function is called. There are two ways to use instrumentation:

- instrumentation macros
- instrumentation methods

Using the `#[instrument]` attribute macro:

```rust
#[instrument(
    level = "info",
    name = "ul-api.songs.handler.POST",
    skip_all,
    fields(category = "http.handler", subject = "songs")
)]
pub async fn post(db: Extension<PG>, ...)
```
And using the instrumentation methods, for example around a spawned task:

```rust
// Start a span around the context process spawn
let process_span = debug_span!(
    parent: None,
    "process.async",
    subject = "songs.async",
    category = "songs"
);
process_span.follows_from(Span::current());

tokio::spawn(
    async move {
        match context.process().await {
            Ok(r) => debug!(event = ?r, "successfully processed song addition"),
            Err(e) => warn!(error = ?e, "failed processing song"),
        }
    }
    .instrument(process_span),
);
```
If a function is instrumented with a special `record.` prefix in the `name` field, then, as part of its execution, a counter will automatically be incremented and a histogram recorded for that function's span context (start-to-end):
```rust
#[instrument(
    level = "info",
    name = "record.save_event",
    skip_all,
    fields(
        category = "db",
        subject = "postgres",
        event_id = %event.event_id,
        event_type = %event.event_type,
        metric_name = "db_event",
        metric_label_event_type = %event.event_type
    ),
    err(Display)
)]
async fn save_event(...) -> ... {
```
These metrics are derived via the metrics layer, where the `record.` prefix is stripped from the span name and the metrics are then recorded with the metrics-rs library:

```rust
let span_name = span
    .name()
    .strip_prefix(METRIC_META_PREFIX)
    .unwrap_or_else(|| span.name());
// ...
metrics::increment_counter!(format!("{name}_total"), &labels);
metrics::histogram!(
    format!("{name}_duration_seconds"),
    elapsed_secs_f64,
    &labels
);
```
How is OTEL incorporated for exporting (distributed) tracing information? The `axum-tracing-opentelemetry` crate provides middleware for adding OTEL integration to a `tower` service, an extended `TraceLayer`, setting OTEL span information when `ul-api` application routes are executed.

OTEL trace information is exported using an opentelemetry propagation layer, which is registered along with the other layers, for example storage, logging, and metrics. This information is exported over grpc, using a Rust implementation of the opentelemetry otlp specification, codified in our tracer module. With the proper settings and setup, this will work for local development, exporting to a service like Jaeger, or for sending traces to Honeycomb or a similar cloud service.
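As a rough sketch of what an OTLP-over-grpc tracer setup typically looks like (assuming the `opentelemetry` and `opentelemetry-otlp` crates of that era; the endpoint is illustrative, e.g. a local Jaeger OTLP port):

```rust
use opentelemetry_otlp::WithExportConfig;

fn init_tracer() -> Result<opentelemetry::sdk::trace::Tracer, opentelemetry::trace::TraceError> {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint("http://localhost:4317"),
        )
        // Batch export on the tokio runtime.
        .install_batch(opentelemetry::runtime::Tokio)
}
```

The resulting tracer would then typically be attached to the subscriber stack via `tracing_opentelemetry::layer().with_tracer(tracer)`.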
- We recommend leveraging cargo-watch, cargo-expand, and irust for Rust development.
- We recommend using cargo-udeps for removing unused dependencies before commits and pull requests.
This project lightly follows the Conventional Commits convention to help explain commit history and tie in with our release process. The full specification can be found here. We recommend prefixing your commits with a type of `fix`, `feat`, `docs`, `ci`, `refactor`, etc., structured like so:

```text
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```
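For example, a hypothetical commit message following this structure:

```text
feat(middleware): add retry policy to the reqwest client

Adds RetryTransientMiddleware with an exponential backoff policy.
```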
## Getting Help

For usage questions, use cases, or issues, please open an issue in our repository. We would be happy to try to answer your question or to help you get started with a new issue on Github.
## External Resources

These are references to specifications, talks and presentations, etc.
## License

This project is licensed under the MIT License, or http://opensource.org/licenses/MIT.