actix-web-prom's Introduction

actix-web-prom

Prometheus instrumentation for actix-web. This middleware is heavily influenced by the work in sd2k/rocket_prometheus. We track the same default metrics and allow for adding user-defined metrics.

By default, two metrics are tracked (this assumes the namespace actix_web_prom):

  • actix_web_prom_http_requests_total (labels: endpoint, method, status): the total number of HTTP requests handled by the actix HttpServer.

  • actix_web_prom_http_requests_duration_seconds (labels: endpoint, method, status): the request duration for all HTTP requests handled by the actix HttpServer.

Usage

First add actix-web-prom to your Cargo.toml:

[dependencies]
actix-web-prom = "0.8.0"

You then instantiate the prometheus middleware and pass it to .wrap():

use std::collections::HashMap;

use actix_web::{web, App, HttpResponse, HttpServer};
use actix_web_prom::{PrometheusMetrics, PrometheusMetricsBuilder};

async fn health() -> HttpResponse {
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let mut labels = HashMap::new();
    labels.insert("label1".to_string(), "value1".to_string());
    let prometheus = PrometheusMetricsBuilder::new("api")
        .endpoint("/metrics")
        .const_labels(labels)
        .build()
        .unwrap();

    HttpServer::new(move || {
        App::new()
            .wrap(prometheus.clone())
            .service(web::resource("/health").to(health))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await?;
    Ok(())
}

Using the above as an example, a few things are worth mentioning:

  • api is the metrics namespace
  • /metrics will be auto-exposed (GET requests only) with the Content-Type header set to text/plain; version=0.0.4; charset=utf-8
  • .const_labels(labels) adds fixed labels to every metric; the call can be omitted if no additional labels are necessary.

A call to the /metrics endpoint will expose your metrics:

$ curl http://localhost:8080/metrics
# HELP api_http_requests_duration_seconds HTTP request duration in seconds for all requests
# TYPE api_http_requests_duration_seconds histogram
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.005"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.01"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.025"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.05"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.1"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.25"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="0.5"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="1"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="2.5"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="5"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="10"} 1
api_http_requests_duration_seconds_bucket{endpoint="/metrics",label1="value1",method="GET",status="200",le="+Inf"} 1
api_http_requests_duration_seconds_sum{endpoint="/metrics",label1="value1",method="GET",status="200"} 0.00003
api_http_requests_duration_seconds_count{endpoint="/metrics",label1="value1",method="GET",status="200"} 1
# HELP api_http_requests_total Total number of HTTP requests
# TYPE api_http_requests_total counter
api_http_requests_total{endpoint="/metrics",label1="value1",method="GET",status="200"} 1

Features

If you enable the process feature of this crate, default process metrics will also be collected:

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.22
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1048576
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 78
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 17526784
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1628105774.92
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1893163008
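
The feature is not enabled by default; a minimal Cargo.toml sketch turning it on (the feature name process comes from above, the version from the Usage section):

[dependencies]
actix-web-prom = { version = "0.8.0", features = ["process"] }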

Custom metrics

You instantiate PrometheusMetrics and then use its .registry to register your custom metric (in this case, we use an IntCounterVec).

Then you can pass this counter through .data() to have it available within the resource responder.

use actix_web::{web, App, HttpResponse, HttpServer};
use actix_web_prom::{PrometheusMetrics, PrometheusMetricsBuilder};
use prometheus::{opts, IntCounterVec};

async fn health(counter: web::Data<IntCounterVec>) -> HttpResponse {
    counter.with_label_values(&["endpoint", "method", "status"]).inc();
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let prometheus = PrometheusMetricsBuilder::new("api")
        .endpoint("/metrics")
        .build()
        .unwrap();

    let counter_opts = opts!("counter", "some random counter").namespace("api");
    let counter = IntCounterVec::new(counter_opts, &["endpoint", "method", "status"]).unwrap();
    prometheus
        .registry
        .register(Box::new(counter.clone()))
        .unwrap();

    HttpServer::new(move || {
        App::new()
            .wrap(prometheus.clone())
            .data(counter.clone())
            .service(web::resource("/health").to(health))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await?;
    Ok(())
}
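
For reference, after a call to /health the /metrics output should also contain the custom counter, roughly like this (values are illustrative; note that the handler above passes the literal strings "endpoint", "method" and "status" as label values):

# HELP api_counter some random counter
# TYPE api_counter counter
api_counter{endpoint="endpoint",method="method",status="status"} 1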

Custom Registry

Some apps might have more than one actix_web::HttpServer. If that's the case, you might want to use your own registry:

use actix_web::{web, App, HttpResponse, HttpServer};
use actix_web_prom::{PrometheusMetrics, PrometheusMetricsBuilder};
use actix_web::rt::System;
use prometheus::Registry;
use std::thread;

async fn public_handler() -> HttpResponse {
    HttpResponse::Ok().body("Everyone can see it!")
}

async fn private_handler() -> HttpResponse {
    HttpResponse::Ok().body("This can be hidden behind a firewall")
}

fn main() -> std::io::Result<()> {
    let shared_registry = Registry::new();

    let private_metrics = PrometheusMetricsBuilder::new("private_api")
        .registry(shared_registry.clone())
        .endpoint("/metrics")
        .build()
        // It is safe to unwrap when __no other app has the same namespace__
        .unwrap();

    let public_metrics = PrometheusMetricsBuilder::new("public_api")
        .registry(shared_registry.clone())
        // Metrics should not be available from the outside
        // so no endpoint is registered
        .build()
        .unwrap();

    let private_thread = thread::spawn(move || {
        let mut sys = System::new();
        let srv = HttpServer::new(move || {
            App::new()
                .wrap(private_metrics.clone())
                .service(web::resource("/test").to(private_handler))
        })
        .bind("127.0.0.1:8081")
        .unwrap()
        .run();
        sys.block_on(srv).unwrap();
    });

    let public_thread = thread::spawn(|| {
        let mut sys = System::new();
        let srv = HttpServer::new(move || {
            App::new()
                .wrap(public_metrics.clone())
                .service(web::resource("/test").to(public_handler))
        })
        .bind("127.0.0.1:8082")
        .unwrap()
        .run();
        sys.block_on(srv).unwrap();
    });

    private_thread.join().unwrap();
    public_thread.join().unwrap();
    Ok(())
}
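
If you prefer to expose the shared registry from a handler of your own instead of (or in addition to) the built-in endpoint, here is a minimal sketch using the prometheus crate's TextEncoder; the handler name and the web::Data<Registry> wiring are assumptions for illustration, not an API of this crate:

use actix_web::{web, HttpResponse};
use prometheus::{Encoder, Registry, TextEncoder};

// Hypothetical handler that renders a shared Registry in the Prometheus
// text exposition format.
async fn registry_handler(registry: web::Data<Registry>) -> HttpResponse {
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&registry.gather(), &mut buffer)
        .unwrap();
    HttpResponse::Ok()
        .content_type("text/plain; version=0.0.4; charset=utf-8")
        .body(buffer)
}

Wire it up with .app_data(web::Data::new(shared_registry.clone())) and .route("/metrics", web::get().to(registry_handler)) on whichever server should expose it.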

Configurable routes pattern cardinality

Let's say your app has a route to fetch posts by language and slug: GET /posts/{language}/{slug}. By default, actix-web-prom will provide metrics for the whole route, with the label endpoint set to the pattern /posts/{language}/{slug}. This is great, but you cannot differentiate metrics across languages (even though there is only a limited set of them). actix-web-prom can be configured to allow for more cardinality on some route params.

For that you need to add a middleware to pass some extensions data, specifically the MetricsConfig struct that contains the list of params you want to keep cardinality on.

use actix_web::dev::Service;
use actix_web::HttpMessage;
use actix_web_prom::MetricsConfig;

web::resource("/posts/{language}/{slug}")
    .wrap_fn(|req, srv| {
        req.extensions_mut().insert::<MetricsConfig>(
            MetricsConfig { cardinality_keep_params: vec!["language".to_string()] }
        );
        srv.call(req)
    })
    .route(web::get().to(handler));

See the full example with_cardinality_on_params.rs.
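
With the configuration above, the language parameter keeps its concrete value while slug stays collapsed to its pattern, so the resulting series would look roughly like this (namespace and counts are hypothetical):

api_http_requests_total{endpoint="/posts/en/{slug}",method="GET",status="200"} 7
api_http_requests_total{endpoint="/posts/fr/{slug}",method="GET",status="200"} 3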

Configurable metric names

If you want to rename the default metrics, you can use ActixMetricsConfiguration to do so.

use actix_web_prom::{PrometheusMetricsBuilder, ActixMetricsConfiguration};

PrometheusMetricsBuilder::new("api")
    .endpoint("/metrics")
    .metrics_configuration(
        ActixMetricsConfiguration::default()
        .http_requests_duration_seconds_name("my_http_request_duration"),
    )
    .build()
    .unwrap();

See the full example configuring_default_metrics.rs.

actix-web-prom's Issues

Add custom registry support and None endpoint

Some projects have more than one actix_web Server (for example, to separate public and private APIs). In this case the user might not want to expose a metrics endpoint on the public server and would like to have one only on the private server. I suggest making the endpoint optional:

 let public_metrics = PrometheusMetrics::new("public", None);
 let private_metrics = PrometheusMetrics::new("private", Some("/metrics"));

Also, for projects with multiple actix_web servers it might be useful to have one metrics endpoint for the whole project, i.e. to share one Registry instance among multiple middlewares:

let r = Registry::new();
let public_metrics =
        PrometheusMetrics::new_with_registry(r.clone(), "public", None).unwrap();
let private_metrics =
        PrometheusMetrics::new_with_registry(r.clone(), "private", Some("/metrics")).unwrap();

This way the middleware can be used out of the box in projects with one or more actix_web Servers.

Issue when using middleware within a scope

I am trying to use the middleware within a scope, e.g.

        App::new()
            .service(
                web::scope("/internal")
                    .wrap(prometheus.clone())
                    .service(web::resource("/").to(|| HttpResponse::Ok())),
            )
            .service(web::resource("/").to(|| HttpResponse::Ok()))

and I'm getting the following error:

error[E0271]: type mismatch resolving `<PrometheusMetrics as Transform<actix_web::scope::ScopeService>>::Response == ServiceResponse`
  --> src/main.rs:11:22
   |
11 |                     .wrap(prometheus.clone())
   |                      ^^^^ expected struct `StreamLog`, found enum `actix_web::dev::Body`
   |
   = note: expected struct `ServiceResponse<StreamLog<_>>`
              found struct `ServiceResponse<actix_web::dev::Body>`

actix 3 release bump

Thanks for this crate! This is probably a simple one :-)

actix 3 was just released, and it probably requires a bump of actix herein, as I'm seeing this when upgrading:

error[E0271]: type mismatch resolving `<actix_web_prom::PrometheusMetrics as actix_web::dev::Transform<<impl actix_service::ServiceFactory as actix_service::ServiceFactory>::Service>>::Request == actix_web::dev::ServiceRequest`
   --> version.rs:100:14
    |
100 |             .wrap(prometheus.clone())
    |              ^^^^ expected struct `actix_web::service::ServiceRequest`, found struct `actix_web::dev::ServiceRequest`
    |
    = note: perhaps two different versions of crate `actix_web` are being used?

Should not extend metrics response on 404 (Uncontrolled Resource Consumption Vulnerability)

Currently, when an endpoint receives a 404, it extends the Prometheus metrics response by roughly 15 new lines / ~1.5 KB.

For example, my server does not include a /time.php endpoint, so when it was scraped by a bot, these lines were appended.

# ...
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.005"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.01"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.025"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.05"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.1"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.25"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="0.5"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="1"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="2.5"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="5"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="10"} 1
public_api_http_requests_duration_seconds_bucket{endpoint="/time.php",method="GET",status="404",le="+Inf"} 1
public_api_http_requests_duration_seconds_sum{endpoint="/time.php",method="GET",status="404"} 0.000011828
public_api_http_requests_duration_seconds_count{endpoint="/time.php",method="GET",status="404"} 1
# ...
public_api_http_requests_total{endpoint="/time.php",method="GET",status="404"} 1

Then, repeat for 100+ other 404'ing endpoints, and you end up with a really large metrics response, resulting in hundreds of GB of egress on your /metrics endpoint (which has been kindly refunded by the Fly.io billing team).

Furthermore, a dedicated attacker could certainly exhaust significantly more resources simply by 404ing on more URLs.

To resolve this issue, 404 responses from endpoints that fail to match should not be included in metrics. I believe this PR should resolve the issue?

Also recommend submitting an advisory to RustSec for this version of the package.

Split out handler

From the source code:

// We short circuit the response status and body to serve the endpoint
// automagically. This way the user does not need to set the middleware *AND*
// an endpoint to serve middleware results. The user is only required to set
// the middleware and tell us what the endpoint should be.
if inner.matches(&path, &method) {
    head.status = StatusCode::OK;
    body = ResponseBody::Other(Body::from_message(inner.metrics()));
}

I'm not convinced that this is a good idea. new() could return a tuple of (middleware, handler), which would make the code a lot cleaner IMO.

One issue that came to mind quickly is that I want the metrics to include the processing times of all the middlewares, so it's registered as the outermost middleware, but that causes the logging middleware to log a 404 for /metrics, because it logs this before the prometheus middleware changes the status code and body.

Would you be okay with a PR that changes this behaviour?

can't use actix-web-prom with actix-web 4

Hello everyone, I am trying to use actix-web-prom with actix-web 4 with the below dependency:

actix-web = {git = "https://github.com/actix/actix-web",version ="4.0.0-beta.8"}
actix-web-prom = { git = "https://github.com/arthurlm/actix-web-prom.git", branch = "actix-next" }

but I got the following error

not all trait items implemented, missing: `Error`
 --> C:\Users\AbdelmonemMohamed\.cargo\git\checkouts\actix-web-prom-af77977b77662c26\a23d5a6\src\lib.rs:589:1
  |
589 | impl<B: MessageBody> MessageBody for StreamLog<B> {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Error` in implementation
  |
  = help: implement the missing item: `type Error = Type;`

Please, any help would be appreciated.

Allow customizing histogram buckets

While the prometheus library allows for customizing histogram buckets, actix-web-prom does not expose this ability. It would be nice to provide buckets that are more pertinent to expected performance.

Is there a way to pull out the HistogramVec from the registry and customize it in my program, or can we add a method here?
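
As a workaround at the time of this issue, one can register an extra HistogramVec with hand-picked buckets on the middleware's registry, following the custom metrics pattern above (inside main, after building prometheus); the metric name and bucket boundaries below are arbitrary examples:

use prometheus::{histogram_opts, HistogramVec};

// Hypothetical histogram with custom buckets, registered alongside the
// default metrics. Durations still have to be observed manually.
let opts = histogram_opts!(
    "http_requests_duration_custom_seconds",
    "HTTP request duration with custom buckets",
    vec![0.01, 0.05, 0.1, 0.5, 1.0, 5.0]
);
let histogram = HistogramVec::new(opts, &["endpoint", "method", "status"]).unwrap();
prometheus
    .registry
    .register(Box::new(histogram.clone()))
    .unwrap();

Handlers would then call histogram.with_label_values(&[...]).observe(seconds) themselves, so this complements rather than replaces the built-in duration histogram.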

Release, please

Hi!

Since #76 is merged, could you bump the version and release on crates?

Thanks!

no default features for actix-web

The dependency in Cargo.toml should ideally be:

- actix-web = "3.0.2"
+ actix-web = { version = "3.0.2", default-features = false }

Compiler error in 0.6.0.beta-3

To reproduce the issue I'm having, create a new crate with the following dependencies (possibly not all required):

[dependencies]
actix-web = "4.0.0-beta.8"
actix-cors = "0.6.0-beta.3"
actix-web-prom = "0.6.0-beta.3"

It gives the following compiler error:

error[E0432]: unresolved imports `actix_web::body::AnyBody`, `actix_web::dev::BodySize`, `actix_web::dev::MessageBody`, `actix_web::http::HeaderValue`
   --> /home/erlend/.cargo/registry/src/github.com-1ecc6299db9ec823/actix-web-prom-0.6.0-beta.3/src/lib.rs:218:5
    |
218 |     body::AnyBody,
    |     ^^^^^^^^^^^^^ no `AnyBody` in `body`
219 |     dev::{BodySize, MessageBody, Service, ServiceRequest, ServiceResponse, Transform},
    |           ^^^^^^^^  ^^^^^^^^^^^ no `MessageBody` in `dev`
    |           |
    |           no `BodySize` in `dev`
220 |     http::{header::CONTENT_TYPE, HeaderValue, Method, StatusCode},
    |                                  ^^^^^^^^^^^ no `HeaderValue` in `http`

For more information about this error, try `rustc --explain E0432`.
error: could not compile `actix-web-prom` due to previous error

rustc:

rustc 1.58.0-nightly (dd549dcab 2021-11-25)

Provide a way to use actix-web-prom with actix-web 4.0.0

Hi,

There are currently multiple PRs pending to allow usage of actix-web-prom with actix-web = "4.0.0" (see: #45, #44).

I have read the discussions and seen that the maintainers do not want to have beta versions in the master branch.
This is perfectly understandable; indeed, library releases should never depend on beta versions.

However, it would be awesome if you could provide a branch allowing users to continue using actix-web-prom with new versions of actix-web.

I have done this in my fork (see changes) and it works out of the box after changing Cargo.toml like below:

- actix-web-prom = "0.5"
+ actix-web-prom = { git = "https://github.com/arthurlm/actix-web-prom.git", branch = "actix-next" }

Would you like me to create a PR on branch actix-next in your repository (and not merge it into master)?

Do you have any better ideas to allow users to continue using your work with actix-web = "4.0.0-beta.X"?

Cannot use PrometheusMetricsBuilder with default prometheus registry and lazy static

PrometheusMetricsBuilder consumes the Registry, which makes it impossible to pass the default prometheus registry.
This prevents usage of the very convenient lazy-static metrics from the prometheus package:

// This will register to prometheus::default_registry()
lazy_static! {
    static ref HIGH_FIVE_COUNTER: IntCounter =
        register_int_counter!("highfives", "Number of high fives received").unwrap();
}

// ...

let prometheus = PrometheusMetricsBuilder::new("api")
    .endpoint("/metrics")
    .registry(prometheus::default_registry()) // <<<<<<<<<<<<<<<<<<<<<< ERROR
    .build()
    .map_err(|e| RunServiceError::Prometheus(e.to_string()))?;

Though, at first glance, it should be perfectly possible to accept the registry by reference instead.
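
A possible workaround with the current API, assuming the lazy-static setup quoted above: register the static metrics into the same custom Registry that is later handed to the builder, instead of relying on prometheus::default_registry():

use lazy_static::lazy_static;
use prometheus::{register_int_counter_with_registry, IntCounter, Registry};

lazy_static! {
    // Hypothetical shared registry, later passed to
    // PrometheusMetricsBuilder::registry(REGISTRY.clone()).
    static ref REGISTRY: Registry = Registry::new();
    static ref HIGH_FIVE_COUNTER: IntCounter = register_int_counter_with_registry!(
        "highfives",
        "Number of high fives received",
        REGISTRY.clone()
    )
    .unwrap();
}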

feat: configurable cardinality for endpoint pattern label

Hello, first thanks for this wonderful crate.
As discussed in #20 and implemented in #30, we currently try to get the match_pattern of the request and use that as the label value for endpoint.

This is a feature request to be able to configure the cardinality for the endpoint pattern in the label of the metrics. I started to develop a POC and I will probably do a PR.

This is related to this downstream issue maplibre/martin#773. For the use case of the martin tile server we need to have the details by source_ids, to be able to tell which source is causing which latencies or which source is the most requested. The cardinality here is limited since we have a fixed number of sources.

To be clear, we need a way to transform this:

martin_http_requests_total{endpoint="/{source_ids}/{z}/{x}/{y}",method="GET",status="200"} 31

into:

martin_http_requests_total{endpoint="/parcels/{z}/{x}/{y}",method="GET",status="200"} 11
martin_http_requests_total{endpoint="/address/{z}/{x}/{y}",method="GET",status="200"} 20

What is your take on this?

Option to map "path" to something else

I have an endpoint that is parametrized with an entity id:

#[get("/get/{id}")]
async fn get(...) -> impl Responder {}

Using the PrometheusMetrics middleware I get a different metric for each id (because they make different paths). It would be great to have an option to somehow map the real path to /get/{id} or something similar. I'm not sure if it's feasible to automatically find the mapping provided in the get/post/... macros. However, maybe it would be enough to add an option to provide a custom function that modifies the path for the purposes of storing metrics.

Maybe something like the following (or something similar) could work:

   fn map_fn<'a>(path: &'a str) -> Cow<'a, str> {
       ...
   }
   let prometheus = PrometheusMetrics::new("api", Some("/metrics"), None, map_fn);

Is there an option to add such functionality? Would that make sense, or is there a better option?

Wrong response times

Currently the timing of the response times seems off. I think it is because on lines 355 and 356:
fut: self.service.call(req), clock: SystemTime::now(),
the fields are in the wrong order, so the processing of the request is started before the timer is initialized.
It should be fixed by taking the time before constructing the MetricsResponse object. If needed, I am happy to make a pull request for this.

Upgrade to actix-web 2 + Futures 0.3

Would you be open to a PR that updates the actix_web dependency to 2.0.0? I have a working implementation if you'd be interested in that update.

0.2.0 Crate Release

Great work! Just one thing: when fetching from crates.io, the latest version available is 0.1.4. Could you bump the version and release on crates? I don't mind drafting a PR for a GitHub Actions/Travis/etc. config to automate this process in the future. LMK your thoughts.

Metrics not working as expected locally

Hi! I've been testing the crate on my local machine but it's not working as expected.

After making some requests to my Actix server, the metrics seem to reset or start over (or not update at all); what's happening is bizarre. For example, the HTTP request counter is never updated, or after being incremented it decrements, which makes no sense to me.
The /metrics endpoint doesn't stay consistent. Is this behavior expected, or am I maybe doing something wrong?

We're not doing anything weird on our side:

// middleware::metrics
pub fn metrics() -> PrometheusMetrics {
    PrometheusMetricsBuilder::new("quests")
        .endpoint("/metrics")
        .build()
        .unwrap()
}


// Actix routing
pub fn get_app_router(
    config: &Data<Config>,
    db: &Data<Database>,
    redis: &Data<RedisMessagesQueue>,
) -> App<
    impl ServiceFactory<
        actix_web::dev::ServiceRequest,
        Config = (),
        Response = actix_web::dev::ServiceResponse<impl MessageBody>,
        Error = actix_web::Error,
        InitError = (),
    >,
> {
    let cors = actix_cors::Cors::permissive();
    App::new()
        .app_data(query_extractor_config())
        .app_data(json_extractor_config())
        .app_data(path_extractor_config())
        .app_data(config.clone())
        .app_data(db.clone())
        .app_data(redis.clone())
        .wrap(cors)
        .wrap(TracingLogger::default())
        .wrap(middlewares::metrics())
        .wrap(middlewares::metrics_token(&config.wkc_metrics_bearer_token))
        .configure(routes::services)
}
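
One thing worth double-checking in a setup like this (a guess, not a confirmed diagnosis): HttpServer::new calls its app factory once per worker, so if middlewares::metrics() builds a fresh PrometheusMetrics inside that closure, every worker ends up with its own registry, and scrapes can appear to reset depending on which worker answers. The README examples avoid this by building the middleware once and cloning it into the closure; a minimal sketch of that shape, reusing the quests namespace from above:

let prometheus = PrometheusMetricsBuilder::new("quests")
    .endpoint("/metrics")
    .build()
    .unwrap();

HttpServer::new(move || {
    App::new()
        .wrap(prometheus.clone()) // same registry shared by every worker
        .configure(routes::services)
})
.bind("127.0.0.1:8080")?
.run()
.await?;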
