ps2alerts / api

Centralised API that processes aggregates coming in from the Aggregator via RabbitMQ. It also serves as a REST interface for the website and external users.

Home Page: https://api.ps2alerts.com

License: GNU General Public License v3.0

Dockerfile 0.20% Shell 0.12% JavaScript 2.28% TypeScript 97.40%
kubernetes nestjs nodejs planetside2 typescript rabbitmq-consumer

api's People

Contributors

dependabot[bot], depfu[bot], maelstromeous, microwavekonijn, ryanjsims, zhenghung

api's Issues

Create RPC queue to enable Aggregator to create instances and get IDs back

We face a unique situation in terms of knowing what the instance ID actually is when we create it, and it should be the only real time we require an actual response from the API when it creates something.

@microwavekonijn has suggested using a RabbitMQ RPC connection where it can consume and then return the ID of the instance.

Struggling to find NestJS documentation on how to do this, however. If we can't figure this one out, we might have to implement some sort of return response via another queue that the Aggregator consumes, allowing the Aggregator to apply the ID to its in-memory array.
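For reference, the RabbitMQ RPC pattern boils down to a correlation ID plus a reply destination. A minimal sketch of the flow, with an in-process EventEmitter standing in for the broker (all names and the `-12345` ID here are made up, not the real implementation):

```typescript
import { EventEmitter } from 'node:events';
import { randomUUID } from 'node:crypto';

// Stand-in for the RabbitMQ channel shared by both services.
const broker = new EventEmitter();

// "API" side: consume instance-create requests, reply on the correlated topic.
broker.on('instance.create', (msg: { correlationId: string; world: number }) => {
  const instanceId = `${msg.world}-12345`; // would be generated when the record is inserted
  broker.emit(`reply.${msg.correlationId}`, instanceId);
});

// "Aggregator" side: publish a request and await the correlated reply.
function createInstance(world: number): Promise<string> {
  return new Promise((resolve) => {
    const correlationId = randomUUID();
    broker.once(`reply.${correlationId}`, resolve);
    broker.emit('instance.create', { correlationId, world });
  });
}
```

With a real broker the reply would travel over a dedicated reply queue, but the correlation-ID bookkeeping is the same.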

Add internal endpoints for Aggregator use

For ps2alerts/aggregator#481, we require an internal endpoint to communicate the following with the Aggregator:

  • Instance status and state handling
  • Facilitation of inserting and reading FacilityControl for an instance to produce map states

This requires an internally shared secret sent via auth header to authenticate requests coming from the Aggregator.

This should be done via a Guard that NestJS offers; then we can simply wrap any controller method with a decorator to keep the code clean, rather than adding an if statement to each one.
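The check such a Guard would perform is just a constant-time comparison against the shared secret. A sketch of that comparison (the `AGGREGATOR_SECRET` env var name is an assumption; the real Guard would read the header off the request and call something like this):

```typescript
import { timingSafeEqual } from 'node:crypto';

// Shared secret the Aggregator sends in its auth header (name assumed).
const AGGREGATOR_SECRET = process.env.AGGREGATOR_SECRET ?? 'changeme';

function isAuthorised(authHeader: string | undefined): boolean {
  if (!authHeader) return false;
  const supplied = Buffer.from(authHeader);
  const expected = Buffer.from(AGGREGATOR_SECRET);
  // timingSafeEqual throws on length mismatch, so guard first.
  if (supplied.length !== expected.length) return false;
  return timingSafeEqual(supplied, expected);
}
```

Wrapping this in a `CanActivate` class and a decorator keeps the controllers free of auth noise.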

Create Message Queue data validator

We need some sort of middleware which validates the messages coming in from the PS2Alerts Aggregator module via MQ. In theory these messages should be clean, but we should have some simple validation to ensure the required keys are sent through with the correct types, etc.
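The "required keys of the correct types" check can be a tiny spec-driven function; a sketch (field names in the usage below are illustrative, not the Aggregator's actual schema):

```typescript
// One required field of the payload and its expected primitive type.
interface FieldSpec {
  key: string;
  type: 'string' | 'number' | 'boolean' | 'object';
}

// Returns a list of validation errors; empty means the message is acceptable.
function validateMessage(payload: unknown, spec: FieldSpec[]): string[] {
  if (typeof payload !== 'object' || payload === null) {
    return ['payload is not an object'];
  }
  const record = payload as Record<string, unknown>;
  const errors: string[] = [];
  for (const { key, type } of spec) {
    if (!(key in record)) errors.push(`missing key: ${key}`);
    else if (typeof record[key] !== type) errors.push(`wrong type for ${key}`);
  }
  return errors;
}
```

In NestJS this logic would more idiomatically live in a pipe or class-validator DTO, but the shape of the check is the same.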

Create mechanism to process Global Aggregates after alert has finished

In order to implement a Global Aggregate based on a bracket system, we need to know what the bracket of the alert is. Unfortunately, now that we've implemented the population-based bracket system rather than the time-based one, we are unable to reliably calculate the alert bracket until the alert has finished.

Therefore, we cannot attribute the global aggregates to a bracket until the alert has completed.

There are two viable methods to process this:

1) Delay the processing of Global Aggregates until alert end

This would be the least work and will result in eventual consistency. However, it will generate a large backlog of messages to churn through at alert finish. This will need to be tested quite thoroughly to ensure Rabbit can take the load, as well as the API containers.

Implementation:

Attach an instance ID to the global aggregators. In the queue consumer, check if that alert has been finished. If it hasn't, chuck it back into the queue until the alert has finished. We can then pull out the instance's bracket and apply that to the global aggregator.

A new queue is likely to be needed, so we have "instance" queues which are the instance aggregators, and the "eventual" queue, which needs to have a long TTL applied so it doesn't start deleting messages.
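The consumer's decision for the "eventual" queue can be sketched as a pure function: requeue until the instance has ended and has a bracket, then process. Types and names below are illustrative only:

```typescript
interface EventualMessage { instanceId: string; }
interface InstanceState { ended: boolean; bracket?: string; }

type Action = { kind: 'requeue' } | { kind: 'process'; bracket: string };

// Decide what to do with a delayed global-aggregate message: if the
// instance hasn't finished (or isn't known), chuck it back into the
// queue; otherwise process it with the instance's bracket attached.
function decide(msg: EventualMessage, instances: Map<string, InstanceState>): Action {
  const state = instances.get(msg.instanceId);
  if (!state || !state.ended || !state.bracket) return { kind: 'requeue' };
  return { kind: 'process', bracket: state.bracket };
}
```

In RabbitMQ terms, "requeue" would be a nack with requeue (or a republish to a delay queue) so the long-TTL queue holds the message until the alert ends.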

2) Scrap sending Global Aggregation messages and apply Instance Aggregators on top of Global Aggregators

There will need to be some sort of trigger against alerts that have just finished to process this information and then apply it on top of the Global Aggregators. This is a lot more work, as there need to be two things in place:

  1. A mechanism to flag an alert to process (and we need some sort of count to ensure all pending messages have been processed before applying)
  2. Code put in place to loop through all Instance Aggregate data and apply it on top of Globals. This presents a maintainability problem as we now need code to apply counts on top of other counts.

Build Aggregation controllers

The foundation to accept data from the websocket is complete; we have a method of routing messages to appropriate controllers for processing.

We now need to flesh out these controllers and actually save data. This should be fairly trivial; we just need to be clever about validation.

Create common repo to sync MQ endpoint tags

We should create a repo in order to share the message queue endpoint names between the aggregator and the API. Right now they're duplicated files in both repos, and both have to change at the same time.

Create vehicle data endpoint

I need an endpoint building in the API module which allows easy gathering of vehicle data, basically this call:
https://census.daybreakgames.com/get/ps2:v2/vehicle?c:limit=1000
I need it to return a list of vehicles compiled to the following interface:

export interface VehicleDataInterface {
  id: number;
  name: string;
  faction: Faction;
}

It should be a separate controller, maybe /data/vehicles. The purpose of the endpoint is to initially pull the data out of Census and store the compiled data in Redis for easier retrieval from the website. This means we can keep the data somewhat fresh (12 hours, say). I'm currently pulling it directly out of Census, but not only is that a Census API call on every single page load, it also currently exposes our service ID.

Implement REST read endpoints

Expose the data directly through an http endpoint.

Endpoints currently in mind:

Instances

Exposes the currently active instances, and provides a history with a paginated list. Needs to be filterable by server, date/time and zone.

  • GET instance
  • GET instances list (paginated) by filters world / zone / faction victory
  • GET active instances
  • GET instances by world
  • GET instances by zone (maybe)

Aggregates

Path: <ROOT>/aggregates/

Global

Path: <ROOT>/aggregates/global/

Global Character Aggregate

Path: <ROOT>/aggregates/global/character

Exposes statistics for each character on a world.

  • GET single char ID
  • GET list of characters
  • GET list of characters by world ID
  • GET list of characters sorted by metric (e.g. kills)
  • GET list of characters by world ID sorted by metric

Global Class Aggregate

Path: <ROOT>/aggregates/global/class

Exposes statistics for particular classes on a world.

  • GET single loadout (class) ID globally
  • GET single loadout ID for world
  • GET all loadouts
  • GET all loadouts by world

Global Facility Control

Path: <ROOT>/aggregates/global/facility

  • GET single facility ID globally
  • GET single facility ID for world
  • GET all facilities by world
  • GET all facilities by zone
  • GET all facilities by world and zone

Global Faction Combat

Path: <ROOT>/aggregates/global/faction

  • GET by faction globally
  • GET by faction for world
  • GET all factions for world

Potentially a GET for all factions by instances on a zone; not sure if that's useful or not. Might have to do a zone combat metric at some point if people want it. Could do it via a Mongo aggregate.

Global Outfits

Path: <ROOT>/aggregates/global/outfit

  • GET by outfit ID (potentially might need world ID due to PS4 US and PS4 EU)
  • GET list of outfits by world
  • GET list of outfits globally sorted by metric
  • GET list of outfits by world sorted by metric

Global Weapons

Path: <ROOT>/aggregates/global/weapon

  • GET by weapon ID
  • GET by weapon ID by world
  • GET list of weapons globally
  • GET list of weapons by world
  • GET list of weapons globally sorted by metric
  • GET list of weapons by world sorted by metric

Instance

Path: <ROOT>/aggregates/instance/

Instance Character Aggregate

Path: <ROOT>/aggregates/instance/character

Exposes statistics for each character for an instance.

  • GET single char ID within instance
  • GET list of characters within instance
  • GET list of characters sorted by metric (e.g. kills) within instance

Instance Class Aggregate

Path: <ROOT>/aggregates/instance/class

  • GET single loadout (class) ID within instance
  • GET all loadouts within instance

Instance Facility Control

Path: <ROOT>/aggregates/instance/facility

  • GET single facility ID within instance
  • GET all facilities within instance

Instance Faction Combat

Path: <ROOT>/aggregates/instance/faction

  • GET by faction within instance
  • GET all factions within instance

Instance Outfits

Path: <ROOT>/aggregates/instance/outfit

  • GET by outfit ID within instance
  • GET list of outfits within instance
  • GET list of outfits within instance sorted by metric

Instance Population Aggregate

Path: <ROOT>/aggregates/instance/population

Gets the population history as detected by the Aggregator for the instance

  • GET list of population metrics within instance, sorted by time ASC

Instance Weapons

Path: <ROOT>/aggregates/instance/weapon

  • GET by weapon ID within instance
  • GET list of weapons within instance
  • GET list of weapons within instance sorted by metric

DEPRECATED Endpoint notes

Endpoint Notes - Requests and Responses

All timestamps provided are in UNIX TIMESTAMP format.

THIS DOCUMENTATION IS NOT YET COMPLETE. DO NOT USE FOR PRODUCTION SERVICES!

Alerts

Example request:

GET /v2/alerts/10000

Example Response:

{
  "data": {
    "id": 10000,
    "started": 1438169834,
    "ended": 1438177048,
    "server": 1,
    "zone": 2,
    "winner": "VS",
    "isDraw": false,
    "isDomination": false,
    "isValid": true,
    "inProgress": false
  }
}

This endpoint supports embeds. Embeds allow you to request other information about alerts. Please see the list at the bottom of this document for a list of Alert Embeds.

Alerts/Active

Todo

Alerts/Counts

Todo

Embeds

Supported Alerts embeds:

Each of the responses demonstrated below is shown AFTER the alert details. The ... at the top of each response represents the alert data, signifying that the response shown comes after the alert data.

...s within the response itself signify that more data would likely follow. If these dots are missing, you are seeing the entire response.

Classes

Shows all data relating to classes for an alert.

GET /v2/alerts/10000?embed=classes

...
"classes": {
    "data": [
        {
          "id": 1, // ClassID
          "kills": 463,
          "deaths": 449,
          "teamkills": 6,
          "suicides": 5
        },
        ...
    ]
}

Combats

Shows all kill totals for each faction for an alert. Does NOT show weapon statistics; that has its own embed.

GET /v2/alerts/10000?embed=combats

...
"combats": {
    "data": {
        "kills": {
            "vs": 3873,
            "nc": 2863,
            "tr": 4762,
            "total": 11498
        },
        "deaths": {
            "vs": 4291,
            "nc": 3085,
            "tr": 4913,
            "total": 12289
        },
        "teamkills": {
            "vs": 244,
            "nc": 113,
            "tr": 189,
            "total": 546
        },
        "suicides": {
            "vs": 124,
            "nc": 50,
            "tr": 71,
            "total": 245
        }
    }
}

CombatHistory

Gets the running total of each faction's kill totals as the alert went on. All data should be sorted by timestamp ascending. Data is sampled at 30 second intervals.

GET /v2/alerts/10000?embed=combatHistorys

...
"combatHistorys": {
    "data": [
        {
            "timestamp": 1438169860,
            "vs": 14,
            "nc": 14,
            "tr": 14
        },
        ...
        {
            "timestamp": 1438170760,
            "vs": 805,
            "nc": 805,
            "tr": 921
        },
        ...
    ]
}

MapInitials

Gets the initial map state so that accurate maps can be built.

GET /v2/alerts/10000?embed=mapInitials

...
"mapInitials": {
    "data": [
        {
            "facilityID": 7500,
            "facilityType": 4,
            "facilityFaction": 1
        },
        ...
    ]
}

Maps

Gets all facility captures and defences (may add filters later for this) for an alert.

GET /v2/alerts/10000?embed=maps

...
"maps": {
    "data": [
        {
            "timestamp": 1438169876,
            "facilityID": 118000,
            "facilityNewFaction": 3,
            "facilityOldFaction": 2,
            "durationHeld": 4961,
            "controlVS": 29,
            "controlNC": 22,
            "controlTR": 47,
            "server": 1,
            "zone": 2,
            "outfitCaptured": 37521582920072694,
            "isDefence": false
        },
        ...
    ]
}

Outfits

Gets all statistics for each outfit during the alert.

Note: in this endpoint there are three outfits with the IDs -1, -2, and -3. These represent the players who are not in an outfit, one per faction. For example, a VS player who doesn't have an outfit will be counted under outfit -1.

GET /v2/alerts/10000?embed=outfits

...
"outfits": {
    "data": [
        {
            "outfit": {
                "id": 37530800324403534,
                "name": "PLANETSIDE POLICE DEPT",
                "tag": "PSDP",
                "faction": 3
            },
            "metrics": {
                "kills": 32,
                "deaths": 23,
                "teamkills": 1,
                "suicides": 0,
                "captures": 0
            }
        },
        ...
    ]
}

Players

Gets all statistics for each player during the alert.

GET /v2/alerts/10000?embed=players

...
"players": {
    "data": [
        {
            "player": {
                "id": 5428359100732927473,
                "name": "MoZaiNai",
                "outfitID": 37530800324403534,
                "faction": 3
            },
            "metrics": {
                "kills": 32,
                "deaths": 23,
                "teamkills": 1,
                "suicides": 0,
                "headshots": 5
            }
        },
        ...
    ]
}

Populations

Gets the running Population totals over the course of the alert. All data should be sorted by timestamp ascending. Data is sampled at 30 second intervals.

GET /v2/alerts/10000?embed=populations

...
"populations": {
    "data": [
        {
            "timestamp": 1438169840,
            "vs": 146,
            "nc": 100,
            "tr": 155,
            "total": 401
        },
        ...
    ]
}

Vehicles

Gets each vehicle's type metrics during the alert.

GET /v2/alerts/10000?embed=vehicles

...
"vehicles": {
    "data": [
        {
            "id": 1, // Vehicle ID
            "kills": {
                "infantry": 73,
                "vehicle": 15,
                "total": 88
            },
            "deaths": {
                "infantry": 98,
                "vehicle": 82,
                "total": 180
            },
            "bails": 139
        },
        ...
    ]
}

NOTE TO DEV: Implement player based vehicle stats endpoint

Weapons

Gets each weapon's type metrics during the alert.

GET /v2/alerts/10000?embed=weapons

...
"weapons": {
    "data": [
        {
          "id": 4601, // WeaponID
          "kills": 44,
          "headshots": 2,
          "teamkills": 1
        },
        ...
    ]
}

NOTE TO DEV: Implement player based weapon stats endpoint

Alerts/Counts

Alerts/Counts/Daily

Returns the daily totals for all servers combined.

GET /v2/alerts/counts/daily

"data": {
    "2014-10-29": { // All dates are in Y-m-d format
        "data": {
            "vs": 9,
            "nc": 3,
            "tr": 4,
            "draw": 1,
            "total": 17
        }
    },
    "2014-10-30" : {
        ...
    }
    ...
}

Alerts/Counts/DailyByServer

Returns the daily totals broken down by server, by date

GET /v2/alerts/counts/dailyByServer

"data": {
    "1": {
        "2014-10-29": {
            "data": {
                "vs": 2,
                "nc": 0,
                "tr": 1,
                "draw": 0,
                "total": 3
            }
        },
        "2014-10-30": {
            ...
        },
        ...
    }
    "10": {
        ...
    }
}

Alerts/History

Returns the latest results across all servers

GET /v2/alerts/history

"data": [
    {
        "id": 19352,
        "started": 1455368259,
        "ended": 1455373660,
        "server": 10,
        "zone": 8,
        "winner": "TR",
        "isDraw": false,
        "isDomination": false,
        "isValid": true,
        "inProgress": false
    },
    ...
]

Also supports embeds. Recommended use is ?embed=map.

API update error

TypeError: Update document requires atomic operators
    at new UpdateOneOperation (/app/node_modules/mongodb/lib/operations/update_one.js:12:13)
    at Collection.updateOne (/app/node_modules/mongodb/lib/collection.js:758:5)
    at MongoQueryRunner.<anonymous> (/app/node_modules/typeorm/driver/mongodb/MongoQueryRunner.js:445:85)
    at step (/app/node_modules/typeorm/node_modules/tslib/tslib.js:141:27)
    at Object.next (/app/node_modules/typeorm/node_modules/tslib/tslib.js:122:57)
    at /app/node_modules/typeorm/node_modules/tslib/tslib.js:115:75
    at new Promise (<anonymous>)
    at Object.__awaiter (/app/node_modules/typeorm/node_modules/tslib/tslib.js:111:16)
    at MongoQueryRunner.updateOne (/app/node_modules/typeorm/driver/mongodb/MongoQueryRunner.js:442:24)
    at MongoEntityManager.updateOne (/app/node_modules/typeorm/entity-manager/MongoEntityManager.js:542:33)
    at MongoOperationsService.<anonymous> (/app/dist/services/mongo/mongo.operations.service.js:86:27)
    at Generator.next (<anonymous>)
    at /app/dist/services/mongo/mongo.operations.service.js:20:71
    at new Promise (<anonymous>)
    at __awaiter (/app/dist/services/mongo/mongo.operations.service.js:16:12)
    at MongoOperationsService.upsertOne (/app/dist/services/mongo/mongo.operations.service.js:83:16)

Flesh out controllers to handle messages coming in from Aggregator

The controllers all need the ability to consume from the aggregator MQ. We have the functionality in place to do this now, as proven by the POC performed recently.

instancedeath.controller.js

    @MessagePattern('instanceDeath')
    public handleMessage(@Payload() data: InstanceMetagameMessageData): void {
        // ... do things
    }

CI: Set up code quality checks

Currently we are performing no code quality checks on PRs; we need to bring this up to the same standard as the current Websocket project to ensure code quality.

Statistical endpoints

So we need to make a start on generating metrics and statistics out of the system for consumption. Below are my thoughts on what we'll need initially, beyond the aggregator system we currently have:

Victory Counts

/metrics/victories

This needs to provide an aggregated (via mongo) summary of all victories, specifically:

  • Per-faction wins
  • Draws
  • Filterable to show per-zone victories and per-faction basis
  • Filterable for date ranges, world, zone etc (not needed for MVP)
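The "aggregated (via mongo)" part could be a single $group stage. A sketch of the pipeline plus an equivalent in-memory reduction for illustration — the `result` field name and its values are assumptions about the alert schema, not the real one:

```typescript
// Candidate Mongo aggregation: optional filters in $match, then count
// alerts per outcome. Field names here are assumed.
const victoryPipeline = [
  { $match: {} }, // world / zone / date-range filters would go here
  { $group: { _id: '$result', count: { $sum: 1 } } },
];

interface AlertDoc { result: 'vs' | 'nc' | 'tr' | 'draw'; }

// Same computation done in memory, to show what the pipeline produces.
function countVictories(alerts: AlertDoc[]): Record<string, number> {
  const counts: Record<string, number> = { vs: 0, nc: 0, tr: 0, draw: 0 };
  for (const alert of alerts) counts[alert.result] += 1;
  return counts;
}
```

For the MVP the endpoint just runs the pipeline and reshapes `_id`/`count` pairs into the response object.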

Authentication token system

Since the project was mostly private back in the day, there was no real need for a token system. We need a method of distributing API keys to people who wish to use the API, including the website, and a middleware to check these tokens.

Potentially could implement OAuth2, if people feel brave.

DEPRECATED Documentation notes

Response codes

The API has standardized response codes. The API provides three pieces of information for you: the HTTP response code (e.g. 200 for OK), an API-centric error code (listed below) and a hopefully meaningful message.

Example Error Response

Request: HTTP GET "/v2/alerts/1"

Response:
{
    "error": {
        "code": "API-EMPTY",
        "http_code": 203,
        "message": "No data / Empty"
    }
}

List of API-Centric response codes and their meanings

'API-MALFORMED-REQUEST'

The data you sent couldn't be parsed correctly. This either means you sent invalid JSON or did not send the parameters expected by the API.

'API-NOT-FOUND'

Basically a 404. The resource endpoint was not located.

'API-DOH'

Server error. Hopefully a message has been passed. If you encounter this, please make an issue.

'API-UNAUTHORIZED'

Unauthorized access to the endpoint requested. *** TO IMPLEMENT ***

'API-DENIED'

Your user level is unable to access this resource. Mainly used for locking down in-development endpoints. *** Currently nothing uses this yet ***

'API-EMPTY'

The particular data you were trying to find returned empty. Either that means it doesn't exist or there was simply no data to show.

Create ActiveInstances endpoint

We require an active instances read endpoint in order to pull the active alerts out of the API; this is required for the website MVP.

Add Census data replacement for Map Lattices

Since Census can no longer be trusted to keep lattice data updated, we currently need to maintain it manually.

Additionally, Census has been known to bork requests for the map data, so we should make as few requests to it as possible to ensure stability.

Expand upon the current Oshur endpoint and enable similar endpoints to supply data for the other zones.

ps2alerts/website#413

Create mechanism to calculate bracket based off population

Using brackets based off time isn't really ideal. There are numerous issues with prime-time brackets based on time:

  • School nights (people will play less late)
  • Bank / national holidays will extend play time
  • Abnormal events (such as COVID, everyone's playing more)
  • People might be having a good night, so denying them a prime time alert seems... unfair.
  • On the flipside, if people are having a bad night, alerts would be classed as prime when they're totally not.

Therefore, we should design the bracket system to go off an average of both the total population (>400 players on the continent) and the per-faction population (more than 150 players per faction), to ensure that 1) the alert was relatively fair and 2) there were enough people to classify it as a good alert fight (this should also exclude underpopulated alerts entirely, or keep them to a minimum).

Instead of morning, afternoon and prime, we will have the following:

Bracket   Per side                 Total
Dead      0-47 (<1 platoon)        <144
Low       48-95 (1-2 platoons)     144-287
Medium    96-143 (2-3 platoons)    288-431
High      144-191 (3-4 platoons)   432-575
Prime     192-300 (4+ platoons)    >576

Brackets will be renamed to "Activity Levels"
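The table maps directly to a small function. A sketch, assuming the input is the averaged per-faction population (the function name is made up):

```typescript
type ActivityLevel = 'Dead' | 'Low' | 'Medium' | 'High' | 'Prime';

// Thresholds taken straight from the bracket table above.
function calculateActivityLevel(avgPerFaction: number): ActivityLevel {
  if (avgPerFaction < 48) return 'Dead';    // < 1 platoon
  if (avgPerFaction < 96) return 'Low';     // 1-2 platoons
  if (avgPerFaction < 144) return 'Medium'; // 2-3 platoons
  if (avgPerFaction < 192) return 'High';   // 3-4 platoons
  return 'Prime';                           // 4+ platoons
}
```

The cronjob described below would feed this the filtered population average once the alert's first five minutes are excluded.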

Execution

  1. Create a new cronjob and run it every minute.
  2. Pull in the population histories and calculate the average.
  3. Remove the first 5 minutes of the alert: the way we calculate populations takes 3 minutes to settle properly, so it would skew the average (we will need to filter out the first 5 minutes for that instance, easy enough).
  4. Calculate the average for each faction and in total. If the faction doesn't line up mark it as

Sanctum-like API authorization

We are going to take a similar approach to what Laravel Sanctum offers for Laravel. We are going to allow our SPA to communicate directly with our API without the need for an API key, using a CSRF token; other applications need an API key to get access to the API.

What this will not do is ensure our API gets used only by people we allow; instead, it allows us to track who is using our API and in what way. It will also allow us to later add an authentication system for user-specific features.

The system requires the following features:

  • A route to make our SPA stateful by setting the necessary cookies;
  • A middleware function that tries to verify the CSRF token, or the API key;
  • The system needs to be headless, as in you can't store the CSRF tokens;
  • Optional: a way to revoke tokens (though I don't think it is possible).

You can probably just copy the code from any middleware that implements the CSRF feature and extend it with API key validation for when the CSRF token is not set.
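The middleware's branching can be sketched as a pure decision function — try the CSRF token first (the SPA path), fall back to the API key, otherwise deny. The token/key stores here are stand-ins for whatever the real middleware checks against:

```typescript
interface AuthRequest {
  csrfToken?: string;
  apiKey?: string;
}

// Sanctum-style decision: SPA requests authenticate via CSRF token,
// everything else via API key. Returns which path succeeded so we can
// track who is using the API and how.
function authorise(
  req: AuthRequest,
  validCsrfTokens: Set<string>,
  validApiKeys: Set<string>,
): 'spa' | 'api-key' | 'denied' {
  if (req.csrfToken && validCsrfTokens.has(req.csrfToken)) return 'spa';
  if (req.apiKey && validApiKeys.has(req.apiKey)) return 'api-key';
  return 'denied';
}
```

Returning the matched path (rather than a bare boolean) is what enables the usage-tracking goal above.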

Add sorting to REST endpoints

We require the ability for clients to request data in a certain sorted format. An immediate use case is territory instances sorted by date started DESC (most recent first).
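One possible convention (an assumption, not a decided API) is a `sort` query parameter like `-timeStarted` for DESC and `timeStarted` for ASC, translated into a Mongo sort object:

```typescript
// Map a sort query value to a Mongo sort document:
// "-timeStarted" -> { timeStarted: -1 }, "kills" -> { kills: 1 }.
function parseSort(param: string): Record<string, 1 | -1> {
  const desc = param.startsWith('-');
  const field = desc ? param.slice(1) : param;
  const direction: 1 | -1 = desc ? -1 : 1;
  return { [field]: direction };
}
```

The territory-instances use case above would then be `?sort=-timeStarted`. Field names should still be whitelisted before being passed to Mongo.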

Fix typehinting for Mongo Operations Service

src/services/mongo/mongo.operations.service.ts

Currently we're using a feck-ton of any or any[] typecasts, which defeats the point of using TypeScript. I was struggling to figure out how to supply the entity type to the function and make it return correctly, due to some weirdness with how Mongo returns entities not being compatible.


Websocket server

The server will be used to send out updates in real time to the clients. The protocol will work as follows:

  • The client connects to the websocket and receives a collection of all available rooms/channels;
  • The client subscribes to a room/channel, upon subscribing it receives a current state of events;
  • The client will henceforth receive partial objects with updated states that it is able to merge with the initial state it received.

Other requirements:

  • Only use the websocket for active alerts/events, or use it for everything (don't know if there is a downside when it comes to performance/complexity for client-side code);
  • Active alerts/events will be updated in realtime requiring an active system to handle this flow of information;
  • An important aspect is the order of operations of the code. In other words: a client should not miss information, nor hold faulty information because it was updated while being retrieved, nor should it have to wait until faulty information gets replaced with new information.

Might tie in nicely with the idea of removing more detailed data later down the line. E.g. kills of a player: just have an array of killed characters with details. You can then quickly tell which version of the data you have and fetch the relevant data that is missing. It can help create a lazier system.

Important note

The API can run multiple instances meaning that one instance will not receive all messages from RabbitMQ.

Archive current API

A lot of tasks are going to be handled via the website issue; however, specifically for the API:

  • Ensure database has been exported and imported to archive server
  • Set up DNS client to point api.old.ps2alerts.com to archive server
  • Update URLs of website and API for the new setup

Create ability to store character and outfit information

Currently we store only string IDs in each record for every aggregate. These don't contain a name, so for the website and generally for serving API requests we need to resolve the name.

Thinking of handling this via:

  1. Creating two new aggregator endpoints which accept character and outfit data. This can be triggered from the Character Broker code we have for Census already.
    1a) We'll have to decide whether we always send the information to the API (which can ignore or store it as it sees fit), or whether we somehow determine if the API already has the information.
  2. Via the entities where this information is required, perform a one-to-many JOIN which will extract this information out of the database and serve it with the API request.

Populate Entity Examples

Swagger currently doesn't show any examples; we need to add them to the entities. Thankfully we have done this in InstanceMetagameEntity, so it should be easy to replicate.

Add population endpoint

Create an endpoint to access live server population (preferably per continent if possible) outside of active alerts.

Create instance facility control time history

Currently we have no data on historical territory percentages over time for an alert. We should add a cronjob, similar to population history and kill counts, recording territory control over time.

Global Victory Aggregate taking 2+ seconds to respond

This aggregate loads a lot of the homepage statistics and it's extremely underperformant. It's currently cached in Redis, but I believe the cache is ineffective and isn't working properly.

This appears to have been worsened by the recent database migration.

Solution 1:

  1. Set the TTL to 24 hours for the aggregate on the endpoint
  2. Using the new internal API call, have it invalidate the cache upon instance end
  3. Regenerate the cache

Solution 2:

  1. Investigate why the query is taking so long to respond in Mongo.
  2. Attempt to fix the issue.

Solution 3:

  1. Implement pagination on the endpoint and the website, and fetch the data in chunks

Database index issues

We have a high-CPU database issue again. I believe this was introduced when I changed the indexing to use character.id, weapon.id, etc.

  1. Check if subkey indexing is possible. If not, we'll have to force adding the ID of the record to index on it
  2. If possible, adjust the indexing.

Write API in Nest JS

I want to make a start on setting up a basic API using Nest JS, mainly for a few reasons. First, it is the same language as the websocket and will be similar to the choice for the front-end of the website, meaning we can reuse utilities and models across the different services. Second, it has integration for a websocket server.

In terms of where this service will sit across the other services:

  • Websocket: Will always be a singular entity that collects data and stores it in the database;
  • API/Websocket server: Will make the data available to the front-end and third party services as well as provide additional services(e.g. administrative tasks);
  • Website: Will display the data as an SPA.

With that said, we will do something similar to Laravel Airlock/Sanctum to authenticate the site, and also allow the generation of API keys (maybe even OAuth) for third-party services. The API will be in charge of data retrieval, account management, and administrative tasks (e.g. planning custom events).

Now on to the real problem of implementing a real-time data stream to clients. The problem is that the websocket updates the data and is not able to talk (at least in a good way) to the API service; that, plus the fact that we might want to scale the API, means we should look at a more passive system. My proposal would be to let the API pull the most recent data and send it to the clients directly every so often (we could even filter it so only changed data is sent). This way we have full control over the stream of data (e.g. rate of sending) and it would resolve some of the issues of the old websocket server (e.g. race conditions). Note that this implementation is only necessary for ongoing alerts/events; we can have an initial API call that tells the client whether to get the data from the API or from the websocket (data will always be available from the API, and the websocket can always send an event telling the client to fetch from the API when an alert ends).

I know that the current project is written in PHP and that Laravel/Lumen has been discussed; however, there are some downsides to those, mostly the lack of well-maintained Mongo ODMs and the fact that it can't act as a websocket server.

Before I conclude, there are some choices to make when using Nest JS, as it is pretty agnostic.

  • Webserver: Fastify (more performant than Express, so a no-brainer);
  • Websocket server: Socket.io or ws (ws will be more work, Socket.io will maybe have less performance);
  • Session storage: JWT.

Other services that might be interesting to look at are Kafka and RabbitMQ to handle events across services.

Refactor querying system allowing more flexible querying

We need to slightly refactor our sorting/querying system to allow inclusion of JSON-encoded objects. This should enable us, via some sanity checking, to be extremely flexible in how we let users query for data. We'll simply decode this string and send the result to Mongo as we normally would.

Example:

GET /some/endpoint/1234?filter={instance:"10-37435",facility:{$gt: 7000}}&sort={timeStarted:1}

This will return the result as expected from querying mongo directly.

  • Implement changes to query options parsing the object and then passing that to Mongo
  • Ensure that SQL-esque injection is not possible via this method.
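The "sanity checking" step could be a whitelist over fields and operators, so arbitrary decoded JSON can never smuggle something like $where into the query. A sketch (the allowed fields/operators and strict-JSON input are assumptions for illustration):

```typescript
const ALLOWED_FIELDS = new Set(['instance', 'facility', 'timeStarted']);
const ALLOWED_OPERATORS = new Set(['$gt', '$gte', '$lt', '$lte', '$eq', '$in']);

// Decode the filter query param and keep only whitelisted fields and
// operators; everything else is silently dropped before it reaches Mongo.
function sanitiseFilter(raw: string): Record<string, unknown> {
  const parsed = JSON.parse(raw) as Record<string, unknown>;
  const clean: Record<string, unknown> = {};
  for (const [field, condition] of Object.entries(parsed)) {
    if (!ALLOWED_FIELDS.has(field)) continue; // unknown field: drop
    if (condition && typeof condition === 'object' && !Array.isArray(condition)) {
      // Operator object: keep only whitelisted operators (blocks $where etc.)
      const ops = Object.entries(condition as Record<string, unknown>)
        .filter(([op]) => ALLOWED_OPERATORS.has(op));
      if (ops.length > 0) clean[field] = Object.fromEntries(ops);
    } else {
      clean[field] = condition; // plain equality match
    }
  }
  return clean;
}
```

The example request above would then decode to `{ instance: "10-37435", facility: { $gt: 7000 } }` and pass through untouched, while anything outside the whitelist disappears.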
