
elog-plus's Introduction

ELOG+

Overview

ELOG+ is a Java-based microservice that provides a robust platform for managing log entries and logbooks, along with secure authentication and authorization mechanisms.

Features

ELOG+ offers the following key features:

  • Log Entry Management: Create, update, retrieve, and delete log entries.
  • Logbook Support: Log entries can belong to one or more logbooks.
  • Tagging: Logbooks can define tags for categorizing log entries.
  • Authentication and Authorization:
    • User Authentication: Users can sign up and log in with varying levels of access (read, write, admin) on specific logbooks.
    • Application Token Authentication: Secure access for applications with generated tokens.

Getting Started

The ELOG+ Microservice is a Java backend application designed to manage log entries. Each log entry can belong to one or more logbooks, and each logbook can define one or more tags to categorize and specify the log entries. The application supports user or application token authentication and authorization for securing access. Additionally, it provides fine-grained access control, allowing users to have read, write, and admin authorization on each logbook.

Scalability

Scalability is a critical aspect of modern microservice architectures. The new ELOG+ exposes a stateless REST API, so more than one instance can run in a modern orchestrator, increasing the number of HTTP requests that can be processed.

Prerequisites

ELOG+ runs on a Java virtual machine, version 19 or later. A Dockerfile is provided for building a container image.

Application tokens

ELOG+ provides robust support for external application authentication through the use of custom-managed JSON Web Tokens (JWT). This functionality allows for secure and token-based communication between ELOG+ and third-party applications. To utilize this feature, a cryptographic key must be configured through the ELOG_PLUS_APP_TOKEN_JWT environment variable. To generate a cryptographic key for JWT token verification, you need to execute the following command using OpenSSL:

openssl rand -hex <size> 

Here, `<size>` should be replaced with the desired byte size for the cryptographic key. The generated key must be set in the ELOG_PLUS_APP_TOKEN_JWT environment variable for ELOG+ to use it for JWT token generation and validation. To pre-configure a root token, assign a JSON-formatted string to the ELOG_PLUS_ROOT_AUTHENTICATION_TOKEN_JSON environment variable. The JSON string should include the name and expiration properties for each root token you wish to establish. The following snippet creates a root token named root-token-1 with an expiration date of December 31, 2024:

ELOG_PLUS_ROOT_AUTHENTICATION_TOKEN_JSON: '[{"name":"root-token-1","expiration":"2024-12-31"}]'
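Putting the two settings together, a minimal shell sketch (the 32-byte key size here is an arbitrary choice, not a requirement):

```shell
# Generate a 32-byte signing key (64 hex characters); the size is an arbitrary choice
ELOG_PLUS_APP_TOKEN_JWT=$(openssl rand -hex 32)
export ELOG_PLUS_APP_TOKEN_JWT

# Pre-configure one root token that expires at the end of 2024
export ELOG_PLUS_ROOT_AUTHENTICATION_TOKEN_JSON='[{"name":"root-token-1","expiration":"2024-12-31"}]'
```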

Execute the application in docker

To launch the backend application, simply execute the following command:

docker compose -f docker-compose.yml -f docker-compose-app.yml up --build 

Note: the '--build' flag forces the backend service to be rebuilt every time it is started up (useful when the code has been updated).

This command initiates the Docker Compose setup, which includes spinning up Minio, Kafka, MongoDB, and the backend as a unified Docker Compose application. Once the process is complete, you can access the backend through port 8080.

elog-plus's People

Contributors

bisegni, ocboogie

Watchers

Joshua Guiman

elog-plus's Issues

Implement superseding

  • POST /logs/{id}/supersede takes in the content for a new log, marks log with id id as superseded by the newly created log, and responds with the id of the newly created log.
  • Add superseding information to the log DTO, e.g. "supersededBy": "001".
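As a sketch of how a client might consume the proposed supersededBy field (the chain-following logic is an assumption about intended use, not existing behavior):

```python
# Hypothetical client-side helper: follow the proposed "supersededBy" links
# until reaching an entry that has not been superseded.
def latest_revision(entries_by_id, entry_id):
    entry = entries_by_id[entry_id]
    while entry.get("supersededBy"):
        entry = entries_by_id[entry["supersededBy"]]
    return entry

entries = {
    "001": {"id": "001", "supersededBy": "002"},
    "002": {"id": "002", "supersededBy": "003"},
    "003": {"id": "003"},  # current revision, never superseded
}
```

Here latest_revision(entries, "001") walks 001 → 002 → 003 and returns the current revision.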

Include attachments in log queries

When using GET /logs, include the following field:

[
  {
    ...
    "attachments": [
      {
        "id": "001",
        "mimeType": "image/png",
        "previewable": true,
        "filename": "image.png"
      },
      {
        "id": "002",
        "mimeType": "text/plain",
        "previewable": false,
        "filename": "text.txt"
      }
    ]
    ...
  }
  ...
]

where previewable indicates if there exists a preview for the attachment.

`PUT /logbooks/{logbookId}`

Create the endpoint PUT /logbooks/{logbookId} used for updating logbooks. Preferably, this would work similarly to PUT /logbooks/{logbookId}/shifts with both tags and shifts. For example, suppose there exists a logbook with the following properties

{
  "id": "123",
  "name": "example",
  "tags": [
    {
      "id": "tag-1",
      "name": "Failure"
    },
    {
      "id": "tag-2",
      "name": "Important"
    }
  ],
  "shifts": [
    {
      "id": "shift-1",
      "name": "Morning shift",
      "from": "09:00",
      "to": "17:00"
    },
    {
      "id": "shift-2",
      "name": "Night shift",
      "from": "17:00",
      "to": "01:00"
    }
  ]
}

Then, PUT /logbooks/123 with body

{
  "name": "new example",
  "tags": [
    {
      "id": "tag-1",
      "name": "Failure"
    }
  ],
  "shifts": [
    {
      "id": "shift-1",
      "name": "Morning shift",
      "from": "09:00",
      "to": "17:00"
    }
  ]
}

would remove the tag with id tag-2, remove the shift with id shift-2, and rename the logbook to new example. However, you can also add new tags and shifts with this endpoint. For example, PUT /logbooks/123 with body

{
  "name": "new example",
  "tags": [
    {
      "id": "tag-1",
      "name": "Failure"
    },
    {
      "id": "tag-2",
      "name": "Important"
    },
    {
      "name": "Super important"
    }
  ],
  "shifts": [
    {
      "id": "shift-1",
      "name": "Morning shift",
      "from": "09:00",
      "to": "17:00"
    },
    {
      "id": "shift-2",
      "name": "Night shift",
      "from": "17:00",
      "to": "01:00"
    },
    {
      "name": "Super Late Night shift",
      "from": "01:00",
      "to": "09:00"
    }
  ]
}

would create a new tag, create a new shift, and rename the logbook to "new example".
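The requested merge semantics can be sketched in Python (an illustration of the behavior described above, not the actual server code): items carrying an id update existing records, items without one are created, and existing records absent from the request are deleted.

```python
# Sketch of PUT /logbooks/{logbookId} merge semantics for tags or shifts.
# Assumes every incoming id exists server-side (a real server would 404 otherwise).
def merge_collection(existing, incoming, make_id):
    """Return the collection after applying the proposed PUT semantics."""
    incoming_ids = {item["id"] for item in incoming if item.get("id")}
    # Anything the client no longer sends is deleted implicitly.
    kept = {item["id"]: item for item in existing if item["id"] in incoming_ids}
    result = []
    for item in incoming:
        if item.get("id"):
            updated = dict(kept[item["id"]])
            updated.update(item)          # update existing record in place
            result.append(updated)
        else:
            created = dict(item)
            created["id"] = make_id()     # server assigns a fresh id
            result.append(created)
    return result

existing_tags = [{"id": "tag-1", "name": "Failure"}, {"id": "tag-2", "name": "Important"}]
incoming_tags = [{"id": "tag-1", "name": "Failure"}, {"name": "Super important"}]
merged = merge_collection(existing_tags, incoming_tags, lambda: "tag-3")
# tag-2 is removed, tag-1 is kept, and "Super important" is created with a new id
```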

Internal server error when creating new shift in `PUT /logbooks/{logbookId}`

Sending
{
  "id": "64bff9f3c3bd9f158ebfa2b8",
  "name": "pep",
  "tags": [],
  "shifts": [
    { "name": "Morning shift", "from": "16:09", "to": "17:09", "id": null }
  ]
}

to PUT /logbooks/64bff9f3c3bd9f158ebfa2b8 responded with

{
  "timestamp": "2023-07-27T23:09:36.813+00:00",
  "status": 500,
  "error": "Internal Server Error",
  "path": "/v1/logbooks/64bff9f3c3bd9f158ebfa2b8",
  "java.lang.UnsupportedOperationException": "java.lang.UnsupportedOperationException"
}

and logged

elog_plus-app-1              | java.lang.UnsupportedOperationException: null
elog_plus-app-1              | 	at java.base/java.util.AbstractList.add(AbstractList.java:155) ~[na:na]
elog_plus-app-1              | 	at java.base/java.util.AbstractList.add(AbstractList.java:113) ~[na:na]
elog_plus-app-1              | 	at edu.stanford.slac.elog_plus.service.LogbookService.verifyShiftAndUpdate(LogbookService.java:234) ~[main/:na]
elog_plus-app-1              | 	at edu.stanford.slac.elog_plus.service.LogbookService.update(LogbookService.java:139) ~[main/:na]
elog_plus-app-1              | 	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na]
elog_plus-app-1              | 	at java.base/java.lang.reflect.Method.invoke(Method.java:578) ~[na:na]
...

Give recognizable errors when entries aren't found

I'm implementing the 404 page on the frontend, and the error that GET /logs responds with is not easily recognizable:

{
    "timestamp": "2023-07-17T20:45:41.229+00:00",
    "status": 200,
    "error": "OK",
    "path": "/v1/logs/abc",
    "errorCode": -2,
    "errorMessage": "The log has not been found",
    "errorDomain": "LogService::getFullLog"
}

Simply just making the HTTP status 404 would be enough for me to catch the error correctly.

Include follow ups in `GET /logs/{id}` if the `includeFollowUps` query parameter is set

On the frontend every GET /logs/{id} is immediately followed by GET /logs/{id}/follow-up, so we might as well just respond with that data in the first place. We can use the same DTO as GET /logs. So, GET /logs/649dc6232928d5771e430e66?includeFollowUps=true would respond with

{
    "errorCode": 0,
    "payload": {
        "id": "649dc6232928d5771e430e66",
        "logbook": "ACCEL",
        "tags": [],
        "title": "Entry 1A",
        "text": "Entry 1A body text",
        "author": "Senaida Shields",
        "attachments": [],
        "logDate": "2023-06-29T17:57:55.938",
        "commitDate": "2023-06-29T17:57:55.938",
        "progDate": "2023-06-29T17:57:55.938",
        "followUps": [
            {
                "id": "649dc6332928d5771e430e67",
                "logbook": "ACCEL",
                "title": "Entry 2A",
                "author": "Shantae Mraz",
                "tags": [],
                "attachments": [],
                "logDate": "2023-06-29T17:58:11.372"
            },
            {
                "id": "649dc63e2928d5771e430e68",
                "logbook": "ACCEL",
                "title": "Entry 2B",
                "author": "Cristal O'Connell",
                "tags": [],
                "attachments": [],
                "logDate": "2023-06-29T17:58:22.62"
            }
        ]
    }   
}

This change might not make sense from a general API perspective (i.e., another app using this API may not need this information), so it might be a good idea to include follow ups only if the includeFollowUps query parameter is set.

Attachment files are renamed to `uploadFile`

Example:

POST /api/v1/attachment HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Connection: keep-alive
Content-Length: 192
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryuMYWVgzSljn31C3v
Host: localhost:5173
Origin: http://localhost:5173
Referer: http://localhost:5173/new-entry
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
sec-ch-ua: "Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"

------WebKitFormBoundaryuMYWVgzSljn31C3v
Content-Disposition: form-data; name="uploadFile"; filename="test.txt"
Content-Type: text/plain


------WebKitFormBoundaryuMYWVgzSljn31C3v--

responds with the ID 64a350b57d5da53890b3fff4. Then, after creating the log with the attachment 64a350b57d5da53890b3fff4,

GET /api/v1/logs/64a350be7d5da53890b3fff5?includeFollowUps=true HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Connection: keep-alive
Host: localhost:5173
Referer: http://localhost:5173/64a350be7d5da53890b3fff5
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
sec-ch-ua: "Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"

responds with

{
    "errorCode": 0,
    "payload": {
        "id": "64a350be7d5da53890b3fff5",
        "logbook": "ACCEL",
        "tags": [],
        "title": "test.txt",
        "text": "",
        "author": "Lavonna Daniel",
        "attachments": [
            {
                "id": "64a350b57d5da53890b3fff4",
                "fileName": "uploadFile",
                "contentType": "text/plain",
                "previewState": "PreviewNotAvailable"
            }
        ],
        "followUp": [],
        "logDate": "2023-07-03T22:50:38.319",
        "commitDate": "2023-07-03T22:50:38.319",
        "progDate": "2023-07-03T22:50:38.319"
    }
}

Create `includeHistory` query param for `GET /logs/{id}`

To make something like this (screenshot omitted):
I need an includeHistory query param for the GET /logs/{id} that will then include all past revisions of the log ordered by most recent to oldest. So, the response of GET /logs/64a346687d5da53890b3ffef might look like:

{
  "errorCode": 0,
  "payload": {
    "id": "649dc6232928d5771e430e66",
    "logbook": "ACCEL",
    "tags": [],
    "title": "Entry 1A",
    "text": "Entry 1A body text",
    "author": "Senaida Shields",
    "attachments": [],
    "history": [
      {
        "id": "649dc6332928d5771e430e67",
        "logbook": "ACCEL",
        "tags": [],
        "title": "Entry 2A",
        "text": "Entry 2A body text",
        "author": "Shantae Mraz",
        "attachments": [],
        "logDate": "2023-06-29T17:58:11.372",
        "commitDate": "2023-06-29T17:58:11.372",
        "progDate": "2023-06-29T17:58:11.372"
      },
      {
        "id": "649dc63e2928d5771e430e68",
        "logbook": "ACCEL",
        "tags": [],
        "title": "Entry 2B",
        "text": "Entry 2B body text",
        "author": "Cristal O'Connell",
        "attachments": [],
        "logDate": "2023-06-29T17:58:22.62",
        "commitDate": "2023-06-29T17:58:22.62",
        "progDate": "2023-06-29T17:58:22.62"
      },
      {
        "id": "649dc6ae2928d5771e430e6a",
        "logbook": "ACCEL",
        "tags": [],
        "title": "Entry 2C",
        "text": "Entry 2C body text",
        "author": "Reyes Rippin",
        "attachments": [],
        "logDate": "2023-06-29T18:00:14.173",
        "commitDate": "2023-06-29T18:00:14.173",
        "progDate": "2023-06-29T18:00:14.173"
      },
      {
        "id": "64a6eb56fe834e7a7f6c8152",
        "logbook": "ACCEL",
        "tags": [
          "❤️"
        ],
        "title": "arst",
        "text": "abc",
        "author": "Rachelle West",
        "attachments": [],
        "logDate": "2023-07-06T16:27:02.465",
        "commitDate": "2023-07-06T16:27:02.465",
        "progDate": "2023-07-06T16:27:02.465"
      }
    ],
    "logDate": "2023-06-29T17:57:55.938",
    "commitDate": "2023-06-29T17:57:55.938",
    "progDate": "2023-06-29T17:57:55.938"
  }
}

Suggestions for log querying

  1. Rename GET /v1/search to GET /v1/logs
  2. Remove POST /v1/search as this functionality should be covered by GET /v1/logs
  3. Rename GET /v1/search/parameter to GET /logbooks

What do you think?

Header tags getting stripped

POST /entries with

{
  "title": "Body test",
  "text": "<h1>H1</h1><h2>H2</h2>",
  "logbook": "accel",
  "attachments": [],
  "tags": []
}

Then, getting the entry with GET /entries/{id} responds with

{
    "errorCode": 0,
    "payload": {
        "id": "64bffd6dc3bd9f158ebfa2e3",
        "logbook": "accel",
        "tags": [],
        "title": "Body test",
        "text": "H1H2",
        "loggedBy": "Harry Beahan",
        "attachments": [],
        "followUp": [],
        "loggedAt": "2023-07-25T16:50:53.61",
        "eventAt": "2023-07-25T16:50:53.61"
    }
}

Download attachments

Create endpoint GET /attachments/{id}/download which responds with the original file, or redirects to the file.

Rework `GET /logs` to allow for infinite scroll and spotlighting logs

The response from /logs can just be an array. No need to have other fields (e.g., empty, first, last, etc.).

Update /logs to have the following query parameters:

  • anchor: The id of the log that "anchors" the query
  • logsAfter: How many logs to include after the anchor log
  • logsBefore: How many logs to include, including the anchor, before the anchor log (e.g., if logsBefore=0 then the anchor post wouldn't be included, and if logsBefore=1 and logsAfter=0 only the anchor post would be returned)

Also, this should work with other parameters such as logbook.
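The anchor semantics described above can be sketched as a slice over an ordered result list (`window` is a hypothetical helper name; the ordering is whatever /logs already returns):

```python
# Sketch of the proposed anchor-based pagination for GET /logs.
# logsBefore counts entries up to and including the anchor; logsAfter counts
# entries after it, matching the examples in the issue.
def window(entries, anchor_id, logs_before, logs_after):
    i = next(k for k, e in enumerate(entries) if e["id"] == anchor_id)
    # logsBefore=0 excludes the anchor itself; logsBefore=1 starts at the anchor
    start = i - (logs_before - 1) if logs_before > 0 else i + 1
    return entries[max(start, 0): i + 1 + logs_after]

entries = [{"id": str(n)} for n in range(1, 6)]  # ids "1".."5"
# window(entries, "3", 1, 0) returns only the anchor entry, per the issue's example
```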

Log creation

Add a new API for log creation:

  • POST /logs creates a log and responds with the id of the created log.
    Request body:
    {
        "text": "Beam alignment",
        "title": "Aligned the beam",
        "attachments": ["{attachment_id}"]
    }
    Response:
    {
      "errorCode": 0,
      "errorMessage": "string",
      "errorDomain": "string",
      "payload": "{new_log_id}"
    }
  • POST /logs/{id}/follow-up creates a follow-up log and marks the log with id {id} with the follow-up log id. If there is already a follow-up log, respond with an error (this shouldn't even be possible on the frontend).
    Request body:
    {
        "text": "Beam alignment",
        "title": "Aligned the beam",
        "attachments": ["{attachment_id}"]
    }
    Response:
    {
      "errorCode": 0,
      "errorMessage": "string",
      "errorDomain": "string",
      "payload": "{new_log_id}"
    }

Implement log creation

Suggested endpoints:

  • POST /attachments uploads an attachment, responding with the new attachment's id.
    Request headers: enctype="multipart/form-data"
    Request body: contents of the file
    Response:
    {
      "errorCode": 0,
      "errorMessage": "string",
      "errorDomain": "string",
      "payload": "{attachment id}"
    }
  • POST /logs creates a log and responds with the id of the created log.
    Request body:
    {
        "text": "Beam alignment",
        "title": "Aligned the beam",
        "attachments": ["{attachment_id}"]
    }
    Response:
    {
      "errorCode": 0,
      "errorMessage": "string",
      "errorDomain": "string",
      "payload": "{new_log_id}"
    }
  • POST /logs/{id}/follow-up creates a follow-up log and marks the log with id {id} with the follow-up log id. If there is already a follow-up log, respond with an error (this shouldn't even be possible on the frontend).
    Request body:
    {
        "text": "Beam alignment",
        "title": "Aligned the beam",
        "attachments": ["{attachment_id}"]
    }
    Response:
    {
      "errorCode": 0,
      "errorMessage": "string",
      "errorDomain": "string",
      "payload": "{new_log_id}"
    }

Create the `sortBy` query param for `GET /entries`

Create the sortBy query parameter for GET /entries with two options: loggedAt and eventAt, which will of course order the entries returned by either loggedAt or eventAt respectively. Default to eventAt (which is not the current behavior).

`POST /entries` not accepting `eventAt`

POST /v1/entries with the body

{
  "logbook": "ACCEL",
  "title": "Event at test",
  "text": "",
  "tags": [],
  "attachments": [],
  "eventAt": "2021-07-21T18:27:19.637Z"
}

is successful; however, it is saved as

{
  "id": "64bace23260ade3d85f08d9b",
  "logbook": "ACCEL",
  "title": "Event at test",
  "loggedBy": "Courtney Heller",
  "tags": [],
  "attachments": [],
  "loggedAt": "2023-07-21T18:27:47.474",
  "eventAt": "2023-07-21T18:27:47.474"
}

Author has name "null"

Got this from the backend:

{
	"author": "null Physics",
	"id": "63e5aafbd411b19631db9d96",
	"logDate": "2023-01-30T08:15:09",
	"logbook": "MCC",
	"priority": "NORMAL",
	"title": "Daily BCS Checks - 1/30/2023 - Passed"
}

`POST /entries` always fails

Sending
{
  "title": "New entry",
  "text": "",
  "logbooks": ["64dbf75656dcd74ec2358996"],
  "attachments": [],
  "tags": []
}

to POST /entries where 64dbf75656dcd74ec2358996 is the id of a logbook responds with

{
    "timestamp": "2023-08-16T18:44:11.907+00:00",
    "status": 404,
    "error": "Not Found",
    "path": "/v1/entries",
    "errorCode": -1,
    "errorMessage": "The Entry has not been found",
    "errorDomain": "LogbookService:getLogbookByName"
}

Give a recognizable error when uploading attachments that are too big

Right now, if you try to upload an attachment that's too big, the API responds with an internal server error:

{
    "timestamp": "2023-07-07T18:47:58.846+00:00",
    "status": 500,
    "error": "Internal Server Error",
    "path": "/v1/attachment",
    "org.springframework.web.multipart.MaxUploadSizeExceededException": "org.springframework.web.multipart.MaxUploadSizeExceededException: Maximum upload size exceeded"
}

However, I can't detect from this error that the attachment is too big, which is necessary since we want to display to the user that the attachment is too big.

Add `summarizes` field to entry DTO

Add the summarizes field to the EntryNew DTO and Entry DTO with the following schema:

{
  // ...
  summarizes?: {
    shift: string;
    date: string;
  }
  // ...
}

The whole object itself is optional and identifies the entry as a shift summary (i.e., if an entry has summarizes then it is a summary; otherwise it is a normal entry), with shift being the id of the shift and date being of the format YYYY-MM-DD.
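A sketch of how the proposed field could be validated (the exact validation rules are an assumption based on the description above):

```python
# Hypothetical validation for the proposed `summarizes` field.
from datetime import datetime

def is_summary(entry):
    """An entry is a shift summary iff it carries a `summarizes` object."""
    return "summarizes" in entry

def validate_summarizes(summarizes):
    """Check the proposed shape: exactly `shift` and `date`, date as YYYY-MM-DD."""
    if set(summarizes) != {"shift", "date"}:
        return False
    try:
        datetime.strptime(summarizes["date"], "%Y-%m-%d")
    except ValueError:
        return False
    return bool(summarizes["shift"])  # shift id must be non-empty
```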

Provide logbook information on `GET /tags`

Add the logbook id to each tag. For example,

[
  {
      "id": "cb3b6c3c-5a7f-4aae-94e7-e99fc79c15b7",
      "name": "newest-tag"
  },
  {
      "id": "696c3a53-d08d-4ca4-972c-eac8835b74c1",
      "name": "newest-tag"
  },
  {
      "id": "1ac0e81a-0ec7-4fcd-98ae-71bdd8aa24fb",
      "name": "newest-tag"
  }
]

would be

[
  {
      "id": "cb3b6c3c-5a7f-4aae-94e7-e99fc79c15b7",
      "name": "newest-tag",
      "logbook": "cb3b6c3c-5a7f-4aae-94e7-e99fc79c15b7"
  },
  {
      "id": "696c3a53-d08d-4ca4-972c-eac8835b74c1",
      "name": "newest-tag",
      "logbook": "cb3b6c3c-5a7f-4aae-94e7-e99fc79c15b8"
  },
  {
      "id": "1ac0e81a-0ec7-4fcd-98ae-71bdd8aa24fb",
      "name": "newest-tag",
      "logbook": "cb3b6c3c-5a7f-4aae-94e7-e99fc79c15b9"
  }
]

or could use the key logbookId. Either is fine with me.

Add `includeFollowingUp` query param to `GET /logs/{id}`

If includeFollowingUp is true and the entry with id id is a follow up, then GET /logs/{id} also includes the entry that it follows up:

{
    "errorCode": 0,
    "payload": {
        "id": "64ac93f14b66e64fe7570419",
        "logbook": "ACCEL",
        "tags": [],
        "title": "This is a follow up to `not a follow up`",
        "text": "",
        "author": "Pierre Weissnat",
        "attachments": [],
        "followUp": [],
        "followingUp": {
          // ...
          "title": "not a follow up"
          // ...
        },
        "logDate": "2023-07-10T23:27:45.341",
        "commitDate": "2023-07-10T23:27:45.341",
        "progDate": "2023-07-10T23:27:45.341"
    }
}

This can use the same DTO as followUp and supersede.

Implement follow up logs

Suggested endpoints:

  • GET /logs/{id}/follow-ups responds with all the follow ups to log with id id in the same format as /logs
  • POST /logs/{id}/follow-ups takes in the content for a log (e.g., title, body content, attachments, etc.) and responds with the id of the newly created follow up log.

Also, add follow up information to the log DTO such as:

{
  ...
  "followUps": [
    "001",
    "002"
  ]
  ...
}

Logbook tags and shifts

"logbooks": [
	{
		"name": "ACCEL",
		"shifts": [
			{	
				"id": "id-1",
				"name": "morning",
				"from": "8am",
				"to": "12pm"
			}
		],
		"tags": [
			{
				"id": "id-1",
				"name": "tag-1",
				"logbook": "ACCEL",
				"default-color": "red"
			},
			{
				"id": "id-2",
				"name": "tag-2",
				"logbook": "PEP",
				"default-color": "red"
			}
		]
	}
]

Validate attachment ids in `POST /logs`

Sending
{
  "logbook": "ACCEL",
  "title": "Title",
  "text": "Body text",
  "tags": [],
  "attachments": [
    null
  ]
}

to POST /logs successfully creates the log. Then, for example, GET logs/64ac4ca8da65e6306818036d responds with

{
  "timestamp": "2023-07-10T18:23:50.678+00:00",
  "status": 200,
  "error": "OK",
  "path": "/v1/logs/64ac4ca8da65e6306818036d",
  "errorCode": -1,
  "errorMessage": "The given id must not be null",
  "errorDomain": "AttachmentService::getAttachment"
}

Missing names in `/v1/search`

Missing this information (screenshot omitted):
What I get from the backend:

{
  id: "63e5aafbd411b19631db9d47",
  entryType: "LOGENTRY",
  filePs: null,
  filePreview: null,
  logbook: "FACILITIES",
  priority: "NORMAL",
  segment: null,
  tags: null,
  title:
    "OFF NORMAL - B033 Shannon Harvey called to informed that a ckt #12 has no power outside of the clean room.",
  text: "Shannon said the panel # is B033PB1001 but the ckt #12 was still on. She will check back to see if this needs to be addressed today or possibly tomorrow.",
  logDate: "2023-01-30T23:17:17",
  commitDate: "2023-01-30T23:17:23",
  progDate: "2023-01-30T23:17:17",
},

Add a machine recognizable field `reason` to BAD REQUEST errors

Supersedes #45. For example, PUT /logbooks/{logbookId} and PUT /logbooks/{logbookId}/shifts might respond with 400 when there are overlapping shifts. However, I can't tell programmatically whether the error was caused by overlapping shifts or some other validation error. So, add a field reason that carries a code identifying the kind of error; in this case, for example, shiftOverlapping or something similar.
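A sketch of what such an error payload could look like (the `bad_request` helper and exact field set are hypothetical; `shiftOverlapping` is the example code from above):

```python
# Hypothetical builder for a 400 response carrying a machine-recognizable reason.
def bad_request(reason, message):
    return {
        "status": 400,
        "error": "Bad Request",
        "reason": reason,         # stable code the frontend can branch on
        "errorMessage": message,  # human-readable detail, free to change
    }

resp = bad_request("shiftOverlapping", "Shift 'Morning shift' overlaps 'Night shift'")
# The frontend branches on resp["reason"] instead of parsing the message text.
```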

Implement previews

Suggested endpoint: GET /attachments/{id}/preview.webp or GET /attachments/{id}/preview.jpeg.
I would prefer webp as it is smaller and widely supported, however if that's not possible, jpeg would also work.

Why is `/v1/search` POST?

I would expect it to be GET, as it does not create any new data and is idempotent (as far as I can tell).

Rework API design

In the process of thinking over how we would implement the new features discussed in the meeting, I created an OpenAPI spec with a lot of changes from our current design. Some changes are personal API design recommendations not directly related to the new features (such as renaming /logs to /entries), and some are changes I need in order to implement the new features. It should, however, have feature parity with the current design (i.e., everything possible in the current design should be possible with this design). We can discuss it more on Wednesday, as there are a good number of changes. Here's the spec, which you can render with Swagger Editor:

openapi: 3.0.3
info:
  title: ""
  description: ""
  version: 1.0.11
tags:
  - name: logbooks
    description: ""
  - name: tags
    description: ""
  - name: entries
    description: ""
  - name: attachments
    description: ""
paths:
  /logbooks:
    get:
      tags:
        - logbooks
      summary: Get all logbooks
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Logbook'
    post:
      tags:
        - logbooks
      summary: Create a new logbook (only allowed by admins)
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/LogbookNew'
      responses:
        '201':
          description: Successfully created new logbook
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Logbook'
  /tags:
    get:
      tags:
        - tags
      summary: Get all tags
      parameters:
        - in: query
          name: logbooks
          schema:
            type: array
            example: ["logbookId-1", "logbookId-2"]
            items:
              type: string
              format: logbookId
          required: false
          description: Only include tags in these logbooks
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Tag'
    post:
      tags:
        - tags
      summary: Create a new tag
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/TagNew'
      responses:
        '201':
          description: Tag created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Tag'
        '400':
          description: Invalid request payload
        '404':
          description: Logbook not found
  /entries:
    get:
      tags:
        - entries
      summary: Query entries
      parameters:
        - in: query
          name: startDate
          schema:
            type: string
            format: ISO 8601
            default: current time
          required: false
          description: Only include entries after this date. Defaults to current time.
        - in: query
          name: endDate
          schema:
            type: string
            format: ISO 8601
          required: false
          description: Only include entries before this date. If not supplied, then does not apply any filter
        - in: query
          name: contextSize
          schema:
            type: integer
          description: Include this number of entries before the startDate (used for highlighting entries)
        - in: query
          name: limit
          required: true
          schema:
            type: integer
          description: Limit the number of entries after the start date.

        - in: query
          name: search
          schema:
            type: string
          description: Typical search functionality
        - in: query
          name: tags
          schema:
            type: array
            items: 
              type: string
              format: tagId
          description: Only include entries that use one of these tags
        - in: query
          name: logbooks
          schema:
            type: array
            items: 
              type: string
              format: logbookId
          description: Only include entries that belong to one of these logbooks
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/EntrySummary'
    post:
      tags:
        - entries
      summary: Create new entry. 
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/EntryNew'
        required: true
      responses:
        '201':
          description: Successfully created new entry
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Entry'
        '404':
          description: Entry not found, if following up or superseding
        '409':
          description: Entry already superseded

  /entries/{entryId}:
    get:
      tags:
        - entries
      summary: Get entry
      parameters:
        - name: entryId
          in: path
          description: ID of the entry to return
          required: true
          schema:
            type: string
            format: entryId
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Entry'
        '404':
          description: Entry not found
  /entries/{entryId}/supersede:
    post:
      tags:
        - entries
      summary: Supersede entry with id `entryId`. 
      parameters:
        - name: entryId
          in: path
          description: ID of the entry to supersede
          required: true
          schema:
            type: string
            format: entryId
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/EntryNew'
        required: true
      responses:
        '201':
          description: Successfully superseded entry
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Entry'
        '409':
          description: Entry already superseded
        '404':
          description: Entry not found
  /entries/{entryId}/follow-ups:
    post:
      tags:
        - entries
      summary: Follow up entry with id `entryId`
      parameters:
        - name: entryId
          in: path
          description: ID of the entry to follow up
          required: true
          schema:
            type: string
            format: entryId
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/EntryNew'
        required: true
      responses:
        '201':
          description: Successfully followed up entry
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Entry'
        '404':
          description: Entry not found

  /attachments:
    post:
      summary: Upload an attachment
      tags:
        - attachments
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                file:
                  type: string
                  format: binary
      responses:
        '201':
          description: Attachment uploaded successfully
        '400':
          description: Invalid request payload

  /attachments/{attachmentId}/preview.jpg:
    get:
      summary: Get the preview image of the attachment
      tags:
        - attachments
      parameters:
        - in: path
          name: attachmentId
          schema:
            type: string
          required: true
      responses:
        '200':
          description: Preview image found and returned
          content:
            image/jpeg:
              schema:
                type: string
                format: binary
        '404':
          description: Preview image not found

  /attachments/{attachmentId}/download:
    get:
      summary: Download the full attachment
      tags:
        - attachments
      parameters:
        - in: path
          name: attachmentId
          schema:
            type: string
          required: true
      responses:
        '200':
          description: Attachment file found and returned
          content:
            '*/*':
              schema:
                type: string
                format: binary
        '404':
          description: Attachment file not found

components:
  schemas:
    Logbook:
      type: object
      required: ["id", "name"]
      properties:
        id:
          type: string
          format: logbookId
          example: logbookId
        name:
          type: string
          example: ACCEL
    LogbookNew:
      type: object
      required: ["name"]
      properties:
        name:
          type: string
          example: ACCEL
    Tag:
      type: object
      required: ["id", "name", "logbook"]
      properties:
        id:
          type: string
          format: tagId
          example: tagId
        name:
          type: string
          example: ACCEL
        logbook:
          type: string
          format: logbookId
          example: logbookId
    TagNew:
      type: object
      required: ["name", "logbook"]
      properties:
        name:
          type: string
          example: ACCEL
        logbook:
          type: string
          format: logbookId
          example: logbookId
    EntrySummary:
      type: object
      required: ["id", "logbook", "tags", "title", "loggedBy", "loggedAt"]
      properties:
        id:
          type: string
          format: entryId
          example: entryId
        logbook:
          type: string
          format: logbookId
          example: logbookId
        tags:
          type: array
          items:
            type: string
            format: tagId
          example: [tagId-1, tagId-2]
        title:
          type: string
          example: Entry title!
        loggedBy:
          type: string
          example: Boogie Mikulec
        loggedAt:
          type: string
          format: ISO 8601
          example: 2020-07-10T15:00:00.000Z
        eventAt:
          type: string
          format: ISO 8601
          example: 2020-07-10T15:00:00.000Z
    Entry:
      type: object
      required: ["id", "logbook", "tags", "title", "loggedBy", "loggedAt", "eventAt", "body", "attachments"]
      properties:
        id:
          type: string
          format: entryId
          example: entryId
        logbook:
          type: string
          format: logbookId
          example: logbookId
        tags:
          type: array
          items:
            type: string
            format: tagId
          example: [tagId-1, tagId-2]
        title:
          type: string
          example: Entry title!
        loggedBy:
          type: string
          example: Boogie Mikulec
        loggedAt:
          type: string
          format: ISO 8601
          example: 2020-07-10T15:00:00.000Z
        eventAt:
          type: string
          format: ISO 8601
          example: 2020-07-10T15:00:00.000Z
        supersededBy:
          type: string
          format: entryId
          example: entryId
        body:
          type: string
          format: HTML
          example: "<strong>this is the body</strong>"
        attachments:
          type: array
          items:
            $ref: '#/components/schemas/Attachment'
        followsUp:
          $ref: '#/components/schemas/EntrySummary'
        followUps:
          type: array
          items:
            $ref: '#/components/schemas/EntrySummary'
        history:
          type: array
          items:
            $ref: '#/components/schemas/EntrySummary'
    EntryNew:
      type: object
      required: ["title", "logbook", "body", "tags", "attachments"]
      properties:
        title:
          type: string
          example: Entry title!
        logbook:
          type: string
          format: logbookId
          example: logbookId
        tags:
          type: array
          items:
            type: string
            format: tagId
          example: [tagId-1, tagId-2]
        body:
          type: string
          format: HTML
          example: "<strong>this is the body</strong>"
        attachments:
          type: array
          items:
            type: string
            format: attachmentId
          example: [attachmentId-1, attachmentId-2]
        eventAt:
          type: string
          format: ISO 8601
          example: 2020-07-10T15:00:00.000Z
    Attachment:
      type: object
      required: ["id", "filename", "mimeType", "previewState"]
      properties:
        id:
          type: string
          format: attachmentId
          example: attachmentId
        filename:
          type: string
          example: figure.png
        mimeType:
          type: string
          format: mime-type
          example: image/png
        previewState:
          type: string
          enum: ["na", "processing", "done"]

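To illustrate the `EntryNew` schema above, here is a minimal Python sketch that builds a request payload for `POST /entries` (the helper name is hypothetical; the field names and required set come from the spec, and all ids are placeholders):

```python
import json

def make_entry_new(logbook, title, body, tags=None, attachments=None, event_at=None):
    """Build an EntryNew payload; required fields per the spec are
    title, logbook, body, tags, and attachments."""
    payload = {
        "logbook": logbook,            # logbookId the entry belongs to
        "title": title,
        "body": body,                  # HTML-formatted string
        "tags": tags or [],            # list of tagIds
        "attachments": attachments or []  # list of attachmentIds
    }
    if event_at is not None:
        payload["eventAt"] = event_at  # optional ISO 8601 timestamp
    return payload

entry = make_entry_new(
    logbook="logbookId",
    title="Entry title!",
    body="<strong>this is the body</strong>",
    tags=["tagId-1"],
)
print(json.dumps(entry, indent=2))
```

A successful `POST /entries` with this body returns `201` and the full `Entry` representation.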
Create the `orderBy` query parameter for `GET /entries`

We might need to discuss this one, because there may be some odd interactions with `startDate` and `endDate`. Create the `orderBy` query parameter with two options, `desc` and `asc`, which order the entries descending or ascending respectively. Default to `desc` (the current behavior).

Add the `hideSummaries` query param to `GET /entries`

Add the `hideSummaries` query param to `GET /entries`: when `true`, hide all summaries; when `false`, include the summaries as if they were normal entries. Default to `false` (i.e., `GET /entries` should include summaries by default).
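Both proposed parameters are plain query-string additions to `GET /entries`. A minimal client-side sketch of how they might be combined with the date filters (the helper name and the `startDate`/`endDate` keys are assumptions for illustration):

```python
from urllib.parse import urlencode

def entries_url(base, order_by="desc", hide_summaries=False, **filters):
    """Build a GET /entries URL with the proposed query parameters.
    Defaults match the behavior described above: orderBy=desc,
    hideSummaries=false."""
    params = {
        "orderBy": order_by,
        "hideSummaries": str(hide_summaries).lower(),  # serialize bool as true/false
    }
    params.update(filters)  # e.g. startDate=..., endDate=...
    return f"{base}/entries?{urlencode(params)}"

print(entries_url("https://elog.example.org", order_by="asc"))
```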

Create mini previews

Suggested endpoint: `GET /attachments/{id}/previewMini.jpeg`. The images are rendered at 32x32.
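One detail a mini-preview implementation has to decide is how an arbitrary image fits the 32x32 target. A small sketch of aspect-ratio-preserving sizing (the function name is hypothetical; only the 32x32 target comes from the note above):

```python
def contain_size(width, height, box=32):
    """Scale (width, height) to fit inside a box-by-box square,
    preserving aspect ratio; the longer side ends up equal to box."""
    if width <= 0 or height <= 0:
        raise ValueError("dimensions must be positive")
    scale = box / max(width, height)
    # Never let a side collapse to zero pixels after rounding.
    return max(1, round(width * scale)), max(1, round(height * scale))

print(contain_size(640, 480))  # landscape image scaled into a 32x32 box
```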
