h4ad / serverless-adapter

Run REST APIs and other web applications using your existing Node.js application framework (NestJS, Express, Koa, tRPC, Fastify and many others), on top of AWS, Azure, Huawei and many other clouds.

Home Page: https://serverless-adapter.viniciusl.com.br/

License: MIT License

Shell 0.02% TypeScript 77.11% CSS 0.46% MDX 22.41%
serverless aws aws-lambda sqs sns huawei fastify lambda nodejs hapi koa trpc azure azure-functions firebase firebase-functions deepkit digital-ocean

serverless-adapter's Introduction

🚀 Serverless Adapter

Install   |    Usage   |    Support   |    Examples   |    Benchmark   |    Architecture   |    Credits


Run REST APIs and other web applications using your existing Node.js application framework (NestJS, Deepkit, Express, Koa, Hapi, Fastify, tRPC and Apollo Server), on top of AWS Lambda, Azure, Digital Ocean and many other clouds.

This library is a refactored version of @vendia/serverless-express; I created a new way to interact with and extend event sources by introducing contracts that abstract the integrations between each layer of the library.

Why would you use this library instead of @vendia/serverless-express?

  • Better APIs for extending the library's functionality.
    • You don't need me to release a new version to support a new event source; you can create an adapter yourself and just call the addAdapter method when building your handler.
  • All of the code can be extended; if you want to modify the current behavior, you can.
    • This matters because if you find a bug, you can quickly work around it by extending the class, and then submit a PR with the fix.
  • All code is written in TypeScript.
  • Well documented: every method, class, and interface has comments explaining its behavior.
  • We have >99% test coverage.

Installing

To get started, install the library:

npm i --save @h4ad/serverless-adapter

Usage

To start using the library, you first need to know what to import; let's begin with the ServerlessAdapter.

import { ServerlessAdapter } from '@h4ad/serverless-adapter';

We need to pass the instance of your API to the Serverless Adapter; let's look at an example with Express:

import { ServerlessAdapter } from '@h4ad/serverless-adapter';
import { ExpressFramework } from '@h4ad/serverless-adapter/lib/frameworks/express';
import { DefaultHandler } from '@h4ad/serverless-adapter/lib/handlers/default';
import { PromiseResolver } from '@h4ad/serverless-adapter/lib/resolvers/promise';
import { ApiGatewayV2Adapter } from '@h4ad/serverless-adapter/lib/adapters/aws';

import express from 'express';

const app = express();
export const handler = ServerlessAdapter.new(app)
  .setFramework(new ExpressFramework())
  .setHandler(new DefaultHandler())
  .setResolver(new PromiseResolver())
  .addAdapter(new ApiGatewayV2Adapter())
  // if you need more adapters
  // just append more `addAdapter` calls
  .build();

Documentation

See how to use this library here.

Breaking Changes

I will not consider updating or breaking compatibility with a Node.js framework to be a breaking change of this library: it supports a lot of frameworks, and releasing a major version for each one would be a mess.

So if you want predictability, pin the version with ~ instead of ^.
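For example, a tilde range in package.json (the version number here is only illustrative):

```json
{
  "dependencies": {
    "@h4ad/serverless-adapter": "~4.2.0"
  }
}
```

With `~`, `npm update` only picks up patch releases (4.2.x), so a framework-compatibility change shipped in a minor version cannot surprise you.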

Examples

You can see some examples of how to use this library here.

Benchmark

See the speed comparison between other libraries that have the same purpose in the Benchmark Section.

Credits

Honestly, I just refactored all the code that the @vendia team and many other contributors wrote, thanks so much to them for existing and giving us a brilliant library that is the core of my current company.

Sponsors

serverless-adapter's People

Contributors

dependabot[bot] · glasser · h4ad · leonardodimarchi · semantic-release-bot · stackia · tobloef


serverless-adapter's Issues

Add body-parser support

Feature Request

Is your feature request related to a problem? Please describe.
As CorsFramework did, I want to create something like this for body-parser so that I can create a better version of ApolloFramework that doesn't rely on internal logic to process buffers.

Describe the solution you'd like
Currently I only see the body-parser solution, but I need more time to find other possible solutions.
The initial idea is to create a base body-parser framework and reuse it to create BodyParserJsonFramework and so on.
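The wrapper idea above can be sketched as follows; the interfaces here are hypothetical stand-ins for the library's real framework contract, kept minimal for illustration:

```typescript
// Sketch only: MinimalRequest and FrameworkLike are hypothetical, not the
// library's actual contracts.
interface MinimalRequest {
  headers: Record<string, string>;
  body?: unknown;
}

interface FrameworkLike<TApp> {
  sendRequest(app: TApp, request: MinimalRequest): void;
}

// Wraps any framework and parses a JSON string body before delegating,
// in the same composition style as CorsFramework.
class BodyParserJsonFramework<TApp> implements FrameworkLike<TApp> {
  constructor(private readonly framework: FrameworkLike<TApp>) {}

  sendRequest(app: TApp, request: MinimalRequest): void {
    const contentType = request.headers['content-type'] ?? '';

    // Only touch the body when it is an unparsed JSON string.
    if (typeof request.body === 'string' && contentType.includes('application/json')) {
      request = { ...request, body: JSON.parse(request.body) };
    }

    this.framework.sendRequest(app, request);
  }
}
```

A BodyParserUrlEncodedFramework and friends would then only differ in the content type they match and the parse function they apply.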

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

LazyFramework should rethrow exceptions thrown by the factory function

Current Behavior

Based on the LazyFramework sample, I used the factory function to initialize the OpenID Connect client.

import { ServerlessAdapter } from '@h4ad/serverless-adapter';
import { AlbAdapter, ApiGatewayV1Adapter } from '@h4ad/serverless-adapter/lib/adapters/aws';
import { FastifyFramework } from '@h4ad/serverless-adapter/lib/frameworks/fastify';
import { LazyFramework } from '@h4ad/serverless-adapter/lib/frameworks/lazy';
import { DefaultHandler } from '@h4ad/serverless-adapter/lib/handlers/default';
import { PromiseResolver } from '@h4ad/serverless-adapter/lib/resolvers/promise';
import app from '@libs/presentations/app';
import { InitializerUtil } from '@utils/initializer';

async function bootstrap() {
    await InitializerUtil.initialize(); // <-- Initialize OpenID Client
    return app;
}

const fastify = new FastifyFramework();
const framework = new LazyFramework(fastify, bootstrap);

export const main = ServerlessAdapter.new(null)
    .setFramework(framework)
    .setHandler(new DefaultHandler())
    .setResolver(new PromiseResolver())
    .addAdapter(process.env.IS_OFFLINE ? new ApiGatewayV1Adapter() : new AlbAdapter())
    .build();

However, in rare cases, a timeout exception may occur due to poor connectivity to the OIDC provider.

RPError: outgoing request timed out after 3500ms
    at /opt/nodejs/node_modules/openid-client/lib/helpers/request.js:137:13
    at runNextTicks (node:internal/process/task_queues:60:5)
    at process.processTimers (node:internal/timers:509:9)
    at async Issuer.discover (/opt/nodejs/node_modules/openid-client/lib/issuer.js:171:22)
    at Object.init (/src/libs/gateways/openid.ts:16:20)
    at async Promise.all (index 1)
    at Object.initialize (/src/utils/initializer.ts:5:5)
    at bootstrap (/src/functions/authentication-api/handler.ts:11:5)

In this case, Lambda's Init phase should fail, but because LazyFramework catches the exception, the Init phase is treated as successful.

https://github.com/H4ad/serverless-adapter/blob/main/src/frameworks/lazy/lazy.framework.ts#L55

Expected Behavior

Lambda's Init phase should fail if an exception occurs within the factory function.
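The expected behavior can be sketched like this; LazyInit and its Factory type are hypothetical, not the library's actual classes — the point is only that the factory's promise, including a rejection, is cached and propagated rather than swallowed:

```typescript
type Factory<T> = () => Promise<T>;

class LazyInit<T> {
  private cached?: Promise<T>;

  constructor(private readonly factory: Factory<T>) {}

  // The factory runs at most once; the same promise (fulfilled or rejected)
  // is handed to every caller, so an initialization error is rethrown to
  // whoever awaits it instead of being hidden by a catch.
  get(): Promise<T> {
    if (!this.cached) this.cached = this.factory();
    return this.cached;
  }
}
```

If the handler awaits `get()` on every invocation, a timeout thrown while discovering the OIDC issuer rejects the invocation (and the Init phase, when called there) instead of leaving the framework half-initialized.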

Environment

  • Version: 2.17.0
  • Platform: Linux (WSL2 Ubuntu 20.04)
  • Node.js Version: 18.14.2

Add Docs

Feature Request

Add better documentation to the library with packages like:

Is your feature request related to a problem? Please describe.

Currently, we only have README.md as documentation.

Describe the solution you'd like

Implement one of those libraries to generate the documentation automatically:

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Azure Function Adapter: Cannot serve static image

Hello,

I was trying to serve a React SPA using NestJS + FastifyAdapter + Azure/HttpTriggerV4Adapter.

// main.ts
import { NestFactory } from '@nestjs/core';
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify';
import { LazyFramework } from '@h4ad/serverless-adapter/lib/frameworks/lazy';
import { FastifyFramework } from '@h4ad/serverless-adapter/lib/frameworks/fastify';
import { ServerlessAdapter } from '@h4ad/serverless-adapter';
import { AzureHandler } from '@h4ad/serverless-adapter/lib/handlers/azure';
import { PromiseResolver } from '@h4ad/serverless-adapter/lib/resolvers/promise';
import { AppModule } from './app/app.module';
import { HttpTriggerV4Adapter } from './temp/temp';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(AppModule, new FastifyAdapter());
  await app.init();
  return app.getHttpAdapter().getInstance();
}

const fastifyFramework = new FastifyFramework();
const framework = new LazyFramework(fastifyFramework, bootstrap);

export const handler = ServerlessAdapter.new(null)
  .setFramework(framework)
  .setHandler(new AzureHandler({
    useContextLogWhenInternalLogger: false
  }))
  .setResolver(new PromiseResolver())
  .addAdapter(new HttpTriggerV4Adapter())
  .build();
// app.module.ts
import { Module } from '@nestjs/common';
import { ServeStaticModule } from '@nestjs/serve-static';
import { join } from 'path';

@Module({
  imports: [
    ServeStaticModule.forRoot({
      rootPath: join(__dirname, '..', 'client', 'build')
    }),
  ],
  controllers: [],
  providers: [],
})
export class AppModule {}

While the index.html along with the .css and .js were served correctly, the images failed to load with the following errors:

  • http://localhost:7071/static/media/logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)
  • GET http://localhost:7071/favicon.ico net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)

I copied azure/http-trigger-v4.adapter.ts locally in order to debug, and I found that when the response content-type was image/..., the body was a base64 string.

If you convert the base64 body to a Uint8Array, it works fine.
I added the following temporary change:

// azure/http-trigger-v4.adapter.ts
public getResponse({
  body,
  isBase64Encoded,
  statusCode,
  headers: originalHeaders,
}: GetResponseAdapterProps<HttpRequest>): HttpResponseSimple {
  const headers = getFlattenedHeadersMap(originalHeaders, ',', true);
  const cookies = this.getAzureCookiesFromHeaders(originalHeaders);

  if (headers['set-cookie']) delete headers['set-cookie'];

  let test: any = body;
  // guard against a missing content-type header
  if ((headers['content-type'] ?? '').startsWith('image')) {
    test = Uint8Array.from(atob(body), c => c.charCodeAt(0));
  }

  return {
    body: test,
    statusCode,
    headers,

    // I tried to understand this property with
    // https://docs.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/content-negotiation
    // but I don't know if it's worth implementing this guy as an option
    // I found out when this guy is set to true and the framework sets content-type, azure returns 500
    // So I'll leave it as is and hope no one has any problems.
    enableContentNegotiation: false,
    cookies,
  };
}

It would be nice if there was a fix not only for images but also for other binary types.

`Transfer-Encoding: chunked` response support

Feature Request

Remove chunked encoding from responses instead of letting the encoding pass through and having a content-length calculated that includes the encoding.

Is your feature request related to a problem? Please describe.

When curl'd via an AWS ALB, the route below returns the result below, which includes the chunked response encoding control bytes in the response payload.

< HTTP/2 200 
< server: awselb/2.0
< date: Mon, 01 Jan 2024 05:58:45 GMT
< content-type: text/plain
< content-length: 65
< x-powered-by: Express
< 
19
INITIAL PAYLOAD RESPONSE

17
FINAL PAYLOAD RESPONSE

0

Express Route:

app.get("/chunked-response", async (req, res) => {
  // Send headers right away
  res.setHeader("Content-Type", "text/plain");
  res.setHeader("Transfer-Encoding", "chunked");
  res.status(200);

  // Send initial payload
  res.write("INITIAL PAYLOAD RESPONSE\n");

  // Wait for 5 seconds
  await sleep(5000);

  // Send final payload and close out the response
  res.end("FINAL PAYLOAD RESPONSE\n");
});

Describe the solution you'd like

I don't believe an ALB invoking a Lambda supports Transfer-Encoding: chunked at all, but there are cases where code writes the response as chunked because it doesn't know the content length ahead of time. Yes, in that case the entire response is buffered within the Lambda anyway, so the length ends up being known.

It would be beneficial if the Transfer-Encoding: chunked header were stripped and the control characters removed before the response body is returned, or, at the very least, a nasty exception were thrown when a Transfer-Encoding: chunked response header is encountered.
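Stripping the framing could work roughly like this; decodeChunkedBody is a hypothetical helper, not part of the library, shown only to illustrate removing the chunk-size lines and CRLF delimiters from an already-buffered body:

```typescript
// Strip HTTP/1.1 chunked transfer-coding framing from a fully buffered body,
// leaving only the payload bytes. Assumes the input really is chunk-framed.
function decodeChunkedBody(raw: string): string {
  let out = '';
  let i = 0;
  while (i < raw.length) {
    const lineEnd = raw.indexOf('\r\n', i);
    if (lineEnd === -1) break;
    // Chunk-size line is hex; parseInt also tolerates ";ext" chunk extensions.
    const size = parseInt(raw.slice(i, lineEnd), 16);
    if (Number.isNaN(size) || size === 0) break; // terminal "0" chunk
    const start = lineEnd + 2;
    out += raw.slice(start, start + size);
    i = start + size + 2; // skip the CRLF that trails each chunk's data
  }
  return out;
}
```

Running this over the ALB payload from the curl output above would yield the two payload lines without the `19`/`17`/`0` size markers.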

Describe alternatives you've considered

Not using Transfer-Encoding: chunked... however, the specific case I'm handling is a comparison test of when the first bytes of a response arrive via this invocation method versus another method I'm working on, so I have to use chunked if I want the Express code to be comparable.

Are you willing to resolve this issue by submitting a Pull Request?

Potentially.

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Adapter - AWS API Gateway Websocket

Feature Request

Is your feature request related to a problem? Please describe.

A few days ago I implemented chat functionality in my API using API Gateway WebSocket from AWS. To do that, I created an adapter (to forward the request context to my application) by extending the AWS simple adapter (just like the SQS adapter) and creating the required interfaces.

It would be nice to have this adapter available in the library 🚀

Describe the solution you'd like

Create an adapter to handle API Gateway WebSocket requests, maybe a more robust one than mine (which only forwards the request context).

Describe alternatives you've considered

  • Custom adapter extending aws simple adapter to forward the request context to my application

Are you willing to resolve this issue by submitting a Pull Request?

Right now I don't have the time for it, but it would be awesome to contribute!

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Export CJS and ESM Lib

Feature Request

Is your feature request related to a problem? Please describe.

I have a serverless setup with type: "module" and no bundler like Vite or webpack (I use serverless-jetpack to upload only the necessary files). With the current output config of Vite, the lib is CJS-only and cannot be imported as described in the docs under my configuration.
Because of that, I have no choice but to import the default export, for example:
import awsPkg from "@h4ad/serverless-adapter/lib/adapters/aws/index.js";
It's not very convenient.

Describe the solution you'd like
It would be great if the lib were exported as both CJS and ESM, with appropriate resolution in package.json. It shouldn't impact CJS users and would benefit ESM users.
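A dual CJS/ESM package is usually expressed with a conditional exports map in package.json; the file paths below are illustrative, not the library's actual layout:

```json
{
  "name": "@h4ad/serverless-adapter",
  "exports": {
    ".": {
      "import": "./lib/index.mjs",
      "require": "./lib/index.cjs",
      "types": "./lib/index.d.ts"
    }
  }
}
```

With such a map, Node resolves `import { ServerlessAdapter } from '@h4ad/serverless-adapter'` to the ESM build under `type: "module"` projects and the CJS build under `require`, so neither group of users needs the default-export workaround.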

Are you willing to resolve this issue by submitting a Pull Request?

I can try to do it, but I'd prefer to wait for some comments on this request in case I missed something.

Support Lambda@Edge

Does anyone know if this supports Lambda@Edge out of the box?

I think the event structures are different from API Gateway Lambda events, and I'm not even sure about Lambda HTTP connect.

Yeah, I'd be interested in some feedback regarding this. Thanks.

Roadmap to V4?

This is the milestone for being able to update this library to V4.

Goals:

  • Split each handler, adapter and framework into its own package.
  • Be more independent from Node.js, so you can easily migrate from Node.js to Deno or another environment such as Bun or Cloudflare Workers.

About dividing the library into several packages, we will have:

  • @h4ad/serverless-adapter: The core library which contains all contracts and core functionality.
  • @h4ad/serverless-adapter-aws: Contains all AWS Adapters and Handler.
  • @h4ad/serverless-adapter-azure: Which contains all Azure adapters and handler.
  • @h4ad/serverless-adapter-firebase: Contains all Firebase Adapters and Handlers.
  • @h4ad/serverless-adapter-huawei: Contains all Huawei adapters and handlers.
  • @h4ad/serverless-adapter-gcp: Contains all GCP adapters and handlers.
  • @h4ad/serverless-adapter-digital-ocean: Contains all Digital Ocean Adapters and Handlers.
  • @h4ad/serverless-adapter-express: Contains the express framework.
  • @h4ad/serverless-adapter-deepkit: Contains the deepkit framework.
  • @h4ad/serverless-adapter-fastify: Contains the fastify framework.
  • @h4ad/serverless-adapter-hapi: Contains the hapi framework.
  • @h4ad/serverless-adapter-koa: Contains the koa framework.
  • @h4ad/serverless-adapter-apollo-server: Contains the apollo-server framework.
  • @h4ad/serverless-adapter-nodejs: Contains the network code related to Request and Response for NodeJS
  • @h4ad/serverless-adapter-fetch: Contains the network code related to Request and Response for Fetch Spec, that can be used inside Cloudflare Workers and other kind of workers.

Things to consider:

  • Should I use the monorepo approach?
    • If I use this scheme, I will have to maintain multiple packages with different TypeScript versions and different Node.js versions.
  • Should I create an organization instead of leaving it inside my nickname?

FastifyFramework Issue

I set up my project as shown below, but I got the following error log. I use Fastify + AWS CDK + Lambda. How can I fix this?

import { ServerlessAdapter } from "@h4ad/serverless-adapter";
import { FastifyFramework } from "@h4ad/serverless-adapter/lib/frameworks/fastify";
import { PromiseResolver } from '@h4ad/serverless-adapter/lib/resolvers/promise';
import { DefaultHandler } from '@h4ad/serverless-adapter/lib/handlers/default';


export const handler = ServerlessAdapter.new(app)
  .setFramework(new FastifyFramework())
  .setHandler(new DefaultHandler())
  .setResolver(new PromiseResolver())
  .build();

{
  "errorType": "Error",
  "errorMessage": "SERVERLESS_ADAPTER: Couldn't find adapter to handle this event.",
  "trace": [
    "Error: SERVERLESS_ADAPTER: Couldn't find adapter to handle this event.",
    "    at DefaultHandler2.getAdapterByEventAndContext (/var/task/index.js:48729:17)",
    "    at Runtime.handler (/var/task/index.js:49490:32)",
    "    at Runtime.handleOnceNonStreaming (/var/runtime/Runtime.js:73:25)"
  ]
}

AWS SQS - Reporting batch item failures

Feature Request

Add support for parsing the framework's response so batch item failures can be reported.

Is your feature request related to a problem? Please describe.

Ideally, if I process a list of items, I should be able to report when one of those items fails.

Describe the solution you'd like

Add support for a response like:

{ 
  "batchItemFailures": [ 
        {
            "itemIdentifier": "id2"
        },
        {
            "itemIdentifier": "id4"
        }
    ]
}

We need to change these lines:

public getResponse(): IEmptyResponse {
  return EmptyResponse;
}

Maybe SQSAdapterOptions could have a flag so the response is handled only when batch mode is set.

export interface SQSAdapterOptions {
  /**
   * The path that will be used to create a request to be forwarded to the framework.
   *
   * @defaultValue /sqs
   */
  sqsForwardPath?: string;

  /**
   * The http method that will be used to create a request to be forwarded to the framework.
   *
   * @defaultValue POST
   */
  sqsForwardMethod?: string;
}

It would be great if the response type could be inferred from the option: return IEmptyResponse when batch mode is false, or BatchResult when batch mode is true.

Another option is to create another adapter that extends SQSAdapter, or to create an instance to reuse the canHandle and getRequest methods while changing the behavior of getResponse.
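The batch response itself is easy to assemble; toBatchResponse and ProcessedMessage below are hypothetical names, sketching how per-message results could map to the shape Lambda expects for SQS partial batch failures:

```typescript
// Hypothetical per-message result produced by the framework layer.
interface ProcessedMessage {
  messageId: string;
  ok: boolean;
}

// Build the partial-batch-failure payload: only failed messages are listed,
// so Lambda redelivers exactly those and deletes the rest from the queue.
function toBatchResponse(results: ProcessedMessage[]) {
  return {
    batchItemFailures: results
      .filter((r) => !r.ok)
      .map((r) => ({ itemIdentifier: r.messageId })),
  };
}
```

An empty `batchItemFailures` array signals a fully successful batch, which keeps the non-batch behavior compatible when the flag is off.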

Describe alternatives you've considered

No other solution besides the one above.

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Jest to Mocha

Feature request

Is your feature request related to an issue? Please describe.

Currently, code coverage is broken after adding the deepkit integration, due to loaders in Node.js.

Describe the desired solution

Perhaps the solution that fits best is to use Mocha; I've seen GoogleChrome/lighthouse#14047 use Mocha, so I could work on getting correct code coverage back.

Streaming causes timeouts with NestJS

Current Behavior

Hello, I'm trying to implement streaming with NestJS, but I'm encountering timeouts every time, as if response.end didn't complete the task.

In the logs, we can see that the task expires:

2024-03-14T08:33:16.384Z c9a00e2b-1511-4f2d-81f7-4dddca1d4da7 Task timed out after 10.03 seconds

The end is executed as we can see:

2024-03-14T08:33:08.568Z c9a00e2b-1511-4f2d-81f7-4dddca1d4da7 INFO streaming1 - end

I don't know what is wrong. Can you help me, please?

  • CloudWatch Logs
START RequestId: c9a00e2b-1511-4f2d-81f7-4dddca1d4da7 Version: $LATEST
2024-03-14T08:33:06.426Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM StreamingContext::createStream stream created
2024-03-14T08:33:06.426Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME RAPID Runtime::handleOnceStreaming invoking handler
2024-03-14T08:33:06.441Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME RAPID Runtime::handleOnceStreaming handler returned
[Nest] 8  - 03/14/2024, 8:33:06 AM     LOG [NestApplication] Nest application successfully started +127412ms
2024-03-14T08:33:06.548Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	streaming1 - start
2024-03-14T08:33:06.550Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 11 callback: undefined
2024-03-14T08:33:06.551Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:06.551Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:06.552Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:06.560Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 5 callback: undefined
2024-03-14T08:33:06.560Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:06.560Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:06.561Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:06.762Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 5 callback: undefined
2024-03-14T08:33:06.762Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:06.762Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:06.762Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:07.062Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 5 callback: undefined
2024-03-14T08:33:07.063Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:07.063Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:07.063Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:07.464Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 5 callback: undefined
2024-03-14T08:33:07.464Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:07.464Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:07.464Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:07.965Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write 5 callback: undefined
2024-03-14T08:33:07.966Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::origWrite true
2024-03-14T08:33:07.966Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputData len 0
2024-03-14T08:33:07.966Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM ResponseStream::write outputSize 0
2024-03-14T08:33:08.568Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	streaming1 - end
2024-03-14T08:33:08.568Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME RAPID Runtime::handleOnceStreaming result is awaited.
2024-03-14T08:33:08.570Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM rapid response 
{
    "status": "OK"
}
2024-03-14T08:33:08.571Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME RAPID RAPID response undefined
2024-03-14T08:33:08.571Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM StreamingContext::createStream scheduleNext
2024-03-14T08:33:08.571Z	c9a00e2b-1511-4f2d-81f7-4dddca1d4da7	INFO	RUNTIME STREAM StreamingContext::scheduleNextNow entered
2024-03-14T08:33:16.384Z c9a00e2b-1511-4f2d-81f7-4dddca1d4da7 Task timed out after 10.03 seconds
  • lambda.ts
import { ServerlessAdapter } from '@h4ad/serverless-adapter';
import { ExpressFramework } from '@h4ad/serverless-adapter/lib/frameworks/express';
import { LazyFramework } from '@h4ad/serverless-adapter/lib/frameworks/lazy';
import { AwsStreamHandler } from '@h4ad/serverless-adapter/lib/handlers/aws';
import { ApiGatewayV2Adapter } from '@h4ad/serverless-adapter/lib/adapters/aws';
import { NestFactory } from '@nestjs/core';
import * as cookieParser from 'cookie-parser';
import { DummyResolver } from '@h4ad/serverless-adapter/lib/resolvers/dummy';
import { SentryFilter } from './filters/sentry.filter';
import * as Sentry from '@sentry/node';
import * as process from 'process';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.use(cookieParser());

  await app.init();

  const httpAdapter = app.getHttpAdapter();
  app.useGlobalFilters(new SentryFilter(httpAdapter));
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment: process.env.APP_ENV,
  });

  return httpAdapter.getInstance();
}

const express = new ExpressFramework();
const framework = new LazyFramework(express, bootstrap);

export const handler = ServerlessAdapter.new(null)
  .setFramework(framework)
  .setHandler(new AwsStreamHandler())
  .setResolver(new DummyResolver())
  .addAdapter(new ApiGatewayV2Adapter())
  .build();
  • controller.ts
@Get('/streaming')
async streaming(@Res() response: Response) {
  console.log('streaming1 - start');

  const sendAndSleep = (response: any, counter: number) => {
    if (counter > 5) {
      console.log('streaming1 - end');

      response.end();
    } else {
      response.write(' ;i=' + counter);
      // response.flush();
      counter++;
      setTimeout(function () {
        sendAndSleep(response, counter);
      }, counter * 100);
    }
  };

  response.setHeader('Content-Type', 'text/html; charset=utf-8');
  response.setHeader('Transfer-Encoding', 'chunked');

  response.write('Thinking...');

  sendAndSleep(response, 1);
}
  • serverless.yml
service: streaming
frameworkVersion: '3'

plugins:
  - serverless-jetpack
  - serverless-dotenv-plugin

provider:
  name: aws
  runtime: nodejs18.x
  region: eu-west-1
  profile: aws-default
  stage: ${opt:stage, 'development'}
  environment:
    NODE_ENV: ${self:provider.stage}
    AWS_LAMBDA_RUNTIME_VERBOSE: 3

functions:
  app:
    timeout: 10
    handler: dist/lambda.handler
    url: true
    reservedConcurrency: 5

custom:
  jetpack:
    concurrency: 5

  dotenv:
    basePath: './'
    include:
      - .env.${opt:stage, 'development'}

Environment

  • Version: ^4.2.0
  • Platform: Mac
  • Node.js Version: 18

CallbackResolver and AWS Lambda

Current Behavior

Almost all requests using CallbackResolver inside AWS Lambda take too long; with PromiseResolver I didn't see the same delay.

Expected Behavior

To work as PromiseResolver does.

Steps to Reproduce the Problem

  1. Use CallbackResolver on setResolver.
  2. Deploy your app inside AWS Lambda.
  3. Make some request.

Environment

  • Version: 2.6.0
  • Platform: Linux
  • Node.js Version: 14.x

Peer Dependencies for Frameworks

Current Behavior

Now, because I use explicit types in the Framework implementations of the contract, I have problems in new projects where I only use Express: I could not build my API without installing the peer dependencies.

Expected Behavior

I should be able to build my API with ExpressFramework (for example) without being bothered by typing problems from Koa, Fastify, etc.

Steps to Reproduce the Problem

  1. npm i --save @h4ad/serverless-adapter
  2. Integrate with your API
  3. npm run build

Environment

  • Version: 1.0.1
  • Platform: Linux
  • Node.js Version: 12.22.10

[Feature] Digital Ocean Functions Support

Feature Request

Is your feature request related to a problem? Please describe.

Add support for deploying apps to Digital Ocean Functions.

Describe the solution you'd like

Like Azure or Firebase, we need:

  • A new handler called DigitalOceanHandler.
    • This handler can extend DefaultHandler and just forward values with appropriate types.
  • A new adapter called DigitalOceanFunctionAdapter.
    • Must receive the event and handle it correctly.

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Apollo Server

Feature request

Inspired by apollo-server-integrations/apollo-server-integration-aws-lambda#38.

Is your feature request related to an issue? Please describe.
Add support for Apollo Server as a Framework/Handler.

Describe the solution you would like

In theory, we already have support for Apollo Server if we use express as middleware.
But maybe we can do better; the solutions are:

  • Add support for Apollo Server as Framework.
  • Add support for Apollo Server as Handler, so you don't need to create another Request/Response.

In both cases, I don't know exactly how to deal with CORS; maybe I can create a CorsFramework that wraps any other framework to add CORS features.

Are you willing to resolve this issue by submitting a pull request?

  • Yes, I have time and I know how to start.
  • Yes, I have time, but I don't know how to start. I would need guidance.
  • No, I don't have time, although I believe I could do it if I had time...
  • No, I don't have time and I wouldn't even know how to start.

how to use tRPC with Lambda?

This repo looks amazing. I'm trying to find examples of using tRPC with AWS Lambda. I know the tRPC project advertises support for Lambda, but I see TypeScript compile errors when I try.

Do you have a tRPC/Lambda sample?

Thank you!

Api Gateway V2 Adapter - Cookies

Current Behavior

The adapter for API Gateway V2 is not handling cookies correctly according to the docs.

Expected Behavior

When we see set-cookie in the headers, the adapter should return a top-level cookies array instead of leaving the cookie inside the headers property.

Steps to Reproduce the Problem

  1. Set header with set-cookie
  2. Use ApiGatewayV2Adapter
  3. See your cookie not being set by AWS.
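
A sketch of the expected transformation (illustrative code, not the library's actual implementation): the API Gateway V2 payload format 2.0 carries cookies in a top-level `cookies` array, so any `set-cookie` headers have to be lifted out of `headers` before the response is returned.

```typescript
// Illustrative sketch: lift set-cookie headers into the top-level `cookies`
// array that the API Gateway V2 payload format 2.0 expects.

interface V2Response {
  statusCode: number;
  headers: Record<string, string>;
  cookies?: string[];
  body?: string;
}

function liftCookies(
  statusCode: number,
  headers: Record<string, string | string[]>,
  body?: string,
): V2Response {
  const outHeaders: Record<string, string> = {};
  const cookies: string[] = [];

  for (const [name, value] of Object.entries(headers)) {
    if (name.toLowerCase() === "set-cookie") {
      // one array entry per cookie, never a joined header value
      cookies.push(...(Array.isArray(value) ? value : [value]));
    } else {
      outHeaders[name] = Array.isArray(value) ? value.join(",") : value;
    }
  }

  const response: V2Response = { statusCode, headers: outHeaders, body };
  if (cookies.length > 0) response.cookies = cookies;
  return response;
}
```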

Environment

  • Version: 2.0.1
  • Platform: Linux
  • Node.js Version: 14.x

Timing/Buffer issue when using AWSStreamHandler

Current Behavior

When using the AWS Stream Handler, writes are buffered until either another write occurs or the stream ends. When another write does occur, that second write then experiences the same behaviour.

Expected Behavior

The write should be dispatched immediately and not depend on another write or the end of the stream.

Steps to Reproduce the Problem

import express, { Express } from "express";
// imports of ServerlessAdapter, ExpressFramework, LazyFramework, AwsStreamHandler,
// DummyResolver, createDefaultLogger and ApiGatewayV2Adapter omitted for brevity

const app: Express = express();

app.use("/test", function (req, res, next) {
  // when using text/plain it did not stream
  // without charset=utf-8, it only worked in Chrome, not Firefox
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.setHeader("Transfer-Encoding", "chunked");

  res.write("Thinking...");
  sendAndSleep(res, 1);
});

function sendAndSleep(response: any, counter: number) {
  if (counter > 5) {
    response.end();
  } else {
    response.write(" ;i=" + counter);
    counter++;
    setTimeout(function () {
      sendAndSleep(response, counter);
    }, counter * 1000);
  }
}

const expressFramework = new ExpressFramework();
const framework = new LazyFramework(expressFramework, () => app);

export const handler = ServerlessAdapter.new(null)
  .setFramework(framework)
  .setHandler(new AwsStreamHandler())
  .setResolver(new DummyResolver())
  .setLogger(createDefaultLogger({ level: "info" }))
  .addAdapter(new ApiGatewayV2Adapter())
  .build();

In the example above, you can use the interval between receiving two numbers to measure the timing. The gap between when 2 and 3 arrive over the network is 4 seconds, but it should be 3 seconds. This shows that 3 is in fact written but not yet sent; only once 4 is written is 3 flushed over the network.
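
The symptom can be modelled as a writer that is always one chunk behind. This is a minimal simulation of the reported behaviour, not the adapter's actual code:

```typescript
// Tiny simulation of the symptom: each chunk is held back until the *next*
// write (or end) arrives, so the receiver always lags one chunk behind.
class OneChunkBehindWriter {
  private pending: string | null = null;
  public readonly delivered: string[] = [];

  write(chunk: string): void {
    if (this.pending !== null) this.delivered.push(this.pending);
    this.pending = chunk; // held until the next write or end()
  }

  end(): void {
    if (this.pending !== null) this.delivered.push(this.pending);
    this.pending = null;
  }
}
```

With this model, writing "2" is what actually puts "1" on the wire, which matches the 4-second gap observed between chunks 2 and 3 above.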

Environment

  • Version: 4.0.1
  • Platform: Win (local) Linux (cloud)
  • Node.js Version: v18

AWS DynamoDB - Reporting batch item failures

Feature Request

Add support to parse the response from the framework to be able to report batch item failures.

Is your feature request related to a problem? Please describe.

Ideally, if I process a list of items, I should be able to report when one of those items fails.

Describe the solution you'd like

Add support for a response like:

{
  "batchItemFailures": [
    { "itemIdentifier": "id2" },
    { "itemIdentifier": "id4" }
  ]
}
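
A sketch of a helper that builds this shape (hypothetical code for illustration, not existing library code):

```typescript
// Hypothetical helper: build the partial-batch-failure response shape that
// Lambda expects when ReportBatchItemFailures is enabled on the event source.

interface BatchItemFailure {
  itemIdentifier: string;
}

interface BatchResponse {
  batchItemFailures: BatchItemFailure[];
}

function toBatchResponse(failedIds: string[]): BatchResponse {
  return {
    batchItemFailures: failedIds.map((itemIdentifier) => ({ itemIdentifier })),
  };
}
```

For example, `toBatchResponse(['id2', 'id4'])` produces exactly the JSON shown above.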

We need to change these lines:

public getResponse(): IEmptyResponse {
  return EmptyResponse;
}

Maybe DynamoDBAdapterOptions can have a flag to handle the response only if batch mode is set.

export interface DynamoDBAdapterOptions {
  /**
   * The path that will be used to create a request to be forwarded to the framework.
   *
   * @defaultValue /dynamo
   */
  dynamoDBForwardPath?: string;

  /**
   * The http method that will be used to create a request to be forwarded to the framework.
   *
   * @defaultValue POST
   */
  dynamoDBForwardMethod?: string;
}

It would be great if the response type could be inferred from the option: return IEmptyResponse if batch mode is false, or BatchResult if batch mode is true.

Another option is to create another adapter that extends DynamoDBAdapter, or just create an instance to reuse the canHandle and getRequest methods and change the behavior of getResponse.

Describe alternatives you've considered

No other solution besides the one above.

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Request body of FirebaseHandler

Current Behavior

The request body inside FirebaseHandler sometimes comes back as an object, which can break frameworks like Deepkit that depend on the body being a Buffer or Uint8Array.

Expected Behavior

The body should always be a Buffer or Uint8Array.

Steps to Reproduce the Problem

  1. Run deepkit with HttpFirebaseHandler integrated.
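
A sketch of the normalization the handler could apply (illustrative code assuming the parsed-body behaviour described above; not the library's actual implementation):

```typescript
// Illustrative sketch: Firebase's HTTPS functions parse JSON bodies into
// objects, but frameworks like Deepkit expect raw bytes, so re-serialize
// any already-parsed body back into a Buffer.

function normalizeBody(body: unknown): Buffer | undefined {
  if (body === undefined || body === null) return undefined;
  if (Buffer.isBuffer(body)) return body;
  if (body instanceof Uint8Array) return Buffer.from(body);
  if (typeof body === "string") return Buffer.from(body);
  return Buffer.from(JSON.stringify(body)); // body was parsed JSON
}
```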

Environment

  • Version: 2.7.0
  • Platform: Linux
  • Node.js Version: 16.x

Add Cors Framework

Feature Request

Is your feature request related to an issue? Please describe.
Add the ability to specify and manipulate CORS without relying on framework-specific support.

That way, we can solve the problem in #56 and also use it with frameworks like Deepkit, tRPC and others that don't have an easy way to deal with CORS.

Describe the solution you would like
I think the best way to handle this is to create a CorsFramework that wraps another framework, like LazyFramework does.

So the solution could be something like:

const expressFramework = new ExpressFramework();
const finalFramework = new CorsFramework(expressFramework);
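
A minimal sketch of what such a wrapper could do (the interfaces below are simplified stand-ins for the library's actual Framework contract, and the class name is hypothetical):

```typescript
// Hypothetical sketch of a CORS wrapper: delegate to the inner framework and
// append CORS headers to every response, answering preflights directly.

type Handler = (
  req: { method: string },
  res: { headers: Record<string, string>; statusCode: number },
) => void;

interface SimpleFramework {
  getHandler(): Handler;
}

class SimpleCorsFramework implements SimpleFramework {
  constructor(
    private readonly inner: SimpleFramework,
    private readonly origin: string = "*",
  ) {}

  getHandler(): Handler {
    const innerHandler = this.inner.getHandler();
    return (req, res) => {
      res.headers["access-control-allow-origin"] = this.origin;
      if (req.method === "OPTIONS") {
        res.statusCode = 204; // answer the preflight without hitting the app
        return;
      }
      innerHandler(req, res);
    };
  }
}
```

The wrapper pattern keeps CORS orthogonal to the wrapped framework, so the same logic would work for Express, Deepkit, tRPC, or anything else that satisfies the contract.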

Are you willing to resolve this issue by submitting a Pull Request?

  • Yes, I have the time, and I know how to start.
  • Yes, I have the time, but I don't know how to start. I would need guidance.
  • No, I don't have the time, although I believe I could do it if I had the time...
  • No, I don't have the time and I wouldn't even know how to start.

Possible issues with Cookies

Hi @laverdet and @ml27299,

Sorry to tag you both, but I saw you both creating issues on vendia-serverless: CodeGenieApp/serverless-express#609 and CodeGenieApp/serverless-express#554.

My library has the same goal as vendia/serverless-express, but I built it to be more extensible and easier to maintain and customize.

Because my library is a refactored version of vendia/serverless-express, it may have some of the issues you both described above.

I'm proposing that you try my library and let me know if the cookie problems persist; if so, I can work on fixing them. I'd then have a better description of the problem and also someone to test with, because I currently don't have APIs that use ALB or cookies, so it's hard for me to test those scenarios.

For ALB, I have two problems to fix from what I saw from your issues:
