cacheable-response's Introduction

cacheable-response

An HTTP compliant route path middleware for serving cache response with invalidation support.

Why

Server Side Rendering (SSR) is a luxury, but a necessary one if you want a first-class user experience.

The main issue with doing things server-side is the extra cost associated with dynamic content: the server spends CPU cycles computing a value that will probably be discarded on the next page reload, losing precious resources in the process.

Instead of serving a real time™ (and costly) response, we can serve a pre-calculated one that is much, much cheaper.

That saves CPU cycles for the things that really matter.

Caching states

Value    Description
MISS     The resource was looked up in the cache but was not found, so a new copy is generated and placed into the cache.
HIT      The resource was found in the cache, having been generated by a previous access.
EXPIRED  The resource was found but has expired, so it needs to be regenerated.
BYPASS   The cache is forced to be bypassed, regenerating the resource.
STALE    The resource is expired, but it's served while a new cache copy is generated in the background.

Install

$ npm install cacheable-response --save

Get Started

cacheable-response is an HTTP middleware for serving pre-calculated responses.

It's like an LRU cache, but with all the logic necessary to auto-invalidate response copies and refresh them.

Imagine you are currently running an HTTP microservice to compute something heavy in terms of CPU:

const server = ({ req, res }) => {
  const data = doSomething(req)
  res.send(data)
}

To leverage the caching capabilities, you just need to adapt your HTTP-based project a bit to follow the cacheable-response interface:

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  get: ({ req, res }) => ({
    data: doSomething(req),
    ttl: 86400000 // 24 hours
  }),
  send: ({ data, res, req }) => res.send(data)
})

At a minimum, cacheable-response needs two things:

  • get: It creates a fresh cacheable response associated with the current route path.
  • send: It determines how the response should be rendered.

cacheable-response is framework agnostic: It could be used with any library that accepts (request, response) as input.

const http = require('http')
/* Explicitly pass `cacheable-response` as server */
http
  .createServer((req, res) => ssrCache({ req, res }))
  .listen(3000)

It can be used the express way too:

const express = require('express')
const app = express()

/* Passing `cacheable-response` instance as middleware */
app
  .use((req, res) => ssrCache({ req, res }))

See more examples.

At all times the cache status is reflected as x-cache headers in the response.

The first resource access will be a MISS.

HTTP/2 200
cache-control: public, max-age=7200, stale-while-revalidate=300
ETag: "d-pedE0BZFQNM7HX6mFsKPL6l+dUo"
x-cache-status: MISS
x-cache-expired-at: 1h 59m 60s

Successive resource accesses within the ttl period return a HIT:

HTTP/2 200
cache-control: public, max-age=7170, stale-while-revalidate=298
ETag: "d-pedE0BZFQNM7HX6mFsKPL6l+dUo"
x-cache-status: HIT
x-cache-expired-at: 1h 59m 30s

After the ttl period expires, the cache is invalidated and refreshed on the next request.

If you need to, you can force-invalidate a cached response by passing force=true as part of the query parameters:

curl https://myserver.dev/user # MISS (first access)
curl https://myserver.dev/user # HIT (served from cache)
curl https://myserver.dev/user # HIT (served from cache)
curl https://myserver.dev/user?force=true # BYPASS (skip cache copy)

In that case, the x-cache-status will reflect a 'BYPASS' value.

Additionally, you can configure a stale ttl:

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  get: ({ req, res }) => ({
    data: doSomething(req),
    ttl: 86400000, // 24 hours
    staleTtl: 3600000 // 1h
  }),
  send: ({ data, res, req }) => res.send(data)
})

The stale ttl maximizes your cache HITs, allowing you to serve a stale cache copy while revalidation happens in the background:

curl https://myserver.dev/user # MISS (first access)
curl https://myserver.dev/user # HIT (served from cache)
curl https://myserver.dev/user # STALE (23 hours later, background revalidation)
curl https://myserver.dev/user # HIT (fresh cache copy for the next 24 hours)

The library provides sensible defaults for the most common scenarios, and you can tune these values based on your use case.

API

cacheableResponse([options])

options

bypassQueryParameter

Type: string
Default: 'force'

The name of the query parameter used for intentionally skipping the cache copy.
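
For instance, a minimal sketch that renames the parameter to the hypothetical name purge, so ?purge=true skips the cache instead of ?force=true:

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  bypassQueryParameter: 'purge', // now `?purge=true` skips the cache copy
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})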

cache

Type: object
Default: new Keyv({ namespace: 'ssr' })

The cache instance used to back your pre-calculated server-side response copies.

The library delegates to keyv, a tiny key-value store with multi-adapter support.

If you don't specify it, a memory cache will be used.
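
As an illustration, a minimal sketch backing the cache with Redis through keyv (assuming @keyv/redis is installed and a Redis instance is reachable at the given URI):

const Keyv = require('keyv')
const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  // Any keyv-compatible store works; here, a Redis backend.
  cache: new Keyv('redis://localhost:6379', { namespace: 'ssr' }),
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})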

compress

Type: boolean
Default: false

Enable compressing/decompressing the data using the brotli compression format.
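
Enabling it is a one-line change; a minimal sketch:

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  compress: true, // cache copies are stored brotli-compressed
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})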

get

Required
Type: function

The method to be called for creating a fresh cacheable response associated with the current route path.

async function get ({ req, res }) {
  const data = doSomething(req, res)
  const ttl = 86400000 // 24 hours
  const headers = { userAgent: 'cacheable-response' }
  return { data, ttl, headers }
}

The method will receive ({ req, res }) and should return:

  • data object|string: The content to be saved in the cache.
  • ttl number: The quantity of time in milliseconds the content is considered valid in the cache. If not specified, the default ttl is used.
  • createdAt date: The timestamp associated with the content (Date.now() by default).

Any other property can be specified and will be passed to .send.

If you want to bypass the cache, preventing a value from being cached (e.g., when an error occurred), return undefined or null.
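
For example, a minimal sketch that skips caching when the render ends in a 404 (the same pattern appears in the Next.js examples quoted in the issues further below):

async function get ({ req, res }) {
  const data = await doSomething(req, res)
  if (res.statusCode === 404) {
    // Returning undefined bypasses the cache, so the error page
    // is sent once but never stored.
    res.end(data)
    return
  }
  return { data, ttl: 86400000 }
}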

key

Type: function
Default: ({ req }) => req.url

It specifies how to compute the cache key, taking req, res as input.

Alternatively, it can return an array:

const key = ({ req }) => [getKey({ req }), req.query.force]

where the second element represents whether to force the cache entry to expire.

logger

Type: function
Default: () => {}

When it's present, every time cacheable-response is called, a log will be printed.
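
A minimal sketch that relays whatever the library logs to the console (the payload shape isn't documented here, so the sketch just forwards all arguments):

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  logger: (...args) => console.log('[cacheable-response]', ...args),
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})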

send

Required
Type: function

The method used to determine how the content should be rendered.

async function send ({ req, res, data, headers }) {
  res.setHeader('user-agent', headers.userAgent)
  res.send(data)
}

It will receive ({ req, res, data, ...props }), where props is any other data returned by .get.

staleTtl

Type: number|boolean|function
Default: 3600000

Number of milliseconds of grace period after a cached response expires, during which it is refreshed in the background. The latency of the refresh is hidden from the user.

This value can also be provided as part of the .get output.

The value will be associated with the stale-while-revalidate directive.

You can pass false to disable it.
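
For example, a minimal sketch disabling the grace period entirely, so an expired entry is always recomputed in the foreground:

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  staleTtl: false, // never serve stale copies: expired entries become EXPIRED, not STALE
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})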

ttl

Type: number|function
Default: 86400000

Number of milliseconds a cache response is considered valid.

After this period of time, the cache response should be refreshed.

This value can also be provided as part of the .get output.

If you don't provide one, this default is used as a fallback to avoid keeping things in the cache forever.

serialize

Type: function
Default: JSON.stringify

Set the serializer method to be used before compression.

deserialize

Type: function
Default: JSON.parse

Set the deserializer method to be used after decompression.
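
As an illustration only, a sketch that keeps the default JSON round-trip but logs how large each entry is before compression (assuming data is JSON-serializable, as with the defaults):

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  serialize: data => {
    const raw = JSON.stringify(data)
    console.log(`caching ${raw.length} bytes (pre-compression)`)
    return raw
  },
  deserialize: raw => JSON.parse(raw),
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})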

Pro-tip: Distributed cache with CloudFlare™️

This content is not sponsored; I just consider that CloudFlare is doing a good job by offering a cache layer as part of their free tier.

What could be better than having one cache layer? Exactly: two cache layers.

If your server's domain is connected to CloudFlare, you can take advantage of unlimited bandwidth usage.

To do that, you need to set up a Page Rule on your domain specifying that you want to enable caching. Read more about how to do that.

The next time you request a resource, a new cf-cache-status header appears as part of the response headers:

HTTP/2 200
cache-control: public, max-age=7200, stale-while-revalidate=300
ETag: "d-pedE0BZFQNM7HX6mFsKPL6l+dUo"
x-cache-status: MISS
x-cache-expired-at: 1h 59m 60s
cf-cache-status: MISS

CloudFlare will respect your cache-control policy, creating another caching layer reflected by cf-cache-status:

HTTP/2 200
cache-control: public, max-age=7200, stale-while-revalidate=300
ETag: "d-pedE0BZFQNM7HX6mFsKPL6l+dUo"
x-cache-status: MISS
x-cache-expired-at: 1h 59m 60s
cf-cache-status: HIT

Note how in this second request x-cache-status is still a MISS.

That's because CloudFlare's way of caching the content includes caching the response headers.

The headers associated with the cached copy will be the headers from the first request. You need to look at cf-cache-status instead.

You can get a better overview of the percentage of cache successes by looking at your CloudFlare domain analytics.

Examples

Make a PR to add your project!

License

cacheable-response © Kiko Beats, released under the MIT License.
Authored and maintained by Kiko Beats with help from contributors.

kikobeats.com · GitHub Kiko Beats · Twitter @Kikobeats

cacheable-response's Issues

NextJs Caching Issue

https://github.com/vercel/next.js/tree/canary/examples/ssr-caching

It seems not to work on the current Next.js version.

Getting this error on the home page:

(node:20872) UnhandledPromiseRejectionWarning: TypeError: argument entity is required
at etag (C:\xampp\htdocs\next-app\node_modules\etag\index.js:72:11)
at C:\xampp\htdocs\next-app\node_modules\cacheable-response\index.js:93:32
at processTicksAndRejections (internal/process/task_queues.js:93:5)
(Use node --trace-warnings ... to show where the warning was created)
(node:20872) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:20872) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Parameter to provide backend in KeyV

I was wondering: since the underlying caching mechanism uses keyv, and Keyv supports different backends for caching, can we provide some parameter to control that from cacheable-response?

cache = new Keyv({ namespace: 'ssr' }),

can be modified to support other backends:

new Keyv('redis://user:pass@localhost:6379', { namespace: 'cache' });

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

Getting [ERR_HTTP_HEADERS_SENT] when used with nextjs

Getting this error when using cacheable-response with nextjs server:

Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at ServerResponse.setHeader (_http_outgoing.js:518:11)
    at /opt/prd/web/releases/33428632e3a0c8b1fbbfab3aa86d7e955654f639/packages/web/node_modules/cacheable-response/index.js:38:9
    at /opt/prd/web/releases/33428632e3a0c8b1fbbfab3aa86d7e955654f639/packages/web/node_modules/cacheable-response/index.js:104:5
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)

Any attempt to catch / log at cacheable-response setup yields no console logs. It may be a nextjs issue but I can't tell from the stack trace.

Header "if-none-match" causes server side re-rendering

I found that when the browser provides the if-none-match request header with an etag value matching the generated content, cacheable-response doesn't save the response in the cache after it expires.
This causes the server to execute SSR on every request, because cacheable-response doesn't save the response to the cache.

Should the cache-saving block (cacheable-response/index.js, lines 123 to 127 in 55eca7a)

if (!isHit) {
  const payload = { etag, createdAt, ttl, data, ...props }
  const value = await compress(payload)
  await cache.set(key, value, ttl)
}

be placed before the 304 response block (cacheable-response/index.js, lines 117 to 121 in 55eca7a)?

if (!isModified) {
  res.statusCode = 304
  res.end()
  return
}

Memory leak with keyv

This is just a quick FYI.

I have a memory leak in production only with a SSR React app, related to the keyv module, which is used by cacheable-response.

Production urls are called with a lot of different query params:

example.com?campaign=1
example.com?campaign=2
example.com?campaign=3
example.com?campaign=4

This creates entries with a TTL, but these keys never get deleted due to a double prefix when the delete method is called, so the store keeps growing with pretty heavy values (full html response).

I created a PR to fix that: jaredwray/keyv#108

I currently fixed the issue by keeping cacheable-response but setting the store to a Redis one.

I would argue that keyv's strategy for deleting keys set with a TTL is still pretty naive (not to say wrong), as the deletion will only happen the next time a get happens with the exact same key.

In my case, the same url may not be called twice, so even with the fix I made, the store would keep growing.

I strongly advise people with the same issue to move to Redis, as it has a proper TTL entry implemented:
https://redis.io/commands/expire

This was hard for us to find; that's why I said I had the bug in "production only", as we were not replaying traffic on test environments.

We may implement a way to create soak tests on our test environments to avoid these kinds of issues; we are looking at https://github.com/buger/goreplay.

Getting errors while using it with fastify-nextjs

server.js

const cacheableResponse = require("cacheable-response");
const next = require("next");
const fastify = require("fastify");

const port = parseInt(process.env.PORT, 10) || 3000;
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });

const handle = app.getRequestHandler();

const ssrCache = cacheableResponse({
  ttl: 1000 * 60 * 60, // 1hour
  get: async ({ req, res, pagePath, queryParams }) => ({
    data: await app.renderToHTML(req, res, pagePath, queryParams)
  }),
  send: ({ data, res }) => res.send(data)
});

app.prepare().then(() => {
  const server = fastify({ logger: { level: "error" } });

  server.get("/", (req, res) => ssrCache({ req, res, pagePath: "/" }));

  server.get("/blog/:id", (req, res) => {
    const queryParams = { id: req.params.id };
    const pagePath = "/blog";
    return ssrCache({ req, res, pagePath, queryParams });
  });

  server.get("*", (req, res) => handle(req, res));

  server.listen(port, err => {
    if (err) throw err;
    console.log(`> Ready on http://localhost:${port}`);
  });
});

Error Log

{"level":50,"time":1566390512702,"pid":31997,"hostname":"tss-H110M-H","reqId":1,"req":{"method":"GET","url":"/blog/first","hostname":"localhost:3000","remoteAddress":"127.0.0.1","remotePort":60968},"res":{"statusCode":500},"err":{"type":"NodeError","message":"The \"url\" argument must be of type string. Received type undefined","stack":"TypeError [ERR_INVALID_ARG_TYPE]: The \"url\" argument must be of type string. Received type undefined\n    at Url.parse (url.js:154:11)\n    at urlParse (url.js:148:13)\n    at Url.resolve (url.js:663:29)\n    at urlResolve (url.js:659:40)\n    at _getKey (/home/tss/ssr-caching-app/node_modules/cacheable-response/index.js:24:15)\n    at /home/tss/ssr-caching-app/node_modules/cacheable-response/index.js:87:17\n    at Object.server.get (/home/tss/ssr-caching-app/server.js:28:12)\n    at preHandlerCallback (/home/tss/ssr-caching-app/node_modules/fastify/lib/handleRequest.js:112:30)\n    at preValidationCallback (/home/tss/ssr-caching-app/node_modules/fastify/lib/handleRequest.js:101:5)\n    at handler (/home/tss/ssr-caching-app/node_modules/fastify/lib/handleRequest.js:70:5)"},"msg":"The \"url\" argument must be of type string. Received type undefined","v":1}

Does nextjs getServerSideProps have issues with cacheable-response?

We're running a custom-server Next.js site on AWS Lightsail (2 GB RAM, 1 vCPU), and to improve performance we implemented cacheable-response based on the updated example version #18786. But we had large issues with the Node server choking on 50 users. I know it's very vague, but I'm wondering if it was something to do with our custom server config in conjunction with cacheable-response and getServerSideProps. I posted here but got no response, and was just wondering if you might know of an issue or can see one.

server.js

const express = require('express')
const next = require('next')
const compression = require('compression')
const cacheableResponse = require('cacheable-response')

const port = process.env.PORT || 3000
const dev = process.env.NODE_ENV_CUSTOM === 'dev'

const app = next({ dev })
const handle = app.getRequestHandler()

const CACHE_MAX_AGE = dev
  ? 1 // for dev, to disable caching
  : 1000 * 60 * 60 // for !dev, 1hr

app
  .prepare()
  .then(() => {
    const server = express()
    server.enable('strict routing')

    server.use(compression())

    // Caching
    // Disable on _next files
    server.get('/_next/*', (req, res) => handle(req, res))

    // Disable on static files
    server.get('/static/*', (req, res) => handle(req, res))

    // Disable on API endpoints within NextJS
    server.all('/api/*', (req, res) => handle(req, res))

    server.all('*', (req, res) => {
      // NextJS issue with handling URL /_error
      // https://github.com/vercel/next.js/issues/9443
      if (req.originalUrl === '/_error') {
        res.redirect(301, '/page-not-found')
      }

      return ssrCache(req, res)
    })

    server.listen(port, err => {
      if (err) throw err
      console.log(`> Ready on port ${port} 💁🏻‍♀️`)
    })
  })
  .catch(ex => {
    console.error(ex.stack)
    process.exit(1)
  })

const ssrCache = cacheableResponse({
  ttl: CACHE_MAX_AGE,
  get: async ({ req, res }) => {
    const rawResEnd = res.end
    const data = await new Promise(resolve => {
      res.end = payload => resolve(payload)
      app.render(req, res, req.path, {
        ...req.query,
        ...req.params
      })
    })
    res.end = rawResEnd
    return { data }
  },
  send: ({ data, res }) => res.send(data)
})

_document.js

export default class SennepDocument extends Document {
  static async getInitialProps (ctx) {
    const sheet = new ServerStyleSheet()
    const originalRenderPage = ctx.renderPage

    try {
      ctx.renderPage = () =>
        originalRenderPage({
          enhanceApp: App => props => sheet.collectStyles(<App {...props} />)
        })

      const initialProps = await Document.getInitialProps(ctx)
      return {
        ...initialProps,
        styles: (
          <>
            {initialProps.styles}
            {sheet.getStyleElement()}
          </>
        )
      }
    } finally {
      sheet.seal()
    }
  }

  render () {
    return (
      <Html lang='en'>
        <Head>
          <style
            dangerouslySetInnerHTML={{
              __html: `
            ${CSS_FONTS}
            ${CSS_GLOBAL}
          `
            }}
          />
          <TrackingGtmHead />
        </Head>
        <body>
          <TrackingGtmBody />
          <Main />
          <NextScript />
        </body>
      </Html>
    )
  }
}

_app.js

export default class CustomApp extends App {
  static propTypes = {
    carouselData: PropTypes.arrayOf(PropTypes.object).isRequired
  }

  static defaultProps = {
    carouselData: []
  }

  static async getInitialProps () {
    let mutatedApiData = null

    try {
      const apiData = await apiGetStaticRoute('carousel')
      if (apiData.success) {
        mutatedApiData = appGenerateCarousel(apiData.response.carousel)
      } else {
        throw apiData.response
      }
      return {
        ...mutatedApiData
      }
    } catch ({ message, code }) {
      return {
        error: { message, code }
      }
    }
  }

  render () {
    const { Component, pageProps, carouselData } = this.props

    return (
      <div>
        <Provider store={store}>
          <SitePageTransitioner>
            <Component
              {...pageProps}
              carouselData={carouselData}
            />
          </SitePageTransitioner>
        </Provider>
      </div>
    )
  }
}

index.js

export default class PageHome extends React.PureComponent {
  static propTypes = {
    carouselData: PropTypes.arrayOf(PropTypes.object).isRequired,
    error: PropTypes.shape({
      code: PropTypes.number,
      message: PropTypes.string
    }),
    modules: PropTypes.array.isRequired
  }

  static defaultProps = {
    carouselData: [],
    modules: []
  }

  render () {
    const {
      carouselData,
      error,
      modules
    } = this.props

    if (error) {
      console.error(error)
      return <Error statusCode={error.code} message={error.message} />
    }

    return (
      <Home>
        <SiteModuleContainer
          header={{
            type: 'home',
            backgroundColor: COLOURS.yellow,
            theme: MODULE_THEMES_LIGHT
          }}
          modules={[...carouselData, ...modules]}
          footer={{
            backgroundColor: COLOURS.yellow,
            theme: MODULE_THEMES_LIGHT
          }}
        />
      </Home>
    )
  }
}

export async function getServerSideProps ({ query }) {
  try {
    const apiData = await apiGetStaticRoute('home', query)
    if (apiData.success) {
      const data = pageGenerateGenericPage(apiData.response)
      return {
        props: data
      }
    } else {
      throw apiData.response
    }
  } catch ({ message, code }) {
    return {
      props: {
        error: { message, code }
      }
    }
  }
}

Breaking change in 1.10

I just upgraded deps and ended up with an unexpected error, caused by a692392

This change of signature for getKey is breaking:

-(req, res) => ...
+({ req, res }) => ...

Took me quite some time to figure it out. I was using req.headers.host inside getKey, which now became opts.headers.host (opts.headers is undefined, hence TypeError).

Not sure what to do. Perhaps the signature change should be reverted in 1.10.1, followed by the release of the new signature as 2.0.0.

Opting out of cache for authenticated requests

Greetings,

This issue is totally a question and not really an issue with the project. I came here from vercel/next.js#6393 and I am intrigued by the Cloudflare/caching strategy here. I am curious of a number of things:

  1. If using a CDN for distributed cache, what purpose does the keyv/in memory store serve on the server instance?
  2. We currently have a few parts of each page that indicate the currently logged in user, but a majority of the page is the same content that a non-authenticated user would see. At present we are not caching any pages for users who are authenticated. Do you have any suggestions for dealing with this particular issue, either best practices for how to bypass the caching mechanism or a way to cache these pages as well safely? I also imagine a CDN would further complicate this issue?
  3. The cache control headers and etags allow the client's browser to cache the content, correct? Does setting these values allow the client to bypass the server request altogether for pages they have visited in the past up until the expiry?

I may have more questions. Appreciate the work you've done and looking forward to using this library in our next.js project.

Deploying to Vercel

I would like to know if this will work on Vercel serverless functions. Or is it limited to dedicated servers?

404s get cached and provided as 200

Hi!

I followed the next.js ssr-caching example, so I've got a route like this:

server.get('/product/:handle', (req, res) => {
  const queryParams = { handle: req.params.handle }
  const pagePath = '/product/' + req.params.handle
  return ssrCache({ req, res, pagePath, queryParams })
})

When a non-existent handle gets hit, the server responds with 404 but the second time it's 200. How can I prevent this?

ETag is not implemented properly

I think that this library has some wrong assumptions on the ETag header. As I see the code, the ETag is being set on the response, but it's never read afterwards (on the request).

The library should look at the If-None-Match header, compare it to the cached response's ETag, and return 304 Not Modified when they're the same. Right now, the library always sends data after the response has expired, which is a waste.


Allow to bypass .send on a cache miss

Next.js example ssr-caching is currently broken (vercel/next.js#16372, vercel/next.js#16725) because there is no way to have Next.js ssr-render the page content on a cache miss without it also writing and .end-ing the response. Previous versions of the example relied on app.renderToHTML which has been deprecated because it's an internal API (vercel/next.js#14737).

I suggest we add a flag to bypass the .send() in cacheable-response when a cache miss happens, for those cases where the framework takes care of the req/res lifecycle till the end. We can add it either as an option in cacheableResponse({}) or as a property of .get()'s return value.

sending a HEAD will cache an empty result

"cacheable-response": "^2.7.3",

When you do a curl -I (or curl --head), cacheable-response will cache the result (in case of a MISS or STALE), which is usually empty (depending on the underlying web app). Subsequent HITs will therefore return an empty page, even if called with another method.

It's therefore possible for a malicious user to perform a kind of denial-of-service attack.

My suggestion would be to only allow GET requests to be cached and/or to always incorporate the method into the cache key.
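
A minimal sketch of that second suggestion using the key option documented above, so a HEAD and a GET for the same URL get distinct cache entries (doSomething stands in for the real handler):

const cacheableResponse = require('cacheable-response')

const ssrCache = cacheableResponse({
  // Prefix the default URL-based key with the HTTP method so a cached
  // (empty) HEAD response can never be served for a GET request.
  key: ({ req }) => `${req.method}:${req.url}`,
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})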

customizable "force"-param

Problem: some customers shared URLs with force params, unaware that this param is only intended for internal usage. So the customer asked us to change it and show a 404 when a force param is used.

The force param is currently hard-coded, so a change to this library is required.

i18n support

Hey, I went through the docs and am not sure how it handles the language in a request.
Does the cache invalidate if the language in the request changes?

I have a Next.js app running with i18next, and SSR got pretty buggy when I tried to use cacheable-response.

Thanks a lot for your work, by the way; the docs are very well written and the package itself is very easy to use.

aws lambda and cloudflare cdn

Hello,

This package seems to be great, but before using it I would like to know if I can use it with AWS Lambda, given serverless function limitations...

and also how I can use it along with a CloudFlare cache-everything setup.

Thank you very much.

type mismatch in index.d.ts

  • getKey is optional, but is defined as required
  • send is defined as (SendParams, ...any[]) but is called with send({ data, res, ...props })

Possible to flush cache on running instance without having to restart?

Hi there,

Is there an option (didn't find any in the documentation, so I guess this is a feature request disguised as a question :-) ) to forcefully flush the cache of a running express instance which uses this library to cache requests?

Use case: I use a headless CMS to hydrate a SSR react app. The headless CMS is able to call web hooks on create/update/delete actions. My plan would be to implement a route in express (web hook endpoint) to allow cache being flushed whenever a CMS user changes content, as this can cause the entire page to be stale (e.g. changing menu items).

Maybe there's already a hidden query parameter for that, like force?
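
Not a documented feature, but one way to approximate it is to hold a reference to the Keyv instance passed as the cache option and expose a webhook route that calls its clear() method (part of Keyv's standard API). A minimal sketch, assuming express and a hypothetical /flush-cache endpoint:

const express = require('express')
const Keyv = require('keyv')
const cacheableResponse = require('cacheable-response')

const cache = new Keyv({ namespace: 'ssr' })

const ssrCache = cacheableResponse({
  cache, // keep a handle on the store so it can be flushed later
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})

const app = express()

// Hypothetical webhook endpoint the CMS calls on create/update/delete.
app.post('/flush-cache', async (req, res) => {
  await cache.clear() // empties the whole namespace
  res.sendStatus(204)
})

app.use((req, res) => ssrCache({ req, res }))
app.listen(3000)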

Type Description for Typescript

Problem

I was trying to use the library on a typescript project and ended up having to silence or solve several linting errors since the type description for the library is not available.

Solution

I wrote a sample type description file on this GitHub Gist

Action(s)

  • If it already exists, please point me to a type description that I can use
  • Please review the file and suggest any changes.
  • Upon approval, proceed with adding the description to DefinitelyTyped for access by future users.

Thank you.
@Kikobeats

iltorb has been deprecated

according to iltorb's author:

The zlib module provides APIs for brotli compression/decompression starting with Node.js v10.16.0, please use it over iltorb

How can zlib be used to initialize cacheable-response with the compress option set to true?

Yarn workspaces - EventSource's response has a MIME type ("text/html") that is not "text/event-stream"

I have a project with a yarn workspaces architecture, following this example from the Next.js documentation, that looks like:

packages
|- webapp
| |- pages
| |- ...
| |- server.js
|- libraryA
|- libraryB

server.js looks like

const cacheableResponse = require('cacheable-response');

const ssrCache = cacheableResponse({
  get: async ({ req, res, pagePath, queryParams }) => ({
    data: await app.renderToHTML(req, res, pagePath, queryParams),
    ttl: 7200000 // 2 hours
  }),
  send: ({ data, res }) => res.send(data)
});

app.prepare()
  .then(() => {
    const server = express();

    server.use('/_next', express.static(path.join(__dirname, '.next')));
    server.use('/static', express.static(path.join(__dirname, 'static')));
    
    .... 

    server.get('/account', (req, res) => {
        return ssrCache({ req, res, pagePath: '/account' });
    });

    ...
  }
);

And I'm getting these errors on the client:

EventSource's response has a MIME type ("text/html") that is not "text/event-stream". Aborting the connection.
 Error with on-demand-entries-ping: Unexpected token < in JSON at position 0

Which makes the website eventually fail until I make several hard refreshes.

Do you have any idea why?
Or maybe do you have a working example with yarn workspaces?

Also, I had to specify server.use('/_next', express.static(path.join(__dirname, '.next'))) and server.use('/static', express.static(path.join(__dirname, 'static'))) because it didn't work without them.

Thanks 🙂

Making iltorb an optional dependency to reduce default package footprint

With the addition of compress: true / false in #16, the module has become pretty heavy. It adds over 3MB of binaries via iltorb, which are not needed for most users. The fact that cacheable-response brings in binaries also makes its setup potentially error-prone on systems for which the binaries are not available. When the app is deployed (e.g. via docker), the size of the artifact also increases.

Given that compress is false by default and that the feature was added only 8 days ago, would it be possible to make iltorb an optional peer dependency? That should help the majority of users quite a lot.

An example of a module that does not install all its dependencies by default is https://github.com/cyrilwanner/next-optimized-images, if that's interesting. WDYT?

?force=true to clear a cache

Hi, I use cacheable-response and it is very nice! thank you for the amazing job!

After all, I don't think it is an issue... it's more of a question...

If I use my URL with ?force=true, will the next access generate a new cache copy?

I mean, I have a news website, and if I find a mistake I don't want to wait 10 minutes for people to receive the new version of the site. Will ?force=true clear the cache?

How to avoid caching when SSR data fetching errors

cacheable-response works fine in the normal case,
but in some cases I don't want to use the cache.
E.g., when SSR data fetching errors, it will cache the error page.
How can I avoid caching in this case?

caching header not available in response headers

I have created a next.js application from this example: https://github.com/vercel/next.js/tree/canary/examples/ssr-caching
In the example, the cache-control header exists, but in my app no cache-control header is available.
Does that mean my configuration has a problem?

const ssrCache = cacheableResponse({
  ttl: 1000 * 60 * 60, // 1hour
  get: async ({ req, res }) => {
    const data = await app.render(req, res, '/p', {
      ...req.query,
      ...req.params
    })
    // Add here custom logic for when you do not want to cache the page, for
    // example when the page returns a 404 status code:
    if (res.statusCode === 404) {
      res.end(data)
      return
    }
    const ttl = 1000 * 60 * 60 // 1hour
    const headers = { userAgent: 'cacheable-response' }

    return { data, ttl, headers }
  },
  send: ({ req, res, data, headers }) => {
    res.setHeader('user-agent', headers.userAgent)
    res.send(data)
  }
})
app.prepare().then(() => {
  const server = express()
  server.use(compression())
  server.use(bodyParser.json())
  server.use(cookieParser())
  server.get('/p/:random_key/:product_name/', (req, res) => {
    return ssrCache({ req, res })
  })
})

export default `getKey`/`createKey` function

The default getKey function, or more precisely the createKey function, is not that trivial.

In our case we want to prefix it with a build id, so that after a deployment responses get a new cache key (and therefore a MISS).

This is often crucial because the cached response also references resources like JS bundles and CSS files. If the names of these bundles change (which is often the case), the cached response will point to JS bundles that no longer exist on the server.
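
A minimal sketch of that idea using the key option, assuming a BUILD_ID value available at startup (with Next.js it could, for example, be read from .next/BUILD_ID); the prefix guarantees every deployment starts with fresh cache entries:

const cacheableResponse = require('cacheable-response')

// Hypothetical build identifier, injected at deploy time.
const BUILD_ID = process.env.BUILD_ID || 'dev'

const ssrCache = cacheableResponse({
  // Prefix the default URL-based key with the build id so a new deployment
  // never serves HTML referencing the previous build's bundles.
  key: ({ req }) => `${BUILD_ID}:${req.url}`,
  get: ({ req, res }) => ({ data: doSomething(req) }),
  send: ({ data, res }) => res.send(data)
})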

Caching until the data has changed

Hi @Kikobeats 👋

I came across your PR in Next.js that replaces LRU cache with the library and decided to drop a line. Curious if cacheable-response could potentially offer a solution to my problem.

Imagine a Next.js app that's sitting in front of a GraphQL server, which serves a more or less static dataset. This can be a list of products, which normally changes once in a while, but this can happen at any time. Alternatively, the website can focus on showing weather forecasts, which update a few times a day.

What we need for these conditionally static websites is to avoid rendering React trees on every request. At the same time, we want an option to flush or miss the cache before any of the requests, after running some async checker. On a product website this could be a light GraphQL query to the backend asking for the modification time, and on the weather website we can just check the current time. In both of these cases, I would like to cache pages forever until the condition changes, but at the same time I would never want to send the x-cache-expired-at header to the clients, because any page refresh at any time can unexpectedly produce new data.

Implementing these tricks seems somewhat possible by adding additional express logic around your module, but I'm wondering if it's possible to add some native support. WDYT?

Thanks for your work!

handle static files

Hello, thank you for this great library.
I've been facing this error every time a request is made to load static files with extensions such as js, json, png, jpg, jpeg, svg.

Why is it happening exactly?
Anything I can do to fix it?

RenderToNodeStream

Hi,
Is there any possibility to use it with renderToNodeStream?

I don't know how to get the full HTML out of this method.

Cacheable response setting headers

Hi,
I was able to resolve my previous issue on my own.

Now I am having a difficult time setting headers.

Please see my server.js file:

const cacheableResponse = require("cacheable-response");
const express = require("express");
const next = require("next");

const port = parseInt(process.env.PORT, 10) || 5000;
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });

const handle = app.getRequestHandler();

const ssrCache = cacheableResponse({
  get: async ({ req, res, path }) => ({
    data: await new Promise((resolve) => {
      let data = app.render(req, res, path, {
        ...req.query,
        ...req.params,
      });
      return data
    }),
    ttl: 7200000, // 2 hours
  }),
  send: ({ data, res, req }) => res.send(data),
});

app.prepare().then(() => {
  const server = express();

  server.get("/post/:id", (req, res) => ssrCache({ req, res, path: "/post" }));

  server.get("/", (req, res) => ssrCache({ req, res, path: '/index' }));

  server.get("*", (req, res) => handle(req, res));

  server.listen(port, (err) => {
    if (err) throw err;
    console.log(`> Ready on http://localhost:${port}`);
  });
});

But with this, the headers are not being set, as shown in

node_modules\cacheable-response\index.js

Can you please help me by letting me know where I am going wrong?

How to flush cache at background after every request ?

Hi, I noticed that adding ?force=true flushes the cache. So how about flushing the cache in the background after every request (doing the same thing without the force param)?

Just like SWR: by constantly revalidating, I think this could always return a fresh cache copy to the browser.

error importing cacheable-response

Hi there,

I am facing this issue when trying to use this library:

ERROR in ./node_modules/iltorb/build/bindings/iltorb.node 1:0
Module parse failed: Unexpected character '�' (1:0)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
(Source code omitted for this binary file)
@ ./node_modules/iltorb/index.js 10:39-78
@ ./node_modules/compress-brotli/index.js
@ ./node_modules/cacheable-response/index.js

(It also seems iltorb is now deprecated and recommends using zlib. Not sure if they are related.)

Any help appreciated!
