Simplified HTTP/1(.1) and HTTP/2 requests with Server Push Support

License: Apache License 2.0


fetch's Introduction

Adobe Fetch

Light-weight Fetch API implementation transparently supporting both HTTP/1(.1) and HTTP/2



About

@adobe/fetch in general adheres to the Fetch API Specification, implementing a subset of the API. However, there are some notable deviations:

  • Response.body returns a Node.js Readable stream.
  • Response.blob() is not implemented. Use Response.buffer() instead.
  • Response.formData() is not implemented.
  • Cookies are not stored by default. However, cookies can be extracted and passed by manipulating request and response headers.
  • The following values of the fetch() option cache are supported: 'default' (the implicit default) and 'no-store'. All other values are currently ignored.
  • The following fetch() options are ignored due to the nature of Node.js and since @adobe/fetch doesn't have the concept of web pages: mode, referrer, referrerPolicy, integrity and credentials.
  • The fetch() option keepalive is not supported, but you can use the h1.keepAlive context option (see the HTTP/1.1 Keep-Alive example below).

@adobe/fetch also supports the following non-spec extensions:

  • Response.buffer() returns a Node.js Buffer.
  • Response.url contains the final URL when following redirects.
  • The body sent in a Request can also be a Node.js Readable stream, a Buffer, a string or a plain object.
  • There are no forbidden header names.
  • The Response object has an extra property httpVersion which is one of '1.0', '1.1' or '2.0', depending on what was negotiated with the server.
  • The Response object has an extra property fromCache indicating whether the response was retrieved from the cache.
  • The Response object has an extra property decoded indicating whether the response body was automatically decoded (see Fetch option decode below).
  • Response.headers.plain() returns the headers as a plain object.
  • Response.headers.raw() returns the internal/raw representation of the headers where e.g. the Set-Cookie header is represented as an array of string values.
  • The Fetch option follow limits the number of redirects to follow (default: 20).
  • The Fetch option compress enables transparent gzip/deflate/br content encoding (default: true).
  • The Fetch option decode enables transparent gzip/deflate/br content decoding (default: true).

Note that non-standard Fetch options have been aligned with node-fetch where appropriate.

Features

  • Supports a reasonable subset of the standard Fetch specification
  • Transparent handling of HTTP/1(.1) and HTTP/2 connections
  • RFC 7234 compliant cache
  • Supports gzip/deflate/br content encoding
  • HTTP/2 request and response multiplexing support
  • HTTP/2 Server Push support (transparent caching and explicit listener support)
  • Overridable User-Agent
  • Low-level HTTP/1.* agent/connect options support (e.g. keepAlive, rejectUnauthorized)

ESM/CJS support

This package is native ESM and no longer provides CommonJS exports. Use a 3.x version if you still need to use this package with CommonJS.

Installation

Note:

As of v4, Node.js version >= 14.16 is required.

$ npm install @adobe/fetch

API

Apart from the standard Fetch API

  • fetch()
  • Request
  • Response
  • Headers
  • Body

@adobe/fetch exposes the following non-spec extensions:

  • context() - creates a new customized API context
  • reset() - resets the current API context, i.e. closes pending sessions/sockets, clears internal caches, etc.
  • onPush() - registers an HTTP/2 Server Push listener
  • offPush() - deregisters a listener previously registered with onPush()
  • clearCache() - clears the HTTP cache (cached responses)
  • cacheStats() - returns cache statistics
  • noCache() - creates a customized API context with disabled caching (convenience)
  • h1() - creates a customized API context with enforced HTTP/1.1 protocol (convenience)
  • keepAlive() - creates a customized API context with enforced HTTP/1.1 protocol and persistent connections (convenience)
  • h1NoCache() - creates a customized API context with disabled caching and enforced HTTP/1.1 protocol (convenience)
  • keepAliveNoCache() - creates a customized API context with disabled caching and enforced HTTP/1.1 protocol with persistent connections (convenience)
  • createUrl() - creates a URL with query parameters (convenience)
  • timeoutSignal() - creates a timeout signal (convenience)

Context

An API context lets you customize certain aspects of the implementation and isolates internal structures (session caches, HTTP cache, etc.) per context.

The following options are supported:

interface ContextOptions {
  /**
   * Value of `user-agent` request header
   * @default 'adobe-fetch/<version>'
   */
  userAgent?: string;
  /**
   * The maximum total size of the cached entries (in bytes). 0 disables caching.
   * @default 100 * 1024 * 1024
   */
  maxCacheSize?: number;
  /**
   * The protocols to be negotiated, in order of preference
   * @default [ALPN_HTTP2, ALPN_HTTP1_1, ALPN_HTTP1_0]
   */
  alpnProtocols?: ReadonlyArray<ALPNProtocol>;
  /**
   * How long (in milliseconds) should ALPN information be cached for a given host?
   * @default 60 * 60 * 1000
   */
  alpnCacheTTL?: number;
  /**
   * (HTTPS only, applies to HTTP/1.x and HTTP/2)
   * If not false, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails; err.code contains the OpenSSL error code.
   * @default true
   */
  rejectUnauthorized?: boolean;
  /**
   * Maximum number of ALPN cache entries
   * @default 100
   */
  alpnCacheSize?: number;
  h1?: Http1Options;
  h2?: Http2Options;
}

interface Http1Options {
  /**
   * Keep sockets around in a pool to be used by other requests in the future.
   * @default false
   */
  keepAlive?: boolean;
  /**
   * When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive.
   * Only relevant if keepAlive is set to true.
   * @default 1000
   */
  keepAliveMsecs?: number;
  /**
   * (HTTPS only)
   * If not false, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails; err.code contains the OpenSSL error code.
   * @default true
   */
  rejectUnauthorized?: boolean;
  /**
   * (HTTPS only)
   * Maximum number of TLS cached sessions. Use 0 to disable TLS session caching.
   * @default 100
   */
  maxCachedSessions?: number;
}

interface Http2Options {
  /**
   * Max idle time in milliseconds after which a session will be automatically closed. 
   * @default 5 * 60 * 1000
   */
  idleSessionTimeout?: number;
  /**
   * Enable HTTP/2 Server Push?
   * @default true
   */
  enablePush?: boolean;
  /**
   * Max idle time in milliseconds after which a pushed stream will be automatically closed. 
   * @default 5000
   */
  pushedStreamIdleTimeout?: number;
  /**
   * (HTTPS only)
   * If not false, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails; err.code contains the OpenSSL error code.
   * @default true
   */
  rejectUnauthorized?: boolean;
}

Common Usage Examples

Access Response Headers and other Meta data

  import { fetch } from '@adobe/fetch';

  const resp = await fetch('https://httpbin.org/get');
  console.log(resp.ok);
  console.log(resp.status);
  console.log(resp.statusText);
  console.log(resp.httpVersion);
  console.log(resp.headers.plain());
  console.log(resp.headers.get('content-type'));

Fetch JSON

  import { fetch } from '@adobe/fetch';

  const resp = await fetch('https://httpbin.org/json');
  const jsonData = await resp.json();

Fetch text data

  import { fetch } from '@adobe/fetch';

  const resp = await fetch('https://httpbin.org/');
  const textData = await resp.text();

Fetch binary data

  import { fetch } from '@adobe/fetch';

  const resp = await fetch('https://httpbin.org/stream-bytes/65535');
  const imageData = await resp.buffer();

Specify a timeout for a fetch operation

Using timeoutSignal(ms) non-spec extension:

  import { fetch, timeoutSignal, AbortError } from '@adobe/fetch';

  const signal = timeoutSignal(1000);
  try {
    const resp = await fetch('https://httpbin.org/json', { signal });
    const jsonData = await resp.json();
  } catch (err) {
    if (err instanceof AbortError) {
      console.log('fetch timed out after 1s');
    }
  } finally {
    // avoid pending timers which prevent node process from exiting
    signal.clear();
  }

Using AbortController:

  import { fetch, AbortController, AbortError } from '@adobe/fetch';

  const controller = new AbortController();
  const timerId = setTimeout(() => controller.abort(), 1000);
  const { signal } = controller;

  try {
    const resp = await fetch('https://httpbin.org/json', { signal });
    const jsonData = await resp.json();
  } catch (err) {
    if (err instanceof AbortError) {
      console.log('fetch timed out after 1s');
    }
  } finally {
    // avoid pending timers which prevent node process from exiting
    clearTimeout(timerId);
  }

Stream an image

  import { createWriteStream } from 'fs';
  import { fetch } from '@adobe/fetch';

  const resp = await fetch('https://httpbin.org/image/jpeg');
  resp.body.pipe(createWriteStream('saved-image.jpg'));

Post JSON

  import { fetch } from '@adobe/fetch';

  const method = 'POST';
  const body = { foo: 'bar' };
  const resp = await fetch('https://httpbin.org/post', { method, body });

Post JPEG image

  import { createReadStream } from 'fs';
  import { fetch } from '@adobe/fetch';

  const method = 'POST';
  const body = createReadStream('some-image.jpg');
  const headers = { 'content-type': 'image/jpeg' };
  const resp = await fetch('https://httpbin.org/post', { method, body, headers });

Post form data

  import { FormData, Blob, File } from 'formdata-node'; // spec-compliant implementations
  import { fileFromPath } from 'formdata-node/file-from-path'; // helper for creating File instance from disk file

  import { fetch } from '@adobe/fetch';

  const method = 'POST';
  const fd = new FormData();
  fd.set('field1', 'foo');
  fd.set('field2', 'bar');
  fd.set('blob', new Blob([new Uint8Array([0x68, 0x65, 0x6c, 0x69, 0x78, 0x2d, 0x66, 0x65, 0x74, 0x63, 0x68])]));
  fd.set('file', new File(['File content goes here'], 'file.txt'));
  fd.set('other_file', await fileFromPath('/foo/bar.jpg', 'bar.jpg', { type: 'image/jpeg' }));
  const resp = await fetch('https://httpbin.org/post', { method, body: fd });

GET with query parameters object

import { createUrl, fetch } from '@adobe/fetch';

const qs = {
  fake: 'dummy',
  foo: 'bar',
  rumple: "stiltskin",
};

const resp = await fetch(createUrl('https://httpbin.org/json', qs));

or using URLSearchParams:

import { fetch } from '@adobe/fetch';

const body = new URLSearchParams({
  fake: 'dummy',
  foo: 'bar',
  rumple: "stiltskin",
});

const resp = await fetch('https://httpbin.org/json', { body });

Cache

Responses of GET and HEAD requests are cached by default, according to the rules of RFC 7234:

import { fetch } from '@adobe/fetch';

const url = 'https://httpbin.org/cache/60'; // -> max-age=60 (seconds)
// send initial request, priming cache
let resp = await fetch(url);
assert(resp.ok);
assert(!resp.fromCache);

// re-send request and verify it's served from cache
resp = await fetch(url);
assert(resp.ok);
assert(resp.fromCache);

You can disable caching per request with the cache: 'no-store' option:

import { fetch } from '@adobe/fetch';

const resp = await fetch('https://httpbin.org/', { cache: 'no-store' });
assert(resp.ok);
assert(!resp.fromCache);

You can disable caching entirely:

import { noCache } from '@adobe/fetch';
const { fetch } = noCache();

Advanced Usage Examples

HTTP/2 Server Push

Note that pushed resources are automatically and transparently added to the cache. You can, however, add a listener which will be notified of every pushed (and cached) resource.

  import { fetch, onPush } from '@adobe/fetch';

  onPush((url, response) => console.log(`received server push: ${url} status ${response.status}`));

  const resp = await fetch('https://nghttp2.org');
  console.log(`Http version: ${resp.httpVersion}`);

Use h2c (http2 cleartext w/prior-knowledge) protocol

  import { fetch } from '@adobe/fetch';

  const resp = await fetch('http2://nghttp2.org');
  console.log(`Http version: ${resp.httpVersion}`);

Force HTTP/1(.1) protocol

  import { h1 } from '@adobe/fetch';
  const { fetch } = h1();

  const resp = await fetch('https://nghttp2.org');
  console.log(`Http version: ${resp.httpVersion}`);

HTTP/1.1 Keep-Alive

import { keepAlive } from '@adobe/fetch';
const { fetch } = keepAlive();

const resp = await fetch('https://httpbin.org/status/200');
console.log(`Connection: ${resp.headers.get('connection')}`); // -> keep-alive

Extract Set-Cookie Header

Unlike browsers, you can access raw Set-Cookie headers manually using Headers.raw(). This is an @adobe/fetch only API.

import { fetch } from '@adobe/fetch';

const resp = await fetch('https://httpbin.org/cookies/set?a=1&b=2');
// returns an array of values, instead of a string of comma-separated values
console.log(resp.headers.raw()['set-cookie']);

Self-signed Certificates

import { context } from '@adobe/fetch';
const { fetch } = context({ rejectUnauthorized: false });

const resp = await fetch('https://localhost:8443/');  // a server using a self-signed certificate

Set cache size limit

  import { context } from '@adobe/fetch';
  const { fetch, cacheStats } = context({
    maxCacheSize: 100 * 1024, // 100 KB (default: 100 MB)
  });

  let resp = await fetch('https://httpbin.org/bytes/60000'); // ~60kb response
  resp = await fetch('https://httpbin.org/bytes/50000'); // ~50kb response
  console.log(cacheStats());

Disable caching

  import { noCache } from '@adobe/fetch';
  const { fetch } = noCache();

  let resp = await fetch('https://httpbin.org/cache/60'); // -> max-age=60 (seconds)
  // re-fetch
  resp = await fetch('https://httpbin.org/cache/60');
  assert(!resp.fromCache);

Set a custom user agent

  import { context } from '@adobe/fetch';
  const { fetch } = context({
    userAgent: 'custom-fetch'
  });

  const resp = await fetch('https://httpbin.org/user-agent');
  const json = await resp.json();
  console.log(json['user-agent']);

More examples

More example code can be found in the test source files.

Development

Build

$ npm install

Test

$ npm test

Lint

$ npm run lint

Troubleshooting

You can enable @adobe/fetch low-level debug console output by setting the DEBUG environment variable to adobe/fetch*, e.g.:

$ DEBUG=adobe/fetch* node test.js

This will produce console output similar to:

  ...
  adobe/fetch:core established TLS connection: #48 (www.nghttp2.org) +2s
  adobe/fetch:core www.nghttp2.org -> h2 +0ms
  adobe/fetch:h2 reusing socket #48 (www.nghttp2.org) +2s
  adobe/fetch:h2 GET www.nghttp2.org/httpbin/user-agent +0ms
  adobe/fetch:h2 session https://www.nghttp2.org established +1ms
  adobe/fetch:h2 caching session https://www.nghttp2.org +0ms
  adobe/fetch:h2 session https://www.nghttp2.org remoteSettings: {"headerTableSize":8192,"enablePush":true,"initialWindowSize":1048576,"maxFrameSize":16384,"maxConcurrentStreams":100,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":true} +263ms
  adobe/fetch:h2 session https://www.nghttp2.org localSettings: {"headerTableSize":4096,"enablePush":true,"initialWindowSize":65535,"maxFrameSize":16384,"maxConcurrentStreams":4294967295,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":false} +0ms
  adobe/fetch:h2 session https://www.nghttp2.org closed +6ms
  adobe/fetch:h2 discarding cached session https://www.nghttp2.org +0ms
  ... 

Additionally, you can enable Node.js low-level debug console output by setting the NODE_DEBUG environment variable appropriately, e.g.

$ export NODE_DEBUG=http*,stream*
$ export DEBUG=adobe/fetch*

$ node test.js

Note: this will flood the console with highly verbose debug output.

Acknowledgement

Thanks to node-fetch and github/fetch for providing a solid implementation reference.

License

Apache 2.0

fetch's People

Contributors

dependabot[bot], dominique-pfister, marquiserosier, maxakuru, renovate-bot, renovate[bot], semantic-release-bot, sjinks, stefan-guggisberg, thejc, trieloff, tripodsan


fetch's Issues

Replace underlying Fetch API implementation

Is your feature request related to a problem? Please describe.
fetch-h2, the current underlying Fetch API implementation, has a serious scalability bottleneck: Parallel requests to the same HTTP/2 origin are effectively serialised, thus defeating the purpose of HTTP/2 Multiplexing. See #14.

Describe the solution you'd like
A. Replacing fetch-h2 with another Fetch API implementation which transparently supports HTTP/1(.1) and HTTP/2 and which provides better scalability for parallel requests. node-fetch-h2 seems to be a good candidate.

B. Alternatively, implement the relevant parts of the Fetch API and write a simple abstraction of the builtin http, https and http2 modules tailored to our needs.

Describe alternatives you've considered
I've created a fetch-h2 issue and looked into creating a PR. The code however is hardly documented and the flow hard to follow. The maintainer doesn't seem to be very responsive either.

make cache size limit configurable

Is your feature request related to a problem? Please describe.
Currently an in-memory cache with LRU eviction policy is used with a hardcoded max_entries limit.

Describe the solution you'd like
It should be possible to configure the cache with a size limit, i.e. the maximum amount of memory used by the cached entries. While it won't be possible to calculate an exact heap memory footprint, an approximation should be fine.

error fetching content via `post` (lowercase)

Description

async function testFetch() {
  const context = fetchAPI.context({
    // httpProtocol: 'http1',
    // httpsProtocols: ['http1'],
  });
  try {
    const resp = await context.fetch('https://httpbin.org/post', {
      method: 'post',
      headers: {
        accept: 'application/json',
      }
    });
    console.log(resp.status);
    const text = await resp.text();
    return {
      statusCode: 200,
      body: text,
    };
  } finally {
    await context.disconnectAll();
  }
}

produces:

Error [ERR_HTTP2_STREAM_ERROR]: Stream closed with error code NGHTTP2_PROTOCOL_ERROR
    at ClientHttp2Stream._destroy (internal/http2/core.js:2095:13)
    at ClientHttp2Stream.destroy (internal/streams/destroy.js:37:8)
    at ClientHttp2Stream.[kMaybeDestroy] (internal/http2/core.js:2111:12)
    at Http2Stream.onStreamClose [as onstreamclose] (internal/http2/core.js:479:26)

Additional context
after forcing http1, it works.

v2: PATCH doesn't work with (json) objects

Description

  const { fetch } = require('@adobe/helix-fetch');

  const method = 'PATCH';
  const body = { foo: 'bar' };
  const resp = await fetch('https://httpbin.org/patch', { method, body });

doesn't work

The automated release is failing 🚨

🚨 The automated release from the 3.x branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I'm sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the 3.x branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


The release 3.4.0 on branch 3.x cannot be published as it is out of range.

Based on the releases published on other branches, only versions within the range >=3.3.1 <3.4.0 can be published from branch 3.x.

The following commits are responsible for the invalid release:

  • fix: branch release (4895d43)
  • fix: branch release (4f1782e)
  • feat: force new 3.x release (cb7be50)
  • chore(deps): update external fixes (bb901c2)
  • chore(deps): update external fixes (#340) (75c5d81)
  • build(deps): bump json5 from 1.0.1 to 1.0.2 (#335) (566d164)
  • chore(deps): update dependency husky to v8.0.3 (#334) (4628278)
  • chore(deps): update dependency eslint to v8.31.0 (899cfe2)

Those commits should be moved to a valid branch with git merge or git cherry-pick and removed from branch 3.x with git revert or git reset.

A valid branch could be main.

See the workflow configuration documentation for more details.


Good luck with your project ✨

Your semantic-release bot 📦🚀

process exits with http2 and redirects

Description
when fetching content from a http2 server that sends redirects, the process just quits. no error, no trace, nothing.

when forcing http1, it works.

To Reproduce

async function testFetch() {
  const context = fetchAPI.context({
    // httpProtocol: 'http1',
    // httpsProtocols: ['http1'],
  });
  try {
    console.log('fetching....');
    const resp = await context.fetch('https://embed.spotify.com/?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU', {
      redirect: 'follow',
    });
    console.log(resp.ok, resp.status, resp.headers);

    console.log(await resp.text());
  } catch (e) {
    console.error(e);
    throw e;
  } finally {
    await context.disconnectAll();
  }
}

the url above sends the 2 redirects:

$ curl -I "https://embed.spotify.com/?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU"
HTTP/2 302
server: envoy
date: Thu, 28 May 2020 02:00:38 GMT
content-type: text/html
content-length: 154
location: https://open.spotify.com/embed/?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU
strict-transport-security: max-age=31536000
x-content-type-options: nosniff
via: HTTP/2 edgeproxy, 1.1 google
alt-svc: clear

$ curl -I "https://open.spotify.com/embed/?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU"
HTTP/2 301
server: envoy
date: Thu, 28 May 2020 02:00:52 GMT
content-type: text/html
content-length: 178
location: https://open.spotify.com/embed?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU
strict-transport-security: max-age=31536000
x-content-type-options: nosniff
via: HTTP/2 edgeproxy, 1.1 google
alt-svc: clear

$ curl -I "https://open.spotify.com/embed?uri=spotify:artist:4gzpq5DPGxSnKTe4SA8HAU"
HTTP/2 200
server: envoy
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
...

Option to force a specific protocol

Is your feature request related to a problem? Please describe.
helix-fetch is transparently using the built-in http, https and http2 modules under the hood. The majority of testing tools like nock or pollyjs however only support the http/https modules.

Describe the solution you'd like
Provide an option to force e.g. the HTTP/1(.1) protocol.

Additional context
adobe/helix-pipeline#592 (comment)

[TypeScript] TypeError: Cannot read properties of undefined (reading 'ALPN_HTTP1_1')

Description

import { ALPNProtocol, context } from '@adobe/fetch';

const ctx = context({
    alpnProtocols: [ALPNProtocol.ALPN_HTTP1_1],
});

When transpiled into JS, this generates an error:

TypeError: Cannot read properties of undefined (reading 'ALPN_HTTP1_1')

This happens because the alpnProtocols line gets transpiled like this:

alpnProtocols: [fetch_1.ALPNProtocol.ALPN_HTTP1_1],

which is obviously incorrect.

To Reproduce

Please see above.

Expected behavior

  • No error message
  • ALPNProtocol.ALPN_HTTP1_1 gets transpiled as "http/1.1"

Version: 3.2.1

Additional context

Support AbortController for aborting fetch operations

Is your feature request related to a problem? Please describe.
AbortController is part of the DOM Living Standard. It allows aborting an in-progress fetch operation in a standard way via the signal option. Specifically, it allows applying a timeout to a fetch operation in a standard-compliant, portable way.

Describe the solution you'd like
Export AbortController, AbortError and optionally a small helper timeoutSignal for specifying a timeout signal to support the following code snippets:

const { fetch, AbortController, AbortError } = require('@adobe/helix-fetch');

const controller = new AbortController();
setTimeout(() => controller.abort(), 1000);

try {
  await fetch('https://httpbin.org/delay/2', { signal: controller.signal });
} catch (err) {
  if (err instanceof AbortError) {
    console.log('fetch timed out');
  }
}

or shorter:

const { fetch, timeoutSignal, AbortError } = require('@adobe/helix-fetch');

try {
  await fetch('https://httpbin.org/delay/2', { signal: timeoutSignal(1000) });
} catch (err) {
  if (err instanceof AbortError) {
    console.log('fetch timed out');
  }
}

The existing non-standard timeout option and TimeoutError should be deprecated.

Additional context
Abortable fetch
fetch timeout discussion

process not terminated when fetch request failed due to timeout.

Description
When I run a test to verify that the timeout option works, the process hangs if the timeout occurs.

To Reproduce
run:

const assert = require('assert');
const nock = require('nock');
const fetchAPI = require('@adobe/helix-fetch');

async function run() {
  nock('https://www.example.com')
    .get('/test.html')
    .delay(2000)
    .reply(200, 'foo');

  // create own context and disable http2
  const context = fetchAPI.context({
    httpProtocol: 'http1',
    httpsProtocols: ['http1'],
  });
  try {
    const resp = await context.fetch('https://www.example.com/test.html', {
      cache: 'no-store',
      redirect: 'follow',
      timeout: 1000,
    });
    const text = await resp.text();
    console.log(resp.ok, resp.status, resp.headers, text);
    assert.fail('should timeout');
  } catch (e) {
    console.error(e);
    assert.equal(e.message, 'GET https://www.example.com/test.html timed out after 1000 ms');
  } finally {
    // await context.disconnectAll();
  }
}

run().catch(console.error);

Expected behavior
Don't know if this is expected, but the process hangs for a while.
calling context.disconnectAll() solves the problem.

Set-Cookie response headers lose semantics

Description
When a server returns more than one set-cookie response headers, the response.headers.get('set-cookie') will return one value, where all values are concatenated by commas.

To Reproduce
Steps to reproduce the behavior:

  1. Fetch a URL from a server that returns multiple set-cookie headers, e.g. https://www.dropbox.com
  2. Look at the value of response.headers.get('set-cookie')
  3. When the response looks like this in curl:
set-cookie:  gvc=MjM0MDU4MTU5Nzg3MzE0MzE2Nzk4NTk1MDA0NDkxMTk5NTEzNTc2; expires=Sun, 10 Oct 2027 13:36:25 GMT; HttpOnly; Path=/; SameSite=None; Secure
set-cookie:  t=lTCMMujd5gvO17wv9zPLRrvC; Domain=dropbox.com; expires=Fri, 10 Oct 2025 13:36:24 GMT; HttpOnly; Path=/; SameSite=None; Secure
set-cookie:  __Host-js_csrf=lTCMMujd5gvO17wv9zPLRrvC; expires=Fri, 10 Oct 2025 13:36:24 GMT; Path=/; SameSite=None; Secure
set-cookie:  __Host-ss=jtJon8XHjQ; expires=Fri, 10 Oct 2025 13:36:24 GMT; HttpOnly; Path=/; SameSite=Strict; Secure
set-cookie:  locale=en; Domain=dropbox.com; expires=Sun, 10 Oct 2027 13:36:25 GMT; Path=/; SameSite=None; Secure

you'll notice that response.headers.get('set-cookie') looks like this (line feeds added for readability):

gvc=MjM0MDU4MTU5Nzg3MzE0MzE2Nzk4NTk1MDA0NDkxMTk5NTEzNTc2; 
expires=Sun, 10 Oct 2027 13:36:25 GMT; HttpOnly; Path=/; SameSite=None;
Secure,t=lTCMMujd5gvO17wv9zPLRrvC; ... (rest omitted)

Using a standard cookie parser, it will report that there's a Secure,t cookie, because it is unable to determine that the former is a flag for the first gvc cookie.

Expected behavior
It should be possible to parse the cookies received, either by getting the initial N values for that header or by using a different delimiter.

timeoutSignal may keep node process alive

Description
In my test suite, I'm using @adobe/helix-fetch and its timeoutSignal to abort requests that exceed 20 seconds. If I run one of my tests isolated (with npx mocha -g '...', I notice that the test succeeds, but the node process keeps running for about 20 seconds. If I avoid the call to timeoutSignal, the node process terminates immediately after executing the test.

To Reproduce
Steps to reproduce the behavior:

  1. Create a test that fetches some page with @adobe/helix-fetch
  2. Optionally, nock that request, so it executes even faster
  3. Add a timeoutSignal with a timeout of at least 20 seconds
  4. Run the test isolated with npx mocha -g '...', the process will appear to hang after the test has executed.

Interestingly, this behaviour cannot be reproduced when running the test with it.only.

Expected behavior
Node process should exit when the test has been executed.

Version:
2.2.1

Additional context

node will crash when using AbortController

Description
I use AbortController to control fetch timeouts.
AbortError is caught but the Node.js process still exits.
I tested it; not every timeout crashes the process.
Do I need to try-catch the controller.abort() call?
I used it according to this example: https://github.com/adobe/helix-fetch#specify-a-timeout-for-a-fetch-operation

Expected behavior
process do not crash

Version:
"@adobe/helix-fetch": "^3.0.0"
nodejs v16.13.0

Log:

[2022/1/1 22:01:36][error] fetch data error: AbortError - fetching: The operation was aborted.
[2022/1/1 22:01:36][info] retry after 10s
D:\proj\node_modules\@adobe\helix-fetch\src\core\h2.js:235
      reject(new RequestAbortedError());
             ^
RequestAbortedError
    at EventEmitter.onAbortSignal (D:\proj\node_modules\@adobe\helix-fetch\src\core\h2.js:235:14)
    at EventEmitter.emit (node:events:390:28)
    at EventEmitter.emit (node:domain:475:12)
    at AbortSignal.dispatchEvent (D:\proj\node_modules\@adobe\helix-fetch\src\fetch\abort.js:67:41)
    at AbortSignal.fire (D:\proj\node_modules\@adobe\helix-fetch\src\fetch\abort.js:72:10)
    at AbortController.abort (D:\proj\node_modules\@adobe\helix-fetch\src\fetch\abort.js:137:39)
    at Timeout._onTimeout (D:\proj\src\utils\fetch.ts:44:49)
    at listOnTimeout (node:internal/timers:557:17)
    at processTimers (node:internal/timers:500:7)

Code:

const defaultTimeout = 30 * 1000
const defaultFetch = newFetchContext()
interface TimeoutRequestOptions extends RequestOptions {
    timeout?: number
}
export async function fetchWithTimeout(url: string, options: TimeoutRequestOptions = {}): Promise<Response> {
    const controller = new helixFetch.AbortController()
    const timerId = setTimeout(() => controller.abort(), options.timeout ? options.timeout : defaultTimeout)

    try {
        options.signal = controller.signal
        const response = await defaultFetch.fetch(url, options)
        return response
    } catch (err) {
        if (err.name === 'RequestAbortedError') {
            err = new Error('Fetch Timeout') // <-- does NOT happen
        }
        throw err
    } finally {
        clearTimeout(timerId)
    }
}

The calling code looks roughly like this (pseudocode):

while (true) {
    try {
        try {
            await fetchWithTimeout(someUrl)
        } catch (e) {
            e.message = `fetching: ${e.message}`
            throw e
        }
    } catch (err) {
        logger.error('fetch data error: %s - %s', err.name, err.message)
        logger.info('retry after 10s')
        await sleep(10_000)
    }

    await sleep(someTime)
}

FetchError: The socket is already bound to an Http2Session

Description
After a cached Http2Session is discarded (due to an error, e.g. ECONNRESET, or due to idleSessionTimeout), subsequent HTTP/2 requests to the same host may, on rare occasions, encounter errors like ERR_HTTP2_SOCKET_BOUND, ERR_STREAM_DESTROYED or similar.

To Reproduce
Difficult to reproduce reliably

Response: don't set content-type if there's no body


Sockets not closed when body is not consumed

Description
It looks like sockets are not closed when the response body is not consumed, despite a call to disconnectAll().

To Reproduce

const fetchAPI = require('@adobe/helix-fetch');

async function request(context, error) {
  const resp = await context.fetch('https://adobeioruntime.net/api/v1/web/helix/helix-services/word2md@v1?path=%2Fdefault.md&shareLink=https%3A%2F%2Fadobe.sharepoint.com%2Fsites%2FTheBlog%2FShared%2520Documents%2Ftheblog&rid=VqNCOOblZXzBlnpLTvgG39uWoAIrGDWF&src=adobe%2Ftheblog%2Fdd25127aa92f65fda6a0927ed3fb00bf5dcea069', {
    headers: {
      accept: 'application/json',
    }
  });
  if (error) {
    throw error;
  }
  const text = await resp.text();
  console.log(resp.status, text);
}

async function run() {
  const context = fetchAPI.context({});
  try {
    await request(context);
    await request(context, new Error('user error'));
  } finally {
    await context.disconnectAll();
  }
}
run().catch(console.error);
$ node src/simple.js
404 Error while converting document
Error: user error
    at run (src/simple.js:36:28)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
...(process hangs)...

Note

adding keepAlive: false seems to solve the problem:

  const context = fetchAPI.context({
    http1: {
      keepAlive: false,
    },
  });

expose fetch context to allow custom configuration

Is your feature request related to a problem? Please describe.
helix-fetch uses a transparent default context which encapsulates the cache and provides options for the underlying fetch-h2 module. It's currently not possible to customise the helix-fetch context (e.g. cache limits, separate isolated caches, default protocols etc).

Describe the solution you'd like
helix-fetch should expose the context creation in order to allow different fetch configurations.

Additional context
Related issues:

concurrent http2 requests return the same response for all requests

Description
When issuing several requests at once, they all return the same response if the hostnames resolve to the same IP.

  • switching to http1 solves the problem
  • adding an explicit host header solves the problem.

To Reproduce
run the following:

const { fetch } = require('@adobe/helix-fetch').context({
  // httpProtocol: 'http1',
  // httpsProtocols: ['http1'],
});
const crypto = require('crypto');

async function request(url) {
  console.log('requesting', url)
  const res = await fetch(url, {
    // headers: {
    //   host: new URL(url).hostname,
    // },
    cache: 'no-store',
  });
  const data = await res.text();
  const md5 = crypto.createHash('md5').update(data).digest().toString('hex');
  console.log('result', url, 'md5=', md5, res.headers.raw());
}

async function run() {
  await Promise.all([
    request('https://lr-landing-davidnuescheler.project-helix.page/'),
    request('https://n2-davidnuescheler.hlx.page/index.html'),
    request('https://theblog--adobe.hlx.page/index.html'),
  ]);
}

run().catch(console.error);

result:

$ node index.js
requesting https://lr-landing-davidnuescheler.project-helix.page/
requesting https://n2-davidnuescheler.hlx.page/index.html
requesting https://theblog--adobe.hlx.page/index.html
result https://theblog--adobe.hlx.page/index.html md5= 22971378ba99ace7ba88e6474e7e0c04 { 'cache-control': 'max-age=604800, must-revalidate, private',
  'content-type': 'text/html; charset=UTF-8',
  link:
   '<https://lr-landing-davidnuescheler.project-helix.page/index.html>; rel="canonical"',
  date: 'Thu, 11 Jun 2020 05:18:39 GMT',
  age: '0',
  vary: 'X-Debug,X-Strain,X-Request-Type',
  'strict-transport-security': 'max-age=31536000',
  'x-version': '332; src=332; cli=5.8.2; rev=online' }
result https://n2-davidnuescheler.hlx.page/index.html md5= 22971378ba99ace7ba88e6474e7e0c04 { 'cache-control': 'max-age=604800, must-revalidate, private',
  'content-type': 'text/html; charset=UTF-8',
  link:
   '<https://lr-landing-davidnuescheler.project-helix.page/index.html>; rel="canonical"',
  date: 'Thu, 11 Jun 2020 05:18:39 GMT',
  age: '0',
  vary: 'X-Debug,X-Strain,X-Request-Type',
  'strict-transport-security': 'max-age=31536000',
  'x-version': '332; src=332; cli=5.8.2; rev=online' }
result https://lr-landing-davidnuescheler.project-helix.page/ md5= 22971378ba99ace7ba88e6474e7e0c04 { 'cache-control': 'max-age=604800, must-revalidate, private',
  'content-type': 'text/html; charset=UTF-8',
  link:
   '<https://lr-landing-davidnuescheler.project-helix.page/index.html>; rel="canonical"',
  date: 'Thu, 11 Jun 2020 05:18:39 GMT',
  age: '0',
  vary: 'X-Debug,X-Strain,X-Request-Type',
  'strict-transport-security': 'max-age=31536000',
  'x-version': '332; src=332; cli=5.8.2; rev=online' }

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you could benefit from your bug fixes and new features.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can resolve this πŸ’ͺ.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token allowing to publish to the registry https://registry.npmjs.org/.

If you are using Two-Factor Authentication, make sure the auth-only level is configured. semantic-release cannot publish with the default auth-and-writes level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot πŸ“¦πŸš€

Allow customizing the socket for support proxy setups

Is your feature request related to a problem? Please describe.
Using helix-fetch behind corporate firewalls requires configuring proxy settings.

Describe the solution you'd like
In order to give callers the freedom to implement various setups, helix-fetch should allow using a custom socket for requests. This enables callers to provide their own socket (e.g. using the tunnel package).

Describe alternatives you've considered
An alternative would be for helix-fetch to accept structured proxy settings and set up all the sockets itself. Given the variety of configuration options, this would come at a high cost in terms of API surface.

Additional context
We're happy to contribute this and to work with the team on a good path forward to unblock our users.

Cancelling a POST with a string body generates an asynchronous uncaught error

Description
When doing a POST request with a string passed as the body, helix-fetch internally converts it to a Readable stream. When an ongoing request is aborted, the body stream is destroyed, sending along an error. The code that normalizes the body to a Readable stream adds a handler for the 'error' event, but currently does so only when the original body was already a Readable stream. So if the original body is e.g. a string, the Readable wrapper will not have an error handler, and aborting the request will cause it to emit an 'error' event which is never caught. This causes e.g. tests of the functionality to fail.

To Reproduce
Steps to reproduce the behavior:

  1. run a test where you perform a POST fetch with string data
  2. abort the request
  3. get an error about an uncaught error, which happens after the test has already finished (this is because destroying the Readable stream schedules the error event on the next tick)

Expected behavior
The error should be caught (or not emitted at all).

Persistent ERR_HTTP2_INVALID_SESSION errors when network infrastructure changes

Description
It seems that requests to an HTTP/2 server start to fail if the host's DNS entry changes to a server that doesn't support HTTP/2.
Looking at the code in fetch-h2, it seems that HTTP/2 sessions are cached by hostname rather than by IP.

To Reproduce

not so trivial...

Expected behavior
Ideally, the client wouldn't notice the switch and the request would be restarted internally.

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: config
Error type: Invalid allowedVersions
Message: The following allowedVersions does not parse as a valid version or range: "<15>"

Redirects should be followed by default

Description
helix-fetch doesn't follow redirects by default; you have to explicitly set the redirect: 'follow' option. According to the Fetch spec, redirect: 'follow' should be the implicit default.

To Reproduce

const resp = await fetch('https://httpstat.us/307');
assert.equal(resp.status, 200); // -> fails
assert.equal(resp.redirected, true); // -> fails

Expected behavior
Redirects should be followed by default, i.e. the above example should not fail

timeout does not work for slow responses

Description
The timeout that can be specified in the options only applies to the connection/first-byte response time, not to the overall response time. If the caller wants to limit the time fetch spends downloading a resource, that is currently not possible.

To Reproduce

it('handles dripped response', async() => {
  const context = fetchAPI.context({
    // httpProtocol: 'http1',
    // httpsProtocols: ['http1'],
  });
  try {
    const resp = await context.fetch('https://httpbin.org/drip?duration=10&numbytes=10&code=200&delay=1', {
      cache: 'no-store',
      redirect: 'follow',
      timeout: 3000,
    });
    const text = await resp.text();
    console.log(resp.ok, resp.status, resp.headers, text);
    assert.fail('should timeout');
  } catch (e) {
    console.error(e);
    // ok!
  } finally {
    await context.disconnectAll();
  }
}).timeout(5000);

Expected behavior
The request should time out and the test should pass.
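Until the library covers total response time, a consumer-side workaround can race the body read against an overall deadline and abort when it elapses. The withDeadline helper below is hypothetical, not part of the library:

```javascript
// Hypothetical helper: reject `promise` if it hasn't settled within `ms`,
// invoking `onTimeout` (e.g. controller.abort()) first.
function withDeadline(promise, ms, onTimeout) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => {
      if (onTimeout) onTimeout();
      reject(new Error(`deadline of ${ms}ms exceeded`));
    }, ms);
  });
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Assumed usage with the library's AbortController support:
//   const controller = new AbortController();
//   const resp = await fetch(url, { signal: controller.signal });
//   const text = await withDeadline(resp.text(), 3000, () => controller.abort());
```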

TS7016: Could not find a declaration file for module './fetch/errors'

Description

TypeScript build generates "Could not find a declaration file for module './fetch/errors'" error when trying to build a project that uses @adobe/helix-fetch.

To Reproduce

npm init -y
npm i -D typescript @types/node
npm i @adobe/helix-fetch

tsconfig.json:

{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "esModuleInterop": true,
    "strict": true
  }
}

test.ts:

import { fetch } from '@adobe/helix-fetch';

Run tsc:

npx tsc

Expected behavior
No errors reported

Actual behavior

node_modules/@adobe/helix-fetch/src/api.d.ts:13:29 - error TS7016: Could not find a declaration file for module './fetch/errors'. '/test/node_modules/@adobe/helix-fetch/src/fetch/errors.js' implicitly has an 'any' type.

13 import { SystemError } from "./fetch/errors";
                               ~~~~~~~~~~~~~~~~


Found 1 error.

Version:
run: $ hlx --version: Command 'hlx' not found

npm ls @adobe/helix-fetch: @adobe/[email protected]

Additional information

When tsc sees import { SystemError } from "./fetch/errors", it expects to find ./fetch/errors.ts or ./fetch/errors.d.ts, but neither exists. Moreover, SystemError does not exist in ./fetch/errors.js:

/test/node_modules/@adobe/helix-fetch/src$ grep class fetch/errors.js
/* eslint-disable max-classes-per-file */
class FetchBaseError extends Error {
class FetchError extends FetchBaseError {
class AbortError extends FetchBaseError {
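For illustration, a minimal errors.d.ts matching the classes that the grep output above shows actually exist in src/fetch/errors.js might look like this (hypothetical shapes; note there is no SystemError to declare):

```typescript
// Hypothetical declaration file matching the classes found above.
export declare class FetchBaseError extends Error {}
export declare class FetchError extends FetchBaseError {}
export declare class AbortError extends FetchBaseError {}
```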

Redirected POST is not supported

Description
POST requests which are redirected by the server fail with an error.

To Reproduce

const method = 'POST';
const json = { foo: 'bar' };
const resp = await fetch('https://httpstat.us/307', { method, json });
// => Error: URL got redirected to https://httpstat.us/, which 'fetch-h2' doesn't support for POST

Expected behavior
POST requests should be redirected.

Workaround

const method = 'POST';
const json = { foo: 'bar' };
let resp = await fetch('https://httpstat.us/307', { method, json, redirect: 'manual' });
if (resp.status === 307) {
  resp = await fetch(resp.headers.get('location'), { method, json, redirect: 'manual' });
}

Additional context
Known limitation of underlying fetch-h2 implementation.
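The workaround above can be generalized into a small helper (name and shape are illustrative; fetch is passed in as a parameter). It manually follows only the method-preserving redirect statuses 307/308 for a POST; per the Fetch spec, 301/302/303 would switch the method to GET instead.

```javascript
// Follow method-preserving redirects for a POST until a non-redirect
// response arrives, capped at `maxHops` to avoid redirect loops.
async function postFollowingRedirects(fetch, url, opts, maxHops = 5) {
  let resp = await fetch(url, { ...opts, redirect: 'manual' });
  for (let i = 0; i < maxHops && [307, 308].includes(resp.status); i += 1) {
    resp = await fetch(resp.headers.get('location'), { ...opts, redirect: 'manual' });
  }
  return resp;
}
```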

parallel requests don't scale

Is your feature request related to a problem? Please describe.
The time to run parallel requests to the same http2 origin increases roughly linearly with the number of requests which is unexpected given HTTP/2 multiplexing.

Describe the solution you'd like
Either the underlying issue in fetch-h2 gets fixed or we use an alternative implementation under the hood, e.g. node-fetch-h2.

Describe alternatives you've considered
See above.

Additional context
grantila/fetch-h2#85

node exit with Error [ERR_HTTP2_SESSION_ERROR]

Description
The process crashes when fetch is called inside a try-catch context;
the exception is caught, but Node.js still exits.

Version:
"@adobe/helix-fetch": "^3.0.0"
nodejs v16.13.0

Crash log:
[2021/12/31 20:26:23][error] error: FetchError - Session closed with error code 1
[2021/12/31 20:26:23][info] retry after 10s
Error [ERR_HTTP2_SESSION_ERROR]: Session closed with error code 1
at new NodeError (node:internal/errors:371:5)
at Http2Session.onGoawayData (node:internal/http2/core:679:21) {
code: 'ERR_HTTP2_SESSION_ERROR'
}

Support timeout on fetch requests

Is your feature request related to a problem? Please describe.
Running multiple fetch requests in parallel may end up causing a denial of service if they are long-lived.

Describe the solution you'd like
Regular XHR requests support a timeout that fails the request after a specific amount of time.
There is currently no clear spec for this in the Fetch API, but some discussion exists (see links below).

It would be great to have a first-draft implementation in the library that is decent enough until the spec is finalized.

Describe alternatives you've considered
This can alternatively be implemented consumer-side using timeouts and promises, rejecting the promise before the request ends; but the socket is still left open until the fetch actually finishes, so it doesn't fully remove the issue.

Additional context
See:

FetchError: The socket is already bound to an Http2Session

Description
Concurrent http2 fetch requests to the same origin and using different contexts may lead to FetchError: The socket is already bound to an Http2Session.

To Reproduce

const doFetch = async (url) => {
  const ctx = context();
  try {
    return ctx.fetch(url);
  } finally {
    await ctx.reset();
  }
};

const N = 10; // # of parallel requests
const TEST_URL = 'https://httpbin.org/bytes/'; // HTTP2
// generate an array of 'randomized' urls
const urls = Array.from({ length: N }, () => Math.floor(Math.random() * N)).map((num) => `${TEST_URL}${num}`);
// send requests
const responses = await Promise.all(urls.map((url) => doFetch(url)));

query parameter support

Is your feature request related to a problem? Please describe.
I think it would be beneficial to add query string support, so every piece of code that uses query strings doesn't have to write its own implementation of appending them to the URL.

Describe the solution you'd like
Just use the standard URL.searchParams API.

Describe alternatives you've considered
query-string library
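The URL.searchParams approach suggested above can be sketched as a small helper (withQuery is a hypothetical name, not a proposed API):

```javascript
// Append query parameters to a URL without hand-building the query string,
// using the standard WHATWG URL API.
function withQuery(url, params) {
  const u = new URL(url);
  for (const [key, value] of Object.entries(params)) {
    u.searchParams.append(key, value); // handles encoding automatically
  }
  return u.href;
}

// e.g. withQuery('https://example.com/api', { q: 'foo bar', page: 2 })
//      -> 'https://example.com/api?q=foo+bar&page=2'
```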

Include header name when reporting illegal value

When an HTTP header value is not valid, the error just reports:

invalid request header Invalid character in header content ["dummy"]

It would be useful if the header name were included in the error instead of just the value ("dummy").

AbortController, AbortError not fully usable from TypeScript

Description
The .d.ts file specifies some exported symbols as types instead of as values, but users need to use those symbols as values. Specifically, at least:

  • Example code in the README includes if (err instanceof AbortError) but this doesn't compile in TypeScript since currently the .d.ts exports AbortError as an interface instead of as a declare class
  • To use the aborting (early cancellation) functionality, a user needs to create an AbortController but this is not possible in TypeScript since the current .d.ts exports the AbortController as a type instead of as a declare class
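A sketch of the shape the .d.ts would need (illustrative, not the library's actual declarations): value declarations instead of bare interfaces, so that both new AbortController() and instanceof AbortError type-check.

```typescript
// Exporting `declare class` (a value + type) instead of `interface`
// (a type only) makes `new` and `instanceof` compile.
export declare class AbortError extends Error {
  readonly name: 'AbortError';
}
export declare class AbortController {
  readonly signal: AbortSignal;
  abort(): void;
}
```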

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Encrypted value is using deprecated PKCS1 padding, please change to using PGP encryption.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): update dependency mocha to v10.7.3

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Detected dependencies

circleci
.circleci/config.yml
  • codecov 3.3.0
  • cimg/node 18.2
github-actions
.github/workflows/semantic-release.yaml
  • actions/checkout v4@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-node v4
.github/workflows/semver-check.yaml
npm
package.json
  • debug 4.3.6
  • http-cache-semantics 4.1.1
  • lru-cache 7.18.3
  • @semantic-release/changelog 6.0.3
  • @semantic-release/git 10.0.1
  • c8 8.0.1
  • chai 4.5.0
  • chai-as-promised 7.1.2
  • chai-bytes 0.1.2
  • chai-iterator 3.0.2
  • eslint 8.57.0
  • eslint-config-airbnb-base 15.0.0
  • eslint-plugin-header 3.1.1
  • eslint-plugin-import 2.29.1
  • formdata-node 6.0.3
  • husky 8.0.3
  • lint-staged 15.2.8
  • mocha 10.7.0
  • mocha-multi-reporters 1.5.1
  • nock 13.5.4
  • parse-cache-control 1.0.1
  • parse-multipart-data 1.5.0
  • semantic-release 22.0.12
  • sinon 17.0.2
  • stream-buffers 3.0.3
  • node >=14.16

  • Check this box to trigger a request for Renovate to run again on this repository

Regression: location header is made absolute with redirect=manual

Description
Fetching a redirect response with redirect: 'manual' should not alter the location header, but since v2 it is made absolute.

eg:

$ curl -sD - 'https://httpbingo.org/redirect-to?url=/foo.html'
HTTP/2 302
access-control-allow-credentials: true
access-control-allow-origin: *
location: /foo.html
date: Thu, 04 Feb 2021 03:10:22 GMT
content-length: 0
server: Fly/7226d36 (2021-01-25)
via: 2 fly.io

but with fetch:

    it('supports redirect: manual with path location', async () => {
      const resp = await fetch(`${protocol}://httpbingo.org/redirect-to?url=/foo.html&status_code=307`, { redirect: 'manual', cache: 'no-store' });
      assert.strictEqual(resp.status, 307);
      assert.strictEqual(resp.headers.get('location'), '/foo.html');
      assert.strictEqual(resp.redirected, false);
    });
AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
+ actual - expected

+ 'http://httpbingo.org/foo.html'
- '/foo.html'
Expected :/foo.html
Actual   :http://httpbingo.org/foo.html

To Reproduce
see failing test in: #130

rename to a more generic name?

In order to make it independent of Project Helix/Franklin, maybe consider renaming it to a more generic name?

  • @adobe/phetsch
  • @adobe/phetch
  • @adobe/fetch-hx (http1 + http2)
  • @adobe/w3c-node-fetch
  • ...

or even without @adobe/
