Comments (15)

michael-raymond commented on September 25, 2024

I ran into this today, and managed to get it working better, though perhaps not entirely. I don't fully understand what's happening, and came to the issues to look for an explanation.

If you start the S3 Upload promise first (but don't await it), then append to the zip, then await the .finalize() call, and only then await the Upload promise, it goes faster.

...

(async () => {

  ...

  // Start the upload first, but don't await it yet.
  const uploadPromise = s3Upload.done();

  const archive = archiver('zip', { zlib: { level: 0 } });
  archive.pipe(s3UploadStream);

  ...

  archive.append(downloadStream, { name: 'file.jpg' });

  // Finalize the archive, then wait for the upload to complete.
  await archive.finalize();
  await uploadPromise;
  console.log('done');
})();

I also tried await Promise.all([archive.finalize(), s3Upload.done()]), which also worked for my case, but was slower. I was zipping ~90 small files, and it took 18 seconds if I started both promises at the same time, but only 11 if I started the upload promise before appending. I didn't get any progress callbacks until after the finalisation had completed though. So I'm still confused as to what's actually happening.

Keithcat767 commented on September 25, 2024

This gist helped me a ton after running into swallowed errors and my lambdas ending without any notice: https://gist.github.com/amiantos/16bacc9ed742c91151fcf1a41012445e?permalink_comment_id=3804034#gistcomment-3804034. Might help you too!
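
In the same spirit, a minimal sketch (not necessarily what the gist does; all names are illustrative) of wiring explicit error/warning handlers on the archive and on the stream it is piped into, so failures surface instead of the Lambda ending silently:

import archiver from "archiver";
import { PassThrough } from "stream";

const archive = archiver("zip", { zlib: { level: 0 } });
const s3UploadStream = new PassThrough();

// Surface errors instead of letting them be swallowed.
archive.on("error", (err) => {
  throw err;
});
archive.on("warning", (err) => {
  if (err.code === "ENOENT") console.warn(err);
  else throw err;
});
s3UploadStream.on("error", (err) => {
  throw err;
});

archive.pipe(s3UploadStream);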

thovden commented on September 25, 2024

> Try increasing the highWaterMark of your PassThrough stream.

I have the same problem. To me it looks like it is not streaming at all. Increasing the highWaterMark on the PassThrough stream seemingly works only because it holds more data in memory; once the size of the input files exceeds the highWaterMark / available memory, it fails again. I have an event handler for httpUploadProgress (I use the S3 SDK uploader: import { Upload } from '@aws-sdk/lib-storage') and all of those logs come at the end, indicating there is no streaming until archiver.finalize().
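
For anyone reproducing this, a minimal sketch of that progress logging (the bucket, key, and stream names are placeholders, not from the comment above):

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { PassThrough } from "stream";

const s3Client = new S3Client({}); // region etc. omitted (placeholder)
const passthrough = new PassThrough();

const upload = new Upload({
  client: s3Client,
  params: { Bucket: "my-bucket", Key: "archive.zip", Body: passthrough },
});

// If data were actually streaming, these logs would appear while entries are
// being appended; in the failing case described above they all arrive only
// after finalize has completed.
upload.on("httpUploadProgress", (progress) => {
  console.log("uploaded", progress.loaded, "of", progress.total ?? "unknown");
});

const uploadPromise = upload.done();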

noahw3 commented on September 25, 2024

Also experiencing this issue.

I dug into it a bit. The problem seems to be that the underlying _finalize call is never made, so node recognizes this as a hanging promise that will never resolve and exits.

In Archiver.prototype.finalize, this._queue.idle() is false so _finalize is not called directly from there.

Changing the order of promises as @michael-raymond mentioned above seems to make it work. In this case, _finalize gets called when onQueueDrain fires. Without the promise reordering, however, that event never fires.

Anyway, that's as far as I dug; hopefully this is helpful.

6eyu commented on September 25, 2024

> I am having the same problem as well. For me it seems that larger files especially tend to cause this issue; small files work fine. I hope this can be solved.

Similar problem here today. I solved it by adding an httpsAgent to the S3Client config:

const { S3Client } = require("@aws-sdk/client-s3");
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const https = require('https');

const s3Client = new S3Client({
    region: "ap-southeast-2",
    requestHandler: new NodeHttpHandler({
        httpsAgent: new https.Agent({
            keepAlive: true,
            rejectUnauthorized: true
        })
    })
});

kuhe commented on September 25, 2024

Try increasing the highWaterMark of your PassThrough stream.

danielkochdakitec commented on September 25, 2024

I am having the same problem as well. For me it seems that larger files especially tend to cause this issue; small files work fine. I hope this can be solved.

punkuz commented on September 25, 2024

I am facing a very similar issue. I am zipping files and pushing them to an SFTP server; for multiple small files it works, but for large files (for example, 3 files of more than 1.12 GB each) I get "Archive Error Error: Aborted". Has anyone tried such large files, or can this library not handle files that big?
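
Not a confirmed fix, but the ordering workaround from earlier in the thread can be applied to an SFTP target too; a rough sketch using ssh2-sftp-client (connection details, paths, and file names are placeholders):

import archiver from "archiver";
import SftpClient from "ssh2-sftp-client";
import { PassThrough } from "stream";
import { createReadStream } from "fs";

const sftp = new SftpClient();
await sftp.connect({ host: "sftp.example.com", username: "user", password: "secret" });

const pass = new PassThrough();
const archive = archiver("zip", { zlib: { level: 0 } });
archive.pipe(pass);

// Start the SFTP transfer before appending, so the consumer is already
// draining the stream (mirrors the S3 workaround above).
const putPromise = sftp.put(pass, "/upload/archive.zip");

archive.append(createReadStream("big-file-1.bin"), { name: "big-file-1.bin" });
// ...append the remaining files...

await archive.finalize();
await putPromise;
await sftp.end();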

kvzaytsev commented on September 25, 2024

Hello! Is there any update on this issue? I am experiencing the same problem.
Adding httpsAgent did not solve the issue.

I am testing by archiving an S3 prefix which contains 200 objects.

deathemperor commented on September 25, 2024

> Try increasing the highWaterMark of your PassThrough stream.

Do you mean like this?

const archive = Archiver("zip", {
  zlib: { level: 0 },
  highWaterMark: 1000 * 1024 * 1024,
});

You can see I set it to 1 GB, but it is still not working.
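
If I understand the earlier suggestion correctly, the highWaterMark is meant to go on the PassThrough that the archive is piped into, not on archiver itself; something like this (the 64 MB value is only an example):

import archiver from "archiver";
import { PassThrough } from "stream";

// The option goes on the stream feeding the upload, not on archiver's own options.
const s3UploadStream = new PassThrough({ highWaterMark: 64 * 1024 * 1024 });

const archive = archiver("zip", { zlib: { level: 0 } });
archive.pipe(s3UploadStream);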

CentralMatthew commented on September 25, 2024

Any updates on this?
I have the same issue.

vgjenks commented on September 25, 2024

> Try increasing the highWaterMark of your PassThrough stream.

Thank you. This was extremely helpful when nothing else was, for a problem related to another package.

fullstackzach commented on September 25, 2024

We had trouble with our process ending with no error while trying to implement zip streaming with the S3 v3 SDK. The things that helped us:

  1. creating a new S3 client for each readable/writable stream (granted, this isn't optimal...)
  2. not awaiting archive.finalize(), but instead returning a new promise and resolving it on "end"

Hope this helps someone get past this issue.

import {
  GetObjectCommand,
  GetObjectCommandOutput,
  ListObjectsV2Command,
  PutObjectCommand,
  S3Client,
} from "@aws-sdk/client-s3";
import { PassThrough, Readable } from "stream";
import { Upload } from "@aws-sdk/lib-storage";
import * as archiver from "archiver";

....

export const getReadableStreamFromS3 = async (
  key: string,
  bucketName: string,
): Promise<GetObjectCommandOutput["Body"] | undefined> => {
  const client = new S3Client({
    forcePathStyle: true,
    region,
    endpoint,
  });

  const command = new GetObjectCommand({
    Bucket: bucketName,
    Key: key,
  });

  const response = await client.send(command);

  return response.Body;
};

export const getWritableStreamFromS3 = (zipFileKey: string, bucketName: string): PassThrough => {
  const passthrough = new PassThrough();

  const client = new S3Client({
    forcePathStyle: true,
    region,
    endpoint,
  });

  new Upload({
    client,
    params: {
      Bucket: bucketName,
      Key: zipFileKey,
      Body: passthrough,
    },
  }).done();

  return passthrough;
};


export const generateAndStreamZipfileToS3 = async (
  s3KeyList: string[],
  zipFileS3Key: string,
  bucketName: string,
): Promise<void> => {
  // eslint-disable-next-line no-async-promise-executor
  return new Promise(async (resolve, reject) => {
    const pass = new PassThrough();
    const archive = archiver("zip", { zlib: { level: 9 } });
    const chunks: Buffer[] = [];

    archive.on("error", (err) => reject(err));
    pass.on("error", (err) => reject(err));
    pass.on("data", (chunk) => chunks.push(chunk));
    pass.on("end", async () => {
      const buffer = Buffer.concat(chunks);

      const uploadParams = {
        Bucket: bucketName,
        Key: zipFileS3Key,
        Body: buffer,
      };

      await s3Client.send(new PutObjectCommand(uploadParams));

      resolve();
    });

    archive.pipe(pass);

    for (const s3Key of s3KeyList) {
      const stream = (await getReadableStreamFromS3(s3Key, bucketName)) as Readable;
      archive.append(stream, { name: s3Key.split("/").pop()! });
    }

    archive.finalize(); // intentionally not awaited; the promise resolves in the 'end' handler above
  });
};
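
For completeness, a call site for the helper above might look like this (bucket and keys are placeholders; region, endpoint and the s3Client used in the "end" handler are assumed to be defined in the omitted section):

const keys = ["photos/a.jpg", "photos/b.jpg"];
await generateAndStreamZipfileToS3(keys, "exports/photos.zip", "my-example-bucket");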

Dirshant commented on September 25, 2024

A similar problem here, with a heap out of memory error.
I am appending 100,000 file streams with archive.append(file_stream, { name: file_name }), one by one in a loop.
It seems like archiver keeps a reference to all of the streams in heap memory, eventually throwing the heap out of memory error.
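
One possible mitigation (untested against archiver internals, just a pattern that avoids holding 100,000 open streams at once): create each stream lazily and wait for archiver's "entry" event before appending the next one. The file names and output destination below are placeholders.

import archiver from "archiver";
import { once } from "events";
import { createReadStream, createWriteStream } from "fs";

const fileNames = [/* ...the 100,000 file names... */]; // placeholder
const output = createWriteStream("archive.zip");        // placeholder destination

const archive = archiver("zip", { zlib: { level: 0 } });
archive.pipe(output);

for (const fileName of fileNames) {
  // Open each stream only when it is about to be consumed, instead of
  // queueing 100,000 open streams up front.
  archive.append(createReadStream(fileName), { name: fileName });
  await once(archive, "entry"); // resolves once this entry has been written
}

await archive.finalize();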

marsilinou97 commented on September 25, 2024

I have the same problem: when I add ~200 images to the archive everything works fine, but when I increase the number of images to 1k, my lambda exits with a success status after 5 minutes (my lambda timeout is 15 minutes).

I tried the following:

  • Increase highWaterMark to 1 GB
  • Override requestHandler using NodeHttpHandler
  • Await the promises in order as mentioned above:
    await zip.finalize();
    console.log("Finalized the zip archive");
    await uploadPromise;
