
Uploads Chunks! Takes big files, splits them up, then uploads each one with care (and PUT requests).

License: MIT License


upchunk's Introduction


UpChunk


UpChunk uploads chunks of files! It's a JavaScript module for handling large file uploads via chunking and making a PUT request for each chunk with the correct range request headers. Uploads can be paused and resumed, they're fault tolerant, and it should work just about anywhere.

UpChunk is designed to be used with Mux direct uploads, but should work with any server that supports resumable uploads in the same manner. This library will:

  • Split a file into chunks (in multiples of 256KB).
  • Make a PUT request for each chunk, specifying the correct Content-Length and Content-Range headers for each one.
  • Retry a chunk upload on failures.
  • Allow for pausing and resuming an upload.

Installation

NPM

npm install --save @mux/upchunk

Yarn

yarn add @mux/upchunk

Script Tags

<script src="https://unpkg.com/@mux/upchunk@3"></script>

Basic Usage

Getting an upload URL from Mux.

You'll need to have a route in your application that returns an upload URL from Mux. If you're using the Mux Node SDK, you might do something that looks like this.

const Mux = require('@mux/mux-node');
const mux = new Mux({
  tokenId: process.env.MUX_TOKEN_ID,
  tokenSecret: process.env.MUX_TOKEN_SECRET,
});

module.exports = async (req, res) => {
  // This ultimately just makes a POST request to https://api.mux.com/video/v1/uploads with the supplied options.
  const upload = await mux.video.uploads.create({
    cors_origin: 'https://your-app.com',
    new_asset_settings: {
      playback_policy: ['public'],
    },
  });

  // Save the Upload ID in your own DB somewhere, then
  // return the upload URL to the end-user.
  res.end(upload.url);
};

Then, in the browser with plain JavaScript:

import * as UpChunk from '@mux/upchunk';

// Pretend you have an HTML page with an input like: <input id="picker" type="file" />
const picker = document.getElementById('picker');

picker.onchange = () => {
  const getUploadUrl = () =>
    fetch('/the-endpoint-above').then((res) => {
      if (!res.ok) throw new Error('Error getting an upload URL :(');
      return res.text();
    });

  const upload = UpChunk.createUpload({
    endpoint: getUploadUrl,
    file: picker.files[0],
    chunkSize: 30720, // Uploads the file in ~30 MB chunks
  });

  // subscribe to events
  upload.on('error', (err) => {
    console.error('💥 🙀', err.detail);
  });

  upload.on('progress', (progress) => {
    console.log(`So far we've uploaded ${progress.detail}% of this file.`);
  });

  upload.on('success', () => {
    console.log("Wrap it up, we're done here. 👋");
  });
};

Or, in the browser with React

import React, { useState } from 'react';
import * as UpChunk from '@mux/upchunk';

function Page() {
  const [progress, setProgress] = useState(0);
  const [statusMessage, setStatusMessage] = useState(null);

  const handleUpload = async (inputRef) => {
    try {
      const response = await fetch('/your-server-endpoint', { method: 'POST' });
      const url = await response.text();

      const upload = UpChunk.createUpload({
        endpoint: url, // Authenticated url
        file: inputRef.files[0], // File object with your video file's properties
        chunkSize: 30720, // Uploads the file in ~30 MB chunks
      });

      // Subscribe to events
      upload.on('error', (error) => {
        setStatusMessage(error.detail);
      });

      upload.on('progress', (progress) => {
        setProgress(progress.detail);
      });

      upload.on('success', () => {
        setStatusMessage("Wrap it up, we're done here. 👋");
      });
    } catch (error) {
      setStatusMessage(String(error));
    }
  };

  return (
    <div className="page-container">
      <h1>File upload button</h1>
      <label htmlFor="file-picker">Select a video file:</label>
      <input
        type="file"
        onChange={(e) => handleUpload(e.target)}
        id="file-picker"
        name="file-picker"
      />

      <label htmlFor="upload-progress">Uploading progress:</label>
      <progress value={progress} max="100" />

      <em>{statusMessage}</em>
    </div>
  );
}

export default Page;

API

createUpload(options)

Returns an instance of UpChunk and begins uploading the specified File.

options object parameters

  • endpoint type: string (url) | function (required)

    URL to upload the file to. This can be either a string of the authenticated URL to upload to, or a function that returns a promise that resolves that URL string. The function will be passed the file as a parameter.

  • file type: File (required)

    The file you'd like to upload. For example, you might just want to use the file from an input with a type of "file".

  • headers type: Object | function

    An object, a function that returns an object, or a function that returns a promise of an object. The resulting object contains any headers you'd like included with the PUT request for each chunk.

  • chunkSize type: integer (kB), default: 30720

    The size in kB of the chunks to split the file into, except for the final chunk, which may be smaller. This parameter must be a multiple of 256.

  • maxFileSize type: integer

    The maximum size, in kB, of the input file to be uploaded. The maximum size can technically be smaller than the chunk size, in which case there would be exactly one chunk.

  • attempts type: integer, default: 5

    The number of times to retry any given chunk if the upload attempt fails with a retriable response status (see: retryCodes, below). After attempts failed tries, an error event will be dispatched and uploading will halt.

  • delayBeforeAttempt type: number (seconds), default: 1.0

    The time in seconds to wait before attempting to upload a chunk again.

  • retryCodes type: number[] (HTTP Status), default: [408, 502, 503, 504]

    The HTTP Status codes that indicate a given (failed) chunk upload request attempt is retriable. See also: attempts option, above.

  • method type: "PUT" | "PATCH" | "POST", default: "PUT"

    The HTTP method to use when uploading each chunk.

  • dynamicChunkSize type: boolean, default: false

    Whether or not the system should dynamically scale the chunkSize up and down to adjust to network conditions.

  • maxChunkSize type: integer (kB), default: 512000

    When dynamicChunkSize is true, the largest chunk size that will be used, in kB.

  • minChunkSize type: integer (kB), default: 256

    When dynamicChunkSize is true, the smallest chunk size that will be used, in kB.

  • useLargeFileWorkaround type: boolean, default: false

    Falls back to reading entire file into memory for cases where support for streams is unreliable (see, e.g. this upchunk issue and the corresponding webkit bug report).
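Putting several of the options above together, here's a sketch of a full options object; the helpers getUploadUrl and getAuthToken are hypothetical placeholders, not part of UpChunk:

```javascript
// Placeholder helpers -- swap in your real endpoint and auth lookup.
const getUploadUrl = async (file) =>
  `https://example.com/upload?name=${encodeURIComponent(file.name)}`;
const getAuthToken = async () => 'example-token';

// Builds an options sketch exercising the parameters above. `file`
// would come from an <input type="file"> in a real page.
const buildOptions = (file) => ({
  endpoint: getUploadUrl,   // called with the File, resolves the upload URL
  file,
  headers: async () => ({ Authorization: `Bearer ${await getAuthToken()}` }),
  chunkSize: 10240,         // 10 MB; must be a multiple of 256 kB
  attempts: 3,
  delayBeforeAttempt: 2,    // seconds
  retryCodes: [408, 502, 503, 504],
  method: 'PUT',
  dynamicChunkSize: true,
  maxChunkSize: 102400,     // never grow chunks past 100 MB
  minChunkSize: 512,        // never shrink chunks below 512 kB
});
```

Passing the resulting object to UpChunk.createUpload would then begin the upload as described above.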

UpChunk Instance Properties

  • offline type: (readonly) boolean, default: false

    Indicates whether or not the client is currently offline. While offline, uploading will pause and resume automatically once back online. See also: offline and online events, below.

  • paused type: (readonly) boolean, default: false

    Indicates whether or not uploading has been temporarily paused via the pause() method. See also: pause() and resume() methods, below.

UpChunk Instance Methods

  • pause()

    Pauses an upload after the current in-flight chunk is finished uploading.

  • resume()

    Resumes an upload that was previously paused.

  • abort()

    The same behavior as pause(), but also aborts the in-flight XHR request.

UpChunk Instance Events

Events are fired with a CustomEvent object. The detail key is null for events that don't specify a payload.

  • attempt { detail: { chunkNumber: Integer, chunkSize: Integer } }

    Fired immediately before a chunk upload is attempted. chunkNumber is the number of the current chunk being attempted, and chunkSize is the size (in bytes) of that chunk.

  • attemptFailure { detail: { message: String, chunkNumber: Integer, attemptsLeft: Integer } }

    Fired when an attempt to upload a chunk fails.

  • chunkSuccess { detail: { chunk: Integer, attempts: Integer, response: XhrResponse } }

    Fired when an individual chunk is successfully uploaded.

  • error { detail: { message: String, chunkNumber: Integer, attempts: Integer } }

    Fired when a chunk has reached the max number of retries or the response code is fatal and implies that retries should not be attempted.

  • offline

    Fired when the client has gone offline.

  • online

    Fired when the client has gone online.

  • progress { detail: [0..100] }

    Fired continuously with incremental upload progress. This returns the current percentage of the file that's been uploaded.

  • success

    Fired when the upload is finished successfully.

FAQ

How do I cancel an upload?

Our typical suggestion is to use pause() or abort(), and then clean up the UpChunk instance however you'd like. For example, you could do something like this:

// upload is an UpChunk instance currently in-flight
upload.abort();

// In many cases, just `abort` should be fine, assuming the instance will get picked up by garbage collection.
// If you want to be sure, you can manually drop your reference to the instance.
upload = undefined;

Credit

The original idea for this came from the awesome huge uploader project, which is what you need if you're looking to do multipart form data uploads. 👍

Also, @gabrielginter ported upchunk to Flutter.

upchunk's People

Contributors

akojo, aminamos, bgentry, bgila, cjpillsbury, clearlythuydoan, davekiss, decepulis, dependabot[bot], dmitrykashinskyatspark, dylanjha, happylinks, james-mux, jaredsmith, jsanford8, luizmacfilho, michaellimair, mlrsmith, mmcc, mmvsk, nerbeer, pchang211, philcluff, skidder, stefanosala


upchunk's Issues

Browser memory usage

Been finding that when uploading large files (e.g. 6 GB), the browser's RAM usage increases to include the file size. This can exceed the user's RAM and end with the user's browser tab crashing or their computer crashing.

Looking at this myself, I've been able to reproduce it on both Firefox (102) and Chrome (103) on both Windows and Mac M1.

Firefox's dev tools are able to identify a bunch of ArrayBuffers that take up as much RAM as the file size, but are unable to attribute them to a JS call stack (maybe browser internals). Chrome dev tools, however, are unable to see the memory at all.

Googling, I'm not finding much authoritative, but it does seem like file.slice as done by UpChunk can be problematic.

this.reader.readAsArrayBuffer(this.file.slice(start, start + length));

Suggestions I'm finding appear to be to use a ReadableStream instead, as that's stream-based and probably better suited to reading out chunks without loading in the whole file.
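To illustrate that suggestion, here's a minimal sketch (not UpChunk's implementation) of reading a Blob through its stream() reader so that only about one chunk's worth of data needs to be buffered at a time:

```javascript
// Accumulates stream reads into fixed-size chunks and yields them;
// the final partial chunk is yielded last. Assumes Blob.prototype.stream()
// is available (it is in modern browsers and Node 18+).
async function* chunkedRead(blob, chunkBytes) {
  const reader = blob.stream().getReader();
  let buffered = new Uint8Array(0);
  for (;;) {
    const { done, value } = await reader.read();
    if (value) {
      // Append the freshly read bytes to the leftover buffer.
      const merged = new Uint8Array(buffered.length + value.length);
      merged.set(buffered);
      merged.set(value, buffered.length);
      buffered = merged;
    }
    // Emit as many full chunks as we currently hold.
    while (buffered.length >= chunkBytes) {
      yield buffered.slice(0, chunkBytes);
      buffered = buffered.slice(chunkBytes);
    }
    if (done) {
      if (buffered.length) yield buffered; // trailing partial chunk
      return;
    }
  }
}
```

Each yielded chunk could then be handed to an uploader, without the whole file ever living in memory at once.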

How to pass additional data with request

Hi, I'm using this package in my Laravel 11 project using Livewire v3. I need to pass the file along with other data I have. How can I achieve this? My top level element with x-data on it contains:

Note the createUpload function here. That's where it's happening

<section
  x-data="{
    uploader: null,
    dropping: false,
    progress: 0,
    schedule: {
      file: null,
      notes: null,
      start_processing_at: '{{ now()->addMinutes(5)->toDateTimeString() }}',
      priority: 50
    },

    cancel () {
      if (! this.uploader) {
        return
      }

      this.uploader.abort()

      $nextTick(() => {
        this.uploader = null
        this.progress = 0
      })
    },

    setDropFile (event) {
      const file = event.dataTransfer?.files[0] || event.target.files[0]

      if (! file) {
        return
      }

      this.schedule.file = file
    },

    handleUpload () {
      this.uploader = createUpload({
        file: this.schedule.file,
        endpoint: '{{ route('livewire.upload') }}',
        method: 'post',
        headers: {
          'X-CSRF-TOKEN': '{{ csrf_token() }}'
        },
        chunkSize: 2.5 * 1024
      })

      this.uploader.on('progress', (progress) => {
        this.progress = progress.detail
      })

      this.uploader.on('chunkSuccess', (response) => {
        if (! response.detail.response.body) {
            return
        }

        $wire.call('handleUploadSuccess', file.name, JSON.parse(response.detail.response.body).file)
      })

      this.uploader.on('uploadSuccess', () => {
        this.uploader = null
        this.progress = 0
      })
    }
  }"
>

'attemptCount' is not working as intended on network error

Hi!
I noticed some strange behaviour of retry attempts when a request fails due to a network error, and idk if it's intended or not :)
attemptCount is updated only when we receive a response, but if we get a network error (or a CORS error, in my case), attemptCount is not changed and we keep spamming the server with requests.
Shouldn't attempts control this case too?

/**
   * Manage the whole upload by calling getChunk & sendChunk
   * handle errors & retries and dispatch events
   */
  private sendChunks() {
    if (this.paused || this.offline || this.success) {
      return;
    }

    this.getChunk()
      .then(() => this.sendChunk())  <-------- maybe here?
      .then((res) => {
        this.attemptCount = this.attemptCount + 1;   <-------- Shouldn't it be somewhere else?

        if (SUCCESSFUL_CHUNK_UPLOAD_CODES.includes(res.statusCode)) {

        .....

      .catch((err) => {  <-------- sendChunk failed and skipped attemptCount++
        if (this.paused || this.offline) {
          return;
        }

        // this type of error can happen after network disconnection on CORS setup
        this.manageRetries();
      });
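One way to make the retry budget cover network-level failures too is to count every attempt, whether it fails with an HTTP response or a thrown error. A generic sketch of that idea (not the library's actual code):

```javascript
// Calls fn up to maxAttempts times, counting an attempt even when the
// failure is a thrown network/CORS error rather than an HTTP response.
async function withRetries(fn, maxAttempts) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err; // attempt consumed regardless of failure type
    }
  }
  throw lastError; // budget exhausted: surface the last failure
}
```

With this shape, the `.catch` path in sendChunks would consume an attempt instead of retrying unconditionally.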

Uploading from mobile device browser in background

Hey Guys

Thank you very much for the great work. I'm using Mux with upchunk to direct upload videos from my React web app.
So far it's been very positive, and I didn't have any problem implementing it for desktop web. That being said, I have a big problem using the web app on mobile devices:

As soon as I switch from my browser application to another application, and therefore put the browser app in "background mode", the upload stops. It resumes when I bring the browser back to the foreground. I've tried this on Chrome && Safari.

This is very annoying for large video files because it means that when a user uploads a video, he has to wait for the upload to be complete before doing anything else with his phone... which is very bad! 😢

Can anyone give me a hand here? Or some tips on what to do? I implemented the whole thing on a web client because I had the exact same problem with my react-native app... so it's quite a bummer.

I'm willing to submit a PR if someone can guide me on what to do, and I can review anything 😊

Thanks a lot !

TypeError: Cannot read properties of null (reading 'includes')

Hello, the following error occurs when uploading files,

upchunk.ts:155 Uncaught (in promise) TypeError: Cannot read properties of null (reading 'includes')
at ye (upchunk.ts:155:42)
at ve.sendChunkWithRetries (upchunk.ts:611:9)
at async ve.sendChunks (upchunk.ts:643:9)

The thing is that this issue does not happen when uploading files locally; it happens only on the production server.
The response code from the backend is 200 OK, no issues with it whatsoever.

get asset ID or playback ID when upload is finished?

I'm trying to do the following:

  1. User uploads video (from browser) using upchunk
  2. After the upload is complete, I want to display their video

This would be easy if upchunk gave me the asset ID or playback ID when the upload is finished. But as far as I can tell, it doesn't, and I need to add a webhook in order to get this functionality.

I am trying to resume an upload, but it seems to restart rather than resume.

Let's say I add a 100 MB file to upload and a network failure occurs after 50 MB. After restoring internet it restarts again from the very beginning, so we can say it's restartable but not resumable. Is this expected behaviour, or am I doing something wrong? I believe there is some error in the library, as it doesn't store any chunk-related data to resume.

Here is my code:

  const handleUpload = async (inputRef) => {
        try {
            //   const response = await fetch('https://storage.googleapis.com/resumable/upload/storage/v1/b/picsello-staging/o?uploadType=resumable&name=galleries%2F406%2Foriginal%2Fa12e384d-90a1-4a80-8f3d-977eb1f8244c.jpeg&upload_id=ADPycdt7jXCKxTYrO21tzi80raNAIptB80PdXvAmNVZpXvMMFTTRujiQy8bi6kE9QhmTHsR3PBCF3_8ARByC0-7ZErieeiszZ56x', { method: 'POST' });
            //   const url = await response.text();
            let url = "https://storage.googleapis.com/resumable/upload/storage/v1/b/picsello-staging/o?uploadType=resumable&name=galleries%2F406%2Foriginal%2Fa12e384d-90a1-4a80-8f3d-977eb1f8244c.jpeg&upload_id=ADPycdt7jXCKxTYrO21tzi80raNAIptB80PdXvAmNVZpXvMMFTTRujiQy8bi6kE9QhmTHsR3PBCF3_8ARByC0-7ZErieeiszZ56x"
            uploader = UpChunk.createUpload({
                endpoint: url, // Authenticated url
                file: inputRef.files[0], // File object with your video file's properties
                chunkSize: 30720, // Uploads the file in ~30 MB chunks
            });

            // Subscribe to events
            uploader.on('error', error => {
                // setStatusMessage(error.detail);
                console.log(error.detail);
            });

            uploader.on('progress', progress => {
                setProgress(progress.detail);
                console.log(progress.detail);
            });

            uploader.on('success', () => {
                // setStatusMessage("Wrap it up, we're done here. 👋");
                console.log("Wrap it up, we're done here. 👋");
            });

            uploader.on('offline', () => {
                console.log("offline");
                uploader.pause()
            });

            uploader.on('online', () => {
                console.log("online");
                uploader.resume()
            });

        // console.log(uploader)
        // setObj(uploader)
        setUploader(uploader)

        } catch (error) {
            //   setErrorMessage(error);
            console.log(error)
        }
    }

Neither its pause nor resume methods are working.

I am using "@mux/upchunk": "^3.2.0",
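For context on resuming against GCS specifically: a GCS resumable session can be asked how many bytes it already has by sending an empty-body PUT with a `Content-Range: bytes */<total>` header; the 308 response's Range header then tells you where to resume. A sketch of building that probe header (illustrative only; UpChunk doesn't currently do this):

```javascript
// Header for asking a GCS resumable session how many bytes it already
// holds; send it on an empty-body PUT to the session URL and read the
// Range header of the resulting 308 response.
const statusProbeHeaders = (totalBytes) => ({
  'Content-Range': `bytes */${totalBytes}`,
});
```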

The video asset gets auto-deleted after 24 hours?

I'm working on a project where, when I create a video asset from my course platform Next.js project, the video asset gets deleted automatically after 24 hours, which is very strange.

Here is the api activity logs from mux dashboard:


Feature Request - Allow blobs from MediaRecorder

Is it possible to suggest being able to upload video live while a webcam/screen recording is happening? Currently the blobs have to be stored in the browser, turned into a File, and then uploaded. The user then has to wait for the upload, and if the browser fails/refreshes then the video is lost. The file property requires a File, and MediaRecorder returns Blobs.

mediarecorder = new MediaRecorder(camera);
mediarecorder.addEventListener('dataavailable', function(e) {
    // Send to UpChunk to be uploaded?
    upchunk.pushblob(e.data);
});

Or otherwise pass the MediaRecorder instance to Upchunk to handle it?

UpChunk.createUpload({
    endpoint: url,
    recorder: new MediaRecorder(camera),
});

TypeError occurred using a Japanese file name

The following TypeError occurred when I uploaded a file with a Japanese file name. upchunk.js didn't send a request when this error occurred. I tried to use a new File object with the Japanese filename encoded via encodeURIComponent, but it also failed with the same TypeError.

upchunk.cjs.js:3 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'statusCode')
    at upchunk.cjs.js:3:16848
    at ve.sendChunkWithRetries (upchunk.cjs.js:3:17664)
    at async upchunk.cjs.js:3:17295


My javascript source code is here:

const file = $("#form_file")[0].files[0];
const blob = file.slice(0, file.size, file.type);
const encoded_file = new File([blob], encodeURIComponent(file.name), {type: file.type});
const upload = UpChunk.createUpload({
    endpoint: url,
    file: encoded_file,
    chunkSize: 4096
});
upload.on("error", (err) => {
    console.error("[ChunkUpload] Error", err.detail);
});
upload.on("progress", ({ detail: progress }) => {
    console.log(`Progress: ${progress}%`);
});
upload.on("chunkSuccess", ({ detail }) => {
    const { response } = detail;
    const { result, order_asset_id } = JSON.parse(response.body);
    if (result === 201) {
        console.log("Chunk upload succeeded!");
    } else {
        console.log(`Chunk upload failed. status: ${result}`);
    }
});
upload.on("success", () => {
    console.log("Upload succeeded!");
});

I have attached the jpg file I used. The file name is "日本語ファイル.jpg".

How do we get asset_id of video which is uploaded

On the success callback, nothing is passed to that method, unlike the error object. I am confused about how we will get the URL of the uploaded asset, which I will use to preview the video.

// subscribe to events
upload.on('success', err => {
console.log("Wrap it up, we're done here. ๐Ÿ‘‹");
});

Error 501 - not implemented, when trying to upload to AWS S3 bucket via pre-signed URL

I am getting this 501 error. This is the output from the console. It starts uploading, then at the size of the first chunk it suddenly gives this error:

So far we've uploaded 1.7410714285714286% of this file.
So far we've uploaded 1.8176020408163265% of this file.
So far we've uploaded 1.8813775510204083% of this file.
So far we've uploaded 1.9451530612244896% of this file.
So far we've uploaded 2.0216836734693877% of this file.
So far we've uploaded 2.0408163265306123% of this file.

PUT https://input-bucket-in.s3.eu-west-1.amazonaws.com/646d2fa9a1394137bedbfaf403ae1a88?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAXZOCCXS4LD73UY46%2F20230523%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20230523T212705Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=2b6cfa004dad7c3a3011fbec4323070af666567e47bc429c488756ae5fc88e97 
**501 (Not Implemented)**

💥 🙀 {message: 'Server responded with 501. Stopping upload.', chunk: 0, attempts: 1}

What have I tried

  1. Ensure bucket name, region, CORS, Policy etc is correct
  2. Tested link in PostMan and it works, uploads file nicely
  3. Tried to confuse ChatGPT, just in case, and I broke it...

Here is my code as well...

import * as UpChunk from '@mux/upchunk/dist/upchunk.mjs'

function send_video(file, data) {

	console.log('starting video send');
		
	const upload = UpChunk.createUpload({
		endpoint: data['url'],
		file: file,
		chunkSize: 5120, // Uploads the file in ~5 MB chunks
	});
	
	// subscribe to events
	upload.on('error', err => {
		console.error('💥 🙀', err.detail);
	});
	
	upload.on('progress', progress => {
		console.log(`So far we've uploaded ${progress.detail}% of this file.`);
	});

	upload.on('success', () => {
		console.log("Wrap it up, we're done here. ๐Ÿ‘‹");
	});
	
 }

Here is my server side PHP code to get one time pre-signed url

$result = $S3Client->getCommand('PutObject', [
    'Bucket' => $options['aws_video_bucket'],
    'Key'    => $key,
 ]);

$request = $S3Client->createPresignedRequest($result, '+1 hour');

$response = array(
    'status' => 'success',
    'url' => (string) $request->getUri()
);

I am using Yarn to maintain packages and Parcel to bundle scripts
Here are my package.json dependencies

 "dependencies": {
    "@mux/upchunk": "^3.2.0",
    "@parcel/transformer-sass": "^2.8.3",
    "parcel": "latest",
    "process": "^0.11.10"
  },

Can anyone help?

parallel upload of chunks

Are you interested in taking a PR to send chunks in parallel, and would you in that case be open to discussing the approach to take?

I need to upload 10-100 GB files and want to see if I can improve the speed by sending n chunks in parallel to my backend.

How to handle cleanup?

Hi!

Upchunk works well for me, but our app has some performance issues if people upload multiple large files in succession. Upchunk is probably not to blame, but I'm double-checking. I can see that there's no destroy() method. When I'm unmounting the component that uses Upchunk, I'm only calling .abort().

But calling .abort() will not remove event listeners, nor other state from the class. Is there another way to clean up after Upchunk? Thanks!

Error on deployed Next.js website

Hey everyone,

I have started experiencing a weird bug that only shows up on a deployed website


When running the project locally the problem does not arise. Do you know where it might be coming from?

I am using Upchunk v 3.2.0 & pnpm 8.5.1

Package.json

"dependencies": {
    // other unrelated deps
    "@mux/mux-node": "^7.3.0",
    "@mux/mux-player-react": "^1.11.0",
    "@mux/upchunk": "3.2.0",
    "next": "13.4.3",
    "swr": "^2.1.5",
    "ui": "workspace:*",
  },
  "devDependencies": {
    // other unrelated deps
    "tsconfig": "workspace:*",
    "typescript": "^5.1.1-rc"
  }

Code that fails:


import * as UpChunk from "@mux/upchunk";
 const startUpload = () => {
    setIsUploading(true);

    const upload = UpChunk.createUpload({
      endpoint: createUpload,
      file: file.file!,
    });

    upload.on("error", (err: any) => {
      console.log("Error in upload", err);
      setErrorMessage(err.detail.message);
    });
    upload.on("progress", (progress: any) => {
      setProgress(Math.floor(progress.detail));
    });
    upload.on("success", async () => {
      setIsPreparing(true);
    });
  };

My code is taken from the MUX example found here https://github.com/vercel/next.js/tree/canary/examples/with-mux-video
P.S. I downgraded the library to 3.1.0 and the problem is no longer appearing on the production website.

tus comparison

Hi there! Upchunk looks like a fantastic project. Bravo. Could anyone shed light on how Upchunk compares to tus? Does Mux support tus?

Upload of large files 4GB+ fails on Safari. UpChunk or WebKit problem?

Hi,

we have noticed a problem while uploading large video files, of size 4 GB+, on macOS Safari. I am not sure if anything can be done in the UpChunk library, or if this is solely a WebKit problem.

The UpChunk dispatches 'success' event right after the upload is started, that is because the very first call to ChunkedStreamIterable::read() fails, resulting in completed (done) upload.

The read call fails, because of an I/O error (NotReadableError)


Before attempting the upload, top reports only about 100MB of free RAM, if that can mean anything.

If I truncate the file to about 3GB, the upload work.

Do you have any experience with this?

Specification:

  • UpChunk v3.3.2
  • Safari 17.0 (19616.1.27.211.1)
  • Apple M1 with 8GB RAM
  • macOS 14.0 (23A344)

TypeError: Cannot read property 'createUpload' of undefined

I am trying to use upchunk to upload video to Mux with direct uploads, but it shows the following error:
TypeError: Cannot read property 'createUpload' of undefined. I am trying to upload with react-dropzone, and the following is a part of my code:

import UpChunk from '@mux/upchunk'

 <Dropzone
              className={classes.dropzone}
              maxSize={MAX_SIZE}
              accept="video/*"
              onDrop={this.uploadHandler}
            >...</Dropzone>

uploadHandler = (accepted) => {
    accepted.forEach((file) => {
      // console.log(file)
      const getUploadUrl = () => {
        fetch('/video-upload').then((res) =>
          res.ok
            ? res.text()
            : console.log(new Error('Error getting an upload URL :(')),
        )
      }

      console.log(getUploadUrl, 'getUploadUrl')

      const upload = UpChunk.createUpload({
        endpoint: '/video-upload',
        file,
        chunkSize: 5120, // Uploads the file in ~5mb chunks
      })

      // subscribe to events
      upload.on('error', (err) => {
        console.error('💥 🙀', err.detail)
      })

      upload.on('progress', (progress) => {
        console.log(`So far we've uploaded ${progress.detail}% of this file.`)
      })

      upload.on('success', () => {
        console.log("Wrap it up, we're done here. 👋")
      })
    })
  }

Infinite recursion from old event-target-shim dependency

An issue in the event-target-shim dependency causes the browser process to get stuck in an infinite recursion when an event is dispatched. This prevents uploads from succeeding, and eventually the browser tab crashes. This was fixed in event-target-shim 6.0.2; could you please upgrade?

note: i was able to get this working by changing package.json with

"typescript": "^4.1.3",
...
"event-target-shim": "^6.0.2",

`progress` event jumps backwards

Notice in this screenshot: the progress events go from 3 to 49, then jump backwards to 33, then progress again to 82, then jump back to 67.

In this example, my source file is 10.6 MB. At the default chunk size of 5120 that will be 3 chunks. Because this backwards jump happens at 33 and 67, it seems like there's some funny business going on with the progress event that gets emitted at the boundary of a new chunk.

Here's a video of the bug: https://stream.new/v/EBPMDDqXrxsT01ToQwBbVx2027dx5dqucOoCxkXcCMEIk
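One way to keep the reported number monotonic would be to compute overall progress from the bytes of completed chunks plus the in-flight chunk's loaded bytes, instead of per-chunk percentages. A sketch of that calculation (not the library's current code):

```javascript
// bytesAcked: bytes from fully uploaded chunks; chunkLoaded: bytes sent
// so far in the current chunk; totalBytes: total file size. Because
// bytesAcked only ever grows and chunkLoaded resets exactly when a
// chunk's bytes move into bytesAcked, the result never goes backwards.
const overallPercent = (bytesAcked, chunkLoaded, totalBytes) =>
  Math.min(100, ((bytesAcked + chunkLoaded) / totalBytes) * 100);
```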

Multiple uploads

Is there a way to upload multiple files at the same time with UpChunk.createUpload of a Direct upload, and not just files[0]?

Here is my code:

const handleUpload = async (inputRef) => {
    try {
      const upload = UpChunk.createUpload({
        endpoint: createUpload, // Authenticated url
        file: inputRef.files[0], // File object with your video file's properties
        chunkSize: 5120, // Uploads the file in ~5mb chunks
      });

      // Subscribe to events
      upload.on("error", (error) => {
        setStatusMessage(error.detail);
      });

      upload.on("progress", (progress) => {
        console.log("progress", progress.detail);
        setProgress(progress.detail);
      });

      upload.on("success", () => {
        setStatusMessage("Wrap it up, we're done here. 👋");
      });
    } catch (error) {
      console.log("error", error);
      setErrorMessage(error);
    }
  };

Cancel upload

Is there a way to cancel the upload? I only see success and failed events.

Thank you!

[Bug] Failure to Handle 'Range' Response Header in Chunked Uploads to GCS

Hello Mux Team! 👋

I've encountered an issue with upchunk's handling of resumable chunked uploads to Google Cloud Storage (GCS). Specifically, the library does not appear to process the Range header in HTTP 308 responses correctly, potentially leading to incomplete uploads despite successful transmission of all data chunks.

Current Behavior:

  • When uploading chunks (other than the last one) to GCS, the expected server response for a successful upload is 308 Resume Incomplete. This response includes a Range header, as specified in the GCS documentation.
  • The upchunk library currently does not evaluate the Range header in these 308 responses. Consequently, it might continue transmitting the full range of bytes without recognizing that the upload has not been completed successfully on the server's end.
  • There is also a lack of verification to ensure the final chunk results in the object being marked as complete (typically indicated by an HTTP 200 or 201 response, and specifically not an HTTP 308). This omission could lead clients to mistakenly believe the upload was successful when it wasn't.

Expected Behavior:

  • The library should use the upper value in the Range header of each 308 response to accurately determine the starting point for each successive chunk. Alternatively, chunks that were not fully received could be sent again in their entirety.
  • After transmitting the final chunk, the library should verify the server's response to confirm that the upload is indeed complete and successful.

Suggested Fix:

  • Implement logic to parse and use the Range header in 308 responses to adjust the starting byte position for subsequent chunks, or use the existing retry logic to resend any chunks that were not fully received.
  • Add a check after the final chunk is sent to verify the response status (HTTP 200 or 201) to confirm the successful completion of the upload.

I believe that fixing these issues should improve the reliability of chunked uploads to GCS, or any other service that makes use of the response Range header. I'm happy to submit a PR or collab on a solution!

Thanks!

[Feature Request] Response object inside the 'error' event

It seems sensible to provide access to the response object within the callback of the 'error' event. For instance, this would be useful if we need to handle specific server issues such as insufficient storage or conflicts. Currently, we only receive a message with the status code.

[Feature request] Emit "progress" during chunk uploads as well as after each chunk.

Currently, the "progress" event is only emitted after each chunk has been uploaded.
My request is to emit this event while uploading each chunk as well.

AFAIK, this can't be accomplished with the native fetch function that this library uses. So this feature would require the use of XMLHttpRequest or a library like axios.

The reason for this request is to be able to give feedback to the user more often while uploading larger chunks.
Our uploads to Mux seem to have quite high latency (several seconds) after each chunk. To mitigate this we could increase the chunk size, but then we run into this problem with the lack of progress feedback.

Allow for server-side rendering

In the constructor we set up event listeners for online/offline with no guarding at all, which makes this library a pain to use in any SSR context.

License

Hi, could you please add a license to the package? :-)

Retrieving Asset ID after upchunk finishes an upload?

Is there a specific reason behind upchunk not returning any kind of asset id, after it finishes an upload?

In my opinion, this should definitely be added, as the asset ID is extremely useful to store when saving the successful upload into my database after UpChunk finishes.

Currently I save the uploadId instead, and then later retrieve the asset from the webhook using the uploadId, but after that I never have any use for that uploadId again. The asset ID is the one that's actually useful for many things.

Please let me know if I've misunderstood something here. But it just seems like something is missing in the library, or at least a better explanation in the docs.

Progress event arguments bug

Problem

The progress event seems to fire with an object as the only argument: {"IsTrusted":false}
The actual upload seems to work fine, without problems.

Code

this.uploader.on('progress', (progress) => {
  console.log(JSON.stringify(progress))
})

Output

{"IsTrusted":false}

Affected versions

I have tried versions 2.3.1, 3.2.0 & 3.1.0
The same outcome on all three versions

My environment

Web framework: Nuxt 2, webpack 4
I've tested this both in production and locally, with the same outcome.

Source code

https://gist.github.com/BlueBazze/4f05a44d5e5712b3fdfa7e50aed2ac8d

The function responsible for uploading the video is on line 247: https://gist.github.com/BlueBazze/4f05a44d5e5712b3fdfa7e50aed2ac8d#file-uploadupchunk-vue-L247

What I've tried

Originally I was using version 2.3.1 because of another error last year: #88
I tried updating to both version 3.1.0 and 3.2.0.

I've already tried the different imports: `* as UpChunk`, `{ UpChunk }`, and `{ createUpload }`.
I also tried all three JS builds: mjs, cjs, and regular.

The keyword IsTrusted does not appear anywhere in the UpChunk source.
I did see it in the dist JS files, initialized as a property on a class (if I'm not mistaken).

I can't deny it might be a problem on my end, but I have gone through everything and I don't see the problem.
