
diffusers.js's Introduction

Hi there, I'm Arthur Islamov





👀 GitHub Stats


💫 Tech Stack and Tools

GraphQL

diffusers.js's People

Contributors

dakenf


diffusers.js's Issues

Conversion script & documentation

I saw you have the convert/ and scripts/ folders in the repo, but I cannot find a Python converter script to transform current Stable Diffusion checkpoints and safetensors into ONNX format. Is there anything planned, or a resource you can point me to, for converting the Stable Diffusion models (1.4, 1.5, 2.0, 2.1, XL, etc.) so that we can use them with this repo?
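One route that may work (an assumption on my part, not something shipped in this repo) is Hugging Face Optimum's ONNX exporter, which can export a full Stable Diffusion pipeline directly from a Hub checkpoint:

# Hedged sketch using Optimum's exporter (not part of this repo); the output
# directory name is arbitrary.
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 ./sd15-onnx

Single-file .safetensors checkpoints would presumably first need to be converted to a diffusers-format pipeline before exporting.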

ORTStableDiffusionPipeline Support + Unexpected input data type. Actual: (tensor(float)), expected: (tensor(int64))

OK, so I just want to use my Stable Diffusion model combined with my own LoRA + LCM-LoRA.

I used Optimum to convert the model [SD 1.5 base model + my LoRA + LCM-LoRA] to ONNX.

But I think this repo doesn't support ORTStableDiffusionPipeline.

So I tested it with LatentConsistencyModelPipeline, and with StableDiffusionPipeline + LCMScheduler
(converted DiffusionPipeline.ts).

But I always get the same error.

 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0% | ETA: 0s | 0/31
/home/waganawa/Documents/Code/node/diffusers.js/node_modules/onnxruntime-node/dist/backend.js:45
                    resolve(__classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").run(feeds, fetches, options));
                                                                                                           ^

Error: Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64))
    at /home/waganawa/Documents/Code/node/diffusers.js/node_modules/onnxruntime-node/dist/backend.js:45:108
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11)

Well, I don't know what is causing this error.
Can someone help, please?

My ONNX repo: https://huggingface.co/WGNW/chamcham_v1_checkpoint_onnx
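A shot in the dark, assuming the failing input is the scheduler timestep: UNets exported via Optimum often declare the timestep input as int64, so feeding it as a float32 tensor triggers exactly this error. A minimal sketch of building that input with onnxruntime's Tensor:

// Hypothetical fix: feed the timestep as int64 (BigInt64Array), not float32.
import { Tensor } from 'onnxruntime-node'

const timestep = 999 // whatever the scheduler returns for this step
const timestepTensor = new Tensor('int64', new BigInt64Array([BigInt(timestep)]), [1])
// instead of: new Tensor('float32', new Float32Array([timestep]), [1])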

No wasm backend found.

Hello, I'm trying out the WebGPU demo and enabled Experimental WebAssembly and JSPI in the Chromium browser. After loading the model and refreshing the page, I was prompted with:

no available backend found. ERR: [wasm] RuntimeError: Aborted(both async and sync fetching of the wasm failed). Build with -sASSERTIONS for more info.

Do you know how to fix it?
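In case it helps others hitting this, a sketch of pinning onnxruntime-web's .wasm location so the fetch can't silently fail (the CDN path is an assumption; match it to your installed ort version):

import * as ort from 'onnxruntime-web'

// Tell ORT where to fetch its WebAssembly binaries from.
ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/'
// Single-threaded wasm avoids the cross-origin-isolation requirement of multi-threading.
ort.env.wasm.numThreads = 1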

SDTurbo - 'MultiHeadAttention_0' Failed to run JSEP kernel

Hi!

Thank you for this great work.

I'm trying to run SDTurbo with diffusers.js.

I've followed the instructions from this issue to export the model to ONNX.

154. # optimization_options.enable_qordered_matmul = False
155. optimization_options.enable_packed_qkv = False # not supported on webgpu
156. optimization_options.enable_packed_kv = False # not supported on webgpu
 python Stable-Diffusion-ONNX-FP16/conv_sd_to_onnx.py \
 --model_path "stabilityai/sd-turbo" \
 --output_path "./model/sdturbo-fp16"  \
 --fp16
Full log of the export:
2024-01-11 22:52:11.126633: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-11 22:52:11.126680: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-11 22:52:11.128271: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-11 22:52:13.292449: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Loading pipeline components...: 100% 5/5 [00:42<00:00,  8.53s/it]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
/usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py:66: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if input_shape[-1] > 1 or self.sliding_window is not None:
/usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py:137: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if past_key_values_length > 0:
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py:273: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py:281: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py:313: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/usr/local/lib/python3.10/dist-packages/torch/onnx/symbolic_opset9.py:5856: UserWarning: Exporting aten::index operator of advanced indexing in opset 17 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py:915: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if dim % default_overall_up_factor != 0:
/usr/local/lib/python3.10/dist-packages/diffusers/models/downsampling.py:135: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
/usr/local/lib/python3.10/dist-packages/diffusers/models/downsampling.py:144: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
/usr/local/lib/python3.10/dist-packages/diffusers/models/upsampling.py:149: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert hidden_states.shape[1] == self.channels
/usr/local/lib/python3.10/dist-packages/diffusers/models/upsampling.py:165: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if hidden_states.shape[0] >= 64:
/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py:1206: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
/usr/local/lib/python3.10/dist-packages/diffusers/models/autoencoders/autoencoder_kl.py:265: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/jit_utils.py:307: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ../torch/csrc/jit/passes/onnx/constant_fold.cpp:179.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py:702: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ../torch/csrc/jit/passes/onnx/constant_fold.cpp:179.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py:1209: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at ../torch/csrc/jit/passes/onnx/constant_fold.cpp:179.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/usr/local/lib/python3.10/dist-packages/diffusers/models/autoencoders/autoencoder_kl.py:306: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not return_dict:
2024-01-11 23:02:48.140679604 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 1 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-01-11 23:02:48.143225771 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:02:48.143247983 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2024-01-11 23:02:58.092696522 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:02:58.092735644 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
ONNX pipeline saved to model/sdturbo-fp16
Loading pipeline components...:   0% 0/6 [00:00<?, ?it/s]2024-01-11 23:03:10.174160414 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 1 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-01-11 23:03:10.178587318 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:03:10.178615811 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Loading pipeline components...:  33% 2/6 [00:00<00:01,  2.19it/s]2024-01-11 23:03:11.979303480 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 1 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-01-11 23:03:11.983210207 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:03:11.983247143 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Loading pipeline components...:  67% 4/6 [00:02<00:01,  1.85it/s]2024-01-11 23:03:16.868251774 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 3 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-01-11 23:03:16.881676989 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:03:16.881703685 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Loading pipeline components...:  83% 5/6 [00:07<00:02,  2.13s/it]2024-01-11 23:03:17.788958820 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-01-11 23:03:17.788983933 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Loading pipeline components...: 100% 6/6 [00:16<00:00,  2.79s/it]
ONNX pipeline is loadable

Everything seems to export and load properly in the browser with WebGPU, and I'm also able to run the text encoder and VAE decoder of the exported model with WebGPU without issue.

However, when I try to run a step of the unet, I get this error:

ort.webgpu.min.js:10 Uncaught (in promise) Error: failed to call OrtRun(). ERROR_CODE: 1, ERROR_MESSAGE: Non-zero status code returned while running MultiHeadAttention node. Name:'MultiHeadAttention_0' Status Message: Failed to run JSEP kernel
    at t.checkLastError (ort.webgpu.min.js:10:491501)
    at t.run (ort.webgpu.min.js:10:486314)
    at async t.OnnxruntimeWebAssemblySessionHandler.run (ort.webgpu.min.js:10:477016)
    at async a.run (ort.webgpu.min.js:10:1152723)
    ...

It's not clear why this operator fails, as it seems supported and runs fine in SD 2.1. Is this a known issue? Any pointers would be welcome!
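For what it's worth, one way to narrow this down (just a debugging sketch, not a fix) is to create the unet session on the wasm execution provider; if the same model runs there, the failure is specific to the WebGPU (JSEP) MultiHeadAttention kernel rather than to the export:

import * as ort from 'onnxruntime-web'

// Assumed path to the exported unet; adjust to your layout.
const session = await ort.InferenceSession.create('model/sdturbo-fp16/unet/model.onnx', {
  executionProviders: ['wasm'], // swap back to ['webgpu'] to reproduce the JSEP failure
})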

Model rename and Model load issue

When I first executed node index.mjs with the sample Node.js code, the following error occurred:

node:internal/process/promises:288
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[Error: ENOENT: no such file or directory, rename 'F:\Programming\.cache\aislamov\stable-diffusion-2-1-base-onnx\model_index.json.tmp' -> 'F:\Programming\.cache\aislamov\stable-diffusion-2-1-base-onnx\model_index.json'] {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'rename',
  path: 'F:\\Programming\\.cache\\aislamov\\stable-diffusion-2-1-base-onnx\\model_index.json.tmp',
  dest: 'F:\\Programming\\.cache\\aislamov\\stable-diffusion-2-1-base-onnx\\model_index.json'
}

When I run it a second time, another error occurs:

Downloading model_index.json | ████████████████████████████████████████ | 100% | 587 Bytes/587 Bytes
undefined:1


SyntaxError: Unexpected end of JSON input
    at JSON.parse (<anonymous>)
    at getModelJSON (file:///F:/Programming/node_modules/@aislamov/diffusers.js/dist/index-node.esm.js:109:15)
    at async DiffusionPipeline.fromPretrained (file:///F:/Programming/node_modules/@aislamov/diffusers.js/dist/index-node.esm.js:1178:19)

Node.js v18.15.0

I think this is similar to the first error: model_index.json.tmp should be renamed to model_index.json, and then the "Unexpected end of JSON input" error would no longer occur.
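For reference, a sketch of what an atomic download-then-rename could look like, assuming the root cause is a missing parent directory (which would explain the ENOENT on rename and the truncated JSON left behind):

import { promises as fs } from 'fs'
import path from 'path'

async function atomicWrite(dest, data) {
  await fs.mkdir(path.dirname(dest), { recursive: true }) // ensure the cache dir exists
  const tmp = dest + '.tmp'
  await fs.writeFile(tmp, data)
  await fs.rename(tmp, dest) // atomic when tmp and dest share a filesystem
}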

The last error I got is:

F:\Programming\node_modules\onnxruntime-node\dist\backend.js:24
            __classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").loadModel(pathOrBuffer, options);
                                                                                           ^

Error: Failed to find kernel for BiasSplitGelu(1) (node BiasSplitGelu_0). Kernel not found
    at new OnnxruntimeSessionHandler (F:\Programming\node_modules\onnxruntime-node\dist\backend.js:24:92)
    at F:\Programming\node_modules\onnxruntime-node\dist\backend.js:64:29
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11)

Node.js v18.15.0

Node example -> Class CoreMLExecution is implemented in both

I just tried the Node example (with Node v20.10.0):

import { DiffusionPipeline } from '@aislamov/diffusers.js'
import { PNG } from 'pngjs'

const pipe = DiffusionPipeline.fromPretrained('aislamov/stable-diffusion-2-1-base-onnx')
const images = pipe.run({
  prompt: "an astronaut running a horse",
  numInferenceSteps: 30,
})

const data = await images[0].mul(255).round().clipByValue(0, 255).transpose(0, 2, 3, 1)

const p = new PNG({ width: 512, height: 512, inputColorType: 2 })
p.data = Buffer.from(data.data)
p.pack().pipe(fs.createWriteStream('output.png')).on('finish', () => {
  console.log('Image saved as output.png');
})

and got this error:

objc[63467]: Class CoreMLExecution is implemented in both  node_modules/@aislamov/diffusers.js/node_modules/onnxruntime-node/bin/napi-v3/darwin/arm64/libonnxruntime.1.16.1.dylib (0x118318cb8) and node_modules/onnxruntime-node/bin/napi-v3/darwin/arm64/libonnxruntime.1.14.0.dylib (0x11abbceb0). One of the two will be used. Which one is undefined.

file:///***/test.mjs:5
const images = pipe.run({
                    ^

TypeError: pipe.run is not a function
    at file:///***/test.mjs:5:21
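The TypeError itself has a likely explanation: fromPretrained is async, so without await, pipe is a Promise and pipe.run is undefined. A sketch of the corrected calls (fs also needs importing for the PNG write):

import fs from 'fs'

// fromPretrained and run both return Promises and need to be awaited.
const pipe = await DiffusionPipeline.fromPretrained('aislamov/stable-diffusion-2-1-base-onnx')
const images = await pipe.run({
  prompt: 'an astronaut running a horse',
  numInferenceSteps: 30,
})

The duplicated CoreMLExecution warning looks like a separate problem: two different onnxruntime-node versions (1.16.1 and 1.14.0) are loaded at once, so deduplicating that dependency seems worth trying as well.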

Customization Help (ORTStableDiffusionPipeline)

OK, so my previous issue is solved (my mistake), but there is another problem.

I made my ORTStableDiffusionPipeline ONNX model infer some images, but they always have the same quality problem.

[output image]

Like this image, all images generated from the pipeline (modified to use the LCM scheduler with the Stable Diffusion pipeline) are not clear.

Personally, I think the internal implementation is almost the same as the original diffusers, but I don't know why the results differ, so I'm leaving this question.

I'd appreciate any advice on why these results are being produced.
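One guess, assuming the run parameters were carried over from a standard SD setup: LCM-distilled pipelines usually want very few steps and little or no classifier-free guidance, and running them with ordinary SD settings can give exactly this washed-out look. A sketch (the guidanceScale option name is an assumption for this library):

const images = await pipe.run({
  prompt: '...',
  numInferenceSteps: 4, // LCM typically uses roughly 2-8 steps
  guidanceScale: 1.0,   // high CFG values tend to degrade LCM outputs
})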

Kernel not found

node_modules\onnxruntime-node\dist\backend.js:24
            __classPrivateFieldGet(this, _OnnxruntimeSessionHandler_inferenceSession, "f").loadModel(pathOrBuffer, options);
                                                                                           ^

Error: Failed to find kernel for BiasSplitGelu(1) (node BiasSplitGelu_0). Kernel not found
    at new OnnxruntimeSessionHandler (C:\Users\gabri\Documents\GitHub\glow-backend-js\node_modules\onnxruntime-node\dist\backend.js:24:92)
    at C:\Users\gabri\Documents\GitHub\glow-backend-js\node_modules\onnxruntime-node\dist\backend.js:64:29
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11)

The code I'm running:

import { DiffusionPipeline } from '@aislamov/diffusers.js'
import { PNG } from 'pngjs'

const pipe = DiffusionPipeline.fromPretrained('aislamov/stable-diffusion-2-1-base-onnx');
const images = pipe.run({
    prompt: "an astronaut running a horse",
    numInferenceSteps: 30,
})

const data = await images[0].mul(255).round().clipByValue(0, 255).transpose(0, 2, 3, 1)

const p = new PNG({ width: 512, height: 512, inputColorType: 2 })
p.data = Buffer.from(data.data)
p.pack().pipe(fs.createWriteStream('output.png')).on('finish', () => {
    console.log('Image saved as output.png');
})
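For context, BiasSplitGelu is an onnxruntime contrib op inserted by the transformer optimizer, and stock CPU builds of onnxruntime-node may simply not ship a kernel for it, which would explain "Kernel not found". A hedged sketch of falling back to an unoptimized export (the fallback path is hypothetical):

import { InferenceSession } from 'onnxruntime-node'

async function loadUnet() {
  try {
    return await InferenceSession.create('unet/model.onnx') // optimized graph, uses contrib ops
  } catch (e) {
    console.warn('optimized unet failed to load, trying unoptimized export:', e)
    return await InferenceSession.create('unet/model_unoptimized.onnx') // hypothetical path
  }
}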

Cannot use local model, Z:/AI/models/model.onnx

I'm trying to use
const classifierSession = await DiffusionPipeline.fromPretrained('C:\\users\\johna\\downloads\\model.onnx');
but it throws the error:
Uncaught HubApiError Error: Api error with status 404. Request ID: Root=1-659715a0-16b3db9344fef60e2f28f710, url: https://huggingface.co/C:/users/johna/downloads/model.onnx/resolve/main/model_index.json
    at createApiError (z:\AI\node_modules\@huggingface\hub\dist\index.mjs:27:17)
    at downloadFile (z:\AI\node_modules\@huggingface\hub\dist\index.mjs:708:17)
    at processTicksAndRejections (internal/process/task_queues:95:5)
    --- await ---
    at processTicksAndRejections (internal/process/task_queues:95:5)
    --- await ---
    at runMainESM (internal/modules/run_main:55:21)
    at executeUserEntryPoint (internal/modules/run_main:78:5)
    at (internal/main/run_main_module:23:47)
index.mjs:27
Process exited with code 1
How can I use a local model instead of trying to reach an online one?
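From the stack trace, the string is being treated as a Hub repo id, and the loader is looking for a model_index.json rather than a bare .onnx file. Assuming local directories are resolved before the Hub (worth verifying in the library's getModelJSON logic), a pipeline-shaped folder might work where a single file cannot:

// Hypothetical local layout; fromPretrained points at the directory:
// my-sd-onnx/
//   model_index.json
//   text_encoder/model.onnx
//   unet/model.onnx
//   vae_decoder/model.onnx
const pipe = await DiffusionPipeline.fromPretrained('Z:/AI/models/my-sd-onnx')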

Unable to Create Session (Protobuf Parsing Error)

I've been trying to run this project locally but I keep getting the following error: "Error: Can't create a session. ERROR_CODE: 7, ERROR_MESSAGE: Failed to load model because protobuf parsing failed."

I cleared the site data as mentioned in the FAQ but this did not fix the issue. Is there anything else that is needed to fix this issue? Any assistance would be appreciated. Thank you!
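One more thing worth trying, assuming the corrupted model lives in the browser's Cache Storage (a truncated download would fail protobuf parsing exactly like this): clear it programmatically from the site's origin and let the model re-download.

// Run in the page's devtools console to drop every cache for this origin.
const keys = await caches.keys()
await Promise.all(keys.map((k) => caches.delete(k)))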
