
neuralmagic / sparsify


ML model optimization product to accelerate inference.

License: Apache License 2.0

Languages: Python 98.35%, Makefile 0.92%, HTML 0.55%, Dockerfile 0.19%
Topics: sparsify, smaller-models, quantization, pruning, inference-performance, sparsification-recipe, computer-vision, image-classification, object-detection, pytorch


sparsify's Issues

Error while installing sparsify via pip

While installing sparsify on Windows 10 via cmd with the command 'pip install sparsify', I encountered some errors:
(screenshot: error_sparsify)

After searching on Google, I found out that pysqlite3 was a Python 2 package and that SQLite support is now part of the Python 3 standard library, so it needs no explicit installation via pip. I don't know what to do next, and I am not able to use sparsify.

Python Version: 3.9.0
Pip Version: 21.1.3
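For reference, a minimal check of the point above (an assumption about the standard-library behavior, not an official fix): Python 3 already bundles the sqlite3 module, so it imports without any separate pysqlite3 install.

import sqlite3  # ships with the Python 3 standard library; no pip package required

print(sqlite3.sqlite_version)  # version of the bundled SQLite engine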

PyTorch to ONNX weight names may change on export, causing name mismatch in subsequent training

Hi,
I'm using my own PyTorch model and torch.onnx.export() to obtain an ONNX model for sparsification.

However, the PyTorch-to-ONNX export does not guarantee that weight names are retained, which raises an error when I fine-tune the model with the produced recipe.

The error I get is, for example:
RuntimeError: All supplied parameter names or regex patterns not found.No match for 2425 in found parameters []. Supplied ['2425']
This means that one of the existing layers had its name changed to 2425. I tried the option mentioned in the thread above, but it didn't work.
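A small diagnostic sketch of the mismatch (my own illustration, not a maintainer-provided fix): export a model to ONNX and compare the parameter names on both sides. The tiny stand-in module and output path are hypothetical placeholders for the real model.

import onnx
import torch

# Tiny stand-in model for illustration; substitute your own module and export call.
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))
torch.onnx.export(model, torch.randn(1, 8), "model.onnx")

onnx_model = onnx.load("model.onnx")
torch_names = {name for name, _ in model.named_parameters()}
onnx_names = {init.name for init in onnx_model.graph.initializer}

# Initializer names that exist only on the ONNX side (e.g. '2425') indicate renamed weights.
print("only in ONNX:", sorted(onnx_names - torch_names))
print("only in PyTorch:", sorted(torch_names - onnx_names))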

Issue when logging in

Describe the bug
When I try to run sparsify.login api_token, I get the error below.

Expected behavior
Could you help me solve this? Thanks

To Reproduce
Exact steps to reproduce the behavior:

pip install sparsify-nightly
pip install numpy==1.21.6
pip install sparsezoo==1.5.0
sparsify.login api_token
sparsify.run -h

Errors

  File "/anaconda/envs/sparsify-env/bin/sparsify.run", line 5, in <module>
    from sparsify.cli.run import main
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsify/__init__.py", line 18, in <module>
    from .login import *
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsify/login.py", line 38, in <module>
    from sparsezoo.analyze.cli import CONTEXT_SETTINGS
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/__init__.py", line 20, in <module>
    from .model import *
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/model/__init__.py", line 17, in <module>
    from .model import *
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/model/model.py", line 22, in <module>
    from sparsezoo.analytics import sparsezoo_analytics
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/analytics.py", line 154, in <module>
    sparsezoo_analytics = GoogleAnalytics("sparsezoo", sparsezoo_version)
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/analytics.py", line 68, in __init__
    self._disabled = analytics_disabled()
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/analytics.py", line 43, in analytics_disabled
    return env_disabled or is_gdpr_country()
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/utils/gdpr.py", line 93, in is_gdpr_country
    country_code = get_country_code()
  File "/anaconda/envs/sparsify-env/lib/python3.8/site-packages/sparsezoo/utils/gdpr.py", line 79, in get_country_code
    geo = geocoder.ip(ip)
  File "/home/azureuser/.local/lib/python3.8/site-packages/geocoder/api.py", line 498, in ip
    return get(location, provider='ipinfo', **kwargs)
  File "/home/azureuser/.local/lib/python3.8/site-packages/geocoder/api.py", line 198, in get
    return options[provider][method](location, **kwargs)
  File "/home/azureuser/.local/lib/python3.8/site-packages/geocoder/base.py", line 407, in __init__
    self._before_initialize(location, **kwargs)
  File "/home/azureuser/.local/lib/python3.8/site-packages/geocoder/ipinfo.py", line 80, in _before_initialize
    if location.lower() == 'me' or location == '':
AttributeError: 'NoneType' object has no attribute 'lower'
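Reading the bottom of the trace, the crash appears to come from sparsezoo's analytics/GDPR check handing a None IP to geocoder. A minimal sketch reproducing just that last step (my reading of the trace, not a confirmed root cause):

import geocoder

# The trace shows geocoder.ip() receiving a location of None, which fails
# when the library calls location.lower() internally.
geocoder.ip(None)  # raises AttributeError: 'NoneType' object has no attribute 'lower'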

How to implement the config file in a training pipeline

I have optimized my model via Sparsify and received the config file, but I don't know how to use it with the optimization code given:
(screenshot: code for optimization)

I made a Python file where I loaded my PyTorch model and passed it to 'ScheduledOptimizer' as the 'MODEL' parameter shown in the 'Code for optimization' above, but I don't know what the 'optimizer' variable is or what to pass to it. If I leave it as is, it says
"optimizer" is not defined.

Error while exporting the model

Hello, I followed all the steps in the document for my custom YOLOv5 model and got stuck at the export step. Could you please help me with it?
(screenshot: export error)

Trying to apply sparsify on 1-layer transformer model

I have tried the Sparsify interface. When hitting Run, the server crashes with the trace below.
Any help is welcome!

Michael

2021-11-13 00:19:27 sparsify.blueprints.jobs INFO retrieved job {'job': {'error': None, 'job_id': 'e1eb1c43eb614143b2c0a31285eca111', 'created': '2021-11-13T00:19:27.071845', 'modified': '2021-11-13T00:19:27.071870', 'type_': 'CreatePerfProfileJobWorker', 'status': 'pending', 'project_id': '8c8f6df6be0d4dd18d15716bdf7ff327', 'progress': None, 'worker_args': {'model_id': 'de66e42d06af4e4786b210c0ee59b0b2', 'profile_id': 'a8005237aef943c3bf4917ce0210bd5f', 'batch_size': 1, 'core_count': 4, 'pruning_estimations': True, 'quantized_estimations': False, 'iterations_per_check': 10, 'warmup_iterations_per_check': 5}}}
10.0.0.4 - - [13/Nov/2021 00:19:27] "GET /api/jobs/e1eb1c43eb614143b2c0a31285eca111 HTTP/1.1" 200 -
2021-11-13 00:19:27 sparsify.workers.projects_profiles INFO running perf profile for project_id 8c8f6df6be0d4dd18d15716bdf7ff327 and model_id de66e42d06af4e4786b210c0ee59b0b2 and profile_id a8005237aef943c3bf4917ce0210bd5f with batch_size:1, core_count:4, pruning_estimations:True, quantized_estimations:False, iterations_per_check:10, warmup_iterations_per_check:5
DeepSparse Engine, Copyright 2021-present / Neuralmagic, Inc. version: 0.8.0 (68df72e1) (release) (optimized) (system=avx512, binary=avx512)
2021-11-13 00:19:27 sparsify.blueprints.jobs INFO getting job e1eb1c43eb614143b2c0a31285eca111
2021-11-13 00:19:27 sparsify.blueprints.jobs INFO retrieved job {'job': {'error': None, 'job_id': 'e1eb1c43eb614143b2c0a31285eca111', 'created': '2021-11-13T00:19:27.071845', 'modified': '2021-11-13T00:19:27.155796', 'type_': 'CreatePerfProfileJobWorker', 'status': 'started', 'project_id': '8c8f6df6be0d4dd18d15716bdf7ff327', 'progress': {'iter_indefinite': False, 'iter_class': 'analysis', 'num_steps': 2, 'step_class': 'baseline_estimation', 'step_index': 0, 'iter_val': 0.0}, 'worker_args': {'model_id': 'de66e42d06af4e4786b210c0ee59b0b2', 'profile_id': 'a8005237aef943c3bf4917ce0210bd5f', 'batch_size': 1, 'core_count': 4, 'pruning_estimations': True, 'quantized_estimations': False, 'iterations_per_check': 10, 'warmup_iterations_per_check': 5}}}
10.0.0.4 - - [13/Nov/2021 00:19:27] "GET /api/jobs/e1eb1c43eb614143b2c0a31285eca111 HTTP/1.1" 200 -
[nm_ort 7fbe5589e700 >ERROR< supported_subgraphs /home/ubuntu/build/nyann/src/onnxruntime_neuralmagic/supported/subgraphs.cc:782] ==== FAILED TO COMPILE ====
Unexpected exception message: bad optional access
DeepSparse Engine, Copyright 2021-present / Neuralmagic, Inc. version: 0.8.0 (68df72e1) (release) (optimized)
Date: 11-13-2021 @ 00:19:27 UTC
OS: Linux linuxvm1 4.15.0-1061-azure #66-Ubuntu SMP Thu Oct 3 02:00:50 UTC 2019
Arch: x86_64
CPU: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
Vendor: GenuineIntel
Cores/sockets/threads: [4, 1, 8]
Available cores/sockets/threads: [4, 1, 8]
L1 cache size data/instruction: 32k/32k
L2 cache size: 1Mb
L3 cache size: 35.75Mb
Total memory: 15.6651G
Free memory: 1.88776G

Assertion at /home/ubuntu/build/nyann/src/onnxruntime_neuralmagic/nm_execution_provider.cc:76

Backtrace:
0# wand::detail::abort_prefix(std::ostream&, char const*, char const*, int, bool, bool, unsigned long) in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
1# 0x00007FBE2913F285 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
2# 0x00007FBE291410AE in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
3# 0x00007FBE2940D1C1 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
4# 0x00007FBE29A5A668 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
5# 0x00007FBE29A5D0A2 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
6# 0x00007FBE29A603B9 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
7# 0x00007FBE293EC76C in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
8# 0x00007FBE293F24C3 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
9# 0x00007FBE293AC982 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
10# 0x00007FBE293ACC05 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libonnxruntime.so.1.8.0
11# deepsparse::ort_engine::init(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int, int, int, wand::safe_type<wand::parallel::use_current_affinity_tag, bool>, std::shared_ptrwand::parallel::scheduler_factory_t) in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/libdeepsparse.so
12# 0x00007FBE5FDDD7EB in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/deepsparse_engine.so
13# 0x00007FBE5FDDDA09 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/deepsparse_engine.so
14# 0x00007FBE5FDFD986 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/deepsparse_engine.so
15# 0x00007FBE5FDEAA09 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/deepsparse/avx512/deepsparse_engine.so
16# 0x00005592DB0A07AE in /home/mbetser/anaconda3/envs/optimize/bin/python
17# _PyObject_MakeTpCall in /home/mbetser/anaconda3/envs/optimize/bin/python
18# 0x00005592DB0CAD6A in /home/mbetser/anaconda3/envs/optimize/bin/python
19# PyObject_Call in /home/mbetser/anaconda3/envs/optimize/bin/python
20# 0x00005592DB040689 in /home/mbetser/anaconda3/envs/optimize/bin/python
21# 0x00005592DB0A06C7 in /home/mbetser/anaconda3/envs/optimize/bin/python
22# 0x00007FBE42973029 in /home/mbetser/anaconda3/envs/optimize/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.cpython-38-x86_64-linux-gnu.so
23# _PyObject_MakeTpCall in /home/mbetser/anaconda3/envs/optimize/bin/python

Please email a copy of this stack trace and any additional information to: [email protected]
Aborted

Improve documentation when exporting a recipe

What is the URL, file, or UI containing proposed doc change
Where does one find the original content or where would this change go?

This change would go in the main README of the sparsify repository.

What is the current content or situation in question
https://github.com/neuralmagic/sparsify#exporting-a-recipe

What is the proposed change
The main README should mention that TensorFlow or PyTorch must be installed alongside sparsify in order to export recipes for those frameworks.
(screenshot: Screenshot from 2021-09-15 17-52-53)

It should also mention the range of torch or tensorflow versions that can be handled:

(screenshot: Screenshot from 2021-09-15 17-53-41)

Additional context
Related to the new tutorial with NeuralMagic: AICoE/elyra-aidevsecops-tutorial#297

Cannot install in a virtual environment with python=3.6

I kept running into a dependency error during installation:

ERROR: Cannot install sparsify==0.1.0, sparsify==0.1.1, sparsify==0.2.0, sparsify==0.3.0, sparsify==0.3.1, sparsify==0.4.0, sparsify==0.5.0, sparsify==0.5.1, sparsify==0.6.0, sparsify==0.7.0, sparsify==0.8.0 and sparsify==0.9.0 because these package versions have conflicting dependencies.

The conflict is caused by:
    sparsify 0.9.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.8.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.7.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.6.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.5.1 depends on pysqlite3-binary>=0.4.0
    sparsify 0.5.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.4.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.3.1 depends on pysqlite3-binary>=0.4.0
    sparsify 0.3.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.2.0 depends on pysqlite3-binary>=0.4.0
    sparsify 0.1.1 depends on pysqlite3-binary>=0.4.0
    sparsify 0.1.0 depends on pysqlite3-binary>=0.4.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

Steps to reproduce:

  1. conda create -n py3.6 python=3.6
  2. pip install sparsify

Quantization in UI

Hi,
Quantization is not available in the UI; could you provide an approximate ETA? Is there a recommended course of action for performing pruning with the UI and quantization by other means?
Thanks!

Browser freezes when performing optimization (Mac M1 Max)

Describe the bug
After the performance of the model is analyzed, I click on Optimize.
At that moment the page changes to "Optimization" and the browser freezes after about 5 seconds.

Expected behavior
Optimization should start but does not.

Environment
Include all relevant environment information:

  1. OS Mac OS Monterey 12.6
  2. Python version 3.10.4
  3. Sparsify version or commit hash 1.0.0
  4. ML framework version(s) pytorch 1.12.1
  5. Other Python package versions [e.g. SparseZoo, DeepSparse, numpy, ONNX]: onnx 1.10.1, onnxruntime 1.12.1, sparsezoo 1.0.0, numpy 1.21.6 (note: DeepSparse not installed b/c of Mac m1 chip)
  6. Other relevant environment information [e.g. hardware, CUDA version]: Apple M1 Max 64GB

To Reproduce
Exact steps to reproduce the behavior:

Errors
(screenshot: Screenshot 2022-09-28 at 10 47 41)

Additional context
Add any other context about the problem here. Also include any relevant files.

Having a sparsify-cli component to interact with the deployed sparsify server

Is your feature request related to a problem? Please describe.
As a NeuralMagic user,

I would like a sparsify-cli component that I can use to interact with the deployed Sparsify server. This would allow me to automate the sparsification step to obtain a recipe (if one does not already exist). That way, the user would not need the UI but could instead integrate this step into a pipeline that retrieves an existing recipe if available and otherwise requests one from the server, providing the required inputs (e.g. the URL to an ONNX model).

Describe the solution you'd like

  • Have a sparsify-cli component that can interact with the server.

Describe alternatives you've considered

Additional context

See: AICoE/elyra-aidevsecops-tutorial#297

Pretraining style training aware sparsification

Is your feature request related to a problem? Please describe.
I would like to try sparsifying several pretrained LLMs (e.g. Mistral 7B, Stable LM 3B, etc.). I have created a pretraining corpus (for causal LLMs) on topics I care about. The corpus is relatively small in terms of LLM pretraining, around 10B tokens, but is gigantic in terms of fine-tuning. It seems such a corpus would be ideal for trying this out: https://github.com/neuralmagic/sparsify/blob/main/docs/training-aware-experiment-guide.md.

Describe the solution you'd like
Reading through the experiment guide, I cannot identify an appropriate dataset for causal pretraining data. Would appreciate some pointers on what I can try (let me be your guinea pig!).

:mega: Try Sparsify Alpha now! :mega:

🚨 July 2023 🚨: Sparsify's next generation is now in alpha as of version 1.6.0-alpha!

Sparsify enables you to accelerate inference without sacrificing accuracy by applying state-of-the-art pruning, quantization, and distillation algorithms to neural networks with a simple web application and one-command API calls.
Want to jump right in? Get started.

Want to kick the tires? Read the Sparsify Quickstart Guide.

If you encounter any issues while using Sparsify Alpha, please file an issue here.

How should I correctly train the model?

I am trying to sparsify distilbert-base-multilingual-cased.

  • First, I converted it to .onnx format with the following command: sparseml.transformers.export_onnx --task mlm --model_path ./distilbert-base-multilingual-cased
  • Then I generated the config file, which I took from the export along with the example training file.
  • I changed the training file to fit my training process (datasets, etc.) and tried to load the model again.
  • Unfortunately, I don't know the correct way to do so.
  • If I load the model from Hugging Face again, the layer names exported to ONNX and those in the model don't match:
    raise RuntimeError( RuntimeError: All supplied parameter names or regex patterns not found.No match for 958 in found parameters []. Supplied ['958']
  • I don't know how to apply the changes directly to the .onnx model. Is that possible?

What am I doing wrong? What did I miss?

🚨 Next Gen Sparsify Early Access Waitlist🚨

We are excited to announce that the next generation of Sparsify is underway. You can expect more features and greater simplicity for building sparse models that target optimal performance at scale.

The next generation of Sparsify can help you optimize models from scratch or sparse transfer learn onto your data to target best-in-class inference performance on your deployment hardware.

We will share more in the coming weeks. In the meantime, sign up for our Early Access Waitlist and be the first to try the Sparsify Alpha.

  • Neural Magic Product Team

Sparsification recipe for YOLOv7

Hello there,

Thanks for working on Sparsify. The speed-ups are very impressive.

YOLOv7 was released recently, and it is faster and more accurate than other object detectors at the moment. Is there a plan to write a sparsification recipe for it?

I am open to working individually or with others to create a pull request.

Import error on optimization export page

Describe the bug
The optimization config file displays `No module named 'sparseml.pytorch.recal'` on export.

Expected behavior
Should display the exported recipe

Environment
Include all relevant environment information:

  1. OS [e.g. Ubuntu 18.04]: Ubuntu 18.04
  2. Python version [e.g. 3.7]: 3.7
  3. Sparsify version or commit hash [e.g. 0.1.0, f7245c8]: 5432d66
  4. ML framework version(s) [e.g. torch 1.7.1]: 1.7.1
  5. Other Python package versions [e.g. SparseZoo, DeepSparse, numpy, ONNX]: SparseML 0.1.0
  6. Other relevant environment information [e.g. hardware, CUDA version]: n/a

To Reproduce
Exact steps to reproduce the behavior:

  1. create an optimization
  2. Hit Export
  3. Error appears under "optimization config file"

Errors
(screenshot: Screen Shot 2021-01-26 at 9 53 16 AM)

Additional context
n/a

Can't import EpochRangeModifier

Describe the bug
When trying to create a recipe, it cannot be generated because sparseml.pytorch.optim.EpochRangeModifier cannot be imported. From what I understand, this is because modifier.py moved from optim to sparsification in sparseml.
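If the module move described above is indeed the cause, a minimal import-compatibility sketch (my assumption, not a confirmed fix) would be:

# Try the old location first, then fall back to the new one after the module move.
try:
    from sparseml.pytorch.optim import EpochRangeModifier  # older sparseml releases
except ImportError:
    from sparseml.pytorch.sparsification import EpochRangeModifier  # newer releases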

Expected behavior
Generate a recipe :)

Environment
Include all relevant environment information:

  1. OS [e.g. Ubuntu 18.04]: Arch
  2. Python version [e.g. 3.7]: 3.8
  3. Sparsify version or commit hash [e.g. 0.1.0, f7245c8]: 0.12
  4. ML framework version(s) [e.g. torch 1.7.1]: PyTorch 1.9.1
  5. Other Python package versions [e.g. SparseZoo, DeepSparse, numpy, ONNX]: sparseml 0.12
  6. Other relevant environment information [e.g. hardware, CUDA version]: N/A

To Reproduce
Exact steps to reproduce the behavior:

  • Import an ONNX model (it sounds like the initial benchmark doesn't work either; I get "job cancelled").
  • Create a recipe using sparsify.

Errors
If applicable, add a full print-out of any errors or exceptions that are raised or include screenshots to help explain your problem.
It simply says it cannot import EpochRangeModifier from sparseml

Additional context
Add any other context about the problem here. Also include any relevant files.

Transformers Text Classification Profiling fails and crashes the sparsify server (data indices for a Gather operation)

Describe the bug
When trying to perform an initial profile of a transformers-style text classification model, an error is thrown relating to indices being out of bounds for a Gather operation. The profiling stops and the server crashes.

Expected behavior
Completion of profiling for a valid onnx export of a huggingface text classification model.

Environment
Include all relevant environment information:

  1. Debian- 11 - bullseye (Docker). AWS C6i.8xlarge
  2. Python 3.9.12
  3. Sparsify '0.12.1'
  4. torch: '1.9.1+cpu'

To Reproduce

  1. Make a fresh 3 class text classifier based on distilbert
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=3)

model.save_pretrained("new_text_classifier")
tokenizer.save_pretrained("new_text_classifier")
  2. Export from transformers to ONNX
sparseml.transformers.export_onnx --model_path new_text_classifier/ --sequence_length 128 --task text-classification
  3. Start sparsify
sparsify --working-dir=.
  4. Enter the path to the model to upload: new_text_classifier/model.onnx
  5. Hit Run to start profiling

Errors

2022-05-26 19:08:07.628652552 [E:onnxruntime:, sequential_executor.cc:352 Execute] Non-zero status code returned while running Gather node. Name:'Gather_7' Status Message: indices element out of data bounds, idx=5335349968635603386 must be within the inclusive range [-119547,119546]
NM: Fatal error encountered: Non-zero status code returned while running Gather node. Name:'Gather_7' Status Message: indices element out of data bounds, idx=5335349968635603386 must be within the inclusive range [-119547,119546], exiting.
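As a side note, the inclusive bound in the error matches the vocabulary size of distilbert-base-multilingual-cased; a quick sanity-check sketch of that observation (my own note, not a diagnosis from the maintainers):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
# Prints 119547, which lines up with the inclusive range [-119547, 119546] in the Gather error.
print(tok.vocab_size)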

(screenshots: Screen Shot 2022-05-26 at 3 09 10 PM, Screen Shot 2022-05-26 at 3 09 27 PM)

Additional context
Add any other context about the problem here. Also include any relevant files.
