Comments (11)
Hi @escorciav, I'm having some trouble reproducing this error on my end after building from develop; it might be a bit difficult to pinpoint the issue without more details on your forked repo/build changes. Do you see issues when pulling the latest develop branch and building with torch 2.1?
from aimet.
Ah I see, if you are in a conda environment perhaps there are several other differences in your packages compared to what we build with. I think our build process is very sensitive to package versions and build flags, so probably the safest way to go would be if you could build our pytorch 2.1 docker and then build aimet inside of that.
https://github.com/quic/aimet/blob/develop/packaging/docker_install.md#build-docker-image-locally
Otherwise, we should have a pytorch 2.1-compatible release out very soon.
Edit: The pytorch 2.1 dockerfile was just changed to be the default torch-gpu docker
Update:
You can find torch 2.1-compatible AIMET whl files here (look for aimet_torch_<gpu/cpu>_pt21...)
You should also be able to pip install the torch 2.1-compatible release if you have Ubuntu 22.04 w/ python 3.10:
apt-get install liblapacke
python3 -m pip install aimet-torch
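Since the wheel targets python 3.10 specifically, a quick stdlib check before installing can save a confusing pip resolution error; the helper name here is just for illustration:

```python
import sys

def wheel_compatible(version_info=None):
    """True if the interpreter matches the python 3.10 requirement
    of the aimet-torch wheels mentioned above."""
    vi = sys.version_info if version_info is None else version_info
    return (vi[0], vi[1]) == (3, 10)

if not wheel_compatible():
    # Warn instead of failing: other interpreter versions may still
    # work with a source build, just not with these wheels.
    print(f"warning: wheel targets python 3.10, found "
          f"{sys.version_info[0]}.{sys.version_info[1]}")
```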
Thanks, mate. I forgot to report that I managed to build it with conda successfully.
Recipe that worked on my server:
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install pkg-config cmake
conda install -c conda-forge liblapacke
conda install eigen==3.3.7
pip install onnx==1.12.0 onnxruntime==1.15.1 jsonschema==4.19.0 spconv==2.3.6 bokeh onnxsim
echo pip install your project dependencies....
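Since the recipe only works with these exact pins, a small stdlib sanity check can catch accidental version drift; the package pins are copied from the pip line above, and the helper name is mine:

```python
from importlib import metadata

# Pins taken from the pip line above; extend with your own packages.
PINS = {"onnx": "1.12.0", "onnxruntime": "1.15.1", "jsonschema": "4.19.0"}

def check_pins(pins):
    """Return {package: (pinned, installed-or-None)} for every mismatch."""
    mismatches = {}
    for pkg, want in pins.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None  # package not installed at all
        if have != want:
            mismatches[pkg] = (want, have)
    return mismatches

# print(check_pins(PINS))  # an empty dict means the environment matches
```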
Prep and QuantSim look as follows:
logging.info('Preparing model from AIMET...')
model = prepare_model(model_fp32)
# out = model(input_tensor)

## Actual quantization & tweaking
quantsim_kwargs = dict(
    quant_scheme=QuantScheme.training_range_learning_with_tf_init,
    rounding_mode='nearest',
    in_place=True,
    default_param_bw=8,   # args.bit_width[0]
    default_output_bw=8,  # args.bit_width[1]
    default_data_type=QuantizationDataType.int,
)
# logging.info('Inferring quantsim config...')
# aimet_common_path = Path(aimet_common.__path__[0])
# quant_config = (aimet_common_path / 'quantsim_config' /
#                 'default_config_per_channel.json')
# quantsim_kwargs['config_file'] = quant_config
logging.info('Setting up QuantizationSimModel wrapper...')
model_sim = QuantizationSimModel(model=model, dummy_input=dummy_input,
                                 **quantsim_kwargs)
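For completeness, the step I'd run after building the sim model is calibration and export via AIMET's compute_encodings/export API; a rough sketch, where calib_batches is a placeholder for my calibration loader:

```python
def forward_pass(model, batches):
    """Calibration callback: run a handful of batches through the sim
    model so the quantizers can observe activation ranges (signature
    follows AIMET's forward_pass_callback convention; `batches` is
    whatever gets passed as forward_pass_callback_args)."""
    for x in batches:
        model(x)

# Assuming model_sim, dummy_input, and calib_batches from the snippet above:
# model_sim.compute_encodings(forward_pass_callback=forward_pass,
#                             forward_pass_callback_args=calib_batches)
# model_sim.export(path='./export', filename_prefix='model_int8',
#                  dummy_input=dummy_input)
```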
As you can see, I even commented out the per-channel quantization config.
I also found the same issue after using:
- latest branch develop
- as well as v1.30.0
What sort of details would be relevant?
Hi @escorciav, it seems likely that the failure is pybind11 related. I experimented a bit and found a couple ways to reproduce the failure in my environment:
- Downgrading my pybind11 version below 2.6
- Removing the cmake flag -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" from the top-level CMakeLists file
Could you check what pybind11 version you have installed in your environment?
From conda list:
pybind11    2.12.0    pypi_0    pypi
Confirmed with python -c "import pybind11; print(pybind11.__version__)":
2.12.0
Likewise, after running cmake I find:
C++ flags: ... -D_GLIBCXX_USE_CXX11_ABI=0 -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" ...
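For future reference, since pybind11 below 2.6 reproduces the failure, a tiny stdlib guard can assert the minimum before building; the helper is mine and assumes a purely numeric dotted version string:

```python
def version_at_least(version, minimum):
    """Numeric compare of dotted version strings, e.g. '2.12.0' >= '2.6'.
    A plain string compare would get this wrong ('2.12' < '2.6' lexically)."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    a, b = parse(version), parse(minimum)
    # Pad the shorter tuple with zeros so '2.6' compares like '2.6.0'.
    n = max(len(a), len(b))
    return a + (0,) * (n - len(a)) >= b + (0,) * (n - len(b))

print(version_at_least("2.12.0", "2.6"))  # the version reported above -> True
```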
Does it make any sense to use this?
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -O0 -ggdb -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 -DPYBIND11_BUILD_ABI=\\\"_cxxabi1011\\\"")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17 -O0 -ggdb -fPIC -D_GLIBCXX_USE_CXX11_ABI=1 -DPYBIND11_BUILD_ABI=\\\"_cxxabi1017\\\"")
It worked locally, but for some reason those flags are ignored when building on a server 😖
BTW, do you know if AIMET Pro 1.32 supports PyTorch 2.1?
From QMP, AFAIR the release date is earlier than the corresponding issues.
Feel free to close the issue :)
Awesome, thanks for the update. Yes, AIMET Pro 1.32 should support PyTorch 2.1 as well.