Comments (4)
By inspecting here I can see that the bias is f32 by default, but in the documentation here the quantized bias is s32.
@renato-arantes From the oneDNN perspective, the datatype of the bias is user-defined. Internally, it can be upconverted (first link).
Regarding the documentation you linked, it is just a tutorial showcasing how the quantization workflow can be customized by a oneDNN user (and that example uses the signed int8 datatype for the bias).
As for why PyTorch does not quantize the bias: that is a question for the PyTorch maintainers, but in general there is little reason to quantize the bias tensor, as it is small compared to the layer weights and activations. Adding @milpuz01, @snadampal, @malfet for more comments.
from onednn.
Hi @renato-arantes, do you mean quantized to s8 or s32? The accumulation datatype used during Op computation is governed by the accumulation_mode attribute of the primitive. By default, f32 is used for floating-point primitives (or f64 for f64 primitives), and s32 is used for integral primitives. You may change the default behavior by setting the dnnl::accumulation_mode to s32. More details are in the Data Types section of the oneDNN developer guide.
The example cnn_inference_int8.cpp you mentioned shows the quantized bias as s8 because the data type of the bias is set to s8 in line 114: auto conv_bias_md = memory::desc({conv_bias_tz}, dt::s8, tag::any);. After modifying the example to auto conv_bias_md = memory::desc({conv_bias_tz}, dt::s32, tag::any);, it then uses s32 for the bias, which can also be seen in the ONEDNN_VERBOSE log:
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd::f0 dst_u8::blocked:acdb::f0,attr-scales:dst:0:f32 ,,8x256x13x13,10.259
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd::f0 dst_s8::blocked:AcdB64a4b::f0,attr-scales:dst:0:f32 ,,384x256x3x3,1.24707
onednn_verbose,primitive,exec,cpu,convolution,brgconv:avx512_core_vnni,forward_training,src_u8:a:blocked:acdb::f0 wei_s8:a:blocked:AcdB64a4b::f0 bia_s32:a:blocked:a::f0 dst_u8:a:blocked:acdb::f0,attr-scales:src0:0:f32+dst:0:f32+wei:0:f32 attr-post-ops:eltwise_relu ,alg:convolution_direct,mb8_ic256oc384_ih13oh13kh3sh1dh0ph1_iw13ow13kw3sw1dw0pw1,0.651855
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_u8::blocked:acdb::f0 dst_f32::blocked:abcd::f0,attr-scales:src0:0:f32 ,,8x384x13x13,0.10498
Hi @shu1chen,
Your answer is not related to my question, which is about PyTorch, and not about an example that you say I mentioned, because I did not. Maybe you are answering another question here by mistake?
Cheers,
Renato
By inspecting here I can see that the bias is f32 by default, but in the documentation here the quantized bias is s32.
Hi @renato-arantes, the second "here" in your question points to the same example in the source code that I referred to.
I meant that perhaps you need to set the accumulation_mode attribute of the primitive in PyTorch to change the default behavior.