deftruth / lite.ai.toolkit

🛠 A lite C++ toolkit of awesome AI models with ONNXRuntime and MNN support. Contains YOLOv5, YOLOv6, YOLOX, YOLOv8, FaceDet, HeadSeg, HeadPose, Matting, etc.

Home Page: https://github.com/DefTruth/lite.ai.toolkit

License: GNU General Public License v3.0

CMake 0.47% C++ 99.45% C 0.08% Shell 0.01%
yolox retinaface onnxruntime segmentation yolor yolop nanodet robustvideomatting mnn ncnn


lite.ai.toolkit's Issues

Cannot use GPU on Windows

Hi again! I am running this on windows, and I see the following message:

2021-10-11 19:50:28.8453389 [W:onnxruntime:, fallback_cpu_capability.cc:135 onnxruntime::GetCpuPreferredNodes] Force fallback to CPU execution for node: Slice_339

Why is this happening? Should I be able to run this on the GPU?

Thanks!

How to use your toolkit with onnxruntime-gpu with Linux Ubuntu

Hello @DefTruth,
Thanks for your work. I tested some of the face detectors in your toolkit and they work well on CPU under Ubuntu 16.04.
I would like to use the GPU, so I downloaded onnxruntime-linux-x64-gpu-1.7.0.tgz and followed your suggestion:
cp you-path-to-downloaded-or-built-onnxruntime/lib/onnxruntime lite.ai.toolkit/lib
I used the headers offered by this repo and left those directories unchanged, only copying the lib, but when I checked the results, they were not running on the GPU.
Can you give me some suggestions?

RVM ONNX models on Google Drive?

Hi! Thanks for making this code available. Is it possible to upload the RVM models to Google Drive? I am unable to access Baidu where I am. Thank you!

Detectron2

Hi,
Thanks for sharing it. Could you please add support for Detectron2?

Thank you in advance

Using onnxruntime iobinding to eliminate useless transfers

It would be great to be able to use onnxruntime IoBinding to eliminate useless transfers and improve performance.

For the RobustVideoMatting network, the author explains how to use IoBinding in Python to keep the tensors of the recurrent states on the GPU (avoiding copying them to the CPU and back to the GPU for the next frame):
https://github.com/PeterL1n/RobustVideoMatting/blob/master/documentation/inference.md

I've been looking for information on how to implement this IoBinding in C++, but I couldn't find any reference.

I've figured out how to get the iobinding object from a pointer of the session:
ort_iobinding = new Ort::IoBinding(*ort_session);

To bind the outputs there are two methods:
void BindOutput(const char* name, const Value&);
void BindOutput(const char* name, const MemoryInfo&);

I think the correct way would be to create a MemoryInfo for the CUDA device, but I am not sure whether the following is correct:
Ort::MemoryInfo info_cuda("Cuda", OrtAllocatorType::OrtArenaAllocator, 0, OrtMemTypeDefault);
for (int i = 0; i < num_outputs; i++) ort_iobinding->BindOutput(output_node_names[i], info_cuda);

To bind the inputs there is only one method:
void BindInput(const char* name, const Value&);

To create the Ort::Value for the "src" input and bind it I think we can do:
Ort::Value srcTensor = Ort::Value::CreateTensor(memory_info_handler, src_values.data(), src_size, src_dims.data(), src_dims.size());
ort_iobinding->BindInput(input_node_names[0], srcTensor);

But I haven't been able to figure out how to create the tensors for the recurrent states as CUDA data.

I've tried the following code putting everything in the CPU just to check it works:

  • Once at the beginning:
    ort_iobinding = new Ort::IoBinding(*ort_session);
    for (int i = 0; i < num_outputs; i++) ort_iobinding->BindOutput(output_node_names[i], memory_info_handler);

  • Every frame:
    Ort::Value srcTensor = Ort::Value::CreateTensor(memory_info_handler, src_values.data(), src_size, src_dims.data(), src_dims.size());
    Ort::Value r1iTensor = Ort::Value::CreateTensor(memory_info_handler, r1i_values.data(), r1i_size, r1i_dims.data(), r1i_dims.size());
    Ort::Value r2iTensor = Ort::Value::CreateTensor(memory_info_handler, r2i_values.data(), r2i_size, r2i_dims.data(), r2i_dims.size());
    Ort::Value r3iTensor = Ort::Value::CreateTensor(memory_info_handler, r3i_values.data(), r3i_size, r3i_dims.data(), r3i_dims.size());
    Ort::Value r4iTensor = Ort::Value::CreateTensor(memory_info_handler, r4i_values.data(), r4i_size, r4i_dims.data(), r4i_dims.size());
    Ort::Value dsrTensor = Ort::Value::CreateTensor(memory_info_handler, dsr_values.data(), dsr_size, dsr_dims.data(), dsr_dims.size());
    ort_iobinding->BindInput(input_node_names[0], srcTensor);
    ort_iobinding->BindInput(input_node_names[1], r1iTensor);
    ort_iobinding->BindInput(input_node_names[2], r2iTensor);
    ort_iobinding->BindInput(input_node_names[3], r3iTensor);
    ort_iobinding->BindInput(input_node_names[4], r4iTensor);
    ort_iobinding->BindInput(input_node_names[5], dsrTensor);
    ort_session->Run(Ort::RunOptions{ nullptr }, *ort_iobinding);
    auto output_tensors = ort_iobinding->GetOutputValues();

And it works correctly, but obviously at the same performance. The point would be to figure out how to keep the recurrent-state tensors in GPU memory at all times.

Cannot run the test yolox example

It throws an error:

error LNK2001: unresolved external symbol "public: void __cdecl ortcv::YoloX::detect(class cv::Mat const &,class std::vector<struct ortcv::types::BoundingBoxType<float,float>,class std::allocator<struct ortcv::types::BoundingBoxType<float,float> > > &,float,float,unsigned int,unsigned int)" (?detect@YoloX@ortcv@@QEAAXAEBVMat@cv@@aeav?$vector@U?$BoundingBoxType@MM@types@ortcv@@v?$allocator@U?$BoundingBoxType@MM@types@ortcv@@@std@@@std@@mmii@Z) gyy_ort_test D:\Download\lite.ai-main\gyy_test\gyy_ort_test\gyy_ort_test\source.obj 1

How should I solve this?

How are the models converted?

I have a custom-trained ResNet model in PyTorch .pt format. How do I convert it to an ONNX model? I couldn't find the conversion steps in the vision repos you linked.

ort_types.h compile error with gcc 5.4 on Ubuntu

ort_types.h:279:64: error: conversion from ‘ortcv::types::BoundingBoxType<int, double>’ to non-scalar type ‘ortcv::types::BoundingBoxType<int, float>’ requested
  BoundingBoxType<int> boxi = this->template convert_type<int>();

What might be the cause?

Running models in half precision FP16

I am trying to run the FP16 version of the model "rvm_mobilenetv3_fp16.onnx"

I am trying to write an FP16 version of the helper function
Ort::Value ortcv::utils::transform::create_tensor()

I understand I have to use the function:
inline Value Value::CreateTensor(const OrtMemoryInfo* info, void* p_data, size_t p_data_byte_count, const int64_t* shape, size_t shape_len, ONNXTensorElementDataType type)

With ONNXTensorElementDataType = ONNX_TENSOR_ELEMENT_DATA_TYPE_BFLOAT16 // Non-IEEE floating-point format based on IEEE754 single-precision

But I am stuck on how to handle half-precision vectors in C++.

It is probably necessary to use uint16 types for the buffers and convert to/from half floats at some point, but I am lost about how to handle this.

Does it support the ResNet50 algorithm?

I converted a ResNet50 model, but when loading it I get this error:
Exception thrown at 0x00007FF977504ED9 (in xx.exe): Microsoft C++ exception: std::length_error, at memory location 0x0000006D50B7EC30.
Unhandled exception at 0x00007FF977504ED9 (in xx.exe): Microsoft C++ exception: std::length_error, at memory location 0x0000006D50B7EC30.

(screenshot)

contribute-lite.ai-cv-detection-template

  • model information: The information for the model is listed below.
Project Address Author Model File Inference
yolov5 (🔥🔥💥↑) ultralytics yolov5-model-pytorch-hub detect.py

Note: this is a template issue showing how to contribute your models. Just replace "template" with your model or project name, e.g. contribute-lite.ai-cv-detection-YoloV5.

Big bug found!

For example, in yolov5's inference preprocessing, the image is resized directly to the input size (e.g. 640*640). This distorts many images and makes detection inaccurate:
Ort::Value YoloV5::transform(const cv::Mat &mat)
{
  cv::Mat canva = mat.clone();
  cv::cvtColor(canva, canva, cv::COLOR_BGR2RGB);
  cv::resize(canva, canva, cv::Size(input_node_dims.at(3),
                                    input_node_dims.at(2)));
  // (1,3,640,640) 1xCXHXW

  ortcv::utils::transform::normalize_inplace(canva, mean_val, scale_val); // float32
  return ortcv::utils::transform::create_tensor(
      canva, input_node_dims, memory_info_handler,
      input_values_handler, ortcv::utils::transform::CHW);
}

The Python code, by contrast, computes a minimum scale ratio, scales the original image by that ratio, and pads the remainder up to 640. Since nothing in the image is scaled non-uniformly, it is not distorted; when restoring the detection boxes, the inverse operation is applied, like this:

def letterbox(img, new_shape=(416, 416), color=(114, 114, 114), auto=False, scaleFill=False, scaleup=True):
    shape = img.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:
        r = min(r, 1.0)

    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2
    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return img, ratio, (dw, dh)

def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    coords[:, [0, 2]] -= pad[0]  # x padding
    coords[:, [1, 3]] -= pad[1]  # y padding
    coords[:, :4] /= gain
    clip_coords(coords, img0_shape)
    return coords

A question about onnxruntime

Ort::Env m_env;
Ort::Session m_session;
What is the relationship between these two? The onnxruntime documentation describes Ort::Env as globally unique. If I want to build a producer-consumer inference module to increase the engine's concurrency, should all threads share a single Ort::Env while each consumer thread creates its own Ort::Session? Any advice is appreciated.

ONNXRuntime Version Detected Sim is always the same even when the person changes

Hi,

I successfully compiled it on macOS.

Trying the face recognition algorithms, I noticed that the ONNXRuntime Version Detected Sim is always the same even when the person changes.

ie : lite_glint_arcface.cpp
model : std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";

person a - person b:

/var/folders/h6/7d637725049b0nf7_xqjkf640000gn/T/tmpl3pFGJ ; exit;
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
[ WARN:0] global /Users/yanjunqiu/Desktop/third_party/library/opencv/modules/core/src/matrix_expressions.cpp (1334) assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: opencv/opencv#16739
Default Version Detected Sim: 0.415043
Default Version Detected Dist: 1.08163
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
ONNXRuntime Version Detected Sim: 0.0349244

person-x - person-c:

/var/folders/h6/7d637725049b0nf7_xqjkf640000gn/T/tmpzFKmvz ; exit;
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
[ WARN:0] global /Users/yanjunqiu/Desktop/third_party/library/opencv/modules/core/src/matrix_expressions.cpp (1334) assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: opencv/opencv#16739
Default Version Detected Sim: 0.0609607
Default Version Detected Dist: 1.37043
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
ONNXRuntime Version Detected Sim: 0.0349244

Ubuntu 16.04 build problem

OpenCV and onnxruntime have been built and configured following the guide, but running sh ./build.sh produces errors like these:
(screenshots)
Could you take a look when you have time? Or give a detailed tutorial, haha.

Windows VS2019 build error

(screenshot)
core\ort_types.h(272,1): error C2440: 'initializing': cannot convert from 'ortcv::types::BoundingBoxType<int,double>' to 'ortcv::types::BoundingBoxType<int,float>'

(screenshot)

Header file inclusion problem

When another project of mine links against lite.ai.toolkit, I still have to include the onnxruntime, MNN and NCNN headers, even though I only need the public interface. Could the onnxruntime/MNN/NCNN header includes be confined to the inside of lite.ai.toolkit, so that programs calling lite.ai.toolkit do not need those headers?

A question about the yolov5 code

YOLOv5 is an anchor-based algorithm, so the forward pass should involve anchor computations, but I cannot find any anchor information in the code. How is this handled?


Build error on Ubuntu, how do I solve it?

ubuntu@ubuntu-M12SWA-TF:~/lite.ai.toolkit$ sh ./build.sh
build directory exist! clearing ...
clear built files done ! & rebuilding ...
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
########## Checking Platform for: /home/ubuntu/lite.ai.toolkit ###########
==================================== Lite.AI.ToolKit 0.1.0 =============================
Project: lite.ai.toolkit
Version: 0.1.0
SO Version: 0.1.0
Build Type: MinSizeRel
Platform Name: linux
Root Path: /home/ubuntu/lite.ai.toolkit

################################### Engines Enable Details ... #######################################
-- INCLUDE_OPENCV: ON
-- ENABLE_ONNXRUNTIME: ON
-- ENABLE_MNN: OFF
-- ENABLE_NCNN: OFF
-- ENABLE_TNN: OFF
######################################################################################################
########## Setting up OpenCV libs for: /home/ubuntu/lite.ai.toolkit ###########
###########################################################################################
Installing Lite.AI.ToolKit Headers for ONNXRuntime Backend ...
-- Installing: /home/ubuntu/lite.ai.toolkit/build/lite.ai.toolkit/include/lite/ort/core/ort_config.h
··················
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/lite.ai.toolkit/build
[ 0%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o
[ 0%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_ghost_arcface.cpp.o
[ 1%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/center_loss_face.cpp.o
[ 2%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/deeplabv3_resnet101.cpp.o
[ 2%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/colorizer.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o
In file included from /home/ubuntu/lite.ai.toolkit/lite/utils.cpp:5:0:
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const T*, unsigned int, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const std::vector<_Tp>&, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp: In function ‘void lite::utils::draw_axis_inplace(cv::Mat&, const EulerAngles&, float, int)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:47: error: ‘cosf’ is not a member of ‘std’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:47: note: suggested alternative: ‘cosh’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:64: error: ‘cosf’ is not a member of ‘std’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:64: note: suggested alternative: ‘cosh’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:35: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:35: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:54: error: ‘sinf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:54: note: suggested alternative: ‘sinh’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:37: error: ‘cosf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:37: note: suggested alternative: ‘cosh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:55: error: ‘sinf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:55: note: suggested alternative: ‘sinh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:74: error: ‘sinf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:51:74: note: suggested alternative: ‘sinh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:54:48: error: ‘cosf’ is not a member of ‘std’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:54:48: note: suggested alternative: ‘cosh’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:54:65: error: ‘sinf’ is not a member of ‘std’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:54:65: note: suggested alternative: ‘sinh’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:56:35: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:56:35: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:56:54: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:56:54: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:37: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:37: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:56: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:56: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:73: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:57:73: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:60:47: error: ‘sinf’ is not a member of ‘std’
const int x3 = static_cast(size * std::sinf(yaw)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:60:47: note: suggested alternative: ‘sinh’
const int x3 = static_cast(size * std::sinf(yaw)) + tdx;
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:61:48: error: ‘cosf’ is not a member of ‘std’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:61:48: note: suggested alternative: ‘cosh’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:61:65: error: ‘sinf’ is not a member of ‘std’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:61:65: note: suggested alternative: ‘sinh’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp: In function ‘cv::Mat lite::utils::draw_axis(const cv::Mat&, const EulerAngles&, float, int)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:86:47: error: ‘cosf’ is not a member of ‘std’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:86:47: note: suggested alternative: ‘cosh’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:86:64: error: ‘cosf’ is not a member of ‘std’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:86:64: note: suggested alternative: ‘cosh’
const int x1 = static_cast(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:88:35: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:88:35: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:88:54: error: ‘sinf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:88:54: note: suggested alternative: ‘sinh’
size * (std::cosf(pitch) * std::sinf(roll)
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:37: error: ‘cosf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:37: note: suggested alternative: ‘cosh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:55: error: ‘sinf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:55: note: suggested alternative: ‘sinh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:74: error: ‘sinf’ is not a member of ‘std’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:89:74: note: suggested alternative: ‘sinh’
+ std::cosf(roll) * std::sinf(pitch) * std::sinf(yaw))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:92:48: error: ‘cosf’ is not a member of ‘std’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:92:48: note: suggested alternative: ‘cosh’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:92:65: error: ‘sinf’ is not a member of ‘std’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:92:65: note: suggested alternative: ‘sinh’
const int x2 = static_cast(-size * std::cosf(yaw) * std::sinf(roll)) + tdx;
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:94:35: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:94:35: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:94:54: error: ‘cosf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:94:54: note: suggested alternative: ‘cosh’
size * (std::cosf(pitch) * std::cosf(roll)
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:37: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:37: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:56: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:56: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:73: error: ‘sinf’ is not a member of ‘std’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:95:73: note: suggested alternative: ‘sinh’
- std::sinf(pitch) * std::sinf(yaw) * std::sinf(roll))
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:98:47: error: ‘sinf’ is not a member of ‘std’
const int x3 = static_cast(size * std::sinf(yaw)) + tdx;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:98:47: note: suggested alternative: ‘sinh’
const int x3 = static_cast(size * std::sinf(yaw)) + tdx;
^~~~
sinh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:99:48: error: ‘cosf’ is not a member of ‘std’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:99:48: note: suggested alternative: ‘cosh’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
cosh
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:99:65: error: ‘sinf’ is not a member of ‘std’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:99:65: note: suggested alternative: ‘sinh’
const int y3 = static_cast(-size * std::cosf(yaw) * std::sinf(pitch)) + tdy;
^~~~
sinh
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/deeplabv3_resnet101.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/densenet.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/densenet.cpp:7:0:
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const T*, unsigned int, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const std::vector<_Tp>&, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/colorizer.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp: In function ‘void lite::utils::blending_nms(std::vector<lite::types::BoundingBoxType<float, float> >&, std::vector<lite::types::BoundingBoxType<float, float> >&, float, unsigned int)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:327:21: error: ‘expf’ is not a member of ‘std’
total += std::expf(buf[k].score);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:327:21: note: suggested alternative: ‘exp’
total += std::expf(buf[k].score);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:332:25: error: ‘expf’ is not a member of ‘std’
float rate = std::expf(buf[l].score) / total;
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:332:25: note: suggested alternative: ‘exp’
float rate = std::expf(buf[l].score) / total;
^~~~
exp
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/cava_combined_face.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/cava_combined_face.cpp:7:0:
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const T*, unsigned int, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const std::vector<_Tp>&, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/center_loss_face.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/age_googlenet.cpp:5:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/age_googlenet.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const T*, unsigned int, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const std::vector<_Tp>&, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
^~~~
/home/ubuntu/lite.ai.toolkit/lite/utils.h:119:29: note: suggested alternative: ‘exp’
softmax_probs[i] = std::expf(logits[i]);
^~~~
exp
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/cava_ghost_arcface.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
^~~~~
CMakeFiles/lite.ai.toolkit.dir/build.make:75: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/lite.ai.toolkit.dir/build.make:173: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o] Error 1
CMakeFiles/lite.ai.toolkit.dir/build.make:103: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o] Error 1
CMakeFiles/lite.ai.toolkit.dir/build.make:89: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o] Error 1
CMakeFiles/Makefile2:237: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/all' failed
make[1]: *** [CMakeFiles/lite.ai.toolkit.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2

How to build for Windows 10? Many errors.

Hi, I get a lot of errors when trying to build on Windows 10 with Qt (cmake and mingw64-make).
(screenshot: Screen Shot 2021-10-05 at 18 55 03)

If there were a precompiled DLL for Windows 10 64/32-bit, that would be perfect :)

Not working under windows 10

Hello,

I am trying to test it under Windows 10 with Visual Studio 2019.
It compiles fine, but I always get the same error:
"The application was unable to start correctly (0xc000007b)."
I have already tried Release, Debug, MinSizeRel and RelWithDebInfo.
Could you please help me?

Thank you.

yolox_nano inference speed issue

I tested the inference speed of the YOLOX series with your framework. Every model except yolox_nano runs at the expected speed, but with the nano model inference is even slower than with yolox_s. All ONNX files were converted from .pth files trained on the official COCO dataset.
I noticed that YOLOX has an extra piece of code when defining the nano model (in ./exps/default/nano.py), shown in the image below.
(image)
Could this be the cause?

Access to models

Hi,

Thanks for your work. Could you please share the models on GDrive? because I can't access Baidu.

Thanks

YOLOX update issue

Since YOLOX was updated, the toolkit no longer seems to work with the new models. What needs to be adjusted?

VS2015 compile errors

(image)
It builds fine with VS2019, but the project requires VS2015; after switching to VS2015, compilation fails, apparently because of poor support for templates and related features.

Any plans to add multiple object tracking?

As the title says, are there any plans to provide deep-learning-based trackers such as FairMOT and JDE from the last year or two? Having searched all the open-source C++ projects, lite.ai.toolkit is currently the highest-quality one and the best fit for cross-platform development and deployment.
