
augmentedstartups / as-one


Easy & Modular Computer Vision Detectors, Trackers & SAM - Run YOLOv9,v8,v7,v6,v5,R,X in under 10 lines of code.

Home Page: https://www.augmentedstartups.com

License: GNU General Public License v3.0

Dockerfile 0.02% Shell 0.13% Python 99.38% Batchfile 0.03% Makefile 0.02% Cython 0.29% C++ 0.04% Cuda 0.08%
computer-vision opencv yolor yolov5 yolov7 yolox deep-learning object-detection pytorch tracking ultralytics yolov8 sam yolov9

as-one's Introduction

AS-One v2 : A Modular Library for YOLO Object Detection, Segmentation, Tracking & Pose

👋 Hello

UPDATE: AS-One v2 is now out! We've added support for YOLOv9 and SAM.

AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Different trackers such as ByteTrack, DeepSORT, or NorFair can be combined with different versions of YOLO in a minimum of code. The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors, and we plan to add support for future YOLO versions as they are released.

This is One Library for most of your computer vision needs.

If you would like to dive deeper into YOLO Object Detection and Tracking, then check out our courses and projects

Watch the step-by-step tutorial 🤝

💻 Install

🔥 Prerequisites
pip install asone

On Windows machines, you will need to install from source to run the asone library. See the instructions in the 👉 Install from Source section below to install on Windows.

👉 Install from Source

💾 Clone the Repository

Navigate to an empty folder of your choice.

git clone https://github.com/augmentedstartups/AS-One.git

Change Directory to AS-One

cd AS-One

👉 For Linux
python3 -m venv .env
source .env/bin/activate

pip install -r requirements.txt

# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
👉 For Windows 10/11
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox

pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
👉 For macOS
python3 -m venv .env
source .env/bin/activate


pip install -r requirements.txt

# for CPU
pip install torch torchvision

Quick Start 🏃‍♂️

Use a tracker on a sample video.

import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False)

Run in Google Colab 💻

Open In Colab

Sample Code Snippets 📃

6.1 👉 Object Detection
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/test.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)

Run asone/demo_detector.py to test the detector.

# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
6.1.1 👉 Use Custom Trained Weights for Detector

Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.

import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/license_video.mp4')

for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
6.1.2 👉 Changing Detector Models

Change the detector by simply changing the detector flag. The flags are listed in the benchmark tables.

  • Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
# Change detector
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

# For macOS
# YOLOv5
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
model = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
6.2 👉 Object Tracking

Use a tracker on a sample video.

import asone
from asone import ASOne

# Instantiate Asone object
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here

[Note] You can use custom weights for a detector model by simply providing the path to the weights file in the ASOne class, as sketched below.
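
A minimal sketch of that, reusing the illustrative custom-weights path from the detection example above (the weights path and video are placeholders, not files shipped with the library):

import asone
from asone import ASOne

# Tracker + detector loaded with custom-trained weights (path is illustrative)
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C,
              weights='data/custom_weights/yolov7_custom.pt',
              use_cuda=True) # Set use_cuda to False for cpu
tracks = model.video_tracker('data/sample_videos/license_video.mp4')

for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)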

6.2.1 👉 Changing Detector and Tracking Models

Change the tracker by simply changing the tracker flag.

The flags are listed in the benchmark tables.

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
# Change tracker
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)
# Change Detector
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

Run asone/demo_tracker.py to test the tracker.

# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
6.3 👉 Segmentation
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])

for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True) # Draw masks
6.4 👉 Text Detection

Sample code to detect text on an image:

# Detect and recognize text
import asone
from asone import ASOne, utils
import cv2

model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)

Use Tracker on Text

import asone
from asone import ASOne

# Instantiate Asone object
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')

# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)

    # Do anything with bboxes here

Run asone/demo_ocr.py to test OCR.

# run on gpu
 python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4

# run on cpu
 python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
6.5 👉 Pose Estimation

Sample code to estimate pose on an image:

# Pose Estimation
import asone
from asone import PoseEstimator, utils
import cv2

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
  • Now you can use YOLOv8 and YOLOv7-W6 for pose estimation. The flags are listed in the benchmark tables.
# Pose Estimation on video
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True) #set use_cuda=False to use cpu
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
    # Do anything with kpts here

Run asone/demo_pose_estimator.py to test pose estimation.

# run on gpu
 python -m asone.demo_pose_estimator data/sample_videos/football1.mp4

# run on cpu
 python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu

To set up ASOne using Docker, follow the instructions given in the docker setup 🐳.

ToDo 📝

  • First Release
  • Import trained models
  • Simplify code even further
  • Updated for YOLOv8
  • OCR and Counting
  • OCSORT, StrongSORT, MoTPy
  • M1/2 Apple Silicon Compatibility
  • Pose Estimation YOLOv7/v8
  • YOLO-NAS
  • Updated for YOLOv8.1
  • YOLOV9
  • SAM Integration
Offered By 💼 : AugmentedStartups
Maintained By 👨‍💻 : AxcelerateAI

as-one's People

Contributors

1297rohit, ajmairkashif, augmentedstartups, kinza-kamal1, mnmaqsood, muhammadramzan4, shehryar-malik, umair-imran, zhora-im


as-one's Issues

AttributeError: module 'lap' has no attribute 'lapjv'

Dear All,

I am facing installation issues. More specifically, when I install numpy version 1.24 I have no issue with lapjv, but I receive an error about floats. If I downgrade numpy to version 1.22, I get the error in the subject line. With numpy 1.24 the window opens and recognizes the first car, but then fails with the subject error. Your support will be appreciated.

Kind regards

Mismatch between detections and class ids when using bytetracker

Hello,

When using the ByteTrack tracker I encountered an issue where the tracked objects and the class ids / class names are sometimes mismatched. I see that in the detect_and_track function all class ids are extracted and all detections are sent to _tracker_update, but the tracker update filters out some of the targets with the box-shape criteria, resulting in a difference between the numbers of bboxes_xyxy, ids and scores and the number of class_ids returned by detect_and_track. Could this cause the error while displaying detections, since the index of each bounding box no longer matches the index of the class id and ultimately the class name?
If so, could a solution be to modify _tracker_update to add an online_class_ids list, append the class id extracted from dets for each online target that passes the box-shape criteria, and return that list along with online_xyxys, online_ids and online_scores, using it later instead of the original class_ids?

Thanks in advance!

Reference variable before assignment

In "detect" method you have created "detections" variable and instead of it you use "detection".

def detect(self, 
               image: list,
               conf_thres: float = 0.25,
               iou_thres: float = 0.45,
               classes: int = None,
               agnostic_nms: bool = False,
               input_shape=(640, 640),
               max_det: int = 1000,
               filter_classes = None) -> list:
     
        # Image Preprocessing
        original_image, processed_image = self.image_preprocessing(image, input_shape)
        
        # Inference
        if self.use_onnx:
            # Input names of ONNX model on which it is exported   
            input_name = self.model.get_inputs()[0].name
            # Run onnx model 
            pred = self.model.run([self.model.get_outputs()[0].name], {input_name: processed_image})[0]
            # Run Pytorch model        
        else:
            processed_image = torch.from_numpy(processed_image).to(self.device)
            # Change image floating point precision if fp16 set to true
            processed_image = processed_image.half() if self.fp16 else processed_image.float() 
            pred = self.model(processed_image, augment=False, visualize=False)[0]
       
        # Post Processing
        if isinstance(pred, np.ndarray):
            pred = torch.tensor(pred, device=self.device)
        predictions = non_max_suppression(pred, conf_thres, 
                                          iou_thres, classes, 
                                          agnostic_nms, 
                                          max_det=max_det)
        
        for i, prediction in enumerate(predictions):  # per image
            if len(prediction):
                prediction[:, :4] = scale_coords(
                    processed_image.shape[2:], prediction[:, :4], original_image.shape).round()
                predictions[i] = prediction
        detections = predictions[0].cpu().numpy()
        image_info = {
            'width': original_image.shape[1],
            'height': original_image.shape[0],
        }

        self.boxes = detections[:, :4]
        self.scores = detections[:, 4:5]
        self.class_ids = detections[:, 5:6]

        if filter_classes:
            class_names = get_names()

            filter_class_idx = []
            if filter_classes:
                for _class in filter_classes:
                    if _class.lower() in class_names:
                        filter_class_idx.append(class_names.index(_class.lower()))
                    else:
                        warnings.warn(f"class {_class} not found in model classes list.")

            detection = detection[np.in1d(detection[:,5].astype(int), filter_class_idx)]

        return detections, image_info
File "/home/zhora/workspace/AS-One/asone/detectors/yolov5/yolov5_detector.py", line 118, in detect
    detection = detection[np.in1d(detection[:,5].astype(int), filter_class_idx)]
UnboundLocalError: local variable 'detection' referenced before assignment
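
For reference, a minimal sketch of the fix the traceback points to, i.e. filtering the detections array that actually exists in this scope instead of the undefined detection variable (illustrative only, not the upstream patch):

if filter_classes:
    class_names = get_names()
    filter_class_idx = []
    for _class in filter_classes:
        if _class.lower() in class_names:
            filter_class_idx.append(class_names.index(_class.lower()))
        else:
            warnings.warn(f"class {_class} not found in model classes list.")
    # Filter the existing `detections` array and keep the result
    detections = detections[np.in1d(detections[:, 5].astype(int), filter_class_idx)]

return detections, image_info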

Lap Package issue

When I attempt to 'pip install asone', I get an error with the lap package.
Has anyone else seen this issue? Is there a fix?

(screenshot of the error attached)

How to set a custom path for downloaded weights?

Hello guys, Thank you for such amazing work!

Is there a way to specify a custom path for downloaded weights instead of the current working directory?
e.g. to be able to use the Tracker this way:

Tracker(asone.BYTETRACK, model, use_cuda=False, custom_model_path)

If custom_model_path already exists, the .pt files would be loaded from that path (the same way they are currently downloaded to the working directory).

Thanks

Mac support

Hi,

Will this software work on Mac as well?

Multiple streams not supported

With the YOLO packages (v5, v6) we can open multiple streams like this:
python3 track.py --source streams.txt

And this would open multiple streams, where streams.txt looks like below:
video1.mp4
video2.mp4
video3.mp4

But AS-One's main.py can only open one stream at a time? Will multiple streams be supported in the near future?
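
Until multiple streams are supported natively, a minimal workaround sketch with the current API is to read streams.txt yourself and process the sources one after another (sequential, not truly parallel; paths are illustrative):

import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu

# streams.txt: one video path per line, as in the YOLO packages
with open('streams.txt') as f:
    sources = [line.strip() for line in f if line.strip()]

for source in sources:
    # Each source is tracked in turn with the existing single-stream API
    for model_output in model.video_tracker(source):
        annotations = ASOne.draw(model_output, display=True)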

Installation issues

Hello,

Is there a simple way to install this software on Linux? I face several incompatibility issues that are hard to resolve. Which versions of super_gradients and numpy does this software require?

Run on Jetson Nvidia

Hi,
First, thank you for a great repo. I have a question (this is not an issue): can this project run on a Jetson Xavier?
Have you tried this project on any NVIDIA Jetson?
Do I need to install anything special to run this project on an NVIDIA Jetson, or are the libraries the same?

Thank you again :)

TensorRT Support?

Hello. Would it be possible to integrate TensorRT support in the library, at least for the models that support it? I have a YOLOv7 model to run on a Jetson Nano, and I would like the faster inference offered by TensorRT while keeping the accessibility and simplicity that this wonderful module brings. Thanks!

Support for Strong Sort

Support for StrongSORT would be very helpful; an option could be given for whether to use StrongSORT or DeepSORT.

Not compatible with YOLOv5 6.2 trained models

Traceback (most recent call last):
  File "main.py", line 40, in <module>
    main(args)
  File "main.py", line 6, in main
    dt_obj = ASOne(
  File "/home/zhora/workspace/AS-One/asone/asone.py", line 20, in __init__
    self.detector = self.get_detector(detector)
  File "/home/zhora/workspace/AS-One/asone/asone.py", line 25, in get_detector
    detector = Detector(detector, use_cuda=self.use_cuda).get_detector()
  File "/home/zhora/workspace/AS-One/asone/detectors/detector.py", line 17, in __init__
    self.model = self._select_detector(model_flag, use_cuda)
  File "/home/zhora/workspace/AS-One/asone/detectors/detector.py", line 25, in _select_detector
    _detector = YOLOv5Detector(weights=weight,
  File "/home/zhora/workspace/AS-One/asone/detectors/yolov5/yolov5_detector.py", line 26, in __init__
    self.model = self.load_model(use_cuda, weights)
  File "/home/zhora/workspace/AS-One/asone/detectors/yolov5/yolov5_detector.py", line 40, in load_model
    model = attempt_load(weights, device=self.device, inplace=True, fuse=True)
  File "/home/zhora/workspace/AS-One/asone/detectors/yolov5/yolov5/models/experimental.py", line 33, in attempt_load
    ckpt = torch.load(w, map_location='cpu')  # load
  File "/home/zhora/workspace/venv/lib/python3.8/site-packages/torch/serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/zhora/workspace/venv/lib/python3.8/site-packages/torch/serialization.py", line 1049, in _load
    result = unpickler.load()
  File "/home/zhora/workspace/venv/lib/python3.8/site-packages/torch/serialization.py", line 1042, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'DetectionModel' on <module 'models.yolo' from '/home/zhora/workspace/AS-One/asone/detectors/yolov5/yolov5/models/yolo.py'>

YOLO-World integration

Hello,
Thanks a lot for this amazing implementation. My question is: can we easily integrate YOLO-World into AS-One?
Do you have any plans to do that soon?

issue in YOLOv8Detector

Hi,
I'm running the detector on a custom model (i.e., I provide the weights and classes).
I got an error at line 107:
warnings.warn(
f"class {_class} not found in model classes list.")

I can resolve it when I replace line 99
class_names = get_names()
with
class_names = list(self.model.names.values())

Am I right?
Thanks

import issue

When I run the code on Linux/Windows/Google Colab, I get the same errors:

(screenshot of the error attached)

Export Detector object

Hi,

Just wondering if I have to build my entire computer vision pipeline inside the AS-One project, or is there a simple way of exporting the detector I've created?

Thanks

ModuleNotFoundError: No module named 'asone.detectors.easyocr_detector'

Hello,
I am trying to build a Docker container for an asone-based app, but I get some strange behavior.
First, the error:
root@9c83bf0cbc7b:/usr/src/app# python3
Python 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import asone
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/asone/__init__.py", line 1, in <module>
    from .asone import ASOne
  File "/usr/local/lib/python3.8/dist-packages/asone/asone.py", line 6, in <module>
    import asone.utils as utils
  File "/usr/local/lib/python3.8/dist-packages/asone/utils/__init__.py", line 6, in <module>
    from asone.utils.temp_loader import get_detector, get_tracker
  File "/usr/local/lib/python3.8/dist-packages/asone/utils/temp_loader.py", line 1, in <module>
    from asone.detectors import YOLOv5Detector
  File "/usr/local/lib/python3.8/dist-packages/asone/detectors/__init__.py", line 7, in <module>
    from asone.detectors.detector import Detector
  File "/usr/local/lib/python3.8/dist-packages/asone/detectors/detector.py", line 13, in <module>
    from asone.detectors.easyocr_detector.text_detector import TextDetector
ModuleNotFoundError: No module named 'asone.detectors.easyocr_detector'

Second, ls on the installed asone package shows no easyocr_detector folder. (The folder exists on github.com/augmentedstartups/AS-One.)
root@9c83bf0cbc7b:/usr/src/app# ls /usr/local/lib/python3.8/dist-packages/asone/detectors
__init__.py  __pycache__  detector.py  utils  yolor  yolov5  yolov6  yolov7  yolov8  yolox

I used "pip install asone".
I guess the PyPI package and the GitHub repo are out of sync!?

Add new trackers?

Hi,

Will the Deep OC-SORT and SMILEtrack trackers be added?

Thanks!

typing-extensions package incompatibility with asone package

When going through the installation instructions and running $pip install typing_extensions==4.7.1, I get:

Collecting typing_extensions==4.7.1
Downloading typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Installing collected packages: typing_extensions
Attempting uninstall: typing_extensions
Found existing installation: typing-extensions 3.10.0.2
Uninstalling typing-extensions-3.10.0.2:
Successfully uninstalled typing-extensions-3.10.0.2

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
asone 0.3.3 requires typing-extensions==3.10.0.2, but you have typing-extensions 4.7.1 which is incompatible.

Successfully installed typing_extensions-4.7.1

Installation on Apple M1

Hello,

It seems there are some issues installing cython-bbox on Apple M1. Is there any alternative or solution to the compilation issue so that AS-One can be installed?

Thanks

Why coremltools are required on Linux

Hello,

I am trying to install this software on Linux; however, I get a weird error:

yolov5_detector.py", line 8, in <module>
    import coremltools as ct
ModuleNotFoundError: No module named 'coremltools'

I am under the impression that CoreML is for Apple machines.

results in a txt file

Is it possible to get a txt file with the coordinates of the detected objects? Maybe in the classical form: <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
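
One possible sketch with the public API shown in the README, assuming the detector output is an array of rows laid out as x1, y1, x2, y2, score, class_id (that layout is an assumption here, not documented behaviour):

import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/test.mp4')

with open('detections.txt', 'w') as f:
    for frame_id, img in enumerate(vid, start=1):
        detection = model.detecter(img)
        for row in detection:
            x1, y1, x2, y2, score = row[0], row[1], row[2], row[3], row[4]  # assumed row layout
            # MOT-style line: frame, id (-1 when untracked), left, top, width, height, conf
            f.write(f"{frame_id},-1,{x1:.1f},{y1:.1f},{x2 - x1:.1f},{y2 - y1:.1f},{score:.3f}\n")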

Issue while installing lap and other libs related to VS

Hello,
Thank you for this amazing work!
I have some installation issues. I get this error while installing asone, and even when installing the packages separately (for example pip install lap):

INFO:

  ########### EXT COMPILER OPTIMIZATION ###########
  INFO: Platform      :
    Architecture: x64
    Compiler    : msvc

  CPU baseline  :
    Requested   : 'min'
    Enabled     : none
    Flags       : none
    Extra checks: none

  CPU dispatch  :
    Requested   : 'max -xop -fma4'
    Enabled     : none
    Generated   : none
  INFO: CCompilerOpt.cache_flush[825] : write cache to path -> C:\Users\AppData\Local\Temp\pip-install-etdq5zeh\lap_45220a53799847869a905004fb7a2314\build\temp.win-amd64-cpython-38\Release\ccompiler_opt_cache_ext.py

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lap
Running setup.py clean for lap
Failed to build lap
Installing collected packages: lap
Running setup.py install for lap ... error
error: subprocess-exited-with-error

× Running setup.py install for lap did not run successfully.
│ exit code: 1
╰─> [107 lines of output]
Partial import of lap during the build process.
Generating cython files
running install
C:\Users\anaconda3\envs\py38_GPU_pyqt\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running config_cc
INFO: unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
INFO: build_src
INFO: building extension "lap.lapjv" sources
INFO: building data_files sources
INFO: build_src: building npy-pkg config files
running build_py
creating build
creating build\lib.win-amd64-cpython-38
creating build\lib.win-amd64-cpython-38\lap
copying lap\lapmod.py -> build\lib.win-amd64-cpython-38\lap
copying lap\__init__.py -> build\lib.win-amd64-cpython-38\lap
running build_ext
INFO: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
INFO: customize MSVCCompiler
INFO: customize MSVCCompiler using build_ext
INFO: CCompilerOpt.cc_test_flags[1029] : testing flags (/O2)
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages\numpy
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages\numpy\distutils
creating C:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages\numpy\distutils\checks
INFO: CCompilerOpt.cc_test_flags[1029] : testing flags (/WX)
WARN: CCompilerOpt.init[1175] : feature 'AVX512_KNL' is disabled, MSVC compiler doesn't support it
WARN: CCompilerOpt.init[1175] : feature 'AVX512_KNM' is disabled, MSVC compiler doesn't support it
INFO: CCompilerOpt.init[1717] : check requested baseline
INFO: CCompilerOpt.feature_test[1482] : testing feature 'SSE' with flags ()
WARN: CCompilerOpt.dist_test[598] : CCompilerOpt._dist_test_spawn[732] : Command (C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\bin\HostX86\x64\cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\anaconda3\envs\py38_GPU_pyqt\include -IC:\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.28.29910\include /TcC:\Users\chouchen2-admin\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages\numpy\distutils\checks\cpu_sse.c /FoC:\Users\CHOUCH~1\AppData\Local\Temp\tmphagtbu1r\Users\anaconda3\envs\py38_GPU_pyqt\Lib\site-packages\numpy\distutils\checks\cpu_sse.obj /WX) failed with exit status 2 output


So, could you tell me if it is related to the Visual Studio version? Thanks

MS Tools Requirements?

I tried to install but I encountered an error. I downloaded the MS Build Tools, but I do not know which package to install. (The interface is in Turkish)
(screenshot of the error attached)

How to use YOLO with pc camera

Hi everyone,
I found this repository, and it works perfectly with downloaded videos and photos, but I have a question. Do you know how to use this repository with my PC camera or an external camera?

Thanks in advance
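
A minimal sketch using OpenCV together with the per-image API shown in the README (device index 0 is the default built-in camera; swap in another index or a stream URL for an external camera):

import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu

cap = cv2.VideoCapture(0)  # 0 = default PC camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detection = model.detecter(frame)
    annotations = ASOne.draw(detection, img=frame, display=True)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()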
