
Lightweight Python library for adding real-time multi-object tracking to any detector.

Home Page: https://tryolabs.github.io/norfair/

License: BSD 3-Clause "New" or "Revised" License

Topics: tracking, tracking-algorithm, kalman-filter, object-tracking, video-tracking, video-inference-loop, deepsort, object-detection, pose-estimation, python

norfair's Introduction

Norfair by Tryolabs logo


Norfair is a customizable lightweight Python library for real-time multi-object tracking.

Using Norfair, you can add tracking capabilities to any detector with just a few lines of code.

Demo GIFs: tracking players in a soccer match with a moving camera, and tracking objects in 3D.

Features

  • Any detector expressing its detections as a series of (x, y) coordinates can be used with Norfair. This includes detectors performing tasks such as object or keypoint detection (see examples).

  • Modular. It can easily be inserted into complex video processing pipelines to add tracking to existing projects. At the same time, it is possible to build a video inference loop from scratch using just Norfair and a detector.

  • Supports moving camera, re-identification with appearance embeddings, and n-dimensional object tracking (see Advanced features).

  • Norfair provides several predefined distance functions to compare tracked objects and detections. The distance functions can also be defined by the user, enabling the implementation of different tracking strategies.

  • Fast. Inference speed is bounded only by the detection network feeding detections to Norfair.

Norfair is built, used and maintained by Tryolabs.

Installation

Norfair currently supports Python 3.8+. The last version tested with Python 3.7 is Norfair 2.2.0; later versions may work, but no specific support is planned.

For the minimal version, install as:

pip install norfair

To install the optional dependencies that enable additional features:

pip install norfair[video]  # Adds several video helper features running on OpenCV
pip install norfair[metrics]  # Supports running MOT metrics evaluation
pip install norfair[metrics,video]  # Everything included

If the required dependencies are already present on the system, installing the minimal version of Norfair is enough to enable the extra features. This is particularly useful for embedded devices, where installing compiled dependencies can be difficult, but where they sometimes come preinstalled with the system.

Documentation

Getting started guide.

Official reference.

Examples & demos


We provide several examples of how Norfair can be used to add tracking capabilities to different detectors, and also showcase more advanced features.

Note: for ease of reproducibility, we provide Dockerfiles for all the demos. Even though Norfair does not need a GPU, the default configuration of most demos requires a GPU to be able to run the detectors. For this, make sure you install NVIDIA Container Toolkit so that your GPU can be shared with Docker.

It is possible to run several demos with a CPU, but you will have to modify the scripts or tinker with the installation of their dependencies.

Adding tracking to different detectors

Most tracking demos are showcased with vehicles and pedestrians, but the detectors are generally trained with many more classes from the COCO dataset.

  1. YOLOv7: tracking object centroids or bounding boxes.
  2. YOLOv5: tracking object centroids or bounding boxes.
  3. YOLOv4: tracking object centroids.
  4. Detectron2: tracking object centroids.
  5. AlphaPose: tracking human keypoints (pose estimation) and inserting Norfair into a complex existing pipeline.
  6. OpenPose: tracking human keypoints.
  7. YOLOPv2: tracking with a model for traffic object detection, drivable road area segmentation, and lane line detection.
  8. YOLO-NAS: tracking object centroids or bounding boxes.

Advanced features

  1. Speed up pose estimation by extrapolating detections using OpenPose.
  2. Track both bounding boxes and human keypoints (multi-class), unifying the detections from a YOLO model and OpenPose.
  3. Re-identification (ReID) of tracked objects using appearance embeddings. This is a good starting point for scenarios with a lot of occlusion, in which the Kalman filter alone would struggle.
  4. Accurately track objects even if the camera is moving, by estimating camera motion, potentially accounting for pan, tilt, rotation, movement in any direction, and zoom.
  5. Track points in 3D, using MediaPipe Objectron.
  6. Tracking of small objects, using SAHI: Slicing Aided Hyper Inference.

ROS integration

To make it even easier to use Norfair in robotics projects, we now offer a version that integrates with the Robot Operating System (ROS).

We provide a ROS package, along with a fully functional Dockerized environment, to help you take your first steps with the package and build your first application more easily.

Benchmarking and profiling

  1. Kalman filter and distance function profiling using TRT pose estimator.
  2. Computation of MOT17 scores using motmetrics4norfair.

Norfair OpenPose Demo

How it works

Norfair works by estimating the future position of each point based on its past positions. It then tries to match these estimated positions with newly detected points provided by the detector. For this matching, Norfair can rely on any distance function. Several predefined distances come integrated into Norfair, and users can also define their own custom distances, as in the sketch below. Each object tracker can therefore be made as simple or as complex as needed.
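
For instance, a custom distance is just a function taking a detection and a tracked object and returning a number. A minimal sketch (the threshold value is illustrative):

import numpy as np
from norfair import Tracker

# Mean Euclidean distance between a detection's points and the tracked
# object's estimated points (both are arrays of shape (n_points, 2)).
def mean_euclidean(detection, tracked_object):
    return np.linalg.norm(detection.points - tracked_object.estimate, axis=1).mean()

tracker = Tracker(distance_function=mean_euclidean, distance_threshold=20)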

As an example, we use Detectron2 to get single-point detections: we simply take the centroids of the bounding boxes it produces around cars as our detections, and get the following results.

Tracking cars with Norfair

On the left you can see the points we get from Detectron2, and on the right how Norfair tracks them assigning a unique identifier through time. Even a straightforward distance function like this one can work when the tracking needed is simple.

Norfair also provides several useful tools for creating a video inference loop. Here is what the full code for creating the previous example looks like, including the code needed to set up Detectron2:

import cv2
import numpy as np
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

from norfair import Detection, Tracker, Video, draw_tracked_objects

# Set up Detectron2 object detector
cfg = get_cfg()
cfg.merge_from_file("demos/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
detector = DefaultPredictor(cfg)

# Norfair
video = Video(input_path="video.mp4")
tracker = Tracker(distance_function="euclidean", distance_threshold=20)

for frame in video:
    detections = detector(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = [Detection(p) for p in detections['instances'].pred_boxes.get_centers().cpu().numpy()]
    tracked_objects = tracker.update(detections=detections)
    draw_tracked_objects(frame, tracked_objects)
    video.write(frame)

The video and drawing tools use OpenCV frames, so they are compatible with most Python video code available online. The point tracking is based on SORT generalized to detections consisting of a dynamically changing number of points per detection.

Motivation

Trying out the latest state-of-the-art detectors normally requires running repositories that weren't intended to be easy to use. These tend to be research repositories associated with a paper describing a novel approach to detection, and they are therefore intended to be run as one-off evaluation scripts that produce the metrics reported in the paper. This explains why they tend not to be easy to run as inference scripts, and why extracting the core model for use in another standalone script isn't always trivial.

Norfair was born out of the need to quickly add a simple layer of tracking over a wide range of newly released SOTA detectors. It was designed to seamlessly be plugged into a complex, highly coupled code base, with minimum effort. Norfair provides a series of modular but compatible tools, which you can pick and choose to use in your project.

Comparison to other trackers

Norfair's contribution to the Python object-tracking ecosystem is twofold: it works with any object detector, since it supports a variable number of points per detection, and it lets users heavily customize the tracker by writing their own distance functions.

If you are looking for a tracker, here are some other projects worth noting:

  • OpenCV includes several tracking solutions like KCF Tracker and MedianFlow Tracker which are run by making the user select a part of the frame to track, and then letting the tracker follow that area. They tend not to be run on top of a detector and are not very robust.
  • dlib includes a correlation single-object tracker; you have to build your own multiple-object tracker on top of it if you want to track several objects.
  • AlphaPose just released a new version of their human pose tracker. This tracker is tightly coupled to their code base and to the task of tracking human poses.
  • SORT and Deep SORT are similar to this repo in that they use Kalman filters (and a deep embedding for Deep SORT), but they are hardcoded to a fixed distance function and to tracking boxes. Norfair also adds some filtering when matching tracked objects with detections, and replaces the Hungarian algorithm with its own distance minimizer. Both repos are also released under the GPL license, which might be an issue for some individuals or companies, since the source code of derivative works must be published.

Benchmarks

MOT17 and MOT20 results obtained with the motmetrics4norfair demo script on the train split, using detections from ByteTrack's YOLOX object detection model.

MOT17 Train IDF1 IDP IDR Rcll Prcn MOTA MOTP
MOT17-02 61.3% 63.6% 59.0% 86.8% 93.5% 79.9% 14.8%
MOT17-04 93.3% 93.6% 93.0% 98.6% 99.3% 97.9% 07.9%
MOT17-05 77.8% 77.7% 77.8% 85.9% 85.8% 71.2% 14.7%
MOT17-09 65.0% 67.4% 62.9% 90.3% 96.8% 86.8% 12.2%
MOT17-10 70.2% 72.5% 68.1% 87.3% 93.0% 80.1% 18.7%
MOT17-11 80.2% 80.5% 80.0% 93.0% 93.6% 86.4% 11.3%
MOT17-13 79.0% 79.6% 78.4% 90.6% 92.0% 82.4% 16.6%
OVERALL 80.6% 81.8% 79.6% 92.9% 95.5% 88.1% 11.9%
MOT20 Train IDF1 IDP IDR Rcll Prcn MOTA MOTP
MOT20-01 85.9% 88.1% 83.8% 93.4% 98.2% 91.5% 12.6%
MOT20-02 72.8% 74.6% 71.0% 93.2% 97.9% 91.0% 12.7%
MOT20-03 93.0% 94.1% 92.0% 96.1% 98.3% 94.4% 13.7%
MOT20-05 87.9% 88.9% 87.0% 96.0% 98.1% 94.1% 13.0%
OVERALL 87.3% 88.4% 86.2% 95.6% 98.1% 93.7% 13.2%

Commercial support

Tryolabs can provide commercial support, implement new features in Norfair or build video analytics tools for solving your challenging problems. Norfair powers several video analytics applications, such as the face mask detection tool.

If you are interested, please contact us.

Citing Norfair

For citations in academic publications, please export your desired citation format (BibTeX or other) from Zenodo.

License

Copyright © 2022, Tryolabs. Released under the BSD 3-Clause License.

norfair's People

Contributors

3dgiordano, agosl, aguscas, dekked, diegofernandezc, donbraulio, facundo-lezama, fcakyon, gfugante, huh-david, javiber, joaqo, juanfkurucz, kadirnar, moooises, pjarbas, quantumdot, rocioxl, selimb, shafu0x, wakame1367


norfair's Issues

Import metrics error

@gerhc @draix @dekked Hi, thanks for sharing this source code. When I try to install with

pip install norfair[video]  # Adds several video helper features running on OpenCV
pip install norfair[metrics]  # Supports running MOT metrics evaluation
pip install norfair[metrics,video]  # Everything included

I get the following error: "ImportError: cannot import name 'metrics'"

Please let me know what the issue is.

Wrap Darknet yolov4 in Norfair's Detection objects

Hi. Thank you for the great tool.

I am wondering if it is possible to wrap darknet-based yolov4 (https://github.com/AlexeyAB/darknet/blob/master/darknet_video.py) in Norfair's detection objects. The darknet-based yolov4 (darknet_video.py) provides detections for each detected object in the frame in the following format:

class (str),
confidence (one value in str), and
bbox (a tuple with 4 values).

This format is shown in an image attached to the original issue.

Thanks a million.
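
A minimal sketch of such a wrapper, assuming darknet_video.py yields (class_name, confidence, (x, y, w, h)) tuples where (x, y) is the box center and the confidence comes as a string:

import numpy as np
from norfair import Detection

def darknet_to_norfair(darknet_detections):
    norfair_detections = []
    for class_name, confidence, bbox in darknet_detections:
        x, y, w, h = bbox
        norfair_detections.append(
            Detection(
                points=np.array([[x, y]]),            # centroid as a single (1, 2) point
                scores=np.array([float(confidence)]),
                data=class_name,                      # keep the label around for later use
            )
        )
    return norfair_detections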

Error with video variable if not run as a script

Hi Joaqo,

Using colab, when I run the Norfair detectron2 demo code by pasting the code into a cell, I receive the following error related to this line: video = Video(input_path="./video.mp4")
ValueError: not enough values to unpack (expected 2, got 0)
(Full error pasted below)

When the same code is called as a script (!python detectron2_cars.py) it runs with no issues.
Perhaps this is not unexpected behaviour, but I can't work out why it works in a script but not in the cell.


ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
     19
     20 # Norfair
---> 21 video = Video(input_path="./video.mp4")
     22 tracker = Tracker(distance_function=centroid_distance, distance_threshold=20)
     23

/usr/local/lib/python3.6/dist-packages/norfair/video.py in __init__(self, camera, input_path, output_path, output_fps, label, codec_fourcc)
     83         )
     84         self.task = self.progress_bar.add_task(
---> 85             self.abbreviate_description(description),
     86             total=total_frames,
     87             start=self.input_path is not None,

/usr/local/lib/python3.6/dist-packages/norfair/video.py in abbreviate_description(self, description)
    185     def abbreviate_description(self, description):
    186         """Conditionally abbreviate description so that progress bar fits in small terminals"""
--> 187         _, terminal_columns = os.popen("stty size", "r").read().split()
    188         space_for_description = (
    189             int(terminal_columns) - 25

ValueError: not enough values to unpack (expected 2, got 0)

CPU bottleneck when running the pose estimation demo

Hi,

I am trying to track pose estimates using the "Tracking pedestrians with AlphaPose" demo as a reference. However I am using Nvidia trt-pose (https://github.com/NVIDIA-AI-IOT/trt_pose) instead of alpha pose as given in the demo.

The pose estimation alone runs well at around 25 fps (with about 50% CPU usage); however, when I include the pose tracking, my fps drops to about 10-12, and it's definitely a CPU bottleneck, as my CPU usage is around 98% when running tracking.
I would like to know if this is considered "normal" for pose estimation tracking, or if I am doing something wrong on my end.

PC specs
GTX 1060 6GB
intel i7 8500 H
6GB ram

Thanks for the great work.

fast objects

Heya, I'm trying to figure out which param I can tinker with to track fast-moving objects. In my case the object is present in 4-5 frames; it's being detected by YOLO, but the tracker won't pick up on it. I set the distance to a higher amount, and I'm thinking transience might be it, but I don't 100% understand it.

Unexpected keyword argument 'past_detections_length'

I was trying to increase the length of a tracked object's past detections, but got an 'Unexpected keyword argument' error. I am running norfair==0.3.1 (pip installed). I think the version of the pip package doesn't correspond with the one published on GitHub? I attach a simple script to reproduce the issue:

import numpy as np
from norfair import Tracker

max_distance_between_points = 50

def euclidean_distance(detection, tracked_object):
    return np.linalg.norm(detection.points - tracked_object.estimate)

tracker = Tracker(
    distance_function=euclidean_distance,
    distance_threshold=max_distance_between_points,
    past_detections_length=50,
)

yolov5 integration with object detection

Hello,

Thanks for the repository, first of all. I am working on an NVIDIA Jetson TX2 developer kit and I can detect humans with YOLOv5 on the GPU. How can I make Norfair work on the GPU? And if we can't, is it lightweight enough for the Jetson TX2 as well?

Speeding up detection and tracking processes

Hi,

Thank you for the useful tracker. I just wanted to ask if it is possible to run a detector (such as yolo) to detect objects every N frames only, while norfair tracks objects and fills in the gaps efficiently to speed up the overall process. Running yolo or any other detector for each frame all the time might slow down the FPS.

If the idea is workable, which function from norfair can do the tracking when the detector is disabled?

Thanks a million.
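
A hedged sketch of one way to do this, based on the period argument that also appears in the OpenPose excerpt further down; make_detections is a hypothetical helper that wraps the detector's output in Norfair Detection objects:

N = 4  # run the detector on every Nth frame only

for i, frame in enumerate(video):
    if i % N == 0:
        detections = make_detections(detector(frame))  # hypothetical conversion helper
        tracked_objects = tracker.update(detections=detections, period=N)
    else:
        # No detections this frame: the tracker extrapolates positions
        # from its internal filters.
        tracked_objects = tracker.update()
    draw_tracked_objects(frame, tracked_objects)
    video.write(frame)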

Tracking my own objects

Hello, thanks for the great work. My question is: I have trained weights for my custom objects. Could I just apply the weights and track the custom objects? Thanks a lot.

Meanwhile, in testing, I found a problem with drawing boxes:

module 'norfair' has no attribute 'draw_boxes'

Citation information

This tool is amazing, and deserves to be cited in my work. Can you please provide the name of the authors so I can cite this tool properly?

Add audio from original video

Hello. I'm using Norfair to track objects in MP4 videos (which also contain audio). I'm using code similar to the OpenPose example (updated to support the new OpenPose API). Here's an excerpt of the code:

import norfair
from norfair import Detection, Tracker, Video
import pyopenpose as op

frame_skip_period = 3
detection_threshold = 0.01
distance_threshold = 0.4

for input_path in args.files:
    video = Video(input_path=input_path)
    tracker = Tracker(
        distance_function=keypoints_distance,
        distance_threshold=distance_threshold,
        detection_threshold=detection_threshold,
        point_transience=2,
    )
    keypoint_dist_threshold = video.input_height / 25

    for i, frame in enumerate(video):
        detected_poses = pose_detector(frame)
        if detected_poses is not None:
            detections = (
                []
                if not detected_poses.any()
                else [
                    Detection(p, scores=s)
                    for (p, s) in zip(detected_poses[:, :, :2], detected_poses[:, :, 2])
                ]
            )
            tracked_objects = tracker.update(
                detections=detections, period=frame_skip_period
            )
            norfair.draw_points(frame, detections)
            norfair.draw_tracked_objects(frame, tracked_objects)
        video.write(frame)

Here's the output MP4 (image attached in the original issue):

It has the poses overlaid appropriately. However, the original video had audio. Is there a way this can be added to the output video with Norfair?

Tracking static objects with moving camera

Hi,

I am trying to track static objects on the road with a camera mounted to a moving vehicle. I set hit_inertia_min to 1, hit_inertia_max to 10, and point_transience to 50. I noticed that at certain vehicle speeds the tracking works fine. However, at slower speeds, in a parking lot for example, the same object in view is interpreted as a new object every few seconds. Is this because of hit_inertia_max?
What parameters are recommended to change and can we change these dynamically based on vehicle speed while tracking is in progress?

face mask integration

Hi, firstly thanks for the work.

I'm implementing a face mask detection system over street cameras, and I wonder whether there is any way to integrate face mask detection with the Norfair tracking system. I detect faces and use them with Norfair with no problem. But when I want to integrate the face mask detection system, I can't figure out how to track and determine whether a tracked face has a mask or not based on the previous results.

Is there any way to do that? Thanks in advance

Darknet integration

Hello, thanks for sharing this awesome tracker.
I'm trying to integrate it with darknet, but it's not behaving as expected.

Here's what I do:

detections = darknet.detect_image(network, class_names, darknet_image, thresh=0.2)
detections2 = [
    Detection(get_centroid(detection[2], width, height), data=detection[2])
    for detection in detections
]
tracked_objects = tracker.update(detections=detections2)
norfair.draw_tracked_objects(frame_resized, tracked_objects)

The tracker is somewhat populated if I do a print(tracker.tracked_objects), but it doesn't look right, and draw_tracked_objects draws nothing. Since darknet outputs x and y by default, I've tried a variety of variations with converting to bbox etc. Nothing has worked and I'm hoping for a little push :)

Is the data parameter necessary in my case?

Edit: I forgot to output darknet's centroid as a numpy array; it's working now :)
For those who may end up with the same issue, change out get_centroid with something like this (just a cleanup of my messy demo function):

import numpy as np

def get_detlist(detection_bbox):
    # darknet's bbox is (x, y, w, h) with (x, y) the box center;
    # Norfair expects an array of points with shape (n_points, 2).
    return np.array([[detection_bbox[0], detection_bbox[1]]])

Detection(get_detlist(detection[2]), data=detection[2])

Paths.draw() error

Hi all,

When I call Paths.draw() I get the following error even though draw_tracked_objects(), which gets the same input arguments, works fine.

Traceback (most recent call last):
  File "C:\Users\path/1445870238.py", line 76, in <module>
    Paths.draw(original_frame, tracked_objects)
TypeError: draw() missing 1 required positional argument: 'tracked_objects'
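
For reference, Paths.draw is an instance method, which is why calling it on the class itself leaves tracked_objects unbound and produces the error above. A hedged sketch of the instantiated usage:

from norfair import Paths

path_drawer = Paths()  # instantiate once, before the frame loop
frame = path_drawer.draw(original_frame, tracked_objects)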

Detection and TrackedObject classes should contain class label (when applicable)

Hi,
before I begin I would like to thank you for the amazing work on this library and for the clear repo!

As the title suggests, I can't see why you are not giving end-users the possibility to keep track of class labels when working with traditional object detectors. I read that you added the data attribute to the Detection class (which can eventually store them), and I also read in issue #47 that you don't use Norfair much with different object classes, but I still think that an (optional) attribute for object classes would be great.

I was working on tracking different object classes, and I think it would be suboptimal to instantiate multiple trackers just to keep track of multiple object classes. I guess the codebase right now is not ready to handle multiple classes in a single tracker, since different classes could be mixed up by mistake, even though it would be a really nice addition.

If I had a class-label attribute for the Detection and/or TrackedObjects classes, I would love that.

Thanks for the hard work!
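
For what it's worth, later Norfair versions added exactly this: Detection accepts an optional label, and the tracker only matches detections to objects sharing that label, so a single tracker can handle several classes. A hedged sketch (the centroid value is hypothetical):

import numpy as np
from norfair import Detection, Tracker

tracker = Tracker(distance_function="euclidean", distance_threshold=30)

centroid = np.array([[100.0, 200.0]])  # hypothetical point
detections = [Detection(points=centroid, label="car")]
tracked_objects = tracker.update(detections=detections)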

Drawing boxes

Hi, I'm successfully able to convert my x,y,w,h coordinates to cx,cy, create Detection objects with those center points, and draw them with draw_tracked_objects.

However I saw that there's a drawing.draw_tracked_boxes function, and thought it'd look nicer if I drew boxes instead, but I can't seem to figure out how this works. I attempted to create Detection objects with 4 points (xyxy), but this class doesn't seem to accept 4 points. Is drawing tracked boxes currently supported?
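
In later Norfair versions this is supported: a bounding box is expressed as a two-point detection (top-left and bottom-right corners), which draw_boxes can render. A hedged sketch with hypothetical coordinates:

import numpy as np
from norfair import Detection, draw_boxes

bbox = np.array([[50, 60], [180, 240]])  # top-left and bottom-right corners (hypothetical)
detections = [Detection(points=bbox)]
frame = draw_boxes(frame, detections)  # frame is an OpenCV image from your video loop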

making it threaded?

In our experience, tracking runtime can be greatly reduced by threading each of the individual tracked objects (in this case TrackedObject), as there's less blocking etc. I don't have exact numbers, but it could easily be 10x with lots of objects and a not-too-expensive detector. It's pretty easy to do as well.

Would there be interest in this? My proposal would be to have a single-threaded or multi-threaded mode, as the former can be easier to debug etc.

Get the index of input detections based on object id

Hi thanks for the great work.
I want to get the index of the input detections based on an object id; how can I achieve that?
For example, I tried:

detections = [Detection(p) for p in predictions.pred_boxes.get_centers().numpy()]
tracked_objects = tracker.update(detections=detections)

## I want to get the detections of objects whose ids are 1 or 2
targeted_ids = [1, 2]
targeted_idx = [idx for idx, tObj in enumerate(tracked_objects) if tObj.id in targeted_ids]

## however, this does not select what I want to track
targeted_detections = [detections[idx] for idx in targeted_idx]

However, it turns out that the targeted_detections are not the objects I am interested in (i.e., those with tracked ids 1 or 2).

Thanks!
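
A hedged sketch of a more direct route: each TrackedObject keeps the detection it was last matched with in last_detection, so there is no need to index the input list by position:

targeted_ids = [1, 2]
targeted_detections = [
    obj.last_detection for obj in tracked_objects if obj.id in targeted_ids
]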

How to kill tracks when they leave scene?

First of all, thanks for the package. I am currently working on tracking cars using detections from a YOLOv5 network. It works fine when cars go through different lanes, but I am recording an entrance and exit on the same lane, and when a car leaves and another one enters, they get matched as the same car. I tried reducing point_transience, but that did not really help. Also, is there a common practice for setting up the tracker's hyperparameters (like measuring the average distance between detections)?
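
A hedged sketch of the usual knobs for this, assuming a recent Norfair version (in older releases hit_counter_max was called hit_inertia_max); the values are illustrative:

from norfair import Tracker

tracker = Tracker(
    distance_function="euclidean",
    distance_threshold=30,     # roughly the expected inter-frame movement, in pixels
    hit_counter_max=5,         # unmatched tracks die after ~5 frames instead of lingering
    initialization_delay=2,    # require a couple of consecutive matches before a track is emitted
)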

visualising bboxes/masks

Hey, how about an extension for the Detection class and drawing.py to add bboxes and masks to directly visualize them?

Do you think this would fit norfair?

norfair with yolov5

Hi, I'm trying to detect people in real time. When, say, person A enters and the camera sees them, I want to get their first (x, y) coordinate and their last one according to the tracking, but I couldn't get this to work. I also cannot show the IDs on the frame. I went through the README tutorial but I still can't fix it. Could you help me?

Max inertia

What happens when the hit inertia counter hits hit_inertia_max?
As per docs, hit_inertia_max defines how large inertia can grow, and therefore defines how long an object can live without getting matched to any detections. But does hit_inertia_min dictate how long an object lives before being destroyed for not getting matched?

Pose tracking

Hello,

I have a file that contains keypoints for my objects for all the frames of my video. I convert these values into numpy arrays to pass them to the Norfair Detection object, which works correctly. However, they don't always have the same shape in all the frames: sometimes they have shape (11, 2) in frame n and (10, 2) in frame n+1, which leads to the following error.

distances = np.linalg.norm(detected_pose.points - tracked_pose.estimate, axis=1)
ValueError: operands could not be broadcast together with shapes (11,2) (10,2) 

I'm trying to use this script as a reference to develop my own.

Thank you for the help.
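
A hedged sketch of one workaround: keep a fixed keypoint count per detection and pad the missing ones with score 0, so that the tracker's detection_threshold ignores them (this assumes you know which keypoint indices are missing; here the visible points are simply placed first):

import numpy as np
from norfair import Detection

NUM_KEYPOINTS = 11  # assumed full skeleton size

def make_padded_detection(points, scores):
    padded_points = np.zeros((NUM_KEYPOINTS, 2))
    padded_scores = np.zeros(NUM_KEYPOINTS)
    padded_points[: len(points)] = points
    padded_scores[: len(scores)] = scores
    return Detection(padded_points, scores=padded_scores)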

class value and export coordinates to csv

Hi, I have a question related to issue #19:
I'm working through the detectron2 demo (detectron2_cars.py) to understand the code, and wanted to change the detected class from cars to person. I found that changing line 32 to "if c == 0" does this. How do I export/print a list of the detected classes?

Also, I would like to export to a csv file the centroid/center coordinates for each tracked object for each frame (similar to print_objects_as_table, but ideally with the data from each frame in one row). Any suggestions?
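
A hedged sketch of such an export, assuming centroid tracking so that obj.estimate has shape (1, 2); make_detections is a hypothetical helper wrapping the detector output:

import csv

with open("tracks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "object_id", "x", "y"])
    for frame_number, frame in enumerate(video):
        tracked_objects = tracker.update(detections=make_detections(frame))  # hypothetical helper
        for obj in tracked_objects:
            x, y = obj.estimate[0]  # single-point (centroid) tracking assumed
            writer.writerow([frame_number, obj.id, x, y])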

distance function examples

Are there any more examples of distance functions? I'm doing tracking with bounding boxes, and a feature extractor on the box. Is there an example that uses this type of setup for norfair?
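
A hedged sketch of such a setup, assuming each Detection stores its feature vector in .data and its box centroid in .points; alpha weights the spatial term against the appearance term:

import numpy as np

def spatial_plus_appearance(detection, tracked_object, alpha=0.5):
    # Spatial term: centroid distance between the detection and the prediction.
    spatial = np.linalg.norm(detection.points - tracked_object.estimate)
    # Appearance term: cosine distance between the stored feature vectors.
    a = detection.data
    b = tracked_object.last_detection.data
    appearance = 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return alpha * spatial + (1.0 - alpha) * appearance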

tracker update empty list with yolov5

Hi community, I am trying to use the demo of yolov5 in a custom video.
The problem is that norfair does not draw anything at all.
I have done some debugging and I can see that yolo_detections look fine. It contains the bounding boxes, the score and the category. Besides, detections also looks fine, it contains the centroid of the object and the score.
The main problem lies in: tracked_objects = tracker.update(detections=detections).
tracked_objects remains an empty list...
Any ideas on what is going on?

Thanks in advance! 😄
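
A hedged guess worth checking in cases like this: by default the tracker only returns an object after it has been matched over initialization_delay consecutive updates, so the first calls legitimately return an empty list; lowering the delay surfaces tracks sooner:

from norfair import Tracker

tracker = Tracker(
    distance_function="euclidean",
    distance_threshold=30,
    initialization_delay=0,  # emit tracked objects as soon as they are matched once
)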

Dependency Issue

I'm using Norfair and another library. However, I'm getting a version conflict for rich.

Is it possible to change Norfair's dependency constraint to allow a higher version of rich? I'm using poetry to install Norfair and I'm getting "SolveProblemError: Because norfair (0.4.0) depends on rich (>=9.10.0,<10.0.0)". I need to use at least version 11.2.0.

I can force-install a higher version of rich using pip, but I think it would be cleaner if the dependencies were accurate. Norfair seems to work fine even with a higher version of rich.

MicroPython support

Hi, there are a lot of applications these days on embedded devices that need a customizable object tracking library like norfair. Is there any plan for supporting micropython?

hit_inertia_min and initialization_delay meaning and behaviour

Thank you for the useful project! I am somewhat confused by the tracker parameters in the API. From the description of initialization_delay:

Each tracked object waits till its internal hit inertia counter goes over hit_inertia_min to be considered as a potential object to be returned to the user by the Tracker. The argument initialization_delay determines by how much the object's hit inertia counter must exceed hit_inertia_min to be considered as initialized and get returned to the user as a real object.

From there, my understanding was the following. Let's say we have a static object which we detect during the next 10 frames, and then it disappears. If we set hit_inertia_min = 5, hit_inertia_max = 10, initialization_delay = 3, then the tracker will wait till it gets 5 matches (hits), then it considers the object a potential object (it is not clear what that is). Then we wait till the counter exceeds hit_inertia_min + initialization_delay = 8, and on the 9th hit we will get a real object.

At the same time, I did a test run with a fixed detection that I feed to the tracker for 10 frames. After 20 updates, the output of tracker.update(detections=detections) looks like this:

Update number: 0
Tracked objects: []
____________
Update number: 1
Tracked objects: []
____________
Update number: 2
Tracked objects: []
____________
Update number: 3
Tracked objects: [Object_1(age: 3, hit_counter: 9, last_distance: 0.00, init_id: 33)]
____________
Update number: 4
Tracked objects: [Object_1(age: 4, hit_counter: 10, last_distance: 0.00, init_id: 33)]
____________
Update number: 5
Tracked objects: [Object_1(age: 5, hit_counter: 11, last_distance: 0.00, init_id: 33)]
____________
Update number: 6
Tracked objects: [Object_1(age: 6, hit_counter: 10, last_distance: 0.00, init_id: 33)]
____________
Update number: 7
Tracked objects: [Object_1(age: 7, hit_counter: 11, last_distance: 0.00, init_id: 33)]
____________
Update number: 8
Tracked objects: [Object_1(age: 8, hit_counter: 10, last_distance: 0.00, init_id: 33)]
____________
Update number: 9
Tracked objects: [Object_1(age: 9, hit_counter: 11, last_distance: 0.00, init_id: 33)]
____________
Update number: 10
Tracked objects: [Object_1(age: 10, hit_counter: 10, last_distance: 0.00, init_id: 33)]
____________
Update number: 11
Tracked objects: [Object_1(age: 11, hit_counter: 9, last_distance: 0.00, init_id: 33)]
____________
Update number: 12
Tracked objects: [Object_1(age: 12, hit_counter: 8, last_distance: 0.00, init_id: 33)]
____________
Update number: 13
Tracked objects: [Object_1(age: 13, hit_counter: 7, last_distance: 0.00, init_id: 33)]
____________
Update number: 14
Tracked objects: [Object_1(age: 14, hit_counter: 6, last_distance: 0.00, init_id: 33)]
____________
Update number: 15
Tracked objects: [Object_1(age: 15, hit_counter: 5, last_distance: 0.00, init_id: 33)]
____________
Update number: 16
Tracked objects: [Object_1(age: 16, hit_counter: 4, last_distance: 0.00, init_id: 33)]
____________
Update number: 17
Tracked objects: []
____________
Update number: 18
Tracked objects: []
____________
Update number: 19
Tracked objects: []
____________

This means that once hits (age) exceed initialization_delay, we already have Object_1 in the list of objects. At age 9, nothing happens (no change from potential to real object). Also, we see that when age exceeds initialization_delay, hit_counter starts from the value 9 (what does this number mean?). The disappearance of the object at hit_counter value 3 is expected, as is the max hit_counter value of 10 (though it fluctuates between 10 and 11).

I would be glad for some clarification on the meaning of these params.

Detection class label

Hi @joaqo, thank you for the amazing work!

I'm currently using Deep SORT with YOLOv5 for tracking the objects I'm interested in. I would like to drop Deep SORT and use your library, which is faster. However, I have a question regarding the tracking results. In Deep SORT, I'm able to return the tracker id, class name, confidence score, and bbox. Is there any way to access this information via Norfair tracking? I tried to access it but had no success. Would appreciate any help.

Thanks!

Embeddings in TrackedObject

Hi,

I'm trying to implement a tracker which uses embeddings.
I'm storing each embedding in detection.data; however, I also need a place to store the TrackedObject embedding, in order to compute a suitable distance.
Is there something similar to detection.data for TrackedObject?
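
A hedged sketch of a workaround: TrackedObject has no dedicated data slot, but it keeps its matched detections, so the embedding can be read back from the most recent one:

def object_embedding(tracked_object):
    # The embedding the user stored in Detection.data at detection time.
    return tracked_object.last_detection.data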

v0.3.2 release date

Hello, we are using some of the features that were added after the 0.3.1 release. When do you plan to publish a new release?

yolov4demo defining the object to be detected

Thanks for this work!

Where is the object class to be detected (out of the 80 classes) defined in yolov4demo.py? I believe the class 'car' is defined somewhere, so where can I change that?

Checking cuda availability in YOLOv4 demo

Hi,

I was running your YOLOv4 demo on a standard Google Colab CPU environment earlier. In YOLO.__init__ (line 21), the map_location does not check the availability of CUDA. This threw a NoneType error that took a few minutes to locate and solve. Perhaps you could do a quick revision of this line to include a check on cuda.is_available()?

Thanks =)
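
A hedged sketch of the suggested one-line fix (weights_path is hypothetical):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
state_dict = torch.load(weights_path, map_location=torch.device(device))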

How to evaluate Norfair on my own dataset?

Hello,

I am trying to run Norfair together with YOLOv4 on my own dataset, and I want to know how well Norfair is tracking the objects in it. Can you please help me evaluate Norfair?
