atomscott / sportslabkit

A Python package for turning sports video into CSV files

Home Page: https://sportslabkit.rtfd.io

License: GNU General Public License v3.0

Languages: Jupyter Notebook 99.49%, Python 0.50%, Makefile 0.01%
Topics: soccer, tracking, football, data-science, python, computer-vision, sports, sports-analytics, multi-object-tracking, multiobject-tracking

sportslabkit's Introduction

SportsLabKit

Introduction

Meet SportsLabKit: The essential toolkit for advanced sports analytics. Designed for pros and amateurs alike, we convert raw game footage into actionable data.

We're kicking off with soccer and expanding to other sports soon. Need to quantify your game? Make human movement computable with SportsLabKit.

Features

Core Capabilities

  • High-Performance Tracking: In-house implementations of SORT, DeepSORT, ByteTrack, and TeamTrack for object tracking in sports.

Flexibility

  • Plug-and-Play Architecture: Swap out detection and ReID models on the fly. Supported models include YOLOv8 and torch-ReID.

Usability

  • 2D Pitch Calibration: Translate bounding boxes to 2D pitch coordinates.

  • DataFrame Wrappers: BoundingBoxDataFrame and CoordinatesDataFrame for effortless manipulation and analysis of tracking data.

Tutorials

  • Get Started: Your first steps in understanding and setting up SportsLabKit.
  • User Guide: A comprehensive guide for effectively using the toolkit in real-world scenarios.
  • Core Components: Deep dive into the essential elements that make up SportsLabKit, including tracking algorithms and DataFrame wrappers.

Installation

To install SportsLabKit, simply run:

pip install SportsLabKit

Note: We're in active development, so expect updates and changes.

Example Usage

To get started with tracking your first game, follow this simple example:

import sportslabkit as slk

from sportslabkit.mot import SORTTracker

# Initialize your camera and models
cam = slk.Camera(path_to_mp4)
det_model = slk.detection_model.load('YOLOv8x', imgsz=640)
motion_model = slk.motion_model.load('KalmanFilter', dt=1/30, process_noise=10000, measurement_noise=10)

# Configure and execute the tracker
tracker = SORTTracker(detection_model=det_model, motion_model=motion_model)
tracker.track(cam[:100])
res = tracker.to_bbdf()

save_path = "assets/tracking_results.mp4"
res.visualize_frames(cam.video_path, save_path)

# The tracking data is now ready for analysis

The output is a BoundingBoxDataFrame, a multi-level Pandas DataFrame that contains Team ID, Player ID, and various attributes like bounding box dimensions. Each row is indexed by Frame ID for easy analysis. The DataFrame is also customizable, allowing you to adapt Team and Player IDs as needed.
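As a rough illustration, slicing works like any pandas DataFrame with MultiIndex columns. The level order and attribute names below are assumptions based on the description above, not the exact schema:

# Hypothetical access patterns for a BoundingBoxDataFrame. Assumes
# res.columns is a 3-level MultiIndex of (team_id, player_id, attribute)
# and rows are indexed by frame id.
team0 = res[0]                                # every player in team 0
player = res.loc[:, (0, 7)]                   # team 0, player 7, all attributes
widths = res.xs('bb_width', axis=1, level=2)  # one attribute for all players
clip = res.loc[100:200]                       # frames 100 through 200 only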

Example of BoundingBoxDataFrame

Roadmap

  • Better CV tools: Implement state-of-the-art tracking methods, add event detection, etc.

  • Unified Data Representation: In the pipeline are event data detection and a single DataFrame structure for both event and trajectory data.

  • Enhanced Compatibility: Upcoming support for data export to standard formats for easy integration with other tools.

Contributing

See the Contributing Guide for more information.

Contributors


Atom Scott 🚧
Ikuma Uchida
shunsuke-iwashita 🐛

This project follows the all-contributors specification. Contributions of any kind welcome!

Related Papers

SoccerTrack:
A Dataset and Tracking Algorithm for Soccer with Fish-eye and Drone Videos

Atom Scott*, Ikuma Uchida*, Masaki Onishi, Yoshinari Kameda, Kazuhiro Fukui, Keisuke Fujii

Presented at CVPR Workshop on Computer Vision for Sports (CVSports'22). *Authors contributed equally.

See papers that cite SoccerTrack on Google Scholar.

Citation

@inproceedings{scott2022soccertrack,
  title={SoccerTrack: A Dataset and Tracking Algorithm for Soccer With Fish-Eye and Drone Videos},
  author={Scott, Atom and Uchida, Ikuma and Onishi, Masaki and Kameda, Yoshinari and Fukui, Kazuhiro and Fujii, Keisuke},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3569--3579},
  year={2022}
}


sportslabkit's Issues

Template Google Colab Notebooks for Comprehensive Video Analysis

Search before asking

  • I have searched the SportsLabKit issues and found no similar feature requests.

Description

Hello,
I hope you've been well since yesterday aha! I wanted to put forth a suggestion that I believe could greatly benefit the user community of SportsLabKit: template Google Colab notebooks that offer a comprehensive walkthrough of processing a video from start to finish.

Use case

This could encompass steps like:

  • Importing a video.
  • Tracking players and the ball.
  • Displaying tracking results superimposed on the video.
  • Defining the field boundaries.
  • Analyzing data and visualizing it on a 2D pitch.
  • Drawing conclusions based on the analysis.

Roboflow's educational notebooks can serve as an inspiration: Roboflow Notebook Link

Additional

I understand that not all functionalities of SportsLabKit are fully polished. However, having iterative versions of such a Colab notebook, updated with each enhancement, could be a practical way to:

  • Facilitate quick testing and refinement.
  • Speed up development.
  • Act as an invaluable educational resource, thus lowering the entry barrier.

Having worked with computer vision for some time, I see vast potential in SportsLabKit. I'm willing to contribute, be it in drafting, testing, or other aspects, to bring this suggestion to fruition.

Thank you for considering this proposal. Looking forward to the community's feedback!

Warm regards,
Joris

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

Errors in notebooks/02_user_guide/detection_with_yolov5.ipynb

Search before asking

  • I have searched the SoccerTrack issues and found no similar bug report.

SoccerTrack Component

Training

Bug

This happens when splitting the video into individual frames (Step 5 in notebooks/02_user_guide/detection_with_yolov5.ipynb), using soccertrack.datasets.get_path('top_view'):

import cv2
from tqdm import tqdm

# cam and save_dir are defined in earlier notebook cells
for frame_num, frame in enumerate(tqdm(cam.iter_frames())):
    file_path = f'{save_dir}/{frame_num:06d}.png'
    cv2.imwrite(file_path, frame)

Problems:

  1. {frame_num:06d}.png numbering starts from 000000.png, but the annotations start from 000001.txt (see the sketch below).
  2. When this step finishes, the progress bar stops at 99% (891/900), so not every frame is written.
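A minimal workaround for the off-by-one, assuming the annotation files are 1-indexed, is to start the frame counter at 1:

import cv2
from tqdm import tqdm

# start=1 makes 000001.png line up with 000001.txt
for frame_num, frame in enumerate(tqdm(cam.iter_frames()), start=1):
    file_path = f'{save_dir}/{frame_num:06d}.png'
    cv2.imwrite(file_path, frame)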

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

AttributeError: 'list' object has no attribute 'visualize_frames' (notebooks/01_get_started/introduction_to_soccertrack.ipynb)

Search before asking

  • I have searched the SoccerTrack issues and found no similar bug report.

SoccerTrack Component

No response

Bug

The last code section fails (all the rest passed successfully):

save_path = "assets/tracking_results.mp4"
res.visualize_frames(cam.video_path, save_path)

out:

AttributeError Traceback (most recent call last)
Cell In [13], line 2
1 save_path = "assets/tracking_results.mp4"
----> 2 res.visualize_frames(cam.video_path, save_path)

AttributeError: 'list' object has no attribute 'visualize_frames'
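Judging by the README example earlier on this page, track() populates the tracker and to_bbdf() produces the object that exposes visualize_frames. Assuming the same API, the likely fix is:

tracker.track(cam[:100])
res = tracker.to_bbdf()  # convert the tracker state to a BoundingBoxDataFrame
res.visualize_frames(cam.video_path, save_path)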

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

[Feature] Intuitive Interface for Core Objects

Search before asking

  • I have searched the SoccerTrack issues and found no similar feature requests.

Description

Design SoccerTrack so that it can be used without having to think hard about it.

Use case

Below is an image of the intended usage; it may change in the future.

from soccertrack import MultiObjectTracker, ObjectDetector, Camera

camera = Camera(input_path="video.mp4")
detector = ObjectDetector(cfg="detector_conf.yaml")
tracker = MultiObjectTracker(cfg="tracker_conf.yaml")

for frame in camera.iter_frames(frame_type='include_camera_info'):
    detections = detector(frame)
    tracked_objects = tracker.update(detections=detections)

tracker.results()

classDiagram
    Animal <|-- Duck
    Animal <|-- Fish
    Animal <|-- Zebra
    Animal : +int age
    Animal : +String gender
    Animal: +isMammal()
    Animal: +mate()
    class Duck{
        +String beakColor
        +swim()
        +quack()
    }
    class Fish{
        -int sizeInFeet
        -canEat()
    }
    class Zebra{
        +bool is_wild
        +run()
    }

Additional

Implementation of components

  • #23
  • A better ObjectDetector Object
  • A better MultiObjectTracker Object

Survey of existing tools

motpy
import numpy as np

from motpy import Detection, MultiObjectTracker

# create a simple bounding box with format of [xmin, ymin, xmax, ymax]
object_box = np.array([1, 1, 10, 10])

# create a multi object tracker with a specified step time of 100ms
tracker = MultiObjectTracker(dt=0.1)

for step in range(10):
    # let's simulate object movement by 1 unit (e.g. pixel)
    object_box += 1

    # update the state of the multi-object-tracker tracker
    # with the list of bounding boxes
    tracker.step(detections=[Detection(box=object_box)])

    # retrieve the active tracks from the tracker (you can customize
    # the hyperparameters of tracks filtering by passing extra arguments)
    tracks = tracker.active_tracks()

    print('MOT tracker tracks %d objects' % len(tracks))
    print('first track box: %s' % str(tracks[0].box))
norfair
import cv2
import numpy as np
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

from norfair import Detection, Tracker, Video, draw_tracked_objects

# Set up Detectron2 object detector
cfg = get_cfg()
cfg.merge_from_file("demos/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
detector = DefaultPredictor(cfg)

# Norfair
def euclidean_distance(detection, tracked_object):
    # distance between the detected point and the track's current estimate
    return np.linalg.norm(detection.points - tracked_object.estimate)

video = Video(input_path="video.mp4")
tracker = Tracker(distance_function=euclidean_distance, distance_threshold=20)

for frame in video:
    detections = detector(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = [Detection(p) for p in detections['instances'].pred_boxes.get_centers().cpu().numpy()]
    tracked_objects = tracker.update(detections=detections)
    draw_tracked_objects(frame, tracked_objects)
    video.write(frame)
multi-object-tracker
from motrackers import CentroidTracker # or IOUTracker, CentroidKF_Tracker, SORT
input_data = ...
detector = ...
tracker = CentroidTracker(...) # or IOUTracker(...), CentroidKF_Tracker(...), SORT(...)
while True:
    done, image = <read(input_data)>
    if done:
        break
    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
    # NOTE: 
    # * `detection_bboxes` are numpy.ndarray of shape (n, 4) with each row containing (bb_left, bb_top, bb_width, bb_height)
    # * `detection_confidences` are numpy.ndarray of shape (n,);
    # * `detection_class_ids` are numpy.ndarray of shape (n,).
    output_tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)
    # `output_tracks` is a list with each element containing tuple of
    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>)
    for track in output_tracks:
        frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
        assert len(track) == 10
        print(track)

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!

If you are willing to submit a PR, please also confirm the following items.

Implementation plan

Checklist

  • I have discussed the requested feature's details with core devs.
  • I have made a draft Pull Request.
  • I have made a Pull Request.
  • I have linked the PR to this issue.

Data Synchronization

Hi Atom.
I have extracted the top-view and wide-view image frames and checked the first images, but they are not synchronous. I have computed the homographies for the wide view, and converted the GNSS coordinates to local pitch coordinates. To check them, I also computed the inverse homography matrix from the wide-view coordinates and applied it to the first frame of the wide-view local pitch coordinates and the first row (13:40:18.100000) of the GNSS local pitch coordinates, but when displayed together they do not match either.

Red dots are from the wide view and blue dots are from GNSS.

[Feature] Visualization of tracking results

Search before asking

  • I have searched the SoccerTrack issues and found no similar feature requests.

Description

Use case

Additional

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

If you are willing to submit a PR, please do the following.

  • I have discussed the requested feature's details with core devs.
  • I have made a Pull Request (or draft PR).
  • I have linked the PR to this issue.

How to convert GNSS coordinates from 'lat, lon' to the pitch coordinates?

Hi, I've just downloaded the SoccerTrack dataset and tried to explore the GNSS data.

However, the GNSS coordinates are in 'lat, lon', so I couldn't compare them with the coordinates obtained from the camera.

How can I convert GNSS coordinates from 'lat, lon' to pitch coordinates?

Thanks for releasing this valuable soccer dataset publicly!
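Not an official answer, but a common approach is a local equirectangular approximation around a fixed reference point such as one pitch corner. A minimal sketch (the reference point and function name are illustrative):

import numpy as np

R_EARTH = 6378137.0  # WGS-84 equatorial radius, meters

def latlon_to_local(lat, lon, lat0, lon0):
    # East/north offsets in meters from the reference point (lat0, lon0).
    # Accurate to roughly centimeters over pitch-sized distances.
    x = np.radians(np.asarray(lon) - lon0) * R_EARTH * np.cos(np.radians(lat0))
    y = np.radians(np.asarray(lat) - lat0) * R_EARTH
    return x, y

Aligning the resulting (x, y) axes with the pitch sides then takes a single 2D rotation, or a least-squares affine fit if the lat/lon of all four pitch corners is known.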

ModuleNotFoundError: No module named 'soccertrack.types.detection'; 'soccertrack.types' is not a package

Search before asking

  • I have searched the SoccerTrack issues and found no similar bug report.

SoccerTrack Component

Detection

Bug

When I import the package with the code below and run it, I get the following output:

from soccertrack import detection_model
from soccertrack.utils import get_git_root
# let's load the first frame of the video

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soccertrack/detection_model/__init__.py:1
----> 1 from soccertrack.detection_model.base import BaseDetectionModel
2 from soccertrack.logger import logger
4 from soccertrack.detection_model.yolov5 import YOLOv5, YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soccertrack/detection_model/base.py:10
8 from soccertrack.logger import logger
9 from soccertrack.types import Detection
---> 10 from soccertrack.types.detection import Detection
11 from soccertrack.types.detections import Detections
12 from soccertrack.utils import read_image

ModuleNotFoundError: No module named 'soccertrack.types.detection'; 'soccertrack.types' is not a package

How can I fix this?
Thank you in advance.

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

From bbdf to pitch coordinates

I want to congratulate you on the amazing project you have created. It definitely provides a lot of value for coaches and data analysts. Using only the lines of code on the readme page, I have managed to create a bbdf from a video of one of my team's games.

However, I want to dig deeper: calculate the distance covered by players and their speed, and implement a pitch control algorithm. To do this I need to go from the bbdf to pitch coordinates and visualize player positions on a 2D pitch.

I have read the user guide about pitch coordinates, but it uses already-created CSVs that I am not sure how to produce.

I would like to ask the community for the appropriate lines of code that turn the bbdf created by the readme code into pitch coordinates, and for a way to visualize them, with comments explaining what each line does.
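Not official SportsLabKit API, but the underlying math is a single homography. A generic OpenCV sketch, where the pixel corner positions and boxes are placeholders (in practice, take the boxes from the bbdf produced by the readme code):

import cv2
import numpy as np

# Pixel positions of the four pitch corners in one frame (placeholders;
# measure them in your own footage) and their pitch coordinates in meters.
src = np.array([[120, 80], [1800, 90], [1850, 1000], [60, 990]], np.float32)
dst = np.array([[0, 0], [105, 0], [105, 68], [0, 68]], np.float32)
H, _ = cv2.findHomography(src, dst)

# Example bounding boxes as (x1, y1, x2, y2).
boxes = np.array([[400, 300, 430, 380], [900, 500, 935, 590]], np.float32)

# Project each box's bottom-center (the player's feet) onto the pitch.
feet = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2, boxes[:, 3]], axis=1)
pitch_xy = cv2.perspectiveTransform(feet.reshape(-1, 1, 2), H).reshape(-1, 2)
print(pitch_xy)  # (x, y) in meters on a 105 x 68 pitch

Distance covered is then the sum of frame-to-frame Euclidean distances of pitch_xy per player, and speed is that per-frame distance multiplied by the frame rate.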

[Feature] Evaluation for common MOT metrics

Search before asking

  • I have searched the SoccerTrack issues and found no similar feature requests.

Description

Use case

Additional

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

If you are willing to submit a PR, please do the following.

  • I have discussed the requested feature's details with core devs.
  • I have made a Pull Request (or draft PR).
  • I have linked the PR to this issue.

AttributeError: 'ExponentialMovingAverage' object has no attribute 'gamma'

Search before asking

  • I have searched the SportsLabKit issues and found no similar bug report.

SportsLabKit Component

No response

Bug

I got this error when trying to run 06_tracking_the_players.ipynb (traceback attached as a screenshot).

Maybe this error is related to the version of torch or other libraries, but I couldn't fix it.

Environment

  • SportsLabKit: 0.3.1
  • torch: 2.1.2
  • macOS: 14.2.1 (23C71)
  • Python: 3.10.0

Minimal Reproducible Example

import numpy as np

import sportslabkit as slk
from sportslabkit.mot import TeamTracker

# NOTE: root, cam, and frames are defined in earlier notebook cells
slk.logger.set_log_level('INFO')
det_model = slk.detection_model.load(
    model_name='yolov8',
    model=root/'models/yolov8/model=yolov8x-imgsz=512.pt',
    conf=0.25,
    iou=0.6,
    imgsz=640,
    device='mps',
    classes=0,
    augment=True,
    max_det=35
)

image_model = slk.image_model.load(
    model_name='mobilenetv2_x1_0',
    image_size=(32,32),
    device='cpu'
)

motion_model = slk.motion_model.load(
    model_name='ExponentialMovingAverage',
)
# motion_model = slk.motion_model.load(
#     model_name='SingleTargetLSTM',
#     model='/Users/atom/Github/SoccerTrack/models/teamtrack/LSTM-F_Soccer_Tsukuba3-epoch=79-val_nll_loss=-2.74.ckpt',
# )

keypoint_json = root / 'notebooks/02_user_guide/assets/soccer_keypoints.json'
cam.source_keypoints, cam.target_keypoints = slk.utils.load_keypoints(keypoint_json)

# calibration model return a 3x3 homography matrix for each frame
calibration_model = slk.calibration_model.load(
    model_name='DummyCalibrationModel',
    homographies=cam.H,
    mode='constant'
)

first_matching_fn = slk.matching.MotionVisualMatchingFunction(
    motion_metric=slk.metrics.EuclideanCMM2D(use_pred_pt=True),
    motion_metric_gate=0.2,
    visual_metric=slk.metrics.CosineCMM(),
    visual_metric_gate=0.2,
    beta=0.9,
)

second_matching_fn = slk.matching.SimpleMatchingFunction(
    metric=slk.metrics.EuclideanCMM2D(use_pred_pt=True),
    gate=0.9,
)

# team_detection_callback = slk.callbacks.TeamDetectionCallback(classication_model=TeamClassifier())

class PrintingCallback(): # removed 'slk.callbacks.Callback' because there is no 'Callback' class in mot/callbacks
    def on_track_sequence_start(self, tracker):
        tracklets = tracker.alive_tracklets + tracker.dead_tracklets
        print(f"Tracking started with {len(tracklets)} tracklets")
    
    def on_track_sequence_end(self, tracker):
        tracklets = tracker.alive_tracklets + tracker.dead_tracklets
        print(f"Tracking ended with {len(tracklets)} tracklets")

callbacks = [PrintingCallback()]

tracker = TeamTracker(
    detection_model=det_model,
    image_model=image_model,
    motion_model=motion_model,
    calibration_model=calibration_model,
    first_matching_fn=first_matching_fn,
    second_matching_fn=second_matching_fn,
    detection_score_threshold=0.6,
    max_staleness=2,
    min_length=2,
    callbacks=callbacks,
)

tracker.track(frames)[0]

Additional

'SingleTargetLinear' is not available, so I used EMA instead of it.

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

About Top-View Coordinates.

Hi Atom,
I have downloaded the datasets and tried to visualize them using the visualize_frame function, but the top-view coordinates don't seem to match the video.
I have also observed that the drone sometimes rotates a little during a video, but it seems that dron_keypoints.json doesn't account for this.
I want to convert the top-view coordinates to local pitch coordinates and use them as ground truth.
I was wondering if I have missed some steps.

Sphinx extensions to consider for better documentation

Below is a list of Sphinx extensions that seem useful. Add to this list if there are more to consider.

  • sphinx.ext.mathjax - (Sphinx loads this by default) for math formulas
  • sphinxcontrib.bibtex - for bibliographic references
  • sphinxcontrib.rsvgconverter - for SVG->PDF conversion in LaTeX output
  • sphinx_copybutton - for adding "copy to clipboard" buttons to all text/code boxes
  • sphinx_gallery.load_style - to load CSS styles for thumbnail galleries

Error when importing podm library

Search before asking

Question

Hi, I cloned this repo and tried to follow your tutorial notebook (detect_and_track.ipynb) in the develop branch.

However, I faced an error (shown in the attached screenshot), so I installed the podm package, but it still occurs.

How should I fix this error?

Thanks.

Additional

No response

[Feature] Inference pipeline

Search before asking

  • I have searched the SoccerTrack issues and found no similar feature requests.

Description

Implement a pipeline that makes SoccerTrack usable without having to think hard about it.

Use case

Below is an image of the intended usage; it may change in the future.

import yaml
from soccertrack import MOTPipeline

cfg = yaml.load("""
cameras:
  - 
    keypoints: xxx  
  - 
    keypoints: xxx  
object_detector:
  MODEL: YOLOv5
  CONF_THRE: 0.25
MultiObjectTracker:
  MODEL: DeepSORT
....
""")
mot_pipeline = MOTPipeline(cfg)
mot_pipeline.run()

Or, from the command line:

soccertrack run --input_video sample.mp4 --config sample.yaml

Additional

Survey of existing tools

MM-Tracking

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!

If you are willing to submit a PR, please also confirm the following items.

Implementation plan

Checklist

  • I have discussed the requested feature's details with core devs.
  • I have made a draft Pull Request.
  • I have made a Pull Request.
  • I have linked the PR to this issue.

[Feature] Evaluation for common Object Detection metrics

Search before asking

  • I have searched the SoccerTrack issues and found no similar feature requests.

Description

Use case

Additional

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!

If you are willing to submit a PR, please also confirm the following items.

Implementation plan

  1. Create a camera dataframe.
  • The camera dataframe has the following structure:
    • level 1 - teamid
    • level 2 - playerid
    • level 3
      • px - pitch coordinate X
      • py - pitch coordinate Y
      • bbox_x1
      • bbox_x2
      • bbox_y1
      • bbox_y2
  2. Convert the camera dataframe to txt-format data.
  • The object detection evaluation script expects txt files containing bounding box information as input. The ground-truth txt files have the following structure:

    sports_ball 6 234 45 362
    person 1 156 103 336
    person 36 111 198 416
    person 91 42 338 500
    ...

  • The script that converts the ground-truth camera dataframe to txt files takes specific values from each row of the dataframe (bbox_x1, bbox_x2, bbox_y1, bbox_y2) and writes them out in txt format.

  • The inference-result txt files have the following structure; a confidence score is added:

    bottle 0.14981 80 1 295 500
    bus 0.12601 36 13 404 316
    horse 0.12526 430 117 500 307
    pottedplant 0.14585 212 78 292 118
    tvmonitor 0.070565 388 89 500 196

Open questions

  • Should the GT and inference-result camera dataframes have the same shape?

  3. When using Pascal VOC metrics, run the object detection evaluation script python pascalvoc.py (a rough sketch of the dataframe-to-txt conversion follows below).
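A rough sketch of the GT dataframe-to-txt conversion described above. The (teamid, playerid, attribute) column layout and names follow this issue and are assumptions, not a final schema:

def camera_df_frame_to_gt_txt(df, frame_id, out_path, class_name='person'):
    # One row per frame; columns are a 3-level (teamid, playerid, attribute)
    # MultiIndex, so each (teamid, playerid) pair contributes one txt line.
    row = df.loc[frame_id]
    with open(out_path, 'w') as f:
        for team_id, player_id in row.index.droplevel(2).unique():
            x1 = row[(team_id, player_id, 'bbox_x1')]
            y1 = row[(team_id, player_id, 'bbox_y1')]
            x2 = row[(team_id, player_id, 'bbox_x2')]
            y2 = row[(team_id, player_id, 'bbox_y2')]
            f.write(f'{class_name} {x1:.0f} {y1:.0f} {x2:.0f} {y2:.0f}\n')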

Checklist

  • I have discussed the requested feature's details with core devs.
  • I have made a draft Pull Request.
  • I have made a Pull Request.
  • I have linked the PR to this issue.

On language / 言語について

Although documentation will be mainly written in English, a large portion of communication may be in Japanese because most core developers are located in Japan. We recommend using a language that is easy for developers to use for issues, commit messages, pull requests, etc., as we want to prioritise development speed, especially in the early stages.

Both languages are acceptable so please do not hesitate to contribute!


ドキュメントは主に英語で書かれますが、コア開発者の多くが日本にいるため、コミュニケーションの大部分は日本語になる可能性があります。特に初期段階では開発スピードを優先したいので、イシュー、コミットメッセージ、プルリクエストなど、開発者が使いやすい言語の使用を推奨します。

どちらの言語でも構いませんので、ぜひご協力ください!

InvalidGitRepositoryError When Importing sportslabkit

Search before asking

  • I have searched the SportsLabKit issues and found no similar bug report.

SportsLabKit Component

Integrations

Bug

Description:

I'm encountering an InvalidGitRepositoryError when trying to import sportslabkit into my project. The issue occurs when I execute the following lines:

import sportslabkit as slk
from sportslabkit.logger import show_df

The traceback points to an issue in the get_git_root() function in the utils.py file.

Environment


  • Python version: 3.10.0
  • sportslabkit version: 0.3
  • Platform: Mac OS ARM / Google Colab

Minimal Reproducible Example

Traceback:

InvalidGitRepositoryError: /Users/jorisvillaseque/.pyenv/versions/3.10.0/lib/python3.10/site-packages/sportslabkit/utils/utils.py

Steps to Reproduce:

  1. Install sportslabkit via pip.
  2. Import sportslabkit in a Python script or this Google Colab notebook.

Additional

Expected Result:

The library should import successfully.

Actual Result:

An InvalidGitRepositoryError is thrown, preventing further execution of the code.

I tried reinstalling the library, installing GitPython, and even altering the sportslabkit source code, but the issue persists. Any help on how to resolve this would be greatly appreciated.
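For reference, a minimal sketch of a defensive workaround. It assumes get_git_root() resolves the repo root via GitPython, which raises when the package is pip-installed outside any git checkout; the fallback below is hypothetical, not the project's actual fix:

from pathlib import Path

import git  # GitPython

def safe_get_git_root() -> Path:
    try:
        repo = git.Repo(search_parent_directories=True)
        return Path(repo.working_dir)
    except git.InvalidGitRepositoryError:
        # Not inside a git checkout (e.g. pip-installed): fall back to cwd.
        return Path.cwd()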

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

Incorrect kwargs handling in visualize_frame

Search before asking

  • I have searched the SoccerTrack issues and found no similar bug report.

SoccerTrack Component

Other

Bug

When visualizing a codf, if the home_team marker settings are edited via the argument, the change is also reflected in the away_team markers.

Environment

No response

Minimal Reproducible Example

For example, if home_kwargs is set as follows, the away markers change as well:

...

codf.visualize_frame(1, 
                ball_key="BALL", 
                home_kwargs={"zorder": 10, "ms": 10, "markerfacecolor": "w"},
)

...


Additional

Simply modify the _away_kwargs assignment in the visualize_frame function in coordinatesdataframe.py: the last dict merged in should be away_kwargs, not home_kwargs.

before

        _away_kwargs = merge_dicts(
            _marker_kwargs,
            {"zorder": 10, "ms": 10, "markerfacecolor": "r"},
            marker_kwargs,
            home_kwargs,
        )

after

        _away_kwargs = merge_dicts(
            _marker_kwargs,
            {"zorder": 10, "ms": 10, "markerfacecolor": "r"},
            marker_kwargs,
            away_kwargs,
        )

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

[Feature] pitch area filtering

Motivation

On many occasions, an object detector will falsely detect an object outside of the region of interest (field, court, pitch, etc.).

In most cases, this can be prevented by manually removing objects detected outside the ROI.

Note: there will be edge cases, such as a stray ball entering the pitch or streakers, and players frequently leave the pitch temporarily for throw-ins and corner kicks. Looking at the whole trajectory of the object should cover most of these cases, but that is difficult to do frame-by-frame.

Pitch

Frame-wise outlier detection: remove any detections outside the ROI.
Maybe include a margin parameter (see the sketch below).
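A minimal sketch of such a filter, assuming shapely is available and detections are (x1, y1, x2, y2) boxes in the same coordinates as the ROI polygon; illustrative only, not SportsLabKit API:

from shapely.geometry import Point, Polygon

def filter_detections(boxes, roi_polygon, margin=0.0):
    # buffer() grows the ROI outward by `margin` on all sides.
    roi = Polygon(roi_polygon).buffer(margin)
    # Test each box's bottom-center, approximating the feet on the pitch.
    keep = [roi.contains(Point((x1 + x2) / 2.0, y2))
            for x1, y1, x2, y2 in boxes]
    return [b for b, k in zip(boxes, keep) if k]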

Alternatives


Additional context

The baselines in SoccerNet-Tracking do not perform any explicit filtering of detections outside the field. Instead, the models are expected to learn to exclude any objects that are not of interest (such as the audience):

"Fine-tuning our model to only consider the humans and the ball on the field leads to less false positives and false negatives in the detections and therefore improves both the HOTA and MOTA scores."
https://arxiv.org/pdf/2204.06918.pdf
