
rizwanmunawar / yolov7-segmentation

YOLOv7 Instance Segmentation using OpenCV and PyTorch

License: GNU General Public License v3.0


yolov7-segmentation's Introduction

yolov7-instance-segmentation

Coming Soon

  • Development of streamlit dashboard for Instance-Segmentation with Object Tracking


Steps to run Code

  • Clone the repository
git clone https://github.com/RizwanMunawar/yolov7-segmentation.git
  • Go to the cloned folder.
cd yolov7-segmentation
  • Create a virtual environment (recommended, so you don't disturb your system Python packages)
### For Linux Users
python3 -m venv yolov7seg
source yolov7seg/bin/activate

### For Windows Users
python -m venv yolov7seg
yolov7seg\Scripts\activate
  • Upgrade pip with the command below.
pip install --upgrade pip
  • Install the requirements with the command below.
pip install -r requirements.txt
  • Download the weights from the link and store them in the "yolov7-segmentation" directory.

  • Run the code with the command below.

# for segmentation with detection
python3 segment/predict.py --weights yolov7-seg.pt --source "videopath.mp4"

# for segmentation with detection + tracking
python3 segment/predict.py --weights yolov7-seg.pt --source "videopath.mp4" --trk

# to save the segmentation label files
python3 segment/predict.py --weights yolov7-seg.pt --source "videopath.mp4" --save-txt
  • The output file will be created in the working directory at yolov7-segmentation/runs/predict-seg/exp/original-video-name.mp4

RESULTS

(result images: Car Semantic Segmentation · Car Semantic Segmentation · Person Segmentation + Tracking)

Custom Data Labelling

  • I used Roboflow for data labelling. Segmentation labels are polygons, while object-detection labels are bounding boxes.

  • Go to the link and create a new workspace. Make sure to log in with your Roboflow account.

(screenshot)

  • Once you click on Create Workspace, you will see a popup, shown below, for uploading the dataset.

(screenshot)

  • Click on Upload Dataset; Roboflow will ask for a workspace name, as shown below. Fill in that form and then click on Create Private Project.
  • Note: Make sure to select the Instance Segmentation option in the screenshot below.
(screenshot)

  • You can upload your dataset now.

(screenshot)

  • Once the files are uploaded, you can click on Finish Uploading.

  • Roboflow will ask you to assign the images to someone; click on Assign Images.

  • After that, you will see the tab shown below.

(screenshot)

  • Click on any image in the Unannotated tab, and then you can start labelling.

  • Note: Press p and then draw polygon points for segmentation

(screenshot)

  • Once labelling is complete, export the data and follow the steps below to start training.

Custom Training

  • Move your custom labelled segmentation data into the "yolov7-segmentation/data" folder, following the structure shown below.

(screenshot)

  • Go to the data folder, create a file named custom.yaml, and paste the code below into it.
train: "path to train folder"
val: "path to validation folder"
# number of classes
nc: 1
# class names
names: ['car']
  • Download the weights from the link and move them to the yolov7-segmentation folder.
  • Go to the terminal and run the command below to start training.
python3 segment/train.py --data data/custom.yaml \
                          --batch 4 \
                          --weights "yolov7-seg.pt"
                          --cfg yolov7-seg.yaml \
                          --epochs 10 \
                          --name yolov7-seg \
                          --img 640 \
                          --hyp hyp.scratch-high.yaml

Custom Model Detection Command

python3 segment/predict.py --weights "runs/yolov7-seg/exp/weights/best.pt" --source "videopath.mp4"

RESULTS

(result images: Car Semantic Segmentation · Car Semantic Segmentation · Person Segmentation + Tracking)

References

My Medium Articles

yolov7-segmentation's People

Contributors

anhbantre, hullale, magedhelmy1, rizwanmunawar


yolov7-segmentation's Issues

.py files in different directory levels cannot import each other

Traceback (most recent call last):
File "segment/predict.py", line 24, in
from models.common import DetectMultiBackend
File "/mnt/yolov7-segmentation/models/common.py", line 28, in
from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr,
ModuleNotFoundError: No module named 'utils.general'
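
This usually happens when the script is launched from outside the repository root, so Python cannot resolve the repo's utils package. A minimal sketch of a workaround (assuming the clone layout from the README above):

# Either run from the repository root:
#   cd yolov7-segmentation && python3 segment/predict.py ...
# or add the repo root to the module search path at the top of segment/predict.py:
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[1]))  # repo root, one level above segment/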

which tools can show the labels files of segmentation?

i enable --save-txt module and get label txt of segmentation.
So, do you know which tool can show those txt?

labelme can not open txt. it only support json.
labelimg only support rectangle txt, not polygon txt.

another question
can you add function to calculate the area for segmentation?
for example, i detect crack on wall, and I would want to know the crack size.
thanks a lot
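
A minimal viewer-and-area sketch with OpenCV, assuming the txt follows the YOLO segmentation convention ("class x1 y1 x2 y2 ..." with coordinates normalized to [0, 1]); frame.jpg and frame.txt are hypothetical paths:

# draw each labelled polygon and report its pixel area
import cv2
import numpy as np

img = cv2.imread("frame.jpg")
h, w = img.shape[:2]
with open("frame.txt") as f:
    for line in f:
        vals = line.split()
        cls = int(vals[0])
        pts = np.array(vals[1:], dtype=np.float32).reshape(-1, 2)
        poly = (pts * [w, h]).astype(np.int32)          # denormalize to pixel coordinates
        cv2.polylines(img, [poly], isClosed=True, color=(0, 255, 0), thickness=2)
        # cv2.contourArea computes the shoelace-formula area of the polygon
        print(f"class {cls}: area {cv2.contourArea(poly):.0f} px^2")
cv2.imwrite("frame_labeled.jpg", img)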

segment/predict.py google colab error

Code:

!python segment/predict.py --weights "/content/yolov7-segmentation/runs/train-seg/yolov7-seg2/weights/best.pt" --source "/content/yolov7-segmentation/person-2/test/images/*"

Error:

Traceback (most recent call last):
File "segment/predict.py", line 12, in
from sort_count import *
File "/content/yolov7-segmentation/segment/sort_count.py", line 40, in
matplotlib.use('TkAgg')
File "/usr/local/lib/python3.7/dist-packages/matplotlib/cbook/deprecation.py", line 296, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/matplotlib/cbook/deprecation.py", line 358, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/matplotlib/init.py", line 1281, in use
plt.switch_backend(name)
File "/usr/local/lib/python3.7/dist-packages/matplotlib/pyplot.py", line 237, in switch_backend
newbackend, required_framework, current_framework))
ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'headless' is currently running

How can I fix this?
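
Colab runs headless, so the interactive TkAgg backend cannot start. A common workaround is switching sort_count.py to a non-interactive backend:

# in segment/sort_count.py, replace matplotlib.use('TkAgg') with:
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, works in headless environments like Colab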

TypeError: can't convert np.ndarray of type numpy.uint16.

I checked the issues and found that the same problem was reported, but my weights are yolov7-seg. I don't know how to handle the error. Here is my error report; can you help me?
(yolov7-seg) PS N:\develop\pyProject\yolov7-segmentation> python .\segment\train.py
segment\train: weights=yolov7-seg.pt, cfg=models\segment\yolov7-seg.yaml, data=data\coco.yaml, hyp=data\hyps\hyp.scratch-high.yaml, epochs=300, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=None, image_weights=False, device=0, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train-seg, name=exp, exist_ok=False,
quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, mask_ratio=4, no_overlap=False
YOLOv5 yolov7-segmentation-23-g5bfd902 Python-3.10.8 torch-1.11.0+cu115 CUDA:0 (NVIDIA GeForce RTX 2070 SUPER, 8192MiB)

hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.3, cls_pw=1.0, obj=0.7, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.9, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.1, copy_paste=0.1
TensorBoard: Start with 'tensorboard --logdir runs\train-seg', view at http://localhost:6006/

             from  n    params  module                                  arguments

0 -1 1 928 models.common.Conv [3, 32, 3, 1]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 36992 models.common.Conv [64, 64, 3, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 8320 models.common.Conv [128, 64, 1, 1]
5 -2 1 8320 models.common.Conv [128, 64, 1, 1]
6 -1 1 36992 models.common.Conv [64, 64, 3, 1]
7 -1 1 36992 models.common.Conv [64, 64, 3, 1]
8 -1 1 36992 models.common.Conv [64, 64, 3, 1]
9 -1 1 36992 models.common.Conv [64, 64, 3, 1]
10 [-1, -3, -5, -6] 1 0 models.common.Concat [1]
11 -1 1 66048 models.common.Conv [256, 256, 1, 1]
12 -1 1 0 models.common.MP []
13 -1 1 33024 models.common.Conv [256, 128, 1, 1]
14 -3 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 147712 models.common.Conv [128, 128, 3, 2]
16 [-1, -3] 1 0 models.common.Concat [1]
17 -1 1 33024 models.common.Conv [256, 128, 1, 1]
18 -2 1 33024 models.common.Conv [256, 128, 1, 1]
19 -1 1 147712 models.common.Conv [128, 128, 3, 1]
20 -1 1 147712 models.common.Conv [128, 128, 3, 1]
21 -1 1 147712 models.common.Conv [128, 128, 3, 1]
22 -1 1 147712 models.common.Conv [128, 128, 3, 1]
23 [-1, -3, -5, -6] 1 0 models.common.Concat [1]
24 -1 1 263168 models.common.Conv [512, 512, 1, 1]
25 -1 1 0 models.common.MP []
26 -1 1 131584 models.common.Conv [512, 256, 1, 1]
27 -3 1 131584 models.common.Conv [512, 256, 1, 1]
28 -1 1 590336 models.common.Conv [256, 256, 3, 2]
29 [-1, -3] 1 0 models.common.Concat [1]
30 -1 1 131584 models.common.Conv [512, 256, 1, 1]
31 -2 1 131584 models.common.Conv [512, 256, 1, 1]
32 -1 1 590336 models.common.Conv [256, 256, 3, 1]
33 -1 1 590336 models.common.Conv [256, 256, 3, 1]
34 -1 1 590336 models.common.Conv [256, 256, 3, 1]
35 -1 1 590336 models.common.Conv [256, 256, 3, 1]
36 [-1, -3, -5, -6] 1 0 models.common.Concat [1]
37 -1 1 1050624 models.common.Conv [1024, 1024, 1, 1]
38 -1 1 0 models.common.MP []
39 -1 1 525312 models.common.Conv [1024, 512, 1, 1]
40 -3 1 525312 models.common.Conv [1024, 512, 1, 1]
41 -1 1 2360320 models.common.Conv [512, 512, 3, 2]
42 [-1, -3] 1 0 models.common.Concat [1]
43 -1 1 262656 models.common.Conv [1024, 256, 1, 1]
44 -2 1 262656 models.common.Conv [1024, 256, 1, 1]
45 -1 1 590336 models.common.Conv [256, 256, 3, 1]
46 -1 1 590336 models.common.Conv [256, 256, 3, 1]
47 -1 1 590336 models.common.Conv [256, 256, 3, 1]
48 -1 1 590336 models.common.Conv [256, 256, 3, 1]
49 [-1, -3, -5, -6] 1 0 models.common.Concat [1]
50 -1 1 1050624 models.common.Conv [1024, 1024, 1, 1]
51 -1 1 7609344 models.common.SPPCSPC [1024, 512, 1]
52 -1 1 131584 models.common.Conv [512, 256, 1, 1]
53 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
54 37 1 262656 models.common.Conv [1024, 256, 1, 1]
55 [-1, -2] 1 0 models.common.Concat [1]
56 -1 1 131584 models.common.Conv [512, 256, 1, 1]
57 -2 1 131584 models.common.Conv [512, 256, 1, 1]
58 -1 1 295168 models.common.Conv [256, 128, 3, 1]
59 -1 1 147712 models.common.Conv [128, 128, 3, 1]
60 -1 1 147712 models.common.Conv [128, 128, 3, 1]
61 -1 1 147712 models.common.Conv [128, 128, 3, 1]
62[-1, -2, -3, -4, -5, -6] 1 0 models.common.Concat [1]
63 -1 1 262656 models.common.Conv [1024, 256, 1, 1]
64 -1 1 33024 models.common.Conv [256, 128, 1, 1]
65 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
66 24 1 65792 models.common.Conv [512, 128, 1, 1]
67 [-1, -2] 1 0 models.common.Concat [1]
68 -1 1 33024 models.common.Conv [256, 128, 1, 1]
69 -2 1 33024 models.common.Conv [256, 128, 1, 1]
70 -1 1 73856 models.common.Conv [128, 64, 3, 1]
71 -1 1 36992 models.common.Conv [64, 64, 3, 1]
72 -1 1 36992 models.common.Conv [64, 64, 3, 1]
73 -1 1 36992 models.common.Conv [64, 64, 3, 1]
74[-1, -2, -3, -4, -5, -6] 1 0 models.common.Concat [1]
75 -1 1 65792 models.common.Conv [512, 128, 1, 1]
76 -1 1 0 models.common.MP []
77 -1 1 16640 models.common.Conv [128, 128, 1, 1]
78 -3 1 16640 models.common.Conv [128, 128, 1, 1]
79 -1 1 147712 models.common.Conv [128, 128, 3, 2]
80 [-1, -3, 63] 1 0 models.common.Concat [1]
81 -1 1 131584 models.common.Conv [512, 256, 1, 1]
82 -2 1 131584 models.common.Conv [512, 256, 1, 1]
83 -1 1 295168 models.common.Conv [256, 128, 3, 1]
84 -1 1 147712 models.common.Conv [128, 128, 3, 1]
85 -1 1 147712 models.common.Conv [128, 128, 3, 1]
86 -1 1 147712 models.common.Conv [128, 128, 3, 1]
87[-1, -2, -3, -4, -5, -6] 1 0 models.common.Concat [1]
88 -1 1 262656 models.common.Conv [1024, 256, 1, 1]
89 -1 1 0 models.common.MP []
90 -1 1 66048 models.common.Conv [256, 256, 1, 1]
91 -3 1 66048 models.common.Conv [256, 256, 1, 1]
92 -1 1 590336 models.common.Conv [256, 256, 3, 2]
93 [-1, -3, 51] 1 0 models.common.Concat [1]
94 -1 1 525312 models.common.Conv [1024, 512, 1, 1]
95 -2 1 525312 models.common.Conv [1024, 512, 1, 1]
96 -1 1 1180160 models.common.Conv [512, 256, 3, 1]
97 -1 1 590336 models.common.Conv [256, 256, 3, 1]
98 -1 1 590336 models.common.Conv [256, 256, 3, 1]
99 -1 1 590336 models.common.Conv [256, 256, 3, 1]
100[-1, -2, -3, -4, -5, -6] 1 0 models.common.Concat [1]
101 -1 1 1049600 models.common.Conv [2048, 512, 1, 1]
102 75 1 295424 models.common.Conv [128, 256, 3, 1]
103 88 1 1180672 models.common.Conv [256, 512, 3, 1]
104 101 1 4720640 models.common.Conv [512, 1024, 3, 1]
105 [102, 103, 104] 1 1411586 models.yolo.ISegment [4, [[12, 16, 19, 36, 40, 28], [36, 75, 76, 55, 72, 146], [142, 110, 192, 243, 459, 401]], 32, 256, [256, 512, 1024]]
yolov7-seg summary: 417 layers, 37882274 parameters, 37882274 gradients, 142.7 GFLOPs

Transferred 555/565 items from yolov7-seg.pt
AMP: checks passed
optimizer: SGD(lr=0.01) with parameter groups 98 weight(decay=0.0), 95 weight(decay=0.0005), 95 bias
train: Scanning 'N:\develop\pyProject\yolov7-segmentation\coco\labels\train.cache' images and labels... 19 found, 0 missing, 0 empty, 0 corrupt: 100%|██████████| 19/19 [00:00<?, ?it/s]
val: Scanning 'N:\develop\pyProject\yolov7-segmentation\coco\labels\val.cache' images and labels... 7 found, 0 missing, 0 empty, 0 corrupt: 100%|██████████| 7/7 [00:00<?, ?it/s]

AutoAnchor: 4.58 anchors/target, 0.998 Best Possible Recall (BPR). Current anchors are a good fit to dataset
Plotting labels to runs\train-seg\exp4\labels.jpg...
Image sizes 640 train, 640 val
Using 4 dataloader workers
Logging results to runs\train-seg\exp4
Starting training for 300 epochs...

  Epoch    GPU_mem   box_loss   seg_loss   obj_loss   cls_loss  Instances       Size
  0/299      5.87G     0.1198      0.155     0.2091    0.02682        346        640:  80%|████████  | 4/5 [00:02<00:00,  1.69it/s]

Traceback (most recent call last):
File "N:\develop\pyProject\yolov7-segmentation\segment\train.py", line 681, in
main(opt)
File "N:\develop\pyProject\yolov7-segmentation\segment\train.py", line 577, in main
train(opt.hyp, opt, device, callbacks)
File "N:\develop\pyProject\yolov7-segmentation\segment\train.py", line 295, in train
for i, (imgs, targets, paths, _, masks) in pbar: # batch ------------------------------------------------------
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\tqdm\std.py", line 1195, in iter
for obj in iterable:
File "N:\develop\pyProject\yolov7-segmentation\utils\dataloaders.py", line 171, in iter
yield next(self.iterator)
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data\dataloader.py", line 530, in next
data = self._next_data()
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data\dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data\dataloader.py", line 1250, in _process_data
data.reraise()
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch_utils.py", line 457, in reraise
raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "K:\soft\anaconda\envs\yolov7-seg\lib\site-packages\torch\utils\data_utils\fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "N:\develop\pyProject\yolov7-segmentation\utils\segment\dataloaders.py", line 167, in getitem
masks = (torch.from_numpy(masks) if len(masks) else torch.zeros(1 if self.overlap else nl, img.shape[0] //
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
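
torch.from_numpy() does not support uint16, so the mask array built by the dataloader has to be cast to a supported dtype before conversion. A minimal sketch of the idea (the failing line lives in utils/segment/dataloaders.py; int32 is one safe choice):

import numpy as np
import torch

masks = np.zeros((2, 160, 160), dtype=np.uint16)  # stand-in for the loader's mask array
masks = torch.from_numpy(masks.astype(np.int32))  # cast to a torch-supported dtype first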

nan loss when training on Windows

Hello
How are you?
Thanks for contributing to this project.
I am going to train a model on Windows.
But I get nan loss values at the beginning.
I checked the same training project on Google Colab (Ubuntu) and it works well.
I think there are some issues in the current project on Windows.

onnx Export Problem.

Hello, I tried out the project and checked the results. I then tried to convert the model to ONNX, but there is a problem.

Is there any way to solve it?

IndexError: boolean index did not match indexed array along dimension 0; dimension is 1 but corresponding boolean dimension is 9

Transferred 555/565 items from yolov7-seg.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 98 weight(decay=0.0), 95 weight(decay=0.0005), 95 bias
train: Scanning '/home/divya/Documents/potholes/yolov7-segmentation/pot/train/l
train: New cache created: /home/divya/Documents/potholes/yolov7-segmentation/pot/train/labels.cache
val: Scanning '/home/divya/Documents/potholes/yolov7-segmentation/pot/test/labe
val: New cache created: /home/divya/Documents/potholes/yolov7-segmentation/pot/test/labels.cache

AutoAnchor: 4.29 anchors/target, 0.999 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train-seg/yolov7-seg/labels.jpg...
Image sizes 640 train, 640 val
Using 4 dataloader workers
Logging results to runs/train-seg/yolov7-seg
Starting training for 10 epochs...

  Epoch    GPU_mem   box_loss   seg_loss   obj_loss   cls_loss  Instances       Size

0%| | 0/1753 [00:00<?, ?it/s]
Traceback (most recent call last):
File "segment/train.py", line 681, in
main(opt)
File "segment/train.py", line 577, in main
train(opt.hyp, opt, device, callbacks)
File "segment/train.py", line 295, in train
for i, (imgs, targets, paths, _, masks) in pbar: # batch ------------------------------------------------------
File "/usr/local/lib/python3.8/dist-packages/tqdm/std.py", line 1195, in iter
for obj in iterable:
File "/home/divya/Documents/potholes/yolov7-segmentation/utils/dataloaders.py", line 171, in iter
yield next(self.iterator)
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in next
data = self._next_data()
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
return self._process_data(data)
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
data.reraise()
File "/home/divya/.local/lib/python3.8/site-packages/torch/_utils.py", line 457, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/divya/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/divya/Documents/potholes/yolov7-segmentation/utils/segment/dataloaders.py", line 111, in getitem
img, labels, segments = self.load_mosaic(index)
File "/home/divya/Documents/potholes/yolov7-segmentation/utils/segment/dataloaders.py", line 254, in load_mosaic
img4, labels4, segments4 = random_perspective(img4,
File "/home/divya/Documents/potholes/yolov7-segmentation/utils/segment/augmentations.py", line 102, in random_perspective
new_segments = np.array(new_segments)[i]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 1 but corresponding boolean dimension is 9`

This is the error I'm getting while running this - python3 segment/train.py --data data.yaml --batch 4 --weights 'yolov7-seg.pt' --cfg yolov7-seg.yaml --epochs 10 --name yolov7-seg --hyp hyp.scratch-high.yaml

requirements.txt installs CPU version of torch

Thank you for the code. I found this small issue.

When I used requirements.txt in my conda environment and tried to train by passing the GPU device id on the command line, the code threw the error "invalid device 0 requested". After a little debugging, I realized requirements.txt installs CPU versions of the libraries, so the environment couldn't use device id 0. After fixing this I was able to run the code on GPU.
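
A quick way to check which build is installed before training (the cu118 index URL below is one PyTorch documents; adjust it to your CUDA version):

import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build
print(torch.cuda.is_available())  # must be True before --device 0 can work
# if this prints False, reinstall a CUDA wheel, e.g.:
#   pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118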

how to get coordinates of segmented area

Hi, I want to calculate the inner area of detected segments. How do I get the coordinates of the polygonal segmented area? I don't want the xywh 2D box coordinates.
Thanks!
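
If you have the binary instance mask rather than the label txt, a minimal sketch for recovering the polygon outline and its enclosed area with OpenCV 4.x (mask.png is a hypothetical HxW 0/255 mask image):

import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
# findContours traces the outline of each white region in the mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    print("polygon (x, y) points:", c.reshape(-1, 2))
    print("enclosed area:", cv2.contourArea(c), "px^2")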

Config for a (S)mall yolo7 seg?

It seems there is only one model config - yolov7-seg.yaml

  1. yolov7-seg has a depth and width multiple of 1. Does that mean the corresponding YOLOv5 model architecture would be (L)arge?
  2. I need a config for a (S)mall version, to compare against YOLOv5. How should I proceed? Would it be enough to change depth_multiple and width_multiple to whatever the YOLOv5 (S)mall network config has?

where is cfg?

Hi, I want to know where the cfg is. Also, should the labels in data be images or JSON? Thank you!

About custom data

Hi @RizwanMunawar,
I want to try the segmentation task on my own dataset, but I encounter the following error.

anaconda3/envs/py39/lib/python3.9/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
yolov7-segmentation/utils/segment/dataloaders.py", line 167, in __getitem__
    masks = (torch.from_numpy(masks) if len(masks) else torch.zeros(1 if self.overlap else nl, img.shape[0] //
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

My dataset images are 256*256 and the annotations are in COCO format. For training I used the JSON2YOLO tool to convert them to YOLO format, and then I got this error. How do I fix it?

The same data works fine for the detection task, but it will be wrong for the segmentation task.
The data format is as follows

5 0.253906 0.982422 0.226562 0.978516 0.216797 0.957031 0.246094 0.951172 0.259766 0.96875 0.253906 0.982422

Could you help me with this problem, thanks

Do you have any plan for larger model?

Hi, Thank you for sharing nice work.

Do you have any plans for a larger segmentation model (like yolov7x) in yolov7?

I want a larger model for better accuracy.

Track id

May I know how to add a track id, so I can confirm that the same object keeps being tracked with the same id? Thanks

Dataset quantity.

Thanks for the great work.

I was wondering how much data you used to make this possible, and how long the training takes.

Thank you in advance.

mosaic

Hello! Thank you for your repository. Now I want to ask you a question.
Because training took a long time, I tried setting the mosaic augmentation coefficient to 0.5 in data/hyps/hyp.scratch-high.yaml, but I get an error that looks like this:

File "home/user/yolov7/seg/utils/segment/dataloaders.py", line 143, in __getitem__
   img, labels, segments = random_perspective(
TypeError: random_perspective() got an unexpected keyword argument 'return_seg'
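
The message suggests that a random_perspective without a return_seg keyword (the detection-only variant) ended up being called on the non-mosaic code path instead of the segmentation-aware one. I cannot verify the exact fix, but one thing to check in utils/segment/dataloaders.py is which module the function is imported from:

# sketch: the segmentation dataloader should use the segmentation-aware variant
from utils.segment.augmentations import random_perspective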

customize segmentation points

How can I customize this to produce segmentations of fixed size, i.e. each segmented frame should have the same number of points or coordinates?

No Module named 'Models'

I have trained the model and now I want to use it for inference. When I try to load the model, it gives me: No module named 'models'.

After some search, I found that to load the PyTorch model - I need to maintain the directory structure that I had during training. My use case is such that it does not allow me to keep the directory structure as it is. I have to save weights at some other place and then load them in the sub-module file.

Do you know if there is any way that I can load the saved model by specifying its path directly without the need to maintain the directory structure?
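
These checkpoints are full pickled modules, so torch.load must be able to import the models package at load time. A sketch of two hedged workarounds (the clone path is hypothetical, and the {'model': ...} layout is assumed from the YOLOv5-style checkpoints this repo uses):

import sys
import torch

sys.path.append("/path/to/yolov7-segmentation")  # make models/ and utils/ importable
ckpt = torch.load("best.pt", map_location="cpu")

# or re-save only the weights once, and load that file from anywhere later:
torch.save(ckpt["model"].state_dict(), "best_state_dict.pt")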

Preparing RLE for MOTS20 evaluation

May I know how should I prepare the RLE required for MOTS20 evaluation?

The sample RLE is: WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O102N5K00O1O1N2O110OO2O001O1NTga3
But, my RLE is: b'_d\c0h2on02N2K5N2N2N2N2N2N8H2N2Hg0_O2N2Bd0H2N2K8K2N2K>F1O1OU1jN2N2N8I1O1O4L1O1O4L0000001O1O1O000000O1O1O2N2N2N1TOl0O1O1I7O1O8E5N2Na0gNj0N2N5H5N2N2N2N2N5H5N2N8\O>N2N8^N\1N2N5K2N2N>VO>N2NQkUY1'

I tried to evaluate it, but it failed. The attached file is my output for MOTS20-09.

I'd appreciate it a lot if anyone could help. Thank you very much.
MOTS20-09.txt
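
The b'...' in the output above is a raw Python bytes object, while MOTS expects the pycocotools-compressed RLE as a plain ASCII string. A minimal encoding sketch (assuming pycocotools is installed; the mask below is a stand-in):

import numpy as np
from pycocotools import mask as maskUtils

binary_mask = np.zeros((480, 640), dtype=np.uint8)     # stand-in HxW instance mask
rle = maskUtils.encode(np.asfortranarray(binary_mask))  # encode needs Fortran-ordered uint8
print(rle["size"])                    # [height, width]
print(rle["counts"].decode("ascii"))  # write this string, not the bytes repr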

Validation Error

Hi, Thank you for sharing nice work.
I want to check performance on the val and test datasets.
I encountered an error while running val.py on my trained model:

python3 val.py --task "test" --batch-size 4 --weights "runs/train-seg/mymodel/weights/best.pt" --imgsz 1504 --data data/mydata.yaml

(screenshot)

So I traced the error back through the script.
(screenshot)

As you can see, the data type of out (presumably the model's prediction output) is a tuple
(I think the function non_max_suppression in utils/general.py expects tensor-type args).

The training script had no problem; it ran well.
But the val.py script has a problem.

Could you please tell me if you know about this issue?

Enabling --save-txt module

Hello, I have enabled this option and get the output txt, but unfortunately I could not understand its contents. I understand that each line stands for one detected object, but could you provide additional information? I see numbers whose meaning I cannot work out. Thank you in advance for your help.

Doesn't work well for multi stream

When --source streams.txt is used to run multiple streams, the tracks are drawn on every stream rather than only on their respective ones. Will there be an update so that each track only shows on its own stream?

TensorRT export trials

Hello,

I would like to export my custom trained model with this YoloV7-segmentation to TensorRT.
I tried using the basic script for exporting classic yolov7 to TensorRT (torch to ONNX, then ONNX to TRT); it works but seems to lose the segmentation part.
I get 5 outputs, including one of size 819200 but I do not know if this output can be used to get segmentation or if all the parts related to segmentation are lost during export.

Respectfully,
Jordan

save txt result is not the same as roboflow result

(screenshot)

I enabled save-txt; the polygon result is on the right side of the photo.

The result downloaded from the Roboflow yolov7 pytorch.zip is on the left side of the photo.
Why are the two results different?
Can I make save-txt produce the same txt as Roboflow?
Thanks.

AttributeError: 'list' object has no attribute 'shape'

Hi!
I encountered this error while training on my own dataset (exported from Roboflow). Is it because I have only one segmentation class?
It will be appreciated if you could give me some suggestions!

Traceback (most recent call last):
File "train.py", line 613, in
main(opt)
File "train.py", line 509, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 291, in train
loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
File "/home/yolov7-segmentation-main/utils/loss.py", line 127, in call
tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
File "/home/yolov7-segmentation-main/utils/loss.py", line 200, in build_targets
anchors, shape = self.anchors[i], p[i].shape
AttributeError: 'list' object has no attribute 'shape'

how to get the coordinates of segmented area?

Hello.

I tried enabling "save_txt" option and it would give me this:

result.txt ( Line 170 in segment/predict.py )
0 0.406728 0.598982 0.237003 0.593891
0 0.61659 0.608032 0.303517 0.765837
0 0.519495 0.388575 0.204128 0.553167
0 0.21789 0.597851 0.186544 0.528281

I think these coordinates are the xywh of the 2D box.

However, I want to get the coordinates of segmented area instead of 2d box coordinates.

It will be really helpful for me if anyone can help me with this!

Thank you!

training labels format

What annotation format should we use? I tried JSON2YOLO but I am facing many errors; I also tried the YOLO polygon text format, but no labels were detected.

--augment argument does not work in predict.py

Hello! Thank you for your repository. I have a question for you. I trained yolov7 segmentation on my custom dataset and I want to run inference on a video.
This command works fine:
python3 segment/predict.py --weights "/home/user/Disk/Priroda/git_reps/yolov7-segmentation/runs/train-seg/yolov7-seg18/weights/best.pt" --source "/home/user/Disk/Whales/From_Sasha_3/20_01/crop1_DJI_0481.mp4" --imgsz 640

When I want to slightly increase inference quality, I specify the --augment parameter, but it does not work.

The error looks like this:
(screenshot)

how to get the area of segmented object?

Hi, thank you for your repository. I'm a newbie learning YOLOv7. I have 2 questions:

  1. Can you explain a line in the save.txt labels from the result?
    (screenshot)
  2. How do I get the area of segmented predicted objects?
    (screenshot)

Thanks for your help!

ModuleNotFoundError: No module named 'utils.dataloaders'

Traceback (most recent call last):
File "C:\Users\fazyl\Desktop\yolov7-segmentation\segment\predict.py", line 23, in <module>
from models.common import DetectMultiBackend
File "C:\Users\fazyl\Desktop\yolov7-segmentation\models\common.py", line 24, in <module>
from utils.dataloaders import exif_transpose, letterbox
ModuleNotFoundError: No module named 'utils.dataloaders'

This error happens when I try to train and predict in my conda environment (without creating a new env as you recommended). Can you please help with that?

my custom dataset does not work

I made a custom dataset from dashcam driving video on the road, to find road cracks and potholes, but it does not work.
Can you help me? I have no one else to ask. Thanks.

inference

How do I run custom inference using torch.hub.load for instance segmentation?
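
I cannot confirm a torch.hub entry point for this repository; a hedged alternative is to load the checkpoint the way segment/predict.py itself does, with the repo root on sys.path (the clone path below is hypothetical):

import sys
import torch

sys.path.append("/path/to/yolov7-segmentation")  # make models/ importable
from models.common import DetectMultiBackend

# DetectMultiBackend is the loader used by segment/predict.py
model = DetectMultiBackend("best.pt", device=torch.device("cpu"))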

TypeError: can't convert np.ndarray of type numpy.uint16.

I was training the algorithm, and got this error

    Epoch    GPU_mem   box_loss   seg_loss   obj_loss   cls_loss  Instances       Size
    125/299      14.5G    0.03991    0.03042     0.0524          0        857        640: 100%|██████████| 10/10 [00:03<00:00,  2.56it/s]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)     Mask(P          R      mAP50  mAP50-95): 100%|████████
                   all         20        356      0.611      0.604      0.545      0.239       0.58      0.598      0.524      0.204

      Epoch    GPU_mem   box_loss   seg_loss   obj_loss   cls_loss  Instances       Size
    126/299      14.5G    0.04134    0.03091     0.0547          0        596        640:  80%|████████  | 8/10 [00:03<00:00,  2.57it/s]
Traceback (most recent call last):
  File "C:\Users\mag\projects\yolov7Mask\segment\train.py", line 681, in <module>
    main(opt)
  File "C:\Users\mag\projects\yolov7Mask\segment\train.py", line 577, in main
    train(opt.hyp, opt, device, callbacks)
  File "C:\Users\mag\projects\yolov7Mask\segment\train.py", line 295, in train
    for i, (imgs, targets, paths, _, masks) in pbar:  # batch ------------------------------------------------------
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\tqdm\std.py", line 1195, in __iter__
    for obj in iterable:
  File "C:\Users\mag\projects\yolov7Mask\utils\dataloaders.py", line 171, in __iter__
    yield next(self.iterator)
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\dataloader.py", line 1356, in _next_data
    return self._process_data(data)
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\dataloader.py", line 1402, in _process_data
    data.reraise()
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\_utils.py", line 461, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 4.
Original Traceback (most recent call last):
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\mag\projects\yolov7Mask\.env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\mag\projects\yolov7Mask\utils\segment\dataloaders.py", line 167, in __getitem__
    masks = (torch.from_numpy(masks) if len(masks) else torch.zeros(1 if self.overlap else nl, img.shape[0] //
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

I trained it using the following:

python segment/train.py --data data-4/data.yaml --weights yolov7-seg.pt --hyp hyp.scratch-high.yaml --cfg yolov7-seg.yaml

Have you faced something similar?

Training on unlabelled images

Hi Rizwan, first of all thank you so much for your source code; it is very useful. May I know whether it is possible to train the instance segmentation model on unlabelled (background) images, intended to reduce false positives? If so, how should it be done? I have tried training both with an empty annotation .txt file and with no .txt file at all, and neither works. Thank you in advance.

Tutorial missing weights and repo missing yolov7-seg.yaml

The weights are missing in the README.

python3 segment/train.py --data data/custom.yaml --batch 4 --weights yolov7-seg.pt --cfg yolov7-seg.yaml --epochs 10 --name yolov7-seg --img 640 --hyp hyp.scratch-high.yaml

instead of

python3 segment/train.py --data data/custom.yaml --batch 4 --weights '' --cfg yolov7-seg.yaml --epochs 10 --name yolov7-seg --img 640 --hyp hyp.scratch-high.yaml

Another thing, I think the cfg yolov7-seg.yaml in the repo
