
mseg-semantic's People

Contributors

johnwlambert, mseg-dataset


mseg-semantic's Issues

What information do the gray pictures contain?

Your dataset is amazing. You have done a great job! Where can I find information about the architecture of your neural network? And my main question: are the gray pictures confidence maps, or the same segmentation but without labels?
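
A minimal sketch of how one might inspect such a gray output, assuming it stores one universal-taxonomy class ID per pixel rather than confidence values (the filename below is hypothetical):

    import numpy as np
    from PIL import Image

    # Load the gray prediction; if it is a label map, each pixel holds a class ID.
    label_map = np.array(Image.open("000000_gray.png"))  # hypothetical output path
    print(label_map.dtype, label_map.shape)
    print(np.unique(label_map))  # the distinct class IDs present in the prediction

If the distinct values are small integers (0, 1, 2, ...) rather than a smooth range of intensities, the image is almost certainly an unlabeled segmentation map, not a confidence map.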

error running demo

Hi! Thank you for your amazing work!
I ran into a problem when trying to run the demo on my own PC, following the instructions in the README.
Here is the error message.

[2020-10-31 19:49:01,033 INFO universal_demo.py line 63 10457] => creating model ...
[2020-10-31 19:49:03,787 INFO inference_task.py line 308 10457] => loading checkpoint 'mseg_semantic/model/mseg-1m.pth'
[2020-10-31 19:49:04,255 INFO inference_task.py line 314 10457] => loaded checkpoint 'mseg_semantic/model/mseg-1m.pth'
[2020-10-31 19:49:04,259 INFO inference_task.py line 327 10457] >>>>>>>>>>>>>> Start inference task >>>>>>>>>>>>>
[2020-10-31 19:49:04,262 INFO inference_task.py line 365 10457] Write image prediction to 000000_overlaid_classes.jpg
/home/kkycj/.local/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
/home/kkycj/.local/lib/python3.6/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
Traceback (most recent call last):
File "mseg_semantic/tool/universal_demo.py", line 109, in
run_universal_demo(args, use_gpu)
File "mseg_semantic/tool/universal_demo.py", line 76, in run_universal_demo
itask.execute()
File "/home/kkycj/workspace/mseg-semantic/mseg_semantic/tool/inference_task.py", line 343, in execute
self.render_single_img_pred()
File "/home/kkycj/workspace/mseg-semantic/mseg_semantic/tool/inference_task.py", line 379, in render_single_img_pred
id_to_class_name_map=self.id_to_class_name_map
File "/home/kkycj/workspace/mseg-api/mseg/utils/mask_utils_detectron2.py", line 468, in overlay_instances
polygons, _ = mask_obj.mask_to_polygons(segment)
File "/home/kkycj/workspace/mseg-api/mseg/utils/mask_utils_detectron2.py", line 121, in mask_to_polygons
res, hierarchy = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
ValueError: too many values to unpack (expected 2)

Could you please help me find out where the problem is?
Thank you so much.
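
This looks like the well-known OpenCV 3.x vs 4.x API difference: cv2.findContours returns three values in OpenCV 3.x but only two in 4.x. A hedged sketch of a version-agnostic call, taking only the last two return values:

    import cv2
    import numpy as np

    # Dummy binary mask just to make the example self-contained.
    mask = np.zeros((100, 100), dtype=np.uint8)
    mask[20:80, 20:80] = 1

    # Works under both OpenCV 3.x (3 return values) and 4.x (2 return values).
    ret = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = ret[-2:]
    print(len(contours))

Checking the installed version (cv2.__version__) against what mseg-api expects is probably the quickest way to confirm.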

Train code

Does this repository provide training code? I only see the test code.

No ImageNet normalization in universal_demo.py

Hi John Lambert,

Edit: never mind, it just happens later than I expected. Ignore this post.

It appears that ImageNet normalization is never applied when running the network via the universal_demo.py script. Is this intentional?

This happens for all cases: single image (render_single_img_pred in inference_task.py), video (execute_on_video in inference_task.py), and even on a folder of images using create_test_loader. A comment in create_test_loader suggests normalization should happen on the fly, but it appears this was never added.

Even without normalization the results look good though.

Thanks for the great repo!
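
For reference, a minimal sketch of the standard ImageNet normalization an HRNet backbone is typically trained with (statistics for images scaled to [0, 1]; the repo may apply the equivalent in 0-255 range inside its transform/inference code, which is why it is easy to miss when reading universal_demo.py):

    import torch

    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

    img = torch.rand(3, 480, 640)        # RGB image with values in [0, 1]
    img_normalized = (img - mean) / std  # what the network actually sees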

Things and Stuff

Could you please give the number of stuff and thing classes in the MSeg dataset?
This information does not appear in your paper.
Example: COCO datasets: 91 stuff and 80 thing classes.

Thank you

got different output from sample when using pretrained model

Hi!
I am trying to use the pretrained models to process images from KITTI Odometry and changed nothing in the code, but I got some invalid segmentations. I then tested on sample image 4 from here. The output is as follows:
[screenshot: dirtroad10_overlaid_classes]

The config is:
python3 -u mseg_semantic/tool/universal_demo.py --config=mseg_semantic/config/test/default_config_360_ms.yaml model_name mseg-3m model_path mseg-3m.pth input_file dirtroad10.jpg

Could you please tell me where the problem is?
Thanks!

class

There is a small error with the class count (see screenshot).
I fixed it by changing one line of code, replacing args.tc.classes with args.tc.num_uclasses, as shown in the screenshot.

Could not find a version that satisfies the requirement pandas>=1.2.0 (from mseg-semantic==1.0.0)

Hello, thanks for your work. I was trying to reproduce it. However, when I ran pip3 install -e ~/mseg-semantic, I got the following error:

ERROR: Could not find a version that satisfies the requirement pandas>=1.2.0 (from mseg-semantic==1.0.0) (from versions: 0.1, 0.2b0, 0.2b1, 0.2, 0.3.0b0, 0.3.0b2, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.6.0, 0.6.1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0rc1, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.15.0, 0.15.1, 0.15.2, 0.16.0, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.18.0, 0.18.1, 0.19.0, 0.19.1, 0.19.2, 0.20.0, 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.21.1, 0.22.0, 0.23.0, 0.23.1, 0.23.2, 0.23.3, 0.23.4, 0.24.0, 0.24.1, 0.24.2, 0.25.0, 0.25.1, 0.25.2, 0.25.3, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5)
ERROR: No matching distribution found for pandas>=1.2.0 (from mseg-semantic==1.0.0)

This issue seems related to gboeing/osmnx#636.
I am using Python 3.6.9. My question is: should I change the requirement to pandas>=1.1.0 in the requirements.txt file, and will everything still work?
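
One hedged option, assuming the rest of the code is actually compatible with pandas 1.1.x on Python 3.6 (pandas 1.2.0 dropped support for Python below 3.7.1), is to use environment markers in requirements.txt so newer interpreters still get the intended version:

    pandas>=1.1.0,<1.2.0; python_version < "3.7"
    pandas>=1.2.0; python_version >= "3.7"

Whether mseg-semantic relies on any pandas 1.2-only behaviour would still need to be checked.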

Testing time takes much longer than reported in the repo!!

It's taking 35-40 s to process the segmentation of a single frame.
My test setup:
i. Ubuntu 18.04 LTS, Core i7, 24 GB RAM
ii. Graphics: Nvidia 1070M (laptop version of the 1070Ti)
iii. CUDA 10.2
iv. PyTorch version: 1.6.0 + cu101
v. CUDA_HOME = /usr/local/cuda-10.2
This is the output from my terminal:

arghya@arghya-Erazer-X7849-MD60379:~$ python3 -u ~/mseg-semantic/mseg_semantic/tool/universal_demo.py --config=/home/arghya/mseg-semantic/mseg_semantic/config/test/default_config_720_ms.yaml model_name mseg-3m-720p model_path ~/Downloads/mseg-3m-720p.pth input_file ~/Downloads/Urban_3_fps.mp4
Namespace(config='/home/arghya/mseg-semantic/mseg_semantic/config/test/default_config_720_ms.yaml', file_save='default', opts=['model_name', 'mseg-3m-720p', 'model_path', '/home/arghya/Downloads/mseg-3m-720p.pth', 'input_file', '/home/arghya/Downloads/SubT_Urban_3_fps.mp4'])
arch: hrnet
base_size: 720
batch_size_val: 1
dataset: Urban_3_fps
has_prediction: False
ignore_label: 255
img_name_unique: False
index_start: 0
index_step: 0
input_file: /home/arghya/Downloads/Urban_3_fps.mp4
layers: 50
model_name: mseg-3m-720p
model_path: /home/arghya/Downloads/mseg-3m-720p.pth
network_name: None
save_folder: default
scales: [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
small: True
split: val
test_gpu: [0]
test_h: 713
test_w: 713
version: 4.0
vis_freq: 20
workers: 16
zoom_factor: 8
[2021-07-29 02:50:46,742 INFO universal_demo.py line 59 11926] arch: hrnet
base_size: 720
batch_size_val: 1
dataset: Urban_3_fps
has_prediction: False
ignore_label: 255
img_name_unique: True
index_start: 0
index_step: 0
input_file: /home/arghya/Downloads/Urban_3_fps.mp4
layers: 50
model_name: mseg-3m-720p
model_path: /home/arghya/Downloads/mseg-3m-720p.pth
network_name: None
print_freq: 10
save_folder: default
scales: [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
small: True
split: test
test_gpu: [0]
test_h: 713
test_w: 713
u_classes: ['backpack', 'umbrella', 'bag', 'tie', 'suitcase', 'case', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'animal_other', 'microwave', 'radiator', 'oven', 'toaster', 'storage_tank', 'conveyor_belt', 'sink', 'refrigerator', 'washer_dryer', 'fan', 'dishwasher', 'toilet', 'bathtub', 'shower', 'tunnel', 'bridge', 'pier_wharf', 'tent', 'building', 'ceiling', 'laptop', 'keyboard', 'mouse', 'remote', 'cell phone', 'television', 'floor', 'stage', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'fruit_other', 'food_other', 'chair_other', 'armchair', 'swivel_chair', 'stool', 'seat', 'couch', 'trash_can', 'potted_plant', 'nightstand', 'bed', 'table', 'pool_table', 'barrel', 'desk', 'ottoman', 'wardrobe', 'crib', 'basket', 'chest_of_drawers', 'bookshelf', 'counter_other', 'bathroom_counter', 'kitchen_island', 'door', 'light_other', 'lamp', 'sconce', 'chandelier', 'mirror', 'whiteboard', 'shelf', 'stairs', 'escalator', 'cabinet', 'fireplace', 'stove', 'arcade_machine', 'gravel', 'platform', 'playingfield', 'railroad', 'road', 'snow', 'sidewalk_pavement', 'runway', 'terrain', 'book', 'box', 'clock', 'vase', 'scissors', 'plaything_other', 'teddy_bear', 'hair_dryer', 'toothbrush', 'painting', 'poster', 'bulletin_board', 'bottle', 'cup', 'wine_glass', 'knife', 'fork', 'spoon', 'bowl', 'tray', 'range_hood', 'plate', 'person', 'rider_other', 'bicyclist', 'motorcyclist', 'paper', 'streetlight', 'road_barrier', 'mailbox', 'cctv_camera', 'junction_box', 'traffic_sign', 'traffic_light', 'fire_hydrant', 'parking_meter', 'bench', 'bike_rack', 'billboard', 'sky', 'pole', 'fence', 'railing_banister', 'guard_rail', 'mountain_hill', 'rock', 'frisbee', 'skis', 'snowboard', 'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard', 'surfboard', 'tennis_racket', 'net', 'base', 'sculpture', 'column', 'fountain', 'awning', 'apparel', 'banner', 'flag', 'blanket', 'curtain_other', 'shower_curtain', 'pillow', 'towel', 'rug_floormat', 'vegetation', 'bicycle', 'car', 'autorickshaw', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'trailer', 'boat_ship', 'slow_wheeled_object', 'river_lake', 'sea', 'water_other', 'swimming_pool', 'waterfall', 'wall', 'window', 'window_blind']
version: 4.0
vis_freq: 20
workers: 16
zoom_factor: 8
[2021-07-29 02:50:46,743 INFO universal_demo.py line 60 11926] => creating model ...
[2021-07-29 02:50:49,912 INFO inference_task.py line 307 11926] => loading checkpoint '/home/arghya/Downloads/mseg-3m-720p.pth'
[2021-07-29 02:50:50,433 INFO inference_task.py line 313 11926] => loaded checkpoint '/home/arghya/Downloads/mseg-3m-720p.pth'
[2021-07-29 02:50:50,437 INFO inference_task.py line 326 11926] >>>>>>>>>>>>>> Start inference task >>>>>>>>>>>>>
[2021-07-29 02:50:50,440 INFO inference_task.py line 437 11926] Write video to /home/arghya/mseg-semantic/temp_files/SubT_Urban_3_fps_mseg-3m-720p_universal_scales_ms_base_sz_720.mp4
Video fps: 3.00 @ 720x1280 resolution.
[2021-07-29 02:50:50,451 INFO inference_task.py line 442 11926] On image 0/1312
/home/arghya/.local/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/home/arghya/.local/lib/python3.6/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
[2021-07-29 02:51:15,910 INFO inference_task.py line 442 11926] On image 1/1312
/home/arghya/.local/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/home/arghya/.local/lib/python3.6/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
[2021-07-29 02:51:39,985 INFO inference_task.py line 442 11926] On image 2/1312
[... similar "On image N/1312" log lines omitted: one frame roughly every 23-40 seconds, continuing through ...]
[2021-07-29 04:15:24,121 INFO inference_task.py line 442 11926] On image 143/1312

I think I am missing something. According to this repo, inference should run at around 16 fps on a Quadro P5000, and I would expect something similar on an Nvidia GTX 1070, not 1/40 fps.

Can anybody help?
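
Two things worth ruling out before anything else: that inference is really running on the GPU (CPU fallback easily explains tens of seconds per frame), and that multi-scale evaluation is simply expensive (the default_config_720_ms.yaml run above evaluates six scales, 0.5 to 1.75, per frame). A minimal check, not specific to this repo:

    import torch

    # If this prints False, or the model ends up on the CPU, the ~36 s/frame timing
    # is dominated by CPU inference rather than by the model itself.
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        print("PyTorch CUDA build:", torch.version.cuda)

Note also that the log shows a PyTorch build for CUDA 10.1 (1.6.0+cu101) while CUDA_HOME points at 10.2; that usually still works, but it is worth confirming the GPU is actually being used.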

Problem with creating model on demo notebook

Hi @johnwlambert
How are you?
I followed the demo notebook you put here and encountered a bug while running it:

[2020-08-23 10:43:55,760 INFO universal_demo.py line 60 724] => creating model ...
[2020-08-23 10:44:01,292 INFO inference_task.py line 277 724] => loading checkpoint '/content/mseg-3m.pth'
Traceback (most recent call last):
  File "mseg-semantic/mseg_semantic/tool/universal_demo.py", line 105, in <module>
    test_runner = UniversalDemoRunner(args, use_gpu)
  File "mseg-semantic/mseg_semantic/tool/universal_demo.py", line 70, in __init__
    scales = args.scales
  File "/content/mseg-semantic/mseg_semantic/tool/inference_task.py", line 219, in __init__
    self.model = self.load_model(args)
  File "/content/mseg-semantic/mseg_semantic/tool/inference_task.py", line 279, in load_model
    checkpoint = torch.load(args.model_path)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 755, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

It occurred on "Try out our model on an indoor scene (dining room):" cell.

Can you please look into it?
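
The load key '<' strongly suggests the downloaded mseg-3m.pth is not a checkpoint at all but an HTML page (for example a Google Drive virus-scan or redirect page saved under the .pth name). A quick hedged check:

    # Peek at the first bytes of the file; a genuine torch checkpoint starts with a zip
    # or pickle header, whereas a failed download typically starts with b'<!DOCTYPE html>' or b'<html'.
    with open("/content/mseg-3m.pth", "rb") as f:
        print(f.read(64))

If it is HTML, re-downloading the weights with a method that follows the confirmation step (or from the release page directly) should fix the UnpicklingError.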

The 'sklearn' PyPI package is deprecated, use 'scikit-learn'

Dear authors,

First of all, thank you very much for your great work! I am happy to use your models in a project of mine. A user of the project has reported an easily fixable problem regarding your requirements.txt here: #BonifazStuhr/feamgan#1 (comment).

It says that "The 'sklearn' PyPI package is deprecated, use 'scikit-learn'". You can therefore simply replace sklearn with scikit-learn in requierements.txt.

I recreated the issue as well. Down below you can see the more detailed error message when installing mseg-semantic with pip install.

3.539 Collecting sklearn
3.557 Downloading sklearn-0.0.post11.tar.gz (3.6 kB)
3.724 ERROR: Command errored out with exit status 1:
3.724 command: /opt/conda/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-bfuobfzn/sklearn/setup.py'"'"'; __file__='"'"'/tmp/pip-install-bfuobfzn/sklearn/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-bfuobfzn/sklearn/pip-egg-info
3.724 cwd: /tmp/pip-install-bfuobfzn/sklearn/
3.724 Complete output (18 lines):
3.724 The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
3.724 rather than 'sklearn' for pip commands.
3.724
3.724 Here is how to fix this error in the main use cases:
3.724 - use 'pip install scikit-learn' rather than 'pip install sklearn'
3.724 - replace 'sklearn' by 'scikit-learn' in your pip requirements files
3.724 (requirements.txt, setup.py, setup.cfg, Pipfile, etc ...)
3.724 - if the 'sklearn' package is used by one of your dependencies,
3.724 it would be great if you take some time to track which package uses
3.724 'sklearn' instead of 'scikit-learn' and report it to their issue tracker
3.724 - as a last resort, set the environment variable
3.724 SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error
3.724
3.724 More information is available at
3.724 https://github.com/scikit-learn/sklearn-pypi-package
3.724
3.724 If the previous advice does not cover your use case, feel free to report it at
3.724 https://github.com/scikit-learn/sklearn-pypi-package/issues/new
3.724 ----------------------------------------
3.758 ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Invitation to make a PR on MMSegmentation.

Hey there!

I am a member of OpenMMLab. This dataset and its related code/method are very valuable for segmentation, and we hope to introduce the method to more people and encourage use of the dataset.

Would you like to make a new PR on MMSegmentation with us? We could work together to support this model and dataset effectively!

Best,

Is pretrained weight used?

Hi, in train.md you mention that we need to download the ImageNet-pretrained HRNet backbone model from the original authors' OneDrive. After downloading the file, I assumed the path to this weight should be specified as "weight" in the config YAML file to serve as the initial weights. However, the model keys don't seem to match, so I'm confused about where the pretrained HRNet model should be used.
Looking forward to your response.
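
A hedged sketch of how ImageNet-pretrained backbone weights are often loaded in this situation: strip any prefix mismatch and load non-strictly into the backbone, letting the segmentation head stay randomly initialised. The checkpoint filename, the "module." prefix, and the stand-in model below are assumptions, not taken from this repo:

    import torch
    import torch.nn as nn

    ckpt = torch.load("hrnetv2_w48_imagenet_pretrained.pth", map_location="cpu")  # assumed filename
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    # Drop a possible DataParallel "module." prefix so keys line up with the bare model.
    state = {k.replace("module.", "", 1): v for k, v in state.items()}

    model = nn.Module()  # stand-in; in practice, the HRNet instance built from the repo's config
    result = model.load_state_dict(state, strict=False)
    print("missing:", len(result.missing_keys), "unexpected:", len(result.unexpected_keys))

Missing keys for the classification/segmentation head would be expected; large numbers of missing backbone keys would point to a genuine key mismatch.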

Format of weights

Hi, can you please tell me whether the weight files include the model structure, or are they just the weights? If the latter, which model do these weights correspond to, since there are several model implementations under the models directory?
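
For what it is worth, the released .pth files can be inspected directly to answer this; a hedged sketch (the filename is whatever checkpoint was downloaded):

    import torch

    ckpt = torch.load("mseg-3m.pth", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):
        # Typical layouts: either the state_dict itself, or a dict wrapping it
        # under a key such as 'state_dict', possibly alongside metadata like 'epoch'.
        print(list(ckpt.keys())[:10])

If only tensors and key names come out (no pickled nn.Module), the files are weights only, and the matching architecture (the configs shown in the logs above use arch: hrnet) has to be constructed before loading them.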

Prediction on ADE20K and COCO-Stuff

First, thank you for the wonderful work.
I have a suggestion: you could add the packages apex, yaml, and yacs to requirements.txt.

I also have a question: even when I change the model from mseg-3m to coco-panoptic-133-1m or ade20k-150, the output stays the same. How can I run prediction on COCO-Stuff with 182 output classes and on ADE20K with 150 classes?

Best regards

Training a semantic segmentation model using the MSeg dataset

I followed all the steps to download the datasets and generate the MSeg dataset. I want to use MSeg to train a semantic segmentation model. I looked at the training branch (training.md), but I haven't found any example showing how to train my model using MSeg.

Problem with running the demo (No module named 'apex', no such file img3_overlaid_classes.jpg)

I am trying to run the demo by following the instructions, but got an error saying "No module named 'apex'" (full output below), followed by the error "No such file img3_overlaid_classes.jpg" in the next cell.
I also tried downgrading PyTorch to match the apex CUDA version, as well as commenting out the bare-metal version check in setup.py,
but nothing seemed to work.

Namespace(config='mseg-semantic/mseg_semantic/config/test/default_config_360.yaml', file_save='default', opts=['model_name', 'mseg-3m', 'model_path', '/mseg-3m.pth', 'input_file', '/kitchen1.jpg'])
arch: hrnet
base_size: 360
batch_size_val: 1
dataset: kitchen1
has_prediction: False
ignore_label: 255
img_name_unique: False
index_start: 0
index_step: 0
input_file: /kitchen1.jpg
layers: 50
model_name: mseg-3m
model_path: /mseg-3m.pth
network_name: None
save_folder: default
scales: [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
small: True
split: val
test_gpu: [0]
test_h: 713
test_w: 713
version: 4.0
vis_freq: 20
workers: 16
zoom_factor: 8
[2021-07-06 12:22:03,683 INFO universal_demo.py line 59 71646] arch: hrnet
base_size: 360
batch_size_val: 1
dataset: kitchen1
has_prediction: False
ignore_label: 255
img_name_unique: True
index_start: 0
index_step: 0
input_file: /kitchen1.jpg
layers: 50
model_name: mseg-3m
model_path: /mseg-3m.pth
network_name: None
print_freq: 10
save_folder: default
scales: [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
small: True
split: test
test_gpu: [0]
test_h: 713
test_w: 713
u_classes: ['backpack', 'umbrella', 'bag', 'tie', 'suitcase', 'case', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'animal_other', 'microwave', 'radiator', 'oven', 'toaster', 'storage_tank', 'conveyor_belt', 'sink', 'refrigerator', 'washer_dryer', 'fan', 'dishwasher', 'toilet', 'bathtub', 'shower', 'tunnel', 'bridge', 'pier_wharf', 'tent', 'building', 'ceiling', 'laptop', 'keyboard', 'mouse', 'remote', 'cell phone', 'television', 'floor', 'stage', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'fruit_other', 'food_other', 'chair_other', 'armchair', 'swivel_chair', 'stool', 'seat', 'couch', 'trash_can', 'potted_plant', 'nightstand', 'bed', 'table', 'pool_table', 'barrel', 'desk', 'ottoman', 'wardrobe', 'crib', 'basket', 'chest_of_drawers', 'bookshelf', 'counter_other', 'bathroom_counter', 'kitchen_island', 'door', 'light_other', 'lamp', 'sconce', 'chandelier', 'mirror', 'whiteboard', 'shelf', 'stairs', 'escalator', 'cabinet', 'fireplace', 'stove', 'arcade_machine', 'gravel', 'platform', 'playingfield', 'railroad', 'road', 'snow', 'sidewalk_pavement', 'runway', 'terrain', 'book', 'box', 'clock', 'vase', 'scissors', 'plaything_other', 'teddy_bear', 'hair_dryer', 'toothbrush', 'painting', 'poster', 'bulletin_board', 'bottle', 'cup', 'wine_glass', 'knife', 'fork', 'spoon', 'bowl', 'tray', 'range_hood', 'plate', 'person', 'rider_other', 'bicyclist', 'motorcyclist', 'paper', 'streetlight', 'road_barrier', 'mailbox', 'cctv_camera', 'junction_box', 'traffic_sign', 'traffic_light', 'fire_hydrant', 'parking_meter', 'bench', 'bike_rack', 'billboard', 'sky', 'pole', 'fence', 'railing_banister', 'guard_rail', 'mountain_hill', 'rock', 'frisbee', 'skis', 'snowboard', 'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard', 'surfboard', 'tennis_racket', 'net', 'base', 'sculpture', 'column', 'fountain', 'awning', 'apparel', 'banner', 'flag', 'blanket', 'curtain_other', 'shower_curtain', 'pillow', 'towel', 'rug_floormat', 'vegetation', 'bicycle', 'car', 'autorickshaw', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'trailer', 'boat_ship', 'slow_wheeled_object', 'river_lake', 'sea', 'water_other', 'swimming_pool', 'waterfall', 'wall', 'window', 'window_blind']
version: 4.0
vis_freq: 20
workers: 16
zoom_factor: 8
[2021-07-06 12:22:03,683 INFO universal_demo.py line 60 71646] => creating model ...
Traceback (most recent call last):
File "mseg-semantic/mseg_semantic/tool/universal_demo.py", line 105, in
test_runner = UniversalDemoRunner(args, use_gpu)
File "mseg-semantic/mseg_semantic/tool/universal_demo.py", line 70, in init
scales = args.scales
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/emanuelml/code/Users/emanuel/mseg-semantic/mseg_semantic/tool/inference_task.py", line 219, in init
self.model = self.load_model(args)
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/emanuelml/code/Users/emanuel/mseg-semantic/mseg_semantic/tool/inference_task.py", line 264, in load_model
from mseg_semantic.model.seg_hrnet import get_configured_hrnet
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/emanuelml/code/Users/emanuel/mseg-semantic/mseg_semantic/model/seg_hrnet.py", line 17, in
import apex
ModuleNotFoundError: No module named 'apex'

thank you.
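
Per the traceback above, seg_hrnet.py imports apex, NVIDIA's extension library, typically for its synchronized batch norm. For single-GPU inference, a common hedged workaround, besides installing apex from https://github.com/NVIDIA/apex, is to guard the import and fall back to the standard PyTorch batch norm (an edit to seg_hrnet.py at your own risk; exact symbol names there may differ):

    # Sketch of an import guard; BatchNorm2d here is whatever name the model code uses.
    try:
        import apex
        BatchNorm2d = apex.parallel.SyncBatchNorm
    except ImportError:
        from torch.nn import BatchNorm2d  # fine for single-GPU inference

The follow-up "No such file img3_overlaid_classes.jpg" error is most likely just a consequence of the demo crashing before it could write its output.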

MSeg universal demo takes 10 minutes for 1 image, why?

Hello pros,

I am currently working on implementing an image enhancement paper and its algorithm. As part of that, I need to run MSeg segmentation on real and rendered images/datasets; I have about 50-60k images.

The dependencies mseg-api and mseg-semantic are already installed. I first tried the Google Colab and then copied the commands so I could also run the script on my Linux machine. The command is:
python -u mseg_semantic/tool/universal_demo.py
--config="default_config_360.yaml"
model_name mseg-3m
model_path mseg-3m.pth
input_file /home/luda1013/PfD/image/try_images

The weights I used were downloaded from the Google Colab, i.e. mseg-3m-1080.pth.
[MSeg-log screenshot]

For me it took about 10 minutes per image, and what I get in temp_files is just the grayscale image.
Could someone help me solve this problem? Thank you :)

Train

Author, is it correct that the output of the aux loss is 0? (see screenshot)

detectron2 Panoptic-FPN

From the README:

One additional repo will be introduced in October 2020:
    mseg-panoptic: provides Panoptic-FPN and Mask-RCNN training, based on Detectron2

I wonder if this is still planned?

Regards,

Performance different from reported in paper

Hi, I tried to run the training code with the 1m config but got significantly worse performance than reported in the paper. I notice the paper mentions that batch size 35 was used, but in the 1m config the batch size is set to 14. Can you please explain what I should change to train a model that matches the reported performance with the 1m config? Thanks!
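
If batch size is the main difference, a hedged rule of thumb (the linear LR scaling heuristic, not something stated in this repo) is to scale the base learning rate with the batch size; whether that fully recovers the reported numbers is not guaranteed:

    # Linear scaling heuristic: learning rate proportional to batch size.
    paper_batch_size = 35      # from the paper, per the issue above
    my_batch_size = 14         # from the 1m config, per the issue above
    paper_base_lr = 0.01       # placeholder value, NOT the paper's actual learning rate

    scaled_lr = paper_base_lr * my_batch_size / paper_batch_size
    print(scaled_lr)  # 0.004 with the placeholder above

The training-schedule length (iterations vs. epochs seen) also interacts with batch size, so the number of iterations may need adjusting as well.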

mseg-3m-720p.pth?

Hi! Thanks for your gorgeous work!
I was wondering whether the mseg-3m-720p.pth weights are missing from your release files?
Many thanks and congratulations

How to train the model?

Hi! @johnwlambert
I checked out the training branch but cannot find any instructions for training. I wonder how to train the HR model in Table 2 of your paper.

Also, where is train-qvga-mix-copy.sh?
And where is train-qvga-mix-cd.sh?

It is very confusing.
