junweiliang / multiverse

Dataset, code, and models for the CVPR'20 paper "The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction", and for the ECCV'20 SimAug paper.

Home Page: https://next.cs.cmu.edu/multiverse/

License: Apache License 2.0

Languages: Python 99.95%, Shell 0.05%
Topics: trajectory-prediction, trajectory-prediction-benchmark, computer-vision, video-understanding, 3d-simulation


Contributors: junweiliang

multiverse's Issues

Reason why no use_gnn in Multimodal testing

Hello, I noticed that you did not use the GAT during multi-modal testing. May I ask the reason behind that?

python code/multifuture_inference.py forking_paths_dataset/next_x_v1_dataset_prepared_data/obs_data/traj_2.5fps/test/ \
forking_paths_dataset/next_x_v1_dataset_prepared_data/multifuture/test/ \
multiverse-models/multiverse_single18.51_multi168.9_nll2.6/00/best/ \
model1_output.traj.p --save_prob_file model1_output.prob.p \
--obs_len 8 --emb_size 32 --enc_hidden_size 256 --dec_hidden_size 256 \
--use_scene_enc --scene_id2name prepared_data/scene36_64_id2name_top10.json \
--scene_feat_path forking_paths_dataset/next_x_v1_dataset_prepared_data/obs_data/scene_seg/ \
--scene_h 36 --scene_w 64 --scene_conv_kernel 3 --scene_conv_dim 64 \
--grid_strides 2,4 --use_grids 1,0 --num_out 20 --diverse_beam \
--diverse_gamma 0.01 --fix_num_timestep 1 --gpuid 0
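
(For reference, a hypothetical variant of the above command with the graph attention network enabled would simply append the flag that appears in this repo's other commands; whether the released multi-future checkpoint was trained with it is not confirmed here:)

    python code/multifuture_inference.py [same arguments as above] --use_gnn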

SDD data split for simaug

Hi, congratulations on the paper and thank you for sharing the code!

I had a question regarding the SDD data split shared here and reported in the SimAug paper. I'm working on a related model -- I'm one of the authors of P2T_IRL which was a baseline considered in the paper. I wanted to make sure we're using the same data split for a fair comparison.

The data splits shared here: https://next.cs.cmu.edu/data/sdd_data_splits_eccv2020.tgz
seem to be a random 5-fold split of the videos. However, the baselines reported in the paper (SoPhie, SocialGAN, P2T_IRL), as well as others (MATF-GAN, CF-VAE), have reported results for a very specific train, val, and test split of SDD, which corresponds to the scenes in the TrajNet benchmark.

Do the numbers reported for SimAug in Table 2(a) correspond to the same split as used by the baselines?

Thanks again!

-- Nachiket

How to get CARLA world coordinates?

Hi, I have a question.
I read this line and searched for compute_actev_world_norm.py, but I could not find it. Where is compute_actev_world_norm.py?
Also, what was your calculation process for getting the CARLA world coordinates?
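
(An aside, not the repo's compute_actev_world_norm.py pipeline: if what is needed is simply the world coordinates of actors in a running CARLA simulation, the standard CARLA Python client exposes them directly. A minimal sketch, assuming a server on localhost:2000:)

    import carla

    client = carla.Client("localhost", 2000)
    client.set_timeout(5.0)
    world = client.get_world()
    for actor in world.get_actors().filter("walker.*"):
        loc = actor.get_location()  # world (Unreal) coordinates, in meters
        print(actor.id, loc.x, loc.y, loc.z)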

Script not found!!

Excuse me! @JunweiLiang
get_split_path.py cannot be found in the folder.
Thanks in advance.

Step 1: Pre-processing
Preprocess the data for SimAug training.

Get data split:

$ python code/get_split_path.py anchor_actev_4view_videos/rgb_videos/ \
    anchor_actev_4view_videos/data_splits --is_anchor --ori_split_path packed_prepro/actev_data_splits/

Visualize the model output

Hello,
when I run your script vis_multifuture_trajs_video.py, the following problem occurs:

  File "code/vis_multifuture_trajs_video.py", line 85, in <module>
    raise Exception("Cannot open %s" % video_file)
Exception: Cannot open forking_paths_dataset/multifuture_visualization/0400_50_293_cam4.mp4

I have generated the multi-future videos, but the set seems incomplete (there are only 9 videos, not including 0400_50_293_cam4.mp4).
So could you please tell me what's wrong?
Looking forward to your reply, thank you!
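
(A quick, generic way to check which of the generated videos are actually openable before visualizing, using OpenCV; the glob path below matches the one in the error message:)

    import glob
    import cv2

    for video_file in sorted(glob.glob(
            "forking_paths_dataset/multifuture_visualization/*.mp4")):
        cap = cv2.VideoCapture(video_file)
        if not cap.isOpened():
            print("cannot open:", video_file)
        cap.release()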

Are there any ways to preprocess the dataset in batches?

Hi, I'm a bachelor's student in Australia. My supervisor gave me this paper as the starting point of my thesis study in my Honours year.
First, thank you for your contribution. This is really amazing work!
Since I'm just a bachelor's student and the university doesn't provide me with any devices, I can only use Google Colab to run the code. When I was running step 1, around line 804 in preprocess.py, it threw a tcmalloc error and killed my program, as only 12 GB of memory is allocated on Colab.
So, are there any ways to process the data in batches to avoid occupying so much memory?
I'd really appreciate any help.
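
(One generic pattern, not specific to this repo's preprocess.py: process the trajectory files in fixed-size chunks and write intermediate results to disk, so only one chunk is resident in memory at a time. A sketch with a hypothetical path and per-file loader:)

    import glob
    import numpy as np

    traj_files = sorted(glob.glob("path/to/traj/train/*.txt"))  # hypothetical path
    chunk_size = 50
    for i in range(0, len(traj_files), chunk_size):
        chunk = traj_files[i:i + chunk_size]
        arrays = [np.loadtxt(f) for f in chunk]  # hypothetical per-file loader
        # run the per-chunk preprocessing here, then persist and free the chunk
        np.savez("prepro_chunk_%d.npz" % i, *arrays)
        del arrays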

Error to train SimAug

Hello! Thank you for your work.
I see you have provided a detailed tutorial for training.
However, with all the preprocessing steps finished, I keep failing to run code/train.py from the tutorial. While tracing the code, I suspect something goes wrong while calculating gradients. Furthermore, I found that if I remove the optimizer (Trainer.train_op in your repo) from the inputs of sess.run(inputs) in line 2045, the training process starts running smoothly.

To be clear, the inputs in the original repo are:

inputs = [self.loss, self.train_op, self.wd_loss]

When I run train.py, I get the error shown below:

multiview data stats:
	min 1, max 4
	{1: 748, 2: 474, 3: 275, 4: 11121}
loaded 47005 data points for train
loaded 7839 data points for val
 batch_size:12, epoch:30, 3918 step every epoch, total step:117540, eval/save every 3000 steps

  0%|          | 0/117540 [00:00<?, ?it/s]
  0%|          | 0/117540 [00:11<?, ?it/s]
Traceback (most recent call last):
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Resource __per_step_3/gradients/AddN_6/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
	 [[{{node gradients/AddN_6/tmp_var}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "code/train.py", line 335, in <module>
    main(arguments)
  File "code/train.py", line 308, in main
    trainer.step(sess, batch)
  File "/home/ziyan/simaug/Multiverse/SimAug/code/pred_models.py", line 2073, in step
    outputs = sess.run(inputs, feed_dict=feed_dict)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/ziyan/anaconda3/envs/tf1.15/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Resource __per_step_3/gradients/AddN_6/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
	 [[{{node gradients/AddN_6/tmp_var}}]]

Then, if I remove self.train_op from inputs, train.py runs smoothly:

inputs = [self.loss, self.wd_loss]

Screen logs:

multiview data stats:
	min 1, max 4
	{1: 748, 2: 474, 3: 275, 4: 11121}
loaded 47005 data points for train
loaded 7839 data points for val
 batch_size:12, epoch:30, 3918 step every epoch, total step:117540, eval/save every 3000 steps

  0%|          | 0/117540 [00:00<?, ?it/s]
  0%|          | 1/117540 [00:20<671:10:59, 20.56s/it]
  0%|          | 2/117540 [00:38<615:44:18, 18.86s/it]
  0%|          | 3/117540 [00:56<601:51:16, 18.43s/it]
  0%|          | 4/117540 [01:14<594:36:26, 18.21s/it]
  0%|          | 5/117540 [01:31<588:21:06, 18.02s/it]
  0%|          | 6/117540 [01:49<587:02:14, 17.98s/it]

Could you check that code/train.py can be executed correctly, and share the packages installed in your environment (e.g., the output of pip list)?

The conda environment I used to execute your code includes:

  • python3
  • tensorflow1.15
    both as mentioned in the README

All preprocessing steps have been done, so right now I have no clue how to solve the problem.
Your help would be much appreciated; I'm close to making this work! Thanks for your time.
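
(A note beyond the original report: this AlreadyExistsError on a __per_step.../tmp_var TemporaryVariableOp resource is a known TF 1.x problem that users have tied to grappler's rewrites of the AddN gradient accumulation inside while-loops. One commonly reported workaround, untested against this repo, is to disable grappler in the session config. A minimal sketch, assuming TF 1.15:)

    import tensorflow as tf

    config = tf.ConfigProto()
    # turn off grappler's graph rewrites for the whole session
    config.graph_options.rewrite_options.disable_meta_optimizer = True
    sess = tf.Session(config=config)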

Error in preprocessing of data for testing a trained model

When I run the step 1 preprocessing command, I get the error below; please guide me.

Warning - Keypoint Feature: VIRAT_S_000201_08_001652_001838 5580_7 not exists
Warning - Appearance Feature: VIRAT_S_000201_08_001652_001838 VIRAT_S_000201_08_001652_001838_5580_7 not exists
100%|###################################################################################################################| 56/56 [01:31<00:00, 1.64s/it]
total frames 15214, seq_list shape:(50233, 20, 2), total unique frame used:16681
total unique person box:58647
Traceback (most recent call last):
  File "C:\Users\A.C\Desktop\Multiverse-master\code\preprocess.py", line 911, in <module>
    main(arguments)
  File "C:\Users\A.C\Desktop\Multiverse-master\code\preprocess.py", line 139, in main
    prepro_each(args.traj_path, "train", os.path.join(
  File "C:\Users\A.C\Desktop\Multiverse-master\code\preprocess.py", line 716, in prepro_each
    other_box_seq_list = np.asarray(
                         ^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (50233, 20) + inhomogeneous part.
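
(For context: this ValueError is what NumPy 1.24+ raises when np.asarray is given ragged, inhomogeneous nested lists; older NumPy versions produced an object array with only a deprecation warning. The usual fix is either to pin numpy below 1.24 or to request an object dtype explicitly; whether an object array is acceptable downstream of preprocess.py line 716 is an assumption. A minimal sketch:)

    import numpy as np

    # ragged "other box" lists: a different number of boxes per timestep
    other_box_seq_list = [[[1, 2, 3, 4]], [[1, 2, 3, 4], [5, 6, 7, 8]]]
    arr = np.asarray(other_box_seq_list, dtype=object)  # explicit object dtype
    print(arr.shape)  # (2,)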

Can't start an observer client with a PTZ camera to play around

I tried to do it.

Then I received the following error message.

Traceback (most recent call last):
  File "code/spectator.py", line 21, in <module>
    from utils import anchor_cameras
  File "/home/use/Multiverse/forking_paths_dataset/code/utils.py", line 77, in <module>
    sun_azimuth_angle=20.0)
Boost.Python.ArgumentError: Python argument types in
    WeatherParameters.__init__(WeatherParameters)
did not match C++ signature:
    __init__(_object*, float cloudiness=0.0, float precipitation=0.0, float precipitation_deposits=0.0, float wind_intensity=0.0, float sun_azimuth_angle=0.0, float sun_altitude_angle=0.0, float fog_density=0.0, float fog_distance=0.0, float fog_falloff=0.0, float wetness=0.0)
    __init__(_object*)

Do you have any solutions?
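
(An aside, not a confirmed fix: a Boost.Python signature mismatch like this usually means the installed carla Python package does not match the CARLA version that forking_paths_dataset/code/utils.py was written against, so matching the client egg to the repo's stated CARLA version is the first thing to check. As an untested sketch, constructing WeatherParameters with defaults and assigning attributes afterwards also sidesteps keyword arguments the installed signature may not accept:)

    import carla

    client = carla.Client("localhost", 2000)  # assumes a server on localhost:2000
    world = client.get_world()
    weather = carla.WeatherParameters()       # construct with defaults only
    weather.cloudiness = 0.0
    weather.sun_azimuth_angle = 20.0          # set only fields this client version has
    world.set_weather(weather)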

Feeding more features into forking path dataset

Hi, I currently want to use the bounding boxes extracted in the pickle files and feed them into the model when doing multi-future inference in multifuture_inference.py, and I am not sure I understand the preprocessing correctly. I noticed that in the VIRAT preprocessing you store the coordinates of every other box for each person in each frame. Thus, for data["other_box"] in each data sample there is an array of length 8 containing the coordinates of the other boxes at each timestep.

But for the Forking Paths dataset, I only need to process the other-box feature for the "controlled agent", right? So the final processed other_box will be [N, obs_len, K], where K is the number of other boxes and N is 507, the size of the test set.

Thanks!

error during training

2024-04-15 12:11:22.419625: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] layout failed: Invalid argument: MutableGraphView::SortTopologically error: detected edge(s) creating cycle(s) {'new_train/person_pred/encoder_grid_class_1/while/Switch_4' -> 'new_train/person_pred/encoder_grid_class_1/while/encoder_grid_class_1/enc_grid_1/concat', 'new_train/person_pred/encoder_grid_class_1/while/Switch_4-1-0-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_class_1/while/Select_2', 'new_train/person_pred/encoder_grid_class_1/while/Switch_3' -> 'new_train/person_pred/encoder_grid_class_1/while/encoder_grid_class_1/enc_grid_1/mul', 'new_train/person_pred/encoder_grid_class_1/while/Switch_3-1-1-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_class_1/while/Select_1', 'new_train/person_pred/encoder_grid_class_0/while/Switch_4' -> 'new_train/person_pred/encoder_grid_class_0/while/encoder_grid_class_0/enc_grid_0/concat', 'new_train/person_pred/encoder_grid_class_0/while/Switch_4-1-0-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_class_0/while/Select_2', 'new_train/person_pred/encoder_grid_class_0/while/Switch_3' -> 'new_train/person_pred/encoder_grid_class_0/while/encoder_grid_class_0/enc_grid_0/mul', 'new_train/person_pred/encoder_grid_class_0/while/Switch_3-1-1-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_class_0/while/Select_1', 'gradients/AddN_35-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_35', 'gradients/AddN_28-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_28', 'gradients/AddN_45-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_45', 'gradients/AddN_20-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_20', 'gradients/AddN_33-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_33', 'gradients/AddN_25-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_25', 'gradients/AddN_44-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_44', 'gradients/AddN_18-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_18', 'gradients/AddN_43-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_43', 'gradients/AddN_39-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_39', 'gradients/AddN_51-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_51', 'gradients/AddN_37-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_37', 'gradients/AddN_37-3-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_37', 'gradients/AddN_37-2-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_37', 'gradients/AddN_42-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_42', 'gradients/AddN_38-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_38', 'gradients/AddN_50-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_50', 'gradients/AddN_36-0-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_36', 'gradients/AddN_36-3-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_36', 'gradients/AddN_36-2-TransposeNHWCToNCHW-LayoutOptimizer' -> 'gradients/AddN_36', 'new_train/person_pred/encoder_grid_reg_1/while/NextIteration_4-0-0-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_reg_1/while/Merge_4', 'new_train/person_pred/encoder_grid_reg_1/while/Switch_3' -> 'new_train/person_pred/encoder_grid_reg_1/while/encoder_grid_reg_1/enc_grid_regress_1/mul', 'new_train/person_pred/encoder_grid_reg_1/while/Switch_3-1-1-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_reg_1/while/Select_1', 'new_train/person_pred/encoder_grid_reg_0/while/NextIteration_4-0-0-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_reg_0/while/Merge_4', 'new_train/person_pred/encoder_grid_reg_0/while/Switch_3' -> 'new_train/person_pred/encoder_grid_reg_0/while/encoder_grid_reg_0/enc_grid_regress_0/mul', 'new_train/person_pred/encoder_grid_reg_0/while/Switch_3-1-1-TransposeNCHWToNHWC-LayoutOptimizer' -> 'new_train/person_pred/encoder_grid_reg_0/while/Select_1'}.
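
(For context: grappler logs this "layout failed" message at error level, but it is a failed optimization pass; TensorFlow 1.x normally falls back to the unoptimized graph and continues training. If it needs to be silenced, a commonly suggested option, untested against this repo, is to disable the layout optimizer in the session config:)

    import tensorflow as tf
    from tensorflow.core.protobuf import rewriter_config_pb2

    config = tf.ConfigProto()
    # skip the NCHW/NHWC layout rewrite pass that fails on these while-loop edges
    config.graph_options.rewrite_options.layout_optimizer = \
        rewriter_config_pb2.RewriterConfig.OFF
    sess = tf.Session(config=config)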

Additional material for simaug article

Hello, I would like a copy of the supplemental material for the SimAug article; the supplemental material I downloaded from the ECCV website is a corrupted zip archive. Thanks!

pygame window does not render any image

Hello,
When I follow your steps and run everything step by step, the pygame window shows no image, and the same problem occurs when running the examples from the package you provided. Why is this, and how can I solve it?

Visualise Data

I have a question regarding visualization. I don't understand why I always get the following when I launch the visualization. I had prepared the data properly, but I don't understand these messages; could you please explain?

[screenshot of the warning messages]

Also, when I run your script vis_multifuture_trajs_video.py, the following problem occurs:

  File "code/vis_multifuture_trajs_video.py", line 85, in <module>
    raise Exception("Cannot open %s" % video_file)
Exception: Cannot open forking_paths_dataset/multifuture_visualization/0000_0_303_cam1.mp4

I don't understand why!

In multifuture_visualization I got 507 elements (but only photos with a bounding box, without the possible multi-future paths in green as mentioned!).

Example: [screenshot of a frame with only a bounding box]

And in rgb_videos I have 3000 videos.

Thanks!

Visualize semantic segmentation image?

Excuse me! @JunweiLiang

When I run the following command, the files in the scene_seg_36x64_argoverse folder have the .npy extension. Would it be possible to visualize the semantic segmentation features?

Like this: [example semantic segmentation image]

Thanks in advance!

python code/extract_scene_seg.py val_frames_renamed.lst ^
deeplabv3_xception_ade20k_train/frozen_inference_graph.pb ^
scene_seg_36x64_argoverse --every 1 --down_rate 8.0 --job 1 --gpuid 0 --save_two_level
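
(A minimal sketch for inspecting one of the .npy files, assuming each stores a small integer class-id map, e.g. 36x64 as the folder name suggests; the exact array layout used by this repo is not confirmed here, and the file name is hypothetical:)

    import numpy as np
    import matplotlib.pyplot as plt

    seg = np.load("scene_seg_36x64_argoverse/some_frame.npy")  # hypothetical file name
    print(seg.shape, seg.dtype)      # expect something like a (36, 64) integer map
    plt.imshow(seg, cmap="tab20")    # one distinct color per class id
    plt.axis("off")
    plt.savefig("seg_vis.png", bbox_inches="tight")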

How to test on a custom video?

Thanks for your great work.

Currently I want to do single-future prediction on a custom video, so I want to ask how to run this network on a real-world video. Can you give some instructions on how to prepare my video and run prediction?

Thanks!

How to visualize ActEV datasets in SimAug?

Hi, thank you for your work. I see you provide a visualization flow for the SDD and Argoverse datasets in PREPRO.md, but not the process for the ActEV dataset. Can you tell me how to visualize it? I see there are get_prepared_data_sdd.py and get_prepared_data_argoverse.py Python files.

Some questions regarding the related work

Hello.
I noticed that you compared the Multiverse model with others like Social GAN and the Next-prediction model, and that the dataset format is compatible with those two models. Does that mean I can directly test pretrained Social GAN and Next-prediction models on the same dataset as this model? I am quite confused about how to run the testing and compare results across different models. Could you give me some instructions?

And I found a very interesting idea in your Next-prediction model: you used appearance features to predict future activities. I'm wondering if it's feasible to apply this method to the Multiverse model to improve the prediction.

I'd greatly appreciate your response.

No such file or directory

Hello, I'm running multi-future trajectory prediction, and when I execute the first python command I get the following error. The error says the file doesn't exist; I searched the whole package folder and didn't find the file. How can I solve this problem? I'm very much looking forward to your answer.
The error message is as follows:
(tens) air-nss@airnss-Legion-R9000P-ARX8:~/predict/Multiverse$ python code/multifuture_inference.py forking_paths_dataset/next_x_v1_dataset_prepared_data/obs_data/traj_2.5fps/test/ \
  forking_paths_dataset/next_x_v1_dataset_prepared_data/multifuture/test/ \
  multiverse-models/multiverse_single18.51_multi168.9_nll2.6/00/best/ \
  model1_output.traj.p --save_prob_file model1_output.prob.p \
  --obs_len 8 --emb_size 32 --enc_hidden_size 256 --dec_hidden_size 256 \
  --use_scene_enc --scene_id2name prepared_data/scene36_64_id2name_top10.json \
  --scene_feat_path forking_paths_dataset/next_x_v1_dataset_prepared_data/obs_data/scene_seg/ \
  --scene_h 36 --scene_w 64 --scene_conv_kernel 3 --scene_conv_dim 64 \
  --grid_strides 2,4 --use_grids 1,0 --num_out 20 --diverse_beam --use_gnn \
  --diverse_gamma 0.01 --fix_num_timestep 1 --gpuid 0
Traceback (most recent call last):
  File "code/multifuture_inference.py", line 407, in <module>
    inputs = get_inputs(args, traj_files, gt_trajs)
  File "code/multifuture_inference.py", line 170, in get_inputs
    with open(args.scene_id2name, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'prepared_data/scene36_64_id2name_top10.json'
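
(A generic check, beyond the original report: the missing path prepared_data/scene36_64_id2name_top10.json is resolved relative to the current working directory, so it is worth running find . -name scene36_64_id2name_top10.json from the repo root to see whether the file exists under a different prefix in the downloaded packages, in which case only the --scene_id2name argument needs adjusting.)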

SimAug dataset video file names

Hi, thanks for your contribution!
I just want to make sure that the files of the SimAug dataset are named with the following format:
VIRAT_S_[4-digit scene id][xx]_[2-digit moment id]_[moment start time]_[moment end time]_F_[frame id]_obs12_pred16_[camera id]

SDD and Argoverse npz train and validation files.

Hi,

Congratulations, great work you have here. I wonder if it is possible to share the datasets you used to train SimAug, in the same way as you did for Next Prediction. In the NextP repo it is possible to download train, val, and test npz files for the ActEV dataset. Is there something similar available for the SDD and Argoverse datasets? I only found the test npz files for SDD and Argoverse.

thanks a lot in advance.

private chat

Hello, can I chat with you privately? I understand that you are very busy, but I have run into some technical difficulties and cannot continue, and the deadline is almost here. If possible, I would be very grateful! Thank you for replying despite your busy schedule. This is my email address: [email protected]
Thank you for your answer!

Creating the whole Garden of Forking Paths dataset as single future

Thank you for your reply.

I have a few more points of confusion and questions regarding this work.

In the paper, Table 3, is the "Trained on simulation" column trained on the whole simulated data (created with get_prepared_data.py and then extracting frames and scenes)?

I am trying to create the whole dataset for training, but using get_prepared_data.py and then get_frames_and_scene_seg.py I get about 224 warnings (bad videos). For example: warning, the 0401_67_204_1_12_cam3 video has 30 RGB frames, 30 seg frames, and 31 in traj. If I change "while cur_rgb_frame < frame_count:" to "while cur_rgb_frame <= frame_count:", I get the assert error "assert suc, (videoname, cur_rgb_frame, frame_count)" because there is no frame at cur_rgb_frame == frame_count.

In get_prepared_data.py the drop frame is always drop_frame = args.drop_frame["virat"], which is equal to 12. Should I also add drop_frame = args.drop_frame["ethucy"], the way you do in get_prepared_data_multifuture.py?

Originally posted by @rameezrehman83 in #27 (comment)

Map loading problem for Unreal

Hi,

I tried to load your provided maps (Town03_ethucy and Town05_actev) in Unreal but failed. It only showed a warning: "Failed to open map file. It may be because the map uses a newer version of the engine for saving." Actually, I used the same engine version and could load any other map provided in CARLA. Could you please tell me how to fix this problem?

Thank you for your help.

How to create a new dataset for some videos.

Hi,
I want to know how to create a new Forking Paths dataset from other datasets.
What data sources do I need?
For example, is it difficult without the homography matrices of the videos?

Can't cook the maps

Hi,
I use UE 4.22.3 according to this guide, but I received the following error message.

Increasing per-process limit of core file size to infinity.
Project file not found: carla_source_09272019/Unreal/CarlaUE4/CarlaUE4.uproject
Preparing to exit.
Shutting down and abandoning module NetworkFile (12)
Shutting down and abandoning module CookedIterativeFile (10)
Shutting down and abandoning module StreamingFile (8)
Shutting down and abandoning module SandboxFile (6)
Shutting down and abandoning module PakFile (4)
Shutting down and abandoning module RSA (3)
Exiting.
Exiting abnormally (error code: 1)
FGlobalDynamicReadBuffer::Cleanup()

Do you have any solutions?

How to apply the model to predict the trajectory in a new video

Hello, how should I apply your code to predict trajectories in a new video? In other words, where do I put the video and which code do I run? Thanks for your answer.

0000_11_346_0_12_cam2 is skipped when preparing data

I'm trying to run the test session of this repo, but I noticed that there is a warning that 0000_11_346_0_12_cam2 is skipped due to bad x_agent boxes when preparing data and thus this video cannot be visualized in the multi-future part.

Can you help me solve this problem?

How to visualize new videos in the SimAug project?

My question is: how can the SimAug project be applied to a new video? I looked at your code, and you visualized the SDD campus dataset and the Argoverse dataset. In the SimAug project, you visualized the SDD dataset in the PREPRO.md file, where there is a command written like this:
python code/get_prepared_data_sdd.py annotations/ data_splits/fold_1 resized.lst prepared_data_fold1
The SDD dataset comes with annotation files. If I take a new video to test, do I also need annotation files?
Thank you very much.

No such file or directory: 0000_11_346_cam2.p

Hi,
I am trying "Visualize the Dataset" and I have run into a problem:

FileNotFoundError: [Errno 2] No such file or directory: 'next_x_v1_dataset_prepared_data/multifuture/test/0000_11_346_cam2.p'

Do you have a solution?

How to run model on one test video ?

Hello,
I just want to run the model on a single test video from the Forking Paths dataset, as I don't want to extract the seg frames and RGB frames every time; it takes too long.

Prepare training data?

Hello @JunweiLiang
I would like to confirm that the data (sddactev_trainval/actev/) used to finetune the SimAug model is the same as the outcome of following your preprocessing steps for training:

  1. Get data split
  2. Get trajectories
  3. Get RGB frames and scene segmentation features
  4. Remove videos with bad trajectories
  5. Preprocess data into npz files for efficient training
python code/train.py sddactev_trainval/actev/ simaug_my_finetune actev_modelname \
 --load_from packed_models/best_simaug_model/00/best/ --wd 0.001 --runId 0 \
 --obs_len 8 --pred_len 12 --emb_size 32 --enc_hidden_size 256 --dec_hidden_size \
 256 --activation_func tanh --keep_prob 1.0 --num_epochs 30 --batch_size 20 \
 --init_lr 0.05 --use_gnn --learning_rate_decay 0.95 --num_epoch_per_decay 2.0 \
 --grid_loss_weight 1.0 --grid_reg_loss_weight 0.1 --save_period 1000 --scene_h 36 \
 --scene_w 64 --scene_conv_kernel 3 --scene_conv_dim 64 --scene_grid_strides 2,4 \
 --use_grids 1,0 --val_grid_num 0 --train_w_onehot --gpuid 0

Another question: I would like to use the Argoverse dataset to finetune the SimAug model. Is it possible to use three-view samples for training (e.g., front-center, front-left, front-right)?

Thanks in advance!

Pretrained model can't be downloaded!!

Excuse me! @JunweiLiang

When I ran the command (bash scripts/download_single_models.sh), it seems it can't download the pretrained model for multiverse.

download_single_models.sh: line 2: $'\r': command not found
--2022-07-02 09:32:26-- https://next.cs.cmu.edu/multiverse/dataset/multiverse-models.tgz%0D
Resolving next.cs.cmu.edu (next.cs.cmu.edu)... 128.2.220.9
Connecting to next.cs.cmu.edu (next.cs.cmu.edu)|128.2.220.9|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-07-02 09:32:26 ERROR 404: Not Found.

download_single_models.sh: line 4: $'\r': command not found
tar (child): multiverse-models.tgz\r: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Thanks in advance!
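
(A likely diagnosis, beyond the original report: the repeated $'\r': command not found messages, and the %0D appended to the download URL, indicate the script has Windows (CRLF) line endings, which bash does not strip. A common fix, assuming a standard Linux shell, is to convert the line endings and rerun the script:)

    sed -i 's/\r$//' scripts/download_single_models.sh
    bash scripts/download_single_models.sh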
