Comments (15)

huikunbi commented on May 26, 2024

@JunweiLiang
Regarding how to link the assets:
I imported your provided maps into Unreal 4.22.3 on Windows for Carla 0.9.8. I first enabled the editor option to allow cooked content, then followed this step:
[screenshot]
When opening a new map, Unreal shows warnings and errors about missing assets in the Output Log; you can check those and copy the assets into the directory carla\Unreal\CarlaUE4\Content\Carla.

So the static buildings in the 3D environment weren't calculated based on the homography matrices? They were just checked manually?

JunweiLiang commented on May 26, 2024

What version of Unreal do you use? I use UE 4.22.3 according to this guide, and then I launch the CarlaUE4 editor according to this to edit the map.
Alternatively, you can download my entire Carla 0.9.6 source code with the maps included (you'll still need UE 4.22.3 installed). I just tested it and could load the map:
[screenshot]

huikunbi commented on May 26, 2024

@JunweiLiang Thank you for your reply.

I imported your provided maps into the CarlaUE4 Editor, changed the project settings, and linked all the assets from your provided source code. It now works on 4.22.3.

I also have a small question. You said that "In the UE4 editor, I simply duplicate existing CARLA map first (Town05 and Town03), and then edit them to look like the ActEV or ETHUCY videos." (Section: Edit the maps)
Are the positions of static objects (such as buildings or cars) in the scene accurate? How do you ensure they "look like" the videos? Are they also transformed from the original data and pixel positions?

JunweiLiang commented on May 26, 2024

Could you provide detailed steps on how you link the assets in the CarlaUE4 Editor to make it work? This would be useful. Thanks.

The way I did it was to project the trajectories from the real-world videos into the 3D scene (following this guide) and use them as a reference in addition to the video itself. So if a person walks straight along the sidewalk in the real-world video, I would place the sidewalk in CarlaUE4 so that the agent also walks straight along the sidewalk in Carla. In practice, I had to build the maps, start the Carla server, find things I was not satisfied with, and go back to edit the maps again, a couple of times. Still, the re-created static scenes are not pixel-accurate reconstructions of the real-world videos. All the annotated scenarios are manually checked to ensure they are reasonable, though.
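(For context, the projection relies on the per-scene homographies mentioned later in this thread. A minimal sketch of applying one in Python, assuming a 3x3 pixel-to-ground-plane matrix stored in a file; the path and the sample trajectory are made up:)

import numpy as np

# Hypothetical file holding a 3x3 homography (pixel -> ground plane, meters).
H = np.loadtxt("scene_homography.txt")

# Example pixel trajectory, shape (N, 2): (x, y) image coordinates.
traj_px = np.array([[720.0, 430.0], [715.0, 442.0], [710.0, 455.0]])

# Apply the homography in homogeneous coordinates, then dehomogenize.
pts_h = np.hstack([traj_px, np.ones((len(traj_px), 1))])
world_h = pts_h @ H.T
traj_world = world_h[:, :2] / world_h[:, 2:3]
print(traj_world)  # (N, 2) ground-plane coordinates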

A better way would be to use a good semantic segmentation model to get a smoothed segmentation map, transform it onto the ground plane, and then somehow automatically place it in CarlaUE4 (I don't know how).

JunweiLiang commented on May 26, 2024

Thanks. That is very helpful. Closing this issue now.

No, the static buildings, grass, sidewalks, roads, trees, etc. are only checked manually. The trajectories of persons and vehicles are transformed based on the homography matrices.

JunweiLiang commented on May 26, 2024

Just a follow-up on automatic static scene generation. I recently came across the following two papers:

Kar, Amlan, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, and Sanja Fidler. "Meta-sim: Learning to generate synthetic datasets." In Proceedings of the IEEE International Conference on Computer Vision, pp. 4551-4560. 2019.
Prakash, Aayush, Shaad Boochoon, Mark Brophy, David Acuna, Eric Cameracci, Gavriel State, Omer Shapira, and Stan Birchfield. "Structured domain randomization: Bridging the reality gap by context-aware synthetic data." In 2019 International Conference on Robotics and Automation (ICRA), pp. 7249-7255. IEEE, 2019.

According to these two papers, you can use a scene-graph-like grammar to automatically generate UE4 static scenes with a UE4 scene generator.
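(As a toy illustration of the scene-grammar idea, not taken from either paper; the prop types and attribute ranges below are invented:)

import random

# Toy grammar: a scene is a fixed base layout plus a sampled set of props.
PROPS = ["tree", "bench", "parked_car", "trash_bin"]

def sample_scene(seed=0):
    """Sample prop placements (type, x, y, yaw) for one synthetic scene.
    A system like Meta-Sim learns these attribute distributions; here
    they are uniform placeholders."""
    rng = random.Random(seed)
    placements = []
    for _ in range(rng.randint(3, 8)):
        placements.append((
            rng.choice(PROPS),
            rng.uniform(-50.0, 50.0),  # x (meters)
            rng.uniform(-50.0, 50.0),  # y (meters)
            rng.uniform(0.0, 360.0),   # yaw (degrees)
        ))
    return placements

print(sample_scene())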

huikunbi commented on May 26, 2024

@JunweiLiang
Thank you for providing this information; I will check the papers.

You explained how to import the ActEV data into Carla in "Recreate Scenarios from Real-world Videos". But in the directory "final_annos/ucyeth_annos/" I can only see the ETH&UCY trajectories in pixel coordinates, not in world coordinates. Can you tell me where they are? Thank you.

JunweiLiang commented on May 26, 2024

The trajectories in world coordinates are under final_annos/ucyeth_annos/original_trajs. I will add the instructions, probably at the end of the month.

huikunbi commented on May 26, 2024

I tried to simulate the data (build_moment.py) in Carla, but the pedestrians in ETH&UCY were not walking in the corresponding scenario, as shown for the "eth" data in the following figures:

[screenshot]

[screenshot]

Can you please help me figure out the problem?

JunweiLiang commented on May 26, 2024

I have updated the instructions for ETH/UCY. Some calibration of the coordinates is needed. Also, I noticed that your ZARA scene is missing a static vehicle:
[screenshot]

huikunbi commented on May 26, 2024

@JunweiLiang Thank you for the reminder.

I tried the proposed calibration parameters for ETH&UCY. I found that all the trajectories (in ETH, ZARA, and HOTEL) are inconsistent with the orientation of the scenario; it looks as if the scenario is rotated by a certain angle. Can you confirm whether there are any errors in the provided parameters?

zara:
[screenshot]

eth:
[screenshot]

hotel:
[screenshot]

JunweiLiang commented on May 26, 2024

Did you encounter the same problem for ActEV and Town5_actev? I'll test it this weekend.

huikunbi commented on May 26, 2024

The walking directions in Town5_actev seem right. Thank you.

JunweiLiang commented on May 26, 2024

Hi, the calibration parameters are correct. Steps to reproduce:

  1. Follow this to download the CARLA_0.9.6 package and the edited maps
  2. Start CARLA server by:
$ cd CARLA_0.9.6/; ./CarlaUE4.sh -opengl -carla-port=23015
  3. Change map to Town03_ethucy:
$ python code/spectator.py --port 23015 --change_map Town03_ethucy --go_to_zara_anchor --set_weather

Now you should be looking through the ZARA camera:
[screenshot]
  4. Plot the trajectories into the scene:

$ python code/plot_traj_carla.py ethucy_trajs/world/crowds_zara01.txt 0 -44.0511921243 -79.6225002047 \
0. -3.0428182023594172 --world_rotate 270 --scale 1.2 --port 23015

Now you should see something like this:
[screenshot]
These are examples of the reconstructed trajectories. Then run:

$ mkdir ethucy_carla_pedestrian/
$ python code/plot_traj_carla.py ethucy_trajs/world/crowds_zara01.txt 0 -44.0511921243 -79.6225002047 \
0. -3.0428182023594172 --world_rotate 270 --scale 1.2 \
--save_carla_traj_file ethucy_carla_pedestrian/zara01.txt

The trajectories in the Town03_ethucy map's world coordinates will then be saved to ethucy_carla_pedestrian/zara01.txt.
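(The exact semantics of the positional arguments are defined in code/plot_traj_carla.py; as a rough guess, the calibration amounts to a standard 2D rotate-scale-translate. A sketch under that assumption, not a confirmed reading of the script:)

import numpy as np

def ethucy_to_carla(xy, rotate_deg=270.0, scale=1.2,
                    anchor=(-44.0511921243, -79.6225002047)):
    """Assumed form of the calibration: rotate, then scale, then
    translate to the Carla anchor. xy is an (N, 2) array of ETH/UCY
    world coordinates."""
    t = np.deg2rad(rotate_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return (xy @ rot.T) * scale + np.asarray(anchor)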

To spawn the pedestrians from frame 0 to 200, do:

$ python code/build_moment.py ethucy_carla_pedestrian/zara01.txt 0 200 --port 23015

You should see something like this, with a lot of collisions:
[screenshot]
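(For reference, a replay like build_moment.py can be sketched with the CARLA 0.9.6 Python API; the trajectory file layout, the walker blueprint choice, and the z height below are assumptions:)

import carla

client = carla.Client("localhost", 23015)
client.set_timeout(10.0)
world = client.get_world()

# Assumed layout: one "frame_id person_id x y" row per line, in the
# Town03_ethucy map's world coordinates (the file saved above).
rows = [line.split() for line in open("ethucy_carla_pedestrian/zara01.txt")]

walker_bp = world.get_blueprint_library().filter("walker.pedestrian.*")[0]
walkers = {}
for frame, pid, x, y in rows:
    if not 0 <= int(frame) <= 200:
        continue
    transform = carla.Transform(carla.Location(x=float(x), y=float(y), z=1.0))
    if pid not in walkers:
        # Spawn each person once; try_spawn_actor returns None on collision.
        walkers[pid] = world.try_spawn_actor(walker_bp, transform)
    elif walkers[pid] is not None:
        walkers[pid].set_transform(transform)  # teleport-style replay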
We then run an automatic collision-moment filtering step according to here to get collision-free moments for editing.
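(A minimal sketch of what such distance-based filtering can look like; the threshold and the data layout are assumptions, not the repo's implementation:)

import numpy as np

def collision_free_frames(frames, min_dist=0.5):
    """frames: dict mapping frame_id -> {agent_id: (x, y)}.
    Returns the frame ids where every pair of agents stays farther
    apart than min_dist (threshold assumed)."""
    keep = []
    for frame_id, agents in frames.items():
        xy = np.array(list(agents.values()), dtype=float)
        if len(xy) >= 2:
            dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            np.fill_diagonal(dists, np.inf)  # ignore self-distances
            if dists.min() <= min_dist:
                continue  # at least one collision in this frame
        keep.append(frame_id)
    return keep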

Let me know how you got the above incorrect trajectories.

tom728 commented on May 26, 2024

Have you solved this problem? Thank you.
