
leaderboard's People

Contributors

cmpute, daraan, felipecode, germanros1987, glopezdiest, icgog, jackbart94, joel-mb, pablovd, tayyim


leaderboard's Issues

camera doesn't move with the ego vehicle

I hit this problem when trying to spawn multiple clients and CARLA servers via multiprocessing: the camera created by the CARLA server did not move with the ego vehicle, while a camera I created myself worked fine. However, if I spawned the clients and servers separately, the problem disappeared.

Does anyone know how to solve this?

Failed to capture data using an RGB camera: time-out error

I am struggling to capture data with the official logs in CARLA Leaderboard 2.0. The camera sensor always fails to send back data and raises a timeout error. I only removed some sensors from the example capture Python script and added an additional sensor limit to the agent wrapper. I tried increasing the queue's time limit to 60 s, but it still failed. Can anyone help with this?
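For context, the pattern at issue is pulling sensor data from a queue with a timeout. A minimal standalone illustration of that pattern (my own sketch, not the leaderboard's actual code; the function name is made up):

```python
import queue

def get_sensor_data(q, timeout_s=2.0):
    """Return the next sensor frame, or None if nothing arrives within timeout_s."""
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        # This is the condition that surfaces as a timeout error upstream
        return None

q = queue.Queue()
q.put({'frame': 1, 'data': b'\x00\x01'})
print(get_sensor_data(q))        # prints the queued frame
print(get_sensor_data(q, 0.1))   # prints None: queue is empty, mimicking the timeout
```

If the timeout fires even at 60 s, the sensor is most likely never producing data at all, so raising the limit further will not help.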

How to spawn pedestrians in leaderboard

I use CARLA 0.9.10 on Ubuntu 18.04. In the leaderboard, I set synchronous_mode = True.

I wanted to spawn pedestrians in the leaderboard, so I added a request_new_batch_walkers function to carla_data_provider.py as follows:

@staticmethod
def request_new_batch_walkers(model, amount, spawn_points, autopilot=False,
                              random_location=False, rolename='scenario'):
    # Note: requires `import numpy as np` at module level
    SpawnActor = carla.command.SpawnActor  # pylint: disable=invalid-name

    # Ignore the passed-in spawn points and draw random ones instead
    spawn_points = CarlaDataProvider.generate_random_spawn_points(amount)

    batch = []
    for i in range(amount):
        # Get a walker blueprint by model name
        blueprint = CarlaDataProvider.create_blueprint(model, rolename)
        batch.append(SpawnActor(blueprint, spawn_points[i]))

    walkers, walker_id_batch = CarlaDataProvider.handle_walker_batch(batch)

    # Attach an AI controller to each walker and send it to a random location
    con_bp = CarlaDataProvider.create_blueprint('controller.ai.walker', rolename)
    controller_batch = [SpawnActor(con_bp, carla.Transform(), walker_id)
                        for walker_id in walker_id_batch]
    controllers = CarlaDataProvider.handle_actor_batch(controller_batch)
    for controller in controllers:
        controller.start()
        controller.go_to_location(CarlaDataProvider._world.get_random_location_from_navigation())
        controller.set_max_speed(1.4 + np.random.randn())

    CarlaDataProvider._controllers = controllers
    CarlaDataProvider._timers = [np.random.randint(60, 600) * 20 for _ in controllers]

    actors = walkers + controllers
    if actors is None:
        return None

    for actor in actors:
        if actor is None:
            continue
        CarlaDataProvider._carla_actor_pool[actor.id] = actor
        CarlaDataProvider.register_actor(actor)
    return actors

But errors occur when I try to destroy the walkers and controllers on the client side:

ERROR: failed to destroy actor 599 : unable to destroy actor: not found

I found that these actors were all either walkers or controllers; there were no errors about the vehicles. After running for a while, the client gets stuck in world.tick in scenario_manager.py:

CarlaDataProvider.get_world().tick(self._timeout)

And the CARLA world would wait for a tick to continue.

If I didn't add these walkers, the problem disappeared and everything worked fine.

So, my question is:

  1. Am I spawning walkers correctly? If not, what is the best way to spawn walkers in the leaderboard?
  2. Is the tick failure caused by adding the walkers to the leaderboard?

ERROR: trying to create rpc server for traffic manager; but the system failed to create because of bind error.

I got this error when I ran the main function of leaderboard_evaluator.py in a multiprocessing way to train my DRL agent. I also changed the arguments; the code is as follows:

import torch.multiprocessing as mp

...

    for rank in range(2):
        port = 8000 + 100 * (rank + 1)
        t = mp.Process(target=main, args=(
            str(port), result_root_path, rank, traffic_light, counter, shared_model, shared_grad_buffers,
            son_process_counter))
        t.start()
        processes.append(t)

    for t in processes:
        t.join()

The 'port' is the main port used by the CARLA server and client. I started two subprocesses; the first worked well, but the second failed with:

trying to create rpc server for traffic manager; but the system failed to create because of bind error.

I found only one place that sets the traffic manager's port: scenario_runner/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py, in the class ChangeAutoPilot. But the line self._tm = CarlaActorPool.get_client().get_trafficmanager() is never called.

Does anyone know where to change the port of the traffic manager?
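As an aside on the bind error itself: every process needs its own, non-overlapping traffic manager port (the port later handed to client.get_trafficmanager). A hedged sketch of a per-worker port scheme, with a made-up offset, could look like:

```python
def worker_ports(rank, base=8000, stride=100, tm_offset=50):
    """Return (server_port, tm_port) for one worker; tm_port must be unique per process."""
    server_port = base + stride * (rank + 1)   # matches the 8000 + 100 * (rank + 1) scheme above
    tm_port = server_port + tm_offset          # hypothetical offset keeping TM ports distinct
    return server_port, tm_port

print(worker_ports(0))  # (8100, 8150)
print(worker_ports(1))  # (8200, 8250)
```

If both subprocesses fall back to the default TM port (8000), the second bind fails exactly as described.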

libGL error: MESA-LOADER

Hello,

I am fairly new to carla and am trying to get started using this documentation. When trying to run a basic agent outline in step 2.1, I get the following error when running ./test_run.sh.
(screenshot of the error attached: Screenshot from 2021-11-11 12-48-12)

I would appreciate any help with this as I want to make sure I do not have any problems with agents moving forward.

ERROR: failed to destroy actor 290 : unable to destroy actor: not found

I am working on adapting the baseline LbC agent by @bradyz here to work with the latest versions of CARLA (9.10.1) and ScenarioRunner. I keep getting errors that seem to point towards an improper clean_up at the end of each test route. For instance, when running test_route_00 I get the following output:

> Registering the route statistics
ERROR: failed to destroy actor 290 : unable to destroy actor: not found

And sometimes I see the following:

ERROR: Invalid session: no stream available with id 788529155

Eventually the simulator crashes with the following error (although it is potentially unrelated to the clean_up issue above):

terminating with uncaught exception of type clmdep_msgpack::v1::type_error: std::bad_cast
Signal 6 caught.
Malloc Size=65538 LargeMemoryPoolOffset=65554 
Malloc Size=65535 LargeMemoryPoolOffset=131119 
Malloc Size=122688 LargeMemoryPoolOffset=253824 
Aborted (core dumped)

Any ideas where wires may be crossing with the implementation?

Thanks!

About leaderboard-2.0 branch

May I know which versions of CARLA and scenario_runner are required for running the leaderboard-2.0 branch?

ROS agent for leaderboard 2.0 has no statistics

I tried to create an agent in the leaderboard 2.0 framework, starting it with the shell script that launches leaderboard_evaluator.py. Everything goes almost right until I stop the run manually to collect the statistics. I observed that the CARLA simulator had already gone blank because the actors in it had been destroyed, so the world object in Python became None, the callback raised AttributeError: 'NoneType' object has no attribute 'get_snapshot', and the run got stuck.

devices: Intel136k + 4070Ti
system: Ubuntu 20.04
api : ROS1Agent

> Stopping the route

========= Results of RouteScenario_0 (repetition 0) ------ FAILURE =========

╒═════════════════════════════════╤═════════════════════╕
│ Start Time                      │ 2023-06-25 13:56:50 │
├─────────────────────────────────┼─────────────────────┤
│ End Time                        │ 2023-06-25 13:58:10 │
├─────────────────────────────────┼─────────────────────┤
│ Duration (System Time)          │ 80.07s              │
├─────────────────────────────────┼─────────────────────┤
│ Duration (Game Time)            │ 97.05s              │
├─────────────────────────────────┼─────────────────────┤
│ Ratio (System Time / Game Time) │ 1.212               │
╘═════════════════════════════════╧═════════════════════╛

╒═══════════════════════╤═════════╤═════════╕
│ Criterion             │ Result  │ Value   │
├───────────────────────┼─────────┼─────────┤
│ RouteCompletionTest   │ FAILURE │ 0.04 %  │
├───────────────────────┼─────────┼─────────┤
│ OutsideRouteLanesTest │ SUCCESS │ 0 %     │
├───────────────────────┼─────────┼─────────┤
│ CollisionTest         │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningRedLightTest   │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ RunningStopTest       │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ MinSpeedTest          │ SUCCESS │ 100 %   │
├───────────────────────┼─────────┼─────────┤
│ InRouteTest           │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ AgentBlockedTest      │ SUCCESS │         │
├───────────────────────┼─────────┼─────────┤
│ ScenarioTimeoutTest   │ SUCCESS │ 0 times │
├───────────────────────┼─────────┼─────────┤
│ Timeout               │ SUCCESS │         │
╘═══════════════════════╧═════════╧═════════╛

> Registering the route statistics
Exception on start_listening while trying to handle message received. It could indicate a bug in user code on message handlers. Message skipped.
Traceback (most recent call last):
  File "/home/wly/miniconda3/envs/myfuser/lib/python3.8/site-packages/roslibpy/comm/comm_autobahn.py", line 40, in onMessage
    self.on_message(payload)
  File "/home/wly/miniconda3/envs/myfuser/lib/python3.8/site-packages/roslibpy/comm/comm.py", line 38, in on_message
    handler(message)
  File "/home/wly/miniconda3/envs/myfuser/lib/python3.8/site-packages/roslibpy/comm/comm.py", line 85, in _handle_publish
    self.factory.emit(message["topic"], message["msg"])
  File "/home/wly/miniconda3/envs/myfuser/lib/python3.8/site-packages/roslibpy/event_emitter.py", line 164, in emit
    result = f(*args, **kwargs)
  File "/mnt/LinuxData/myfuser/leaderboard/leaderboard/autoagents/ros_base_agent.py", line 208, in _vehicle_control_cmd_callback
    carla_timestamp = CarlaDataProvider.get_world().get_snapshot().timestamp.elapsed_seconds
AttributeError: 'NoneType' object has no attribute 'get_snapshot'
(the same traceback repeats several more times before the run ends)
> Registering the global statistics

scenario_runner 0.9.13 crashes server on CARLA 0.9.13

Hi,
I am using CARLA 0.9.13, scenario_runner 0.9.13, and a partially modified leaderboard.
But after evaluating only 30 short routes, CARLA crashed.
Is this expected?
(screenshot attached)
I saw that related problems have been mentioned in other issues; CARLA 0.9.12 also crashed after multiple iterations.

Is there any solution at present?
I would appreciate any help you could provide me!

Bug in infractions/km calculation in compute_global_statistics

I think there is a bug in the computation of the infractions/km metrics in the current leaderboard repository (master).
The file is leaderboard/utils/statistics_manager.py, function compute_global_statistics(self, total_routes):

...
if self._registry_route_records:
    for route_record in self._registry_route_records:
        global_record.scores['score_route'] += route_record.scores['score_route']
        global_record.scores['score_penalty'] += route_record.scores['score_penalty']
        global_record.scores['score_composed'] += route_record.scores['score_composed']

        for key in global_record.infractions.keys():
            route_length_kms = max(route_record.scores['score_route'] / 100 * route_record.meta['route_length'] / 1000.0, 0.001)
            if isinstance(global_record.infractions[key], list):
                global_record.infractions[key] = len(route_record.infractions[key]) / route_length_kms
            else:
                global_record.infractions[key] += len(route_record.infractions[key]) / route_length_kms

        if route_record.status is not 'Completed':
            global_record.status = 'Failed'
            if 'exceptions' not in global_record.meta:
                global_record.meta['exceptions'] = []
            global_record.meta['exceptions'].append((route_record.route_id,
                                                    route_record.index,
                                                    route_record.status))

    for key in global_record.scores.keys():
        global_record.scores[key] /= float(total_routes)
    ...

In the line:
global_record.infractions[key] += len(route_record.infractions[key]) / route_length_kms

The infractions/km values from all the individual routes simply get added together, leading to nonsense values that depend on the number of routes.
The current code implements the following formula:

    infractions/km = Σ_i (c_i / km_i)

where c_i is the number of collisions in route i, km_i is the number of km driven in route i, and N is the number of routes.

A naive fix would be to divide by the number of routes:

    infractions/km = (1/N) Σ_i (c_i / km_i)
However, this is still not exactly the correct calculation.
Slicing the driven km into route segments changes the result of the infractions/km metric and in the worst case can lead to Simpson's paradox.

What we want to compute is the total number of collisions divided by the total number of km driven, so the correct formula is:

    infractions/km = (Σ_i c_i) / (Σ_i km_i)

In code, this would look something like accumulating the infractions (per type) as well as the driven (!) km in variables outside the route loop, and, after the loop, dividing the counts by the total number of driven km.
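The aggregation described above can be sketched as code (my own illustration of the proposed fix, not the repository's implementation; it reuses the 0.001 km epsilon the current code applies per route, here applied to the total):

```python
def infractions_per_km(routes):
    """routes: list of (infraction_count, km_driven) tuples, one per route."""
    total_infractions = sum(count for count, _ in routes)
    total_km = max(sum(km for _, km in routes), 0.001)  # epsilon guards against zero km
    return total_infractions / total_km

# Two routes: 2 collisions over 1 km, 0 collisions over 3 km -> 0.5 collisions/km overall.
# The per-route average ((2/1 + 0/3) / 2 = 1.0) would overstate it by a factor of two.
print(infractions_per_km([(2, 1.0), (0, 3.0)]))  # 0.5
```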

local variable 'leaderboard_evaluator' referenced before assignment

(screenshot of the error attached)
I get the above error when trying to run run_evaluation.sh. When I wrap client.get_trafficmanager(int(args.trafficManagerPort)) in a try/except block like this, I get the following error:

try:
    self.traffic_manager = self.client.get_trafficmanager(int(args.trafficManagerPort))
except Exception as e:
    print(e)

(screenshot attached)
Can someone tell me what's wrong here?

Metric variance in repeated Leaderboard runs

I've benchmarked an agent on the Leaderboard testing routes (with all scenarios present) over 10 repetitions and compiled the driving/route completion scores in a plot here:

(plot attached)

Error bars are standard deviations around the mean represented by each bar. I've also plotted the maximum and minimum scores on each route as dots on the corresponding bars. I've noticed that there's very high variance in many cases (see routes 18 and 25, for example). Plotting the average number of infractions per route also shows high variance, which seems to indicate that each route plays out very differently; see the plot for route 25 below (the y-axis is the average number of infractions expected per run).

(plot attached)

What causes this variance? The agent runs a neural net under the hood (RGB images as input), which should be deterministic, although I'd expect some variance to creep in if there's simulated noise.

Is there a way to set a seed somewhere to ensure that routes/scenarios play out the same way?

lidar point cloud data has high variance along the x-axis

I created a lidar sensor as:

        {
            'type': 'sensor.lidar.ray_cast',
            'x': 0.7, 'y': -0.4, 'z': 3,
            'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
            'id': 'LIDAR'
        },

But I found the lidar point cloud has high variance along the x-axis. So I printed the maximum and minimum values of the x, y, and z axes at each timestep with:

        point_cloud = []
        for location in input_data['LIDAR'][1]:
            point_cloud.append([location[0], location[1], -location[2]])
        point_cloud = np.array(point_cloud)

        print('timestep:', self.step)
        print('x: ', np.max(point_cloud[:, 0]), ',', np.min(point_cloud[:, 0]))
        print('y: ', np.max(point_cloud[:, 1]), ',', np.min(point_cloud[:, 1]))
        print('z: ', np.max(point_cloud[:, 2]), ',', np.min(point_cloud[:, 2]))

And the result was as follows:

======================= result =======================

timestep | x max          | x min          | y max     | y min      | z max     | z min
       1 | 151.4036       | -6.1035153e-06 | 159.78447 | -156.04468 | 34.19531  | -3.0915384
       2 | 1.0375977e-05  | -193.3553      | 154.51567 | -194.5679  | 17.86496  | -3.0915384
       3 | 151.40314      | -6.1035153e-06 | 159.78447 | -156.04468 | 34.195312 | -3.0911043
       4 | 1.4648437e-05  | -193.35521     | 154.51567 | -194.56285 | 17.86496  | -3.090934
       5 | 151.40294      | -2.746582e-05  | 159.78447 | -156.04468 | 34.19531  | -3.3909142
       6 | 1.15966795e-05 | -193.35521     | 154.51567 | -194.56355 | 17.864958 | -3.0910094
       7 | 151.40315      | -4.5776364e-06 | 159.78447 | -156.04468 | 34.19531  | -3.0911257
       8 | 9.460449e-06   | -193.35526     | 154.51567 | -194.56554 | 17.864964 | -3.0912502
       9 | 151.40341      | -1.2207031e-06 | 159.78447 | -156.04468 | 34.19531  | -3.091369
      10 | 1.4648437e-05  | -193.35529     | 154.51567 | -194.56749 | 17.864962 | -3.0914743

======================================================
As you can see, the y and z values stay in a relatively stable range, while the x values have high variance. For example, at timestep 9 the minimum x value is around 0 (-1.2207031e-06), but at timestep 10 it is -193.35529. This hurts performance, because at every timestep the agent only has access to half of the environment. Does anyone know how I can solve this?
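One plausible explanation (an assumption on my part, not a confirmed diagnosis) is that the lidar completes only half a revolution per simulation tick, so alternating frames cover opposite halves of the scene; setting the sensor's rotation_frequency to match the tick rate would give one full sweep per frame. A hypothetical spec, assuming a 20 Hz simulation (fixed_delta_seconds = 0.05) and that this leaderboard version forwards the attribute to the sensor:

```python
# Hypothetical sensor spec -- adding 'rotation_frequency' to match the 20 Hz tick
# rate is an assumption; verify your leaderboard version accepts this attribute.
TICK_RATE_HZ = 20.0  # i.e. fixed_delta_seconds = 0.05

lidar_spec = {
    'type': 'sensor.lidar.ray_cast',
    'x': 0.7, 'y': -0.4, 'z': 3,
    'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
    'rotation_frequency': TICK_RATE_HZ,  # one full rotation per simulation tick
    'id': 'LIDAR',
}

# Sanity check: rotations completed per tick should be exactly 1.0
print(lidar_spec['rotation_frequency'] / TICK_RATE_HZ)  # 1.0
```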

Where to find Map 12

Hi guys,
I'm asking about the leaderboard 2.0 route files. Currently I'm trying to run these routes with ScenarioRunner (as I'm interested in the pre-crash typology scenarios).

I'm trying to find information on how to get map 12 (I can only find Map 11 information online). Currently I'm using CARLA 0.9.13. Is this included in 0.9.13 release of CARLA, or is it an external download?

question about version

Hi~ I wonder whether there is a strict correspondence between the leaderboard and the CARLA simulator. For example, if I use CARLA 0.9.12, which version of the leaderboard code should I choose?
Looking forward and thanks much for your reply~

Submission pending

Hello, my new submission has been pending for a long time, and it is still taking up my computing time. Is there a problem with the leaderboard?

Error when running the leaderboard evaluator after building Dockerfile.master to use it with Python 2.7 egg

Dear CARLA team,

Here is my issue:
I want to use the CARLA Python API 2.7 .egg.
On my local machine, it works fine with these steps:
1- export PYTHONPATH=$PYTHONPATH:CARLA_0.9.10.1/PythonAPI/carla/dist/carla-0.9.10-py2.7-linux-x86_64.egg
2- agent: ros_agent.py
3- Evaluator: I run the command: python leaderboard_evaluator.py

But when I try to build Dockerfile.master, the problems start.
1- I don't use ros_agent; I use npc_agent for testing
2- I successfully built it, and was able to run the agent inside Docker with python3

Then I modified Docker as the follows:

  • replace ubuntu 16 with -> FROM nvidia/cuda:11.1-cudnn8-devel-ubuntu18.04
  • add required additional libraries

After a successful Docker build and fixing a lot of issues, I am finally stuck on this error:

Traceback (most recent call last):
  File "/workspace/leaderboard/leaderboard/leaderboard_evaluator.py", line 31, in <module>
    from leaderboard.scenarios.scenario_manager import ScenarioManager
  File "/workspace/leaderboard/leaderboard/scenarios/scenario_manager.py", line 25, in <module>
    from leaderboard.autoagents.agent_wrapper import AgentWrapper, AgentError
  File "/workspace/leaderboard/leaderboard/autoagents/agent_wrapper.py", line 20, in <module>
    from leaderboard.envs.sensor_interface import CallBack, OpenDriveMapReader, SpeedometerReader, SensorConfigurationInvalid
  File "/workspace/leaderboard/leaderboard/envs/sensor_interface.py", line 8, in <module>
    import queue
ImportError: No module named queue
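For reference, `queue` is the Python 3 name of Python 2's `Queue` module, which is why sensor_interface.py cannot be imported under the 2.7 egg as-is. One possible workaround (my suggestion, not an official fix) is a compatibility import at the top of the affected file:

```python
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2 fallback: same API under the old module name

# Either way, the rest of the module can use queue.Queue() unchanged
q = queue.Queue()
q.put('sensor frame')
print(q.get())  # prints: sensor frame
```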

Can anyone help?
Thanks

Sensor limits.

It appears to me from reading the code that this if statement is the wrong way around: the qualifier limit is applied when not submitting to the qualifier.

if agent_track in (Track.SENSORS_QUALIFIER, Track.MAP_QUALIFIER):
    sensor_limits = SENSORS_LIMITS
else:
    sensor_limits = QUALIFIER_SENSORS_LIMITS

See here.
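For illustration, here is how I read the intended behavior, with a stand-in enum and made-up limit dictionaries (the real Track enum and limit values live in the leaderboard code):

```python
from enum import Enum

class Track(Enum):  # stand-in for the leaderboard's Track enum
    SENSORS = 1
    MAP = 2
    SENSORS_QUALIFIER = 3
    MAP_QUALIFIER = 4

SENSORS_LIMITS = {'sensor.camera.rgb': 4}            # hypothetical values
QUALIFIER_SENSORS_LIMITS = {'sensor.camera.rgb': 1}  # hypothetical values

def limits_for(agent_track):
    # Qualifier submissions get the qualifier limits -- the inverse of the
    # reported snippet, which applies them to non-qualifier submissions.
    if agent_track in (Track.SENSORS_QUALIFIER, Track.MAP_QUALIFIER):
        return QUALIFIER_SENSORS_LIMITS
    return SENSORS_LIMITS

print(limits_for(Track.SENSORS_QUALIFIER))  # {'sensor.camera.rgb': 1}
```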

Can not push leaderboard test docker to EvalAI platform

Hi, I'm trying to test my CARLA agent on the CARLA leaderboard 1.0 (CARLA Autonomous Driving Challenge 2.0), for which I have obtained permission from the organizers to compete. I have installed the latest version of the EvalAI package (1.3.16) and successfully completed the "set token" operation.

However, when attempting to use the "evalai push" command to submit the required Docker image to the leaderboard, I encountered an error message "Error: None". Additionally, I tried other commands such as "evalai challenges," but received a "401 Client Error: Unauthorized for url: https://eval.ai/api/challenges/challenge/all" error.

I have attempted these operations on both a Linux machine and my personal Mac, and encountered the same issues on both platforms. Upon searching the EvalAI forum, I discovered that other users have encountered similar problems, which can be found here: https://evalai-forum.cloudcv.org/t/evalai-push-error-or-am-i-just-a-noob-with-docker/1773

I tried to send an email to the EvalAI team but got no response, so I wonder if you could give me any advice. I would greatly appreciate it!

My commands for pushing the Docker image to the platform:

pip install evalai
evalai set_token {MY_EVALAI_TOKEN}
evalai push {MY_DOCKER_IMAGE} --phase carla-leaderboard-10-sensors-2098

Incompatibility Issue between CARLA Leaderboard 2.0 and CARLA 0.9.14 API

I have encountered an incompatibility between CARLA Leaderboard 2.0 and CARLA 0.9.14 API. The issue arises from the use of a variable named spectator_as_ego in the class carla.WorldSettings within the CARLA Leaderboard 2.0 codebase. However, this variable does not exist in the CARLA 0.9.14 API, leading to compatibility problems.

The error was caused by

spectator_as_ego = False

Boost.Python.ArgumentError: Python argument types in WorldSettings.__init__(WorldSettings) did not match C++ signature:

test_run.sh
/home/yeda/thesis/leaderboard/leaderboard/leaderboard_evaluator.py:21: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
Traceback (most recent call last):
File "/home/yeda/thesis/leaderboard/leaderboard/leaderboard_evaluator.py", line 478, in
main()
File "/home/yeda/thesis/leaderboard/leaderboard/leaderboard_evaluator.py", line 467, in main
leaderboard_evaluator = LeaderboardEvaluator(arguments, statistics_manager)
File "/home/yeda/thesis/leaderboard/leaderboard/leaderboard_evaluator.py", line 79, in __init__
self.client, self.client_timeout, self.traffic_manager = self._setup_simulation(args)
File "/home/yeda/thesis/leaderboard/leaderboard/leaderboard_evaluator.py", line 178, in _setup_simulation
spectator_as_ego = False
Boost.Python.ArgumentError: Python argument types in
WorldSettings.__init__(WorldSettings)
did not match C++ signature:
__init__(_object*, bool synchronous_mode=False, bool no_rendering_mode=False, double fixed_delta_seconds=0.0, bool substepping=True, double max_substep_delta_time=0.01, int max_substeps=10, float max_culling_distance=0.0, bool deterministic_ragdolls=False, float tile_stream_distance=3000.0, float actor_active_distance=2000.0)
__init__(_object*)

I am using an Ubuntu virtual machine to run the client, and Windows to run the CARLA server.
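Pending an official fix, one workaround is to stop constructing carla.WorldSettings with keyword arguments and instead mutate the settings object the server hands back, guarding the newer attribute. This is only a suggested pattern, not the leaderboard's fix; the stub class below stands in for a 0.9.14-style carla.WorldSettings so the sketch is self-contained:

```python
class _Settings:
    """Stub standing in for carla.WorldSettings on a 0.9.14 install,
    which has no `spectator_as_ego` attribute."""
    synchronous_mode = False
    fixed_delta_seconds = None

def configure_settings(settings, sync=True, delta=0.05, spectator_as_ego=False):
    """Mutate an existing settings object instead of re-constructing it,
    and only touch `spectator_as_ego` when the installed API exposes it."""
    settings.synchronous_mode = sync
    settings.fixed_delta_seconds = delta
    if hasattr(settings, 'spectator_as_ego'):  # only present on newer CARLA builds
        settings.spectator_as_ego = spectator_as_ego
    return settings

configured = configure_settings(_Settings())
```

In the real evaluator you would pass the result of `world.get_settings()` into such a helper and then call `world.apply_settings(...)`, so the same code runs on both API versions.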

How to transfer 'input_data'?

I want to run my agent on another machine, which means that I have to transfer 'input_data' over a socket. However, it's too big and I haven't found a working method so far.
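One common pattern for shipping each tick's sensor dictionary over a socket is to pickle it, compress it, and length-prefix the payload so the receiver knows where each frame ends. A minimal sketch (the helper names are made up; for camera data, encoding frames as JPEG/PNG before pickling shrinks them far more than zlib alone):

```python
import pickle
import struct
import zlib

def pack_frame(input_data):
    """Serialize one `input_data` dict into a length-prefixed, compressed frame."""
    payload = zlib.compress(pickle.dumps(input_data, protocol=pickle.HIGHEST_PROTOCOL))
    return struct.pack('!I', len(payload)) + payload

def unpack_frame(frame):
    """Inverse of pack_frame: strip the 4-byte length prefix and decode."""
    (length,) = struct.unpack('!I', frame[:4])
    return pickle.loads(zlib.decompress(frame[4:4 + length]))
```

On the sending side use `sock.sendall(pack_frame(data))`; on the receiving side, first read exactly 4 bytes for the length, then loop on `recv()` until that many payload bytes have arrived before calling `unpack_frame`.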

No module named 'leaderboard.autoagents'

Hi I have installed CARLA with Autoware Universe using this tutorial:
https://www.youtube.com/watch?v=dxwwNacez7o

I have cloned the leaderboard and installed its requirements.txt as this tutorial recommends: https://leaderboard.carla.org/get_started_v1/

I am using python3, eggfile 3.7, ubuntu20.04, ROS2 - Foxy, source installation

When I try to launch the file op_bridge_ros2.py
I get the error:

Traceback (most recent call last):
  File "op_bridge_ros2.py", line 17, in <module>
    from op_bridge.agent_wrapper import AgentWrapper
  File "/home/rota2030/hatem-repos/op_bridge/op_bridge/op_bridge.py", line 15, in <module>
    from leaderboard.autoagents.agent_wrapper import  AgentWrapper
ModuleNotFoundError: No module named 'leaderboard.autoagents'

This is very strange because I was able to load the scenario runner modules, but the leaderboard modules, no matter what I try they don't get recognized. I have already assigned the PythonPath in my bash file, exported the right environment variables for the LEADERBOARD_ROOT. But no success.
Could you please assist me?

Snippet of my attempts to load this module is below:

# openplanner bridge
# Hatem Darweesh January 2022

#import imp
import importlib
import math
import signal
import os
import sys
import importlib
import time
import traceback
import carla
from srunner.scenariomanager.carla_data_provider import *
#from op_bridge.leaderboard.autoagents.agent_wrapper import AgentWrapper
from op_bridge.agent_wrapper import AgentWrapper

"""
#Line below didn't work
#sys.path.append(os.path.join(os.path.abspath(os.path.dirname(__file__), '..','agent_wrapper.py'))
#import AgentWrapper

#import sys
#sys.path.append("/home/rota2030/hatem-repos/leaderboard/leaderboard/autoagents/")
#import AgentWrapper

#imp.load_source(AgentWrapper, 'home/rota2030/hatem-repos/leaderboard/leaderboard/autoagents/')


#LINE BELOW WORKED WITH FULL PATH
from importlib.machinery import SourceFileLoader

auto_agents = SourceFileLoader("agent_wrapper", "/home/rota2030/hatem-repos/leaderboard/leaderboard/autoagents/agent_wrapper.py").load_module()
auto_agents.AgentWrapper()
"""

My bash file is below:

#!/bin/bash
export PYTHONPATH=$PYTHONPATH:/opt/carla-simulator/PythonAPI/carla/dist/carla-0.9.13-py3.7-linux-x86_64.egg
export PYTHONPATH=$PYTHONPATH:/opt/carla-simulator/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:/opt/carla-simulator/PythonAPI
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/util
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/agents
#export PYTHONPATH=$PYTHONPATH:${LEADERBOARD_ROOT}/autoagents
#export PYTHONPATH=$PYTHONPATH:/home/user/hatem-repos/scenario_runner
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}




export CARLA_ROOT=/opt/carla-simulator
export LEADERBOARD_ROOT=/home/rota2030/hatem-repos/leaderboard
export TEAM_CODE_ROOT=/home/user/hatem-repos/op_agent
export SCENARIO_RUNNER_ROOT=/home/user/hatem-repos/scenario_runner

#source /opt/ros/galactic/setup.bash 
#I have uninstalled ros galactic and installed foxy. Because Leaderboard modules just work with ros2 foxy according to https://leaderboard.carla.org/get_started/
source ~/ros2_foxy/install/local_setup.bash
source /home/rota2030/hatem-repos/carla-autoware-universe/autoware.universe.openplanner/install/setup.bash



I checked similar issues: #77, #84

but they did not work for me
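One thing that stands out in the bash file: the PYTHONPATH lines reference ${CARLA_ROOT}, ${SCENARIO_RUNNER_ROOT} and ${LEADERBOARD_ROOT} before those variables are exported, so at that point they expand to empty strings and the leaderboard checkout never lands on PYTHONPATH. A reordered sketch (paths copied from the file above; adjust them to your machine):

```shell
#!/bin/bash
# Define the roots FIRST so the PYTHONPATH lines below can expand them.
export CARLA_ROOT=/opt/carla-simulator
export LEADERBOARD_ROOT=/home/rota2030/hatem-repos/leaderboard
export SCENARIO_RUNNER_ROOT=/home/user/hatem-repos/scenario_runner

export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.13-py3.7-linux-x86_64.egg":${PYTHONPATH}
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":${PYTHONPATH}
```

Also make sure the script is `source`d (not executed) in the same shell that launches op_bridge_ros2.py, otherwise the exports never reach the Python process.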

Memory leak issue when loading a new world

I'm trying to create a custom RL environment for CARLA using leaderboard_evaluator.py as a template, but I'm running into some issues when trying to reset the custom environment (after an episode is done). The functions that load world/scenario, and cleanup after a scenario's end closely matches what's done in leaderboard_evaluator.py (e.g. the load_and_wait_for_world and cleanup functions), but there's a memory leak somewhere that happens every time I reset the environment.

Using a memory profiler shows that each time the environment resets, the carla.Client takes up more and more memory. This eventually leads to an out-of-memory error which kills the process. Is there a clean up method I'm missing/some common pitfall when resetting environments that I should resolve to stop this from happening?

I can provide code if needed, but I wanted to first check whether this is a known issue.
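One common pitfall (not a confirmed diagnosis of this leak) is keeping Python-side references to spawned actors and sensors across resets, so the client accumulates proxies that are never released. A generic registry pattern for episodic teardown, with hypothetical names and a stub-friendly shape; in the real loop you would additionally call CarlaDataProvider.cleanup() and tick the world once after the destroys:

```python
class ActorRegistry:
    """Track everything spawned during an episode so reset() can tear it all down."""

    def __init__(self):
        self._actors = []

    def register(self, actor):
        self._actors.append(actor)
        return actor

    def cleanup(self):
        # Destroy in reverse spawn order; sensors must stop listening first.
        for actor in reversed(self._actors):
            try:
                if hasattr(actor, 'stop'):
                    actor.stop()
                actor.destroy()
            except RuntimeError:
                pass  # actor already gone on the server side
        self._actors.clear()  # drop the Python references too
```

Calling `cleanup()` on every reset, and avoiding caching the old `carla.World` object after `load_world`, is worth trying before digging deeper with a profiler.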

Carla server launch args on the evaluation server

Hi, I would like to ask how exactly CARLA is launched on the AWS evaluation server, specifically which arguments are passed. It seems that on 0.9.10.1 with the -opengl flag there is a rendering issue (carla/issues/3377). Many black reflections are observable, especially when the ground is wet. This makes it harder than before for agents to generalize to new weathers.

On my machine it looks like this: [screenshot of the black-reflection artifacts omitted]

carla/issues/3377 says it's just an OpenGL problem and Vulkan should not be affected. But I was not able to run the CARLA server on an AWS g3 instance with the -vulkan flag. I use the .tar.gz release file and run it through ./CarlaUE4.sh -vulkan. It would be helpful to specify how CARLA is launched for the leaderboard/challenge so that the rendering is at least consistent.

Thanks!

Submission stuck with status "Submitted"

Hi, I have uploaded my docker image to the leaderboard challenge on the EvalAI platform, but the submission has been stuck with the status "Submitted" for the last 4 days. Can anyone please confirm whether it is normal for the evaluation to take this long to start, or suggest something else I can try to make it work?

Thanks

Ego Vehicle Info and Status Messages ROS

Hi,

I am using the ROS branch of the leaderboard and everything works fine so far (connecting to ROS and running through scenarios), but I do not receive the Info and Status topics of the ego vehicle. Do I have to modify the ROS agent, or is there a simple solution like a specific sensor?
Thanks for your answer!

How to convert scenarios in leaderboard 1.0 in the leaderboard 2.0

How can scenarios from leaderboard 1.0 be converted to leaderboard 2.0? Leaderboard 2.0 uses a config file that combines scenarios and routes, e.g. leaderboard/data/routes_validation.xml. However, leaderboard 1.0 starts a scenario with a route config file and a scenario config file separately. The difficulty is that these two types of scenario files are different. How can the 1.0 files be converted to the 2.0 version?
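For reference, a leaderboard 2.0 route file embeds the scenario triggers directly inside each route definition, so converting a 1.0 pair essentially means merging the scenario file's trigger points into the route's XML. An illustrative fragment of the 2.0 shape (the coordinates and the town are placeholders, not taken from a real route):

```xml
<routes>
  <route id="0" town="Town12">
    <waypoints>
      <position x="100.0" y="200.0" z="0.0"/>
      <position x="350.0" y="200.0" z="0.0"/>
    </waypoints>
    <scenarios>
      <scenario name="ControlLoss_1" type="ControlLoss">
        <trigger_point x="150.0" y="200.0" z="0.0" yaw="0.0"/>
      </scenario>
    </scenarios>
  </route>
</routes>
```

Comparing this against leaderboard/data/routes_validation.xml in the leaderboard-2.0 branch is the safest way to confirm the exact attribute names each scenario type expects.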

Bug in SpeedometerReader

Is the following intended behavior? The case when transform is not assigned (after maximum attempts) doesn't seem to be handled properly.

def __call__(self):
    """ We convert the vehicle physics information into a convenient dictionary """
    # protect this access against timeout
    attempts = 0
    while attempts < self.MAX_CONNECTION_ATTEMPTS:
        try:
            velocity = self._vehicle.get_velocity()
            transform = self._vehicle.get_transform()
            break
        except Exception:
            attempts += 1
            time.sleep(0.2)
            continue
    return {'speed': self._get_forward_speed(transform=transform, velocity=velocity)}
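It does look like a bug: if every attempt fails, `velocity` and `transform` are never bound, so the final return line raises an UnboundLocalError rather than reporting the connection problem. A defensive rewrite of the retry loop as a standalone sketch (generic vehicle interface, not the actual leaderboard code):

```python
import time

def read_vehicle_state(vehicle, max_attempts=10, delay=0.2):
    """Retry velocity/transform reads; fail loudly instead of with UnboundLocalError."""
    for _ in range(max_attempts):
        try:
            return vehicle.get_velocity(), vehicle.get_transform()
        except Exception:
            time.sleep(delay)
    raise RuntimeError('vehicle state unavailable after %d attempts' % max_attempts)
```

Raising explicitly (or returning a sentinel speed of 0.0, if the caller prefers) at least makes the failure mode deliberate.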

OpenDRIVE data never retrieved when it is the only sensor.

I wanted to check out the opendrive_map sensor but never got any input_data; I was deliberately not setting up any other sensors for this test.

I tracked this bug down to this line of code that always aborts the data retrieval:

if self._opendrive_tag and self._opendrive_tag not in data_dict.keys() \
        and len(self._sensors_objects.keys()) == len(data_dict.keys()) + 1:
    break

With only an opendrive_map sensor this is equal to 1 == 0+1 and always breaks -> returns empty dict.

This was also briefly noted in #50.
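Agreed: with only the OpenDRIVE pseudo-sensor registered, the condition evaluates to 1 == 0 + 1 on the very first pass, so the wait aborts before the single OpenDRIVE frame ever arrives. A corrected predicate would also require that the one-shot frame has actually been delivered. A sketch with hypothetical names (not the upstream patch):

```python
def waiting_done(sensor_ids, frame_data, opendrive_tag, opendrive_seen):
    """Decide whether the data-collection loop may stop waiting on the queue."""
    missing = set(sensor_ids) - set(frame_data)
    if not missing:
        return True  # every sensor reported in this frame
    # The OpenDRIVE pseudo-sensor reports exactly once at startup; afterwards
    # it is the only acceptable absentee, but only once that single frame
    # has actually been delivered.
    return missing == {opendrive_tag} and opendrive_seen
```

With this shape, an agent whose only sensor is opendrive_map blocks until the map string arrives instead of returning an empty dict.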

Leaderboard and Carla RSS

Good afternoon,

I have been using CARLA 0.9.13 and the leaderboard for a while. For an implementation I switched to the RSS build of CARLA to use that sensor. Since then the leaderboard cannot launch, and I get this error:

========= Preparing RouteScenario_0 (repetition 0) =========

Loading the world

The scenario could not be loaded:

Traceback (most recent call last):
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 227, in _load_and_run_scenario
self._load_and_wait_for_world(args, config.town)
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 171, in _load_and_wait_for_world
self.world = self.client.load_world(town)
RuntimeError: map not found

Registering the route statistics
Traceback (most recent call last):
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 227, in _load_and_run_scenario
self._load_and_wait_for_world(args, config.town)
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 171, in _load_and_wait_for_world
self.world = self.client.load_world(town)
RuntimeError: map not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 441, in <module>
main()
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 435, in main
leaderboard_evaluator.run(arguments)
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 369, in run
crashed = self._load_and_run_scenario(args, config)
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 241, in _load_and_run_scenario
self._cleanup()
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/leaderboard/leaderboard_evaluator.py", line 155, in _cleanup
self.world.tick() # TODO: Make sure all scenario actors have been destroyed
AttributeError: 'LeaderboardEvaluator' object has no attribute 'world'

I tried scenario runner to see if it works and it launches but crashes as soon as a second actor starts moving with this error

leRunningRedLight_1
WARNING: Actor model vehicle.* not available. Using instead vehicle.tesla.model3
ScenarioManager: Running scenario OppositeVehicleJunction
Traceback (most recent call last):
File "scenario_runner.py", line 415, in _load_and_run_scenario
self.manager.run_scenario()
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/scenario_runner/srunner/scenariomanager/scenario_manager.py", line 136, in run_scenario
self._tick_scenario(timestamp)
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/scenario_runner/srunner/scenariomanager/scenario_manager.py", line 175, in _tick_scenario
self.scenario_tree.tick_once()
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/behaviour.py", line 158, in tick_once
for unused in self.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 578, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 494, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 494, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 578, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 494, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/composites.py", line 578, in tick
for node in child.tick():
File "/home/avner/.local/lib/python3.8/site-packages/py_trees/behaviour.py", line 247, in tick
self.initialise()
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/leaderboard/scenario_runner/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py", line 2059, in initialise
self._agent = ConstantVelocityAgent(
File "/home/avner/CARLA_AUTOWARE_project/carla_RSS/PythonAPI/carla/agents/navigation/constant_velocity_agent.py", line 35, in __init__
super().__init__(vehicle, target_speed, opt_dict=opt_dict, map_inst=map_inst, grp_inst=grp_inst)
TypeError: __init__() got an unexpected keyword argument 'map_inst'
__init__() got an unexpected keyword argument 'map_inst'
Destroying ego vehicle 2309
ERROR: failed to destroy actor 2309 : unable to destroy actor: not found
Preparing scenario: OppositeVehicleRunningRedLight_2
WARNING: Actor model vehicle.* not available. Using instead vehicle.tesla.model3
ScenarioManager: Running scenario OppositeVehicleJunction

Any idea what is happening? I downloaded the Carla RSS zip from the original repo, and everything else seems to work correctly.

Help would be very much appreciated!! Thank you :)

Testing Routes and Autoware

Hello,

First of all thank you very much ! I was using the previous version until now, and I just switched to leaderboard 2, it is amazing.

I have a quick question: where can I find the route testing file? In the new branch there is only the training one. Is it available?

My main goal and the reason why I upgraded carla and leaderboard is to switch to autoware universe and ros 2 galactic. I have a few questions about this :

First of all, is it possible at all to run the leaderboard with Autoware Universe? That is a good thing to know before I spend tens of hours on it. I saw that ROS2 Galactic wasn't listed in the ROS agent compatibility list. Is it possible to run it anyway?

Any tips to make this bridge work?

Thank you very much !

Is it possible to support a multi-agent environment with the leaderboard?

Hello!
Thank you all for the great project!
I am currently trying to use the leaderboard for my PhD thesis.

Carla supports a multi-agent environment.
I would like to ask you how to use multi-agent vehicles in combination with the leaderboard.
For this I need to change the role_names in carla, as mentioned here.
I am not sure where exactly to do this setting.

Best regards,
Shawan

The mode of leaderboard

I know that the leaderboard runs in synchronous mode. However, I wonder whether it can run in asynchronous mode, since I think the computation time of an agent should be taken into consideration.

What units are the gnss measurements in?

I've updated CARLA from 0.9.9.4 to 0.9.10, and I've noticed that the units of the gnss measurements have changed. Can you please clarify what the units are?

I've done a test in the second scenario of the training routes and got the following first readings:
CARLA_version, latitude, longitude, altitude
CARLA 0.9.9.4, 48.999461259289234, 8.001663866110341, 0.027095741455013922
CARLA 0.9.10, -0.0005387375004914929, 0.0010921803470367757, 0.03183420454388397

The values received in CARLA 0.9.9.4 are in degrees, and are correct.
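The readings are still in degrees; what appears to have changed in 0.9.10 is the map's geo-reference, so coordinates are now relative to a (0, 0) origin rather than the Karlsruhe-area origin used before, hence the tiny values. To recover local metres from degrees, an equirectangular projection is usually accurate enough at town scale. A sketch (the spherical-Earth radius and the axis/sign conventions are assumptions; verify them against your map, since CARLA's y axis may be flipped relative to latitude):

```python
import math

EARTH_RADIUS = 6371000.0  # metres, spherical approximation

def gnss_to_xy(lat, lon, lat_ref=0.0, lon_ref=0.0):
    """Project GNSS degrees to a local metric frame around (lat_ref, lon_ref)."""
    x = EARTH_RADIUS * math.radians(lon - lon_ref) * math.cos(math.radians(lat_ref))
    y = EARTH_RADIUS * math.radians(lat - lat_ref)
    return x, y
```

As a sanity check, one degree of latitude maps to roughly 111.2 km under this model, which matches the scale of the 0.9.10 readings above being sub-millidegree offsets from the origin.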

DVS

Does it support DVS cameras?

run_evaluation.sh Issue

Hi,

I am using conda environment with Python 3.8, ubuntu 20.
CARLA is 0.9.11 which I built locally.

I am encountering the following issue:

Traceback (most recent call last):
File "/mnt/e3641809-fbba-49ea-86ed-f89021679b99/home/varun/carla/PythonAPI/leaderboard/leaderboard/leaderboard_evaluator.py", line 343, in _load_and_run_scenario
self.manager.run_scenario()
File "/mnt/e3641809-fbba-49ea-86ed-f89021679b99/home/varun/carla/PythonAPI/leaderboard/leaderboard/scenarios/scenario_manager.py", line 140, in run_scenario
self._tick_scenario(timestamp)
File "/mnt/e3641809-fbba-49ea-86ed-f89021679b99/home/varun/carla/PythonAPI/leaderboard/leaderboard/scenarios/scenario_manager.py", line 154, in _tick_scenario
self._watchdog.pause()
AttributeError: 'Watchdog' object has no attribute 'pause'


How to use MAP track in leaderboard

In leaderboard website, there are two tracks -- one is SENSORS track, the other is MAP track. In this repository, the SENSORS track is set as default in /leaderboard/autoagents/autonomous_agent.py

class AutonomousAgent(object):
    def __init__(self, path_to_conf_file):
        self.track = Track.SENSORS
        self._global_plan = None
        self._global_plan_world_coord = None

When I set self.track as Track.MAP in my agent by

class HDMapAgent(BaseAgent):
    def setup(self, path_to_conf_file):
        super().setup(path_to_conf_file)
        self.track = Track.MAP
        self.converter = Converter()

and after I added an opendrive_map sensor with

   def sensors(self):
        result = super().sensors()
        result.append({
            'type': 'sensor.opendrive_map',
            'reading_frequency': 20,
            'id': 'opendrive_map'
            })

the following error occurred:

Could not setup required agent due to 'NoneType' object is not iterable.

I don't know how to set up an agent using the MAP track. Is there any reference for that?
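A "'NoneType' object is not iterable" during agent setup often means something in the sensors()/setup() chain returned None, for example a parent sensors() without a return statement. A minimal self-contained shape that works; the class names and the camera entry are illustrative, not the leaderboard's actual base classes:

```python
from enum import Enum

class Track(Enum):
    SENSORS = "SENSORS"
    MAP = "MAP"

class BaseAgent:
    def setup(self, path_to_conf_file):
        self.track = Track.SENSORS

    def sensors(self):
        # Forgetting this `return` makes super().sensors() yield None,
        # and the child's .append() / the sensor validator then fails.
        return [{'type': 'sensor.camera.rgb', 'id': 'rgb'}]

class HDMapAgent(BaseAgent):
    def setup(self, path_to_conf_file):
        super().setup(path_to_conf_file)
        self.track = Track.MAP

    def sensors(self):
        result = super().sensors() or []  # guard against a None parent result
        result.append({'type': 'sensor.opendrive_map',
                       'reading_frequency': 20,
                       'id': 'opendrive_map'})
        return result
```

Note also that, if I recall the evaluator flags correctly, the MAP track has to be selected on the command line as well (--track MAP), or the sensor validation rejects the opendrive_map entry.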

No module named 'srunner'

Hello,

I followed the instructions from this page https://leaderboard.carla.org/get_started/ (using same carla version)
After defining my environment variables, I try running ./scripts/run_evaluation.sh and I get a ModuleNotFoundError: No module named 'srunner'. I checked and my PYTHONPATH contains the carla/leaderboard/scenario_runner paths, as described in the tutorial, and I am in the correct conda env. Any idea as to what is happening? My OS is Ubuntu 16.04.
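For reference, `srunner` is a package that lives directly under the scenario_runner checkout, so the checkout root itself (not a subdirectory) must be on PYTHONPATH, and the export must happen in the same shell that launches the script. A sketch (the clone path is an assumption; adjust it):

```shell
#!/bin/bash
# Assumption: scenario_runner was cloned here; adjust to your path.
export SCENARIO_RUNNER_ROOT=/home/user/scenario_runner
# `srunner` lives directly under SCENARIO_RUNNER_ROOT, so export the root:
export PYTHONPATH="${SCENARIO_RUNNER_ROOT}":${PYTHONPATH}
# Sanity check from the same shell (uncomment to run):
# python -c "import srunner; print(srunner.__file__)"
```

If the import check works interactively but run_evaluation.sh still fails, the script is probably spawning Python in an environment that doesn't inherit the export (e.g. via sudo or a different conda env).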

Using `Track.Map` on setup seems to crash leaderboard.

I'm trying to use opendrive_map sensors within the leaderboard by setting up:

class Track(Enum):

    """
    This enum represents the different tracks of the CARLA AD leaderboard.
    """

    SENSORS = "SENSORS"
    MAP = "MAP"

class DoraAgent(AutonomousAgent):
    def setup(self, path_to_conf_file):
        """
        Setup the agent parameters
        """
        self.track = Track.MAP

    def sensors(self):  # pylint: disable=no-self-use
        """
        Define the sensor suite required by the agent
        return: a list containing the required sensors in the following format:
        [
        {'type': 'sensor.camera.rgb', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
        width': 300, 'height': 200, 'fov': 100, 'id': 'Left'},
        {'type': 'sensor.camera.rgb', 'x': 0.7, 'y': 0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
        width': 300, 'height': 200, 'fov': 100, 'id': 'Right'},
        {'type': 'sensor.lidar.ray_cast', 'x': 0.7, 'y': 0.0, 'z': 1.60, 'yaw': 0.0, 'pitch': 0.0, 'roll': 0.0,
        id': 'LIDAR'}
        ]
        """
        sensors = []

        return sensors

But it seems that many scenarios are skipped, and the ones that are not crash with some obscure Track issue when tested on leaderboard/data/routes_training and leaderboard/data/routes_devtest.

/home/peter/Documents/CONTRIB/dependencies/leaderboard/leaderboard/leaderboard_evaluator.py:87: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if LooseVersion(dist.version) < LooseVersion('0.9.13'):

========= Preparing RouteScenario_0 (repetition 0) =========
> Loading the world
Skipping scenario 'ParkingExit_1' due to setup error: 'ParkingExit'
Skipping scenario 'SignalizedJunctionLeftTurn_1' due to setup error: 'SignalizedJunctionLeftTurn'
Skipping scenario 'NonSignalizedJunctionRightTurn_1' due to setup error: 'NonSignalizedJunctionRightTurn'
Skipping scenario 'Accident_1' due to setup error: 'Accident'
Skipping scenario 'NonSignalizedJunctionLeftTurn_1' due to setup error: 'NonSignalizedJunctionLeftTurn'
Skipping scenario 'ControlLoss_1' due to setup error: 'ControlLoss'
Skipping scenario 'ConstructionObstacleTwoWays_1' due to setup error: 'ConstructionObstacleTwoWays'
Skipping scenario 'OppositeVehicleTakingPriority_1' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'NonSignalizedJunctionLeftTurn_2' due to setup error: 'NonSignalizedJunctionLeftTurn'
Skipping scenario 'NonSignalizedJunctionLeftTurn_3' due to setup error: 'NonSignalizedJunctionLeftTurn'
Skipping scenario 'AccidentTwoWays_2' due to setup error: 'AccidentTwoWays'
Skipping scenario 'OppositeVehicleTakingPriority_2' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'OppositeVehicleTakingPriority_3' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'ControlLoss_2' due to setup error: 'ControlLoss'
Skipping scenario 'ControlLoss_3' due to setup error: 'ControlLoss'
Skipping scenario 'ControlLoss_4' due to setup error: 'ControlLoss'
Skipping scenario 'HardBreakRoute_2' due to setup error: 'HardBreakRoute'
Skipping scenario 'OppositeVehicleRunningRedLight_1' due to setup error: 'OppositeVehicleRunningRedLight'
Skipping scenario 'ConstructionObstacle_2' due to setup error: 'ConstructionObstacle'
Skipping scenario 'VehicleOpensDoorTwoWays_1' due to setup error: 'VehicleOpensDoorTwoWays'
Skipping scenario 'OppositeVehicleRunningRedLight_2' due to setup error: 'OppositeVehicleRunningRedLight'
Skipping scenario 'HazardAtSideLane_2' due to setup error: 'HazardAtSideLane'
Skipping scenario 'VehicleTurningRoutePedestrian_1' due to setup error: 'VehicleTurningRoutePedestrian'
Skipping scenario 'ParkedObstacleTwoWays_1' due to setup error: 'ParkedObstacleTwoWays'
Skipping scenario 'DynamicObjectCrossing_1' due to setup error: 'DynamicObjectCrossing'
Skipping scenario 'OppositeVehicleTakingPriority_4' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'ConstructionObstacleTwoWays_2' due to setup error: 'ConstructionObstacleTwoWays'
Skipping scenario 'BlockedIntersection_1' due to setup error: 'BlockedIntersection'
Skipping scenario 'HardBreakRoute_3' due to setup error: 'HardBreakRoute'
Skipping scenario 'HardBreakRoute_4' due to setup error: 'HardBreakRoute'
Skipping scenario 'ControlLoss_6' due to setup error: 'ControlLoss'
Skipping scenario 'DynamicObjectCrossing_2' due to setup error: 'DynamicObjectCrossing'
Skipping scenario 'BlockedIntersection_2' due to setup error: 'BlockedIntersection'
Skipping scenario 'InvadingTurn_1' due to setup error: 'InvadingTurn'
Skipping scenario 'OppositeVehicleTakingPriority_5' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'ConstructionObstacleTwoWays_3' due to setup error: 'ConstructionObstacleTwoWays'
Skipping scenario 'BlockedIntersection_3' due to setup error: 'BlockedIntersection'
Skipping scenario 'AccidentTwoWays_2' due to setup error: 'AccidentTwoWays'
Skipping scenario 'NonSignalizedJunctionRightTurn_3' due to setup error: 'NonSignalizedJunctionRightTurn'
Skipping scenario 'VehicleTurningRoutePedestrian_2' due to setup error: 'VehicleTurningRoutePedestrian'
Skipping scenario 'DynamicObjectCrossing_3' due to setup error: 'DynamicObjectCrossing'
Skipping scenario 'VehicleTurningRoute_1' due to setup error: 'VehicleTurningRoute'
Skipping scenario 'ConstructionObstacle_2' due to setup error: 'ConstructionObstacle'
Skipping scenario 'ControlLoss_7' due to setup error: 'ControlLoss'
Skipping scenario 'HardBreakRoute_4' due to setup error: 'HardBreakRoute'
Skipping scenario 'OppositeVehicleRunningRedLight_3' due to setup error: 'OppositeVehicleRunningRedLight'
Skipping scenario 'PriorityAtJunction_1' due to setup error: 'PriorityAtJunction'
Skipping scenario 'HazardAtSideLane_3' due to setup error: 'HazardAtSideLane'
Skipping scenario 'HazardAtSideLane_4' due to setup error: 'HazardAtSideLane'
Skipping scenario 'HardBreakRoute_5' due to setup error: 'HardBreakRoute'
Skipping scenario 'ControlLoss_8' due to setup error: 'ControlLoss'
Skipping scenario 'VehicleTurningRoute_2' due to setup error: 'VehicleTurningRoute'
Skipping scenario 'DynamicObjectCrossing_4' due to setup error: 'DynamicObjectCrossing'
Skipping scenario 'InvadingTurn_2' due to setup error: 'InvadingTurn'
Skipping scenario 'NonSignalizedJunctionRightTurn_4' due to setup error: 'NonSignalizedJunctionRightTurn'
Skipping scenario 'InvadingTurn_3' due to setup error: 'InvadingTurn'
Skipping scenario 'NonSignalizedJunctionLeftTurn_4' due to setup error: 'NonSignalizedJunctionLeftTurn'
Skipping scenario 'ControlLoss_9' due to setup error: 'ControlLoss'
Skipping scenario 'ParkedObstacleTwoWays_2' due to setup error: 'ParkedObstacleTwoWays'
Skipping scenario 'DynamicObjectCrossing_5' due to setup error: 'DynamicObjectCrossing'
Skipping scenario 'InvadingTurn_4' due to setup error: 'InvadingTurn'
Skipping scenario 'OppositeVehicleTakingPriority_6' due to setup error: 'OppositeVehicleTakingPriority'
Skipping scenario 'InvadingTurn_5' due to setup error: 'InvadingTurn'
Skipping scenario 'HardBreakRoute_6' due to setup error: 'HardBreakRoute'
Skipping scenario 'HardBreakRoute_7' due to setup error: 'HardBreakRoute'
Skipping scenario 'BlockedIntersection_4' due to setup error: 'BlockedIntersection'
Skipping scenario 'ConstructionObstacleTwoWays_4' due to setup error: 'ConstructionObstacleTwoWays'
Skipping scenario 'HardBreakRoute_8' due to setup error: 'HardBreakRoute'
> Setting up the agent

The sensor's configuration used is invalid:

Traceback (most recent call last):
  File "/home/peter/Documents/CONTRIB/dependencies/leaderboard/leaderboard/leaderboard_evaluator.py", line 268, in _load_and_run_scenario
    validate_sensor_configuration(self.sensors, track, args.track)
  File "/home/peter/Documents/CONTRIB/dependencies/leaderboard/leaderboard/autoagents/agent_wrapper.py", line 63, in validate_sensor_configuration
    raise SensorConfigurationInvalid("You are submitting to the wrong track [{}]!".format(Track(selected_track)))
leaderboard.envs.sensor_interface.SensorConfigurationInvalid: You are submitting to the wrong track [Track.SENSORS]!

I am using:
Carla leaderboard 2.0 version
Leaderboard on leaderboard-2.0 branch
Scenario Runner on leaderboard-2.0 branch
Python 3.7.15
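Two things could be going on here. First, the error message prints the track selected on the command line, so leaderboard_evaluator.py likely also needs --track MAP. Second, re-declaring the Track enum in the agent file creates a different type from the leaderboard's own Track; enum members never compare equal across types even with identical values, so importing the real one (from leaderboard.autoagents.autonomous_agent import Track) is safer. The identity pitfall in isolation, with stand-in enum names:

```python
from enum import Enum

class LeaderboardTrack(Enum):  # stands in for the leaderboard's Track enum
    SENSORS = "SENSORS"
    MAP = "MAP"

class LocalTrack(Enum):        # a copy re-declared inside the agent file
    SENSORS = "SENSORS"
    MAP = "MAP"
```

Because `LocalTrack.MAP != LeaderboardTrack.MAP`, any check the leaderboard performs against its own enum will see the locally declared value as "wrong track" even though the strings match.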
