
praveen-palanisamy / macad-gym

309 stars | 10 watchers | 70 forks | 2.07 MB

Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019:

Home Page: https://arxiv.org/abs/1911.04175

License: MIT License

Shell 0.79% Python 91.04% C++ 8.17%
multi-agent-reinforcement-learning autonomous-driving multi-agent-autonomous-driving carla-gym macad-gym carla carla-rl carla-reinforcement-learning gym-environments deep-reinforcement-learning carla-simulator carla-driving-simulator

macad-gym's Introduction

MACAD-Gym is a training platform for Multi-Agent Connected Autonomous Driving (MACAD) built on top of the CARLA autonomous driving simulator.

MACAD-Gym provides OpenAI Gym-compatible learning environments for various driving scenarios for training deep RL algorithms in homogeneous/heterogeneous, communicating/non-communicating, and other multi-agent settings. New environments and scenarios can easily be added using a simple, JSON-like configuration.


Quick Start

Install MACAD-Gym using pip install macad-gym. If you have the CARLA_SERVER environment variable set up, you can get going using the following 3 lines of code. If not, follow the Getting Started steps.

Training RL Agents

import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

# Your agent code here

Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.
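
Before wiring up a learning library, a quick smoke test can be run with random actions. The sketch below is illustrative only; it assumes the Dict action space and the reset/step semantics shown later in this README (the done dict carries an "__all__" key):

import gym
import macad_gym  # noqa: F401  (importing registers the MACAD-Gym environments)

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Sampling the Dict action space yields one action per actor (car1, car2, ...)
    actions = env.action_space.sample()
    obs, reward, done, info = env.step(actions)
env.close()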

Visualizing the Environment

To test-drive the environments, you can run the environment script directly. For example, to test-drive the HomoNcomIndePOIntrxMASS3CTWN3-v0 environment, run:

python -m macad_gym.envs.homo.ncom.inde.po.intrx.ma.stop_sign_3c_town03

Usage guide

Getting Started

Assumes an Ubuntu (18.04/20.04/22.04 or later) system. If you are on Windows 10/11, use the CARLA Windows package and set the CARLA_SERVER environment variable to the CARLA installation directory.

  1. Install the system requirements:

    • Miniconda/Anaconda 3.x
      • wget -P ~ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh; bash ~/Miniconda3-latest-Linux-x86_64.sh
    • cmake (sudo apt install cmake)
    • zlib (sudo apt install zlib1g-dev)
    • [optional] ffmpeg (sudo apt install ffmpeg)
  2. Set up CARLA (0.9.x)

    2.1 mkdir ~/software && cd ~/software

    2.2 Example: download the CARLA 0.9.13 release and extract it into ~/software/CARLA_0.9.13

    2.3 echo "export CARLA_SERVER=${HOME}/software/CARLA_0.9.13/CarlaUE4.sh" >> ~/.bashrc

  3. Install MACAD-Gym:

    • Option 1 (for users): pip install macad-gym
    • Option 2 (for developers):
      • Fork/Clone the repository to your workspace: git clone https://github.com/praveen-palanisamy/macad-gym.git && cd macad-gym
      • Create a new conda env named "macad-gym" and install the required packages: conda env create -f conda_env.yml
      • Activate the macad-gym conda python env: source activate macad-gym
      • Install the macad-gym package: pip install -e .
      • Install CARLA PythonAPI: pip install carla==0.9.13

NOTE: Change the carla client PyPI package version to match your CARLA server version.

Learning Platform and Agent Interface

The MACAD-Gym platform provides learning environments for training agents in both single-agent and multi-agent settings, for various autonomous driving tasks and scenarios, in homogeneous/heterogeneous and other configurations. Environment IDs follow a naming convention so that they stay consistent and support versioned benchmarking of agent algorithms. The convention is illustrated below with HeteCommCoopPOUrbanMgoalMAUSID as an example:

(Figure: MACAD-Gym naming conventions)

The number of training environments in MACAD-Gym is expected to grow over time (PRs are very welcome!).
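
For illustration only, the sketch below decodes a few ID tokens. The token table is inferred from the environment descriptions in the Environments section below; it is not an official MACAD-Gym API and is not exhaustive:

# Hypothetical helper, NOT part of macad-gym: map a few naming-convention
# tokens (inferred from the environment descriptions below) to their meanings.
TOKENS = {
    "Homo": "Homogeneous", "Hete": "Heterogeneous",
    "Ncom": "Non-communicating", "Comm": "Communicating",
    "Inde": "Independent", "Coop": "Cooperative",
    "PO": "Partially Observable", "Intrx": "Intersection",
    "MA": "Multi-Agent", "SS": "Stop-Sign", "TLS": "Traffic-Light Signal",
    "TWN3": "Town3",
}

def describe(env_id):
    """Return the human-readable fields found in an environment ID."""
    name, _, version = env_id.partition("-")
    return [meaning for token, meaning in TOKENS.items() if token in name] + [version]

print(describe("HomoNcomIndePOIntrxMASS3CTWN3-v0"))
# ['Homogeneous', 'Non-communicating', 'Independent', 'Partially Observable',
#  'Intersection', 'Multi-Agent', 'Stop-Sign', 'Town3', 'v0']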

Environments

The environment interface is simple and follows the widely adopted OpenAI-Gym interface. You can create an instance of a learning environment using the following 3 lines of code:

import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

Like any OpenAI Gym environment, you can obtain the observation space and action space as shown below:

>>> print(env.observation_space)
Dict(car1:Box(168, 168, 3), car2:Box(168, 168, 3), car3:Box(168, 168, 3))
>>> print(env.action_space)
Dict(car1:Discrete(9), car2:Discrete(9), car3:Discrete(9))
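
Since both are standard gym.spaces.Dict objects keyed by actor ID, the per-agent spaces can be read off directly. A small sketch, assuming the standard Gym Dict-space API and the env created above:

# Per-agent spaces are keyed by actor ID in the Dict spaces.
for actor_id, space in env.observation_space.spaces.items():
    print(actor_id, space)      # e.g. car1 Box(168, 168, 3)
for actor_id, space in env.action_space.spaces.items():
    print(actor_id, space.n)    # e.g. car1 9 (number of discrete actions)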

To get a list of available environments, you can use the list_available_envs() function as shown in the code snippet below:

import gym
import macad_gym
macad_gym.list_available_envs()

This will print the available environments. Sample output is provided below for reference:

Environment-ID: Short description
{'HeteNcomIndePOIntrxMATLS1B2C1PTWN3-v0': 'Heterogeneous, Non-communicating, '
                                          'Independent,Partially-Observable '
                                          'Intersection Multi-Agent scenario '
                                          'with Traffic-Light Signal, 1-Bike, '
                                          '2-Car,1-Pedestrian in Town3, '
                                          'version 0',
 'HomoNcomIndePOIntrxMASS3CTWN3-v0': 'Homogenous, Non-communicating, '
                                     'Independed, Partially-Observable '
                                     'Intersection Multi-Agent scenario with '
                                     'Stop-Sign, 3 Cars in Town3, version 0'}

Agent interface

The agent-environment interface is compatible with the OpenAI Gym interface, thus allowing for easy experimentation with existing RL agent algorithm implementations and libraries. You can use any existing deep RL library that supports the OpenAI Gym API to train your agents.

The basic agent-environment interaction loop is as follows:

import gym
import macad_gym


env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
configs = env.configs
env_config = configs["env"]
actor_configs = configs["actors"]


class SimpleAgent(object):
    def __init__(self, actor_configs):
        """A simple, deterministic agent for an example
        Args:
            actor_configs: Actor config dict
        """
        self.actor_configs = actor_configs
        self.action_dict = {}


    def get_action(self, obs):
        """ Returns `action_dict` containing actions for each agent in the env
        """
        for actor_id in self.actor_configs.keys():
            # ... Process obs of each agent and generate action ...
            if env_config["discrete_actions"]:
                self.action_dict[actor_id] = 3  # Drive forward
            else:
                self.action_dict[actor_id] = [1, 0]  # Full-throttle
        return self.action_dict


agent = SimpleAgent(actor_configs)  # Plug-in your agent or use MACAD-Agents
for ep in range(2):
    obs = env.reset()
    done = {"__all__": False}
    step = 0
    while not done["__all__"]:
        obs, reward, done, info = env.step(agent.get_action(obs))
        print(f"Step#:{step}  Rew:{reward}  Done:{done}")
        step += 1
env.close()

Citing:

If you find this work useful in your research, please cite:

@misc{palanisamy2019multiagent,
    title={Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning},
    author={Praveen Palanisamy},
    year={2019},
    eprint={1911.04175},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
Citation in other formats:

MLA
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
APA
Palanisamy, P. (2019). Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Chicago
Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
Harvard
Palanisamy, P., 2019. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
Vancouver
Palanisamy P. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175. 2019 Nov 11.

Notes:

  • MACAD-Gym supports multi-GPU setups; it will choose the least-loaded GPU to launch the simulation needed for the RL training environment.

  • MACAD-Gym is for CARLA 0.9.x and above. If you are looking for an OpenAI Gym-compatible agent learning environment for CARLA 0.8.x (stable release), use the carla_gym environment.

macad-gym's People

Contributors

johnminelli, morphlng, praveen-palanisamy


macad-gym's Issues

How to visualize the learning environment?

Hello there, I am new to this project. The work you have done is remarkable! Here is a question I encountered.

I can run basic_agent.py in the examples folder, but when I add env.render() to the loop, it raises a NotImplementedError, as shown below.

Traceback (most recent call last):
File "/home/moda/macad-gym-master/examples/basic_agent.py", line 34, in
env.render()
File "/home/moda/macad-gym-master/src/macad_gym/multi_actor_env.py", line 73, in render
raise NotImplementedError
NotImplementedError
Killing live carla processes set()

Looking forward to seeing your reply!

Stuck in env.reset() until RAM runs out

I am trying to run the basic agent-environment interaction loop given in the README file. However, the call to env.reset() never returns after creating a CARLA server. After about an hour, the process maxes out the 64 GB of RAM and fails. I can provide more RAM, but that just seems to delay the point at which it is all used up.
The following is the last output given:

setrlimit() failed with error 22 (Invalid argument)
  Max per-process value allowed is 1073741824 (we wanted infinity).
sh: 1: xdg-user-dir: not found
error: XDG_RUNTIME_DIR not set in the environment.

I am trying to run in a singularity image with no display using an Nvidia Tesla P100.

Any thoughts on what might be causing this issue?

Full output: output.txt

Running without a GPU

Hello. Can this project run on a computer without a GPU? When I try to run it on the CPU, it doesn't work. Looking forward to your reply. Thank you!

Unable to use RL algorithms with continuous action space

Hi @praveen-palanisamy

I have been working with macad-gym successfully over the past few months using PPO and many other algorithms. Now I am trying to use DDPG via RLlib, which requires a continuous action space.

I have changed the boolean "discrete_actions": False in the environment config, but it's still an issue, since the policy function is being passed Discrete(9) and I do not know the alternative for a continuous action space.
(screenshots attached)

I also followed the guide mentioned here, but now it's giving me the following error:
error.txt

Any help in this regard would be appreciated.
Thanks.
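
For reference, with "discrete_actions": False each agent's action is a [throttle, steer] pair (see the SimpleAgent example in the README above), so the corresponding per-agent space would plausibly be a 2-D Box. A sketch only; the bounds below are an assumption, not read from macad-gym's source:

from gym.spaces import Box
import numpy as np

# With "discrete_actions": False, each agent's action is a [throttle, steer]
# pair, so a plausible per-agent continuous space is a 2-D Box.
# The bounds here are assumed, not taken from macad-gym.
continuous_action_space = Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)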

Error when importing macad_gym

When I try to import macad_gym, I get the following error:

free(): invalid pointer
Aborted (core dumped)

Could you help me to address this issue?

How to use?

What algorithms are included? Do you have a tutorial or documentation for trying it out? Thank you!

How to create a communicating environment?

Hi, I want to create a communicating environment. I read the wiki and previous issues; however, there seems to be no configuration to enable communication in src/macad_gym/envs/intersection/urban_signal_intersection_3c.py or the other env files.
How should I create it? Thanks!

The latest pull request is incomplete

Description

I've just tested the latest PR and it seems like we didn't "merge them all".

For example, in the latest PR the author calls _decode_obs before calling multi_view_render, whereas the latter function used to handle the decoding by itself:

images = {k: self._decode_obs(k, v) for k, v in obs_dict.items()}
multi_view_render(images, [self._x_res, self._y_res],
                  self._actor_configs)

However, the decoding step still exists inside the multi_view_render function, which causes problems:

for actor_id, im in images.items():
    if not actor_configs[actor_id]["render"]:
        continue
    surface = pygame.surfarray.make_surface(im.swapaxes(0, 1) * 128 + 128)
    surface_seq += ((surface, (poses[actor_id][1], poses[actor_id][0])), )

The result can be seen below:

(screenshot attached)

Solution

I went back and checked all the pull requests opened by @johnMinelli and found that in PR #68 (comment) he had removed the decoding step in render.py, but that PR hasn't been merged yet.

Please do further testing and make sure all changes have been merged.

env._seed issue

Hello!

I just copied your basic agent-environment interaction loop and ran it (test.py), and the error below occurred.

Traceback (most recent call last):
File "test.py", line 4, in
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 163, in make
return registry.make(id)
File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 121, in make
patch_deprecated_methods(env)
File "/usr/local/lib/python3.6/dist-packages/gym/envs/registration.py", line 181, in patch_deprecated_methods
env.seed = env._seed
AttributeError: 'TrafficLightSignal1B2C1PTown03' object has no attribute '_seed'

with "HeteNcomIndePOIntrxMATLS1B2C1PTWN3-v0" scenario.

Can you help me with this?

No support for other sensors?

Hey, as mentioned in the title, I'm wondering whether it is possible to configure the agent's observation space with different sensors such as LiDAR and radar. Is that possible? If not, how easy would it be to fork the project and add more sensors?

Thanks for your attention.

Communication Mechanism

Hi, I am wondering which environment implements communicating agents. I did not find it through a quick check. Could you leave a pointer to that? Thanks!

gym version will affect the usage of ray[rllib]

To enable "Multi-Agent" environment training in rllib, you have to inherit the base class MultiAgentEnv from ray.rllib.env. Macad-gym have already done this in multi_env.py.

MultiAgentEnvBases = [MultiActorEnv]
try:
    from ray.rllib.env import MultiAgentEnv
    MultiAgentEnvBases.append(MultiAgentEnv)
except ImportError:
    logger.warning("\n Disabling RLlib support.", exc_info=True)

However, since gym==0.21.0, the environment created with gym.make is automatically wrapped in a class called OrderEnforcing. This breaks the inheritance check in RLlib, causing the training session to fail.

(screenshot attached)

This is probably something ray[rllib] should take care of; I'm just reporting it to help anybody who has met this problem.
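
One workaround sketch, assuming gym's standard wrapper API (env.unwrapped returns the innermost environment):

import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
# gym >= 0.21 wraps the env in OrderEnforcing; unwrapping exposes the
# underlying MultiCarlaEnv so RLlib's MultiAgentEnv isinstance check passes.
env = env.unwrapped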

Vehicle models

Hi,
I see that the vehicle is selected at random. Can I set the vehicle type?

Running sample code

Hi,

I am trying to run the sample code:

import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

and I get the following error:

free(): invalid pointer
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

CARLA is also running, but I don't think that is related at this point.

Also stuck in env.reset() in example

Hi, when I tried to run the example code basic_agent.py, the terminal only printed the following and then got stuck in env.reset():
(screenshot attached)
I tried both CARLA 0.9.4 and 0.9.12; they have the same problem. The system is Ubuntu 20.04 with Python 3.8.
The log in example/log/macad_gym.log looks like this, and shows no errors:
(screenshot attached)
Is there any way I can fix this issue? Thanks a lot :)

Tensorflow crashes

I found that if I install macad-agents and run basic_agent.py in macad-gym (https://github.com/praveen-palanisamy/macad-gym/blob/master/examples/basic_agent.py), there are issues importing TensorFlow modules. The detailed log is:

Traceback (most recent call last):
  File "/home/tianyushi/code/macad-gym/examples/basic_agent.py", line 5, in <module>
    env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 183, in make
    return registry.make(id, **kwargs)
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 125, in make
    env = spec.make(**kwargs)
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 88, in make
    cls = load(self._entry_point)
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 17, in load
    mod = importlib.import_module(mod_name)
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/tianyushi/code/macad-gym/src/macad_gym/envs/__init__.py", line 1, in <module>
    from macad_gym.carla.multi_env import MultiCarlaEnv
  File "/home/tianyushi/code/macad-gym/src/macad_gym/carla/multi_env.py", line 220, in <module>
    from ray.rllib.env import MultiAgentEnv
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/__init__.py", line 11, in <module>
    from ray.rllib.evaluation.policy_graph import PolicyGraph
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/__init__.py", line 2, in <module>
    from ray.rllib.evaluation.policy_evaluator import PolicyEvaluator
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/policy_evaluator.py", line 18, in <module>
    from ray.rllib.evaluation.sampler import AsyncSampler, SyncSampler
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 14, in <module>
    from ray.rllib.evaluation.tf_policy_graph import TFPolicyGraph
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/tf_policy_graph.py", line 12, in <module>
    from ray.rllib.models.lstm import chop_into_sequences
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/__init__.py", line 1, in <module>
    from ray.rllib.models.catalog import ModelCatalog, MODEL_DEFAULTS
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/catalog.py", line 17, in <module>
    from ray.rllib.models.fcnet import FullyConnectedNetwork
  File "/home/tianyushi/miniconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/fcnet.py", line 6, in <module>
    import tensorflow.contrib.slim as slim
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/__init__.py", line 28, in <module>
    from tensorflow.contrib import cudnn_rnn
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/cudnn_rnn/__init__.py", line 33, in <module>
    from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnCompatibleGRUCell
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py", line 21, in <module>
    from tensorflow.contrib.rnn.python.ops import lstm_ops
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/rnn/__init__.py", line 83, in <module>
    from tensorflow.contrib.rnn.python.ops.gru_ops import *
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/gru_ops.py", line 33, in <module>
    resource_loader.get_path_to_datafile("_gru_ops.so"))
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/contrib/util/loader.py", line 55, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 56, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename, status)
  File "/home/tianyushi/.local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Could not parse default value '4000' from Attr("upper_frequency_limit: float = 4000") for Op Mfcc
Could not parse default value '20' from Attr("lower_frequency_limit: float = 20") for Op Mfcc

I found a discussion related to this problem: tensorflow/tensorflow#13963

Regarding the Agent Interface example.

Hello, while trying to run the example in the README.md for the Agent Interface, I made the following changes that you might want to consider:

#env = gym.make("HomoNComIndePOIntrxMASS3CTWN3-v0") # There is a typo here
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0") #this is the correct one
#configs = env.configs() #this is a dict, cant assign to a variable

env_config = env.configs["env"] #change this to read directly from dict
actor_configs = env.configs["actors"] #change this to read directly from dict

I kept the rest of the code the same, but I got the following error after running exactly what is on the example, without changing anything else:

Could not connect to Carla server because: rpc::rpc_error during call in function version

Increasing the number of steps per episode/iteration

Hi @praveen-palanisamy,

I hope you are doing great.
I am facing an issue where I cannot raise the limit on the number of steps per episode ("max_steps" in scenario.py). The maximum I have reached in some scenarios is either 1024 or 2048 steps, and after that limit another episode starts.

Increasing the value in scenario.py breaks the code. Any help in this regard will be really appreciated.

Multiprocess pickle Problem

Hi @praveen-palanisamy ,

I want to use multiprocessing with macad-gym, but I find that pygame cannot be pickled:


Cleaned-up the world...
Clearing Carla server state
/gym/gym/wrappers/monitor.py:31: UserWarning: The Monitor wrapper is being deprecated in favor of gym.wrappers.RecordVideo and gym.wrappers.RecordEpisodeStatistics (see https://github.com/openai/gym/issues/2297)
  warnings.warn(
Traceback (most recent call last):
  File "/driving_meta_0.9/train.py", line 145, in <module>
    main(args)
  File "/driving_meta_0.9/train.py", line 59, in main
    sampler = MultiTaskSampler(config['env-name'],
  File "/driving_meta_0.9/maml_rl/samplers/multi_task_sampler.py", line 107, in __init__
    worker.start()
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/multiprocess/reduction.py", line 63, in dump
    ForkingPickler(file, protocol, *args, **kwds).dump(obj)
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump
    StockPickler.dump(self, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 487, in dump
    self.save(obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 603, in save
    self.save_reduce(obj=obj, *rv)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 717, in save_reduce
    save(state)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 971, in save_dict
    self._batch_setitems(obj.items())
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 997, in _batch_setitems
    save(v)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 603, in save
    self.save_reduce(obj=obj, *rv)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 717, in save_reduce
    save(state)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 971, in save_dict
    self._batch_setitems(obj.items())
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 997, in _batch_setitems
    save(v)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 931, in save_list
    self._batch_appends(obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 958, in _batch_appends
    save(tmp[0])
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 603, in save
    self.save_reduce(obj=obj, *rv)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 717, in save_reduce
    save(state)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 971, in save_dict
    self._batch_setitems(obj.items())
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 997, in _batch_setitems
    save(v)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 603, in save
    self.save_reduce(obj=obj, *rv)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 717, in save_reduce
    save(state)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 971, in save_dict
    self._batch_setitems(obj.items())
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 997, in _batch_setitems
    save(v)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 603, in save
    self.save_reduce(obj=obj, *rv)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 717, in save_reduce
    save(state)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
  File "/anaconda3/envs/driving/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 971, in save_dict
    self._batch_setitems(obj.items())
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 997, in _batch_setitems
    save(v)
  File "/anaconda3/envs/driving/lib/python3.8/pickle.py", line 578, in save
    rv = reduce(self.proto)
TypeError: cannot pickle 'pygame.font.Font' object
Killing live carla processes set()

Can we use something to replace pygame? I notice you use it to obtain observations. Or do you have any idea how to pickle it?

Really thank you very much.
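
A generic workaround sketch (the attribute name _display is hypothetical, not taken from macad-gym): drop unpicklable pygame handles from the pickled state and recreate them lazily after unpickling:

class PicklableEnvWrapper:
    """Illustrative only: exclude pygame handles when pickling."""

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_display", None)  # pygame surfaces/fonts cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._display = None  # recreated lazily on the next render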

v0.1.3 CARLA server can't get a connection

In macad-gym v0.1.3, running the supplied example basic_agent.py gives an error when the CARLA client connects to the server, and then keeps looping in the following code:

self._client = None
while self._client is None:
    try:
        self._client = carla.Client("localhost", self._server_port)
        # The socket establishment could take some time
        time.sleep(2)
        self._client.set_timeout(2.0)
        print(
            "Client successfully connected to server, Carla-Server version: ",
            self._client.get_server_version(),
        )
    except RuntimeError as re:
        if "timeout" not in str(re) and "time-out" not in str(re):
            print("Could not connect to Carla server because:", re)
        self._client = None

My Running Environment:

  • ubuntu 22 64bit
  • carla 0.9.13
  • Python package versions:
Package                Version      Editable project location
---------------------- ------------ ---------------------------------------------
absl-py                1.4.0
aiohttp                3.8.6
aiosignal              1.2.0
ale-py                 0.7.1
astor                  0.8.1
async-timeout          4.0.2
asynctest              0.13.0
atari-py               0.2.9
attrs                  22.2.0
beautifulsoup4         4.12.2
cached-property        1.5.2
cachetools             4.2.4
carla                  0.9.13
certifi                2018.11.29
charset-normalizer     2.0.12
click                  8.0.4
cloudpickle            1.2.2
colorama               0.4.5
Cython                 3.0.7
dataclasses            0.8
decorator              4.4.2
dm-tree                0.1.8
filelock               3.4.1
frozenlist             1.2.0
future                 0.18.3
gast                   0.2.2
google                 3.0.0
google-auth            1.35.0
google-auth-oauthlib   0.4.6
google-pasta           0.2.0
GPUtil                 1.4.0
grpcio                 1.48.2
gym                    0.16.0
gym-notices            0.0.8
h5py                   3.1.0
idna                   3.6
idna-ssl               1.1.0
importlib-metadata     4.8.3
importlib-resources    5.4.0
jsonschema             3.2.0
Keras-Applications     1.0.8
Keras-Preprocessing    1.1.2
lz4                    3.1.10
macad-gym              0.1.3        /home/dell/qqw/repo/valid/macad-gym-0.1.3/src
Markdown               3.3.7
mkl-fft                1.0.6
mkl-random             1.0.1
multidict              5.2.0
networkx               2.5.1
numpy                  1.19.5
oauthlib               3.2.2
opencv-python          4.2.0.32
opencv-python-headless 4.2.0.34
opt-einsum             3.3.0
packaging              21.3
pandas                 1.1.5
pip                    21.3.1
protobuf               4.21.0
py-spy                 0.3.14
pyasn1                 0.5.1
pyasn1-modules         0.3.0
pygame                 2.5.2
pyglet                 1.5.0
pyparsing              3.0.7
pyrsistent             0.18.0
python-dateutil        2.8.2
pytz                   2023.3.post1
PyYAML                 6.0.1
ray                    0.8.4
redis                  4.3.6
requests               2.27.1
requests-oauthlib      1.3.1
rsa                    4.9
scipy                  1.4.1
setuptools             59.6.0
six                    1.16.0
soupsieve              2.3.2.post1
tabulate               0.8.10
TBB                    0.2
tensorboard            2.1.1
tensorboardX           2.1
tensorflow             2.1.0
tensorflow-estimator   2.1.0
tensorflow-gpu         2.1.0
termcolor              1.1.0
tf-slim                1.1.0
typing_extensions      4.1.1
urllib3                1.26.18
Werkzeug               2.0.3
wheel                  0.32.2
wrapt                  1.16.0
yarl                   1.7.2
zipp                   3.6.0

I have no problem running the example with the latest code, so is this a bug in v0.1.3, or is there something wrong with my environment configuration?

Implementation of IMPALA agent examples

In the experimental part of the MACAD-Gym paper, you mention using an IMPALA agent to control vehicles in MACAD-Gym, but I did not find this implementation in the latest code. Is this code open source, or is it available in a historical version?

Error when using the urban_signal_intersection_3c env

Traceback (most recent call last):
  File "/home/ggstar/pythonproject/macad-gym-master/src/macad_gym/envs/intersection/urban_signal_intersection_3c.py", line 109, in <module>
    env = UrbanSignalIntersection3Car()
  File "/home/ggstar/pythonproject/macad-gym-master/src/macad_gym/envs/intersection/urban_signal_intersection_3c.py", line 105, in __init__
    super(UrbanSignalIntersection3Car, self).__init__(self.configs)
  File "/home/ggstar/pythonproject/macad-gym-master/src/macad_gym/carla/multi_env.py", line 275, in __init__
    configs["scenarios"]
KeyError: 'scenarios'

Support for CARLA built from source

Hi @praveen-palanisamy. Thanks for your great work.

I have built CARLA 0.9.9 from source. Is there a way to run your code with it? Since I built CARLA from source, there is no CarlaUE4.sh file, so it does not make sense to set CARLA_SERVER.

If I launch CARLA from UE4 using make launch and then open up a specific town, is it possible to point to this instance in your code?

Thanks,
Neel

Modification of observations/actor states

Hi @praveen-palanisamy,

Now that I have macad-gym setup, I am planning to setup an environment with states being global position (x,y) of the actors (cars or pedestrians) and their velocities (I think carla doesn't provide actor velocity, but I can use a time window in the past to estimate velocity): in a practical setting, this can be for instance coming from on-board GPS.

I understand you used the image itself as the state/observation, thus, do you have any recommendations regarding how to modify the observation space? (potential enhancement feature)

Thanks,
Neel

Different Carla versions

Hi, this is not an issue but more like a question.

I see that this library works with CARLA 0.9.4; however, I also need version 0.9.6 installed, as I am using its newest functionality with SUMO. This creates a conflict when I try to use this library.

Is there a way I can have both versions installed on my machine without conflicting with your library?

Maybe by having two different PATH environment variables in .bashrc?

Thanks!

Help in creating Adversarial Environment

Hi @praveen-palanisamy,

I wanted to try the adversarial multi-agent example mentioned in the related paper, but there are only two available examples and an adversarial one is not among them.
Kindly help with how to create such an env and use it for training and testing.

Any information would be helpful.
Thanks

How to set the spectator on the agent

Hi @praveen-palanisamy ,

I am now working on a multi-agent project, and I want to set the env spectator on the agent vehicle: for example, a fixed third-person view of the vehicle from the rear of the car. Do you know how to do that? Thanks a lot!

Besides, when the code throws this exception at line 999 of multi_env.py:

except Exception:
    print("Error during step, terminating episode early.",
          traceback.format_exc())
    self._clear_server_state()

My training step then fails with a cannot unpack non-iterable NoneType object error. I know why it happens, since the step function does not return normally in this case, but I don't know how to handle it. Could you give me some examples of how people normally deal with this error?

Best regards

`multi_view_render` pops a new display window on each frame with the latest version of Pygame

Description

I'm testing the pre-defined environment 'HomoNcomIndePOIntrxMASS3CTWN3-v0' using code in stop_sign_3c_town03.py.

I found that when using the latest version of the Pygame module, the behavior is not what I expected: the display is recreated on each frame rather than re-rendered. A GIF shows exactly how it performs:

(GIF attached)

Recreating the problem

  • platform: Ubuntu 20.04
  • Pygame 2.1.2

Solutions

While downgrading to Pygame 1.9.6 solves the problem, I'm a little curious about why this is happening, so I looked into the implementation of multi_view_render in render.py:

display = pygame.display.set_mode((window_dim[0], window_dim[1]),
                                  pygame.HWSURFACE | pygame.DOUBLEBUF)
display.blits(blit_sequence=surface_seq, doreturn=1)

It seems to me that the display is reinitialized on each call (I'm not that familiar with pygame, so hopefully I'm not wrong), so I changed display into a global variable and call pygame.display.set_mode only once. With this change, the code runs correctly even with the latest version of pygame.

Is this a bug that needs to be addressed, or should I ask the Pygame community to reveal the deeper reason?
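
A minimal sketch of the fix described above (the names mirror render.py but are illustrative): create the display surface once and reuse it across frames instead of calling pygame.display.set_mode on every call:

import pygame

_display = None  # module-level cache so set_mode runs only once

def get_display(window_dim):
    global _display
    if _display is None:
        _display = pygame.display.set_mode(
            (window_dim[0], window_dim[1]),
            pygame.HWSURFACE | pygame.DOUBLEBUF)
    return _display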

[Windows Support]module 'os' has no attribute 'getpgid'

When running env.reset(), I encounter this error:

Initializing new Carla server...
FATAL ERROR while launching server: <class 'AttributeError'>
Error during reset: Traceback (most recent call last):
  File "C:\Eer Kai Jun\Autonomous Driving\env\lib\site-packages\macad_gym\carla\multi_env.py", line 579, in reset
    self._init_server()
  File "C:\Eer Kai Jun\Autonomous Driving\env\lib\site-packages\macad_gym\carla\multi_env.py", line 475, in _init_server
    live_carla_processes.add(os.getpgid(self._server_process.pid))
AttributeError: module 'os' has no attribute 'getpgid'

I think this is because the os library does not support the getpgid function on Windows. Are there any workarounds for this issue?
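
One possible workaround sketch (untested; live_carla_processes and server_process stand in for macad-gym's own names): fall back to the plain PID on platforms without process groups:

import os

# os.getpgid only exists on POSIX; on Windows, track the PID directly.
if hasattr(os, "getpgid"):
    live_carla_processes.add(os.getpgid(server_process.pid))
else:  # e.g. Windows
    live_carla_processes.add(server_process.pid)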

Support the library

Hi, I'm using the library for my thesis project and I'm interested in supporting/contributing to it, since I think having a simpler interface for CARLA is great. I already solved some minor bugs and some TODOs in multi_env; I'll file a PR with the changes.
However (probably it's my fault, since I don't have a complete picture of the library's aims and previous versions of CARLA), I see a lack of structure, and some classes are not even used. Can I ask for some clarification about certain files and the directory structure? Are you still able to follow the project (at least as an advisor, if not in coding)?

Thank you for your work

PathTracker generates a wrong route

I want to apply traffic_manager.set_path(actor, route) (this API is missing from the docs but is actually available and described in this PR) to navigate the actor to the desired end point when "auto_control" is enabled.

However, the path generated by MACAD's PathTracker is not correct. I've tested scenario SSUI3C_TOWN3, where the start/end positions are defined as:

SSUI3C_TOWN3 = {
    "map": "Town03",
    "actors": {
        "car1": {
            "start": [170.5, 80, 0.4],
            "end": [144, 59, 0],
        },
        "car2": {
            "start": [188, 59, 0.4],
            "end": [167, 75.7, 0.13],
        },
        "car3": {
            "start": [147.6, 62.6, 0.4],
            "end": [191.2, 62.7, 0],
        },
    },
    "weather_distribution": [0],
    "max_steps": 500,
}

Take car1 as an example: this car should take a left turn, but the route generated by PathTracker looks like this (green line):

(screenshot attached)

It takes a right turn instead, and if we check the end point, it is far away from what we defined:

(screenshot attached)

I've noticed that MACAD-Gym bundles a copy of the PythonAPI from the CARLA 0.9.6 distribution; maybe this is why the generated path is incorrect. I'm trying the CARLA 0.9.13 PythonAPI to see whether it makes a difference.
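
For reference, an illustrative use of the TrafficManager.set_path API mentioned above (available in recent CARLA releases; the exact signature is assumed from the linked PR, and actor stands for an already-spawned vehicle):

import carla

client = carla.Client("localhost", 2000)
tm = client.get_trafficmanager()

# Reuse car1's start/end from SSUI3C_TOWN3 as the path endpoints.
route = [carla.Location(x=170.5, y=80.0, z=0.4),
         carla.Location(x=144.0, y=59.0, z=0.0)]
actor.set_autopilot(True, tm.get_port())
tm.set_path(actor, route)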

Running example code

Hi,

I am trying to run the example code basic_agent.py, and I get the following error:
Traceback (most recent call last):
  File "basic_agent.py", line 5, in <module>
    env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 183, in make
    return registry.make(id, **kwargs)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 125, in make
    env = spec.make(**kwargs)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 88, in make
    cls = load(self._entry_point)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/gym/envs/registration.py", line 17, in load
    mod = importlib.import_module(mod_name)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/user/software/macad-gym/src/macad_gym/envs/__init__.py", line 1, in <module>
    from macad_gym.carla.multi_env import MultiCarlaEnv
  File "/home/user/software/macad-gym/src/macad_gym/carla/multi_env.py", line 220, in <module>
    from ray.rllib.env import MultiAgentEnv
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/__init__.py", line 11, in <module>
    from ray.rllib.evaluation.policy_graph import PolicyGraph
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/__init__.py", line 2, in <module>
    from ray.rllib.evaluation.policy_evaluator import PolicyEvaluator
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/policy_evaluator.py", line 18, in <module>
    from ray.rllib.evaluation.sampler import AsyncSampler, SyncSampler
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 14, in <module>
    from ray.rllib.evaluation.tf_policy_graph import TFPolicyGraph
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/evaluation/tf_policy_graph.py", line 12, in <module>
    from ray.rllib.models.lstm import chop_into_sequences
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/__init__.py", line 1, in <module>
    from ray.rllib.models.catalog import ModelCatalog, MODEL_DEFAULTS
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/catalog.py", line 17, in <module>
    from ray.rllib.models.fcnet import FullyConnectedNetwork
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/ray/rllib/models/fcnet.py", line 6, in <module>
    import tensorflow.contrib.slim as slim
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/__init__.py", line 28, in <module>
    from tensorflow.contrib import cudnn_rnn
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/cudnn_rnn/__init__.py", line 33, in <module>
    from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnCompatibleGRUCell
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py", line 21, in <module>
    from tensorflow.contrib.rnn.python.ops import lstm_ops
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/rnn/__init__.py", line 83, in <module>
    from tensorflow.contrib.rnn.python.ops.gru_ops import *
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/gru_ops.py", line 33, in <module>
    resource_loader.get_path_to_datafile("_gru_ops.so"))
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/contrib/util/loader.py", line 55, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 56, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename, status)
  File "/home/user/anaconda3/envs/macad-gym/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Could not parse default value '4000' from Attr("upper_frequency_limit: float = 4000") for Op Mfcc
Could not parse default value '20' from Attr("lower_frequency_limit: float = 20") for Op Mfcc
Killing live carla processes set()

CARLA is not running. What is the matter, please?

Cannot run Agent interface demo code

Hi, I was not able to run your demo code under the "Agent interface" section in your README. I tried CARLA 0.9.4, 0.9.5, and 0.9.9.4, and all failed. I was using Ubuntu 18.04 with one RTX 2080 Ti GPU. The Python version is 3.6.12. I installed MACAD-Gym via Option 1.

When I tried CARLA 0.9.4, the CARLA window would quickly show up and disappear, and the error was:

Running simulation in single-GPU mode
WARNING: Version mismatch detected: You are trying to connect to a simulator that might be incompatible with this API 
WARNING: Client API version     = 0.9.5 
WARNING: Simulator API version  = 0.9.4 
Error during reset: Traceback (most recent call last):
  File "/home/meng/miniconda3/envs/macad/lib/python3.6/site-packages/macad_gym/carla/multi_env.py", line 581, in reset
    self._init_server()
  File "/home/meng/miniconda3/envs/macad/lib/python3.6/site-packages/macad_gym/carla/multi_env.py", line 511, in _init_server
    carla.Rotation(yaw=180 + angle, pitch=-15)))
RuntimeError: trying to access an expired episode; a new episode was started in the simulation but an object tried accessing the old one.

Then I tried CARLA 0.9.5, but the CARLA window got stuck at the initial scene and stopped responding.

Finally I tried CARLA 0.9.9.4; it kept saying Could not connect to Carla server because: rpc::rpc_error during call in function version and eventually ended with:

Signal 11 caught.
Malloc Size=65538 LargeMemoryPoolOffset=65554 
Malloc Size=65535 LargeMemoryPoolOffset=131119 
terminating with uncaught exception of type std::__1::bad_weak_ptr: bad_weak_ptr
Signal 6 caught.
Malloc Size=124960 LargeMemoryPoolOffset=256096 
Segmentation fault (core dumped)

I am able to run each CARLA simulator on its own, but I just cannot make it work via the gym interface. Do you know what could be the reason for this? Thanks.

Unable to spawn actor: car1

When running env.reset(), I encounter the following error:

reset(): Retry #: 1/2
Clearing Carla server state
RuntimeError: Unable to spawn actor:car1

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/afs/inf.ed.ac.uk/group/ug4-projects/s1800883/Carla/env/lib/python3.7/site-packages/macad_gym/carla/multi_env.py", line 590, in reset
    raise error
  File "/afs/inf.ed.ac.uk/group/ug4-projects/s1800883/Carla/env/lib/python3.7/site-packages/macad_gym/carla/multi_env.py", line 583, in reset
    return self._reset()
  File "/afs/inf.ed.ac.uk/group/ug4-projects/s1800883/Carla/env/lib/python3.7/site-packages/macad_gym/carla/multi_env.py", line 774, in _reset
    "Unable to spawn actor:{}".format(actor_id))
  File "/afs/inf.ed.ac.uk/group/ug4-projects/s1800883/Carla/env/lib/python3.7/site-packages/macad_gym/carla/multi_env.py", line 768, in _reset
    self._actors[actor_id] = self._spawn_new_agent(actor_id)
  File "/afs/inf.ed.ac.uk/group/ug4-projects/s1800883/Carla/env/lib/python3.7/site-packages/macad_gym/carla/multi_env.py", line 683, in _spawn_new_agent
    self.world.wait_for_tick()
RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:49821

I'm using the following specs:
Python version: 3.7.3
CARLA: 0.9.11
OS: Ubuntu 20.04
