mljejucamp2017 / drl_based_selfdrivingcarcontrol

Deep Reinforcement Learning (DQN) based Self Driving Car Control with Vehicle Simulator

Jupyter Notebook 13.87% Python 4.49% C# 68.48% Objective-C 0.27% CSS 0.61% ASP 12.29%
deep-reinforcement-learning drl vehicle-simulator dqn self-driving-car

drl_based_selfdrivingcarcontrol's People

Contributors

kyushik

drl_based_selfdrivingcarcontrol's Issues

question about saved_networks

I ran the program and simulator following your instructions and set it to training mode, but the trained model is not saved. Num_training is 1M, but on my computer the program already runs slowly at 10K steps. Should I reduce Num_training? The main problem is that no ckpt file is generated; the files produced are of a different type, such as "events.out.tfevents.1527528894". Do you have any suggestions, or a trained network I could use? Thank you so much for helping me.

Simulating the Training for a Semi Autonomous Car(Level 2 Autonomy)

@Kyushik

Good Day!

Can you please clarify my questions?

  1. Can you please let me know, If I can train "DRL_based_SelfDrivingCarControl" for a Semi Autonomous Car with a level 2 autonomy?

I am looking to train the following scenario on an Indian road (please review the attached screenshots); the scenario is similar to the ADAS applications in cars like the Volvo XC90.

Scenario 1 - Car A and Car B are on the same side of the road (the lane concept is not included); Car B is stopped in front of Car A.

Unity_Simulation_1

Scenario 2 - Car A moves to within a certain distance (behind Car B) and is alerted by the radar/LIDAR (the cube's color changes from black to red); Car A is driven manually by a human driver.

Unity_Simulation_2

Scenario 3 - On receiving the radar alert, Car A switches to autonomous mode, takes control from the driver, and steers sideways to avoid a crash.

Unity_Simulation_3

Can you please let me know whether I can simulate such a training scenario in DRL? I am confused about how to drive the car manually during DRL training.

Can you please help me?

Thanks
Guru

Lidar data plotting

How can I plot the lidar data in x-y coordinates? That is, how do I convert it?
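A minimal sketch of the usual polar-to-Cartesian conversion, assuming the simulator's lidar reading is a flat array of ranges for beams evenly spaced over a known field of view (the field-of-view and beam ordering here are assumptions, not taken from the simulator):

```python
import numpy as np

def lidar_to_xy(distances, fov_deg=360.0, start_deg=0.0):
    """Convert 1-D lidar ranges (one per beam, evenly spaced over
    fov_deg starting at start_deg) to x/y points in the sensor frame."""
    d = np.asarray(distances, dtype=float)
    angles = np.deg2rad(start_deg + np.arange(d.size) * fov_deg / d.size)
    return d * np.cos(angles), d * np.sin(angles)
```

The resulting points can then be drawn with matplotlib, e.g. `plt.scatter(x, y)`.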

the simulation screen is stuck

Hello, I encountered a strange problem during development: the simulation screen freezes. My operating system is Windows 7 64-bit. Where might the problem be? If I load the previous version of the simulator, or use env_name = "../environment/Planning/Windows/Planning", the screen does not freeze.

How to run the simulator?

Hello @Kyushik,
I want to run the code in this environment, but I get the following error, and the simulator closes after a few moments without responding:

UnityTimeOutException Traceback (most recent call last)
in
----> 1 env = UnityEnvironment(file_name=env_name)
2
3 # Examine environment parameters
4 print(str(env))
5

~\anaconda3\lib\site-packages\mlagents_envs\environment.py in init(self, file_name, worker_id, base_port, seed, no_graphics, timeout_wait, additional_args, side_channels, log_folder)
215 )
216 try:
--> 217 aca_output = self._send_academy_parameters(rl_init_parameters_in)
218 aca_params = aca_output.rl_initialization_output
219 except UnityTimeOutException:

~\anaconda3\lib\site-packages\mlagents_envs\environment.py in _send_academy_parameters(self, init_parameters)
459 inputs = UnityInputProto()
460 inputs.rl_initialization_input.CopyFrom(init_parameters)
--> 461 return self._communicator.initialize(inputs)
462
463 @staticmethod

~\anaconda3\lib\site-packages\mlagents_envs\rpc_communicator.py in initialize(self, inputs)
102
103 def initialize(self, inputs: UnityInputProto) -> UnityOutputProto:
--> 104 self.poll_for_timeout()
105 aca_param = self.unity_to_external.parent_conn.recv().unity_output
106 message = UnityMessageProto()

~\anaconda3\lib\site-packages\mlagents_envs\rpc_communicator.py in poll_for_timeout(self)
94 """
95 if not self.unity_to_external.parent_conn.poll(self.timeout_wait):
---> 96 raise UnityTimeOutException(
97 "The Unity environment took too long to respond. Make sure that :\n"
98 "\t The environment does not need user interaction to launch\n"

UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Agents' Behavior Parameters > Behavior Type is set to "Default"
The environment and the Python interface have compatible versions.

How can I fix this problem?
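One thing worth trying, based on the constructor signature visible in this traceback: `UnityEnvironment` accepts a `timeout_wait` argument (in seconds), and raising it gives a slow machine more time to launch the simulator before the RPC layer gives up. A launch-configuration sketch, assuming `env_name` points at the simulator binary (this cannot run without the simulator itself):

```python
from mlagents_envs.environment import UnityEnvironment

# timeout_wait appears in the constructor signature in the traceback above;
# raising it from the default gives the simulator longer to start up.
env = UnityEnvironment(file_name=env_name, timeout_wait=120)
```

If the timeout still fires, the other checklist items in the exception message (no user interaction required to launch, Behavior Type set to "Default", compatible versions) are the next things to verify.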

hello, some questions about the simulator

Thanks for providing the DRL algorithm and the Unity environment. I am wondering how the agent (the self-driving car) in the Unity environment interacts with the other vehicles, how the self-driving car's state transitions work, and how the reward is defined in the Unity environment's code. Because the Unity environment is an executable, we cannot see these details. Is there any reference material on the internal code of the Unity environment? Thanks very much!

Skip Frame =4

I would like to ask: is this frame skip necessary, given that the agent does not act on raw pixels (frames) but on sensor and camera features?

Are the float values equal to meters in distance?

Hi,
In the DRL simulator, the distance between the agent and other obstacles is reported as float values.
Are these float values equal to meters? If not, what do they represent?

Please help me understand this.

Thanks in Advance
Malathi K

Using the simulator on Ubuntu 16.04!

Thanks for your wonderful code!

My issue (I am following your work on Ubuntu 16.04 with a 2080 Ti):
When I type 'jupyter notebook' in the terminal and then run 'Double_Dueling_image.ipynb', the initial Unity interface appears.
After that, nothing happens, like this:
image

question about project

I'm sorry to disturb you. I'm a beginner in DRL and interested in your project. I downloaded your project and the simulator, but I don't know how to run them successfully. I hope to get your help. Thank you.

Clarification about ML Agent

Hi,
I am new to this project. Did you use Unity ML-Agents in this project? If so, could you please point me to tutorials/links to learn more about it?
I would like to recreate this on my own for my college project; any guidance would be appreciated.

With curiosity,
Ranjani N

About the sensors and simulator scripts

Hi, congratulations on winning the camp, and on the paper being accepted to IV 2018!

In the README.md , you have mentioned " I used lots of vehicle sensors(e.g. RADAR, LIDAR, ...) to perceive environments around host vehicle. Also, There are a lot of Advanced Driver Assistant Systems (ADAS) which are already commercialized."

  • I'm wondering whether you could tell us where to find those sensors and how to use them.

  • Also, since the environment was built from purchased models, could you upload the agent, brain, and academy scripts?

Thanks.

Mac version

Hey, I plan to use this simulator for my class project, and I was wondering whether the Mac version will be released any time soon?

Thanks for this!

Reg- unable to run code

Hi ,

I am trying to run your code in a Jupyter notebook launched from the Anaconda prompt, following the method you described earlier (unzipping, etc.), but I am getting a few issues like:

ModuleNotFoundError Traceback (most recent call last)
in ()
8 import time
9
---> 10 from unityagents import UnityEnvironment
11
12 get_ipython().run_line_magic('matplotlib', 'inline')

ModuleNotFoundError: No module named 'unityagents'
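For reference, in projects of this vintage `unityagents` is typically a plain folder shipped with the repo rather than a pip-installed package, so the import only resolves if the notebook is started from the repo root, or the repo root is added to `sys.path` first. A small sketch (the example path below is the one from the later traceback; yours may differ):

```python
import sys

def add_repo_to_path(repo_root):
    """Prepend the repo root so `from unityagents import UnityEnvironment`
    can resolve the bundled package folder."""
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)

# e.g. add_repo_to_path(r"C:\Users\Q\Desktop\JejuCamp_ML_Agents")
```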

NameError: name "UnityEnvironment" is not defined


NameError Traceback (most recent call last)
in ()
----> 1 env = UnityEnvironment(file_name=env_name)
2
3 # Examine environment parameters
4 print(str(env))
5

NameError: name 'UnityEnvironment' is not defined


error Traceback (most recent call last)
in ()
17
18 # Get information for update
---> 19 env_info = env.step(action_in)[default_brain]
20
21 next_observation_stack, observation_set, next_state_stack, state_set = resize_input(env_info, observation_set, state_set)

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in step(self, vector_action, memory, text_action)
469 self._conn.send(b"STEP")
470 self._send_action(vector_action, memory, text_action)
--> 471 return self._get_state()
472 elif not self._loaded:
473 raise UnityEnvironmentException("No Unity environment is loaded.")

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _get_state(self)
285 self._data = {}
286 while True:
--> 287 state_dict, end_of_message = self._get_state_dict()
288 if end_of_message is not None:
289 self._global_done = end_of_message

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _get_state_dict(self)
241 :return:
242 """
--> 243 state = self._recv_bytes().decode('utf-8')
244 if state[:14] == "END_OF_MESSAGE":
245 return {}, state[15:] == 'True'

C:\Users\Q\Desktop\JejuCamp_ML_Agents\unityagents\environment.py in _recv_bytes(self)
217 try:
218 s = self._conn.recv(self._buffer_size)
--> 219 message_length = struct.unpack("I", bytearray(s[:4]))[0]
220 s = s[4:]
221 while len(s) != message_length:

error: unpack requires a bytes object of length 4

Once I open the simulator app, I get the following exception at the end of the log file. I cannot find the .cs file. Please help me in this regard.

Mono path[0] = 'C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data/Managed'
Mono config path = 'C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data/MonoBleedingEdge/etc'
PlayerConnection initialized from C:/DL/adas/jas/DRL_based_SelfDrivingCarControl-master (1)/DRL_based_SelfDrivingCarControl-master/environment/jeju_camp_Data (debug = 0)
PlayerConnection initialized network socket : 0.0.0.0 55335
Multi-casting "[IP] 10.0.3.178 [Port] 55335 [Flags] 2 [Guid] 3442019743 [EditorId] 2618453742 [Version] 1048832 [Id] WindowsPlayer(LAPTOP-MDH1861V) [Debug] 0" to [225.0.0.222:54997]...
Started listening to [0.0.0.0:55335]
PlayerConnection already initialized - listening to [0.0.0.0:55335]
Initialize engine version: 2017.3.1f1 (fc1d3344e6ea)
GfxDevice: creating device client; threaded=1
Direct3D:
Version: Direct3D 11.0 [level 11.1]
Renderer: Intel(R) HD Graphics 5500 (ID=0x1616)
Vendor: Intel
VRAM: 1130 MB
Driver: 20.19.15.4642
Begin MonoManager ReloadAssembly

  • Completed reload, in 7.260 seconds
    Initializing input.

XInput1_3.dll not found. Trying XInput9_1_0.dll instead...
Input initialized.

Initialized touch support.

Setting up 2 worker threads for Enlighten.
Thread -> id: 3714 -> priority: 1
Thread -> id: 342c -> priority: 1
UnloadTime: 4.280462 ms
UnityAgentsException: The brain Brain was set to External mode but Unity was unable to read the arguments passed at launch.
at CoreBrainExternal.InitializeCoreBrain (Communicator communicator) [0x00059] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\CoreBrainExternal.cs:37
at Brain.InitializeBrain (Academy aca, Communicator communicator) [0x0000e] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Brain.cs:209
at Academy.InitializeEnvironment () [0x00056] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:230
at Academy.Awake () [0x00002] in D:\UnityGames\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:208

(Filename: D:/UnityGames/ML_Agent_Jejucamp2017/Assets/Scripts/CoreBrainExternal.cs Line: 37)

Can you please suggest a step-by-step procedure to execute this, as I am a beginner in RL concepts? Thank you.

Could you also explain the execution process for each .ipynb file in the RL algorithms folder?

Clarification on working with the project in unity hub

Hi @Kyushik
My system specifications are,
Ubuntu 18.04(64 bit)
Processor Intel® Core™ i3-8109U CPU @ 3.00GHz × 4
Graphics Intel® HD Graphics (Coffeelake 3x8 GT3)
Memory 7.7 GiB

I would like to modify your code using Unity Hub. After opening the Unity SDK and pressing Play, I got the following error: "The communicator was unable to connect. Please make sure the external process is ready to accept communication with unity", as below.

Screenshot from 2020-01-10 09-47-15

As I am new to Unity development, could you please suggest how to run your car simulator like any other Unity game, so that I can inspect and modify it as I wish?

Thanks in advance.

Questions about project

Hello,
Is this an academic project? If so, were the results published elsewhere and where can I find the corresponding paper?

All the best,
Eduardo

Clarification about sensor input

Hi @Kyushik
Can you please clarify me the following questions

  1. Is the LIDAR data (env_info.vector_observations[0]) received from the jeju_camp.x86_64 file?
  2. How are distance and speed calculated by the LIDAR (since it is not physically present)?

Thanks in advance

About training

Hi @Kyushik, I'm using Ubuntu 16.04 and have run the file Double_Dueling_image.ipynb. Num_training = 1000000 has completed, but training has still not stopped.

When will training stop, or how can I stop it?

Thanks in Advance
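Without seeing the notebook it is hard to say why it keeps going, but the usual cause is a stepping loop that never checks the step budget. A generic sketch of the intended pattern (names are illustrative, not the notebook's own):

```python
def run_training(train_step, num_training=1_000_000):
    """Call `train_step` exactly `num_training` times, then stop so the
    caller can save a final checkpoint and close the environment."""
    step = 0
    while step < num_training:
        train_step()
        step += 1
    return step
```

If the notebook's loop instead runs `while True`, interrupting the kernel and saving manually is the pragmatic way to stop it.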

The simulator!

I am glad to be working with your code. However, the simulator is not loading. Can you describe how the simulator connects? Does it use a socket, or does it start automatically with the Python code you provided?

Thanks,
Fayjie

Permission Error

@Kyushik I'm using Ubuntu 16.04. When I run the Double_Dueling_image.ipynb file, I get a permission error.

env = UnityEnvironment(file_name=env_name)

# Examine environment parameters
print(str(env))

# Set the default brain to work with
default_brain = env.brain_names[0]
brain = env.brains[default_brain]

Screenshot from 2019-11-04 15-14-10

Can you please help me with this?

Thanks in Advance

SocketException: Unable to connect because the target computer actively refused.

Hello, can you help me solve this problem? Thanks very much.
I really don't know where this path (D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts) comes from.

SocketException: Unable to connect because the target computer actively refused.

at System.Net.Sockets.Socket.Connect (System.Net.IPAddress[] addresses, System.Int32 port) [0x000c3] in <4b9f316768174388be8ae5baf2e6cc02>:0
at System.Net.Sockets.Socket.Connect (System.String host, System.Int32 port) [0x00007] in <4b9f316768174388be8ae5baf2e6cc02>:0
at ExternalCommunicator.InitializeCommunicator () [0x000b2] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\ExternalCommunicator.cs:128
at Academy.InitializeEnvironment () [0x00094] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:235
at Academy.Awake () [0x00002] in D:\Github\ML_Agent_Jeju_Simulator\ML_Agent_Jejucamp2017\Assets\Scripts\Academy.cs:208

(Filename: <4b9f316768174388be8ae5baf2e6cc02> Line: 0)

Question about ".ckpt" files

Hi,

I have downloaded the GitHub repo and started the training process. May I know where the models are saved?
I could not find any ".ckpt" files in the directory while training was running.

Unity simulator crashing

The Unity simulator opens correctly when launched manually the first time, but when I run it with the Python code (DQN_image) it shows the Unity splash screen and crashes. After that, it also crashes when I try to open it manually. I have tried this on Windows 10 and Ubuntu 20.04.

Additionally, I am not able to run the simulator's .app on macOS Catalina.

Thanks,
Alex

Running Simulator

image

Do I need to install other dependencies, such as Unity3D, to run this simulator? The simulator always crashes and stops responding while the progress still says "observing". Also, do you have any idea how to make the simulator run smoothly on my PC?
btw I run this simulator with:
CPU : AMD Ryzen 5 4600H @3.00 GHz
GPU : NVIDIA Geforce GTX 1650
Memory : 8GB

ipynb file

Which .ipynb file should we run in the ML algorithms folder?
Sorry, I am a beginner.
