
car-behavioral-cloning's People

Contributors

cseas, kant, muety, naokishibuya


car-behavioral-cloning's Issues

Problem running drive.py model.h5

I changed the image size to 320x160 in utils.py, and this problem occurs while running "drive.py model.h5". I don't know where to fix it. Please help.

(car-behavioral-cloning) C:\Users\Arnon>python drive.py model.h5
Using TensorFlow backend.
2020-01-29 16:48:04.774223: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.782094: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.789933: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.798509: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.807787: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.816615: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.826754: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2020-01-29 16:48:04.836001: W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
NOT RECORDING THIS RUN ...
(8548) wsgi starting up on http://0.0.0.0:4567
(8548) accepted ('127.0.0.1', 51826)
connect d8ec6de7ce4648ee90a8ccbbef179ea1
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
127.0.0.1 - - [29/Jan/2020 16:49:42] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 200 0 16.036483
(8548) accepted ('127.0.0.1', 51831)
connect 3416a47d23bb4066a4a5785f61fe06d3
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
Error when checking : expected lambda_input_1 to have shape (None, 66, 200, 3) but got array with shape (1, 160, 320, 3)
It's still not working.
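A likely cause: the network is built with INPUT_SHAPE = (66, 200, 3), so raw 160x320 simulator frames have to be cropped and resized by utils.preprocess before prediction, rather than by changing the sizes in utils.py. A minimal sketch of that preprocessing chain, assuming the crop bounds of the original utils.py:

import cv2

IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS = 66, 200, 3

def preprocess(image):
    # crop away the sky and the car hood, then shrink to the network input size
    image = image[60:-25, :, :]
    image = cv2.resize(image, (IMAGE_WIDTH, IMAGE_HEIGHT), interpolation=cv2.INTER_AREA)
    # convert to the YUV colour space the model was trained on
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    return image

drive.py in this repo runs utils.preprocess on every incoming frame; if that call is removed or the size constants are changed, the shape error above appears.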

Missing dependency versions

The dependencies are unpinned, so installs pull the newest versions, which causes a lot of problems.
We need something like a pip freeze output with exact versions.

Example error:

Traceback (most recent call last):
  File "drive.py", line 19, in <module>
    sio = socketio.Server()
AttributeError: module 'socketio' has no attribute 'Server'
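A quick diagnostic sketch (a check, not a fix): this AttributeError usually appears when the package importable as socketio is not python-socketio, for example when an unrelated package also named socketio is installed. The package names here are assumptions about this environment.

import socketio

print(socketio.__file__)            # which installed package is actually being imported
print(hasattr(socketio, 'Server'))  # True for python-socketio

If it prints False, uninstalling the stray socketio package and installing python-socketio at a version matching the simulator usually resolves it.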

"GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 404

I just opened the Udacity simulator and cloned this repo to get a test run.
When I run
python drive.py model.h5
this keeps printing all over the terminal:
127.0.0.1 - - [06/Dec/2018 23:56:52] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 404 342 0.002979
(17772) accepted ('127.0.0.1', 56738)
and it keeps printing like this.

Please help me figure out what I am doing wrong.
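The 404 on the EIO=4 request usually points to a Socket.IO protocol mismatch between the simulator client and the installed python-socketio/python-engineio. A small sketch for recording the exact installed versions so they can be compared against a known-working setup; the package names are assumptions about what the environment contains:

import pkg_resources

for name in ('python-socketio', 'python-engineio', 'eventlet', 'Flask'):
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, 'not installed')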

list of pixel values?

Hi

Is there any way to get the list of pixel values? I could not find that in your code.

Regards,
Mohammad
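The pixel values are simply the numpy array returned when a frame is loaded; there is no separate list in the code. A short sketch, assuming images are loaded the way utils.load_image does (matplotlib.image.imread) and using a hypothetical file name:

import matplotlib.image as mpimg

img = mpimg.imread('data/IMG/center_example.jpg')  # hypothetical path
print(img.shape)               # e.g. (160, 320, 3)
print(img[0, 0])               # pixel at the top-left corner
print(img.reshape(-1, 3)[:5])  # first few (R, G, B) triples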

How to do it?

Dear Sir:
When I run drive.py I get this error. Please help.

usage: drive.py [-h] model [image_folder]
drive.py: error: the following arguments are required: model
An exception has occurred, use %tb to see the full traceback.

SystemExit: 2

Can't save image data

I'm getting the following error when specifying an output directory.

    ...
    File "drive.py", line 66, in telemetry
        image.save('{}.jpg'.format(image_filename))
AttributeError: 'numpy.ndarray' object has no attribute 'save'
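A minimal sketch of one workaround, assuming image in telemetry() is the H x W x 3 numpy array decoded from the simulator frame: numpy arrays have no .save() method, so wrap the array in a PIL Image before saving.

from PIL import Image
import numpy as np

def save_frame(image, image_filename):
    # convert the numpy array back into a PIL Image, then save as JPEG
    Image.fromarray(np.asarray(image, dtype=np.uint8)).save('{}.jpg'.format(image_filename))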

TypeError when running model.py

Hello,

I am trying to get model.py running -- I have already recorded some training data with the simulator and saved it to my desktop (csv file and images folder). I have activated the environment with all the dependencies.

Python couldn't find the log file, so I changed the pd.read_csv call as follows:
data_df = pd.read_csv('~/Desktop/driving_log.csv')
Maybe this causes a problem since I'm not using os.path.join(args.data_dir, ...)?

There was another error, so I added a header row to the csv file and labeled the respective columns (steering, etc.)

Now I am stuck with this error and haven't figured out how to resolve:

return load_image(data_dir, left), steering_angle + 0.2
TypeError: Can't convert 'float' object to str implicitly

I don't think anything in the return line should be converted to a string anyway.

Any tips would be appreciated! Apologies if this is a really noob question; I am still in the early stages of learning.

-Nicholas
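A hedged diagnostic sketch: the TypeError suggests that left reaches load_image() as a float (for example NaN) instead of a file path, which happens when the CSV columns are misaligned or the header row does not match the data. Checking the dtypes and forcing the path columns to strings usually exposes the problem; the column names are the ones model.py expects.

import os
import pandas as pd

data_df = pd.read_csv(os.path.expanduser('~/Desktop/driving_log.csv'))
print(data_df.dtypes)   # 'center', 'left', 'right' should be object (string) columns
print(data_df.head())
data_df[['center', 'left', 'right']] = data_df[['center', 'left', 'right']].astype(str)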

Getting error

Getting:

Using TensorFlow backend.
XXX lineno: 34, opcode: 0
Traceback (most recent call last):
  File "drive.py", line 105, in <module>
    model = load_model(args.model)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\saving.py", line 225, in _deserialize_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\saving.py", line 458, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\layers\__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\utils\generic_utils.py", line 145, in deserialize_keras_object
    list(custom_objects.items())))
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\sequential.py", line 301, in from_config
    model.add(layer)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\sequential.py", line 165, in add
    layer(x)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\layers\core.py", line 687, in call
    return self.function(inputs, **arguments)
  File "model.py", line 34, in <lambda>
    model.add(Lambda(lambda x: x/127.5-1.0, input_shape=INPUT_SHAPE))
SystemError: unknown opcode
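A hedged workaround sketch: 'SystemError: unknown opcode' typically means the Lambda layer's serialized bytecode was produced by a different Python version than the one now running drive.py. Rebuilding the architecture in the current interpreter and loading only the weights avoids deserializing that bytecode. This assumes model.py exposes the build_model(args) helper used for training and that it only reads keep_prob.

import argparse
from model import build_model

args = argparse.Namespace(keep_prob=0.5)  # assumption: build_model only reads args.keep_prob
model = build_model(args)                 # redefine the layers (including the Lambda) locally
model.load_weights('model.h5')            # read the HDF5 weights without touching the Lambda bytecode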

https://github.com/car-behavioral-cloning

Hello,
I get the following error when running the drive.py script:
usage: drive.py [-h] model [image_folder]
drive.py: error: the following arguments are required: model

No file named model.h5

I trained the model and it generated model-00x.h5 files for some epochs, but there was no file named model.h5.
Why is that?
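This is expected: training saves one checkpoint per epoch through a ModelCheckpoint whose file-name pattern includes the epoch number, roughly like the sketch below, so no file called model.h5 is ever written. Pass whichever checkpoint you want to drive.py (e.g. model-009.h5), or copy/rename it to model.h5.

from keras.callbacks import ModelCheckpoint

# a pattern like the one in model.py: one file per saved epoch, e.g. model-001.h5
checkpoint = ModelCheckpoint('model-{epoch:03d}.h5',
                             monitor='val_loss',
                             save_best_only=True,
                             mode='auto')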

Model isn't working

I have a few things to clarify. When I run drive.py with the model.h5 file, although it says 'Recording this run' and starts the wsgi server, nothing happens after that.

  1. Do we need the Unity engine to load the Udacity simulator?
     If yes, does Unity work on Ubuntu 16.04?

  2. The Python files, the simulator .exe, and the image folder containing the data are all in the same folder, and driving_log.csv has the absolute paths of the images. Where should each component live? Could a directory problem be the cause of my issue?

  3. What arguments do we need to provide when running drive.py and train.py?
     Will drive.py work even when the model is saved as a .ckpt rather than a .h5?

Issue during cv2 import (ImportError: /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0: undefined symbol: FT_Get_Var_Blend_Coordinates)

I'm getting the following issue while running the model, on the line

import cv2, os

The error shown is:
ImportError: /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0: undefined symbol: FT_Get_Var_Blend_Coordinates

Steps taken leading to issue:

  1. Installed Ubuntu 18.10
  2. Installed Anaconda
  3. Ran "git clone https://github.com/llSourcell/How_to_simulate_a_self_driving_car.git" in a terminal.
  4. Downloaded the Udacity Self-Driving Car Simulator (binary version 1).
  5. Ran "conda env create -f environments.yml" inside the cloned folder.
  6. Ran "conda activate car-behavioral-cloning".
  7. Opened the Udacity Self-Driving Car Simulator in autonomous mode.
  8. Ran "python drive.py model.h5".

After step 8 I ran into the issue mentioned above.
The issue can be easily reproduced in a virtual machine with Ubuntu 18.04 or 18.10.

train throttle too?

It looks like only the steering angle was trained in the model, not the throttle.

Did you try updating your model to have two outputs, one for steering and the other for throttle?

Thank you.
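A hedged sketch (not the repo's code) of how the final layer could be widened to predict both steering and throttle, assuming the data generator is also changed to yield a two-column target [steering, throttle]:

from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

INPUT_SHAPE = (66, 200, 3)

model = Sequential()
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=INPUT_SHAPE))
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(Conv2D(64, (3, 3), activation='elu'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(100, activation='elu'))
model.add(Dense(50, activation='elu'))
model.add(Dense(10, activation='elu'))
model.add(Dense(2))   # [steering, throttle] instead of the original Dense(1)
model.compile(loss='mse', optimizer='adam')

drive.py would then read both outputs from model.predict instead of computing throttle from the speed heuristic.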

Vulnerable to Adversarial Driving

This is a really great project that has helped me a lot.

Recently, I found that the NVIDIA self-driving model can be manipulated by adding imperceptible perturbations to the input image.

https://github.com/wuhanstudio/adversarial-driving

Adversarial attacks on autonomous driving concern me a lot. I'll try to improve the NVIDIA model to make it robust against them.

I'll create a PR if I manage it; I hope you'll find it helpful.

utils.preprocess

utils has no attribute preprocess. Can you please help me with this error?

AttributeError: module 'secrets' has no attribute 'token_bytes' (when I run the file drive.py with model.h5)

Hello, Mr. Naoki and everyone here.

I am using Mr. Naokishibuya's repository from here.
After training the model, I ran into a very difficult error.

python drive.py model-008.h5

result:
Traceback (most recent call last):
  File "drive.py", line 8, in <module>
    import socketio
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\socketio\__init__.py", line 3, in <module>
    from .client import Client
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\socketio\client.py", line 7, in <module>
    import engineio
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\__init__.py", line 5, in <module>
    from .server import Server
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\server.py", line 6, in <module>
    import secrets
ImportError: No module named 'secrets'

So I installed the python-ldap wheel (downloaded from the web site)

and secrets (secrets requires python-ldap, but pip install did not work for it).

python-ldap: https://www.lfd.uci.edu/~gohlke/pythonlibs/#python-ldap
python_ldap-3.2.0-cp35-cp35m-win_amd64.whl

Then I used the command 'pip install secrets', and it finally worked; secrets installed successfully.

But when I then ran

'python drive.py model.h5'

the result was as below.

################################
127.0.0.1 - - [25/Dec/2020 18:36:56] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 500 1324 0.001998
(3888) accepted ('127.0.0.1', 54090)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\eventlet\wsgi.py", line 573, in handle_one_response
    result = self.application(self.environ, start_response)
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\middleware.py", line 60, in __call__
    return self.engineio_app.handle_request(environ, start_response)
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\socketio\server.py", line 563, in handle_request
    return self.eio.handle_request(environ, start_response)
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\server.py", line 380, in handle_request
    transport, jsonp_index)
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\server.py", line 529, in _handle_connect
    sid = self.generate_id()
  File "C:\ProgramData\Anaconda3\envs\car-behavioral-cloning\lib\site-packages\engineio\server.py", line 503, in generate_id
    secrets.token_bytes(12) + self.sequence_number.to_bytes(3, 'big'))
AttributeError: module 'secrets' has no attribute 'token_bytes'

###########################################

Previously, in the server.py file, I had pasted in the code for token_bytes().
(source code from here:
https://github.com/python/cpython/blob/master/Lib/secrets.py
https://fossies.org/linux/Python/Lib/secrets.py )

because server.py contains code that uses it:

def generate_id(self):
    """Generate a unique session id."""
    id = base64.b64encode(
        token_bytes(12) + self.sequence_number.to_bytes(3, 'big'))
    self.sequence_number = (self.sequence_number + 1) & 0xffffff
    return id.decode('utf-8').replace('/', '_').replace('+', '-')

The above method in server.py uses token_bytes() from the secrets module,
so I just pasted the code I got from the web site into server.py.
Finally, the error is gone.
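For reference, a minimal sketch of that kind of fallback (an assumption about the patch, for Python 3.5 where the standard-library secrets module does not exist): secrets.token_bytes is essentially a thin wrapper around os.urandom.

import os

def token_bytes(nbytes=32):
    # same behaviour as secrets.token_bytes: nbytes of cryptographically random data
    return os.urandom(nbytes)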

But nothing happens.

The car doesn't move.

Please help me!

Why YUV?

Hi,

I was curious to know if there's any specific reason why you converted the RGB image to YUV?

Issue while running drive.py model-001.h5

While running the following command in cmd:
python drive.py model-001.h5
(a model that I have just trained), I get the following issue:
[screenshot: Anaconda Prompt (anaconda3), 12/12/2020 5:54 PM]
Is this issue related to drive.py or to another file?
Can anyone please help?

dns.name.EmptyLabel: A DNS label is empty.

When I try to run drive.py using the command python drive.py model.h5,
it shows the error dns.name.EmptyLabel: A DNS label is empty. How can I solve this?
Thanks in advance.

Library versions

Could you please include the TensorFlow and Keras library versions in environments.yml? I get errors in drive.py because the model is incompatible with the versions I have installed.

acc value less than val_acc

Hi there, I trained the model from scratch and just added the accuracy metric, but I am getting an acc value lower than val_acc. This is the output:
20000/20000 [==============================] - 121s - loss: 0.0338 - acc: 0.3356 - val_loss: 0.0172 - val_acc: 0.8464
Epoch 2/10
20000/20000 [==============================] - 102s - loss: 0.0309 - acc: 0.3292 - val_loss: 0.0149 - val_acc: 0.8321
Epoch 3/10
20000/20000 [==============================] - 99s - loss: 0.0299 - acc: 0.3304 - val_loss: 0.0165 - val_acc: 0.8607
Epoch 4/10
20000/20000 [==============================] - 100s - loss: 0.0296 - acc: 0.3266 - val_loss: 0.0166 - val_acc: 0.8464
Epoch 5/10
20000/20000 [==============================] - 104s - loss: 0.0281 - acc: 0.3313 - val_loss: 0.0149 - val_acc: 0.8750
Epoch 6/10
20000/20000 [==============================] - 105s - loss: 0.0280 - acc: 0.3247 - val_loss: 0.0218 - val_acc: 0.8464
Epoch 7/10
20000/20000 [==============================] - 109s - loss: 0.0266 - acc: 0.3267 - val_loss: 0.0128 - val_acc: 0.8643
Epoch 8/10
20000/20000 [==============================] - 103s - loss: 0.0260 - acc: 0.3263 - val_loss: 0.0185 - val_acc: 0.8464
Epoch 9/10
20000/20000 [==============================] - 103s - loss: 0.0255 - acc: 0.3316 - val_loss: 0.0118 - val_acc: 0.8357
Epoch 10/10
20000/20000 [==============================] - 111s - loss: 0.0254 - acc: 0.3327 - val_loss: 0.0153 - val_acc: 0.8643

how does the model run rgb2yuv?

I notice you do a colour space conversion to YUV while training.
However, it isn't implemented as a Keras layer (like Cropping2D),
so when you export the model and run it in the Unity simulator, the model receives RGB input.

Am I missing something here?

Thanks
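For reference, a hedged sketch (not part of this repo) of one way the conversion could travel with the exported model instead of living in utils.preprocess: express it as a Lambda layer. This assumes a TensorFlow version that provides tf.image.rgb_to_yuv, which is much newer than the TF pinned in environments.yml, so it is illustrative only.

import tensorflow as tf
from keras.layers import Lambda

# scale RGB to [0, 1] and convert to YUV inside the graph, so the saved model
# can be fed raw RGB frames directly
yuv_in_graph = Lambda(lambda x: tf.image.rgb_to_yuv(x / 255.0), input_shape=(66, 200, 3))

As the code stands, the conversion happens in utils.preprocess, so anything that serves the exported model must apply the same preprocessing before calling predict.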

model.py giving error for training on new data

Hi naokishibuya!!

Sir, when I try to train the model on my newly created data using this command:

python model.py

I get the error below. Please suggest where I am going wrong.

I am using Ubuntu 16.04; the data folder (containing driving_log.csv and the IMG folder) is in my home directory, and model.py is also in my home directory. Everything else works fine with the pre-trained data.


Using TensorFlow backend.
------------------------------
Parameters
------------------------------
data_dir             := data
samples_per_epoch    := 20000
nb_epoch             := 10
learning_rate        := 0.0001
save_best_only       := True
test_size            := 0.2
keep_prob            := 0.5
batch_size           := 40
------------------------------
Traceback (most recent call last):
  File "model.py", line 162, in <module>
    main()
  File "model.py", line 154, in main
    data = load_data(args)
  File "model.py", line 33, in load_data
    X = data_df[['center', 'left', 'right']].values
  File "/home/vinayak/miniconda3/envs/car-behavioral-cloning/lib/python3.5/site-packages/pandas/core/frame.py", line 2056, in __getitem__
    return self._getitem_array(key)
  File "/home/vinayak/miniconda3/envs/car-behavioral-cloning/lib/python3.5/site-packages/pandas/core/frame.py", line 2100, in _getitem_array
    indexer = self.loc._convert_to_indexer(key, axis=1)
  File "/home/vinayak/miniconda3/envs/car-behavioral-cloning/lib/python3.5/site-packages/pandas/core/indexing.py", line 1231, in _convert_to_indexer
    raise KeyError('%s not in index' % objarr[mask])
KeyError: "['center' 'left' 'right'] not in index"
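A hedged sketch of one likely fix: driving_log.csv written by the simulator has no header row, so the columns must be named explicitly when the file is loaded, otherwise ['center', 'left', 'right'] will not be in the index. The last three column names below follow the original model.py and may differ from your file; only the first four matter here.

import os
import pandas as pd

columns = ['center', 'left', 'right', 'steering', 'throttle', 'reverse', 'speed']
data_df = pd.read_csv(os.path.join(os.path.expanduser('~'), 'data', 'driving_log.csv'),
                      names=columns)
X = data_df[['center', 'left', 'right']].values
y = data_df['steering'].values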

ask throttle in drive.py

I want to clarify the throttle formula in drive.py:
throttle = 1.0 - steering_angle**2 - (speed/speed_limit)**2
Why did you do that? How is the throttle connected to the steering angle? Can you explain this?
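For reference, the heuristic can be read as: reduce throttle when the car is turning hard (the steering_angle**2 term) and when it is already close to the speed limit (the (speed/speed_limit)**2 term), so the car coasts through sharp curves and does not exceed the limit on straights. A minimal sketch, with the default limit as an assumption:

def compute_throttle(steering_angle, speed, speed_limit=25.0):
    # large steering angles and speeds near the limit both push throttle toward zero
    return 1.0 - steering_angle ** 2 - (speed / speed_limit) ** 2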

connectivity issue and secrets issue

When I was running this code, the secrets module problem occurred. After I resolved that, the code ran successfully, but then I keep waiting and waiting and the car does not move; it never connects. Please help me resolve this connectivity issue. I also recommend adding the secrets module to environment.yml.
Thanks, waiting for your reply.

Horizontal translation is useful for difficult curve handling?

Hi Dr. Shibuya, I really love your work!

Regarding:

The horizontal translation is useful for difficult curve handling (i.e. the one after the bridge).

I don't quite understand this; can you elaborate? My thinking about why this model works is that it extracts the most important features from the camera - the outlines of the road (i.e. lanes) - and makes decisions from those. Thanks!
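For context, a hedged sketch modelled on the random translation augmentation in this repo's utils.py: the image is shifted horizontally (and a little vertically) and the steering label is adjusted in proportion to the shift, so frames that look as if the car has drifted toward the road edge are paired with a corrective steering angle. That is what helps on hard curves such as the one after the bridge, where recovery-style steering is needed.

import cv2
import numpy as np

def random_translate(image, steering_angle, range_x=100, range_y=10, angle_per_pixel=0.002):
    trans_x = range_x * (np.random.rand() - 0.5)
    trans_y = range_y * (np.random.rand() - 0.5)
    steering_angle += trans_x * angle_per_pixel          # corrective steering for the shift
    trans_m = np.float32([[1, 0, trans_x], [0, 1, trans_y]])
    height, width = image.shape[:2]
    image = cv2.warpAffine(image, trans_m, (width, height))
    return image, steering_angle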

arc tan activation in last layer

Hi,

I saw the same NVIDIA model used by Lex Fridman (from MIT); they used the arctan activation function in the final layer to bound the outputs between -20 and 20, and ReLU for the previous layers.

You use a linear activation for the last layer and ELU for the previous layers. Is there any reason for this, or will both approaches work fine?

Data

Hi Sir,

I currently have about 64,000 images in my dataset and the performance of my model is not very good. Could this be because the model is overfitting?
