
morvanzhou / tutorials

11.4K stars · 667 watchers · 5.7K forks · 61.18 MB

Machine learning tutorials

Home Page: https://morvanzhou.github.io/tutorials

License: MIT License

Python 87.09% Jupyter Notebook 12.91%
machine-learning neural-network tensorflow python sklearn theano threading multiprocessing numpy

tutorials's Introduction


I am 周沫凡 (Morvan Zhou); "莫烦Python" is simply a homophone. I enjoy making things and sharing what I learn, so you will find many useful things here that can save you detours. You can also find everything about me here.

Some of the contents of this Python tutorial:

Sponsorship and Support

I wrote these tutorials in my spare time and recorded them as videos. If you find them helpful, please also share them with friends who want to learn. If you value what I share, please consider a small donation so I can keep producing better content for everyone.

tutorials's People

Contributors

chunhuajiang, codemayq, jningwei, morvanzhou, xqtbox


tutorials's Issues

Keras attention: CNN + LSTM

Hello Morvan,

The LSTM output S1 is a one-dimensional vector of length n.
One convolutional layer of the CNN outputs C with shape (n, m), where m is the number of filters.
I want to compute the cosine similarity a between S1 and each filter output in C, weight C by a (a * C), and then add a Dense layer for classification.

Can this be implemented by overriding the dot function of Keras's Merge layer?
from keras.layers import Merge
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Convolution1D, GlobalMaxPooling1D
from keras.layers import LSTM
from keras.datasets import imdb

# set parameters:
max_features = 5000
maxlen = 400
batch_size = 32
embedding_dims = 50
nb_filter = 250
filter_length = 3
# hidden_dims = 250
nb_epoch = 2

print('Loading data...')
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')

print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)

print('Build model...')

right_branch = Sequential()
right_branch.add(Embedding(max_features,
                           embedding_dims,
                           input_length=maxlen))

# we add a Convolution1D, which will learn nb_filter
# word group filters of size filter_length:
right_branch.add(Convolution1D(nb_filter=nb_filter,
                               filter_length=filter_length,
                               border_mode='valid',
                               activation='relu',
                               subsample_length=1))

left_branch = Sequential()
left_branch.add(Embedding(max_features,
                          embedding_dims,
                          input_length=maxlen))
left_branch.add(LSTM(398, return_sequences=False))

merged = Merge([left_branch, right_branch], mode='dot', output_shape=lambda x: x[0])

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(1))
final_model.add(Activation('sigmoid'))

print('compile...')
final_model.compile(loss='binary_crossentropy',
                    optimizer='adam',
                    metrics=['accuracy'])

print('fit...')
final_model.fit([X_train, X_train], y_train,
                batch_size=1,
                nb_epoch=nb_epoch,
                validation_data=([X_test, X_test], y_test))

score, acc = final_model.evaluate([X_test, X_test], y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)

from keras.utils.visualize_util import plot
plot(final_model, to_file='model8.png', show_shapes=True)
This is the dot function I modified inside Merge; I also removed the following shape check:

if shape1[self.dot_axes[0]] != shape2[self.dot_axes[1]]:
    raise Exception('Dimension incompatibility using dot mode: ' +
                    '%s != %s. ' % (shape1[self.dot_axes[0]], shape2[self.dot_axes[1]]) +
                    'Layer shapes: %s, %s' % (shape1, shape2))

elif self.mode == 'dot':
    l1 = inputs[0]
    l2 = inputs[1].T
    attention = []
    for i in range(1, 17):
        # cosine similarity between l1 and the i-th row of l2
        attention.append(K.sum(l1 * l2[i]) /
                         (K.sqrt(K.sum(l1 * l1)) * K.sqrt(K.sum(l2[i] * l2[i]))))
    return attention
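For reference, the same idea can be expressed without patching Merge by using the Keras 2 functional API, where Dot(normalize=True) computes cosine similarity directly. The sketch below is only an illustration of my reading of the question (the 250 filters, kernel size 3 and LSTM(398) are taken from the snippet above; everything else is an assumption), not a verified solution:

from keras.models import Model
from keras.layers import Input, Embedding, Conv1D, LSTM, Reshape, Dot, Flatten, Dense

maxlen, max_features, embedding_dims = 400, 5000, 50
n_steps = maxlen - 3 + 1                 # 398 conv time steps with kernel_size=3, 'valid' padding

inp = Input(shape=(maxlen,))                          # padded word indices
emb = Embedding(max_features, embedding_dims)(inp)    # one shared embedding for both branches

conv = Conv1D(filters=250, kernel_size=3, padding='valid', activation='relu')(emb)  # (batch, 398, 250)
lstm = LSTM(n_steps)(emb)                             # (batch, 398), matching the conv time steps
lstm = Reshape((1, n_steps))(lstm)                    # (batch, 1, 398)

# cosine similarity between the LSTM vector and each of the 250 filter columns of C
sim = Dot(axes=(2, 1), normalize=True)([lstm, conv])  # (batch, 1, 250)
sim = Flatten()(sim)                                  # (batch, 250)

out = Dense(1, activation='sigmoid')(sim)             # classify on the similarity scores
model = Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

If the goal is instead to re-weight C by these scores (a * C) before the Dense layer, a Multiply/Lambda step can be inserted; the pipeline above simply mirrors the Merge('dot') followed by Dense(1) structure of the snippet.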

Problem when running the RNN2 code

I typed the code following the example, but when computing the error, softmax_cross_entropy_with_logits() raises an error because the shapes of its arguments differ: pred has shape [128, 10] while y has shape [1, 10], so the first dimensions do not match...

Error message: ValueError: Dimension 0 in both shapes must be equal, but are 128 and 1. Shapes are [128,10] and [1,10]. for 'softmax_cross_entropy_with_logits_sg' (op: 'SoftmaxCrossEntropyWithLogits') with input shapes: [128,10], [1,10].
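For what it's worth, the shape contract the error message describes can be sketched like this (a minimal TF1-style toy with hypothetical names, not the tutorial's code): logits and labels must share the same [batch_size, n_classes] shape, so ys should be declared with a None batch dimension and fed a whole batch of labels rather than a single one.

import tensorflow as tf

n_classes = 10
xs = tf.placeholder(tf.float32, [None, 28 * 28])
ys = tf.placeholder(tf.float32, [None, n_classes])      # not shape [1, n_classes]

W = tf.Variable(tf.random_normal([28 * 28, n_classes]))
b = tf.Variable(tf.zeros([n_classes]))
logits = tf.matmul(xs, W) + b                            # pred: [batch_size, 10]

cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=ys))
# At run time, feed a whole batch of labels, e.g. batch_ys with shape [128, 10],
# so that labels and logits agree in their first dimension:
# sess.run(cost, feed_dict={xs: batch_xs, ys: batch_ys})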

LSTM: About data sequencing in 7-RNN_Classifier_example.py

Right now the dataset is X = {x1, x2, x3, ..., xn} with shape [n, m], where x1, x2, ..., xn are the samples of X,
and the label data is y with shape [n, k].
If I use a time window of length 2, then after the reshape
X = tf.reshape(X, [int(n/2), 2, m])
X has shape [n/2, 2, m].
But then I have a problem computing the cost with
cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
because X and y now have different first dimensions.

Does anybody know how to solve this problem?


Right now I have a dataset X = {x1, x2, x3, ..., xn} with shape [n, m],
where each xi contains several variables and has shape [m].
For example, X = [[1,10,100],[2,20,200],[3,30,300]], i.e. X is made up of the samples x1, x2, ....

The label samples are y = {y1, y2, ..., yn} with shape [n, k],
for example Y = [[1,0,0],[0,1,0],[0,0,1]].
If this LSTM turns the data into sequences with, say, a time window time_step = 2, then
X = tf.reshape(X, [int(n/2), 2, m])

After this windowing only n/2 samples remain, with shape [n/2, 2, m].
Because the dimensions no longer match, the cost cannot be computed:
cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_rnn, labels=y))

How should a dataset like this be handled?
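One possible way to line the shapes up again (my own assumption about how to label the windows, not the tutorial's approach) is to give every window a single label, for example the label of its last time step, so that X and y keep the same first dimension:

import numpy as np

n, m, k, time_step = 6, 3, 3, 2
X = np.arange(n * m, dtype=np.float32).reshape(n, m)    # [n, m] samples
y = np.eye(k, dtype=np.float32)[np.arange(n) % k]       # [n, k] one-hot labels (toy values)

n_windows = n // time_step
X_seq = X[:n_windows * time_step].reshape(n_windows, time_step, m)   # [n/2, 2, m]
y_seq = y[time_step - 1 : n_windows * time_step : time_step]         # [n/2, k]: label of each window's last step

print(X_seq.shape, y_seq.shape)   # (3, 2, 3) (3, 3) -- first dimensions match again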

How to import other kind of data?

Great tutorial!

I have a few handwritten character samples, stored in separate jpg files each.

Could you please suggest how to import data from those files?

Most CNN tutorials only cover the MNIST or CIFAR datasets, and those examples ship as built-in TF loaders, which makes it harder to see how to adapt them to other applications.

Thanks,
Bayram
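As a starting point, here is a small sketch of reading such files with Pillow and numpy (the paths, image size and normalization are assumptions to adapt to your data):

import glob
import numpy as np
from PIL import Image

paths = sorted(glob.glob('characters/*.jpg'))           # one handwritten character per file
images = []
for p in paths:
    img = Image.open(p).convert('L').resize((28, 28))   # grayscale, MNIST-sized
    images.append(np.asarray(img, dtype=np.float32) / 255.0)

X = np.stack(images).reshape(-1, 28 * 28)   # flattened like the MNIST examples
print(X.shape)
# Labels would come from your file names or folder structure, e.g. one folder per class.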

InvalidArgumentError in tutorials/tensorflowTUT/tf15_tensorboard/full_code.py

Hi Morvan,

Thanks for all your efforts and help! I am watching your videos.

I am trying to execute tutorials/tensorflowTUT/tf15_tensorboard/full_code.py with TensorFlow 1.0 on Windows 10. I replaced some tf functions because they have new names in 1.0, but I still end up with the errors below. I have spent a lot of time on this and failed to fix it. Any idea?

thanks in advance !


Using matplotlib backend: Qt5Agg


InvalidArgumentError Traceback (most recent call last)
C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1021 try:
-> 1022 return fn(*args)
1023 except errors.OpError as e:

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1003 feed_dict, fetch_list, target_list,
-> 1004 status, run_metadata)
1005

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
468 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 469 pywrap_tensorflow.TF_GetCode(status))
470 finally:

InvalidArgumentError: You must feed a value for placeholder tensor 'inputs/y_inputs' with dtype float
[[Node: inputs/y_inputs = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

During handling of the above exception, another exception occurred:

InvalidArgumentError Traceback (most recent call last)
in ()
66 sess.run(train_step, feed_dict={xs:x_data, ys:y_data})
67 if i % 50 == 0:
---> 68 result = sess.run(merged, feed_dict={xs:x_data})
69 # writer.add_summary(result, i)
70

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
765 try:
766 result = self._run(None, fetches, feed_dict, options_ptr,
--> 767 run_metadata_ptr)
768 if run_metadata:
769 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
963 if final_fetches or final_targets:
964 results = self._do_run(handle, final_targets, final_fetches,
--> 965 feed_dict_string, options, run_metadata)
966 else:
967 results = []

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1013 if handle is None:
1014 return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1015 target_list, options, run_metadata)
1016 else:
1017 return self._do_call(_prun_fn, self._session, handle, feed_dict,

C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1033 except KeyError:
1034 pass
-> 1035 raise type(e)(node_def, op, message)
1036
1037 def _extend_graph(self):

InvalidArgumentError: You must feed a value for placeholder tensor 'inputs/y_inputs' with dtype float
[[Node: inputs/y_inputs = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'inputs/y_inputs', defined at:
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\__main__.py", line 3, in <module>
app.launch_new_instance()
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 474, in start
ioloop.IOLoop.instance().start()
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 887, in start
handler_func(fd_obj, events)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 390, in execute_request
user_expressions, allow_stdin)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 501, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2821, in run_ast_nodes
if self.run_code(code, result):
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 40, in
ys = tf.placeholder(tf.float32, [None, 1], name='y_inputs')
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1520, in placeholder
name=name)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2149, in _placeholder
name=name)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 763, in apply_op
op_def=op_def)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\geldqb\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1264, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputs/y_inputs' with dtype float
[[Node: inputs/y_inputs = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
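Judging from the traceback, line 68 evaluates the merged summaries while feeding only xs, but the loss summary also depends on ys. A sketch of the adjusted loop fragment (same variable names as in full_code.py; treat it as a guess to verify, not a confirmed patch):

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # feed ys here as well, because the merged summary includes the loss
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)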

save and restore

Hi Morvan,
Hello! Your tutorials and code are excellent.
About the Reinforcement Learning part: after training a DQN (and similar models), I tried to save and restore the network, but the restored network does not behave the same as right after training. I suspect my save/restore approach is wrong. Could you add a saver/restore part to your code? Many thanks.

Alex
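For reference, a minimal TF1 save/restore sketch (not taken from the repository; the path and variable are placeholders). One common cause of the symptom described above is running the variable initializer after restoring, which overwrites the trained weights:

import tensorflow as tf

w = tf.get_variable('w', shape=[4, 2])      # stand-in for the network's variables
saver = tf.train.Saver()                    # by default covers all saveable variables

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training happens here ...
    saver.save(sess, './dqn.ckpt')

# later: rebuild the identical graph, then restore instead of re-initializing
with tf.Session() as sess:
    saver.restore(sess, './dqn.ckpt')       # do not run the initializer after this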

Spelling mistake

In tutorials/tensorflowTUT/tensorflow8_feeds.py, output is misspelled as ouput.

TensorFlow visualization bug: tf.train.SummaryWriter("/Users/taw/logs", sess.graph) raises an error

Error output:
Traceback (most recent call last):
  File "/Users/taw/PycharmProjects/stractTest/tensorTest4.py", line 46, in <module>
    writer = tf.train.SummaryWriter("/Users/taw/logs", sess.graph)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/tensorflow/python/training/summary_io.py", line 82, in __init__
    self.add_graph(graph_def)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/tensorflow/python/training/summary_io.py", line 128, in add_graph
    event = event_pb2.Event(wall_time=time.time(), graph_def=graph_def)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 519, in init
    _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 450, in _ReraiseTypeErrorWithFieldName
    six.reraise(type(exc), exc, sys.exc_info()[2])
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 517, in init
    copy.MergeFrom(new_val)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 1208, in MergeFrom
    "expected %s got %s." % (cls.__name__, type(msg).__name__))
TypeError: Parameter to MergeFrom() must be instance of same class: expected GraphDef got Graph. for field Event.graph_def
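The error suggests this TensorFlow build expects a GraphDef where a Graph object is being passed. Two one-line variants that are commonly used, shown as a sketch against the quoted line (to be verified on your installed version):

# on old TensorFlow versions whose SummaryWriter expects a GraphDef:
writer = tf.train.SummaryWriter("/Users/taw/logs", sess.graph_def)

# on TensorFlow >= 1.0, the class was renamed and accepts sess.graph directly:
# writer = tf.summary.FileWriter("/Users/taw/logs", sess.graph)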

Nothing appears in the Graph tab in TensorBoard

@MorvanZhou I wrote the code from section 14 of the TensorBoard tutorial exactly as in your code, but it does not work for me: nothing appears in the Graph tab in TensorBoard. I am using Google Chrome on Windows. :(

A question for you, Morvan

I am a fan of yours,
a ** TensorFlow enthusiast.
I would really like to build my own AI chatbot, but NLTK has nothing for Chinese, so I want to know where to start if I want a Chinese chatbot. My code is based on https://github.com/Conchylicultor/DeepQA. I have studied it for a long time but just cannot get Chinese to work. I hope you can take a look when you have time; I would love to get a Chinese version running.

PS: I can put together the Chinese training corpus myself; DeepQA currently uses an English corpus. I think the key issue is Chinese word segmentation. There seems to be a jieba segmentation project on GitHub, but I really don't know how to use it.

Looking forward to your reply!
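Since the post mentions jieba but not how to call it, here is a tiny usage sketch (the segmentation shown in the comment is only indicative, and feeding the result into DeepQA's pipeline is an assumption about that project):

import jieba

sentence = '我想做一个中文的聊天机器人'
words = jieba.lcut(sentence)      # e.g. ['我', '想', '做', '一个', '中文', '的', '聊天', '机器人']
print(' '.join(words))            # space-separated tokens, so the text looks like English words downstream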

Could you explain how to choose the shapes of parameters in machine learning, especially in Keras or TensorFlow? Thanks!

Hello Morvan, I am a loyal follower of your courses.

I have benefited a lot from your TensorFlow and Keras courses, but one problem has been troubling me in real projects for a long time. For example:

 1.  I need to build an LSTM sequence-prediction model.
 2.  x is a 1-row, 3-column array (1*3) and y is a single 1*1 value.
 3.  Each training step feeds in 500 x samples and 500 y samples.

I have looked at many TensorFlow and Keras examples, and different people shape the inputs differently. Some set (x, y) to (500, 3, 1; 500, 1, 1), while others use something like (500*3, 1; 500, 1).

When I set things up following your Keras tutorial, I get an error saying that the shape of x (500*3*1) does not match the shape of y. This problem has tormented me for two months.

Is there a general method or rule for choosing the parameter shapes of sequence models? Every project's x and y are different, and sometimes y is an n*m matrix. Without some general guideline this is maddening.

I hope you can answer this question, or make a video specifically about how to set the x and y shapes in neural networks. Thank you so much!

— someone driven crazy by the shapes of x and y
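A small sketch of the convention most Keras examples follow for exactly this case (500 samples, each a 1x3 window, one target value each); it illustrates the (samples, timesteps, features) layout, which is one common choice rather than the only valid one:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X = np.random.rand(500, 3, 1)   # 500 samples, 3 time steps, 1 feature per step
y = np.random.rand(500, 1)      # one target value per sample

model = Sequential()
model.add(LSTM(32, input_shape=(3, 1)))   # (timesteps, features); the batch dimension is implicit
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=2, batch_size=50)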

maybe wrong in A3C_RNN.py

  1. Lines 84/85 should be self.a_loss and self.c_loss.
  2. The use of the RNN state may be wrong:
    a) the RNN states of the global network and the local networks are mixed up;
    b) line 160 updates the local RNN state, but line 146 still updates the RNN hidden state with the same observed state.

Requests for additions to TensorFlow tutorials

Hi @MorvanZhou! Really enjoying your tutorials on TensorFlow 😄. It would be great if you could add tutorials on the following topics:

  • variable_scope / name_scope and tensorflow scopes in general
  • Data manipulation / Data input functions in tensorflow. I'm finding it extremely difficult to find the correct functions to get my data in the correct format.

Thanks!

autoencoder

I found that the weights in the autoencoder code never change during training.
This is a serious bug.

Autoencoder fails to converge?

Hello!
I am using an autoencoder to learn image features and am training it on the CIFAR10 data, but after training for a while the model does not converge.
For comparison, most people use the MNIST handwriting data, which is essentially 0/1-valued; I normalize CIFAR10 to the range 0-1 before training, but it still does not converge. Maybe my network is too simple?
I am using a (32*32 = 1024 -> 512 -> 256 -> 512 -> 1024) model. Have you built this kind of autoencoder before?

Thank you 🙏

Deep understanding

Hi,
Can you recommend any source for understanding the actual architecture built with TensorFlow in the RNN classification example (MNIST)? I thought that the simplest RNN would work with three weight matrices, W_hx, W_hh and W_yh, but I only saw two in the tutorial and I don't quite understand why.

Regards,
Gissella B.
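A small numpy illustration of one likely reason only two matrices are visible (an assumption about the cell internals rather than a claim about the tutorial file): TensorFlow's RNN cells store the input-to-hidden and hidden-to-hidden weights as a single kernel applied to the concatenation [x, h], which is algebraically the same as using separate W_hx and W_hh:

import numpy as np

n_in, n_hidden = 4, 3
x = np.random.rand(1, n_in)
h = np.random.rand(1, n_hidden)
W_hx = np.random.rand(n_in, n_hidden)
W_hh = np.random.rand(n_hidden, n_hidden)

separate = x @ W_hx + h @ W_hh                                        # three-matrix view (W_yh sits outside the cell)
combined = np.concatenate([x, h], axis=1) @ np.vstack([W_hx, W_hh])   # single-kernel view

print(np.allclose(separate, combined))   # True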

A question I would like to ask

After generating log files with writer = tf.summary.FileWriter("logs/", sess.graph) and opening them in TensorBoard, how do we tell multiple files apart? Or how can a file be renamed? Renaming it directly did not work for me.
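A common workaround (a suggestion, not something from the tutorial) is to write each run into its own subdirectory under logs/; TensorBoard then lists the runs by directory name instead of by the event-file name:

import time
import tensorflow as tf

run_name = time.strftime('run_%Y%m%d_%H%M%S')          # or any descriptive name
with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs/' + run_name, sess.graph)
    writer.close()
# then: tensorboard --logdir logs   (each subdirectory shows up as a separate run)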

High memory usage when testing tutorials/tensorflowTUT/tf18_CNN3/full_code.py

When I run tutorials/tensorflowTUT/tf18_CNN3/full_code.py,
I find that the test step feeds the whole test set (10,000 samples) into the network at once, which makes the test step use about 6 GB of memory.

I think this can be changed by setting the test batch_size to 100 and running several batches.

However, in my tests, feeding 100 batches of batch_size 100 takes far longer than feeding a single batch of batch_size = 10000.

Here is my test code:

import numpy as np
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    print ("iter: %d" % i)
    if i % 50 == 0:
        w = []
        for _ in range(10):
            batch_txs, batch_tys = mnist.test.next_batch(100)
            a = (compute_accuracy(batch_txs, batch_tys))
            w.append(a)
        print ("Acc: %f" % np.mean(w))

PolicyGradient: why are the actions also discretized?

hi morvan,
@MorvanZhou
After watching the PolicyGradient tutorial I have a doubt. The method is said to handle continuous actions and to have an advantage over Q-learning there, but in the implementation the final actions are still discretized into nb_actions, which is the same as the number of outputs of a Q network. So where is the advantage in handling a large, continuous set of actions?

Thanks!
chao

How to run the examples under Python 2.7

Hello Morvan!
I am a beginner and your videos have helped me a lot; the explanations are excellent. Thank you for sharing!
When I run your example tutorials/tensorflowTUT/tf14_tensorboard/ with Python 2.7,
it raises an error and I don't know how to fix it. The error is:
Traceback (most recent call last):
  File "/home/py/文档/practice/tensorflow_Visualization2.py", line 46, in <module>
    writer = tf.train.SummaryWriter('logs/', sess.graph)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/summary_io.py", line 82, in __init__
    self.add_graph(graph_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/summary_io.py", line 128, in add_graph
    event = event_pb2.Event(wall_time=time.time(), graph_def=graph_def)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 519, in init
    _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 450, in _ReraiseTypeErrorWithFieldName
    six.reraise(type(exc), exc, sys.exc_info()[2])
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 517, in init
    copy.MergeFrom(new_val)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1208, in MergeFrom
    "expected %s got %s." % (cls.__name__, type(msg).__name__))
TypeError: Parameter to MergeFrom() must be instance of same class: expected GraphDef got Graph. for field Event.graph_def

Filters

Hi Morvan!
I would like to know whether you have any idea how to guarantee that the filters we define for a certain layer will not all end up responding to the same pattern. Do the random initial weights guarantee this?

Gissella

Error when running your CNN program

The error is: UnicodeEncodeError: 'utf-8' codec can't encode character '\udcd5' in position 87: surrogates not allowed. I don't know which part it comes from.

CNN example question

Hi, first thanks for the videos on youtube and the code here. Enjoyed the learning experience a lot!

Just a question: I tried to follow your video about building a CNN with Keras (link). Some of the Keras APIs seem to have changed after you recorded the video, so I directly reused your code here.

But my training and testing accuracy are both only around 0.4, while in your video it reaches 0.976. Could you suggest some possible reasons? Thanks!

kerasTUT 6-CNN tutorial needs update

I have tried to update this tutorial to the latest Keras, but the problem seems to lie in the Keras source code: even when I use the latest API, the reported error points inside Keras itself.

However, I am a newbie, so I may be totally wrong. Could you try to update this tutorial? Thanks a lot!

I managed to update part of the tutorial before hitting this error:

# Another way to build your CNN
model = Sequential()

# here explain the meaning and effects of number of filters  https://youtu.be/zHop6Oq757Y?list=PLXO45tsB95cKhCSIgTgIfjtG5y0Bf_TIY&t=250
# Conv layer 1 output shape (32, 28, 28)
model.add(Conv2D(
    filters=32,
	data_format='channels_first',
    kernel_size=(5,5),
    padding='same',     # Padding method
    dim_ordering='th',      # if use tensorflow, to set the input dimension order to theano ("th") style, but you can change it.
    input_shape=(1,         # channels
                 28, 28,)    # height & width
))

model.add(Activation('relu'))

# Pooling layer 1 (max pooling) output shape (32, 14, 14)
model.add(MaxPooling2D(
    pool_size=(2, 2),
    strides=(2, 2),
    padding='same'   # Padding method
))

error message I got:

Focus on one: /Users/Natsume/Documents/kur_experiment/LIE_examples/kerasTUT
(dlnd-tf-lab)  ->python 6-CNN_example.py
Using Theano backend.
6-CNN_example.py:58: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(padding="same", data_format="channels_first", kernel_size=(5, 5), filters=32, input_shape=(1, 28, 28...)`
  28, 28,)    # height & width
/Users/Natsume/Downloads/keras/keras/backend/theano_backend.py:1814: UserWarning: dict_keys(['filter_dilation']) are now deprecated in `tensor.nnet.abstract_conv.conv2d` interface and will be ignored.
  filter_dilation=dilation_rate)
Traceback (most recent call last):
  File "6-CNN_example.py", line 67, in <module>
    padding='same'   # Padding method
  File "/Users/Natsume/Downloads/keras/keras/models.py", line 475, in add
    output_tensor = layer(self.outputs[0])
  File "/Users/Natsume/Downloads/keras/keras/engine/topology.py", line 585, in __call__
    output = self.call(inputs, **kwargs)
  File "/Users/Natsume/Downloads/keras/keras/layers/pooling.py", line 154, in call
    data_format=self.data_format)
  File "/Users/Natsume/Downloads/keras/keras/layers/pooling.py", line 217, in _pooling_function
    pool_mode='max')
  File "/Users/Natsume/Downloads/keras/keras/backend/theano_backend.py", line 1945, in pool2d
    mode='max')
TypeError: pool_2d() got an unexpected keyword argument 'ws'
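For comparison, the same two layers written purely against the Keras 2 API, with the Keras 1 dim_ordering argument dropped. Whether this also avoids the Theano pool_2d() error depends on the installed Theano version, so treat it as something to try rather than a confirmed fix:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation

model = Sequential()
model.add(Conv2D(filters=32,
                 kernel_size=(5, 5),
                 padding='same',
                 data_format='channels_first',
                 input_shape=(1, 28, 28)))   # (channels, height, width)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2),
                       strides=(2, 2),
                       padding='same',
                       data_format='channels_first'))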

Hello Morvan, please help me

Hello, I am watching your TensorFlow tutorials and ran into a problem. Could you help me?

I am working on calibrating sensor data. The sensor gives accelerations on the x, y and z axes, and two factors make it inaccurate: a bias (d) and a scale-factor error (k), i.e. dx, dy, dz and kx, ky, kz. I think TensorFlow could be used to calibrate these data.

I watched your "TensorFlow 11, example 3: build a neural network" video and my problem looks similar, but after modifying the code for a long time I still cannot get the result I want. I hope you can give me some guidance, thank you.
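If reference (ground-truth) readings are available, the problem can be set up much like the build-a-network example, fitting one scale factor k and one bias d per axis. The sketch below rests entirely on that assumption and uses made-up data:

import numpy as np
import tensorflow as tf

raw = np.random.rand(200, 3).astype(np.float32)                        # x, y, z readings
reference = (raw * np.array([1.02, 0.98, 1.05], dtype=np.float32) + 0.1)  # stand-in "true" values

xs = tf.placeholder(tf.float32, [None, 3])
ys = tf.placeholder(tf.float32, [None, 3])
k = tf.Variable(tf.ones([3]))     # kx, ky, kz
d = tf.Variable(tf.zeros([3]))    # dx, dy, dz

calibrated = xs * k + d
loss = tf.reduce_mean(tf.square(calibrated - ys))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train, feed_dict={xs: raw, ys: reference})
    print(sess.run([k, d]))       # should approach the scale factors and bias above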

can not get the graph

Hi, I used your code and ran it in the terminal, but I cannot get the graph shown in your video. Could you check whether the code is missing something? Thanks.

Hello Morvan

When I run this CNN example in TensorFlow, the accuracy stays around 0.1 and the cost is above 10. What could be the reason?

initialize_all_variables is deprecated

for the code of tf14 and tf15

WARNING:tensorflow:From full_code.py:62 in .: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
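The change the warning asks for is a one-line swap in the tf14/tf15 scripts:

# old: sess.run(tf.initialize_all_variables())
sess.run(tf.global_variables_initializer())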

little typo

In Keras tutorial 5, the "s" is missing from epochs; it should be the following:

model.fit(X_train, y_train, epochs=2, batch_size=32)

pip3: how to install tkinter?

/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/liuguiyang/Documents/CodeProj/PyProj/MLCourse/source/QLearn/maze/demo_maze.py
Traceback (most recent call last):
  File "/Users/liuguiyang/Documents/CodeProj/PyProj/MLCourse/source/QLearn/maze/demo_maze.py", line 13, in <module>
    from source.QLearn.maze.maze_env import Maze
  File "/Users/liuguiyang/Documents/CodeProj/PyProj/MLCourse/source/QLearn/maze/maze_env.py", line 12, in <module>
    import tkinter as tk
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/tkinter/__init__.py", line 36, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ModuleNotFoundError: No module named '_tkinter'

Doubt in the code of autoencoder.py file

Hi Morvan,

Very humbly requesting you to clear up a doubt of mine.
In the decoding section of your code, you did not use the transposes of the encoder weight matrices.
Why is that?
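For reference, a tiny sketch of the tied-weights variant the question alludes to, where the decoder reuses the transpose of the encoder weights; keeping separate (untied) decoder weights, as the tutorial does, is an equally valid design choice:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
W_enc = tf.Variable(tf.random_normal([784, 256], stddev=0.1))
b_enc = tf.Variable(tf.zeros([256]))
b_dec = tf.Variable(tf.zeros([784]))

encoded = tf.nn.sigmoid(tf.matmul(x, W_enc) + b_enc)
decoded = tf.nn.sigmoid(tf.matmul(encoded, tf.transpose(W_enc)) + b_dec)  # decoder reuses W_enc^T
loss = tf.reduce_mean(tf.square(decoded - x))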

How to use a continuous color as the color of a line?

Hi Morvan,
Thanks for continuing to make great lessons!

I have a question about using matplotlib to draw a line whose color varies continuously along its length, where the color is driven by an array.

The original dataset and question are here

Could you teach me how to do it? Thanks
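One way to do this in matplotlib is a LineCollection whose segments are colored by an array; a self-contained sketch with made-up data (substitute your own x, y and color array):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

x = np.linspace(0, 10, 200)
y = np.sin(x)
c = np.cos(x)                                   # the array that drives the color

points = np.array([x, y]).T.reshape(-1, 1, 2)   # (n, 1, 2) points
segments = np.concatenate([points[:-1], points[1:]], axis=1)   # (n-1, 2, 2) line segments

lc = LineCollection(segments, cmap='viridis')
lc.set_array(c[:-1])                            # one color value per segment

ax = plt.gca()
ax.add_collection(lc)
ax.autoscale()
plt.colorbar(lc)
plt.show()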

Question about saving the model

How can the model in tf20_RNN2.2 be saved and loaded? And when reloading it, how should the RNN state be restored?

visualize cpu history

[screenshot: CPU history]

I am using Ubuntu 16. How can I visualize this CPU history? Looking forward to your reply, thanks a lot.

unhashable type: 'list' from basic/35_set.py

Traceback (most recent call last):
  File "d:\projects\machine_learning\mofan_tutorials\basic\35_set.py", line 13, in <module>
    print(set([char_list, sentence]))
TypeError: unhashable type: 'list'
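The error happens because a list is unhashable, so a list containing a list cannot be turned into a set; a tiny sketch of the usual fix (the values here are hypothetical, not necessarily what 35_set.py contains):

char_list = ['a', 'b', 'c', 'c', 'd', 'd', 'd']
sentence = 'Welcome Back to This Tutorial'

print(set(char_list))                   # build the set from the list's elements
print(set(char_list) | set(sentence))   # or take the union of both element sets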
