This may end up being more of a question/discussion about the Pi's limitations than a bug report. When running the Deep MNIST for Experts example (https://www.tensorflow.org/versions/r0.9/tutorials/mnist/pros/index.html), the Pi 3 runs out of resources during the evaluation step. I searched quite a bit for a solution and found people saying something to the effect of "use batches to evaluate", but that is about as much detail as anyone gives on the topic.
So is there a way to evaluate in batches, instead of doing it all at once as the example does?
print("test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
I have not been able to find this process described anywhere, and I will likely need to implement it in Pi applications, given the Pi's limited resources for processing large data sets.
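For reference, the kind of batched evaluation I have in mind looks roughly like this. This is only a sketch: `eval_batch` is a hypothetical stand-in for the tutorial's per-batch `accuracy.eval(feed_dict=...)` call, written as a plain function so the batching/averaging logic itself is clear.

```python
def batched_accuracy(images, labels, eval_batch, batch_size=500):
    """Average per-batch accuracy over the whole test set.

    Each batch's accuracy is weighted by its size, so a short
    final batch does not skew the overall result.

    `eval_batch(image_chunk, label_chunk)` would, in the tutorial's
    graph, be something like (assumption, names from the tutorial):
        lambda xs, ys: accuracy.eval(
            feed_dict={x: xs, y_: ys, keep_prob: 1.0})
    """
    total_correct = 0.0
    n = len(images)
    for start in range(0, n, batch_size):
        end = min(start + batch_size, n)
        acc = eval_batch(images[start:end], labels[start:end])
        total_correct += acc * (end - start)
    return total_correct / n
```

Feeding, say, 500 images at a time keeps the intermediate conv activations (the `[10000, 28, 28, 32]` tensor in the error below) down to a size the Pi's RAM can hold, at the cost of several smaller `Session.run` calls.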
Running on a Pi 3 with Raspbian GNU/Linux 8 (jessie) and Python 2.7.9.
Full console error:
Traceback (most recent call last):
File "/home/pi/tensorflow/Tutorial2.py", line 114, in
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 556, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3637, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 382, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 655, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 723, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 743, in _do_call
raise type(e)(node_def, op, message)
ResourceExhaustedError: OOM when allocating tensor with shape[10000,28,28,32]
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape, Variable_2/read)]]
Caused by op u'Conv2D', defined at:
File "", line 1, in
File "/usr/lib/python2.7/idlelib/run.py", line 116, in main
ret = method(_args, *_kwargs)
File "/usr/lib/python2.7/idlelib/run.py", line 324, in runcode
exec code in self.locals
File "/home/pi/tensorflow/Tutorial2.py", line 74, in
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
File "/home/pi/tensorflow/Tutorial2.py", line 64, in conv2d
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 394, in conv2d
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2298, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1232, in init
self._traceback = _extract_stack()