
tensorflow-for-poets-2's Introduction

Overview

This repo contains code for the "TensorFlow for poets 2" series of codelabs.

There are multiple versions of this codelab, depending on which version of the TensorFlow libraries you plan on using.

This repo contains a simplified and trimmed-down version of TensorFlow's example image classification apps.

The scripts directory contains helpers for the codelab. Some of these come from the main TensorFlow repository, and are included here so you can use them without also downloading the main TensorFlow repo (they are not part of the TensorFlow pip installation).

tensorflow-for-poets-2's People

Contributors

kul3r4, markdaoust, samthor, yashk2810


tensorflow-for-poets-2's Issues

Tensorflow Error While Classifying the image

Anujs-iMac:tensorflow-for-poets-2 anujchampjain$ python scripts/label_image.py --graph=tf_files/retrained_graph.pb --image=tf_files/flower_photos/daisy/21652746_cc379e0eea_m.jpg
2017-09-05 12:36:52.419514: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-05 12:36:52.419532: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-05 12:36:52.419558: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-05 12:36:52.419561: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
  File "scripts/label_image.py", line 120, in <module>
    input_operation = graph.get_operation_by_name(input_name);
  File "/Users/anujchampjain/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2836, in get_operation_by_name
    return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
  File "/Users/anujchampjain/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2708, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/Users/anujchampjain/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2768, in _as_graph_element_locked
    "graph." % repr(name))
KeyError: "The name 'import/input' refers to an Operation not in the graph."

Running the customized app causes an error

When I perform step 8 and start the modified application, an error occurs. I changed the name of the last layer as indicated in the instructions and copied the model and labels. But when I run the application on the phone, this error occurs:

01-11 13:48:00.124 18608-18608/? E/AndroidRuntime: FATAL EXCEPTION: main
Process: org.tensorflow.demo, PID: 18608
java.lang.RuntimeException: Failed to load model from 'file:///android_asset/graph.pb'
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.(TensorFlowInferenceInterface.java:100)
at org.tensorflow.demo.TensorFlowImageClassifier.create(TensorFlowImageClassifier.java:103)
at org.tensorflow.demo.ClassifierActivity.onPreviewSizeChosen(ClassifierActivity.java:130)
at org.tensorflow.demo.CameraActivity$1.onPreviewSizeChosen(CameraActivity.java:159)
at org.tensorflow.demo.CameraConnectionFragment.setUpCameraOutputs(CameraConnectionFragment.java:421)
at org.tensorflow.demo.CameraConnectionFragment.openCamera(CameraConnectionFragment.java:428)
at org.tensorflow.demo.CameraConnectionFragment.access$000(CameraConnectionFragment.java:64)
at org.tensorflow.demo.CameraConnectionFragment$1.onSurfaceTextureAvailable(CameraConnectionFragment.java:95)
at android.view.TextureView.getHardwareLayer(TextureView.java:368)
at android.view.View.updateDisplayListIfDirty(View.java:15244)
at android.view.View.draw(View.java:16040)
at android.view.ViewGroup.drawChild(ViewGroup.java:3610)
at android.view.ViewGroup.dispatchDraw(ViewGroup.java:3400)
at android.view.View.updateDisplayListIfDirty(View.java:15262)
at android.view.View.draw(View.java:16040)
at android.view.ViewGroup.drawChild(ViewGroup.java:3610)
at android.view.ViewGroup.dispatchDraw(ViewGroup.java:3400)
at android.view.View.draw(View.java:16273)
at android.view.View.updateDisplayListIfDirty(View.java:15267)
at android.view.View.draw(View.java:16040)
at android.view.ViewGroup.drawChild(ViewGroup.java:3610)
at android.view.ViewGroup.dispatchDraw(ViewGroup.java:3400)
at android.view.View.updateDisplayListIfDirty(View.java:15262)
at android.view.View.draw(View.java:16040)
at android.view.ViewGroup.drawChild(ViewGroup.java:3610)
at android.view.ViewGroup.dispatchDraw(ViewGroup.java:3400)
at android.view.View.updateDisplayListIfDirty(View.java:15262)
at android.view.View.draw(View.java:16040)
at android.view.ViewGroup.drawChild(ViewGroup.java:3610)
at android.view.ViewGroup.dispatchDraw(ViewGroup.java:3400)
at android.view.View.draw(View.java:16273)
at com.android.internal.policy.PhoneWindow$DecorView.draw(PhoneWindow.java:2697)
at android.view.View.updateDisplayListIfDirty(View.java:15267)
at android.view.ThreadedRenderer.updateViewTreeDisplayList(ThreadedRenderer.java:281)
at android.view.ThreadedRenderer.updateRootDisplayList(ThreadedRenderer.java:287)
at android.view.ThreadedRenderer.draw(ThreadedRenderer.java:322)
at android.view.ViewRootImpl.draw(ViewRootImpl.java:2619)
at android.view.ViewRootImpl.performDraw(ViewRootImpl.java:2438)
at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:2071)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1111)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:6017)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:858)
at android.view.Choreographer.doCallbacks(Choreographer.java:670)
at android.view.Choreographer.doFrame(Choreographer.java:606)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:844)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:234)
at android.app.ActivityThread.main(ActivityThread.java:5526)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:726)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
Caused by: java.io.IOException: Not a valid TensorFlow Graph serialization: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: Mobi
01-11 13:50:09.979 18608-18608/org.tensorflow.demo I/Process: Sending signal. PID: 18608 SIG: 9

Does anyone know what this error is related to?

Using the Inception v3 or v4 architecture

Hi,

I'm trying to classify some images that are very similar, so I need the more precise Inception architecture.

I've tried using the following command, but to no avail:

IMAGE_SIZE=224
ARCHITECTURE="inceptionv3${IMAGE_SIZE}"

Getting the following error:
Couldn't understand architecture name 'inceptionv3224'

Any ideas?
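For what it's worth, in this repo's retrain.py the size suffix is only part of the MobileNet architecture names; Inception v3 has a fixed 299x299 input, so its name takes no suffix. A sketch of both flag forms (values as accepted by the script's create_model_info, stated from reading the script rather than official docs):

```shell
# Inception v3 takes no size suffix; MobileNet names encode
# width multiplier and input size.
ARCHITECTURE="inception_v3"            # not "inceptionv3${IMAGE_SIZE}"
IMAGE_SIZE=224
MOBILENET_ARCH="mobilenet_0.50_${IMAGE_SIZE}"
echo "${ARCHITECTURE} ${MOBILENET_ARCH}"
```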

Find the coordinates

Hi
I am trying to find out whether it is possible to get the coordinates of the position in the image where the match takes place.

Thanks in advance
Arvind

Question regarding the retrain script and use of hash function

This is a question (not a bug)

In the retrain script retrain.py we have this create_image_lists function at the top.

Instead of randomising the dataset and assigning it into 3 buckets (train/test/validation), I see that the script uses a hash function to do this instead. The script seems to work fine; I just don't know how. Is it some kind of well-established algorithm/methodology? (It's a new paradigm to me.)

I have summarised my question in this StackExchange Forum.

I would be grateful if someone could kindly explain why and how the "magic" works in this situation. (The reason I ask is understanding; I am very intrigued and would like to know more.)

Thank you!
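The mechanism is deterministic hashing: each filename is hashed, and the hash value, scaled into a 0–100 range, decides the bucket. A minimal sketch of the idea follows (the constant and the `_nohash_` convention mirror retrain.py, but treat this as an illustration rather than the exact code):

```python
import hashlib
import re

# As in retrain.py: the modulus that spreads hashes evenly over 0..100.
MAX_NUM_IMAGES_PER_CLASS = 2 ** 27 - 1

def which_set(file_name, validation_pct=10, testing_pct=10):
    """Deterministically assign a file to a training/testing/validation bucket.

    Everything after '_nohash_' is stripped first, so grouped variants of
    the same photo always land in the same bucket.
    """
    base = re.sub(r'_nohash_.*$', '', file_name)
    hashed = hashlib.sha1(base.encode('utf-8')).hexdigest()
    percentage = (int(hashed, 16) % (MAX_NUM_IMAGES_PER_CLASS + 1)) * (
        100.0 / MAX_NUM_IMAGES_PER_CLASS)
    if percentage < validation_pct:
        return 'validation'
    elif percentage < validation_pct + testing_pct:
        return 'testing'
    return 'training'
```

The point of hashing rather than random shuffling: the bucket depends only on the name, so re-running the script (even after adding new images) never moves an old image from validation into training, which would otherwise leak evaluation data into the training set.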

Error running quantization script when building pb file using tfhub module

In the new retrain.py script with TensorFlow, you can load modules (tfhub_modules) for other models too. I did so for MobileNet v2 (as mentioned in the official docs here).

However, when running the quantization script (optimize_for_inference.py), I get errors.

KeyError: "The following input nodes were not found: set(['input'])\n"

Any idea why this is happening, and how to properly quantize (or otherwise deal with this) so that I can successfully use the model in an Android app with TensorFlow Mobile?

TensorFlow for Poets 2: TFLite iOS Add your retrained model

In the "Add your model files to the project" step, it lists a command relating to the Android tutorial, with a different file structure than the one used for the iOS tutorial.

"The demo project is configured to search for a graph.lite, and a labels.txt files in the android/tflite/app/src/main/assets/ "

Gradle 'android' project refresh failed.

I'm running the most recent stable release of Android Studio 2.3.3 and I get the following error:
(screenshot: Gradle project refresh error, 2017-09-15)

I downloaded Android Studio 3.0 Beta 2 to see if that would fix things, and this time got a different Gradle error saying that the minimum supported Gradle version is 4.0-rc-1.

As someone who is totally new to Android dev, walking through this codelab, I'm not sure what to do.

Error running iOS app on device

The app (tflite_photos_example) runs fine in the simulator (for iPhone 6), but when I run it on my device (also iPhone 6) I get this error:

2018-05-14 17:58:44.476046+0200 tflite_photos_example[1819:481465] +[CATransaction synchronize] called within transaction
nnapi error: unable to open library libneuralnetworks.so
Loaded model 1resolved reporter2018-05-14 17:58:44.858000+0200 tflite_photos_example[1819:481465] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'
*** First throw call stack:
(0x183aaad8c 0x182c645ec 0x183a43750 0x18397705c 0x100a36710 0x1929955e4 0x192984dd0 0x192989e68 0x19298e6a8 0x19298cab0 0x192990104 0x19299510c 0x100a36500 0x100a35a84 0x100a3579c 0x100a3922c 0x18d686ee0 0x18d686acc 0x18d677d60 0x18d676b94 0x18d7046a8 0x100a443a0 0x18d67ae38 0x18d67a240 0x18d64765c 0x18dc77a0c 0x18d646e4c 0x18d646ce8 0x18d645b78 0x18e2db72c 0x18d645268 0x18e0c09b8 0x18e20eae8 0x18d644c88 0x18d644624 0x18d64165c 0x18d6413ac 0x1862a8470 0x1862b0d6c 0x100df5220 0x100e01850 0x1862dc878 0x1862dc51c 0x1862dcab8 0x183a53404 0x183a52c2c 0x183a5079c 0x183970da8 0x185953020 0x18d95178c 0x100a44750 0x183401fc0)
libc++abi.dylib: terminating with uncaught exception of type NSException
(lldb) 

Issue in the create_image_lists function in retrain.py

There is an issue in the create_image_lists function in retrain.py. The function globs for every extension variant (jpg, jpeg, JPG, JPEG), but on a case-insensitive file system the jpg pattern matches the same files as the JPG pattern. As a result, we end up creating bottleneck files for double the number of images there actually are, which increases processing time.
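One hedged sketch of a fix (independent of retrain.py's internals, which use tf.gfile.Glob) is to deduplicate matched paths before building the bottleneck list; the helper name and key choice here are my own:

```python
import glob
import os

def list_images(image_dir, extensions=('jpg', 'jpeg', 'png')):
    """Collect each image path once, even if 'jpg' and 'JPG' both match it."""
    seen = set()
    result = []
    for ext in extensions:
        for pattern in (ext, ext.upper()):
            for path in glob.glob(os.path.join(image_dir, '*.' + pattern)):
                # normcase folds case on case-insensitive OSes (Windows);
                # on Linux, distinctly-cased files remain distinct.
                key = os.path.normcase(path)
                if key not in seen:
                    seen.add(key)
                    result.append(path)
    return result
```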

I have an error when I try to retrain the data

PS C:\Users\George35mk\Desktop\MACHINE LERNING EXAMPLES\Google Developers\Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6\tensorflow-for-poets-2> python -m scripts.retrain --bottleneck_dir=tf_files/bottlenecks --how_many_training_steps=500 --model_dir=tf_files/models/  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --architecture="${ARCHITECTURE}" --image_dir=tf_files/flower_photos
ERROR:tensorflow:Couldn't understand architecture name ''
Traceback (most recent call last):
  File "C:\Users\George35mk\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\George35mk\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\George35mk\Desktop\MACHINE LERNING EXAMPLES\Google Developers\Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6\tensorflow-for-poets-2\scripts\retrain.py", line 1257, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "C:\Users\George35mk\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "C:\Users\George35mk\Desktop\MACHINE LERNING EXAMPLES\Google Developers\Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6\tensorflow-for-poets-2\scripts\retrain.py", line 907, in main
    model_info = create_model_info(FLAGS.architecture)
  File "C:\Users\George35mk\Desktop\MACHINE LERNING EXAMPLES\Google Developers\Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6\tensorflow-for-poets-2\scripts\retrain.py", line 856, in create_model_info
    raise ValueError('Unknown architecture', architecture)
ValueError: ('Unknown architecture', '')
PS C:\Users\George35mk\Desktop\MACHINE LERNING EXAMPLES\Google Developers\Train an Image Classifier with TensorFlow for Poets - Machine Learning Recipes #6\tensorflow-for-poets-2>
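The first line of the trace is the real problem: "${ARCHITECTURE}" was never defined, so the shell expands it to an empty string and retrain.py receives --architecture="". In a bash shell the fix is simply to define it first, as sketched below (in PowerShell the variable syntax differs, e.g. $env:ARCHITECTURE, which I'm stating from memory):

```shell
# Define the architecture before invoking retrain.py; otherwise
# --architecture="" triggers "Couldn't understand architecture name ''".
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
echo "--architecture=${ARCHITECTURE}"
```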

Training on a deeper file structure

I am trying to use retrain.py to train on a training set with a slightly more complicated file structure, where instead of the case in the example:
training_folder/object_class/image.jpg
I have 
training_folder/object_class/another_sub_dir/another_sub_dir/image.jpg
I have adjusted image_dir to point at training_folder and changed dir_name to os.path.basename(sub_dir)+'/another_sub_dir/another_sub_dir/'.
However, when I run retrain.py I get 'No valid folders of images found at …', which means create_image_lists was not successful.
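That message is consistent with create_image_lists only globbing one directory level (image_dir/label/*.jpg), so images two levels deeper are never found. A hedged sketch of collecting files recursively per top-level label, independent of retrain.py's internals:

```python
import glob
import os

def images_per_label(image_dir, ext='jpg'):
    """Map each top-level label directory to all images found at any depth."""
    result = {}
    for label in sorted(os.listdir(image_dir)):
        label_dir = os.path.join(image_dir, label)
        if not os.path.isdir(label_dir):
            continue
        # '**' with recursive=True matches zero or more nested directories.
        pattern = os.path.join(label_dir, '**', '*.' + ext)
        result[label] = sorted(glob.glob(pattern, recursive=True))
    return result
```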

Xcode project command

For the "Open the project with Xcode" command it states to use:

"open ios/tflite/tflite_camera_example.xcworkspace"

But I'm pretty sure you are actually supposed to use:

"open ios/tflite/tflite_photos_example.xcworkspace" to open the application.

TOCO is currently not working from command line

OS Version : Ubuntu 16.04
Tensorflow Version: 1.8.0

When I run the command toco --help the output is :

TOCO from pip install is currently not working on command line.
Please use the python TOCO API or use
bazel run tensorflow/contrib/lite:toco -- <args> from a TensorFlow source dir.

Is this because I'm using TensorFlow 1.8, or is something else wrong? What should I do about it?

Empty validation set for some categories (each has 50 images)

Thanks a lot for this project; it helps a lot to start off with TF.

I ran into a problem while retraining on my own dataset (~400 categories, each having 50 images), once the script computed the validation accuracy:

ZeroDivisionError: integer division or modulo by zero

Sorry, I don't have the complete error stack trace anymore, but it was the same as in this Stack Overflow question: https://stackoverflow.com/questions/38175673/critical-tensorflowcategory-has-no-images-validation/38264426#comment82741726_38264426

Except that I've got many more than 20 images per category. Here was the issue: since the images are put into the training|testing|validation buckets based on a hash of their filename, it can happen that one bucket is empty for a category. It just happened to me: 50 images per label, and no image selected for testing in one of the ~400 categories. The program doesn't check for this: https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/scripts/retrain.py#L189-L205

I created a dirty solution here, making sure there will be at least one image in the validation and testing sets: https://github.com/matthieudelaro/tensorflow-for-poets-2/blob/matthieu/scripts/retrain.py#L189-L211

Gonna create a PR soon
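A cheap safety net, short of patching the split itself, is to validate the buckets before training starts. A sketch (the image_lists dict-of-dicts shape follows retrain.py, but the helper is my own invention):

```python
def check_image_lists(image_lists, min_per_bucket=1):
    """Raise early if the hash-based split left any bucket of any label empty."""
    problems = []
    for label, lists in image_lists.items():
        for bucket in ('training', 'testing', 'validation'):
            if len(lists.get(bucket, [])) < min_per_bucket:
                problems.append((label, bucket))
    if problems:
        raise ValueError(
            'Empty buckets (would later cause ZeroDivisionError): %r' % problems)
```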

Why is Input Node not "Cast"? Is there a difference between "Mul" and "Mul:0"?

Even though TensorFlow For Poets 2 does not warrant running strip_unused.py, the app still runs with a retrained+optimized+quantized model.

However, during the optimization step to skip the "DecodeJpeg" node, we redirected the input node to "Cast". But the app works only when the input node specified is "Mul" or "Mul:0".

My first question is, why is the input node not "Cast"?

My second question is, what is the actual difference between "Mul" and "Mul:0" in ClassifierActivity.java, since both of these input nodes make the app work with the retrained+optimized+quantized model.

Reference for dissertation

Hello, I would like to use your system as part of an artefact I'm developing for my dissertation. I would highly appreciate your permission, and your help in figuring out exactly whom I need to reference and cite: should I cite the project from Google Codelabs, or you (Mark Daoust and Sam Thorogood)? Thank you for your cooperation in advance!

tf_files

Is tf_files supposed to be empty?

Where is this content:
tf_files/models/
tf_files/training_summaries/
tf_files/retrained_graph.pb
tf_files/retrained_labels.txt

Thank you!

Poets 2 App crashes all the time

After optimizing the net and changing the app variables in ClassifierActivity.java to

private static final String INPUT_NAME = "input";
private static final String OUTPUT_NAME = "final_result";
(as specified in the optimization step)

The app just won't work, it crashes all the time.

Any ideas?

I have this

private static final int INPUT_SIZE = 224;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128; (also tried with 128.0f)
private static final String INPUT_NAME = "input";
private static final String OUTPUT_NAME = "final_result";

private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt";

Color detection in TFMobile demo

Hello,
First of all, thank you very much for the great demo and awesome tutorial; it makes implementing TensorFlow on Android very easy.

I need help with one thing:
I have a requirement to detect traffic signals. I have a dataset, I have created a model from it, and in the general case it works great.

What I want is to detect which color of the traffic signal is lit: is it green or red?

I have added a dataset with two types of images, green-lit and red-lit traffic signals, but the model just detects "a traffic signal".

Can anyone help me with this, or guide me on how I can achieve it?

unrecognized arguments in scripts.label_image

I am on Windows. At the 5th step of tensorflow-for-poets-2, when I try to classify the image it gives the following error. I don't know why!

PS C:\Users\ABANS\tensorflow-for-poets-2> python -m scripts.label_image
--graph=tf_files/retrained_graph.pb
--image=tf_files/flower_photos/roses/2414954629_3708a1a04d.jpg

usage: label_image.py [-h] [--image IMAGE] [--graph GRAPH] [--labels LABELS]
[--input_height INPUT_HEIGHT]
[--input_width INPUT_WIDTH] [--input_mean INPUT_MEAN]
[--input_std INPUT_STD] [--input_layer INPUT_LAYER]
[--output_layer OUTPUT_LAYER]

label_image.py: error: unrecognized arguments: --graph=tf_files/retrained_graph.pb --image=tf_files/flower_photos/roses/2414954629_3708a1a04d.jpg

(Note: this comment doesn't show the backslashes at the ends of the command lines.)
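A likely cause, assuming the command was copied from the bash instructions into PowerShell: trailing backslashes are not line continuations in PowerShell, so each line is parsed separately and label_image.py sees stray arguments. In bash the continuation joins the lines, as this minimal demonstration shows (PowerShell's equivalent is the backtick, or simply one long line):

```shell
# In bash a trailing "\" joins the lines into one command,
# so both flags reach the script together.
echo --graph=tf_files/retrained_graph.pb \
     --image=tf_files/flower_photos/roses/2414954629_3708a1a04d.jpg
```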

Has anyone been able to use a GPU to train this classifier?

Hi,

I have found that the training time of this repo is exactly the same with or without a GPU. Here is a simple benchmark:

(benchmark screenshot)

Please note that at the beginning of the first training process it did recognize my GPU, as below:
(screenshot of the GPU being detected)

However, the training times are very close, and GPU usage did not go up at all during the process.
Did anyone manage to actually utilize a GPU in this case?

Which model should I add to avoid this issue?

Hi, I have this problem in the demo:

java.lang.RuntimeException: Failed to load model from 'file:///android_asset/graph.pb'

I'm running the project on a Galaxy S8 with Android 7.0.

Codelab instructions not working

In the codelab, there is an instruction to optimise a graph:

IMAGE_SIZE=224
toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result

When issuing this command, I get the following output:

2017-12-19 13:42:31.656665: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 421 operators, 589 arrays (0 quantized)
2017-12-19 13:42:31.669161: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 59 operators, 118 arrays (0 quantized)
2017-12-19 13:42:31.670319: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 59 operators, 119 arrays (0 quantized)
2017-12-19 13:42:31.671417: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 3: 58 operators, 117 arrays (0 quantized)
2017-12-19 13:42:31.672488: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 58 operators, 117 arrays (0 quantized)
2017-12-19 13:42:31.673317: I tensorflow/contrib/lite/toco/toco_tooling.cc:269] Estimated count of arithmetic ops: 0.301549 billion (note that a multiply-add is counted as 2 ops).

When I check the folder, optimized_graph.pb has not been created.

Any suggestions on how to run label_image on a batch of images instead of one by one?

Thanks for the great work and helpful code. I'm new to TF, and I would appreciate your help with this situation:

I followed the tutorial to retrain my own model, using my own training data, and got the saved model *.pb file. Then, when I want to label a whole other dataset, it is very slow to classify the images one by one. Is there a way to feed a batch of images to the classification step? From what I've searched, it seems some ops don't support batching. Any suggestions or code references would be great and helpful; thanks in advance!
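Most of the per-image cost is one sess.run call per file; the usual fix is to keep a single Session and feed a 4-D [batch, height, width, channels] array. The chunking side is plain Python; the classification loop is only sketched in comments because the node names ('input:0', 'final_result:0') and the preprocess helper are assumptions that depend on your graph:

```python
def batched(paths, batch_size):
    """Yield successive fixed-size batches from a list of file paths."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]

# Sketch of the loop (node names and preprocess() are assumptions):
# with tf.Session(graph=graph) as sess:
#     for batch in batched(all_image_paths, 32):
#         images = np.stack([preprocess(p) for p in batch])  # [N, 224, 224, 3]
#         probs = sess.run('final_result:0', {'input:0': images})
```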

Can't use my own MobileNet model in the TensorFlow demo app

I changed the input and output names via optimize_for_inference.py and I get an error like this:

tensorflow demo keeps stoping
error: Failed to run TensorFlow inference with inputs:[input], outputs:[final_result]

Please help: I've posted on Stack Overflow multiple times, no answer yet.

Camera activity/fragments seem slower in the poets example than in the TensorFlow example

I have built and installed both the TensorFlow example Android app and the Google poets 2 image classifier on my phone and got them both working.

The reason for building the poets version is its stripped-down, bare-bones nature; most of the AAR and TensorFlow interface is handled by Gradle.

However, I'm puzzled why the poets version seems to lag: the frames updating on the fragments don't seem as fast as in the original TensorFlow example.

Below are some of the logs captured for the poets version; hopefully you might have some clue as to why this could be happening. Thank you.

10-17 12:12:05.488 29443-29469/org.tensorflow.demo I/tensorflow: CameraConnectionFragment: Opening camera preview: 640x480
10-17 12:12:05.508 29443-29469/org.tensorflow.demo I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
10-17 12:12:05.518 29443-29527/org.tensorflow.demo I/RequestThread-0: Configure outputs: 2 surfaces configured.
10-17 12:12:05.518 29443-29527/org.tensorflow.demo D/Camera: app passed NULL surface
10-17 12:12:05.638 29443-29443/org.tensorflow.demo I/Choreographer: Skipped 65 frames! The application may be doing too much work on its main thread.
10-17 12:12:05.638 29443-29443/org.tensorflow.demo I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy@2c595d5d time:83965267
10-17 12:12:05.738 29443-29469/org.tensorflow.demo I/CameraDeviceState: Legacy camera service transitioning to state IDLE
10-17 12:12:05.758 29443-29469/org.tensorflow.demo I/RequestQueue: Repeating capture request set.
10-17 12:12:05.758 29443-29527/org.tensorflow.demo W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
10-17 12:12:05.758 29443-29527/org.tensorflow.demo W/LegacyRequestMapper: Only received metering rectangles with weight 0.
10-17 12:12:06.920 29443-29528/org.tensorflow.demo I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING
10-17 12:12:06.920 29443-29460/org.tensorflow.demo E/BufferQueueProducer: [unnamed-29443-2] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=11 undequeued=0)
10-17 12:12:06.980 29443-29469/org.tensorflow.demo D/ImageReader_JNI: ImageReader_imageSetup: Overriding buffer format YUV_420_888 to 32315659.
10-17 12:12:06.990 29443-29469/org.tensorflow.demo D/tensorflow: CameraActivity: Initializing buffer 0 at size 307200
10-17 12:12:06.990 29443-29469/org.tensorflow.demo D/tensorflow: CameraActivity: Initializing buffer 1 at size 76800
10-17 12:12:06.990 29443-29463/org.tensorflow.demo E/BufferQueueProducer: [unnamed-29443-2] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=10 undequeued=1)
10-17 12:12:06.990 29443-29469/org.tensorflow.demo D/tensorflow: CameraActivity: Initializing buffer 2 at size 76800
10-17 12:12:07.440 29443-29450/org.tensorflow.demo I/art: Debugger is no longer active
10-17 12:12:10.363 29443-29527/org.tensorflow.demo E/RequestThread-0: Timed out while waiting for request to complete.
10-17 12:12:10.383 29443-29527/org.tensorflow.demo W/CaptureCollector: Preview buffers dropped for request: 0
10-17 12:12:10.403 29443-29496/org.tensorflow.demo E/CameraDevice-JV-0: Lost output buffer reported for frame 3
10-17 12:12:10.403 29443-29496/org.tensorflow.demo E/CameraDevice-JV-0: Lost output buffer reported for frame 3
10-17 12:12:15.828 29443-29527/org.tensorflow.demo E/RequestThread-0: Timed out while waiting for request to complete.
10-17 12:12:15.828 29443-29527/org.tensorflow.demo W/RequestHolder: Capture failed for request: 0

TFLite Cocoapod error

Upon running the "pod install --project-directory=ios/tflite/" command, I get an error:

"[!] Unable to find a specification for TensorFlowLite"

Loss of accuracy due to quantization >> 1%

Following tutorial 2 (step 5) results in a 2.2%–5% loss of accuracy when quantizing the model. This happens for the coefficients provided in the repo, as well as for any coefficients obtained through retraining. I checked the accuracy of retrained models and it roughly follows the tradeoff reported in the original blog post.

evaluate.py doesn't output the right accuracy and cross entropy

Hi, I followed the TensorFlow for Poets 2 codelab procedure to evaluate the model's performance on the test set, and I found that the accuracy output from evaluate.py was too low to be true, and didn't match retrain.py's test-set accuracy result.

After some analysis, I figured out that this is due to a different ground-truth label order between evaluate.py and retrain.py: the permutation of image_lists.keys() in this line is always changing.

We need the ground-truth label list in the same order as retrained_labels.txt, which is the order used in training, just as label_image.py does.
One possible solution:

...
from scripts.count_ops import load_graph
from scripts.label_image import load_labels
label_file = "tf_files/retrained_labels.txt"
...
    filenames = []
    labels = load_labels(label_file)
        
    #for label_index, label_name in enumerate(image_lists.keys()):
    for label_index, label_name in enumerate(labels):
      for image_index, image_name in enumerate(image_lists[label_name][category]):

--tfhub_module param is unused?

Hey there, really enjoying playing with this tutorial! 😄

I was trying to retrain using nasnet_mobile using the tfhub_module param like so:

--tfhub_module="https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/1"

However, I noticed that no matter what I did and which model I tried, it didn't change or error out; it just kept using the inception_v3 model.

I had a look at the code in retrain.py and saw that the tfhub_module param is not used at all; only the architecture flag is. However, the architecture flag only supports Inception and MobileNet models. Is tfhub_module deprecated and should it be removed from the tutorial? And if so, what is the recommended way of using a net other than MobileNet and Inception?

Toco does not work on Windows

I was not able to optimise the model using TOCO, the "TensorFlow Lite Optimizing Converter":

ImportError: No module named 'tensorflow.contrib.lite.toco.python'

Expand tutorial for ML Engine host

Hi,

I've spent multiple days trying to take the Inception v3-based model trained in this tutorial and host it on ML Engine. I have been unable to convert it into a SavedModel that allows predictions.

And I don't think I'm alone; here are just a few SO posts trying to work out the same:

https://stackoverflow.com/questions/48486306/prediction-failed-contents-must-be-scalar/48513120
https://stackoverflow.com/questions/47558050/retrained-inception-v3-model-deployed-in-cloud-ml-engine-always-outputs-the-same
https://stackoverflow.com/questions/41023733/use-google-cloud-machine-learning-service-to-predict-with-a-locally-retrained-in

InvalidArgumentError (see above for traceback): input must be 4-dimensional[1,224,224]

I'm evaluating the performance of the model using this command:

python -m scripts.evaluate tf_files/optimized_graph.pb

I tested it first using the flower_photos dataset and it works fine, but when evaluating my own dataset, I get this error:

Traceback (most recent call last):
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
return fn(*args)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
status, run_metadata)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be 4-dimensional[1,224,224]
[[Node: MobilenetV1/MobilenetV1/Conv2d_0/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _dev
ice="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_input_0_1/_1, MobilenetV1/Conv2d_0/weights)]]
[[Node: MobilenetV1/Predictions/Reshape/_3 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:local
host/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_198_MobilenetV1/Predictions/Reshape", tensor_type=DT_FLOAT, _device="/job:localhost/re
plica:0/task:0/device:CPU:0"
]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Pycharm Projects\Exercise Files\tensorflow-for-poets-2\scripts\evaluate.py", line 91, in
accuracy,xent = evaluate_graph(*sys.argv[1:])
File "D:\Pycharm Projects\Exercise Files\tensorflow-for-poets-2\scripts\evaluate.py", line 81, in evaluate_graph
eval_accuracy, eval_xent = sess.run([accuracy, xent], feed_dict)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
run_metadata_ptr)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
options, run_metadata)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be 4-dimensional[1,224,224]
[[Node: MobilenetV1/MobilenetV1/Conv2d_0/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _dev
ice="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_input_0_1/_1, MobilenetV1/Conv2d_0/weights)]]
[[Node: MobilenetV1/Predictions/Reshape/_3 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:local
host/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_198_MobilenetV1/Predictions/Reshape", tensor_type=DT_FLOAT, _device="/job:localhost/re
plica:0/task:0/device:CPU:0"
]]

Caused by op 'MobilenetV1/MobilenetV1/Conv2d_0/convolution', defined at:
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Pycharm Projects\Exercise Files\tensorflow-for-poets-2\scripts\evaluate.py", line 91, in
accuracy,xent = evaluate_graph(*sys.argv[1:])
File "D:\Pycharm Projects\Exercise Files\tensorflow-for-poets-2\scripts\evaluate.py", line 33, in evaluate_graph
with load_graph(graph_file_name).as_default() as graph:
File "D:\Pycharm Projects\Exercise Files\tensorflow-for-poets-2\scripts\count_ops.py", line 31, in load_graph
tf.import_graph_def(graph_def, name='')
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\framework\importer.py", line 313, in import_graph_def
op_def=op_def)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "C:\Users\Reagan\AppData\Local\Continuum\Anaconda3\envs\keras\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): input must be 4-dimensional[1,224,224]
[[Node: MobilenetV1/MobilenetV1/Conv2d_0/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _dev
ice="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_input_0_1/_1, MobilenetV1/Conv2d_0/weights)]]
[[Node: MobilenetV1/Predictions/Reshape/_3 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:local
host/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_198_MobilenetV1/Predictions/Reshape", tensor_type=DT_FLOAT, _device="/job:localhost/re
plica:0/task:0/device:CPU:0"
]]
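The shape [1,224,224] indicates the fed image tensor is missing its channel dimension: MobileNet's first Conv2D expects [1, height, width, 3]. This typically happens when some images in a custom dataset decode as grayscale. A hedged NumPy sketch of the kind of fix that could be applied before feeding the graph (assuming grayscale inputs are the culprit):

```python
import numpy as np

# Assumption: the failing images decode as single-channel (grayscale),
# yielding shape [1, 224, 224] instead of the [1, 224, 224, 3] that
# MobileNet's Conv2D input requires.
def ensure_rgb(batch):
    """Replicate a [1, H, W] grayscale batch into [1, H, W, 3]."""
    if batch.ndim == 3:                        # channel axis is missing
        batch = np.stack([batch] * 3, axis=-1)
    return batch

gray = np.zeros((1, 224, 224), dtype=np.float32)
print(ensure_rgb(gray).shape)                  # (1, 224, 224, 3)
```

Alternatively, re-saving the offending source images as RGB (or decoding with a fixed channel count) avoids the problem upstream.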

how to add dataset to pretrained model

I have retrained the model using the command below:

IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=4000 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

Compared with android/tflite/app/src/main/assets/labels.txt, the example has more labels; my retrained model only has 5 (daisy, dandelion, roses, sunflowers, tulips). Can anyone tell me how to reuse the pretrained model and add an additional dataset to the retrained model? Thanks.
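For what it's worth, retrain.py derives the label set purely from the subdirectory names under --image_dir (one folder per class), so the example app's labels.txt simply reflects a training run over more folders. Adding classes means adding image folders and rerunning retraining. A toy sketch of that label discovery (illustrative, not retrain.py's actual code; the class names are made up):

```python
import os
import tempfile

# Illustrative sketch: labels come from the subdirectory names under
# the training image directory, one folder per class.
def discover_labels(image_dir):
    return sorted(
        entry for entry in os.listdir(image_dir)
        if os.path.isdir(os.path.join(image_dir, entry))
    )

root = tempfile.mkdtemp()
for name in ["daisy", "roses", "my_new_class"]:   # hypothetical classes
    os.makedirs(os.path.join(root, name))
print(discover_labels(root))  # ['daisy', 'my_new_class', 'roses']
```

Note that adding a folder requires a full retraining pass; the script does not incrementally extend an already retrained graph.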

Optimize for inference - Mobile optimizing graph error.

I got the following error while optimizing for mobile:
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/riteshk_m_786/.local/lib/python3.6/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 146, in
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/home/riteshk_m_786/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/home/riteshk_m_786/.local/lib/python3.6/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 90, in main
FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)
File "/home/riteshk_m_786/.local/lib/python3.6/site-packages/tensorflow/python/tools/optimize_for_inference_lib.py", line 109, in optimize_for_inference
placeholder_type_enum)
File "/home/riteshk_m_786/.local/lib/python3.6/site-packages/tensorflow/python/tools/strip_unused_lib.py", line 83, in strip_unused
raise KeyError("The following input nodes were not found: %s\n" % not_found)
KeyError: "The following input nodes were not found: {'input'}\n"
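That KeyError means the frozen graph contains no node literally named "input", so whatever is passed to --input_names has to match the graph's actual placeholder name. One way to find it is to load the GraphDef and list its Placeholder ops (e.g. iterate graph_def.node and check node.op). A pure-Python sketch of that filtering over a mocked node list (the node names below are hypothetical; in practice the pairs would come from a parsed GraphDef):

```python
# Hedged sketch: in practice the (name, op) pairs would come from a
# parsed GraphDef, e.g. [(n.name, n.op) for n in graph_def.node].
def find_input_nodes(nodes):
    """Return the names of Placeholder ops, the usual graph inputs."""
    return [name for name, op in nodes if op == "Placeholder"]

mock_nodes = [                                   # hypothetical graph contents
    ("Placeholder", "Placeholder"),
    ("MobilenetV1/Conv2d_0/weights", "Const"),
    ("final_result", "Softmax"),
]
print(find_input_nodes(mock_nodes))              # ['Placeholder']
```

Whatever name this turns up in your graph is the value to pass as --input_names instead of "input".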

Android apk stops working when using other model.

I've put the retrained image graph and labels in the Android assets directory and have also changed the input size in ClassifierActivity.java. Even so, when I install the APK on my Android phone and open it, it stops working.
Why is this? I need some urgent help; can someone please look into the problem?

Exception on creating Interpreter instance

App: tflite
Branch: end_of_first_codelab

App is crashing on this line tflite = new Interpreter(loadModelFile(activity));

stacktrace:

Process: android.example.com.tflitecamerademo, PID: 13195
java.lang.UnsatisfiedLinkError: No implementation found for long org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(int) (tried Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter and Java_org_tensorflow_lite_NativeInterpreterWrapper_createErrorReporter__I)
at org.tensorflow.lite.NativeInterpreterWrapper.createErrorReporter(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:47)
at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:77)
at com.example.android.tflitecamerademo.ImageClassifier.<init>(ImageClassifier.java:97)
at com.example.android.tflitecamerademo.Camera2BasicFragment.onActivityCreated(Camera2BasicFragment.java:299)
at android.app.Fragment.performActivityCreated(Fragment.java:2362)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1014)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1171)
at android.app.BackStackRecord.run(BackStackRecord.java:815)
at android.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:1578)
at android.app.FragmentController.execPendingActions(FragmentController.java:371)
at android.app.Activity.performStart(Activity.java:6678)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2609)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2707)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1460)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6077)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:866)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:756)

Uploading my project that include the code from here

Hi guys, I recently made a fun little project from this tutorial. In short, I retrained the model to classify the faces of 15 different celebrities, letting users take a photo, feed it into the neural network, and see which celebrity they look like the most.

I plan to make a video and blog about it, but I noticed some comments stating that the code in this tutorial is perhaps copyrighted and not allowed to be distributed? I don't exactly understand the terms. To cut to the chase: am I allowed to make a video and blog post and publish my project to GitHub?
