tensorflow / workshops

A few exercises for use at events.
Home Page: https://tensorflow.org
License: Apache License 2.0
Hello, and thank you very much for your work. I need to add an LSTM layer in workshops/extras/keras-bag-of-words/keras-bow-model.ipynb; can you help me? I tried to insert

model.add(LSTM(activation='softmax', units=300, recurrent_activation='hard_sigmoid', return_sequences=True))

after model.add(Dropout(0.5)), but I get the error: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2.
Sorry for my bad English. Thank you very much. Bye!
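A minimal sketch of one possible fix (my own, not from the notebook): an LSTM expects 3-D input (batch, timesteps, features), while Dense/Dropout layers output 2-D tensors (batch, features), hence the ndim error. Repeating the 2-D vector along a time axis, e.g. with RepeatVector, makes the shapes compatible; max_words and num_classes below are placeholders for the notebook's values.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, RepeatVector, LSTM

max_words, num_classes = 1000, 20  # placeholders; use the notebook's values

model = Sequential()
model.add(Dense(512, input_shape=(max_words,), activation='relu'))
model.add(Dropout(0.5))
model.add(RepeatVector(1))                       # (batch, 512) -> (batch, 1, 512)
model.add(LSTM(300, recurrent_activation='hard_sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')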
Hi, the only working notebook in the master branch is the MNIST one. I saw someone else raised this issue, but I was wondering whether the fix was merged, because the others are still not working.
Thanks.
GETs to:
https://storage.googleapis.com/bq-imports/descriptions.p
https://storage.googleapis.com/bq-imports/genres.p
result in an AccessDenied error: "Access denied."
I am trying to apply your approach to a real-time scenario where I receive data and need to produce a forecast for it.
But I don't understand why the output I am getting changes over time.
Please advise on how I should proceed for a real-time deployment of the saved model.
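A minimal sketch, assuming TF 2.x and that the drift comes from unseeded randomness (e.g. dropout still active, or a rebuilt graph): fixing seeds and reloading the exported SavedModel's serving signature gives a fixed inference function, so the same input should yield the same forecast.

import tensorflow as tf

tf.random.set_seed(42)                           # assumption: TF 2.x
model = tf.saved_model.load('export/my_model')   # hypothetical export path
infer = model.signatures['serving_default']      # fixed graph: same input -> same output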
“AttributeError: module ‘tensorflow.contrib.estimator’ has no attribute ‘DNNEstimator’”
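A hedged note: tf.contrib was removed in TensorFlow 2.x and DNNEstimator moved into core tf.estimator, so this attribute error usually means the installed TF version no longer ships the contrib symbol. A minimal sketch of the core API, with a placeholder head and feature column (not the notebook's):

import tensorflow as tf

estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiClassHead(n_classes=3),                       # placeholder head
    feature_columns=[tf.feature_column.numeric_column('x', shape=[4])],  # placeholder column
    hidden_units=[64, 32])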
workshops/extras/archive/05_custom_estimators.ipynb won't render, but if you view it as raw source code it shows the code.
I have trained a model using Python 3.7 and TF 2.7 and saved it in the SavedModel format. Its signature looks like this:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['examples'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input_example_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['cf_1'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: ParseExample/ParseExampleV2:0
    outputs['cf_2'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: ParseExample/ParseExampleV2:1
    outputs['cf_label'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: ParseExample/ParseExampleV2:2
    outputs['cf_id'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: ParseExample/ParseExampleV2:3
    outputs['score'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: score:0
  Method name is: tensorflow/serving/predict
I want to load it and run predictions from Spark, but I am really confused by the model input: a serialized Example object. I can load the model and predict in Python, like this:
def model_predict(example_proto):
    exam_input = tf.constant([example_proto.SerializeToString()])
    return model.signatures['serving_default'](examples=exam_input)
Here, example_proto is my Example object, and using its SerializeToString method works. But when I do the same thing from Spark, it always reports an error. The Spark code:
val result = sparkEnv.spark.read.parquet(inputPath).map(item => {
  val example = convert2Example(schemaInfo, item)
  val map = new java.util.HashMap[String, Tensor]()
  val tensor = TString.vectorOf(new String(example.toByteArray, Charset.forName("UTF-8")))
  map.put("examples", tensor)
  val score = model.value.call(map).get("score")
  score.toString
}).rdd
Is there any way to deploy an Estimator model whose input is an Example object from Java?
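One possible culprit, shown with a minimal Python sketch of my own: serialized Example protos are arbitrary bytes, and round-tripping them through a UTF-8 String (as in new String(example.toByteArray, "UTF-8") above) is lossy for byte sequences that are not valid UTF-8, so the proto may be corrupted before it reaches the model. TF Java's TString.tensorOfBytes builds a DT_STRING tensor from raw bytes and avoids the round trip.

# Protobuf wire bytes are generally not valid UTF-8.
raw = bytes([0x0a, 0x92, 0x01])                                # illustrative protobuf bytes
roundtripped = raw.decode('utf-8', 'replace').encode('utf-8')
assert roundtripped != raw  # the bytes were corrupted by the String round trip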
It seems the 'Evaluator' component takes too long (more than 2 hours, and it still hadn't finished) in a Kubeflow environment on GCP AI Platform Pipelines. This is very unexpected compared with the notebook version, which took less than 5 minutes with a GPU.
I am assuming that environments with and without GPUs behave differently (since Evaluator evaluates two models [blessing, current] by running inference on the inputs). If that is the case, the problem is that I want to allocate one GPU k8s node to one specific TFX component; otherwise I would have to equip every single node with a GPU, which is not desirable.
Any possible thoughts?
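A hedged sketch, assuming TFX's KubeflowDagRunner (APIs vary across versions): pipeline_operator_funcs receives every KFP ContainerOp, so you can match the Evaluator's op by name and pin only it to a GPU node pool. The accelerator label/value and the name match below are assumptions.

from tfx.orchestration.kubeflow import kubeflow_dag_runner

def _gpu_for_evaluator(container_op):
    # Assumption: the KFP op name contains the TFX component id.
    if 'evaluator' in container_op.name.lower():
        container_op.set_gpu_limit(1)
        container_op.add_node_selector_constraint(
            'cloud.google.com/gke-accelerator', 'nvidia-tesla-t4')  # hypothetical pool

runner_config = kubeflow_dag_runner.KubeflowDagRunnerConfig(
    pipeline_operator_funcs=(
        kubeflow_dag_runner.get_default_pipeline_operator_funcs()
        + [_gpu_for_evaluator]))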
Hi, could you please advise where I'm going wrong? I don't have much experience and I'm trying to figure out how this works.
I tried to use the model built with your TFX_Pipeline_for_Bert_Preprocessing.ipynb, but when I try to serve it via TF Serving I receive: "error": "Could not parse example input, value: 'You are very good person'\n\t [[{{node ParseExample/ParseExampleV2}}]]"
My steps:
curl -d '{"instances": ["You are very good person"]}' -X POST --output - http://localhost:8501/v1/models/my_model:predict
{ "error": "Could not parse example input, value: 'You are very good person'\n\t [[{{node ParseExample/ParseExampleV2}}]]" }
So I assume that the model is trained with a tensor as input. Also, at the end of your notebook there is a test exercising the model's "serving_default" signature, and there we also feed a tensor to the model.
How can I pass raw text in a request to TF Serving? Should TF Serving convert the string to a tensor?
Could you please advise where I'm going wrong; I've spent more than a week trying to solve this.
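A minimal client-side sketch, assuming the exported signature really does expect serialized tf.Example protos (which the ParseExampleV2 node suggests): build the Example in the client and send its serialized bytes base64-encoded under the "b64" key, which TF Serving's REST API decodes for DT_STRING inputs. The feature key 'text' is an assumption; use whatever key the preprocessing expects.

import base64, json
import requests
import tensorflow as tf

example = tf.train.Example(features=tf.train.Features(feature={
    'text': tf.train.Feature(  # assumption: the model's feature key
        bytes_list=tf.train.BytesList(value=[b'You are very good person']))}))

payload = {'instances': [{'b64': base64.b64encode(example.SerializeToString()).decode()}]}
resp = requests.post('http://localhost:8501/v1/models/my_model:predict',
                     data=json.dumps(payload))
print(resp.json())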
Unfortunately, most of the workshop notebooks don't load; Colab returns the error "This file has moved" for Housing prices, Text generation, Overfitting, MNIST, and IMDB.
https://storage.googleapis.com/tensorflow-workshop-examples/stack-overflow-data.csv
gives
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist.</Message>
</Error>
When you open the notebooks in Colaboratory, they show up as "This file has moved."
I have tried TFX_Pipeline_for_Bert_Preprocessing on GCP AI Platform Pipelines with the Dataflow option turned on.
However, it failed, and I got this error message:
tensorflow.python.framework.errors_impl.NotFoundError: Converting GraphDef to Graph has failed. The binary trying to import the GraphDef was built when GraphDef version was 440. The GraphDef was produced by a binary built when GraphDef version was 561. The difference between these versions is larger than TensorFlow's forward compatibility guarantee. The following error might be due to the binary trying to import the GraphDef being too old: Op type not registered 'CaseFoldUTF8' in binary running on beamapp-root-0218001007-4-02171610-z5bw-harness-3q6c. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed
Any thoughts?
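A hedged guess at a fix: 'CaseFoldUTF8' is registered by tensorflow_text, so the Dataflow workers need the same TensorFlow and tensorflow-text versions as the environment that built the transform graph. A sketch of pinning them via Beam pipeline options (project, bucket, and file names are hypothetical):

beam_pipeline_args = [
    '--runner=DataflowRunner',
    '--project=my-gcp-project',               # hypothetical project id
    '--region=us-central1',
    '--temp_location=gs://my-bucket/tmp',     # hypothetical bucket
    '--requirements_file=requirements.txt',   # pins tensorflow==2.x, tensorflow-text==2.x
]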