universal-sentence-encoder-fine-tune's Issues
When fine-tuning starts, it eats up my RAM
Thanks for the contribution. As I start fine-tuning, my RAM utilization goes up to 10 GB. What might be the cause?
Any idea how to use the sentence encoder in a Siamese architecture?
How can we use the sentence encoder in a Siamese architecture, passing two sentences as input?
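Not from the repo, but a minimal sketch of one way to do this with the TF1 `hub.Module` API: calling the same module object on both inputs shares all encoder weights, which is exactly the Siamese property. The module URL and the cosine-similarity head are illustrative assumptions.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed module URL; any TF1-format USE module should behave the same way.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2",
                   trainable=True)

left = tf.placeholder(tf.string, shape=[None])
right = tf.placeholder(tf.string, shape=[None])

# Both towers reuse the same variables -- this is the Siamese weight sharing.
left_vec = embed(left)
right_vec = embed(right)

# USE embeddings are approximately unit-length, so the dot product is
# approximately the cosine similarity. Feed this into whatever loss you use.
similarity = tf.reduce_sum(left_vec * right_vec, axis=1)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(similarity, feed_dict={
        left: ["a cat sits on the mat"],
        right: ["a kitten is on the rug"]}))
```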
Problem with get_operation_by_name('finetune/init_all_tables')
I've been trying to run the code on Google Colab and on my local computer, both using TensorFlow 1.15.
When trying to embed and plot the sentences before fine-tuning, I get the following error:
```
The name 'finetune/init_all_tables' refers to an Operation not in the graph.
```
I downloaded the model directly from the hub and imported it into a new folder:
```python
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

scope = 'finetune'
graph = tf.Graph()
with tf.Session(graph=graph) as sess:
    model_path = 'D:/Users/GermanEBR/Glite/ITESM/DCI/Tesis/Databases/USE/universal-sentence-encoder-4'
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_path)
    sess.run(tf.global_variables_initializer())
    sess.run(tf.get_default_graph().get_operation_by_name('finetune/init_all_tables'))
    in_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/fed_input_values:0')
    ou_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/Encoder_en/hidden_layers/l2_normalize:0')
    run_and_plot(sess, in_tensor, X, ou_tensor)
```
I would appreciate it if someone knows what the problem is. Thanks in advance!
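One possible cause, offered as a guess rather than a confirmed fix: the ops only carry the `finetune/` prefix if the SavedModel is imported under that name scope, and `tf.saved_model.loader.load` in TF 1.15 takes an `import_scope` argument for exactly that. Note also that `universal-sentence-encoder-4` is a TF2-style SavedModel, so its internal op names may not match the TF1 hub module this repo was written against.

```python
# Hedged sketch: import under the 'finetune' scope so lookups such as
# 'finetune/init_all_tables' can resolve. import_scope is a real parameter
# of tf.saved_model.loader.load in TF 1.15.
tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_path,
                           import_scope=scope)
```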
Will this also work for https://tfhub.dev/google/universal-sentence-encoder/2?
Will this fine-tuning method also work for the transformer-based encoder available at https://tfhub.dev/google/universal-sentence-encoder/2?
tensorflow version compatibility
Which versions of tensorflow and tensorflow_hub does the code require?
Thanks!
Running into: `ValueError: cannot add an op with id 133 as it already exists in the graph`
TensorFlow versions:
```
tensorflow            1.15.2
tensorflow-estimator  1.15.1
tensorflow-hub        0.9.0
tensorflow-text       1.15.1
```
I'm running the code as is and I'm consistently running into the following error:
```
/tmp/ipykernel_1454/3669827002.py in main()
     24     ops_list = g1.get_operations()
     25     for op in g1.get_operations():
---> 26         copy_op_to_graph(op, g2, variables, scope)
     27     # copy table initilization
     28     copy_op_to_graph(tf.tables_initializer(), g2, variables, scope)

/tmp/ipykernel_1454/1749896782.py in copy_op_to_graph(org_instance, to_graph, variables, scope)
    190                                op_def)
    191         #Use Graph's hidden methods to add the op
--> 192         to_graph._add_op(new_op)  # pylint: disable=protected-access
    193         to_graph._record_op_seen_by_control_dependencies(new_op)
    194         for device_function in reversed(to_graph._device_function_stack):

/opt/miniconda/envs/product_recos_description_model/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py in _add_op(self, op)
   3015     if op._id in self._nodes_by_id:
   3016       raise ValueError("cannot add an op with id %d as it already "
-> 3017                        "exists in the graph" % op._id)
   3018     if op.name in self._nodes_by_name:
   3019       raise ValueError("cannot add op with name %s as that name "

ValueError: cannot add an op with id 133 as it already exists in the graph
```
I'm wondering if anyone else has run into this error.
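One guess, in case it helps: in a notebook, re-running the cell can reuse a destination graph that already contains ops from the previous run, so the copied op ids collide. A minimal sketch, assuming `g2` is the destination graph named in the traceback:

```python
import tensorflow as tf

# Start every run from brand-new, empty graphs so copied op ids cannot
# collide with ops left over from a previous notebook execution.
tf.reset_default_graph()
g2 = tf.Graph()  # fresh destination graph with no pre-existing ops
```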
Issue in loading the saved model in Java
I saved the trained model as follows (similar to how you do it in convert_use.py):
```python
tf.saved_model.simple_save(sess, save_path,
                           inputs={'input': in_tensor},
                           outputs={'output': ou_tensor},
                           legacy_init_op=tf.tables_initializer())
```
But when I load the resulting model in Java, I get the following exception. Can you please help?
```
W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at lookup_table_op.cc:675 : Failed precondition: Table not initialized.
Exception in thread "main" java.lang.IllegalStateException: Table not initialized.
	 [[Node: finetune/module/string_to_index_Lookup/hash_table_Lookup = LookupTableFindV2[Tin=DT_STRING, Tout=DT_INT64, _output_shapes=[<unknown>], _device="/job:localhost/replica:0/task:0/device:CPU:0"](finetune/module/string_to_index/hash_table, finetune/module/compound_bigrams/boolean_mask/Gather, finetune/module/string_to_index/hash_table/Const)]]
	at org.tensorflow.Session.run(Native Method)
```
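One hedged workaround on the saving side, not taken from this repo: write the model with `SavedModelBuilder` and attach the table initializer as `main_op`, which loaders such as the Java `SavedModelBundle` execute automatically at load time. `sess`, `in_tensor`, and `ou_tensor` are assumed to be the same objects as in convert_use.py.

```python
import tensorflow as tf

builder = tf.saved_model.builder.SavedModelBuilder(save_path)
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'input': in_tensor}, outputs={'output': ou_tensor})
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature},
    # main_op runs when the SavedModel is loaded, initializing the hash
    # tables before any lookup can fire.
    main_op=tf.tables_initializer())
builder.save()
```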
Trainable model size doubles
Hi @helloeve, I was trying to use this model with the module wrapper, but it fails with the error "graph def should be less than 2GB". While inspecting the issue further, I noticed that the output of convert_use is 1.9 GB while the original model is ~812 MB. Do you happen to know why this might be happening? We are not adding any new tensors or operations in the graph copy.
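A small diagnostic sketch that may help narrow this down; `g1` (original) and `g2` (copied) are assumed names for the two graphs in the conversion script, not identifiers from the repo:

```python
import tensorflow as tf

# Compare serialized GraphDef sizes and node counts of the two graphs to see
# where the extra ~1 GB lives (e.g. variable values duplicated as constants).
for name, g in [('original', g1), ('copied', g2)]:
    graph_def = g.as_graph_def()
    print('%s: %.1f MB, %d nodes' % (
        name, len(graph_def.SerializeToString()) / 1e6, len(graph_def.node)))
```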
Need help with packaging this as a module
Hi, can you please help me with creating a trainable module? This is the code I wrote:
```python
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.python.saved_model import tag_constants

def module_fn():
    scope = 'finetune'
    with tf.Session() as sess:
        model_path = 'model/'
        tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_path)
        sess.run(tf.global_variables_initializer())
        sess.run(tf.get_default_graph().get_operation_by_name('finetune/init_all_tables'))
        in_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/fed_input_values:0')
        ou_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/Encoder_en/hidden_layers/l2_normalize:0')
        hub.add_signature(inputs=in_tensor, outputs=ou_tensor)

spec = hub.create_module_spec(module_fn)
embed = hub.Module(spec)

scope = 'finetune'
with tf.Session() as session:
    model_path = 'model/'
    tf.saved_model.loader.load(session, [tag_constants.SERVING], model_path)
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    session.run(tf.get_default_graph().get_operation_by_name(scope + '/init_all_tables'))
    embed.export("module/", session)
```
But when I try to import it and compute embeddings, it fails with `FailedPreconditionError: Table not initialized.`
```python
with tf.Session() as sess:
    embed = hub.Module("module/", trainable=True)
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    query_embeddings = sess.run(embed(["t shirts men"]))
    print(query_embeddings)
```
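For reference, a hedged sketch of the general shape `module_fn` is expected to take: it should only build graph ops (`hub.create_module_spec` calls it inside a fresh graph of its own), and it should not open a `tf.Session` or run initializers. The stand-in "encoder" below is purely illustrative so the snippet is self-contained; a real module would construct the USE ops instead.

```python
import tensorflow as tf
import tensorflow_hub as hub

def module_fn():
    # Only BUILD ops here -- no tf.Session, no initializer runs. Tables and
    # variables created inside module_fn are tracked by tensorflow_hub and
    # initialized when the module's init op runs.
    text_input = tf.placeholder(tf.string, shape=[None])
    # Illustrative stand-in encoder: a trainable projection of string lengths.
    lengths = tf.cast(tf.strings.length(text_input), tf.float32)
    weights = tf.get_variable('w', shape=[8])
    embeddings = tf.expand_dims(lengths, 1) * weights
    hub.add_signature(inputs=text_input, outputs=embeddings)

spec = hub.create_module_spec(module_fn)
embed = hub.Module(spec, trainable=True)
```

Running the SavedModel loader and initializers inside `module_fn`, as in the snippet above this one, is likely why the exported module's tables end up uninitialized.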
How to add a new dataset for fine-tuning?
I have a corpus without labels
I want to fine-tune, but I don't have labels.
Error in Loading Model
Hi @helloeve, thanks for the super helpful implementation. Despite V2 of USE having trainable variables, I have been unable to use them.
On running your code, I keep running into this error.
Code:
```python
tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_path)
```
Error:
```
MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: 'saved_model_cli'
```
Any workaround for this? Thanks!
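A hedged way to check which tag-sets the model actually ships with, without the CLI; `model_path` is assumed to be the directory containing `saved_model.pb`:

```python
import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2

# Parse saved_model.pb directly and print the tags of each MetaGraphDef.
# TF1 hub modules often use tag-sets other than 'serve', which would explain
# the error above.
sm = saved_model_pb2.SavedModel()
with tf.gfile.GFile(model_path + '/saved_model.pb', 'rb') as f:
    sm.ParseFromString(f.read())
for mg in sm.meta_graphs:
    print(sorted(mg.meta_info_def.tags))
```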