Comments (5)
see #22134
Try:
- reducing the batch_size
- reducing the output channels of some conv layers to cut down the parameter count.
from bert-chinese-ner.
My TensorFlow was originally 1.8.0, which turned out to be incompatible with the BERT source code.
I then upgraded to 1.9.0, but it still fails. The error message under 1.9.0 is below; any solution would be greatly appreciated:
2019-03-12 22:29:47.417690: E T:\src\github\tensorflow\tensorflow\core\common_runtime\executor.cc:696] Executor failed to create kernel. Not found: No registered '_CopyFromGpuToHost' OpKernel for CPU devices compatible with node swap_out_gradients/bert/encoder/layer_0/attention/self/key/MatMul_grad/MatMul_1_0 = _CopyFromGpuToHost[T=DT_FLOAT, _class=["loc@gradients/bert/encoder/layer_0/attention/self/key/MatMul_grad/MatMul_1_0"], _device="/job:localhost/replica:0/task:0/device:CPU:0"]. Registered: device='GPU'
[[Node: swap_out_gradients/bert/encoder/layer_0/attention/self/key/MatMul_grad/MatMul_1_0 = _CopyFromGpuToHost[T=DT_FLOAT, _class=["loc@gradients/bert/encoder/layer_0/attention/self/key/MatMul_grad/MatMul_1_0"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](bert/encoder/Reshape_1/_4857)]]
Traceback (most recent call last):
What TensorFlow version did you end up using in the end? Thanks.
from bert-chinese-ner.
I ran into the same problem, and it doesn't seem to be a TensorFlow version issue. I had to change seq length and max batch size from 128 and 32 down to 64 and 4 before training would run... My machine: 2.8 GHz CPU, 8 GB RAM, 1050 Ti. That is still quite far from the 128 and 32 that Google used on a Titan X GPU (12 GB RAM); the main bottleneck was the CPU side running out of memory.
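For reference, the two values above can be overridden from the command line rather than edited in the source. A minimal stand-in sketch using Python's stdlib argparse (the repo's actual scripts use TensorFlow/absl flags, and the flag name `train_batch_size` here is an assumption mirroring the upstream BERT scripts):

```python
import argparse

# Stand-in for the repo's TF flag definitions (flag names assumed).
parser = argparse.ArgumentParser()
parser.add_argument("--max_seq_length", type=int, default=128)
parser.add_argument("--train_batch_size", type=int, default=32)

# Shrink both values to fit a smaller GPU and less host RAM.
args = parser.parse_args(["--max_seq_length", "64", "--train_batch_size", "4"])
print(args.max_seq_length, args.train_batch_size)  # 64 4
```

The same `--max_seq_length=64 --train_batch_size=4` overrides should work when launching the real training script, since TF flags parse command-line arguments the same way.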
from bert-chinese-ner.
Which file are these two parameters in? I couldn't find them.
from bert-chinese-ner.
flags.DEFINE_integer(
"max_seq_length", 128,
"The maximum total input sequence length after WordPiece tokenization."
)
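To illustrate what this flag controls (a hedged sketch, not the repo's actual preprocessing code): after WordPiece tokenization, every example is truncated or right-padded to exactly `max_seq_length` ids, so lowering the flag directly shrinks every input tensor.

```python
def pad_to_max(token_ids, max_seq_length, pad_id=0):
    # truncate to max_seq_length, then right-pad with pad_id
    token_ids = token_ids[:max_seq_length]
    return token_ids + [pad_id] * (max_seq_length - len(token_ids))

print(pad_to_max([101, 2769, 102], 8))  # [101, 2769, 102, 0, 0, 0, 0, 0]
```

With `max_seq_length=64` instead of 128, every activation in the encoder is half as wide in the sequence dimension, which is why this flag (together with the batch size) dominates memory use.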
from bert-chinese-ner.
Related Issues (20)
- Hello, is there no output of the test-set results (precision, recall, F1)? HOT 2
- How to fix "No such file or directory: './output/label2id.pkl'" HOT 1
- Could you add a license? HOT 1
- Process gets killed HOT 1
- All predicted labels are O HOT 10
- Global step does not increase; is that normal?
- Is your environment Python 2? HOT 5
- Dataset HOT 2
- tensorflow.python.framework.errors_impl.FailedPreconditionError: output/result_dir/train.tf_record; Is a directory HOT 1
- How to save the model with the best f1 score in verification when training in multiple rounds? HOT 3
- The label_map starts from 1 not 0. How do you avoid getting predicted label == 0 HOT 1
- FileNotFoundError: [Errno 2] No such file or directory: './output/label2id.pkl' HOT 1
- How to get word vector by the fine-tuned Bert? HOT 1
- Hello, I would like to ask, how to use the model to predict the new input data HOT 1
- How to output precision, recall and F1 for each class? HOT 1
- _read_data returns empty; is there a bug in this part??? HOT 1
- How to deploy this with tensorflow-serving? Do you have related docs or code? HOT 2
- Why does label_test.txt have many more lines than token_test.txt? HOT 2
- About the results
- Training keeps running and never stops HOT 2