Comments (7)
BERTSUM is just a feeding technique, I am serious. I studied the code to translate it to TensorFlow.
Thanks for your reply.
Can we still use use_position_embeddings with an input sequence length greater than 512?
In my opinion, 512 is just a hyper-parameter and we can simply change it to another value.
Back to my question: you still have not answered it.
As you said, if BertSum is just a way of modifying the input rather than a new way of pre-training, I don't see how this definition of BertSum can capture the true meaning of the CLS token.
Without re-pre-training BERT with the BertSum input architecture, the BertSum you suggest won't understand the true meaning of the CLS tokens for inputs longer than 2 sentences.
Please correct me if I am wrong.
Regards
Are you sure?
If you just change the way of constructing the input (i.e., (1) add CLS tokens at the beginning of each sentence and (2) use interval segment embeddings) and use the pretrained BERT, whose input involves only one CLS token per sequence, how do we get meaningful output?
I thought we would need to "pre-train again" with the BERTSUM feeding method on MLM and NSP?!
What do you think, @huseinzol05?
Nope, no need. It is literally just constructing the inputs. I have already reverse-engineered the code into TensorFlow.
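To make "just constructing the inputs" concrete, here is a minimal sketch (my own illustration, not the actual PreSumm or notebook code; the function name is hypothetical and a standard BERT WordPiece tokenizer is assumed): prepend [CLS] to every sentence, append [SEP], and alternate the segment ids per sentence to get the interval segment embeddings.

```python
# Illustrative sketch of BERTSUM-style input construction (not the exact
# PreSumm code). Assumes a BERT tokenizer exposing tokenize() and
# convert_tokens_to_ids(), e.g. tokenization.FullTokenizer.
def build_bertsum_inputs(sentences, tokenizer):
    tokens, segment_ids, cls_positions = [], [], []
    for i, sent in enumerate(sentences):
        cls_positions.append(len(tokens))               # where this sentence's [CLS] lands
        sent_tokens = ['[CLS]'] + tokenizer.tokenize(sent) + ['[SEP]']
        tokens.extend(sent_tokens)
        segment_ids.extend([i % 2] * len(sent_tokens))  # interval segments: 0,1,0,1,...
    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    return input_ids, segment_ids, cls_positions
```

The cls_positions list is what the extractive head later uses to pick one output vector per sentence.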
@huseinzol05
I am surprised!
Is there any chance you could share the TensorFlow code?
That would be greatly helpful for my understanding.
This is the preprocessing and tokenization: https://github.com/huseinzol05/NLP-Models-Tensorflow/blob/master/extractive-summarization/preprocessing-data-bert.ipynb
This is the model: https://github.com/huseinzol05/NLP-Models-Tensorflow/blob/master/extractive-summarization/4.bert-base.ipynb
I am getting bad results for now.
- I disabled use_position_embeddings inside https://github.com/huseinzol05/NLP-Models-Tensorflow/blob/master/extractive-summarization/modeling.py#L190. The original implementation uses use_position_embeddings, but the problem is that with use_position_embeddings the maximum length BERT can accept is 512 tokens, and obviously some texts we want to summarize are longer than that. If we check how nlpyang/PreSumm tackles this issue, https://github.com/nlpyang/PreSumm/blob/master/src/models/model_builder.py#L150 , nlpyang/PreSumm repeats the position embeddings after 512, and I was unable to replicate that PyTorch code in TensorFlow.
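For reference, what that PyTorch line does is enlarge the position-embedding table beyond 512 by keeping the pretrained rows and repeating the last row for the extra positions. A rough TensorFlow analogue could look like the following (my sketch, untested; the function name and shapes are illustrative):

```python
import tensorflow as tf

# Sketch of extending a pretrained 512-row position-embedding table to
# max_pos rows, mirroring nlpyang/PreSumm's PyTorch trick: keep the 512
# pretrained rows and repeat the last row for every position beyond 512.
def extend_position_embeddings(pretrained_table, max_pos):
    # pretrained_table: [512, hidden_size] weights loaded from the checkpoint
    last_row = pretrained_table[-1:, :]                  # [1, hidden_size]
    extra = tf.tile(last_row, [max_pos - 512, 1])        # [max_pos - 512, hidden_size]
    return tf.concat([pretrained_table, extra], axis=0)  # [max_pos, hidden_size]
```

In PreSumm the repeated rows are then fine-tuned during summarization training.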
> Can we still use use_position_embeddings with an input sequence length greater than 512? In my opinion, 512 is just a hyper-parameter and we can simply change it to another value.
We can't; use_position_embeddings in the TensorFlow code will raise an exception if the sequence is longer than 512.
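For context, the guard in Google's BERT modeling.py (inside embedding_postprocessor) looks roughly like this, which is why longer sequences fail at run time:

```python
# Paraphrased from google-research/bert modeling.py: when position embeddings
# are enabled, the graph asserts that the sequence fits the 512-row table.
if use_position_embeddings:
    assert_op = tf.assert_less_equal(seq_length, max_position_embeddings)
    with tf.control_dependencies([assert_op]):
        full_position_embeddings = tf.get_variable(
            name=position_embedding_name,
            shape=[max_position_embeddings, width],
            initializer=create_initializer(initializer_range))
```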
> As you said, if BertSum is just a way of modifying the input rather than a new way of pre-training, I don't see how this definition of BertSum can capture the true meaning of the CLS token.
It will learn during transfer learning on extractive / abstractive summarization. We know the original BERT can capture two sentences with a single CLS token, for example in text-similarity transfer learning. BERTSUM just adds a CLS token for each of the N sentences, and during transfer learning we pass the indices of those CLS tokens so BERTSUM knows where to gather the output sequence into N per-sentence outputs, depending on how many CLS tokens there are.
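In code, that gathering step can be as simple as an indexed lookup over BERT's sequence output; a minimal TensorFlow sketch (my illustration, function name hypothetical):

```python
import tensorflow as tf

# Illustrative sketch: pick one hidden vector per sentence by indexing the
# encoder output at the [CLS] positions recorded during input construction.
def gather_cls_outputs(sequence_output, cls_positions):
    # sequence_output: [batch, seq_len, hidden_size] from BERT
    # cls_positions:   [batch, n_sentences] int32 index of each [CLS] token
    return tf.gather(sequence_output, cls_positions, batch_dims=1)
```

Each gathered vector can then be scored, e.g. with a per-sentence sigmoid, to decide whether that sentence belongs in the extractive summary.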