
Comments (19)

Alexadar commented on May 1, 2024 · 6

Guys, are [CLS] and [SEP] tokens mandatory for this example?


pbabvey commented on May 1, 2024 · 2

I noticed that the probability for longer sentences, regardless of how much they are related to the same subject, is higher than for shorter ones. For example, I added some random sentences to the end of the first or second part and observed a significant increase in the first logit value. Is there a way to regularize the model for next sentence prediction?


parth126 commented on May 1, 2024 · 2

@parth126 have you seen #1788 and is it related to your issue?

Yes, it was the same issue, and the solution worked like a charm.
Many thanks @LysandreJik


thomwolf commented on May 1, 2024 · 1

I think it should work. You should get a [1, 2] tensor of logits where predictions[0, 0] is the score of the next sentence being True and predictions[0, 1] is the score of it being False. So just take the max of the two (or use a softmax to get probabilities).
Did you try it?
The model behaves better on longer sentences, of course (it's mainly trained on 512-token inputs).
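
For what it's worth, a minimal sketch of that interpretation (the logit values below are placeholders, not real model output):

import torch

# `logits` stands in for the [1, 2] tensor returned by BertForNextSentencePrediction:
# index 0 scores "sentence B follows sentence A", index 1 scores "it does not".
logits = torch.tensor([[6.0, -5.7]])               # placeholder values
probs = torch.softmax(logits, dim=1)               # probabilities over (IsNext, NotNext)
is_next = torch.argmax(logits, dim=1).item() == 0  # True when the first logit wins
print(probs)    # close to [[1.0, 0.0]] for these placeholder logits
print(is_next)  # True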


dirkgr commented on May 1, 2024 · 1

This is not super clear, even wrong in the examples, but there is this note in the docstring for BertModel:

`pooled_output`: a torch.FloatTensor of size [batch_size, hidden_size] which is the output of a
    classifier pretrained on top of the hidden state associated to the first character of the
    input (`CLF`) to train on the Next-Sentence task (see BERT's paper).

That seems to suggest pretty strongly that you have to put in the `CLF` (i.e. [CLS]) token.
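
For reference, a sketch assuming a recent transformers version, where the tokenizer inserts [CLS] and [SEP] (and builds the segment ids) for you when you pass both sentences:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Passing both sentences at once adds [CLS]/[SEP] and builds token_type_ids.
encoded = tokenizer("Who was Jim Henson?", "Jim Henson was a puppeteer",
                    return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))
# ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]',
#  'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']
print(encoded["token_type_ids"])  # 0s for sentence A, 1s for sentence B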


ehsan-soe commented on May 1, 2024 · 1

@pbabvey I am observing the same thing.
Are the probabilities length-normalized?


parth126 commented on May 1, 2024 · 1

Those are the logits, because you did not pass next_sentence_label.
My understanding is that you can apply a softmax to get the probability that the second sentence is a plausible continuation of the first.
Sentence 1: How old are you?
Sentence 2: The Eiffel Tower is in Paris
tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)
Sentence 1: How old are you?
Sentence 2: I am 193 years old
tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)
For the first example, the probability that the second sentence is a probable continuation is very low.
For the second example, the probability is very high (I am looking at the first logit).

I'm getting different scores for the sentences that you tried. Please advise why; my code is below.

import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
BertNSP = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text1 = "How old are you?"
text2 = "The Eiffel Tower is in Paris"

text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
text2_toks = tokenizer.tokenize(text2) + ["[SEP]"]
text = text1_toks + text2_toks
print(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)
segments_ids = [0] * len(text1_toks) + [1] * len(text2_toks)

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
print(indexed_tokens)
print(segments_ids)

BertNSP.eval()
prediction = BertNSP(tokens_tensor, segments_tensors)
prediction = prediction[0]  # the first element of the output tuple is the NSP logits
print(prediction)
softmax = torch.nn.Softmax(dim=1)
prediction_sm = softmax(prediction)
print(prediction_sm)

Output of prediction:
tensor([[ 2.1772, -0.8097]], grad_fn=<AddmmBackward>)

Output of prediction_sm:
tensor([[0.9923, 0.0077]], grad_fn=<SoftmaxBackward>)

Why is the score still high (0.9923) even after applying softmax?

I am facing the same issue. No matter what sentences I use, I always get a very high probability of the second sentence being related to the first.


thomwolf commented on May 1, 2024

Closing this for now; feel free to reopen if there is another issue.


hackable commented on May 1, 2024

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM, BertForNextSentencePrediction

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenized input
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)

# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated to 1st and 2nd sentences (see paper)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# Load pre-trained model (weights)
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()

# Predict: is the second sentence the next sentence?
predictions = model(tokens_tensor, segments_tensors)

print(predictions)
tensor([[ 6.3714, -6.3910]], grad_fn=<AddmmBackward>)

How do I interpret this as true or false?


dariocazzani commented on May 1, 2024

Those are the logits, because you did not pass next_sentence_label.

My understanding is that you can apply a softmax to get the probability that the second sentence is a plausible continuation of the first.

Sentence 1: How old are you?
Sentence 2: The Eiffel Tower is in Paris
tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)
Sentence 1: How old are you?
Sentence 2: I am 193 years old
tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)

For the first example, the probability that the second sentence is a probable continuation is very low.
For the second example, the probability is very high (I am looking at the first logit).
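
To tie this together, here is a sketch of the same comparison with a recent transformers version. Note that, unlike in pytorch_pretrained_bert, the model's second positional argument is attention_mask, so the segment ids should be passed as token_type_ids (which **inputs takes care of here):

import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()  # disable dropout so repeated runs are deterministic

pairs = [
    ("How old are you?", "The Eiffel Tower is in Paris"),
    ("How old are you?", "I am 193 years old"),
]
for sent_a, sent_b in pairs:
    inputs = tokenizer(sent_a, sent_b, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs)[0]       # shape [1, 2]: (is-next score, not-next score)
    probs = torch.softmax(logits, dim=1)
    print(sent_a, "|", sent_b, "->", probs.tolist())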


pengjiao123 commented on May 1, 2024

predictions = model(tokens_tensor, segments_tensors)
I tried the code more than once; why do I get different results?
Sometimes predictions[0, 0] is higher, but for the same sentence pair it is sometimes lower.


thomwolf commented on May 1, 2024

Maybe your model is not in evaluation mode (model.eval())?
You need to do this to deactivate the dropout modules.
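
A small sketch of that check, using toy input ids (101 and 102 are the [CLS] and [SEP] ids in bert-base-uncased): with eval() the dropout modules are disabled, so two forward passes over the same input give identical logits.

import torch
from transformers import BertForNextSentencePrediction

model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()  # switch dropout (and other train-only behaviour) off

tokens_tensor = torch.tensor([[101, 2129, 2214, 2024, 2017, 102, 1045, 2572, 102]])
segments_tensors = torch.tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1]])

with torch.no_grad():  # no autograd graph needed for inference
    first = model(tokens_tensor, token_type_ids=segments_tensors)[0]
    second = model(tokens_tensor, token_type_ids=segments_tensors)[0]

print(torch.equal(first, second))  # True once the model is in eval mode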


pengjiao123 commented on May 1, 2024

It is OK now. Thanks a lot.


AIGyan commented on May 1, 2024

Error:

--> 197 embeddings = words_embeddings + position_embeddings + token_type_embeddings
    198 embeddings = self.LayerNorm(embeddings)
    199 embeddings = self.dropout(embeddings)

The size of tensor a (21) must match the size of tensor b (14) at non-singleton dimension 1

The above issue gets resolved when I add a few extra 1's and 0's so that tokens_tensor and segments_tensors have the same shape. Just wondering, am I using it the right way?

My predictions output is a tensor of size 21 x 30522.
As I understand it, this example predicts the word at the [MASK] position. Can you also please guide me on how to predict the next sentence?


yuchuang1979 commented on May 1, 2024

Maybe your model is not in evaluation mode (model.eval())?
You need to do this to deactivate the dropout modules.

@thomwolf Actually, even when I used model.eval() I still got different results. I observed this with every model in the package (BertModel, BertForNextSentencePrediction, etc.). Only when I fixed the length of the input (e.g. to 128) did I get the same results; to do that I had to pad indexed_tokens with 0 so it has a fixed length.

Could you explain why it is like this, or did I make a mistake?

Thank you so much!


Alexadar commented on May 1, 2024

Maybe your model is not in evaluation mode (model.eval())?
You need to do this to deactivate the dropout modules.

@thomwolf Actually, even when I used model.eval() I still got different results. I observed this with every model in the package (BertModel, BertForNextSentencePrediction, etc.). Only when I fixed the length of the input (e.g. to 128) did I get the same results; to do that I had to pad indexed_tokens with 0 so it has a fixed length.

Could you explain why it is like this, or did I make a mistake?

Thank you so much!

Make sure

  1. input_ids, input_mask, and segment_ids have the same length (see the sketch below)
  2. the vocabulary file for the tokenizer is from the same config dir as your bert_config.json

I had similar symptoms when the vocab and config were from different BERT checkpoints.
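
A sketch of point 1, assuming a recent transformers version: the tokenizer can produce padded, equal-length input_ids, attention_mask and token_type_ids directly, and passing the attention mask to the model keeps the padding from influencing the logits.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Pad (or truncate) everything to a fixed length, e.g. 128 as mentioned above.
encoded = tokenizer(
    "How old are you?", "I am 193 years old",
    padding="max_length", max_length=128, truncation=True, return_tensors="pt",
)
print(encoded["input_ids"].shape)       # torch.Size([1, 128])
print(encoded["attention_mask"].shape)  # torch.Size([1, 128])
print(encoded["token_type_ids"].shape)  # torch.Size([1, 128])
# model(**encoded) then ignores the padded positions via the attention mask.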


AjitAntony commented on May 1, 2024

Those are the logits, because you did not pass next_sentence_label.

My understanding is that you can apply a softmax to get the probability that the second sentence is a plausible continuation of the first.

Sentence 1: How old are you?
Sentence 2: The Eiffel Tower is in Paris
tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)
Sentence 1: How old are you?
Sentence 2: I am 193 years old
tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)

For the first example, the probability that the second sentence is a probable continuation is very low.
For the second example, the probability is very high (I am looking at the first logit).

I'm getting different scores for the sentences that you tried. Please advise why; my code is below.

import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
BertNSP = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text1 = "How old are you?"
text2 = "The Eiffel Tower is in Paris"

text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
text2_toks = tokenizer.tokenize(text2) + ["[SEP]"]
text = text1_toks + text2_toks
print(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)
segments_ids = [0] * len(text1_toks) + [1] * len(text2_toks)

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
print(indexed_tokens)
print(segments_ids)

BertNSP.eval()
prediction = BertNSP(tokens_tensor, segments_tensors)
prediction = prediction[0]  # the first element of the output tuple is the NSP logits
print(prediction)
softmax = torch.nn.Softmax(dim=1)
prediction_sm = softmax(prediction)
print(prediction_sm)

Output of prediction:
tensor([[ 2.1772, -0.8097]], grad_fn=<AddmmBackward>)

Output of prediction_sm:
tensor([[0.9923, 0.0077]], grad_fn=<SoftmaxBackward>)

Why is the score still high (0.9923) even after applying softmax?


LysandreJik commented on May 1, 2024

@parth126 have you seen #1788 and is it related to your issue?


AjitAntony commented on May 1, 2024

@LysandreJik thanks for the information

