Comments (19)

jerbarnes commented on September 26, 2024

Thanks for pointing that out. I'll have a look and try to get back to you soon.

jerbarnes commented on September 26, 2024

Yes, you are correct. I will have to take a deeper look at the other datasets and will follow up with corrections soon.

jerbarnes commented on September 26, 2024

It depends a bit. Relatively few examples were affected, so I doubt that retraining anything based on pre-trained language models will see large benefits. On the other hand, if you have smaller models that you can train quickly, it might be worth it.

janpf commented on September 26, 2024

Hi Jeremy,
thanks for your work!
I just found this one and I have no idea what's happening here 😅

from mpqa:

{
        "sent_id": "xbank/wsj_0583-27",
        "text": "Sansui , he said , is a perfect fit for Polly Peck 's electronics operations , which make televisions , videocassette recorders , microwaves and other products on an \" original equipment maker \" basis for sale under other companies ' brand names .",
        "opinions": [
            {
                "Source": [
                    [
                        "sa"
                    ],
                    [
                        "12:14"
                    ]
                ],
                "Target": [
                    [
                        ","
                    ],
                    [
                        "7:8"
                    ]
                ],
                "Polar_expression": [
                    [
                        ","
                    ],
                    [
                        "17:18"
                    ]
                ],
                "Polarity": "Positive",
                "Intensity": "Average"
            }
        ]
    },

I didn't create a script to find issues like these though :/

egilron commented on September 26, 2024

Today I redownloaded the repo and re-extracted the data. Now my import script only catches a handful of darmstadt_unis sentences with some expression text/span issues. Looking good!

jerbarnes commented on September 26, 2024

Hi Jan,

Yes, you're right. This first issue stems from the dataset itself, where negators ('no', 'not', etc) and intensifiers ('very', 'extremely') are not explicitly included in the polar expression, but instead attached as properties. In the conversion script, we decided to leave them separate, but it is true that this choice is arbitrary. Regarding the missing indices, I'll have to have a deeper look into the code to see why that is happening. Thanks for bringing it up!

janpf commented on September 26, 2024

Thanks for your quick reply!

negators ('no', 'not', etc) and intensifiers ('very', 'extremely') are not explicitly included in the polar expression

That information actually helps a lot!
By the way, in train.json some even funkier stuff seems to be going on:

{
  "sent_id": "Colorado_Technical_University_Online_69_10-14-2005-1",
  "text": "They have used one of the books that was used by a professor of mine from a SUNY school that would only teach with graduate level books for undergraduate courses .",
  "opinions": [
    {
      "Source": [
        [],
        []
      ],
      "Target": [
        [
          "They"
        ],
        [
          "0:4"
        ]
      ],
      "Polar_expression": [
        [
          "no",
          "complaints"
        ],
        []
      ],
      "Polarity": "Positive",
      "Intensity": "Average"
    }
  ]
},

jerbarnes commented on September 26, 2024

Ok, I've confirmed that this is only a problem in the Darmstadt dataset, only affects polar expressions, but occurs in all splits. The problem comes from the fact that the original annotations often span several sentences. That means that if you have a document with a target in the first sentence and a polar expression much later, then when we divide the annotations into sentences, the polar expression is no longer in the same sentence, which gives null offsets. I will refactor the code a bit to remove these sentences and push later today.
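
A minimal sketch of what such a filter could look like (hypothetical, not the actual preprocessing code; field names follow the JSON examples in this thread):

    # Hypothetical sketch: drop opinions whose polar expression has no offsets
    # left after the document has been split into sentences.
    def has_offsets(opinion):
        _texts, offsets = opinion["Polar_expression"]
        return len(offsets) > 0

    for sentence in data:  # `data` assumed to be the list of sentence dicts
        sentence["opinions"] = [o for o in sentence["opinions"] if has_offsets(o)]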

jerbarnes commented on September 26, 2024

Ok, I've updated the preprocessing script to remove the annotations that were problematic. Let me know if it works on your end and I'll close the issue.

janpf commented on September 26, 2024

Thanks! Looks like that removed the issues. If I find something else I'll just reopen ;)

janpf commented on September 26, 2024

I believe that there are some other cases of wrong annotations. Example from multibooked_ca/dev:

{
        "sent_id": "corpora/ca/quintessence-Miriam-1",
        "text": "La porteria i l ' escala .",
        "opinions": []
},
{
        "sent_id": "corpora/ca/quintessence-Miriam-2",
        "text": "Son poc accesibles quan vas amb nens petits i no posaba res a l ' anunci",
        "opinions": [
            {
                "Source": [
                    [],
                    []
                ],
                "Target": [
                    [
                        "l ' escala"
                    ],
                    [
                        "14:24"
                    ]
                ],
                "Polar_expression": [
                    [
                        "poc accesibles"
                    ],
                    [
                        "4:18"
                    ]
                ],
                "Polarity": "Negative",
                "Intensity": "Standard"
            },

The target doesn't exist in the sentence's text, but in the sentence right before :O
opener_en/dev also contains an interesting case:

    {
        "sent_id": "../opener/en/kaf/hotel/english00200_e8f707795fc0c7f605a1f7115c3da711-2",
        "text": "Hotel Premiere Classe Orly Rungis is near the airport and close to Orly",
        "opinions": [
            {
                "Source": [
                    [],
                    []
                ],
                "Target": [
                    [
                        "Hotel Premiere Classe Orly Rungis"
                    ],
                    [
                        "0:33"
                    ]
                ],
                "Polar_expression": [
                    [
                        "near the airport"
                    ],
                    [
                        "37:53"
                    ]
                ],
                "Polarity": "Negative",
                "Intensity": "Standard"
            },
            {
                "Source": [
                    [],
                    []
                ],
                "Target": [
                    [
                        "Hotel Premiere Classe Orly Rungis"
                    ],
                    [
                        "0:33"
                    ]
                ],
                "Polar_expression": [
                    [
                        "close to Orly major highways"
                    ],
                    [
                        "0:71"
                    ]
                ],
                "Polarity": "Negative",
                "Intensity": "Standard"
            }
        ]
    },
    {
        "sent_id": "../opener/en/kaf/hotel/english00200_e8f707795fc0c7f605a1f7115c3da711-3",
        "text": "major highways ( all night heard the noise of passing large vehicles ) .",
        "opinions": [
            {
                "Source": [
                    [],
                    []
                ],
                "Target": [
                    [
                        "Hotel Premiere Classe Orly Rungis"
                    ],
                    [
                        "0:33"
                    ]
                ],
                "Polar_expression": [
                    [
                        "noise of passing large vehicles"
                    ],
                    [
                        "37:68"
                    ]
                ],
                "Polarity": "Negative",
                "Intensity": "Standard"
            }
        ]
    },

My guess is that some sentences have accidentally been split into two separate sentences?

jerbarnes commented on September 26, 2024

It seems like the problem stems from the original sentence segmentation. The annotation was performed at document level, and although we told annotators to make sure that all sources/targets/expressions were annotated within sentences, at the time it wasn't completely clear that the annotations crossed the (incorrect) sentence boundaries. This will require quite a bit of work to fix, and I'm afraid I'll have to leave it for now. What I will do is filter the dev/eval data to make sure these cases do not influence the evaluation.

egilron commented on September 26, 2024

I compared the index and text representations for each segment of each element in ["Source", "Target", "Polar_expression"] for each opinion in the train data. I checked whether the length of the text matched the length of the span given by the index values. Here is what I got:

| dataset | Polar_expression dissimilar | Polar_expression similar | Source dissimilar | Source similar | Source empty | Target dissimilar | Target similar | Target empty |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| opener_en | 95 | 2789 | 0 | 266 | 2618 | 17 | 2665 | 202 |
| multibooked_eu | 94 | 1590 | 5 | 200 | 1479 | 23 | 1262 | 399 |
| opener_es | 47 | 2997 | 0 | 176 | 2868 | 3 | 2756 | 285 |
| multibooked_ca | 62 | 1918 | 1 | 167 | 1812 | 23 | 1672 | 285 |
| norec | 0 | 9255 | 0 | 898 | 7550 | 0 | 6819 | 1670 |
| darmstadt_unis | 22 | 1077 | 0 | 63 | 743 | 2 | 804 | 0 |
| mpqa | 0 | 1706 | 0 | 1434 | 272 | 0 | 1481 | 225 |

Example sentence

{"sent_id": "../opener/en/kaf/hotel/english00192_e3fe22eeb360723a699504a27e13065e-5", 
   "text": "I can't explain in words how grand this place looks .", 
   "opinions": [{"Source": [[], []], "Target": [["this place"], ["36:46"]], "Polar_expression": [["how grand looks"], ["26:52"]], 
   "Polarity": "Positive", "Intensity": "Standard"}]}

For cases like this, where words are omitted from the text, like "how grand looks", we could write a script to break the element up into segments. Or just go with the index representations. Or just throw them out.

  # Compare, for each segment of the element, the length of the text
  # representation with the length of the "start:end" index span.
  for text, span in zip(opinion[element][0], opinion[element][1]):
      start, end = (int(n) for n in span.split(":"))
      if len(text) == end - start:
          data.append(element + " similar")
      else:
          data.append(element + " dissimilar")

janpf commented on September 26, 2024

@egilron nice work!
Maybe you could add another column which indicates whether the string in source, target and expression exists in the original text in the first place? Sometimes the indices are correct, but the string doesn't appear anywhere at all.
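
Something along these lines would probably be enough (a rough sketch with a hypothetical helper, assuming the field layout from the JSON examples above):

    # Rough sketch of the suggested extra check: does every annotated string
    # of an element occur anywhere in the sentence text at all?
    def strings_in_text(opinion, element, sentence_text):
        texts, _offsets = opinion[element]
        return all(fragment in sentence_text for fragment in texts)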

egilron commented on September 26, 2024

Thank you! Virtually all the dissimilarities between text span and index span that I catch come from the text span omitting words while the index span covers everything from the first word to the last, like the "how grand looks" example. I found only one sentence where a text span is larger than its index span. I have 393 segments where the index span is larger than the text representation. For 389 of these, each word of the text representation can be found inside the span representation. Take [["how grand looks"], ["26:52"]]: all words in ["how", "grand", "looks"] can be found in "I can't explain in words how grand this place looks ."[26-1:52-1] ("how grand this place looks").

For the four spans where the text words are not found in the index span, those words come from outside the sentence.
All counting is on train only.
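
A sketch of that check (hypothetical; the -1 offset adjustment mirrors the example above and is an assumption about the offset convention, so it should be verified against the dataset):

    # Check whether every word of the text representation occurs inside the
    # substring selected by the "start:end" offsets.
    def words_in_span(text_repr, span, sentence_text):
        start, end = (int(n) for n in span.split(":"))
        window = sentence_text[start - 1:end - 1]  # assumed offset convention
        return all(word in window for word in text_repr.split())

    # words_in_span("how grand looks", "26:52",
    #               "I can't explain in words how grand this place looks .")  # -> True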

Table: it is getting cluttered now. Use at your own risk.

| dataset | Polar_expression indexspan_larger | Polar_expression similar | Polar_expression spanlarger text_notin_sentence | Polar_expression textspan_larger | Polar_expression spanlarger text_in_span | Source indexspan_larger | Source similar | Source spanlarger text_notin_sentence | Source_empty | Source spanlarger text_in_span | Target indexspan_larger | Target similar | Target_empty | Target spanlarger text_in_span |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| opener_en | 95 | 2789 | 0 | 0 | 95 | 0 | 266 | 0 | 2618 | 0 | 17 | 2665 | 202 | 17 |
| multibooked_eu | 94 | 1590 | 2 | 0 | 92 | 5 | 200 | 1 | 1479 | 4 | 23 | 1262 | 399 | 23 |
| opener_es | 46 | 2997 | 0 | 1 | 46 | 0 | 176 | 0 | 2868 | 0 | 3 | 2756 | 285 | 3 |
| multibooked_ca | 62 | 1918 | 1 | 0 | 61 | 1 | 167 | 0 | 1812 | 1 | 23 | 1672 | 285 | 23 |
| norec | 0 | 9255 | 0 | 0 | 0 | 0 | 898 | 0 | 7550 | 0 | 0 | 6819 | 1670 | 0 |
| darmstadt_unis | 22 | 1077 | 0 | 0 | 22 | 0 | 63 | 0 | 743 | 0 | 2 | 804 | 0 | 2 |
| mpqa | 0 | 1706 | 0 | 0 | 0 | 0 | 1434 | 0 | 272 | 0 | 0 | 1481 | 225 | 0 |

jerbarnes commented on September 26, 2024

Hey,

In the end, I was able to fix the easy ones, where the target/polar expression was split but the offsets did not reflect this. That took care of most of them. For the ones that were split across sentences, I either filtered them if they were incorrect annotations (the original annotation spanned a sentence boundary), or combined the text and fixed them otherwise (incorrect sentence segmentation).

I think that should fix most of the issues, but let me know if you happen to find anything else.

MinionAttack commented on September 26, 2024

Hi @jerbarnes, after this change do I have to retrain the models?

jerbarnes commented on September 26, 2024

It looks like it's a problem in the original annotation file in MPQA. In that particular file, lots of the indices seem like they're off. I'm not sure what happened. I can remove this one in the preprocessing script, but I don't currently have a way to search for similar kinds of errors.

jerbarnes commented on September 26, 2024

Great to hear!
