
entity-recognition-problem

Exploratory Data Analysis:

I plotted label-vs-frequency graphs and the most frequent words.
The plots show that the data is highly imbalanced.


Feature Extraction:

  1. Orthographic Vector:
    It checks, at each position of the string, which class of character is present:
    'x' -> other ASCII characters (such as spaces),
    'c' -> lowercase letters,
    'C' -> capital letters,
    'n' -> numbers,
    'p' -> punctuation

    For example [['Hi, my name is Devashish'], ['How are you?']] will return [['CcpxccxccccxccxCcccccccc'], ['Cccxcccxcccp']]

    Then, after computing the mapping, it converts each string into a numeric vector: [[2, 1, 4, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 2, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 4]]
    'x' -> 0,
    'c' -> 1,
    'C' -> 2,
    'n' -> 3,
    'p' -> 4,

    The vector has a fixed length of 20: each character of the orthographic mapping is replaced by its number, longer strings are truncated, and shorter strings are left-padded with zeros (as in the second example above).
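
A minimal sketch of this encoding (the function and variable names here are mine, not necessarily the repo's; I treat anything outside the other classes, e.g. spaces, as 'x'):

```python
import string

def ortho_map(text):
    """Map each character to its orthographic class symbol."""
    out = []
    for ch in text:
        if ch in string.punctuation:
            out.append('p')
        elif ch.isdigit():
            out.append('n')
        elif ch.islower():
            out.append('c')
        elif ch.isupper():
            out.append('C')
        else:
            out.append('x')  # spaces and any other ASCII characters
    return ''.join(out)

# Numeric codes for each class symbol.
CODES = {'x': 0, 'c': 1, 'C': 2, 'n': 3, 'p': 4}

def ortho_vector(text, max_len=20):
    """Fixed-length vector: truncate long strings, left-pad short ones with 0."""
    codes = [CODES[c] for c in ortho_map(text)][:max_len]
    return [0] * (max_len - len(codes)) + codes
```

For example, `ortho_map('How are you?')` gives `'Cccxcccxcccp'`, and `ortho_vector` left-pads its codes with eight zeros, matching the second vector above.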

  2. **POS Tag:**
    Part-of-Speech tags.
    For this I am using nltk.pos_tag.

    Here are the definitions of each tag:
    CC -> coordinating conjunction
    CD -> cardinal digit
    DT -> determiner
    EX -> existential there (like: “there is” … think of it like “there exists”)
    FW -> foreign word
    IN -> preposition/subordinating conjunction
    JJ -> adjective ‘big’
    JJR -> adjective, comparative ‘bigger’
    JJS -> adjective, superlative ‘biggest’
    LS -> list marker 1)
    MD -> modal could, will
    NN -> noun, singular ‘desk’
    NNS -> noun plural ‘desks’
    NNP -> proper noun, singular ‘Harrison’
    NNPS -> proper noun, plural ‘Americans’
    PDT -> predeterminer ‘all the kids’
    POS -> possessive ending parent’s
    PRP -> personal pronoun I, he, she
    PRP$ -> possessive pronoun my, his, hers
    RB -> adverb very, silently,
    RBR -> adverb, comparative better
    RBS -> adverb, superlative best
    RP -> particle give up
    TO -> to go 'to' the store
    UH -> interjection, errrrrrrrm
    VB -> verb, base form take
    VBD -> verb, past tense took
    VBG -> verb, gerund/present participle taking
    VBN -> verb, past participle taken
    VBP -> verb, non-3rd person sing. present take
    VBZ -> verb, 3rd person sing. present takes
    WDT -> wh-determiner which
    WP -> wh-pronoun who, what
    WP$ -> possessive wh-pronoun whose
    WRB -> wh-adverb where, when

Solutions:

  1. Method 1: Normal Classification:
    I am using support vector classification for this.
    Evaluation: Recall is very low and there are many false negatives.
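
A minimal sketch of this setup with scikit-learn (the feature vectors here are toy stand-ins for the orthographic/POS features, and the labels illustrate the BIO scheme):

```python
from sklearn.svm import SVC

# Toy feature vectors standing in for the orthographic/POS features.
X_train = [[2, 1, 4, 0], [0, 1, 1, 1], [2, 1, 1, 1], [0, 0, 1, 4]]
y_train = ['B-person', 'O', 'B-person', 'O']

clf = SVC(kernel='linear')
clf.fit(X_train, y_train)
print(clf.predict([[2, 1, 1, 0]]))
```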

  2. Method 2: Downsampled Classification:
    Since we are dealing with imbalanced data, I downsampled the majority class (label 'O') and then used support vector classification.
    Evaluation: Accuracy is very low.
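
A sketch of the downsampling step in plain Python (function name and the size heuristic — keep as many 'O' samples as there are minority samples — are my assumptions):

```python
import random

def downsample_majority(samples, labels, majority='O', seed=0):
    """Keep every minority-class sample; randomly keep only as many
    majority-class samples as there are minority samples in total."""
    rng = random.Random(seed)
    minority = [(s, l) for s, l in zip(samples, labels) if l != majority]
    majority_pool = [(s, l) for s, l in zip(samples, labels) if l == majority]
    kept = rng.sample(majority_pool, min(len(minority), len(majority_pool)))
    pairs = minority + kept
    rng.shuffle(pairs)
    xs, ys = zip(*pairs)
    return list(xs), list(ys)
```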

  3. Method 3: Upsampled Classification:
    Since we are dealing with imbalanced data, I upsampled the minority classes and used Naive Bayes classification, since it remains fast as the number of samples increases.
    Evaluation: Accuracy is very low.
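
A sketch of the upsampling step (function name and the target size — resample each minority class with replacement up to the majority count — are my assumptions):

```python
import random

def upsample_minorities(samples, labels, majority='O', seed=0):
    """Resample each minority class (with replacement) up to the majority count."""
    rng = random.Random(seed)
    by_label = {}
    for s, l in zip(samples, labels):
        by_label.setdefault(l, []).append(s)
    target = len(by_label[majority])
    xs, ys = [], []
    for label, group in by_label.items():
        picks = group if label == majority else rng.choices(group, k=target)
        xs.extend(picks)
        ys.extend([label] * len(picks))
    return xs, ys
```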

  4. Method 4: Penalized Classification:
    Since we are dealing with imbalanced data, I tried penalizing the training model by using SVC with class_weight='balanced'. However, training took far too long, so I skipped this approach.
    Evaluation: Training time was prohibitive.
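
For reference, class_weight='balanced' in scikit-learn weights each class by n_samples / (n_classes * count(class)), so rare entity classes get proportionally larger penalties. A quick sketch of that computation:

```python
from collections import Counter

def balanced_weights(labels):
    """Reproduce scikit-learn's class_weight='balanced' heuristic:
    weight(c) = n_samples / (n_classes * count(c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# The rare class gets a 4x larger weight than the frequent one here:
print(balanced_weights(['O'] * 8 + ['B-person'] * 2))
```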

  5. Method 5: Random Forest:
    Since the data is imbalanced, random forest is a reasonable algorithm for this.
    Evaluation: It performed better than all the other models till now. It was able to classify other labels also.
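
A minimal sketch of this setup (the feature vectors are toy stand-ins for the orthographic/POS features; combining the forest with class_weight='balanced' is my assumption):

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors standing in for the orthographic/POS features.
X_train = [[2, 1, 4, 0], [0, 1, 1, 1], [2, 1, 1, 1], [0, 0, 1, 4],
           [2, 1, 1, 0], [0, 1, 0, 1]]
y_train = ['B-person', 'O', 'B-person', 'O', 'B-person', 'O']

clf = RandomForestClassifier(n_estimators=100, class_weight='balanced',
                             random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[2, 1, 1, 1]]))
```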

  6. Method 6: Reading text as tweets:
    For this, I referred to http://noisy-text.github.io/2017/pdf/WNUT19.pdf.
    I implemented a multi-task neural network that aims at recognizing named entities in user-generated text.
    The model captures orthographic features at the character level using a convolutional neural network, and POS features using a bidirectional Long Short-Term Memory (BiLSTM) network. Once the network is trained, I used it as a feature extractor to feed a Conditional Random Fields (CRF) classifier.

    For example, consider the two texts 'Hi Paris' and 'Welcome to Paris'.
    In the first text, 'Paris' can be treated as a person entity, while in the second, 'Paris' can be treated as a location entity.
    This suggests that the outputs for the previous words/layers make a difference in identifying the entity.

Evaluation: It performed decently, and since it made the most sense, I decided to go with this.


Evaluation:

For evaluation, since the data is very imbalanced, I used a confusion matrix to check false negatives. I also used accuracy in some cases.
**All the models had many false negatives. For future improvement, I should extract more features.**
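
To illustrate why the confusion matrix matters here (the labels and predictions below are made-up toy data): the cell counting entity tokens predicted as 'O' is exactly the false negatives that overall accuracy hides.

```python
from sklearn.metrics import confusion_matrix

y_true = ['B-person', 'O', 'O', 'B-person', 'O']
y_pred = ['O',        'O', 'O', 'B-person', 'O']

# Rows = true labels, columns = predicted labels, in the order given.
cm = confusion_matrix(y_true, y_pred, labels=['B-person', 'O'])
print(cm)
```

Here accuracy is 4/5 = 0.8, yet half of the entity tokens were missed — the top-right cell of the matrix.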

Bonus:

For the bonus, I clustered the predicted labels based on their probabilities, using KMeans clustering.
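
A minimal sketch of this step (the per-token class-probability vectors below are toy stand-ins for the model's outputs; the number of clusters is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy per-token class-probability vectors standing in for model outputs.
probs = np.array([[0.90, 0.10], [0.85, 0.15], [0.20, 0.80],
                  [0.10, 0.90], [0.95, 0.05], [0.15, 0.85]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(probs)
print(km.labels_)
```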

Some more insights:

Top likely transitions:
B-person -> I-person 3.936851
B-location -> I-location 2.291192
I-product -> I-product 1.705511
O -> O 1.632621
B-creative-work -> I-creative-work 1.417550
I-creative-work -> I-creative-work 1.386468
B-group -> I-group 1.333125
B-product -> I-product 1.057185
I-location -> I-location 0.951327
I-group -> I-group 0.726022
O -> B-person 0.531396
B-corporation -> I-corporation 0.516798
O -> B-location 0.494253
O -> B-group 0.371175
O -> B-corporation 0.290107


Top unlikely transitions:
B-location -> B-person -0.140064
B-person -> B-location -0.142025
B-group -> O -0.152444
B-person -> B-person -0.153057
I-group -> O -0.157578
I-creative-work -> O -0.238135
I-product -> O -0.587993
B-product -> O -0.602447
B-creative-work -> O -1.781312
O -> I-corporation -2.033485
O -> I-group -2.725038
O -> I-product -3.081457
O -> I-creative-work -3.115185
O -> I-location -3.260909
O -> I-person -3.946296

Contributors

devashishnyati
