anomaly_detection_persistent_homology

Detecting anomalous texts with persistent homology of context vectors computed by individual attention heads in language transformers

Methods

  1. Compute the context vectors (or attention probability distributions, or hidden states) of some input text for a specific head in a specific layer of a language model (or, in the case of hidden states, just a specific layer).
  2. Compute the pairwise Euclidean distances between them (or the Jensen-Shannon distance or KL-divergence for attention probability distributions).
  3. Use the distance matrix to compute persistent homology and a persistence diagram for the text for some homology group $H_i$ ($H_1$ seems best if the text is large enough, but $H_0$ is pretty good too).
  4. Do this for multiple baseline texts on the same topic.
  5. Compute the Fréchet mean of the baseline texts' persistence diagrams.
  6. Find outliers among the baseline diagrams, i.e. diagrams with unusually large Wasserstein distance from the Fréchet mean.
  7. Removing those outliers may improve results, but this is not done in the notebooks.
  8. Compute persistent homology and persistence diagrams for new potentially anomalous texts.
  9. Compute the Wasserstein distances between the potentially anomalous texts' persistence diagrams and the Fréchet mean of the baseline texts' persistence diagrams.
  10. Find outlier Wasserstein distances and classify the corresponding texts as anomalous (a code sketch of the full pipeline follows this list).
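
Below is a minimal sketch of the pipeline above, using hidden states as the point cloud (one of the options in step 1). It assumes the Hugging Face transformers library, ripser for persistence diagrams, and gudhi (whose Wasserstein utilities require the POT package) for the Fréchet mean; the texts, the layer index, and the two-standard-deviation outlier rule are illustrative placeholders rather than the exact settings used in the notebooks.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from scipy.spatial.distance import pdist, squareform
from ripser import ripser
from gudhi.wasserstein import wasserstein_distance
from gudhi.wasserstein.barycenter import lagrangian_barycenter

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

def h1_diagram(text, layer=6):
    """Steps 1-3: token vectors -> distance matrix -> H1 persistence diagram."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    vectors = outputs.hidden_states[layer][0].numpy()  # one vector per token
    distances = squareform(pdist(vectors))             # pairwise Euclidean distances
    dgm = ripser(distances, distance_matrix=True, maxdim=1)["dgms"][1]
    return dgm[np.isfinite(dgm[:, 1])]                 # drop any infinite bars

# Steps 4-5: diagrams for the baseline texts and their Fréchet mean
# (computed as a Wasserstein barycenter of the diagrams).
baseline_texts = ["...", "...", "..."]                 # several texts on one topic
baseline_dgms = [h1_diagram(t) for t in baseline_texts]
frechet_mean = lagrangian_barycenter(pdiagset=baseline_dgms, init=0)

# Steps 6-10: score texts by Wasserstein distance to the Fréchet mean;
# the 2-sigma threshold is one simple outlier rule, not the only choice.
baseline_dists = [wasserstein_distance(d, frechet_mean, order=1.0)
                  for d in baseline_dgms]
threshold = np.mean(baseline_dists) + 2 * np.std(baseline_dists)

for text in ["...", "..."]:                            # potentially anomalous texts
    dist = wasserstein_distance(h1_diagram(text), frechet_mean, order=1.0)
    print(f"W1 = {dist:.3f} -> {'anomalous' if dist > threshold else 'normal'}")
```
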

Some Heuristics and Guiding Principles

We must note that this form of anomalous text detection is not perfect; as seen in the notebooks, there are false positives and false negatives. There are some things to keep in mind while using this method of anomaly detection.

First, the length of the text matters. If one of the texts is significantly longer than the others in terms of token count, it has a higher chance of being labeled an anomaly, regardless of whether it is on the same topic as the baseline texts.

Second, if the baseline texts are too loosely clustered around the Fréchet mean, detecting outliers (anomalous texts) becomes more difficult. This can happen if the topics mentioned in the baseline texts are only loosely related in content. For example, if one baseline text talks extensively about the applications of deep learning to healthcare, with an emphasis on the healthcare side, it will likely be considered an outlier in the initial calculation of the Fréchet mean. This also leaves open the possibility of failing to detect as anomalous a text that is about healthcare rather than deep learning.

Finally, some models perform better at this than others; for example, xlm-roberta-large forms better persistent homology features than xlm-roberta-base on average. Certain heads may also perform better than others for certain topic classes. This is an interesting feature of the analysis, which is as much about anomaly detection as it is about analyzing the topics modeled by individual heads of the model.

Anomaly Detection with Persistent Homology of Hidden States

We also perform the same persistent homology analysis with layer outputs (hidden states) of texts: forming persistence diagrams for several baseline texts, computing their Fréchet mean persistence diagram, and then comparing new, potentially anomalous texts to the Fréchet mean of the baseline texts. We should also mention that comparing different layers' performance at anomaly detection is important. We might want to run the same analysis for all layers and then look at the classification of anomalous texts on average, as in the sketch below; however, we are restricted by computational resources at the moment.
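
A sketch of that layer comparison, reusing h1_diagram and the machinery from the snippet above; the fraction of layers that flag a text is one simple way to average the classification.

```python
def layer_vote(text, baseline_texts, num_layers=12):
    """Run the anomaly test at every layer; return the fraction of layers
    that flag the text as anomalous."""
    votes = []
    for layer in range(1, num_layers + 1):  # hidden_states[0] is the embedding layer
        dgms = [h1_diagram(t, layer=layer) for t in baseline_texts]
        mean_dgm = lagrangian_barycenter(pdiagset=dgms, init=0)
        dists = [wasserstein_distance(d, mean_dgm, order=1.0) for d in dgms]
        threshold = np.mean(dists) + 2 * np.std(dists)
        new_dist = wasserstein_distance(h1_diagram(text, layer=layer),
                                        mean_dgm, order=1.0)
        votes.append(new_dist > threshold)
    return float(np.mean(votes))
```
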

Anomaly Detection at the Level of Attention Probability Distributions

Note that we can perform a very similar analysis at the level of the attention probability distributions (computed from the softmax of the attention matrix), using the Jensen-Shannon distance metric on probability distributions. This provides a much lower-level analysis than using the context vectors or the hidden states. Again, looking at the average behavior over all attention heads would be important; a sketch of the distance computation follows.
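
A sketch of the distance computation at this level, assuming the same model and tokenizer as in the first snippet. Each token's attention row (for a chosen layer and head) is a probability distribution over positions, and SciPy's jensenshannon gives the metric square root of the Jensen-Shannon divergence; the layer and head indices here are arbitrary placeholders.

```python
from scipy.spatial.distance import jensenshannon

def js_diagram(text, layer=6, head=3):
    """Attention rows -> Jensen-Shannon distance matrix -> H1 diagram."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs, output_attentions=True)
    probs = outputs.attentions[layer][0, head].numpy()  # [seq, seq]; rows sum to 1
    n = probs.shape[0]
    distances = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            distances[i, j] = distances[j, i] = jensenshannon(probs[i], probs[j])
    dgm = ripser(distances, distance_matrix=True, maxdim=1)["dgms"][1]
    return dgm[np.isfinite(dgm[:, 1])]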

Next Steps

The next obvious thing to do would be to perform this analysis on all attention heads, and then determine what percentage of them classify a given text as anomalous. This can help us better understand the information that the individual attention heads are capturing, and it can also give a more robust determination of whether a text is anomalous; a sketch of such a vote follows. Unfortunately, due to computational constraints, this will have to wait until later.
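
A hypothetical sketch of that head-level vote, built on js_diagram from the previous snippet: it runs the attention-level test for every (layer, head) pair and reports the fraction of heads that call the text anomalous. This is exactly the loop that is expensive under the computational constraints mentioned above.

```python
def head_vote(text, baseline_texts, num_layers=12, num_heads=12):
    """Fraction of (layer, head) pairs whose attention-level test
    flags the text as anomalous."""
    flags = []
    for layer in range(num_layers):
        for head in range(num_heads):
            dgms = [js_diagram(t, layer=layer, head=head) for t in baseline_texts]
            mean_dgm = lagrangian_barycenter(pdiagset=dgms, init=0)
            dists = [wasserstein_distance(d, mean_dgm, order=1.0) for d in dgms]
            threshold = np.mean(dists) + 2 * np.std(dists)
            new_dist = wasserstein_distance(js_diagram(text, layer=layer, head=head),
                                            mean_dgm, order=1.0)
            flags.append(new_dist > threshold)
    return float(np.mean(flags))
```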
