contrastive-conditioning

Code for the paper "Contrastive Conditioning for Assessing Disambiguation in MT: A Case Study of Distilled Bias" (EMNLP 2021). A high-level introduction can be found in Jannis' blog (part 1 | part 2), and a more detailed description in the paper.

The code allows you to evaluate English→X machine translation systems on two probing tasks for disambiguation, using contrastive conditioning as an evaluation protocol.

Contrastive conditioning has been implemented for the following probing tasks:

  • WinoMT (gender bias in the translation of occupation terms)
  • MuCoW (word sense disambiguation)

The code is directed at MT models trained with Fairseq v0.x (https://github.com/pytorch/fairseq), but it should be fairly easy to extend it to other MT frameworks.

Installation

  • Requires Python >= 3.7
  • pip install -r requirements.txt
  • Dependencies for Fairseq models:
    • PyTorch
    • fairseq==0.10.2
    • fastBPE==0.1.0

Optional dependencies

  • Optional dependency for generating disambiguation cues using MLM:
    • transformers==4.9.2
  • Optional dependencies for original ("pattern-matching") WinoMT evaluation:
  • Optional dependencies for analysis notebooks:
    • scipy==1.7.1
    • scikit-learn==0.24.2

Usage Examples

Basic Example of Contrastive Conditioning

The code below reproduces Table 1 of the paper.

from translation_models.fairseq_models import load_sota_evaluator

# List the translations that should be scored.
# We use German translations as an example.
translations = [
    "Der Assistent fragte die Ärztin, ob sie Hilfe brauche.",
    "Der Assistent fragte die Doktorin, ob sie Hilfe brauche.",
    "Die Assistentin fragte den Arzt, ob sie Hilfe brauche.",
    "Die Assistentin fragte den Doktor, ob sie Hilfe brauche.",
    "Die Assistenz fragte die ärztliche Fachperson, ob sie Hilfe brauche.",
    "Die Assistentin fragte, ob sie Hilfe brauche.",
]

# The translations all have the same source sentence, which is:
# "The assistant asked the doctor if she needs any help."
source_with_correct_cue = "The assistant asked the [female] doctor if she needs any help."
source_with_incorrect_cue = "The assistant asked the [male] doctor if she needs any help."

# Warning: This will download a very large model from PyTorch Hub
evaluator_model = load_sota_evaluator("de")

# Score translations given the contrastive sources
scores_correct = evaluator_model.score(
    source_sentences=len(translations) * [source_with_correct_cue],
    hypothesis_sentences=translations,
)
scores_incorrect = evaluator_model.score(
    source_sentences=len(translations) * [source_with_incorrect_cue],
    hypothesis_sentences=translations,
)
# Compute the ratio of the two scores to judge the disambiguation quality of the translations
overall_scores = [s_c / (s_c + s_i) for s_c, s_i in zip(scores_correct, scores_incorrect)]
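The ratio above can be turned into a binary pass/fail judgment per translation: if the evaluator assigns higher probability to the translation under the correct cue than under the incorrect one, the ratio exceeds 0.5. A minimal, self-contained sketch with made-up scores (not taken from the paper):

```python
# Hypothetical evaluator scores for two translations (illustration only):
scores_correct = [0.012, 0.004]
scores_incorrect = [0.003, 0.010]

# Same ratio as above: share of probability mass under the correct cue
overall_scores = [c / (c + i) for c, i in zip(scores_correct, scores_incorrect)]

# A translation counts as correctly disambiguated if the ratio exceeds 0.5,
# i.e. the evaluator prefers it given the correct disambiguation cue.
judgments = [score > 0.5 for score in overall_scores]
print(judgments)  # [True, False]
```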

Targeted Evaluation of an MT System

In order to automatically evaluate your English→X MT system using contrastive conditioning, you need to wrap it in a subclass of translation_models.TranslationModel. In practice, this means implementing a method translate(self, sentences: List[str], **kwargs) -> List[str] so that your system can be prompted to translate the test sentences.

The codebase already provides a wrapper for Fairseq-trained models (translation_models.fairseq_models.FairseqTranslationModel). Feel free to make a pull request if you create a wrapper for another MT framework.
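A minimal sketch of such a wrapper, with a toy backend in place of a real MT system. The base class below is a stand-in for translation_models.TranslationModel, and MyMTWrapper and its backend argument are hypothetical names, not part of the codebase:

```python
from typing import List


class TranslationModel:
    """Stand-in for translation_models.TranslationModel."""

    def translate(self, sentences: List[str], **kwargs) -> List[str]:
        raise NotImplementedError


class MyMTWrapper(TranslationModel):
    """Hypothetical wrapper around an arbitrary sentence-level MT backend."""

    def __init__(self, backend):
        # `backend` is any callable mapping one source sentence to one translation
        self.backend = backend

    def translate(self, sentences: List[str], **kwargs) -> List[str]:
        # Translate each test sentence and preserve the input order
        return [self.backend(sentence) for sentence in sentences]


# Toy usage: a "backend" that merely upper-cases its input
model = MyMTWrapper(backend=str.upper)
print(model.translate(["hello world"]))  # ['HELLO WORLD']
```

A real wrapper would batch the sentences and call the framework's inference API instead of a Python callable, but the contract is the same: one translation per input sentence, in order.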

The system can then be evaluated on MuCoW and WinoMT as follows. For MuCoW, only German and Russian are currently supported as target languages.

from tasks.contrastive_conditioning import MucowContrastiveConditioningTask
from tasks.contrastive_conditioning import WinomtContrastiveConditioningTask
from translation_models import TranslationModel, ScoringModel
from translation_models.fairseq_models import load_sota_evaluator

evaluated_model: TranslationModel = ...  # TODO
tgt_language = ...  # We use "de" and "ru" for our paper

# You could use an alternative evaluator model by subclassing `ScoringModel`
evaluator_model: ScoringModel = load_sota_evaluator(tgt_language)

# Evaluate on MuCoW ...
mucow_result = MucowContrastiveConditioningTask(
  tgt_language=tgt_language,
  evaluator_model=evaluator_model,
).evaluate(evaluated_model)
print(mucow_result)

# Evaluate on WinoMT ...
winomt_result = WinomtContrastiveConditioningTask(
  evaluator_model=evaluator_model
).evaluate(evaluated_model)
print(winomt_result)

Baselines using Pattern Matching

As an alternative to contrastive conditioning, our re-implementations of the baseline ("pattern-matching") methods can be used as follows:

from tasks.pattern_matching import MucowPatternMatchingTask
from tasks.pattern_matching import WinomtPatternMatchingTask
from translation_models import TranslationModel

evaluated_model: TranslationModel = ...  # TODO
tgt_language = ...  # We use "de" and "ru" for our paper

# Evaluate on MuCoW ...
mucow_result = MucowPatternMatchingTask(
  tgt_language=tgt_language,
).evaluate(evaluated_model)
print(mucow_result)

# Evaluate on WinoMT ...
winomt_result = WinomtPatternMatchingTask(
  tgt_language=tgt_language,
).evaluate(evaluated_model)
print(winomt_result)

License

MIT License, except for the probing task data, whose licenses are described in the corresponding README files.

Citation

@inproceedings{vamvas-sennrich-2021-contrastive,
    title = "Contrastive Conditioning for Assessing Disambiguation in {MT}: {A} Case Study of Distilled Bias",
    author = "Vamvas, Jannis  and
      Sennrich, Rico",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.803",
    pages = "10246--10265",
}
