
icdar-emoreccom's Introduction

Overview

  • This is the code for the EmoRecCom #1 solution (TensorFlow 2.0 version)
  • For usage of this code, please follow the instructions here
  • The ensemble models (TensorFlow + PyTorch) achieved 0.685 on the private leaderboard [paper]
Track 4 private leaderboard

Data preparation

Competition data

  • The data folder is organized as presented here; you can also edit this file to adapt it to your working directory (not recommended). Instead, the data can be downloaded directly from Google Drive by running setup.sh

  • The default data directory layout is as follows:

├── private_test
│   ├── images
│   ├── readme.md
│   ├── results.csv
│   └── transcriptions.json
├── public_test
│   ├── images
│   ├── results.csv
│   └── transcriptions.json
└── public_train
    ├── additional_infor:train_emotion_polarity.csv
    ├── images
    ├── readme.md
    ├── train_5_folds.csv
    ├── train_emotion_labels.csv
    └── train_transcriptions.json
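After running setup.sh, the layout above can be sanity-checked with a short script. This is a sketch; the file and folder names are taken directly from the tree above, and `check_layout` is an illustrative helper, not part of the repository:

```python
import os

# Expected files under the default data/ layout (from the tree above)
EXPECTED = {
    "public_train": ["train_5_folds.csv", "train_emotion_labels.csv",
                     "train_transcriptions.json", "images"],
    "public_test": ["results.csv", "transcriptions.json", "images"],
    "private_test": ["results.csv", "transcriptions.json", "images"],
}

def check_layout(data_dir):
    """Return a list of missing paths; an empty list means the layout is complete."""
    missing = []
    for split, names in EXPECTED.items():
        for name in names:
            path = os.path.join(data_dir, split, name)
            if not os.path.exists(path):
                missing.append(path)
    return missing
```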

Additional data (optional)

  • In case you want to train a model with static word embeddings (word2vec, GloVe, fastText, etc.), download them by uncommenting the desired pretrained models in setup.sh. By default, static word embeddings are not used in our approach
  • The provided static embedding models are stored as pickle files for easy loading; refer to prepare_data.sh for more detail
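Loading one of these pickle files and turning it into an embedding matrix can be sketched as below. This assumes the pickle holds a plain `dict` mapping word to vector (an assumption about the provided files; `word_index` is the usual tokenizer-style word-to-index mapping):

```python
import pickle
import numpy as np

def load_static_embedding(path):
    """Load a pickled word -> vector mapping
    (assumed format of the provided embedding files)."""
    with open(path, "rb") as f:
        return pickle.load(f)

def build_embedding_matrix(word_index, embedding, dim=300):
    """Rows follow the tokenizer's word_index; out-of-vocabulary words stay zero."""
    matrix = np.zeros((len(word_index) + 1, dim), dtype=np.float32)
    for word, idx in word_index.items():
        vec = embedding.get(word)
        if vec is not None:
            matrix[idx] = vec
    return matrix
```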

Prerequisites

  • tensorflow
  • numpy
  • pandas
  • scikit-learn
  • transformers
  • efficientnet

Running setup.sh also installs these dependencies.

Train & inference

  • Example bash scripts for training and inference are train.sh and infer.sh

Train example

python src/main.py \
    --train_dir data/public_train \
    --target_cols angry disgust fear happy sad surprise neutral other \
    --gpus 0 1 2 \
    --image_model efn-b2 \
    --bert_model roberta-base \
    --word_embedding embeddings/glove.840B.300d.pkl \
    --max_vocab 30000 \
    --image_size 256 \
    --max_word 36 \
    --max_len 48 \
    --text_separator " " \
    --n_hiddens -1 \
    --lr 0.00003 \
    --n_epochs 5 \
    --seed 1710 \
    --do_train \
    --lower

Inference example

python src/main.py \
    --test_dir data/private_test \
    --target_cols angry disgust fear happy sad surprise neutral other \
    --gpus 1 \
    --ckpt_dir outputs/efn-b2_256_roberta-base_48_-1_0.1/ \
    --do_infer \
  • In addition, we perform stacking with Logistic Regression, which requires out-of-fold predictions along with the test predictions
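The stacking step can be sketched as follows. This is a minimal sketch, assuming the saved `oof_pred.npy` / `test_pred.npy` arrays hold per-label probabilities and fitting one LogisticRegression per emotion label; the function name and argument layout are illustrative, not the repository's actual API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_predictions(oof_preds, train_labels, test_preds, threshold=0.5):
    """Fit one logistic-regression stacker per label.

    oof_preds:    list of (n_train, n_labels) out-of-fold probability
                  arrays, one per base model
    test_preds:   list of (n_test, n_labels) arrays, same model order
    train_labels: (n_train, n_labels) ground truth
    """
    oof = np.concatenate(oof_preds, axis=1)   # features: all models' probabilities
    test = np.concatenate(test_preds, axis=1)
    n_labels = train_labels.shape[1]
    stacked = np.zeros((test.shape[0], n_labels))
    for j in range(n_labels):
        y = (train_labels[:, j] > threshold).astype(int)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(oof, y)
        stacked[:, j] = clf.predict_proba(test)[:, 1]
    return stacked
```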

Outputs

Reproducing:


Model architecture

icdar-emoreccom's People

Contributors

viethoang1512


icdar-emoreccom's Issues

Google Drive link not available

I'm sorry to bother you.
When I try to run the code for this repository, the download of the dataset times out.
Could you please check whether the link is still valid?
Thanks.

Separate train-infer

Restore models from the checkpoint directory without passing arguments; these hyper-parameters should be inferred from the experiment name
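One way to do this is to parse the directory name itself. The sketch below assumes the naming scheme matches the checkpoint directory in the inference example above (`efn-b2_256_roberta-base_48_-1_0.1`), with the assumed field order image_model, image_size, bert_model, max_len, n_hiddens, dropout; both the helper and the field ordering are illustrative:

```python
import os

def parse_experiment_name(ckpt_dir):
    """Recover hyper-parameters from an experiment directory name such as
    'efn-b2_256_roberta-base_48_-1_0.1' (assumed field order:
    image_model, image_size, bert_model, max_len, n_hiddens, dropout)."""
    name = os.path.basename(os.path.normpath(ckpt_dir))
    image_model, image_size, bert_model, max_len, n_hiddens, dropout = name.split("_")
    return {
        "image_model": image_model,
        "image_size": int(image_size),
        "bert_model": bert_model,
        "max_len": int(max_len),
        "n_hiddens": int(n_hiddens),
        "dropout": float(dropout),
    }
```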

Static word embedding

Combining BERT with static word embeddings (refer to this paper for more detail)
Changes needed:

  • build_model()
  • ICDARGenerator
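The fusion idea can be illustrated with a minimal NumPy sketch: concatenate each contextual (BERT-style) token vector with a static lookup along the feature axis. This is only a sketch of the combination step; the actual change would live in `build_model()` and `ICDARGenerator`, and `fuse_embeddings` is a hypothetical helper:

```python
import numpy as np

def fuse_embeddings(contextual, tokens, static_emb, static_dim=300):
    """Concatenate contextual token vectors with static
    (word2vec/GloVe-style) lookups along the feature axis.

    contextual: (seq_len, bert_dim) array from the contextual encoder
    tokens:     list of seq_len word strings
    static_emb: dict mapping word -> (static_dim,) vector
    """
    static = np.zeros((len(tokens), static_dim), dtype=np.float32)
    for i, tok in enumerate(tokens):
        vec = static_emb.get(tok)
        if vec is not None:
            static[i] = vec
    # out-of-vocabulary tokens keep an all-zero static part
    return np.concatenate([contextual, static], axis=1)
```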

Duplicated variable names

Resolve duplicated variable names, especially in Jupyter notebook experiments
Examples:

# Adapt working directory
TRAIN_IMAGE_DIR = os.path.join(args.data_dir, TRAIN_IMAGE_DIR)
TEST_IMAGE_DIR = os.path.join(args.data_dir, TEST_IMAGE_DIR)
TRAIN_LABELS = os.path.join(args.data_dir, TRAIN_LABELS)
TRAIN_POLARITY = os.path.join(args.data_dir, TRAIN_POLARITY)
TRAIN_SCRIPT = os.path.join(args.data_dir, TRAIN_SCRIPT)
TEST_SCRIPT = os.path.join(args.data_dir, TEST_SCRIPT)
SAMPLE_SUBMISSION = os.path.join(args.data_dir, SAMPLE_SUBMISSION)
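The problem with the pattern above is that re-running the cell in Jupyter joins `args.data_dir` onto an already-joined absolute path. One fix, sketched below with illustrative names and values, is to keep the relative locations in constants that are never reassigned and return the resolved paths from a function, so the cell stays re-runnable:

```python
import os

# Relative locations (illustrative values); these are never reassigned,
# so re-running the cell cannot prepend data_dir twice.
TRAIN_IMAGE_SUBDIR = "public_train/images"
TRAIN_LABELS_FILE = "public_train/train_emotion_labels.csv"

def resolve_paths(data_dir):
    """Return absolute paths under data_dir without mutating the constants."""
    return {
        "train_image_dir": os.path.join(data_dir, TRAIN_IMAGE_SUBDIR),
        "train_labels": os.path.join(data_dir, TRAIN_LABELS_FILE),
    }
```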

tqdm

Import tqdm from tqdm.auto instead of tqdm so that progress bars render correctly in both Jupyter and terminal environments
Change needed:

  • tqdm
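The change is a one-line import swap; the sketch below also adds a no-op fallback so it runs in environments without tqdm installed (the fallback is for illustration only, not part of the proposed fix):

```python
try:
    # tqdm.auto picks the notebook widget in Jupyter and the plain
    # text bar in a terminal, so a single import works in both
    from tqdm.auto import tqdm
except ImportError:
    # fallback so this sketch still runs without tqdm installed
    def tqdm(iterable, **kwargs):
        return iterable

total = 0
for i in tqdm(range(100), desc="processing"):
    total += i
```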

Minor bug in inference

Save predictions into the checkpoint directory instead of the provided output argument
E.g.: OUTPUT_DIR -> args.ckpt_dir

pred = np.mean(preds, axis=0)
test_processed[TARGET_COLS] = pred
result_path = os.path.join(OUTPUT_DIR, "results.csv")
test_processed[["image_id"] + TARGET_COLS].to_csv(result_path, header=False)
test_path = os.path.join(OUTPUT_DIR, "test_pred.npy")
with open(test_path, "wb") as f:
    np.save(f, test_processed[TARGET_COLS].values)
# sort out-of-fold predictions back into the original row order
oof_df = pd.concat(oof_preds, axis=0).sort_index()
oof_path = os.path.join(OUTPUT_DIR, "oof_pred.npy")
with open(oof_path, "wb") as f:
    np.save(f, oof_df[TARGET_COLS].values)
