nextplusplus / tat-qa

Home Page: https://nextplusplus.github.io/TAT-QA/

License: MIT License
tat-qa's Introduction

TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance

TAT-QA (Tabular And Textual dataset for Question Answering) contains 16,552 questions associated with 2,757 hybrid contexts from real-world financial reports.

You can download our TAT-QA dataset via the TAT-QA dataset link.

For more information, please refer to our TAT-QA website or read our ACL 2021 paper (PDF).

Updates

${\color{red}Jan 2024:}$ We have released the ground truth for the TAT-QA test set under the TAT-QA dataset folder, to facilitate future research on this task!

${\color{red}May 2023:}$ TAT-DQA is released! TAT-DQA is a large-scale Document Visual Question Answering (VQA) dataset constructed by extending TAT-QA. Please check it out if you are interested in this new task.

TagOp Model

Requirements

Create an environment with MiniConda and activate it:

conda create -n tat-qa python==3.7
conda activate tat-qa
pip install -r requirement.txt
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
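
The ${CUDA} placeholder must match your local CUDA toolkit. A minimal sketch, assuming a machine with CUDA 11.0 (replace cu110 with cpu, cu92, cu101 or cu102 as appropriate for the torch-1.7.0 wheels):

# Assumption: CUDA 11.0 is installed; use CUDA=cpu for a CPU-only machine.
export CUDA=cu110
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html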

We adopt RoBERTa as the encoder of our TagOp model. Use the following commands to prepare the RoBERTa model:

cd dataset_tagop
mkdir roberta.large && cd roberta.large
wget -O pytorch_model.bin https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-pytorch_model.bin
wget -O config.json https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-config.json
wget -O vocab.json https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json
wget -O merges.txt https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-merges.txt
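
If the legacy S3 links above are no longer served, the same roberta-large files can be fetched with the transformers library instead. This is an alternative sketch, not part of the original setup; depending on your transformers version, the saved weight file may be named model.safetensors rather than pytorch_model.bin.

# Alternative download (assumption: run from inside dataset_tagop/roberta.large
# with the transformers package installed in the tat-qa environment).
pip install transformers
python -c "from transformers import RobertaModel, RobertaTokenizer; \
RobertaModel.from_pretrained('roberta-large').save_pretrained('.'); \
RobertaTokenizer.from_pretrained('roberta-large').save_pretrained('.')"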

Training & Testing

Preprocessing dataset

We heuristically generate the "facts" and "mapping" fields from the raw dataset; the results are stored under the dataset_tagop folder.

Prepare dataset

PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/tag_op python tag_op/prepare_dataset.py --mode [train/dev/test]

Note: The result will be written into the folder ./tag_op/cache by default.
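
To preprocess all three splits in one pass, a small convenience loop (not part of the original scripts) can be used:

# Run preprocessing for every split; the cached .pkl files land in ./tag_op/cache by default.
for mode in train dev test; do
    PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/tag_op python tag_op/prepare_dataset.py --mode $mode
done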

Train & Evaluation

CUDA_VISIBLE_DEVICES=2 PYTHONPATH=$PYTHONPATH:$(pwd) python tag_op/trainer.py --data_dir tag_op/cache/ \
--save_dir ./checkpoint --batch_size 48 --eval_batch_size 8 --max_epoch 50 --warmup 0.06 --optimizer adam --learning_rate 5e-4 \
--weight_decay 5e-5 --seed 123 --gradient_accumulation_steps 4 --bert_learning_rate 1.5e-5 --bert_weight_decay 0.01 \
--log_per_updates 50 --eps 1e-6 --encoder roberta
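
If a 32GB GPU is not available, the batch sizes can be reduced while keeping the remaining hyperparameters unchanged. A sketch assuming a single ~16GB GPU; results may differ from those reported in the paper:

# Reduced-memory variant (assumption: a single ~16GB GPU on device 0).
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$PYTHONPATH:$(pwd) python tag_op/trainer.py --data_dir tag_op/cache/ \
--save_dir ./checkpoint --batch_size 16 --eval_batch_size 4 --max_epoch 50 --warmup 0.06 --optimizer adam --learning_rate 5e-4 \
--weight_decay 5e-5 --seed 123 --gradient_accumulation_steps 4 --bert_learning_rate 1.5e-5 --bert_weight_decay 0.01 \
--log_per_updates 50 --eps 1e-6 --encoder roberta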

Testing

CUDA_VISIBLE_DEVICES=2 PYTHONPATH=$PYTHONPATH:$(pwd) python tag_op/predictor.py --data_dir tag_op/cache/ --test_data_dir tag_op/cache/ \
--save_dir tag_op/ --eval_batch_size 32 --model_path ./checkpoint --encoder roberta

Note: The training process may take around 2 days on a single 32GB V100 GPU.
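
The resulting predictions can be scored with the official evaluation script in the repository root. A sketch, assuming the dev gold file sits under dataset_raw and using a placeholder prediction file name (point --pred_path at whatever file predictor.py actually writes under tag_op/):

# Assumption: tag_op/predictions_dev.json is a placeholder for the predictor output file.
python tatqa_eval.py --gold_path=dataset_raw/tatqa_dataset_dev.json --pred_path=tag_op/predictions_dev.json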

Checkpoint

You may download the checkpoint of the trained TagOp model via TagOp Checkpoint.

Citation

Please kindly cite our work if you use our dataset or code. Thank you!

@inproceedings{zhu-etal-2021-tat,
    title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance",
    author = "Zhu, Fengbin  and
      Lei, Wenqiang  and
      Huang, Youcheng  and
      Wang, Chao  and
      Zhang, Shuo  and
      Lv, Jiancheng  and
      Feng, Fuli  and
      Chua, Tat-Seng",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.254",
    doi = "10.18653/v1/2021.acl-long.254",
    pages = "3277--3287"
}

License

The TAT-QA dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Any Questions?

For any issues, please create an issue here or kindly drop an email to the author, Fengbin Zhu ([email protected]). Thank you.


tat-qa's Issues

NameError: name 'scatter_max' is not defined

I run the code on a Google Colab GPU. When I run the !CUDA_VISIBLE_DEVICES=2 PYTHONPATH=$PYTHONPATH:$(pwd) python tag_op/trainer.py --data_dir tag_op/cache/ \ --save_dir ./checkpoint --batch_size 4 --eval_batch_size 2 --max_epoch 3 --warmup 0.06 --optimizer adam --learning_rate 5e-4 \ --weight_decay 5e-5 --seed 123 --gradient_accumulation_steps 4 --bert_learning_rate 1.5e-5 --bert_weight_decay 0.01 \ --log_per_updates 50 --eps 1e-6 --encoder roberta command, I get the following error:
10/12/2021 10:32:46 At epoch 1
Traceback (most recent call last):
File "tag_op/trainer.py", line 130, in
main()
File "tag_op/trainer.py", line 109, in main
model.update(batch)
File "/content/TatQA/tag_op/tagop/model.py", line 98, in update
output_dict = self.mnetwork(**tasks)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/TatQA/tag_op/tagop/modeling_tagop.py", line 221, in forward
reduce_max_index_get_vector(table_tag_prediction[:, :, 1], table_sequence_output, table_cell_index)
File "/content/TatQA/tag_op/tagop/modeling_tagop.py", line 849, in reduce_max_index_get_vector
return _index_reduce_max_get_vector(values_for_reduce, values_for_reference, index, max_length, name)
File "/content/TatQA/tag_op/tagop/modeling_tagop.py", line 1047, in _index_reduce_max_get_vector
reduce_values, reduce_index = scatter_max(
NameError: name 'scatter_max' is not defined

Please help with this.

no answer in test set

Dear author,
There are no answers in the test set.
I also found that the model is both validated and evaluated on the dev set.
I am confused about this.

The Evaluation Script does not include Table-Text source

I used the official evaluation script with the command line below

!python tatqa_eval.py --gold_path=/content/tatqa_dataset_dev.json --pred_path=/content/sample_prediction.json

and I got the following output


Exact-match accuracy 45.92
F1 score 58.88
Scale score 90.95
45.92 & 58.88

---- raw detail ---
                       em
answer_from         table   ...      text
answer_type
arithmetic          497.0   ...      16.0
count                12.0   ...       0.0
multi-span           92.0   ...      24.0
span                171.0   ...     349.0

[4 rows x 3 columns]
---- em detail ---
                       em
answer_from         table   ...      text
answer_type
arithmetic       0.531187   ...  0.000000
count            0.500000   ...  0.000000
multi-span       0.684783   ...  0.083333
span             0.584795   ...  0.045845

[4 rows x 3 columns]
---- f1 detail ---
                       f1
answer_from         table   ...   text
answer_type
arithmetic       0.531187   ...  0.000
count            0.500000   ...  0.000
multi-span       0.833696   ...  0.255
span             0.584795   ...  0.540

[4 rows x 3 columns]

Why does it not show the answer_from table-text source?

Original Reports for data

Hi, do you happen to have the CIK or the company name and the year for each sample in the dataset? For example:

{
   uid: ...,
   cik: ...,
   company_name: ...,
   year: ...
}

Thank you very much for any help you can offer.

the generated file “tagop_roberta_cached_test.pkl”

When I used the command line below
python tag_op/prepare_dataset.py --mode [train/dev/test]
with the “tatqa_dataset_test.json” from the folder “dataset_raw”, the generated file “tagop_roberta_cached_test.pkl” is empty. How can I generate the correct “tagop_roberta_cached_test.pkl”?
Thanks for your help!
