CoIn_DialogRE

0. Package Description

├─ data/: raw data and preprocessed data
    ├─ train.json
    ├─ dev.json
    ├─ test.json
    ├─ entity_type_id.json
    ├─ speaker_vocab_id.json
    ├─ vocab.txt: BERT vocab file, with the newly introduced special tokens added
├─ logs/: log files
├─ model/: the optimal model file and prediction results
├─ src/: source code
    ├─ attention.py
    ├─ data_utils.py: utilities for processing data
    ├─ dataset.py
    ├─ embeddings.py: generates entity type and utterance embeddings
    ├─ model.py
    ├─ main.py: main entry point to run the model
├─ readme.md
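
For a quick look at the preprocessed files, the snippet below is a minimal sketch assuming the standard DialogRE layout, where each example pairs a list of utterances with a list of relation instances; the exact field names inside each relation instance are not guaranteed here.

import json

# Minimal sketch for inspecting the preprocessed data, assuming the standard
# DialogRE layout: each example = [list of utterances, list of relation dicts].
with open("data/train.json", "r", encoding="utf-8") as f:
    train_data = json.load(f)
with open("data/entity_type_id.json", "r", encoding="utf-8") as f:
    entity_type_id = json.load(f)  # assumed to map entity-type strings to ids

dialogue, relations = train_data[0]
print(len(train_data), "dialogues; the first one has", len(dialogue), "utterances")
print("first relation instance:", relations[0])
print("entity types:", list(entity_type_id.keys())[:5])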

1. Environments

We conducted experiments on a server with two GeForce GTX 1080Ti GPUs.

  • Python (3.6.5)
  • CUDA (11.0)
  • CentOS Linux release 7.8.2003 (Core)

2. Dependencies

  • torch (1.2.0)
  • transformers (2.0.0)
  • pytorch-transformers (1.2.0)
  • numpy (1.19.2)

3. Preparation

3.1 Download the pre-trained language model.

  • Download the bert-base-uncased model.

3.2 Add the special tokens to vocab.txt

  • Following the original dataset paper, we add newly introduced special tokens to indicate the speakers (replacing [unused1]..[unused10] with speaker1..speaker10).
  • You can replace the original vocab.txt with our file (in './data/vocab.txt'); a manual-patch sketch is given below.
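
If you would rather patch the downloaded vocabulary yourself instead of copying ./data/vocab.txt, the sketch below performs the replacement described above; the vocab path is a placeholder for your local bert-base-uncased copy.

# Minimal sketch: replace [unused1]..[unused10] with speaker1..speaker10
# in the downloaded bert-base-uncased vocab.txt.  The path is a placeholder.
vocab_path = "/path/to/bert-base-uncased/vocab.txt"

with open(vocab_path, "r", encoding="utf-8") as f:
    tokens = [line.rstrip("\n") for line in f]

for i in range(1, 11):
    tokens[tokens.index(f"[unused{i}]")] = f"speaker{i}"

with open(vocab_path, "w", encoding="utf-8") as f:
    f.write("\n".join(tokens) + "\n")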

4. Training

If you want to reproduce our results, please follow our hyper-parameter settings and run the code with the following command.

CUDA_VISIBLE_DEVICES=0,1 nohup python -m torch.distributed.launch --nproc_per_node=2 main.py --bert_path {your_bert_path}
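
The command above launches one process per GPU via torch.distributed.launch; judging from the relative default paths in the arguments (../data/train.json, ../logs/), it appears to be run from inside the src/ directory. The snippet below is only a rough sketch of the per-process setup such a launch expects (the actual logic lives in src/main.py and may differ); the Linear layer is a stand-in placeholder for the CoIn model.

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Rough sketch of the per-process distributed setup; not the actual src/main.py.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args, _ = parser.parse_known_args()

torch.cuda.set_device(args.local_rank)   # bind this process to its own GPU
dist.init_process_group(backend="nccl")  # rendezvous env vars are set by the launcher

model = torch.nn.Linear(8, 8).cuda(args.local_rank)  # placeholder for the CoIn model
model = DDP(model, device_ids=[args.local_rank])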

5. Evaluating

You can also evaluate our model without training. Please download the released model.

python evaluate.py --bert_path {your_bert_path} --optimal_model_path {released_model_path}
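
Before running evaluate.py, you can sanity-check the downloaded checkpoint with the short sketch below; the path is a placeholder, and whether the file stores a state_dict or a full model object is not assumed (both cases are handled).

import torch

# Minimal sanity check of the released checkpoint; the path is a placeholder.
state = torch.load("/path/to/released_model.pkl", map_location="cpu")

if isinstance(state, dict):
    print("state_dict with", len(state), "entries")
    print("first parameter names:", list(state)[:5])
else:
    print("full model object of type", type(state).__name__)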

Citation

Thank you for your interest in our paper. If you have any problems, please feel free to contact me ([email protected]).

@inproceedings{DBLP:conf/ijcai/LongNL21,
  author    = {Xinwei Long and
               Shuzi Niu and
               Yucheng Li},
  title     = {Consistent Inference for Dialogue Relation Extraction},
  booktitle = {Proceedings of the Thirtieth International Joint Conference on Artificial
               Intelligence, {IJCAI} 2021, Virtual Event / Montreal, Canada, 19-27
               August 2021},
  pages     = {3885--3891},
  year      = {2021},
  url       = {https://doi.org/10.24963/ijcai.2021/535}
}


Issues

GAIN-related code

Hello, I have only recently started working on dialogue relation extraction tasks and am very interested in your work. In the experiments I saw that you applied GAIN, a document-level relation extraction method, to the dialogue dataset, which I find very interesting. May I ask whether you could share the code for running GAIN on the dialogue dataset? [email protected]
I would be very grateful! If it is inconvenient, I apologize for the trouble.

Help with a bug

Thanks for open-sourcing such great work!
I ran this code with pytorch==1.7.1 on two V100 GPUs, with the parameters set exactly as in the paper. The resulting error seems to be a distributed-training error; I tried to fix it but could not, so I am asking for help here. The log is as follows:

Namespace(batch_size=2, bert_output_size=768, bert_path='/home/data/bert-base-chinese', dev_path='../data/dev.json', device_name='cuda', dropout_rate=0.2, epoch_nums=60, eval_batch_size=2, ft_lr=2e-05, handle_abbr=True, local_rank=0, log_path='../logs/', lower=True, lr=0.0005, max_input_lens=512, max_seq_lens=725, max_utter_nums=42, mode='train', num_heads=12, offset=256, optimal_model_path='./model/best.pkl', output_path='../model/', preds_output_path='./results.json', rel_nums=36, report_every_batch=50, rule_nums=24, save_dir='../model/', seed=0, sigma=0.1, t_max=10, test_batch_size=1, test_path='../data/test.json', threshold=0.5, train_path='../data/train.json', type_dict_path='../data/entity_type_id.json', weight_decay=1e-07)
Load dialogre dataset... (This can take some time)
loading files...
Load dialogre dataset... (This can take some time)
loading files...
all done.
all done.
Start Training: 1
Traceback (most recent call last):
File "main.py", line 291, in
main()
File "main.py", line 287, in main
train(args)
File "main.py", line 95, in train
preds = model(batch)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/raid/CoIn_dialogRE-main/src/model.py", line 48, in forward
input_feat, input_mask = self.encoder(input_tokens_1, input_tokens_2, input_mask_1, input_mask_2, batch_size, max_seq_lens)
File "/raid/CoIn_dialogRE-main/src/model.py", line 107, in encoder
input_feat = self.bert_encoder(input_tokens_1, attention_mask=input_mask_1)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
Traceback (most recent call last):
File "main.py", line 291, in
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 710, in forward
main()
File "main.py", line 287, in main
head_mask=head_mask)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
train(args)
File "main.py", line 95, in train
preds = model(batch)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 431, in forward
layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 409, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/raid/CoIn_dialogRE-main/src/model.py", line 48, in forward
input_feat, input_mask = self.encoder(input_tokens_1, input_tokens_2, input_mask_1, input_mask_2, batch_size, max_seq_lens)
File "/raid/CoIn_dialogRE-main/src/model.py", line 114, in encoder
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 366, in forward
input1 = self.bert_encoder(input_tokens_1, attention_mask=input_mask_1)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
self_outputs = self.self(input_tensor, attention_mask, head_mask)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 710, in forward
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 286, in forward
head_mask=head_mask)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
mixed_query_layer = self.query(hidden_states)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 431, in forward
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/functional.py", line 1692, in linear
layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 409, in forward
attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
output = input.matmul(weight.t())
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl

RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 366, in forward
self_outputs = self.self(input_tensor, attention_mask, head_mask)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 286, in forward
mixed_query_layer = self.query(hidden_states)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1607370144807/work/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fe2eaaa58b2 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7fe2eacf7982 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fe2eaa90b7d in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #3: + 0x5fea0a (0x7fe320aafa0a in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x5feab6 (0x7fe320aafab6 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0x1a3f6e (0x5563cb729f6e in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #6: + 0x10e34c (0x5563cb69434c in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #7: + 0x10e4a7 (0x5563cb6944a7 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #8: + 0x10e4a7 (0x5563cb6944a7 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #9: + 0xfd9c8 (0x5563cb6839c8 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #10: + 0x10eb77 (0x5563cb694b77 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #11: + 0x10eb8d (0x5563cb694b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #12: + 0x10eb8d (0x5563cb694b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #13: + 0x10eb8d (0x5563cb694b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #14: + 0x10eb8d (0x5563cb694b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #15: PyDict_SetItem + 0x502 (0x5563cb6e9da2 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #16: PyDict_SetItemString + 0x4f (0x5563cb6ea86f in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #17: PyImport_Cleanup + 0xa0 (0x5563cb7305d0 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #18: Py_FinalizeEx + 0x67 (0x5563cb7ab487 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #19: + 0x237f03 (0x5563cb7bdf03 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #20: _Py_UnixMain + 0x3c (0x5563cb7be22c in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7fe345776b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: + 0x1dce90 (0x5563cb762e90 in /home/data/miniconda3/envs/DeepEnv/bin/python)

terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1607370144807/work/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7efe812068b2 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7efe81458982 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7efe811f1b7d in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #3: + 0x5fea0a (0x7efeb7210a0a in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x5feab6 (0x7efeb7210ab6 in /home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0x1a3f6e (0x55dfc4a9cf6e in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #6: + 0x10e34c (0x55dfc4a0734c in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #7: + 0x10e4a7 (0x55dfc4a074a7 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #8: + 0x10e4a7 (0x55dfc4a074a7 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #9: + 0xfd9c8 (0x55dfc49f69c8 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #10: + 0x10eb77 (0x55dfc4a07b77 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #11: + 0x10eb8d (0x55dfc4a07b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #12: + 0x10eb8d (0x55dfc4a07b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #13: + 0x10eb8d (0x55dfc4a07b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #14: + 0x10eb8d (0x55dfc4a07b8d in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #15: PyDict_SetItem + 0x502 (0x55dfc4a5cda2 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #16: PyDict_SetItemString + 0x4f (0x55dfc4a5d86f in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #17: PyImport_Cleanup + 0xa0 (0x55dfc4aa35d0 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #18: Py_FinalizeEx + 0x67 (0x55dfc4b1e487 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #19: + 0x237f03 (0x55dfc4b30f03 in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #20: _Py_UnixMain + 0x3c (0x55dfc4b3122c in /home/data/miniconda3/envs/DeepEnv/bin/python)
frame #21: __libc_start_main + 0xe7 (0x7efedbed7b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: + 0x1dce90 (0x55dfc4ad5e90 in /home/data/miniconda3/envs/DeepEnv/bin/python)

Traceback (most recent call last):
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in
main()
File "/home/data/miniconda3/envs/DeepEnv/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/data/miniconda3/envs/DeepEnv/bin/python', '-u', 'main.py', '--local_rank=1']' died with <Signals.SIGABRT: 6>.


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

