absa-quad's People

Contributors

isakzhang

absa-quad's Issues

Which python version?

Thanks for sharing the library versions. Could you please let us know the Python version as well? The code breaks with Python 3.10.
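A version guard makes the mismatch visible right away. This is only a sketch: the supported range is not documented in the repo, the 3.10 breaking point comes from this report, and the 3.7/3.8 suggestion is an assumption.

    import sys

    # Fail fast instead of breaking deep inside the pinned dependencies.
    # The bounds here are assumptions, not documented requirements.
    if sys.version_info >= (3, 10):
        raise RuntimeError(
            f"Python {sys.version.split()[0]} detected; the pinned dependencies "
            "reportedly break on 3.10+. Try Python 3.7 or 3.8 instead."
        )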

Code problems

Hello, thank you very much for providing the source code. I am new to this field and just getting started. While trying to reproduce the results, the code raises many errors. Is this because I have not set things up correctly? For example: TypeError: read_line_examples_from_file() missing 1 required positional argument: 'silence'. Could you please advise?
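The error only means the silence argument has to be supplied. A minimal workaround sketch, assuming the function lives in the repo's data-loading code and that the flag merely controls per-line logging (both assumptions):

    # Give the flag a default so call sites that pass only data_path keep working;
    # what silence actually suppresses is defined by the repo's own implementation.
    def read_line_examples_from_file(data_path, silence=False):
        ...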

Model checkpoint

Hello, thank you very much for your excellent work.
Could you please share the trained model checkpoint? When I run your code, the performance I get is far below the results reported in the paper.

An error occurs when loading a previously trained model from 'do_direct_eval()' for evaluation.

Hello, thanks for sharing your code.

I trained the model with the 'do_train()' code and saved it. I then uncommented the relevant lines to load the trained model for inference, but an error occurred. The error is as follows.

****** Conduct Evaluating with the last state ******
Traceback (most recent call last):
  File "myPath\ABSA-QUAD\main.py", line 312, in <module>
    model = T5FineTuner(args)
TypeError: __init__() missing 2 required positional arguments: 'tfm_model' and 'tokenizer'

Please let me know how to solve this.
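The traceback shows that the constructor also expects the backbone transformer and the tokenizer, just as during training. A minimal sketch of the evaluation path, assuming the argument order (hparams, tfm_model, tokenizer), that main.py exposes args.model_name_or_path and args.output_dir, and a hypothetical checkpoint filename:

    import os

    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    # Rebuild the wrapper exactly as in training: besides the hyperparameters,
    # the constructor needs the backbone model and its tokenizer.
    tokenizer = T5Tokenizer.from_pretrained(args.model_name_or_path)
    tfm_model = T5ForConditionalGeneration.from_pretrained(args.model_name_or_path)
    model = T5FineTuner(args, tfm_model, tokenizer)

    # "last.ckpt" is a placeholder; adapt it to however do_train() saved the weights.
    ckpt = torch.load(os.path.join(args.output_dir, "last.ckpt"), map_location="cpu")
    model.load_state_dict(ckpt["state_dict"])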

Fixes for issues that arise when running this repository's training code.

Hello, thanks for sharing your code.

When I run this repository I run into several issues; below are the fixes that seem to work for me. I am sharing them with everyone in the hope that the author can confirm them.

  1. silence in read_line_examples_from_file has no default value:
     Solve:
         def read_line_examples_from_file(data_path, silence=False):
             ...

  2. hparams can no longer be assigned directly in the T5FineTuner model with lightning >= 2.0.0:
     Solve:
         self.hparams.update(vars(hparams))

  3. Newer lightning no longer supports training_epoch_end and validation_epoch_end, so rename them to the on_-prefixed hooks on_train_epoch_end and on_validation_epoch_end.

  4. Lightning 2.0 has integrated the optimizer step into its hooks, so I commented out the optimizer_step override because I observed issues with the optimizer closure.

  5. on_validation_epoch_end no longer receives the collected step outputs, so I removed the outputs parameter and collected the values manually with the following snippet:

         class MyLightningModule(L.LightningModule):
             def __init__(self):
                 super().__init__()
                 self.validation_step_outputs = []

             def validation_step(self, batch, batch_idx):
                 loss = ...
                 self.validation_step_outputs.append(loss)
                 return loss

             def on_validation_epoch_end(self):
                 epoch_average = torch.stack(self.validation_step_outputs).mean()
                 self.log("validation_epoch_average", epoch_average)
                 self.validation_step_outputs.clear()  # free memory

  6. The gpus parameter no longer exists in the Lightning Trainer; instead, set devices='auto' to detect the available GPUs automatically and set accelerator='gpu' (a usage sketch follows this list):
     Solve:
         train_params = dict(
             default_root_dir=args.output_dir,
             accumulate_grad_batches=args.gradient_accumulation_steps,
             devices='auto',
             gradient_clip_val=1.0,
             max_epochs=args.num_train_epochs,
             callbacks=[LoggingCallback()],
             accelerator='gpu',
         )

These are fixes I pieced together from references found online. If anything is wrong, please point it out, and feel free to add further feedback.
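As a follow-up to item 6, a minimal usage sketch of the updated dict (it assumes main.py keeps importing pytorch_lightning as pl and that model is the T5FineTuner instance built earlier; this is an illustration, not the repo's exact code):

    import pytorch_lightning as pl

    # Lightning >= 2.0: the old gpus=... argument is gone; the devices and
    # accelerator entries inside train_params now select the hardware.
    trainer = pl.Trainer(**train_params)
    trainer.fit(model)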

About the dataset

Hi, I would like to use your dataset with other models, for example extracting aspect terms and opinion terms with BIO tagging, which requires the dataset to provide the indices of the aspect and opinion terms. Did you keep this information when annotating the dataset?
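Even without gold indices, approximate spans can often be recovered by matching the annotated term strings back into the sentence. A sketch under explicit assumptions: the terms are stored as surface strings that appear verbatim in the whitespace-tokenized sentence, so implicit ("NULL") targets and repeated mentions need extra handling.

    def bio_tags(tokens, term):
        """Return BIO tags for the first occurrence of term in tokens.

        Assumes term appears verbatim in the sentence; implicit targets and
        repeated mentions are not handled here.
        """
        tags = ["O"] * len(tokens)
        term_tokens = term.split()
        for i in range(len(tokens) - len(term_tokens) + 1):
            if tokens[i:i + len(term_tokens)] == term_tokens:
                tags[i] = "B"
                tags[i + 1:i + len(term_tokens)] = ["I"] * (len(term_tokens) - 1)
                break
        return tags

    # Hypothetical sentence and term, not taken from the dataset:
    tokens = "the pizza was great but the service was slow".split()
    print(bio_tags(tokens, "the service"))
    # ['O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O']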
