
Issue: Callback based Trainer (flambe), status: OPEN, 4 comments

jeremyasapp commented on July 18, 2024

from flambe.

Comments (4)

yukw777 commented on July 18, 2024

Our goal is to make the Trainer "an object that users never have to override." I'd like to understand exactly why one would override the Trainer when customizing a model, and which methods one would add for specific use cases.

Not all of the methods you listed above seem to require overriding the Trainer:

  • forward: this is already part of our models; no need to override the Trainer.
  • batch_train: you can pass in a loss function to the Trainer already. Do we have a specific use case where one needs more complex training logic?
  • batch_eval: same as batch_train.
  • aggregate_metrics: this doesn't seem like it should be part of the model... it seems more natural as part of the Trainer.
  • validation_metric: same as aggregate_metrics.
  • optimize: we can already pass in an optimizer to the Trainer. Do we have a specific use case where one needs more complex optimization logic?

I think it's natural for data preprocessing and postprocessing to be part of the model. Data preprocessing is already part of the model via Field, but something similar should happen for postprocessing too. For example, in a text classification model, a postprocessing step that's part of the model could turn the argmax of the softmax into its corresponding class name. In a seq2seq model, it could generate sentences.
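To make the idea concrete, a model-owned postprocessing hook could look like the sketch below. This is a hypothetical illustration, not flambe's actual API; the class, its layers, and the `postprocess` method name are all assumptions.

```python
import torch
import torch.nn as nn


class TextClassifier(nn.Module):
    """Hypothetical classifier that owns its own postprocessing step."""

    def __init__(self, vocab_size: int, num_classes: int, class_names: list):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, 16)
        self.fc = nn.Linear(16, num_classes)
        self.class_names = class_names

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) long tensor -> (batch, num_classes) logits
        return self.fc(self.embed(tokens))

    def postprocess(self, logits: torch.Tensor) -> list:
        # Turn the argmax of the (implicit) softmax into class names.
        indices = logits.argmax(dim=-1)
        return [self.class_names[i] for i in indices.tolist()]


model = TextClassifier(vocab_size=100, num_classes=2, class_names=["neg", "pos"])
logits = model(torch.randint(0, 100, (4, 7)))
labels = model.postprocess(logits)  # list of class names, one per example
```

Because the mapping from logits to class names lives on the model, the same object can serve both training (raw logits for the loss) and inference (human-readable labels).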


jeremyasapp commented on July 18, 2024

@yukw777 thanks for your comments. I'll address a few of your points:

I'd like to understand exactly why one would override the Trainer when customizing a model and add methods according to the specific use cases.

This would be a good exercise.

batch_train: you can pass in a loss function to the Trainer already. Do we have a specific use case where one needs more complex training logic?

A few things come to mind. First, if you are trying to migrate code you already have, it is easier to do so when you can modify more of the behavior. For example, you may be using a dictionary as the output of your forward method. But loss functions operate over tensors, so you would have to modify the loss function as well. The point is that instead of modifying many different objects, you only ever modify one.
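The dictionary-output case could be handled by overriding a single training-step method, as sketched below. The `Trainer` class and `batch_train` method here are minimal stand-ins for illustration, not flambe's real interface.

```python
import torch
import torch.nn as nn


class Trainer:
    """Minimal stand-in for a trainer with an overridable training step."""

    def __init__(self, model, loss_fn):
        self.model = model
        self.loss_fn = loss_fn

    def batch_train(self, batch):
        inputs, targets = batch
        outputs = self.model(inputs)  # assumed to be a plain tensor
        return self.loss_fn(outputs, targets)


class DictModel(nn.Module):
    """A model whose forward returns a dictionary, not a tensor."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 3)

    def forward(self, x):
        return {"logits": self.fc(x), "hidden": x}


class DictOutputTrainer(Trainer):
    """Overrides only batch_train to unpack the dict-valued output."""

    def batch_train(self, batch):
        inputs, targets = batch
        outputs = self.model(inputs)
        # Pull out the tensor the loss expects; no need to wrap the loss itself.
        return self.loss_fn(outputs["logits"], targets)


trainer = DictOutputTrainer(DictModel(), nn.CrossEntropyLoss())
batch = (torch.randn(8, 4), torch.randint(0, 3, (8,)))
loss = trainer.batch_train(batch)
```

Only one object (the trainer subclass) changes; the model and the stock loss function are untouched.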

aggregate_metrics: this doesn't seem like it should be part of the model... it seems more natural as part of the Trainer.

I'm not sure I agree. The aggregation process can be very different across models depending on the metric.
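As a small illustration of why aggregation is metric-specific (function names are hypothetical): batch accuracies can be averaged with batch-size weights, while a metric like precision must pool raw counts across batches before dividing; averaging per-batch precisions would give the wrong answer.

```python
def aggregate_accuracy(batch_correct, batch_sizes):
    # Size-weighted mean: total correct over total examples.
    return sum(batch_correct) / sum(batch_sizes)


def aggregate_precision(batch_tp, batch_fp):
    # Micro-average: pool true/false positive counts first, then divide.
    tp, fp = sum(batch_tp), sum(batch_fp)
    return tp / (tp + fp) if (tp + fp) else 0.0


acc = aggregate_accuracy([8, 6], [10, 10])   # 14 correct out of 20 -> 0.7
prec = aggregate_precision([3, 1], [1, 3])   # 4 TP, 4 FP -> 0.5
```

A seq2seq model computing corpus-level BLEU needs yet another aggregation scheme, which is why a single trainer-owned implementation is hard to get right for every model.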

optimize: we can already pass in an optimizer to the Trainer. Do we have a specific use case where one needs more complex optimization logic?

Yes, the current Trainer only takes a single optimizer. 1) You may need more than one optimizer (e.g. a GAN). 2) You may have a custom update rule that does not use the optimizer class; I've seen PyTorch code that does that. Now, to be fair, you could argue that something that uses more than one optimizer could have its own Trainer.
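The GAN case could look like the sketch below: a single overridable `optimize` step driving two optimizers. The class and method names are illustrative assumptions, not flambe's API.

```python
import torch
import torch.nn as nn


class GANTrainer:
    """Sketch of a trainer whose optimize step drives two optimizers."""

    def __init__(self, generator, discriminator, lr=1e-3):
        self.generator = generator
        self.discriminator = discriminator
        self.g_opt = torch.optim.Adam(generator.parameters(), lr=lr)
        self.d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)
        self.bce = nn.BCEWithLogitsLoss()

    def optimize(self, real):
        n = real.size(0)
        fake = self.generator(torch.randn(n, 4))

        # Discriminator update: real samples vs. detached fake samples.
        d_loss = (self.bce(self.discriminator(real), torch.ones(n, 1))
                  + self.bce(self.discriminator(fake.detach()), torch.zeros(n, 1)))
        self.d_opt.zero_grad()
        d_loss.backward()
        self.d_opt.step()

        # Generator update: try to fool the (just-updated) discriminator.
        g_loss = self.bce(self.discriminator(fake), torch.ones(n, 1))
        self.g_opt.zero_grad()
        g_loss.backward()
        self.g_opt.step()
        return d_loss.item(), g_loss.item()


trainer = GANTrainer(nn.Linear(4, 2), nn.Linear(2, 1))
d_loss, g_loss = trainer.optimize(torch.randn(8, 2))
```

A single-optimizer Trainer cannot express this alternating two-optimizer schedule, which is the argument for letting `optimize` be overridden (or for a dedicated Trainer, as noted above).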

Data preprocessing is already part of the model using Field

That's not true; data preprocessing is part of the TabularDataset at the moment, not the model.

For example, in a text classification model, a post processing step that's part of the model could be to turn the argmax of the softmax into its corresponding class name

This is an example of the kind of customization that has analogues during training. But should it be part of the same model object that was used in training, or should there be a different model object for inference?

I forgot to mention one of the main goals of this process, which is to add more defaults to simplify the user experience. For example, for model X, sampling, loss, and metrics are always the same; why have to write them every time?

Ultimately, I think the user experience benefits from having to modify the smallest possible number of objects to meet your needs. I want to avoid someone having to customize the dataset, sampler, model, loss, metric, and optimizer, which implies understanding the interface of each of these objects.
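One way to realize "more defaults" is to let the model carry them, so the trainer falls back to model-provided choices when the user passes nothing. This is a hypothetical sketch of the design direction; `default_loss` and this `Trainer` are assumptions, not flambe's actual interface.

```python
import torch
import torch.nn as nn


class TaskModel(nn.Module):
    """Hypothetical base class: the model carries sensible task defaults."""

    def default_loss(self):
        return nn.CrossEntropyLoss()


class Classifier(TaskModel):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 3)

    def forward(self, x):
        return self.fc(x)


class Trainer:
    def __init__(self, model, loss_fn=None):
        self.model = model
        # Only one object to customize: defaults come from the model itself.
        self.loss_fn = loss_fn if loss_fn is not None else model.default_loss()


trainer = Trainer(Classifier())  # no loss specified; the model's default is used
loss = trainer.loss_fn(trainer.model(torch.randn(5, 4)),
                       torch.randint(0, 3, (5,)))
```

Users who need something unusual still pass `loss_fn` explicitly; everyone else writes nothing.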


yukw777 commented on July 18, 2024

@jeremyasapp thanks for the clarification. I think I understand the direction we need to take better now. I totally agree with your point about having sensible defaults to simplify the user experience.

I think I had some misunderstandings about the proposal, but now that I've taken another look with your clarification, I think all of those methods are a good starting point.

I'd actually add data (pre|post)processing to the mix. I forgot that data preprocessing is currently part of the dataset itself. I think it should definitely be part of the model, as should postprocessing. People would generally want to use the same model object for training and inference.


nmatthews-asapp commented on July 18, 2024

This is being addressed by the v0.5 (refactor branch) integrations with other training libraries, right @jeremyasapp?

