- 🔭 I'm currently working on ColBERT and DSPy
- 🌱 I'm currently learning Rust, Distributed Systems, and Compiler Optimizations
- 👯 I'm looking to collaborate on research around LLM Agents
- 👨‍💻 All of my projects are available at krypticmouse.github.io/
- 📝 All of my blogs are available here
- 💬 Ask me about ML, DL, IoT, NLP, CV, Web
- 📫 How to reach me: [email protected]
- ⚡ Fun fact: I stalk PyTorch in my free time
double-bind-training's Introduction
double-bind-training's People
double-bind-training's Issues
Feature request: add LM Adapter run name to the os.environ
If at some point the LM Adapter training saved its run name to the environment, we could reuse it later in the notebook, e.g. for naming the save folder or for tagging the downstream NER run.
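A minimal sketch of what this could look like. The environment variable name and the example run name are assumptions, not anything the training script currently defines:

```python
import os

def save_run_name_to_env(run_name: str) -> None:
    """Store the LM Adapter run name so later notebook cells can read it."""
    os.environ["LM_ADAPTER_RUN_NAME"] = run_name

# Later in the notebook, e.g. when naming the checkpoint folder:
save_run_name_to_env("xlmr-lm-adapter-hau-001")
output_dir = f"checkpoints/{os.environ['LM_ADAPTER_RUN_NAME']}"
```

Since `os.environ` is shared across cells in the same Colab kernel, this survives between the LM Adapter step and the NER step without any file I/O.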
NER training code showing as "crashed" on WandB even when it finished successfully.
All of the NER runs on https://wandb.ai/double-bind-ner/masakhane-ner-test-run show as "crashed".
Feature: code for Masakhane News and Bloom-lm datasets
- Create snippets that take in a language code and download from Masakhane News/Bloom-lm
- Combine that data with train.txt, etc. from other datasets
- Make sure it all gets tagged in Weights and Biases!
- Incorporate into repo AND colab notebook for AfricaNLP
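The "combine that data" step could be sketched roughly as below. The per-language directory layout (`<dataset>/<lang_code>/train.txt`) is an assumption for illustration; the download itself would presumably go through `datasets.load_dataset` with the real Masakhane News / Bloom-lm identifiers:

```python
from pathlib import Path

def combine_train_files(lang_code, dataset_dirs, out_path):
    """Concatenate train.txt files for one language from several
    dataset directories into a single training file.
    Returns the number of lines written."""
    lines = []
    for d in dataset_dirs:
        f = Path(d) / lang_code / "train.txt"
        if f.exists():  # skip datasets that lack this language
            lines.extend(f.read_text(encoding="utf-8").splitlines())
    Path(out_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
    return len(lines)
```

Keeping the list of `dataset_dirs` explicit also gives us the dataset names to tag in Weights and Biases for free.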
Feature request: tag the NER runs on wandb with the specific adapter model we used.
Feature request: add tag for datasets used.
A couple of ways to go about this: we could ask the user to enter the tags manually, or we could infer them automatically from the dataset downloads in the HuggingFace cache. The first option seems easier but less reliable; it would, however, fit nicely with our current paradigm of having people do this in Colab.
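The automatic option might look something like this. The cache layout (one top-level folder per downloaded dataset) is an assumption about how the `datasets` library organizes its cache and may vary across versions, so the result should be treated as best effort:

```python
from pathlib import Path

def infer_dataset_tags(cache_dir="~/.cache/huggingface/datasets"):
    """Guess which datasets a run used from the folder names in the
    local HuggingFace datasets cache. Returns a sorted list of names
    suitable for use as wandb tags."""
    root = Path(cache_dir).expanduser()
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

These tags could then be passed along at run creation, e.g. `wandb.init(tags=infer_dataset_tags())`.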
RuntimeError in train_ner_adapter: expanded size of the tensor must match the existing size at non-singleton dimension 1.
Evaluating: 0% 0/38 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_ner_adapter.py", line 726, in <module>
main()
File "train_ner_adapter.py", line 685, in main
result, _ = evaluate(args, model, tokenizer, labels, pad_token_label_id, mode="dev", prefix=global_step)
File "train_ner_adapter.py", line 288, in evaluate
logits = model(avg_emb)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/adapters/models/roberta/adapter_model.py", line 68, in forward
outputs = self.roberta(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/adapters/context.py", line 108, in wrapper_func
results = f(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/roberta/modeling_roberta.py", line 843, in forward
buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
RuntimeError: The expanded size of the tensor (768) must match the existing size (514) at non-singleton dimension 1. Target sizes: [1312, 768]. Tensor sizes: [1, 514]
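For context on the shape mismatch: RoBERTa keeps a buffered `token_type_ids` tensor of shape `[1, max_position_embeddings]` (here `[1, 514]`) and expands it to `[batch_size, seq_length]`; since broadcasting can only stretch size-1 dimensions, any `seq_length` other than 514 fails. The `[1312, 768]` target suggests `avg_emb` (likely `[num_tokens, hidden_size]`) is being passed positionally as `input_ids`, so the hidden size 768 is read as a sequence length; passing it as `inputs_embeds=avg_emb`, or through a plain classification head, may be the fix, though that is a guess. The broadcasting rule itself can be reproduced with NumPy (standing in for the torch `.expand()` call):

```python
import numpy as np

# A 1 x 514 buffer, like RoBERTa's buffered token_type_ids.
buffered = np.zeros((1, 514))
try:
    # Only size-1 dims can be stretched: 514 cannot become 768.
    np.broadcast_to(buffered, (1312, 768))
    ok = True
except ValueError:
    ok = False  # same failure mode as the traceback above
```

Broadcasting to `(1312, 514)` instead would succeed, which is why inputs truncated to the model's maximum position length do not trigger this error.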
Update Readme with usage instructions
- After #10 is complete, we should add instructions to the readme for how to do (and how we did) our phase 1 experiments. To start, this can just be "run this Colab notebook, which trains a language adapter and then finetunes on Masakhane NER".
Merge train-lm-adapter into Master
We've been using https://github.com/krypticmouse/double-bind-training/tree/train-lm-adapter as our de facto "master" branch. For future iterations we would like to merge it into master so that it's the first thing people see.
- Merge https://github.com/krypticmouse/double-bind-training/tree/train-lm-adapter into the master branch.
NER Adapter config info doesn't seem to show up in Weights and Biases.
Feature request: print out more arguments in the NER training
I'd like to add a print(model) if possible, and print the other config params as well.
Even better if they could all be logged to Weights and Biases.
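One hedged sketch of how this could work: gather everything into a single dict so the same object can be printed and handed to `wandb.config.update()`. The argument names below are made up for illustration and don't correspond to the script's real flags:

```python
import argparse

def config_summary(args, extra=None):
    """Collect all CLI arguments (plus any extras, e.g. the model repr
    or adapter config) into one dict for printing and logging."""
    summary = dict(vars(args))
    summary.update(extra or {})
    return summary

# Hypothetical arguments, just to show the shape of the output:
parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=5e-5)
parser.add_argument("--adapter_name", default="ner-head")
args = parser.parse_args([])

print(config_summary(args, {"model_repr": "RobertaAdapterModel(...)"}))
```

In the training script, the same dict could be passed to `wandb.config.update()` right after `wandb.init()`, which would make the adapter config searchable in the W&B UI.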