Comments (7)
Hi Peter,
MergedLinear does exactly what Linear does mathematically when multiple linear layers are "merged" into one, as in the GPT codebase. It exists simply to make the GPT integration easier.
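For reference, the repo's README shows the intended usage for a merged QKV projection, adapting only Q and V (d_model is a placeholder for the model width):

import loralib as lora

# ===== Before =====
# qkv_proj = nn.Linear(d_model, 3 * d_model)
# ===== After =====
qkv_proj = lora.MergedLinear(d_model, 3 * d_model, r=8, enable_lora=[True, False, True])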
Hope this helps!
Hello @peterkim95
I've added some annotations for the LoRA code in the lit-llama repo, which you might find helpful.
Nevertheless, I don't quite understand why there is a combination of a Linear layer (for matrix A) and a Conv1d layer (for matrix B). Why not both Linear or both Conv1d? @edwardjhu, could you briefly explain this, or maybe link to an article to read? Because I have no idea 🤷♂️.
I have a couple of suspicions, described in my repo, but I have a feeling that I'm not even close.
And by the way, thanks for your work 👍.
good issue
For instance, if you wish to incorporate a rank-8 LoRA into the attention layer's three matrices (Q, K, V), you can use the following code:
import torch.nn as nn

# in_features and out_features are placeholders for the model's dimensions
lora_A = nn.Linear(in_features, 8 * 3, bias=False)
lora_B = nn.Conv1d(8 * 3, out_features, kernel_size=1, groups=3, bias=False)
If you used nn.Linear for both A and B, you would have to handle Q, K, and V separately. By employing nn.Conv1d with the groups parameter, all three components can instead be processed simultaneously without interfering with one another.
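To see this concretely, here is a minimal runnable sketch; the sizes (batch, seq_len, in_features, out_features, r) are arbitrary values chosen for illustration:

import torch
import torch.nn as nn

batch, seq_len, in_features, out_features, r = 2, 4, 16, 48, 8  # hypothetical sizes

lora_A = nn.Linear(in_features, r * 3, bias=False)
lora_B = nn.Conv1d(r * 3, out_features, kernel_size=1, groups=3, bias=False)

x = torch.randn(batch, seq_len, in_features)
after_A = lora_A(x)                        # (batch, seq_len, 3r): [A_q | A_k | A_v] side by side
after_B = lora_B(after_A.transpose(1, 2))  # groups=3: each r-channel slice meets only its own block of B
delta_qkv = after_B.transpose(1, 2)        # (batch, seq_len, out_features): [ΔQ | ΔK | ΔV]

Because groups=3, each r-channel slice of after_A is convolved only with its own block of lora_B's weight, which amounts to three independent B matrices evaluated in a single call.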
Hello @clalanliu
So, as I understand it, with nn.Conv1d and the groups parameter each part of the combined QKV matrix is processed independently, while with nn.Linear the lora_B matrix would "see" and process the whole combined matrix. Am I wrong?
And if so, why isn't this approach also used for lora_A?
@Andrei-Aksionov Yes. You can check my note.

> And if so, why isn't this approach also used for lora_A?

There is no need to, because the inputs to the Q, K, and V matrices are all the same (namely, x).
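To illustrate the point, a minimal sketch (with arbitrary sizes of my own choosing): since Q, K, and V all consume the same x, one merged lora_A is numerically identical to three independent A matrices applied side by side, so there is nothing for groups to keep separate on the A side:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2, 16)                           # the same input feeds Q, K, and V

merged_A = nn.Linear(16, 8 * 3, bias=False)      # one merged A covering Q, K, V
A_q, A_k, A_v = merged_A.weight.split(8, dim=0)  # the three stacked (r x in_features) blocks

merged_out = merged_A(x)
separate_out = torch.cat([x @ A_q.T, x @ A_k.T, x @ A_v.T], dim=-1)
assert torch.allclose(merged_out, separate_out, atol=1e-6)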
> There is no need to, because the inputs to the Q, K, and V matrices are all the same (namely, x).

Oh boy, how did I miss that 🤣. Thanks!