
Comments (6)

Ray-tju commented on June 12, 2024

Hi @MariaWang96
I uploaded the two files you need; I hope they solve your problem. Note that the file "attention_1d" should be added to the directory named "training".


Ray-tju commented on June 12, 2024

The file https://github.com/leilimaster/MFIRRN/blob/main/model/Mfirrn.py in ./model has one module named 'attention'. I tried to add another module like:

    import torch.nn as nn

    class SELayer(nn.Module):
        def __init__(self, channel, reduction=16):
            super(SELayer, self).__init__()
            self.avg_pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Sequential(
                nn.Linear(channel, channel // reduction, bias=False),
                nn.ReLU(inplace=True),
                nn.Linear(channel // reduction, channel, bias=False),
                nn.Sigmoid()
            )

        def forward(self, x):
            b, c, _, _ = x.size()
            y = self.avg_pool(x).view(b, c)      # squeeze: global average pool to (b, c)
            y = self.fc(y).view(b, c, 1, 1)      # excite: per-channel weights in (0, 1)
            return x * y.expand_as(x)            # rescale the input channels

as a .py file and then 'import attention' in Mfirrn.py.

But when I run benchmark.py, it fails with:

error: RuntimeError: Error(s) in loading state_dict for LLNet: Unexpected key(s) in state_dict: "attention.fc.0.weight", "attention.fc.2.weight".

Could you give some guidance?
Hi @MariaWang96
I re-uploaded the file named "Mfirrn"; please download it again.

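Editor's note: for anyone who hits the same RuntimeError before downloading the fixed files, the cause is a mismatch between the checkpoint's keys and the submodules the freshly built model registers: the saved weights contain an 'attention' submodule that the local model class lacks (or names differently). A minimal diagnostic sketch, assuming the import path suggested by the thread; the checkpoint path and constructor arguments are placeholders:

    import torch

    from model.Mfirrn import LLNet  # assumed location of LLNet, per the thread

    checkpoint = torch.load("weights/mfirrn.pth", map_location="cpu")  # placeholder path
    state_dict = checkpoint.get("state_dict", checkpoint)  # some checkpoints nest the weights

    model = LLNet()  # constructor arguments, if any, omitted here

    # Diff the key sets to see what is missing vs. unexpected.
    ckpt_keys = set(state_dict.keys())
    model_keys = set(model.state_dict().keys())
    print("only in checkpoint:", sorted(ckpt_keys - model_keys))
    print("only in model:", sorted(model_keys - ckpt_keys))

    # If the extra keys belong to a module intentionally left out,
    # strict=False skips them instead of raising RuntimeError.
    model.load_state_dict(state_dict, strict=False)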

MariaWang96 commented on June 12, 2024


I put the two attention files in ./training and added 'from training import attention, attention_1d' to Mfirrn.py, and it works!
Thanks for your time and help.
I will close this issue.

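Editor's note: the working layout implied by this exchange looks like the following sketch (file names inferred from the thread, not verified against the repo):

    # MFIRRN/
    # |-- benchmark.py
    # |-- model/
    # |   `-- Mfirrn.py
    # `-- training/
    #     |-- attention.py      # the 2-D SE-style attention module
    #     `-- attention_1d.py   # the re-uploaded 1-D variant
    #
    # With the repo root on sys.path, Mfirrn.py can then do:
    from training import attention, attention_1d

The earlier mismatch came from the local model definition not registering the attention submodule the checkpoint expects; using the authors' files as shipped keeps the module attribute names aligned with the saved state_dict keys.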

Ray-tju commented on June 12, 2024


Hi @MariaWang96

Thanks for your attention to our work!


I just re-tested the performance of the model; the results are shown below. Note that our GPU is an Nvidia RTX 3090, and the test environment is CUDA 11.1 with PyTorch 1.7.


Extracting params take 2.542s
[ 0, 30] Mean: 2.839, Std: 1.550
[30, 60] Mean: 3.557, Std: 1.669
[60, 90] Mean: 4.572, Std: 2.175
[ 0, 90] Mean: 3.656, Std: 0.711

Extracting params take 13.159s
[ 0, 30] Mean: 4.323, Std: 3.748
[30, 60] Mean: 5.070, Std: 4.933
[60, 90] Mean: 5.962, Std: 6.983
[ 0, 90] Mean: 5.119, Std: 0.670

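Editor's note: in these logs the [ 0, 90] row is the mean and (population) standard deviation of the three per-yaw-bucket means, not a per-image average; the arithmetic checks out against the reported numbers:

    import numpy as np

    # Per-bucket means from the first block: [0,30], [30,60], [60,90].
    buckets = np.array([2.839, 3.557, 4.572])
    print(round(buckets.mean(), 3))  # 3.656 -> the reported [ 0, 90] mean
    print(round(buckets.std(), 3))   # 0.711 -> the reported [ 0, 90] std

    # Second block: 5.118 / 0.670 from the rounded inputs,
    # matching the reported 5.119 / 0.670 up to rounding.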

MariaWang96 commented on June 12, 2024

Actually, I had doubts about your result on AFLW2000-3D before. In my experiments, when I got 3.678 on AFLW2000-3D, I got 4.785 on AFLW at the same time.

Before you published the weight file, I retrained your model on my machine following your paper and evaluated it on AFLW2000-3D and AFLW.

With 'def calc_nme(pts68_fit_all, option='ori'):' in benchmark_aflw2000.py,
the evaluation results are
AFLW2000-3D: 2.974 3.983 5.220 4.059
AFLW: 4.307 5.049 6.059 5.138

With 'def calc_nme(pts68_fit_all, option='re'):' in benchmark_aflw2000.py,
the evaluation results are
AFLW2000-3D: 2.765 3.226 4.704 3.565
AFLW: 4.307 5.049 6.059 5.138

But now, it seems I was wrong.

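Editor's note: calc_nme in these 3DDFA-style benchmark scripts reports the normalized mean error (NME) of the 68 fitted landmarks; the option argument selects which ground-truth annotation ('ori' = original, 're' = re-annotated) the error is measured against, which would explain why only the AFLW2000-3D numbers change between the two runs above. A minimal sketch of the usual NME convention, assuming bounding-box normalization (the repo's exact code may differ):

    import numpy as np

    def nme_per_image(pred, gt):
        # pred, gt: (68, 2) landmark arrays for one image.
        # Normalize by the square root of the ground-truth bounding-box
        # area, the common convention for AFLW2000-3D benchmarks.
        minx, miny = gt.min(axis=0)
        maxx, maxy = gt.max(axis=0)
        norm = np.sqrt((maxx - minx) * (maxy - miny))
        return np.mean(np.linalg.norm(pred - gt, axis=1)) / norm

    # The dataset-level figure averages this over all images, usually
    # reported per yaw bucket ([0,30], [30,60], [60,90]) as in the logs.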

Ray-tju commented on June 12, 2024


Replace the "benchmark.py" in the baseline with the "benchmark.py" we just released to get the correct result, but due to the randomness of multi-granular segmentation, the evaluation result will fluctuate in the range of 3.650-3.690.
If you have other questions, you can start the issue again
Thank you for your attention to this work!

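Editor's note: when an evaluation has a stochastic component, as with the multi-granular segmentation here, two common ways to make the reported number reproducible are pinning the random seeds or running the benchmark several times and reporting mean and spread. A generic sketch (evaluate() is a hypothetical stand-in for the repo's benchmark entry point):

    import random
    import numpy as np
    import torch

    def set_seed(seed: int) -> None:
        # Pin the common RNGs so one evaluation run repeats exactly.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    def evaluate() -> float:
        # Hypothetical: returns the [0, 90] mean NME of one benchmark run.
        raise NotImplementedError

    # Without pinned seeds, quantify the 3.650-3.690 fluctuation directly:
    scores = [evaluate() for _ in range(5)]
    print(f"NME: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")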
