Comments (8)
Xu, thank you for your reply. I have understood the AC loss and implemented it successfully. Thanks anyway. Best wishes, Li
…Sorry to bother you. Could you please share your PyTorch implementation? I am quite confused about this loss function, and I got very bad performance. :(
Thank you very much!
`
import torch
from torch.nn import Module

class ActiveContourLoss(Module):
    def __init__(self):
        super(ActiveContourLoss, self).__init__()

    def forward(self, y_pred, y_true, combine=None):
        # Finite differences in the horizontal and vertical directions.
        x = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        y = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]
        delta_x = x[:, :, 1:, :-2] ** 2
        delta_y = y[:, :, :-2, 1:] ** 2
        delta_u = torch.abs(delta_x + delta_y)
        epsilon = 1e-8  # small constant to keep the square root away from zero in practice
        w = 1.
        if combine is not None:
            lenth = w * torch.mean(torch.sqrt(delta_u + epsilon))  # equ. (11) in the paper
        else:
            lenth = w * torch.sum(torch.sqrt(delta_u + epsilon))   # equ. (11) in the paper
        if torch.cuda.is_available():
            C_1 = torch.ones(y_true.shape, dtype=torch.float32).cuda()
            C_2 = torch.zeros(y_true.shape, dtype=torch.float32).cuda()
        else:
            C_1 = torch.ones(y_true.shape, dtype=torch.float32)
            C_2 = torch.zeros(y_true.shape, dtype=torch.float32)
        if combine is not None:
            region_in = torch.abs(torch.mean(y_pred * ((y_true - C_1) ** 2)))           # equ. (12) in the paper
            region_out = torch.abs(torch.mean((1. - y_pred) * ((y_true - C_2) ** 2)))   # equ. (12) in the paper
        else:
            region_in = torch.abs(torch.sum(y_pred * ((y_true - C_1) ** 2)))            # equ. (12) in the paper
            region_out = torch.abs(torch.sum((1. - y_pred) * ((y_true - C_2) ** 2)))    # equ. (12) in the paper
        lambdaP = 5.  # lambda parameter; may need tuning
        loss = lenth + lambdaP * (region_in + region_out)
        return loss
`
from active-contour-loss.
Sorry, I didn't notice that you had closed it. If you are still wondering, here is my reply:
AC loss (this version) may not work well on imbalanced-label problems; in practice, a hyperparameter weighting region_in against region_out may be required.
The 0 in y_pred[:,0,:,:] slices out the 1st channel, so y_pred gets a new size matching C_1 and C_2.
The input shape is channel-first: (batch size, channel, H, W).
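As a concrete check of the shapes described above, here is a small sketch (the batch size and spatial dimensions are illustrative, not from the repo):

```python
import torch

# Channel-first layout: (batch, channel, H, W), as the loss expects.
y_pred = torch.rand(2, 1, 4, 4)

# Indexing the channel axis with 0 drops that axis, so the result
# has shape (batch, H, W), matching C_1 / C_2 built from such a target.
sliced = y_pred[:, 0, :, :]
print(sliced.shape)  # torch.Size([2, 4, 4])
```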
Thank you very much! May I ask whether I need to apply a sigmoid to y_pred? And how do you set the learning rate and number of iterations? In my experiment, the output always looks like this.
I am very confused about why the AC loss by itself does not work...
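On the sigmoid question: one common setup (an assumption on my part, not confirmed by the author in this thread) is that the loss expects predictions in [0, 1], so raw logits are passed through a sigmoid first. A self-contained functional sketch of the same length and region terms, with the sigmoid applied before the loss:

```python
import torch

def active_contour_loss(y_pred, y_true, lambda_p=5.0, eps=1e-8):
    """Mean-reduced sketch of AC loss; assumes y_pred already lies in [0, 1]."""
    # Length term: finite differences of the prediction (equ. (11)).
    dx = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
    dy = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]
    delta = dx[:, :, 1:, :-2] ** 2 + dy[:, :, :-2, 1:] ** 2
    length = torch.mean(torch.sqrt(delta + eps))
    # Region terms (equ. (12)) with constants c1 = 1 and c2 = 0.
    region_in = torch.abs(torch.mean(y_pred * (y_true - 1.0) ** 2))
    region_out = torch.abs(torch.mean((1.0 - y_pred) * y_true ** 2))
    return length + lambda_p * (region_in + region_out)

logits = torch.randn(2, 1, 32, 32, requires_grad=True)  # raw network output
y_true = (torch.rand(2, 1, 32, 32) > 0.5).float()       # binary ground truth

loss = active_contour_loss(torch.sigmoid(logits), y_true)  # sigmoid first
loss.backward()  # gradients flow back through the sigmoid
```

If the loss is driven by raw logits instead, the region terms can go negative before the abs and optimization behaves erratically, which is one plausible cause of the bad output described above.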
Hi Xu,
I used AC loss to try segmenting 2-class images. Although the loss decreases, the dice score doesn't improve at all; on the contrary, it stays at a fairly low value, about 0.0001.
`
class ActiveContourLoss(Module):
    def __init__(self):
        super(ActiveContourLoss, self).__init__()

    def forward(self, y_pred, y_true):
        # Horizontal and vertical finite differences.
        x = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        y = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]
        delta_x = x[:, :, 1:, :-2] ** 2
        delta_y = y[:, :, :-2, 1:] ** 2
        delta_u = torch.abs(delta_x + delta_y)
        epsilon = 1e-8  # avoids a zero under the square root in practice
        w = 1.
        lenth = w * torch.sum(torch.sqrt(delta_u + epsilon))  # equ. (11) in the paper
        C_1 = torch.ones(y_true.shape, dtype=torch.float32).cuda()
        C_2 = torch.zeros(y_true.shape, dtype=torch.float32).cuda()
        region_in = torch.abs(torch.sum(y_pred * ((y_true - C_1) ** 2)))          # equ. (12) in the paper
        region_out = torch.abs(torch.sum((1. - y_pred) * ((y_true - C_2) ** 2)))  # equ. (12) in the paper
        lambdaP = 5.  # lambda parameter could vary
        loss = lenth + lambdaP * (region_in + region_out)
        return loss
`
This is my PyTorch implementation. In y_pred[:,0,:,:], what does the 0 mean — is it the channel? Does y_pred need a sigmoid before being passed in? And is the input shape (channel, batch size, H, W) or (batch size, channel, H, W)?
My y_pred shape is (16, 1, 512, 512). Do I need to modify it? Best,
qaqzzz
Hello, bro. I have the same problem: the dice score is always 0.00000, and I have already set the learning rate to 0.00005, so I'm very confused about it. Could you give me some suggestions? Thank you very much.
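When the dice score sits near zero while the loss decreases, it is worth ruling out a broken metric before blaming the loss. A minimal dice helper for a sanity check (a hypothetical function, not from this repo; assumes predictions in [0, 1] and a binary target):

```python
import torch

def dice_score(pred, target, thresh=0.5, eps=1e-8):
    """Dice coefficient on binarized predictions."""
    pred_bin = (pred > thresh).float()
    inter = (pred_bin * target).sum()
    return (2.0 * inter + eps) / (pred_bin.sum() + target.sum() + eps)

pred = torch.tensor([[0.9, 0.1], [0.8, 0.2]])
target = torch.tensor([[1.0, 0.0], [1.0, 0.0]])
print(dice_score(pred, target).item())  # 1.0 when predictions match exactly
```

If a hand-made case like this does not score 1.0, the problem is in the metric (e.g. mismatched shapes or unthresholded probabilities), not in the loss.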
Related Issues (15)
- When the code will be released? HOT 2
- What's the input shape in AC loss? HOT 6
- I wonder if this loss can work when the foregrounds are very small HOT 2
- Is this really the implementation from the paper? HOT 8
- I wonder what is the shape of y_repd in AC loss? HOT 1
- Can't this loss function be used directly?
- Hello, could you share the paper? It still can't be found. [email protected] HOT 2
- Could you please give more details about the structure of Dense-Unet in your work?
- Is the input of AC loss function a binary graph after segmentation? HOT 1
- why the length is component of the loss? HOT 2
- Loss is not minimizing after 2nd epoch. HOT 5
- Question about the implementation of coutour extraction HOT 4
- The input image is only source image and ground truth? HOT 8
- A new implementation of Active-Contour-Loss (2D and 3D).