Comments (20)
Thanks for your interest in DenseNet.
We are experimenting on ImageNet with different model sizes. Right now we have some preliminary results (from relatively small models), which are shown in the figures below.
As shown in the figures, a DenseNet with the same number of parameters or computational cost (measured in #FLOPs) as a ResNet has lower validation error. The DenseNets in the figures have growth rate = 32. The ResNet errors are copied from the results reported by fb.resnet.torch, and all hyperparameters are kept the same as theirs. When all the models are finished we'll update the paper and README with the ImageNet results.
The DenseNet architecture used on ImageNet differs from the one we used on the CIFAR and SVHN datasets. The differences are listed below:
- The major difference is that we use a "bottleneck structure", inspired by the ResNet paper. In each layer, before producing new feature maps through a 3×3 convolution on the previous layers' feature maps, a 1×1 convolution with 4*growthRate output channels is performed.
- In transition layers, we halve the number of feature maps.
- Following the design strategy of ResNet on ImageNet, we use 4 dense blocks, and they have different depths.
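To make the channel bookkeeping implied by the points above concrete, here is a sketch (not the authors' code) that traces how the number of feature maps grows through the dense blocks and is halved by each transition layer. The block configuration {6, 12, 24, 16} and growth rate 32 (DenseNet-121-BC) are assumptions for illustration:

```python
# Sketch of DenseNet-BC channel bookkeeping (illustrative, not the authors' code).
# Assumptions: growth rate k = 32, initial convolution outputs 2*k channels,
# block config {6, 12, 24, 16} (DenseNet-121), transition layers halve channels.

def densenet_channels(block_config, growth_rate=32):
    channels = 2 * growth_rate          # output of the initial 7x7 convolution
    trace = [channels]
    for i, num_layers in enumerate(block_config):
        # Each dense layer: 1x1 conv -> 4*k channels, then 3x3 conv -> k channels,
        # concatenated onto everything accumulated so far in the block.
        channels += num_layers * growth_rate
        trace.append(channels)
        if i < len(block_config) - 1:   # transition layer halves the channel count
            channels //= 2
            trace.append(channels)
    return channels, trace

final, trace = densenet_channels((6, 12, 24, 16))
print(final)   # 1024 channels entering the classifier
print(trace)   # [64, 256, 128, 512, 256, 1024, 512, 1024]
```

Note how concatenation makes the channel count grow linearly within a block, which is why the transition layers' halving matters for keeping the model compact.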
from densenet.
Thanks for your answer.
That sounds very promising!
Great results.
When will you release the prototxt file for ImageNet?
@baiyancheng20 Sorry, this was trained using Torch. If you want to use it, we can give you Torch model definitions first (or pre-trained models later).
Model definition here:
densenet-imagenet.txt
After we get the full results, we'll include ImageNet models in both the Torch and Caffe repos.
@liuzhuang13 Thank you for sharing the code. DenseNet is very interesting work. I will try to use the CIFAR code to train on the ImageNet dataset. Thanks a lot.
@baiyancheng20 Thanks for your interest. To get better performance, you may want to adapt the Caffe code for CIFAR a little according to the differences I listed above. For more detail, you can refer to the Torch code.
Model definition here:
densenet-imagenet.txt
The network model in 'densenet-imagenet.txt' seems to differ from the paper. In the paper, DenseNet-169 has four dense blocks of sizes {6, 12, 32, 32}, but the file has {6, 12, 48, 16}. Does that make a big difference? I am trying to train the network on ImageNet, but the convergence curve after the first 32 epochs does not look great (I am using the fb.resnet.torch setup and just specified this network type via -netType).
Thanks
Ganesh
Sorry, the file was wrong; it was probably an older version. I'll correct it. I'm afraid I can't remember whether this makes a big difference.
Also, there are pretrained models available on the README page, in case your purpose is just to use a pretrained model.
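Incidentally, one plausible source of the confusion is that both block configurations imply the same nominal depth. A quick sketch, assuming the usual DenseNet-BC counting convention (one initial conv, two convolutions per bottleneck layer, one convolution per transition, one classifier layer):

```python
# Depth count for DenseNet-BC (illustrative; assumes the usual convention:
# initial conv + 2 convs per bottleneck layer + 1 conv per transition + classifier).

def densenet_depth(block_config):
    num_transitions = len(block_config) - 1
    return 1 + sum(2 * b for b in block_config) + num_transitions + 1

print(densenet_depth((6, 12, 32, 32)))  # 169, the paper's DenseNet-169 config
print(densenet_depth((6, 12, 48, 16)))  # 169, the older file's config, same depth
```

Both configurations sum to 82 bottleneck layers, so both count as "169 layers" even though the per-block allocation differs.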
Thanks for the reply. No problem at all -- just wanted to confirm.
Thank you for uploading the pre-trained models. They have been very helpful, but I did want to train a model for a different study I was doing.
Hi! I'm trying to train DenseNet-121-BC on ImageNet (my own implementation), and I'm just wondering whether the training curves I'm getting are close to what they looked like for you. It would be great if you could share some of them for comparison, or give me your opinion on my results.
In this setup one epoch lasts roughly 25.6k iterations, so above you can see around 10 epochs (I'm using just one GPU for training). These are the parameters I'm using:
Thanks!
Yes, it would be great if the authors could post their convergence curves. I tried to train with the fb.resnet.torch repo where I just replaced netType with DenseNet, but my initial training curve looked weird. It would be helpful to have a curve to compare to, so I know whether this is expected or I am doing something wrong. Thanks in advance for the help!
Could you post the prototxt files used for training DenseNets in Caffe?
It would be great to check them and make some changes to experiment with.
Hi, @nihalgoalla please check https://github.com/liuzhuang13/DenseNetCaffe (for CIFAR, without BC structure) and https://github.com/shicai/DenseNet-Caffe (for ImageNet).
@nihalgoalla At https://github.com/liuzhuang13/DenseNetCaffe, we have a solver prototxt file (for CIFAR training) and an example prototxt file that contains the last layers. Thanks
@liuzhuang13 Did you scale the ImageNet images to [0, 1] before feeding them to DenseNet?
Hi,
I tried to extract image features using the pre-trained (ImageNet) DenseNet-121. What would be the shape of the output features?
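For reference, the shape can be worked out from the architecture (a sketch, assuming the standard 224×224 input and the DenseNet-121 configuration {6, 12, 24, 16} with growth rate 32): the network downsamples by 2 five times in total (initial convolution, initial pooling, and three transition layers), so the last dense block outputs a 7×7 map with 1024 channels, and 1024-d features after global average pooling:

```python
# Sketch of DenseNet-121 feature shapes on a 224x224 input (illustrative;
# assumes block config {6, 12, 24, 16}, growth rate 32, standard stride-2 stages).

def feature_shape(input_size=224, block_config=(6, 12, 24, 16), growth_rate=32):
    spatial = input_size // 2 // 2      # initial 7x7 conv (stride 2) + 3x3 max pool
    channels = 2 * growth_rate
    for i, num_layers in enumerate(block_config):
        channels += num_layers * growth_rate
        if i < len(block_config) - 1:   # transition: halve channels, pool stride 2
            channels //= 2
            spatial //= 2
    return spatial, spatial, channels   # feature map before global average pooling

print(feature_shape())  # (7, 7, 1024) -> 1024-d after global average pooling
```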
Hi @liuzhuang13, thanks for your great work on DenseNet.
Compared to ResNet, I wonder why you chose concatenation rather than the addition used in the original ResNet.
Hope to hear from you soon.
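As a minimal illustration of the difference being asked about (a toy sketch, not the authors' reasoning): addition merges feature maps into the same channels, while concatenation keeps every layer's output available as separate channels downstream:

```python
import numpy as np

# Toy illustration (not the authors' code): ResNet-style addition vs
# DenseNet-style concatenation for two feature maps of shape (C, H, W).
a = np.ones((4, 8, 8))  # output of an earlier layer, 4 channels
b = np.ones((4, 8, 8))  # output of the current layer, 4 channels

residual = a + b                        # ResNet: channels stay at 4
dense = np.concatenate([a, b], axis=0)  # DenseNet: channels grow to 8

print(residual.shape)  # (4, 8, 8)
print(dense.shape)     # (8, 8, 8)
```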
Hi @JieMEI1994, I think this is discussed in Section 5 of the paper.
Thank you.