Which one is better, DenseNet or ResNet?

Can someone explain which architecture is better? I have some idea, but I want to learn from someone with more experience in this area. Thanks

On what kind of public dataset?

CIFAR-10, or another one. But DenseNet is a modified version of ResNet. Can you suggest what the benefits of DenseNet are?

I mean DenseNet vs. ResNet.

I suggest taking a look at:

https://arxiv.org/abs/2010.12496


Check out the following recent papers:

  1. ResNet strikes back: An improved training procedure in timm

The influential Residual Networks designed by He et al. remain the gold-standard architecture in numerous scientific publications. They typically serve as the default architecture in studies, or as baselines when new architectures are proposed. Yet there has been significant progress on best practices for training neural networks since the inception of the ResNet architecture in 2015. Novel optimization & data-augmentation have increased the effectiveness of the training recipes. In this paper, we re-evaluate the performance of the vanilla ResNet-50 when trained with a procedure that integrates such advances. We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work. For instance, with our more demanding training setting, a vanilla ResNet-50 reaches 80.4% top-1 accuracy at resolution 224x224 on ImageNet-val without extra data or distillation. We also report the performance achieved with popular models with our training procedure.

  2. A ConvNet for the 2020s.

The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.

1 Like

In the first work, I suppose the new training regime/protocol could also be applied to a DenseNet, but it seems there is no DenseNet baseline in the tables trained under the same protocol. Similar tricks are instead included in the second paper.

The second one is a recent SOTA model from Facebook Research (a reference official PyTorch implementation plus pretrained weights are available on GitHub). I suppose we could consider it outside the ResNet/DenseNet perimeter, but it is still worth a look for anyone interested in SOTA CNN-family models. I suppose the real aim was to integrate these ideas into new CNN-Transformer hybrid models.
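To make the DenseNet-vs-ResNet question from earlier in the thread concrete: a residual block *adds* its input to the new features, while a dense layer *concatenates* them, so earlier features are reused directly and the channel count grows. A minimal sketch (simplified blocks, not the torchvision implementations):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style: output = relu(input + F(input)), element-wise addition."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # channel count stays the same

class DenseLayer(nn.Module):
    """DenseNet-style: output = concat(input, F(input)) along channels."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, 3, padding=1),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)  # channels grow by growth_rate

x = torch.randn(1, 64, 32, 32)                  # a CIFAR-10-sized feature map
print(ResidualBlock(64)(x).shape)               # torch.Size([1, 64, 32, 32])
print(DenseLayer(64, growth_rate=32)(x).shape)  # torch.Size([1, 96, 32, 32])
```

The concatenation is what gives DenseNet its feature reuse and parameter efficiency; the price is that activations grow with depth, which is why full DenseNets insert transition layers to compress channels between dense blocks.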
