
Machine Learning/Network (3)
[Paper Review] EfficientNet Paper Review (Rethinking Model Scaling for Convolutional Neural Networks) I had planned to review ViT next, following Transformer, but I suddenly got curious about EfficientNet, which I had used during my internship, so I read this paper first. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (2020) Paper link: https://arxiv.org/abs/1905.11946 Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accu..
[Paper Review] Transformer Paper Review (Attention Is All You Need) Here is a review of Transformer, one of the most famous papers in NLP. I am reviewing it first, before moving on to ViT. Paper link: https://arxiv.org/abs/1706.03762 Attention Is All You Need The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new..
[Paper Review] ResNet Paper Review Today I am reviewing ResNet, a famous paper I read recently. Paper link: https://arxiv.org/abs/1512.03385 Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with.. Abstra..