Everything You Need to Know About Skip Connections
Contents
Why Skip Connections?
What Are Skip Connections?
Variants of Skip Connections
Implementing Skip Connections
Why Skip Connections?


As plain networks are made deeper, training accuracy saturates and then starts to drop. This drop in training accuracy shows that not all systems are equally easy to optimize: because the error grows on the training set itself, the problem is not overfitting but the difficulty of optimizing very deep plain networks, and skip connections are one way around it.
What Are Skip Connections?
As the name suggests, skip connections (also called shortcut connections) skip over some layers of a neural network and feed the output of one layer as the input to later layers, rather than only to the layer immediately after it.
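As a minimal sketch (the layer and tensor sizes below are arbitrary choices for illustration, not taken from any particular architecture), a skip connection simply adds a block's untouched input back onto its output:

import torch
from torch import nn

x = torch.randn(1, 64, 32, 32)               # an arbitrary feature map
layer = nn.Conv2d(64, 64, kernel_size=3, padding=1)

out = layer(x) + x                           # the input "skips" the layer and is added to its output
print(out.shape)                             # torch.Size([1, 64, 32, 32])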

Variants of Skip Connections
Residual Networks (ResNets)

To learn more about ResNet and see an analysis of various models on the CIFAR-10 dataset: https://www.analyticsvidhya.com/blog/2021/06/understanding-resnet-and-analyzing-various-models-on-the-cifar-10-dataset/
Densely Connected Convolutional Networks (DenseNets)
When it comes to the skip connection itself, DenseNets use concatenation, whereas ResNets use summation, as the short sketch below illustrates.
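A minimal comparison (the two tensors stand in for a block's input x and its convolutional output F(x); the shapes are arbitrary and chosen only for illustration):

import torch

x = torch.randn(1, 64, 28, 28)     # input to the block
fx = torch.randn(1, 64, 28, 28)    # output of the block's conv layers (same shape here)

# ResNet-style skip: element-wise summation, channel count stays at 64
res_out = x + fx
print(res_out.shape)               # torch.Size([1, 64, 28, 28])

# DenseNet-style skip: concatenation along the channel dimension, channels add up to 128
dense_out = torch.cat([x, fx], dim=1)
print(dense_out.shape)             # torch.Size([1, 128, 28, 28])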

U-Net: Convolutional Networks for Biomedical Image Segmentation
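In U-Net, skip connections carry encoder feature maps across to the decoder, where they are concatenated with the upsampled decoder features. A rough sketch of one such connection (the channel counts and the transposed convolution are illustrative assumptions, not the exact U-Net configuration):

import torch
from torch import nn

encoder_feat = torch.randn(1, 64, 128, 128)    # feature map saved from the encoder path
decoder_feat = torch.randn(1, 128, 64, 64)     # deeper feature map in the decoder path

up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
upsampled = up(decoder_feat)                   # back to a 128 x 128 spatial size

# U-Net-style skip: concatenate encoder and decoder features along the channel axis
merged = torch.cat([encoder_feat, upsampled], dim=1)
print(merged.shape)                            # torch.Size([1, 128, 128, 128])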

Implementing Skip Connections
ResNet – Residual Block
# import required libraries
import torch
from torch import nn
import torch.nn.functional as F
import torchvision

# basic residual block of ResNet
# This is generic in the sense that it can also be used for downsampling of features.
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=[1, 1], downsample=None):
        """
        A basic residual block of ResNet
        Parameters
        ----------
            in_channels: Number of channels that the input has
            out_channels: Number of channels that the output has
            stride: strides of the two convolutional layers
            downsample: A callable applied to the residual before the addition
        """
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(
            in_channels, out_channels, kernel_size=3, stride=stride[0],
            padding=1, bias=False
        )
        self.conv2 = nn.Conv2d(
            out_channels, out_channels, kernel_size=3, stride=stride[1],
            padding=1, bias=False
        )
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample

    def forward(self, x):
        residual = x
        # apply the downsample callable to the identity path before the addition
        if self.downsample is not None:
            residual = self.downsample(residual)
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # note that the residual is added before the final activation
        out = out + residual
        out = F.relu(out)
        return out

# downsample the identity path with a 1 x 1 convolution
downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128)
)

# the first few layers of a ResNet34-style network
resnet_blocks = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.MaxPool2d(kernel_size=2, stride=2),
    ResidualBlock(64, 64),
    ResidualBlock(64, 64),
    ResidualBlock(64, 128, stride=[2, 1], downsample=downsample)
)

# checking the output shape
inputs = torch.rand(1, 3, 100, 100)    # a single 100 x 100 color image
outputs = resnet_blocks(inputs)
print(outputs.shape)    # shape would be (1, 128, 13, 13)

# one could also use pretrained weights of ResNet trained on ImageNet
resnet34 = torchvision.models.resnet34(pretrained=True)
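If the goal is transfer learning rather than building blocks from scratch, one common pattern (a sketch that is not part of the original snippet; num_classes = 10 is an arbitrary choice) is to replace the final fully connected layer; the residual blocks with their skip connections live in layer1 through layer4 of the torchvision model:

# replace the classification head for a hypothetical 10-class task
num_classes = 10
resnet34.fc = nn.Linear(resnet34.fc.in_features, num_classes)

# the skip connections live inside the basic blocks of layer1 ... layer4
print(resnet34.layer1[0])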
DenseNet – Dense Block
Implement a DenseNet layer
Build a dense block out of such layers
Chain multiple dense blocks to obtain a DenseNet-style model (a sketch of this last step follows the code below)
class Dense_Layer(nn.Module):
    def __init__(self, in_channels, growthrate, bn_size):
        super(Dense_Layer, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(
            in_channels, bn_size * growthrate, kernel_size=1, bias=False
        )
        self.bn2 = nn.BatchNorm2d(bn_size * growthrate)
        self.conv2 = nn.Conv2d(
            bn_size * growthrate, growthrate, kernel_size=3, padding=1, bias=False
        )

    def forward(self, prev_features):
        # concatenate the feature maps of all preceding layers along the channel axis
        out1 = torch.cat(prev_features, dim=1)
        out1 = self.conv1(F.relu(self.bn1(out1)))
        out2 = self.conv2(F.relu(self.bn2(out1)))
        return out2

class Dense_Block(nn.ModuleDict):
    def __init__(self, n_layers, in_channels, growthrate, bn_size):
        """
        A dense block consisting of `n_layers` of `Dense_Layer`
        Parameters
        ----------
            n_layers: Number of dense layers to be stacked
            in_channels: Number of input channels for the first layer in the block
            growthrate: Growth rate (k) as defined in the DenseNet paper
            bn_size: Multiplicative factor for the number of channels in the 1 x 1 bottleneck convolution
        """
        super(Dense_Block, self).__init__()
        layers = dict()
        for i in range(n_layers):
            layer = Dense_Layer(in_channels + i * growthrate, growthrate, bn_size)
            layers['dense{}'.format(i)] = layer
        self.block = nn.ModuleDict(layers)

    def forward(self, features):
        if isinstance(features, torch.Tensor):
            features = [features]
        for _, layer in self.block.items():
            new_features = layer(features)
            features.append(new_features)
        # the block's output is the concatenation of its input and every layer's output
        return torch.cat(features, dim=1)

# a block consisting of initial conv layers followed by 6 dense layers
dense_block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, padding=3, stride=2, bias=False),
    nn.BatchNorm2d(64),
    nn.MaxPool2d(3, 2),
    Dense_Block(6, 64, growthrate=32, bn_size=4),
)

inputs = torch.rand(1, 3, 100, 100)
outputs = dense_block(inputs)
print(outputs.shape)    # shape would be (1, 256, 24, 24)

# one could also use pretrained weights of DenseNet trained on ImageNet
densenet121 = torchvision.models.densenet121(pretrained=True)
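The snippet above stops at a single dense block. To chain several blocks into a DenseNet-style model, the DenseNet paper places a transition layer (batch norm, a 1 x 1 convolution that shrinks the channel count, and 2 x 2 average pooling) between consecutive blocks. The sketch below builds on the dense_block and inputs defined above; the channel counts follow from the shapes printed earlier, and it is only an illustration, not the full DenseNet-121 configuration:

# transition layer between dense blocks: BN + 1 x 1 conv (halving the channels) + 2 x 2 avg pool
transition = nn.Sequential(
    nn.BatchNorm2d(256),
    nn.Conv2d(256, 128, kernel_size=1, bias=False),
    nn.AvgPool2d(kernel_size=2, stride=2),
)

densenet_like = nn.Sequential(
    dense_block,                                      # ends with 256 channels (see above)
    transition,                                       # 256 -> 128 channels, spatial size halved
    Dense_Block(6, 128, growthrate=32, bn_size=4),    # 128 + 6 * 32 = 320 channels
)

outputs = densenet_like(inputs)
print(outputs.shape)    # shape would be (1, 320, 12, 12)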
Endnotes
Whether implemented as summation (ResNet), concatenation (DenseNet), or encoder-to-decoder concatenation (U-Net), skip connections feed the output of one layer to later layers, which makes deep networks easier to optimize; as the snippets above show, they take only a few lines of PyTorch to implement.