YOLOv5 Network Structure

The YOLOv5 network structure

The YOLOv5 network configuration files live in the models folder: yolov5n.yaml, yolov5s.yaml, yolov5m.yaml, and so on. The architectures are all defined the same way; the depth_multiple and width_multiple parameters control how deep (number of repeated blocks) and how wide (number of channels) each variant is.
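
For reference, the official variants differ only in these two values (taken from the v6.0 release yamls; listed here as a plain Python dict for comparison, not something the repository itself defines):

# (depth_multiple, width_multiple) per variant, YOLOv5 v6.0
MULTIPLIERS = {
    "yolov5n": (0.33, 0.25),
    "yolov5s": (0.33, 0.50),
    "yolov5m": (0.67, 0.75),
    "yolov5l": (1.00, 1.00),
    "yolov5x": (1.33, 1.25),
}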

The parts worth understanding are the backbone and head sections; when adapting the model to your own needs, the backbone is usually where the changes go.


# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

In the latest YOLOv5 v6.0, the backbone no longer uses the Focus module; it starts with a plain convolution with a 6x6 kernel and stride 2.
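
For intuition, the old Focus layer sliced the input into four pixel-interleaved patches (3 -> 12 channels) and then applied a 3x3 convolution; a single 6x6 convolution with stride 2 and padding 2 produces the same 2x downsample in one step. A minimal shape check with plain PyTorch:

import torch
import torch.nn as nn

# The first backbone layer, [-1, 1, Conv, [64, 6, 2, 2]], boils down to this convolution
# (YOLOv5's Conv also adds BatchNorm + SiLU, omitted here for brevity).
stem = nn.Conv2d(3, 64, kernel_size=6, stride=2, padding=2, bias=False)
x = torch.randn(1, 3, 640, 640)
print(stem(x).shape)  # torch.Size([1, 64, 320, 320]) -> the P1/2 feature map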

from: which layer the input comes from. -1 means the previous layer, 1 means layer 1, 3 means layer 3; [-1, 6] means the outputs of the previous layer and layer 6 are concatenated (Concat) along the channel dimension.

number: how many times the module is repeated. The final count is this number multiplied by depth_multiple, rounded to the nearest integer, with a minimum of 1 (for C3-style blocks the count becomes the number of internal bottlenecks rather than stacked copies).

module: the name of the module (class) to build.

args: the module arguments, e.g. output channels, kernel_size, stride, padding, bias; the input channels are filled in automatically by parse_model.
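
A minimal, self-contained sketch of those two scaling rules (the function names here are illustrative; the actual logic lives inline in parse_model together with YOLOv5's make_divisible helper):

import math

def scale_depth(n: int, gd: float) -> int:
    # number of repeats * depth_multiple, rounded, never below 1
    return max(round(n * gd), 1) if n > 1 else n

def scale_width(c2: int, gw: float, divisor: int = 8) -> int:
    # output channels * width_multiple, rounded up to a multiple of 8 (make_divisible)
    return math.ceil(c2 * gw / divisor) * divisor

# The backbone entry [-1, 9, C3, [512]] under yolov5s (depth 0.33, width 0.50):
print(scale_depth(9, 0.33))    # 3 -> the C3 block is built with 3 bottlenecks instead of 9
print(scale_width(512, 0.50))  # 256 -> output channels shrink from 512 to 256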

Customizing the network structure

Customizing the network structure takes three steps:

  1. Create a new module file under the models folder, e.g. mobilenetv3.py, implement the blocks you want to add, and import it in yolo.py
  2. Edit the yaml configuration file, replacing the layers you want to change in the backbone or head with your own modules
  3. Update the parse_model parsing function in models/yolo.py

Step 1: write mobilenetv3.py

# MobileNetV3

import torch.nn as nn


class h_sigmoid(nn.Module):
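    """Hard sigmoid: ReLU6(x + 3) / 6, a piecewise-linear approximation of sigmoid."""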
    def __init__(self, inplace=True):
        super(h_sigmoid, self).__init__()
        self.relu = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu(x + 3) / 6


class h_swish(nn.Module):
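    """Hard swish: x * h_sigmoid(x), the activation used in MobileNetV3."""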
    def __init__(self, inplace=True):
        super(h_swish, self).__init__()
        self.sigmoid = h_sigmoid(inplace=inplace)

    def forward(self, x):
        y = self.sigmoid(x)
        return x * y


class SELayer(nn.Module):
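    """Squeeze-and-Excitation: global average pool, two fully connected layers, then channel-wise reweighting of the input."""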
    def __init__(self, channel, reduction=4):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel),
            h_sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x)
        y = y.view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y


class conv_bn_hswish(nn.Module):
    """
    This equals to
    def conv_3x3_bn(inp, oup, stride):
        return nn.Sequential(
            nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
            nn.BatchNorm2d(oup),
            h_swish()
        )
    """

    def __init__(self, c1, c2, stride):
        super(conv_bn_hswish, self).__init__()
        self.conv = nn.Conv2d(c1, c2, 3, stride, 1, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = h_swish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        return self.act(self.conv(x))


class MobileNetV3_InvertedResidual(nn.Module):
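    """Inverted residual block: (optional) pointwise expansion -> depthwise conv ->
    (optional) squeeze-and-excite -> pointwise projection; residual when stride == 1 and inp == oup."""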
    def __init__(self, inp, oup, hidden_dim, kernel_size, stride, use_se, use_hs):
        super(MobileNetV3_InvertedResidual, self).__init__()
        assert stride in [1, 2]

        self.identity = stride == 1 and inp == oup

        if inp == hidden_dim:
            self.conv = nn.Sequential(
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim,
                          bias=False),
                nn.BatchNorm2d(hidden_dim),
                h_swish() if use_hs else nn.ReLU(inplace=True),
                # Squeeze-and-Excite
                SELayer(hidden_dim) if use_se else nn.Sequential(),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )
        else:
            self.conv = nn.Sequential(
                # pw
                nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
                nn.BatchNorm2d(hidden_dim),
                h_swish() if use_hs else nn.ReLU(inplace=True),
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2, groups=hidden_dim,
                          bias=False),
                nn.BatchNorm2d(hidden_dim),
                # Squeeze-and-Excite
                SELayer(hidden_dim) if use_se else nn.Sequential(),
                h_swish() if use_hs else nn.ReLU(inplace=True),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )

    def forward(self, x):
        y = self.conv(x)
        if self.identity:
            return x + y
        else:
            return y

Then add the import in yolo.py:

from models.mobilenetv3 import *

Step 2: create a new mobilenetv3small.yaml and replace the convolution modules in the YOLOv5 backbone with MobileNetV3_InvertedResidual blocks

# parameters
nc: 80  # number of classes
depth_multiple: 1.0
width_multiple: 1.0
# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# custom backbone
backbone:
  # MobileNetV3-small
  # [from, number, module, args]
  [[-1, 1, conv_bn_hswish, [16, 2]],                             # 0-p1/2
   [-1, 1, MobileNetV3_InvertedResidual, [16,  16, 3, 2, 1, 0]],  # 1-p2/4
   [-1, 1, MobileNetV3_InvertedResidual, [24,  72, 3, 2, 0, 0]],  # 2-p3/8
   [-1, 1, MobileNetV3_InvertedResidual, [24,  88, 3, 1, 0, 0]],  # 3-p3/8
   [-1, 1, MobileNetV3_InvertedResidual, [40,  96, 5, 2, 1, 1]],  # 4-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 5-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 6-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [48, 120, 5, 1, 1, 1]],  # 7-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [48, 144, 5, 1, 1, 1]],  # 8-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [96, 288, 5, 2, 1, 1]],  # 9-p5/32
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 10-p5/32
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 11-p5/32
  ]

head:
  [[-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, C3, [256, False]],  # 15

   [-1, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 3], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, C3, [128, False]],  # 19 (P3/8-small)

   [-1, 1, Conv, [128, 3, 2]],
   [[-1, 16], 1, Concat, [1]],  # cat head P4
   [-1, 1, C3, [256, False]],  # 22 (P4/16-medium)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 12], 1, Concat, [1]],  # cat head P5
   [-1, 1, C3, [512, False]],  # 25 (P5/32-large)

   [[19, 22, 25], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Step 3: update the parsing logic, the parse_model function in models/yolo.py, by adding the new modules to the list it recognizes:

        if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, conv_bn_hswish, MobileNetV3_InvertedResidual]:
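
Once the two modules are registered here, parse_model infers the input channel count from the previous layer, prepends it to args, and scales the listed output channels by width_multiple. That is why the printed arguments in the summary below show an extra leading channel value, e.g. [16, 16, 16, 3, 2, 1, 0] for layer 1. A small illustrative helper (not part of YOLOv5) that mimics this transformation:

import math

def build_args(yaml_args, ch_prev, gw, divisor=8):
    """Illustrative only: mimic how parse_model turns a yaml args list into constructor
    arguments by prepending the inferred input channels and scaling the output channels."""
    c1, c2 = ch_prev, yaml_args[0]
    c2 = math.ceil(c2 * gw / divisor) * divisor   # make_divisible(c2 * gw, 8)
    return [c1, c2, *yaml_args[1:]]

# Layer 1 of the custom backbone: the yaml lists [16, 16, 3, 2, 1, 0], and layer 0 outputs 16 channels.
print(build_args([16, 16, 3, 2, 1, 0], ch_prev=16, gw=1.0))
# -> [16, 16, 16, 3, 2, 1, 0], matching MobileNetV3_InvertedResidual(inp, oup, hidden_dim, k, s, use_se, use_hs)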

Start training

python train.py --data coco.yaml --img 640 --batch 64  --cfg mobilenetv3small.yaml --weights ''
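
Before launching a full training run, it can be worth confirming that the new yaml parses and the network builds end to end. A minimal sketch, run from the repository root, assuming the yaml was saved under models/ and steps 1-3 above are in place (note the table below was produced with a 2-class dataset, so the Detect arguments will differ for other datasets):

import torch
from models.yolo import Model  # YOLOv5's model builder; parse_model prints the layer table

model = Model('models/mobilenetv3small.yaml', ch=3, nc=2)  # nc=2 only to match the printout below
model.eval()
with torch.no_grad():
    pred = model(torch.zeros(1, 3, 640, 640))
print(pred[0].shape)  # torch.Size([1, 25200, 7]): 3 anchors x (80^2 + 40^2 + 20^2) cells, nc + 5 outputs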

Model parameters

                 from  n    params  module                                  arguments
  0                -1  1       464  models.mobilenetv3.conv_bn_hswish       [3, 16, 2]
  1                -1  1       612  models.mobilenetv3.MobileNetV3_InvertedResidual[16, 16, 16, 3, 2, 1, 0]
  2                -1  1      3864  models.mobilenetv3.MobileNetV3_InvertedResidual[16, 24, 72, 3, 2, 0, 0]
  3                -1  1      5416  models.mobilenetv3.MobileNetV3_InvertedResidual[24, 24, 88, 3, 1, 0, 0]
  4                -1  1     13736  models.mobilenetv3.MobileNetV3_InvertedResidual[24, 40, 96, 5, 2, 1, 1]
  5                -1  1     55340  models.mobilenetv3.MobileNetV3_InvertedResidual[40, 40, 240, 5, 1, 1, 1]
  6                -1  1     55340  models.mobilenetv3.MobileNetV3_InvertedResidual[40, 40, 240, 5, 1, 1, 1]
  7                -1  1     21486  models.mobilenetv3.MobileNetV3_InvertedResidual[40, 48, 120, 5, 1, 1, 1]
  8                -1  1     28644  models.mobilenetv3.MobileNetV3_InvertedResidual[48, 48, 144, 5, 1, 1, 1]
  9                -1  1     91848  models.mobilenetv3.MobileNetV3_InvertedResidual[48, 96, 288, 5, 2, 1, 1]
 10                -1  1    294096  models.mobilenetv3.MobileNetV3_InvertedResidual[96, 96, 576, 5, 1, 1, 1]
 11                -1  1    294096  models.mobilenetv3.MobileNetV3_InvertedResidual[96, 96, 576, 5, 1, 1, 1]
 12                -1  1     25088  models.common.Conv                      [96, 256, 1, 1]
 13                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 14           [-1, 8]  1         0  models.common.Concat                    [1]
 15                -1  1    308736  models.common.C3                        [304, 256, 1, False]
 16                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 17                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 18           [-1, 3]  1         0  models.common.Concat                    [1]
 19                -1  1     77568  models.common.C3                        [152, 128, 1, False]
 20                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 21          [-1, 16]  1         0  models.common.Concat                    [1]
 22                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 23                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 24          [-1, 12]  1         0  models.common.Concat                    [1]
 25                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 26      [19, 22, 25]  1     18879  models.yolo.Detect                      [2, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 340 layers, 3545453 parameters, 3545453 gradients, 6.3 GFLOPs

The argument lists in the yaml file, such as [16, 16, 3, 2, 1, 0], are also custom:

   [-1, 1, MobileNetV3_InvertedResidual, [16,  16, 3, 2, 1, 0]],  # 1-p2/4
   [-1, 1, MobileNetV3_InvertedResidual, [24,  72, 3, 2, 0, 0]],  # 2-p3/8
   [-1, 1, MobileNetV3_InvertedResidual, [24,  88, 3, 1, 0, 0]],  # 3-p3/8
   [-1, 1, MobileNetV3_InvertedResidual, [40,  96, 5, 2, 1, 1]],  # 4-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 5-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 6-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [48, 120, 5, 1, 1, 1]],  # 7-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [48, 144, 5, 1, 1, 1]],  # 8-p4/16
   [-1, 1, MobileNetV3_InvertedResidual, [96, 288, 5, 2, 1, 1]],  # 9-p5/32
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 10-p5/32
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 11-p5/32

They are all defined in mobilenetv3.py; apart from inp (the input channels, which parse_model fills in automatically), the values map one-to-one onto the constructor arguments:

class MobileNetV3_InvertedResidual(nn.Module):
    def __init__(self, inp, oup, hidden_dim, kernel_size, stride, use_se, use_hs):

Taking [16, 16, 3, 2, 1, 0] as an example:

oup=16: output channels
hidden_dim=16: hidden (expansion) channels
kernel_size=3: depthwise kernel size
stride=2: downsampling stride
use_se=1: whether to use the SELayer (squeeze-and-excitation)
use_hs=0: whether to use h_swish (1) or ReLU (0) as the activation
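
As a quick check of how one of these entries behaves, the block can be instantiated directly with the arguments parse_model would pass it (a sketch, assuming it is run from the repository root with the mobilenetv3.py from step 1 in place):

import torch
from models.mobilenetv3 import MobileNetV3_InvertedResidual

# Layer 1 of the backbone: parse_model prepends inp=16 (the output of layer 0),
# giving the constructor arguments [16, 16, 16, 3, 2, 1, 0].
block = MobileNetV3_InvertedResidual(inp=16, oup=16, hidden_dim=16,
                                     kernel_size=3, stride=2, use_se=1, use_hs=0)
x = torch.randn(1, 16, 320, 320)
print(block(x).shape)  # torch.Size([1, 16, 160, 160]) -- stride 2 halves the resolution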

References

  1. 目标检测 YOLOv5 自定义网络结构 (Object detection: customizing the YOLOv5 network structure)
